Patents by Inventor Michael Malewicki

Michael Malewicki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
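Brief, illustrative code sketches of the main techniques that recur in these abstracts appear after the listing.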

  • Publication number: 20230281127
    Abstract: Example implementations relate to cache coherency protocols as applied to a memory block range. Exclusive ownership of a range of blocks of memory in a default shared state may be tracked by a directory. The directory may be associated with a first processor of a set of processors. When a request is received from a second processor of the set of processors to read one or more blocks of memory absent from the directory, one or more blocks may be transmitted in the default shared state to the second processor. The blocks absent from the directory may not be tracked in the directory.
    Type: Application
    Filed: May 11, 2023
    Publication date: September 7, 2023
    Inventors: Michael Malewicki, Thomas McGee, Michael S. Woodacre
  • Patent number: 11714755
    Abstract: One embodiment can provide a node controller in a multiprocessor system. The node controller can include a processor interface to interface with a processor, a memory interface to interface with a fabric-attached memory, a node-controller interface to interface with a remote node controller, and a cache-coherence logic to operate in a first mode or a second mode. The cache-coherence logic manages cache coherence for a local memory of the processor coupled to the processor interface when operating in the first mode, and the cache-coherence logic manages cache coherence for the fabric-attached memory coupled to the memory interface when operating in the second mode.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: August 1, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Derek Schumacher, Randy Passint, Thomas McGee, Michael Malewicki, Michael S. Woodacre
  • Patent number: 11687459
    Abstract: Example implementations relate to cache coherency protocols as applied to a memory block range. Exclusive ownership of a range of blocks of memory in a default shared state may be tracked by a directory. The directory may be associated with a first processor of a set of processors. When a request is received from a second processor of the set of processors to read one or more blocks of memory absent from the directory, one or more blocks may be transmitted in the default shared state to the second processor. The blocks absent from the directory may not be tracked in the directory.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: June 27, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Michael Malewicki, Thomas McGee, Michael S. Woodacre
  • Patent number: 11586541
    Abstract: One embodiment can provide a node controller in a multiprocessor system. The node controller can include a processor interface to interface with a processor, a memory interface to interface with a fabric-attached memory, a node-controller interface to interface with a remote node controller, and a cache-coherence logic to operate in a first mode or a second mode. The cache-coherence logic manages cache coherence for a local memory of the processor coupled to the processor interface when operating in the first mode, and the cache-coherence logic manages cache coherence for the fabric-attached memory coupled to the memory interface when operating in the second mode.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: February 21, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Derek Schumacher, Randy Passint, Thomas McGee, Michael Malewicki, Michael S. Woodacre
  • Patent number: 11556471
    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks is associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined, and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in the same memory device, and the memory category of the first memory block is different from the memory category of the second memory block.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Michael S. Woodacre, Thomas McGee, Michael Malewicki
  • Publication number: 20220334971
    Abstract: Example implementations relate to cache coherency protocols as applied to a memory block range. Exclusive ownership of a range of blocks of memory in a default shared state may be tracked by a directory. The directory may be associated with a first processor of a set of processors. When a request is received from a second processor of the set of processors to read one or more blocks of memory absent from the directory, one or more blocks may be transmitted in the default shared state to the second processor. The blocks absent from the directory may not be tracked in the directory.
    Type: Application
    Filed: April 14, 2021
    Publication date: October 20, 2022
    Inventors: Michael Malewicki, Thomas McGee, Michael S. Woodacre
  • Patent number: 11314637
    Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until it is discarded to free up tracking space for new requests.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 26, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Thomas McGee, Michael Malewicki
  • Publication number: 20220035742
    Abstract: One embodiment can provide a node controller in a multiprocessor system. The node controller can include a processor interface to interface with a processor, a memory interface to interface with a fabric-attached memory, a node-controller interface to interface with a remote node controller, and a cache-coherence logic to operate in a first mode or a second mode. The cache-coherence logic manages cache coherence for a local memory of the processor coupled to the processor interface when operating in the first mode, and the cache-coherence logic manages cache coherence for the fabric-attached memory coupled to the memory interface when operating in the second mode.
    Type: Application
    Filed: July 31, 2020
    Publication date: February 3, 2022
    Inventors: Derek Schumacher, Randy Passint, Thomas McGee, Michael Malewicki, Michael S. Woodacre
  • Publication number: 20210374050
    Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until it is discarded to free up tracking space for new requests.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Inventors: Frank R. Dropps, Thomas McGee, Michael Malewicki
  • Patent number: 10970213
    Abstract: An apparatus, system, and method of enforcing cache coherency in a multiprocessor shared memory system are disclosed. A request is received from a node controller to process a cache coherent operation on a memory block in a shared memory. Based on the information included in the request, a determination is made as to whether the request was transmitted from a processor that is remote relative to the memory that includes the memory block referenced in the request. If the request is from a remote processor, the hardware-based cache coherency of the system is disabled, and the request is processed according to software-based cache coherency protocols and mechanisms. A coherent read request may be translated to a non-coherent request, such as an immediate read request, which does not trigger tracking or storing of state and ownership information for the requested memory block, or trigger communications with processors other than those involved with the request.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 6, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Thomas McGee, Michael S. Woodacre, Michael Malewicki
  • Publication number: 20200349075
    Abstract: In exemplary aspects of enforcing cache coherency, a request is received from a node controller to process a cache coherent operation on a memory block in a shared memory. Based on the information included in the request, a determination is made as to whether the request was transmitted from a processor that is remote relative to the memory that includes the memory block referenced in the request. If the request is from a remote processor, the hardware-based cache coherency of the system is disabled. Instead, the request is processed according to software-based cache coherency mechanisms. A response to the request is transmitted to the requestor.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Thomas McGee, Michael S. Woodacre, Michael Malewicki
  • Publication number: 20200349076
    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks is associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined, and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in the same memory device, and the memory category of the first memory block is different from the memory category of the second memory block.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Frank R. Dropps, Michael S. Woodacre, Thomas McGee, Michael Malewicki
  • Patent number: 10628365
    Abstract: Multi-node, multi-socket computer systems and methods provide packet tunneling between processor nodes without going through a node controller link. On receiving a packet, the destination node identifier (NID) is examined, and if it is not the same as the source socket's, the packet's request address is examined. If it is determined that the packet is not for a remote connected socket, the packet's destination NID and source socket NID are replaced and the data protection information is recalculated. The modified packet is then sent to the destination socket over another processor interconnect path.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: April 21, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Michael Anderson, Michael Malewicki
  • Publication number: 20190155779
    Abstract: Multi-node, multi-socket computer systems and methods provide packet tunneling between processor nodes without going through a node controller link. On receiving a packet, the destination node identifier (NID) is examined, and if it is not the same as the source socket's, the packet's request address is examined. If it is determined that the packet is not for a remote connected socket, the packet's destination NID and source socket NID are replaced and the data protection information is recalculated. The modified packet is then sent to the destination socket over another processor interconnect path.
    Type: Application
    Filed: November 17, 2017
    Publication date: May 23, 2019
    Inventors: Frank R. Dropps, Michael Anderson, Michael Malewicki
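
The sketches below are loose, illustrative readings of the abstracts above, not the patented implementations; every class, function, constant, and mapping in them is hypothetical.

The first sketch corresponds to patent 11687459 and publications 20230281127 and 20220334971: a directory that records only exclusively owned blocks, so a read of any block absent from the directory is answered in the default shared state and creates no directory entry. It is a minimal C++ model in which a flat map from block address to owner stands in for the directory hardware.

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    enum class State { DefaultShared, Exclusive };

    // Directory that tracks only blocks taken exclusively; everything else stays
    // in the default shared state and costs no directory storage.
    class RangeDirectory {
    public:
        // A read of a block with no directory entry is answered in the default
        // shared state and deliberately leaves the directory untouched.
        State handle_read(uint64_t block) const {
            return owners_.count(block) ? State::Exclusive : State::DefaultShared;
        }

        // Only exclusive ownership consumes a directory entry.
        void handle_exclusive(uint64_t block, int owner) { owners_[block] = owner; }

        void release(uint64_t block) { owners_.erase(block); }
        std::size_t tracked() const { return owners_.size(); }

    private:
        std::unordered_map<uint64_t, int> owners_;  // block -> owning processor
    };

    int main() {
        RangeDirectory dir;
        bool shared = dir.handle_read(0x40) == State::DefaultShared;  // absent: shared, untracked
        dir.handle_exclusive(0x80, /*owner=*/2);                      // now tracked
        std::cout << std::boolalpha << shared << " " << dir.tracked() << "\n";  // true 1
    }

The apparent benefit, reading the abstract, is that directory storage grows with the number of exclusively held blocks rather than with the size of the shared range.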
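
Patents 11714755 and 11586541 (and publication 20220035742) describe a node controller whose cache-coherence logic runs in one of two modes, serving either the processor's local memory or fabric-attached memory. A minimal sketch of that mode dispatch, with hypothetical names and print statements standing in for the real coherence actions:

    #include <cstdint>
    #include <iostream>

    enum class Mode { LocalMemory, FabricAttachedMemory };

    struct CoherenceRequest { uint64_t addr; int requester; };

    class NodeController {
    public:
        explicit NodeController(Mode m) : mode_(m) {}

        void handle(const CoherenceRequest& req) {
            if (mode_ == Mode::LocalMemory) {
                // Mode 1: manage coherence for memory behind the processor interface.
                std::cout << "local-memory coherence for 0x" << std::hex << req.addr << "\n";
            } else {
                // Mode 2: manage coherence for memory behind the fabric/memory interface.
                std::cout << "fabric-attached coherence for 0x" << std::hex << req.addr << "\n";
            }
        }

    private:
        Mode mode_;
    };

    int main() {
        NodeController local(Mode::LocalMemory);
        NodeController fam(Mode::FabricAttachedMemory);
        local.handle({0x1000, 0});
        fam.handle({0x2000, 1});
    }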
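
Patent 11556471 and publication 20200349076 describe per-block memory categories that select the coherency protocol, so two blocks in the same memory device can be managed differently. A small sketch, assuming a map from block address to category and three illustrative categories:

    #include <cstdint>
    #include <iostream>
    #include <map>

    enum class Category { HardwareCoherent, SoftwareManaged, NonCoherent };

    class CategoryMap {
    public:
        void set(uint64_t block, Category c) { cat_[block] = c; }
        Category lookup(uint64_t block) const {
            auto it = cat_.find(block);
            return it == cat_.end() ? Category::HardwareCoherent : it->second;
        }
    private:
        std::map<uint64_t, Category> cat_;  // block address -> coherency category
    };

    // The response path is chosen by the category of the requested block.
    void respond(uint64_t block, const CategoryMap& cats) {
        switch (cats.lookup(block)) {
            case Category::HardwareCoherent: std::cout << block << ": directory/snoop path\n"; break;
            case Category::SoftwareManaged:  std::cout << block << ": defer to software protocol\n"; break;
            case Category::NonCoherent:      std::cout << block << ": plain read, no tracking\n"; break;
        }
    }

    int main() {
        CategoryMap cats;
        cats.set(0x100, Category::HardwareCoherent);  // same device...
        cats.set(0x140, Category::SoftwareManaged);   // ...different category
        respond(0x100, cats);
        respond(0x140, cats);
    }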
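
Patent 11314637 and publication 20210374050 describe grouping related cache-line requests and holding speculatively returned lines until they are requested or discarded. A sketch under the assumption of aligned four-line groups and a small FIFO-evicted buffer (the constants are illustrative):

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <iostream>
    #include <unordered_map>

    constexpr uint64_t kLine = 64, kGroup = 4;      // 4 lines per grouped request
    constexpr std::size_t kBufferCap = 8;           // speculative lines kept at once

    std::unordered_map<uint64_t, int> speculative;  // line addr -> payload (stand-in)
    std::deque<uint64_t> fifo;                      // eviction order for the buffer

    // One grouped request: return the demanded line, park its neighbours.
    int fetch_group(uint64_t demand_addr) {
        uint64_t base = demand_addr & ~(kGroup * kLine - 1);
        int demanded = 0;
        for (uint64_t a = base; a < base + kGroup * kLine; a += kLine) {
            int payload = static_cast<int>(a);      // pretend this came off the fabric
            if (a == demand_addr) { demanded = payload; continue; }
            if (fifo.size() == kBufferCap) {         // discard oldest to free tracking space
                speculative.erase(fifo.front()); fifo.pop_front();
            }
            speculative[a] = payload; fifo.push_back(a);
        }
        return demanded;
    }

    int read_line(uint64_t addr) {
        if (auto it = speculative.find(addr); it != speculative.end()) {
            int v = it->second; speculative.erase(it);   // hit: no second fabric request
            return v;
        }
        return fetch_group(addr);                        // miss: issue a grouped request
    }

    int main() {
        read_line(0x1040);                       // grouped fetch covers 0x1000-0x10C0
        std::cout << read_line(0x1080) << "\n";  // served from the speculative buffer
    }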
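
Patent 10970213 and publication 20200349075 describe disabling hardware cache coherency for requests from processors that are remote relative to the memory's home, translating a coherent read into a non-coherent immediate read handled under software coherency. A sketch with a toy address-to-home mapping:

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>

    std::unordered_map<uint64_t, int> directory;   // block -> owner bookkeeping (local path only)

    int home_node_of(uint64_t addr) { return static_cast<int>(addr >> 30); }  // toy mapping

    int handle_coherent_read(uint64_t addr, int requester_node) {
        if (requester_node != home_node_of(addr)) {
            // Remote requester: bypass hardware coherency; do not record state and
            // do not snoop other processors. Correctness is left to software.
            std::cout << "immediate (non-coherent) read of 0x" << std::hex << addr << "\n";
            return 0;  // data would come straight from memory
        }
        directory[addr] = requester_node;           // local requester: tracked as usual
        std::cout << "coherent read of 0x" << std::hex << addr << ", tracked\n";
        return 0;
    }

    int main() {
        handle_coherent_read(0x00001000, /*requester_node=*/0);  // local home: tracked
        handle_coherent_read(0x40002000, /*requester_node=*/0);  // remote home: immediate read
    }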
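
Patent 10628365 and publication 20190155779 describe tunneling packets between sockets without crossing a node controller link by rewriting the destination and source NIDs and recalculating the data protection information. A sketch with a toy checksum standing in for that data protection field:

    #include <cstdint>
    #include <iostream>

    struct Packet {
        uint8_t  dest_nid;
        uint8_t  src_nid;
        uint64_t addr;
        uint8_t  crc;      // stand-in for the packet's data protection information
    };

    uint8_t checksum(const Packet& p) {              // toy data-protection recalculation
        return static_cast<uint8_t>(p.dest_nid ^ p.src_nid ^ (p.addr & 0xFF));
    }

    bool addr_is_remote(uint64_t addr) { return addr >> 40; }  // toy ownership test

    void route(Packet p, uint8_t this_socket_nid, uint8_t local_peer_nid) {
        if (p.dest_nid != this_socket_nid && !addr_is_remote(p.addr)) {
            p.dest_nid = local_peer_nid;             // retarget to the locally reachable socket
            p.src_nid  = this_socket_nid;            // make this socket the apparent source
            p.crc      = checksum(p);                // data protection must match the new header
            std::cout << "tunneled to NID " << int(p.dest_nid)
                      << " over another processor interconnect path\n";
            return;
        }
        std::cout << "forwarded unchanged to NID " << int(p.dest_nid) << "\n";
    }

    int main() {
        Packet pkt{ /*dest*/2, /*src*/5, /*addr*/0x1234, 0 };
        pkt.crc = checksum(pkt);
        route(pkt, /*this_socket_nid=*/1, /*local_peer_nid=*/3);
    }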