Patents by Inventor Michael Woodacre

Michael Woodacre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11573898
    Abstract: A node controller includes a first interface to one or more processors and a second interface with a plurality of ports connecting to node controllers within a base node and to other nodes in a cache-coherent interconnect network. The node controller further includes a third interface to a first plurality of memory devices, along with cache coherence management logic. The cache coherence management logic can maintain, based on a first circuitry, hardware-managed cache coherency in the cache-coherent interconnect network, and can further facilitate, based on a second circuitry, software-managed cache coherency in the same network.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: February 7, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Randy Passint, Paul Frank, Russell L. Nicol, Thomas McGee, Michael Woodacre
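
A minimal sketch of the hybrid coherency scheme the abstract above describes: one circuit path keeps a hardware directory that invalidates sharers automatically, while a second path leaves coherency to explicit software flushes. This is an illustration under assumptions, not the patented design; all names (Region, nc_write, sw_flush) are invented for the example.

```c
/* Toy node controller: hardware-managed regions use a directory; software-
 * managed regions rely on explicit flushes. Illustrative only. */
#include <stdio.h>
#include <stdbool.h>

#define NODES 4

typedef enum { HW_COHERENT, SW_COHERENT } Mode;

typedef struct {
    Mode mode;
    bool sharer[NODES];   /* directory bits, consulted only in HW mode */
    int  value;           /* the cached datum, one word for brevity    */
} Region;

/* A write through the node controller. In HW mode the first circuitry
 * invalidates every other sharer before the write completes; in SW mode
 * the write lands immediately and software must reconcile copies later. */
static void nc_write(Region *r, int node, int value) {
    if (r->mode == HW_COHERENT) {
        for (int n = 0; n < NODES; n++)
            if (n != node && r->sharer[n]) {
                r->sharer[n] = false;             /* send invalidate */
                printf("  invalidate node %d\n", n);
            }
    }
    r->value = value;
    r->sharer[node] = true;
}

/* Software-managed coherency: the application or runtime decides when
 * stale copies on other nodes must be discarded. */
static void sw_flush(Region *r) {
    for (int n = 0; n < NODES; n++) r->sharer[n] = false;
    printf("  software flush: all copies dropped\n");
}

int main(void) {
    Region hw = { HW_COHERENT, {true, true, false, false}, 0 };
    Region sw = { SW_COHERENT, {true, true, false, false}, 0 };

    printf("write to HW-managed region:\n");
    nc_write(&hw, 2, 42);      /* hardware invalidates nodes 0 and 1   */

    printf("write to SW-managed region:\n");
    nc_write(&sw, 2, 42);      /* no probes; stale copies may linger   */
    sw_flush(&sw);             /* software restores coherence itself   */
    return 0;
}
```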
  • Publication number: 20220050780
    Abstract: A node controller includes a first interface to one or more processors and a second interface with a plurality of ports connecting to node controllers within a base node and to other nodes in a cache-coherent interconnect network. The node controller further includes a third interface to a first plurality of memory devices, along with cache coherence management logic. The cache coherence management logic can maintain, based on a first circuitry, hardware-managed cache coherency in the cache-coherent interconnect network, and can further facilitate, based on a second circuitry, software-managed cache coherency in the same network.
    Type: Application
    Filed: August 17, 2020
    Publication date: February 17, 2022
    Inventors: Randy Passint, Paul Frank, Russell L. Nicol, Thomas McGee, Michael Woodacre
  • Patent number: 11029847
    Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Hierarchical memory architectures, in which data is temporarily stored in slower or less available memories, will increasingly keep high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, the time spent and power consumed copying data into and out of a slower memory tier will increase the costs of high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization, and computational steering without any data copy or movement.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: June 8, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
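
A minimal sketch of the write-local/read-global idea in the abstract above: the global address space is partitioned into per-node segments, each node writes only its own segment, and a remote read resolves the owning node from the address instead of staging a copy. The layout and names (SEG_WORDS, global_read) are assumptions for illustration, not the patented architecture.

```c
/* Toy global address space: local writes, zero-copy global reads. */
#include <stdio.h>

#define NODES     4
#define SEG_WORDS 8

/* One contiguous segment per node; together they form the global space. */
static int segment[NODES][SEG_WORDS];

/* Local write: a node touches only its own segment, for performance. */
static void local_write(int node, int offset, int value) {
    segment[node][offset] = value;
}

/* Global read: the owner is derived from the address and the value is
 * fetched in place -- no burst-buffer staging, no intermediate copy. */
static int global_read(int gaddr) {
    int owner  = gaddr / SEG_WORDS;
    int offset = gaddr % SEG_WORDS;
    return segment[owner][offset];
}

int main(void) {
    local_write(2, 3, 99);                 /* node 2 writes locally    */
    /* node 0 runs in situ analytics on node 2's data, with zero copies */
    printf("global word 19 = %d\n", global_read(2 * SEG_WORDS + 3));
    return 0;
}
```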
  • Patent number: 10795782
    Abstract: Example implementations relate to an apparatus that supports providing a computing service to a client, including transferring control from a primary data processing system to a secondary data processing system in response to an event, where each system comprises a processor and associated memory. The apparatus comprises circuitry to identify restoration data, the restoration data comprising at least data associated with at least one predetermined type of memory operation on the memory of the primary data processing system, and circuitry to output any identified restoration data for storage in the memory associated with the processor of the secondary data processing system.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: October 6, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Keith Packard, Michael Woodacre, Andrew R. Wheeler
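
A minimal sketch of the restoration-data path described above: every store on the primary passes through one routine, and only a predetermined type of store (here, hypothetically, stores into a tracked address range) is identified as restoration data and forwarded into the secondary's memory, so the secondary holds current state when control transfers. The range-based filter is an assumption for illustration.

```c
/* Toy restoration-data mirror between a primary and a secondary system. */
#include <stdio.h>

#define MEM_WORDS    16
#define TRACKED_FROM  8   /* stores at or above this address are mirrored */

static int primary_mem[MEM_WORDS];
static int secondary_mem[MEM_WORDS];

/* Every store goes through this path; only the predetermined type of
 * memory operation (tracked-range stores, in this sketch) is identified
 * as restoration data and output to the secondary's memory. */
static void primary_store(int addr, int value) {
    primary_mem[addr] = value;
    if (addr >= TRACKED_FROM)
        secondary_mem[addr] = value;   /* restoration data, mirrored */
}

int main(void) {
    primary_store(2, 111);    /* untracked: stays on the primary      */
    primary_store(9, 222);    /* tracked: also lands on the secondary */

    /* event: control transfers; the secondary already holds the state */
    printf("secondary[9] after failover = %d\n", secondary_mem[9]);
    printf("secondary[2] after failover = %d (not mirrored)\n",
           secondary_mem[2]);
    return 0;
}
```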
  • Patent number: 10521260
    Abstract: A high performance computing (HPC) system has an architecture that separates data paths used by compute nodes exchanging computational data from the data paths used by compute nodes to obtain computational work units and save completed computations. The system enables an improved method of saving checkpoint data, and an improved method of using an analysis of the saved data to assign particular computational work units to particular compute nodes. The system includes a compute fabric and compute nodes that cooperatively perform a computation by mutual communication using the compute fabric. The system also includes a local data fabric that is coupled to the compute nodes, a memory, and a data node. The data node is configured to retrieve data for the computation from an external bulk data storage, and to store its work units in the memory for access by the compute nodes.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: December 31, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Steven J. Dean, Michael Woodacre, Randal S. Passint, Eric C. Fromm, Thomas E. McGee, Michael E. Malewicki, Kirill Malkin
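
A minimal sketch of the separated-fabric flow in the abstract above, modeled single-threaded with two distinct paths: a data node stages work units into memory over a "data fabric", while computational exchange and checkpointing are kept on their own paths so they never contend. All function names and the queue layout are illustrative assumptions.

```c
/* Toy model: compute traffic and work-unit/checkpoint traffic use
 * separate paths, mirroring the compute-fabric / data-fabric split. */
#include <stdio.h>

#define QUEUE_LEN 4

/* Work units staged in memory by the data node (data-fabric side). */
static int work_queue[QUEUE_LEN];
static int head, tail;

/* Data node: pulls work from external bulk storage into the memory. */
static void data_node_fill(const int *bulk, int n) {
    for (int i = 0; i < n && tail < QUEUE_LEN; i++)
        work_queue[tail++] = bulk[i];
}

/* Compute node: fetches a work unit over the data fabric... */
static int fetch_work(void) {
    return head < tail ? work_queue[head++] : -1;
}

/* ...exchanges intermediate results with peers over the compute fabric... */
static int compute_exchange(int partial) { return partial * 2; }

/* ...and saves the checkpoint back through the data fabric. */
static void checkpoint(int result) {
    printf("checkpoint saved via data fabric: %d\n", result);
}

int main(void) {
    const int bulk_storage[] = { 10, 20, 30 };
    data_node_fill(bulk_storage, 3);

    int w;
    while ((w = fetch_work()) != -1)
        checkpoint(compute_exchange(w));
    return 0;
}
```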
  • Publication number: 20190303249
    Abstract: Example implementations relate to an apparatus to support providing a computing service to a client including transferring control between a primary data processing system and a secondary data processing system in response to an event; the primary data processing system comprising a processor and associated memory and the secondary data processing system comprising a processor and associated memory; the apparatus comprising: circuitry to identify restoration data; the restoration data comprising at least data associated with at least one predetermined type of memory operation of the memory associated with the primary data processing system, and circuitry to output any identified restoration data for storage in the memory associated with the processor of the secondary data processing system.
    Type: Application
    Filed: April 2, 2018
    Publication date: October 3, 2019
    Inventors: Dejan S. Milojicic, Keith Packard, Michael Woodacre, Andrew R. Wheeler
  • Patent number: 10404800
    Abstract: An apparatus and method exchange data between two nodes of a high performance computing (HPC) system using a data communication link. The apparatus has one or more processing cores, RDMA engines, cache coherence engines, and multiplexers. The multiplexers may be programmed by a user application, for example through an API, to selectively couple the RDMA engines, the cache coherence engines, or a mix of the two to the data communication link. Bulk data transfer to the nodes of the HPC system may be performed using paged RDMA during initialization. Then, during the computation proper, random access to remote data may be performed using a coherence protocol (e.g., MESI) that operates on much smaller cache lines.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: September 3, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Michael Woodacre, Randal S. Passint
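
The abstract above describes an application-programmable multiplexer that couples either an RDMA engine or a cache coherence engine to the link. Below is a hedged sketch of what such an API could look like: a mode switch, a bulk paged put for initialization, and a cache-line-granularity get for the computation proper. The names (link_select, rdma_put, coherent_get) and the shrunken page/line sizes are assumptions, not the patented interface.

```c
/* Toy per-link multiplexer: bulk RDMA path vs. fine-grained coherence path. */
#include <stdio.h>
#include <string.h>

#define PAGE_BYTES 64          /* shrunken "page" for the sketch */
#define LINE_BYTES 8           /* shrunken cache line            */

typedef enum { LINK_RDMA, LINK_COHERENT } LinkMode;
static LinkMode link_mode;

static char remote_mem[PAGE_BYTES];   /* the peer node's memory */

/* Program the multiplexer: couple the link to one engine or the other. */
static void link_select(LinkMode m) { link_mode = m; }

/* RDMA engine: bulk paged transfer, used during initialization. */
static void rdma_put(const char *src, int bytes) {
    if (link_mode != LINK_RDMA) { printf("mux not set for RDMA\n"); return; }
    memcpy(remote_mem, src, (size_t)bytes);
    printf("RDMA: %d bytes moved in one paged transfer\n", bytes);
}

/* Coherence engine: random access at cache-line granularity during the
 * computation proper (a stand-in for a MESI-style line fill). */
static void coherent_get(int offset, char *line) {
    if (link_mode != LINK_COHERENT) { printf("mux not set for coherence\n"); return; }
    memcpy(line, remote_mem + offset, LINE_BYTES);
    printf("coherence: fetched %d-byte line at offset %d\n", LINE_BYTES, offset);
}

int main(void) {
    char page[PAGE_BYTES], line[LINE_BYTES];
    memset(page, 'x', sizeof page);

    link_select(LINK_RDMA);        /* init: bulk data in via RDMA  */
    rdma_put(page, sizeof page);

    link_select(LINK_COHERENT);    /* compute: fine-grained access */
    coherent_get(16, line);
    return 0;
}
```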
  • Publication number: 20180018196
    Abstract: A high performance computing (HPC) system has an architecture that separates data paths used by compute nodes exchanging computational data from the data paths used by compute nodes to obtain computational work units and save completed computations. The system enables an improved method of saving checkpoint data, and an improved method of using an analysis of the saved data to assign particular computational work units to particular compute nodes. The system includes a compute fabric and compute nodes that cooperatively perform a computation by mutual communication using the compute fabric. The system also includes a local data fabric that is coupled to the compute nodes, a memory, and a data node. The data node is configured to retrieve data for the computation from an external bulk data storage, and to store its work units in the memory for access by the compute nodes.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 18, 2018
    Inventors: Steven J. Dean, Michael Woodacre, Randal S. Passint, Eric C. Fromm, Thomas E. McGee, Michael E. Malewicki, Kirill Malkin
  • Publication number: 20180020054
    Abstract: An apparatus and method exchange data between two nodes of a high performance computing (HPC) system using a data communication link. The apparatus has one or more processing cores, RDMA engines, cache coherence engines, and multiplexers. The multiplexers may be programmed by a user application, for example through an API, to selectively couple the RDMA engines, the cache coherence engines, or a mix of the two to the data communication link. Bulk data transfer to the nodes of the HPC system may be performed using paged RDMA during initialization. Then, during the computation proper, random access to remote data may be performed using a coherence protocol (e.g., MESI) that operates on much smaller cache lines.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 18, 2018
    Inventors: Michael Woodacre, Randal S. Passint
  • Publication number: 20170139607
    Abstract: In high performance computing, the potential compute power in a data center will scale to and beyond a billion-billion calculations per second ("Exascale" computing levels). Hierarchical memory architectures, in which data is temporarily stored in slower or less available memories, will increasingly keep high performance computing systems from approaching their maximum potential processing capabilities. Furthermore, the time spent and power consumed copying data into and out of a slower memory tier will increase the costs of high performance computing at an accelerating rate. New technologies will be required, such as the novel Zero Copy Architecture disclosed herein, in which each compute node writes locally for performance yet can quickly access data globally with low latency. The result is the ability to perform burst buffer operations and in situ analytics, visualization, and computational steering without any data copy or movement.
    Type: Application
    Filed: November 16, 2016
    Publication date: May 18, 2017
    Inventors: Kirill Malkin, Steve Dean, Michael Woodacre, Eng Lim Goh
  • Patent number: 7613882
    Abstract: An example embodiment of the present invention provides processes relating to a cache coherence protocol for distributed shared memory. In one process, a DSM-management chip receives, from one of the CPUs on a node that includes the chip, a request marked for fast invalidation to modify a block of memory stored on that node. The DSM-management chip sends probes, also marked for fast invalidation, to the DSM-management chips on other nodes where the block of memory is cached, and responds to the original request, allowing the requested modification to proceed without waiting for responses to the probes. The DSM-management chip then delays for a predetermined period before incrementing the value of a serial counter, which operates in connection with another serial counter to prevent data from leaving the node's CPUs over the network until responses to the probes have been received.
    Type: Grant
    Filed: January 29, 2007
    Date of Patent: November 3, 2009
    Assignee: 3 Leaf Systems
    Inventors: Isam Akkawi, Michael Woodacre, Bryan Chin, Krishnan Subramani, Najeeb Imran Ansari, Chetana Nagendra Keltcher, Janakiramanan Vaidyanathan
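
A minimal sketch of the two-serial-counter fence from the abstract above, modeled single-threaded: the fast-invalidate write is granted immediately, an "issue" counter advances after the delay, and outbound traffic from the node is held whenever the two counters disagree, i.e. until the probe acknowledgements catch up. The counter names and the egress check are illustrative assumptions, not the patented logic.

```c
/* Toy model of egress fencing with two serial counters. */
#include <stdio.h>

static unsigned issue_serial;      /* advanced (after a delay) per fast write */
static unsigned complete_serial;   /* advanced when that write's probes ack   */

/* Data may leave the node's CPUs only when no fast write is outstanding. */
static int egress_allowed(void) { return complete_serial == issue_serial; }

static void try_send(const char *what) {
    printf("%-10s egress %s\n", what,
           egress_allowed() ? "allowed" : "held (probes pending)");
}

int main(void) {
    /* A CPU issues a fast-invalidate write; the DSM-management chip
       grants it at once and sends probes to the other caching nodes. */
    printf("write granted without waiting on probes\n");

    /* ...the predetermined delay elapses... */
    issue_serial++;
    try_send("new data");          /* held: acknowledgements not yet in */

    /* the last probe acknowledgement arrives */
    complete_serial++;
    try_send("new data");          /* allowed: the counters match again */
    return 0;
}
```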