Patents by Inventor Michael A. Blocksome

Michael A. Blocksome has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150312108
    Abstract: Administering VMs in a distributed computing environment that includes hosts that execute a VMM, with each VMM supporting execution of one or more VMs, includes: assigning the VMMs to a logical tree topology with one as a root; and executing, by the VMMs of the tree topology, a reduce operation, including: sending, by the root VMM to each of the other VMMs of the tree topology, a request for an instance of a particular VM; pausing, by each of the other VMMs, the requested instance of the particular VM; providing, by each of the other VMMs to the root VMM in response to the root VMM's request, the requested instance of the particular VM; and identifying, by the root VMM, differences among the requested instances of the particular VM, including performing a bitwise XOR operation amongst the instances of the particular VM.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
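
The reduce operation above boils down to collecting instances of the same paused VM at the root VMM and XORing their memory images so that any non-zero byte marks a divergence. The C fragment below is only a minimal sketch of that comparison step, with toy image sizes and made-up data; it is not drawn from the patent's implementation.

```c
/* Minimal sketch (not the patented implementation): a root VMM XORs
 * byte-for-byte copies of a paused VM's memory image received from two
 * other VMMs to locate differences. Sizes and data are illustrative. */
#include <stdio.h>
#include <stddef.h>

#define IMAGE_SIZE 8   /* toy VM image size in bytes */

/* XOR two VM images; a non-zero byte in 'diff' marks a differing offset. */
static void xor_images(const unsigned char *a, const unsigned char *b,
                       unsigned char *diff, size_t len)
{
    for (size_t i = 0; i < len; i++)
        diff[i] = a[i] ^ b[i];
}

int main(void)
{
    /* Pretend these were provided to the root VMM by two other VMMs. */
    unsigned char instance_a[IMAGE_SIZE] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned char instance_b[IMAGE_SIZE] = {1, 2, 9, 4, 5, 6, 7, 0};
    unsigned char diff[IMAGE_SIZE];

    xor_images(instance_a, instance_b, diff, IMAGE_SIZE);

    for (size_t i = 0; i < IMAGE_SIZE; i++)
        if (diff[i] != 0)
            printf("instances differ at byte %zu\n", i);
    return 0;
}
```
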
  • Publication number: 20150312109
    Abstract: In a distributed computing environment that includes hosts that execute a VMM, with each VMM supporting execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a scatter operation, including: pausing, by the root VMM, one or more executing VMs; storing, by the root VMM in a buffer, a plurality of VMs to scatter amongst the other VMMs of the tree topology; and sending, by the root VMM, to each of the other VMMs of the tree topology, a different one of the VMs stored in the buffer.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
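
To illustrate the scatter step, here is a small self-contained C sketch in which a root VMM sends a different buffered VM image to each non-root VMM. The vm_image structure, the rank numbering, and the send_to_vmm stub are invented for illustration; the abstract does not specify these details.

```c
/* Illustrative sketch only: distributing one buffered VM image to each
 * non-root VMM of a tree topology. Structures and helpers are
 * hypothetical stand-ins, not IBM's implementation. */
#include <stdio.h>

#define NUM_VMS   3
#define NUM_VMMS  4   /* rank 0 acts as the root VMM */

struct vm_image { int id; /* ... paused VM state would live here ... */ };

/* Stand-in for whatever transport the VMMs actually use. */
static void send_to_vmm(int vmm_rank, const struct vm_image *vm)
{
    printf("root: sending VM %d to VMM %d\n", vm->id, vmm_rank);
}

int main(void)
{
    /* Root VMM has paused these VMs and staged them in a buffer. */
    struct vm_image buffer[NUM_VMS] = { {101}, {102}, {103} };

    /* Each non-root VMM receives a different VM from the buffer. */
    for (int i = 0; i < NUM_VMS && i + 1 < NUM_VMMS; i++)
        send_to_vmm(i + 1, &buffer[i]);
    return 0;
}
```
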
  • Publication number: 20150309824
    Abstract: In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs.
    Type: Application
    Filed: April 25, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
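
The broadcast flow described in the abstract above, pause, announce, then transfer, can be sketched as follows. The notify_pending_transfer and transfer_vm helpers and the rank numbering are placeholders, not the claimed implementation.

```c
/* Rough sketch of the broadcast flow: pause a VM at the root, notify
 * every other VMM of the pending transfer, then transfer the paused VM.
 * All names and structures here are invented for illustration. */
#include <stdio.h>

#define NUM_VMMS 4  /* rank 0 acts as the root VMM */

struct vm_image { int id; int paused; };

static void notify_pending_transfer(int vmm_rank, int vm_id)
{
    printf("root -> VMM %d: transfer of VM %d is pending\n", vmm_rank, vm_id);
}

static void transfer_vm(int vmm_rank, const struct vm_image *vm)
{
    printf("root -> VMM %d: transferred VM %d\n", vmm_rank, vm->id);
}

int main(void)
{
    struct vm_image vm = { 7, 0 };

    vm.paused = 1;  /* root pauses the VM before broadcasting it */

    /* Announce the pending transfer, then push the paused VM to every
     * other VMM in the tree. */
    for (int rank = 1; rank < NUM_VMMS; rank++) {
        notify_pending_transfer(rank, vm.id);
        transfer_vm(rank, &vm);
    }
    return 0;
}
```
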
  • Publication number: 20150309817
    Abstract: In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 29, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
  • Patent number: 9170966
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: October 27, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
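
The sub-head/sub-tail scan in the abstract above can be approximated by the short C sketch below, which walks an in-memory array standing in for the adapter's packet queue: the sub-tail pointer advances until an out-of-sequence packet, the in-sequence packets between the two pointers are consumed, and the adapter would then resynchronize on the next in-sequence packet. The queue layout and variable names are assumptions for illustration only.

```c
/* Very simplified sketch of the sub-head/sub-tail scan; not the DMA
 * adapter's actual pointer handling or queue layout. */
#include <stdio.h>

#define QUEUE_LEN 6

int main(void)
{
    /* Packet sequence numbers as they sit in the queue; 4 arrives early. */
    int queue[QUEUE_LEN] = {0, 1, 2, 4, 3, 5};
    int next_in_sequence = 0;
    int sub_head = 0, sub_tail = 0;

    /* Advance the sub-tail pointer from the sub-head pointer until an
     * out-of-sequence packet is encountered. */
    int seq = next_in_sequence;
    while (sub_tail < QUEUE_LEN && queue[sub_tail] == seq) {
        sub_tail++;
        seq++;
    }

    /* Consume the packets between the sub-head and sub-tail pointers,
     * advancing the sub-head pointer with each consumed packet. */
    while (sub_head < sub_tail) {
        printf("consumed packet %d\n", queue[sub_head]);
        next_in_sequence = queue[sub_head] + 1;
        sub_head++;
    }

    /* sub_head == sub_tail: the adapter would now check whether the head
     * pointer refers to the next in-sequence packet and reset the
     * sub-pointers accordingly. */
    printf("next in-sequence packet: %d\n", next_in_sequence);
    return 0;
}
```
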
  • Patent number: 9170865
    Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: October 27, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
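
As a rough illustration of the chunked gather described above, the single-process C sketch below has each "task" split its send buffer into fixed-size chunks and the root place every chunk at the matching offset inside a per-task region of its receive buffer. The buffer sizes, the per-task layout of the root's receive buffer, and the use of memcpy in place of per-chunk data communications threads are all assumptions.

```c
/* Compact, single-process sketch of the chunked gather; not the actual
 * parallel-computer implementation. */
#include <stdio.h>
#include <string.h>

#define NUM_TASKS   2
#define SEND_BYTES  8
#define CHUNK_BYTES 4

int main(void)
{
    /* Each task's send buffer of contribution data. */
    const char *send_buf[NUM_TASKS] = { "AAAABBBB", "CCCCDDDD" };

    /* Root task's receive buffer, one region per contributing task. */
    char recv_buf[NUM_TASKS * SEND_BYTES + 1];
    memset(recv_buf, '.', NUM_TASKS * SEND_BYTES);
    recv_buf[NUM_TASKS * SEND_BYTES] = '\0';

    for (int task = 0; task < NUM_TASKS; task++) {
        for (int off = 0; off < SEND_BYTES; off += CHUNK_BYTES) {
            /* In the described design each chunk travels to the root on
             * its own data-communications thread; the memcpy stands in
             * for that transfer. The root stores the chunk at the same
             * offset within its receive region for that task. */
            memcpy(recv_buf + task * SEND_BYTES + off,
                   send_buf[task] + off, CHUNK_BYTES);
        }
    }
    printf("root receive buffer: %s\n", recv_buf);
    return 0;
}
```
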
  • Patent number: 9164792
    Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
  • Patent number: 9158718
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: October 13, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
  • Patent number: 9152590
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA controller copies the contents of the first FIFO message queue into the second FIFO message queue.
    Type: Grant
    Filed: May 14, 2014
    Date of Patent: October 6, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
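
The spill step in this family of abstracts, copying the first FIFO into a second FIFO when the next in-sequence packet is missing and the first FIFO is over a capacity threshold, might look roughly like the sketch below. The fifo structure, the threshold value, and the contains helper are invented stand-ins for the adapter's actual queues.

```c
/* Sketch (with invented queue structures) of the spill step: when the
 * next in-sequence packet is missing from the first FIFO and that FIFO
 * exceeds a capacity threshold, its contents are copied into a second
 * FIFO. Not the adapter's real data structures. */
#include <stdio.h>
#include <string.h>

#define FIFO_CAP   8
#define THRESHOLD  6

struct fifo { int pkts[FIFO_CAP]; int count; };

static int contains(const struct fifo *f, int seq)
{
    for (int i = 0; i < f->count; i++)
        if (f->pkts[i] == seq) return 1;
    return 0;
}

int main(void)
{
    struct fifo first  = { {2, 3, 4, 5, 6, 7, 8}, 7 };  /* packet 1 missing */
    struct fifo second = { {0}, 0 };
    int next_in_sequence = 1;

    if (!contains(&first, next_in_sequence) && first.count > THRESHOLD) {
        /* Copy everything queued in the first FIFO into the second FIFO. */
        memcpy(second.pkts, first.pkts, first.count * sizeof(int));
        second.count = first.count;
        first.count = 0;
        printf("spilled %d packets to the second FIFO\n", second.count);
    }
    return 0;
}
```
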
  • Patent number: 9146886
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA controller copies the contents of the first FIFO message queue into the second FIFO message queue.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: September 29, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
  • Patent number: 9122514
    Abstract: Administering message acknowledgements in a parallel computer that includes compute nodes, with each compute node including a processor and a messaging accelerator, includes: storing in a list, by a processor of a compute node, a message descriptor describing a message and an acknowledgement request descriptor describing a request for an acknowledgement of receipt of the message; processing, by a messaging accelerator of the compute node, the list, including transmitting, to a target compute node, the message described by the message descriptor and transmitting, to the target compute node, the request described by the acknowledgement request descriptor; receiving, by the messaging accelerator from the target compute node, an acknowledgement of receipt of the message, including notifying the processor of receipt of the acknowledgement; and removing, by the processor from the list, the message descriptor and the acknowledgment request descriptor.
    Type: Grant
    Filed: January 7, 2014
    Date of Patent: September 1, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
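
A toy, single-threaded walk-through of the descriptor-list protocol in the abstract above is sketched below: the processor queues a message descriptor and its acknowledgement-request descriptor, the accelerator transmits both to the target node, and the processor removes the pair once the acknowledgement arrives. The descriptor layout and the accelerator_transmit helper are hypothetical, not the hardware interface.

```c
/* Hypothetical, single-threaded walk-through of the descriptor list;
 * the real design splits this work between a processor and a messaging
 * accelerator on each compute node. */
#include <stdio.h>

enum desc_kind { MSG_DESC, ACK_REQ_DESC };

struct descriptor { enum desc_kind kind; int msg_id; int active; };

static void accelerator_transmit(const struct descriptor *d, int target_node)
{
    printf("accelerator -> node %d: %s for message %d\n", target_node,
           d->kind == MSG_DESC ? "message" : "ack request", d->msg_id);
}

int main(void)
{
    /* Processor builds the list: one message descriptor followed by its
     * acknowledgement-request descriptor. */
    struct descriptor list[2] = { {MSG_DESC, 42, 1}, {ACK_REQ_DESC, 42, 1} };

    /* Accelerator processes the list and transmits both to the target. */
    for (int i = 0; i < 2; i++)
        accelerator_transmit(&list[i], 3);

    /* Accelerator later receives the ack and notifies the processor,
     * which removes (here, simply deactivates) both descriptors. */
    for (int i = 0; i < 2; i++)
        list[i].active = 0;
    printf("processor: descriptors for message 42 removed\n");
    return 0;
}
```
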
  • Patent number: 9116750
    Abstract: Methods, apparatuses, and computer program products for optimizing collective communications within a parallel computer comprising a plurality of hardware threads for executing software threads of a parallel application are provided. Embodiments include a processor of a parallel computer determining for each software thread, an affinity of the software thread to a particular hardware thread. Each affinity indicates an assignment of a software thread to a particular hardware thread. The processor also generates one or more affinity domains based on the affinities of the software threads. Embodiments also include a processor generating, for each affinity domain, a topology of the affinity domain based on the affinities of the software threads to the hardware threads. According to embodiments of the present application, a processor also performs, based on the generated topologies of the affinity domains, a collective operation on one or more software threads.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: August 25, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
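
One simplified way to picture the affinity-domain step above is to group software threads by the hardware thread they are assigned to, as in the sketch below. The mapping data is made up, and the sketch stops short of the patent's later steps of building per-domain topologies and running collective operations over them.

```c
/* Illustrative only: grouping software threads into affinity domains by
 * their assigned hardware thread. Not the patented method. */
#include <stdio.h>

#define NUM_SW_THREADS 6
#define NUM_HW_THREADS 3

int main(void)
{
    /* affinity[i] = hardware thread assigned to software thread i. */
    int affinity[NUM_SW_THREADS] = {0, 1, 0, 2, 1, 2};
    int domain_size[NUM_HW_THREADS] = {0};

    /* One affinity domain per hardware thread in this simplified view. */
    for (int sw = 0; sw < NUM_SW_THREADS; sw++)
        domain_size[affinity[sw]]++;

    for (int hw = 0; hw < NUM_HW_THREADS; hw++)
        printf("affinity domain %d holds %d software thread(s)\n",
               hw, domain_size[hw]);
    return 0;
}
```
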
  • Patent number: 9104512
    Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
    Type: Grant
    Filed: November 15, 2012
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
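
The key idea in the abstract above, that a deterministic (ordered) network lets a FENCE complete without any per-SEND accounting, can be caricatured with the tiny C sketch below: the FENCE is simply enqueued after the SENDs, so by the time it is reached everything ahead of it has already been delivered. The instruction stream and all names are invented for illustration and do not reflect the PAMI API.

```c
/* Loose sketch of fencing over a deterministic channel: no counters or
 * per-SEND bookkeeping, just ordering. Invented names throughout. */
#include <stdio.h>

#define QUEUE_LEN 4

enum op { SEND, FENCE };

int main(void)
{
    /* Ordered instruction stream between two endpoints. */
    enum op stream[QUEUE_LEN] = { SEND, SEND, SEND, FENCE };

    for (int i = 0; i < QUEUE_LEN; i++) {
        if (stream[i] == SEND) {
            printf("SEND %d delivered (deterministic order)\n", i);
        } else {
            /* Everything enqueued before the FENCE has already been
             * delivered, so the FENCE completes here. */
            printf("FENCE complete after %d SENDs\n", i);
        }
    }
    return 0;
}
```
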
  • Patent number: 9081501
    Abstract: A Multi-Petascale Highly Efficient Parallel Supercomputer provides 100 petaOPS-scale computing at decreased cost, power, and footprint, and allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model in which many processors can be integrated into a single Application Specific Integrated Circuit (ASIC).
    Type: Grant
    Filed: January 10, 2011
    Date of Patent: July 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sameh Asaad, Ralph E. Bellofatto, Michael A. Blocksome, Matthias A. Blumrich, Peter Boyle, Jose R. Brunheroto, Dong Chen, Chen-Yong Cher, George L. Chiu, Norman Christ, Paul W. Coteus, Kristan D. Davis, Gabor J. Dozsa, Alexandre E. Eichenberger, Noel A. Eisley, Matthew R. Ellavsky, Kahn C. Evans, Bruce M. Fleischer, Thomas W. Fox, Alan Gara, Mark E. Giampapa, Thomas M. Gooding, Michael K. Gschwind, John A. Gunnels, Shawn A. Hall, Rudolf A. Haring, Philip Heidelberger, Todd A. Inglett, Brant L. Knudson, Gerard V. Kopcsay, Sameer Kumar, Amith R. Mamidala, James A. Marcella, Mark G. Megerian, Douglas R. Miller, Samuel J. Miller, Adam J. Muff, Michael B. Mundy, John K. O'Brien, Kathryn M. O'Brien, Martin Ohmacht, Jeffrey J. Parker, Ruth J. Poole, Joseph D. Ratterman, Valentina Salapura, David L. Satterfield, Robert M. Senger, Brian Smith, Burkhard Steinmacher-Burow, William M. Stockdell, Craig B. Stunkel, Krishnan Sugavanam, Yutaka Sugawara, Todd E. Takken, Barry M. Trager, James L. Van Oosten, Charles D. Wait, Robert E. Walkup, Alfred T. Watson, Robert W. Wisniewski, Peng Wu
  • Patent number: 9081739
    Abstract: Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
    Type: Grant
    Filed: November 16, 2012
    Date of Patent: July 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
  • Publication number: 20150193365
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
    Type: Application
    Filed: May 6, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
  • Publication number: 20150193261
    Abstract: Administering message acknowledgements in a parallel computer that includes compute nodes, with each compute node including a processor and a messaging accelerator, includes: storing in a list, by a processor of a compute node, a message descriptor describing a message and an acknowledgement request descriptor describing a request for an acknowledgement of receipt of the message; processing, by a messaging accelerator of the compute node, the list, including transmitting, to a target compute node, the message described by the message descriptor and transmitting, to the target compute node, the request described by the acknowledgement request descriptor; receiving, by the messaging accelerator from the target compute node, an acknowledgement of receipt of the message, including notifying the processor of receipt of the acknowledgement; and removing, by the processor from the list, the message descriptor and the acknowledgment request descriptor.
    Type: Application
    Filed: January 7, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
  • Publication number: 20150193361
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA controller copies the contents of the first FIFO message queue into the second FIFO message queue.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
  • Publication number: 20150193282
    Abstract: Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a message descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator, responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that the message has been sent.
    Type: Application
    Filed: May 5, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
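
The NACK handling summarized in the abstract above can be sketched as follows: the source accelerator marks the message descriptor as sent, requests an acknowledgement from the destination, and clears the sent flag when a negative acknowledgement comes back so the message remains outstanding. The msg_descriptor structure and request_ack stub are hypothetical stand-ins.

```c
/* Hypothetical sketch of the NACK handling; structures and helpers are
 * invented, not the messaging-accelerator interface. */
#include <stdio.h>

struct msg_descriptor { int msg_id; int sent_flag; };

/* Stand-in for the destination accelerator: report the message missing. */
static int request_ack(int msg_id)
{
    printf("destination: NACK for message %d\n", msg_id);
    return 0;  /* 0 = NACK, 1 = ACK */
}

int main(void)
{
    struct msg_descriptor d = { 7, 0 };

    d.sent_flag = 1;                     /* message transmitted */
    if (!request_ack(d.msg_id))
        d.sent_flag = 0;                 /* NACK: clear the sent flag */

    printf("descriptor %d sent flag: %d\n", d.msg_id, d.sent_flag);
    return 0;
}
```
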
  • Publication number: 20150193281
    Abstract: Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a message descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator, responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that the message has been sent.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome