Patents by Inventor Michael A. Blocksome

Michael A. Blocksome has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150193260
    Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
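
The gather mechanism in the abstract above (publication 20150193260) can be illustrated with a small, hedged C sketch: each task divides its send buffer into chunks at known offsets, and the root stores every received chunk at the matching offset for that task. This is not the patented PAMI/MPI implementation; the chunk size, the chunk_t layout, and the in-process "send" are illustrative assumptions only.

```c
#include <stdio.h>
#include <string.h>

#define NUM_TASKS   4
#define BUF_BYTES   16          /* contribution data per task            */
#define CHUNK_BYTES 4           /* each send buffer is split into chunks */

/* A chunk carries its source task, its offset in the send buffer, and data. */
typedef struct {
    int  task;
    int  offset;
    char data[CHUNK_BYTES];
} chunk_t;

int main(void) {
    char send_buf[NUM_TASKS][BUF_BYTES];
    char recv_buf[NUM_TASKS][BUF_BYTES];      /* root's receive buffer      */
    chunk_t wire[NUM_TASKS * (BUF_BYTES / CHUNK_BYTES)];
    int n = 0;

    /* Each task fills its send buffer with contribution data. */
    for (int t = 0; t < NUM_TASKS; t++)
        for (int i = 0; i < BUF_BYTES; i++)
            send_buf[t][i] = (char)('A' + t);

    /* Each task divides its send buffer into chunks and "sends" one chunk
       per data-communications thread (modeled here as one array slot). */
    for (int t = 0; t < NUM_TASKS; t++)
        for (int off = 0; off < BUF_BYTES; off += CHUNK_BYTES) {
            wire[n].task = t;
            wire[n].offset = off;
            memcpy(wire[n].data, &send_buf[t][off], CHUNK_BYTES);
            n++;
        }

    /* The root stores each chunk at the offset it held in the send buffer. */
    for (int i = 0; i < n; i++)
        memcpy(&recv_buf[wire[i].task][wire[i].offset], wire[i].data, CHUNK_BYTES);

    for (int t = 0; t < NUM_TASKS; t++)
        printf("task %d -> %.*s\n", t, BUF_BYTES, recv_buf[t]);
    return 0;
}
```
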
  • Publication number: 20150193366
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA adapter copies the contents of the first FIFO message queue into the second FIFO message queue.
    Type: Application
    Filed: May 14, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
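
A minimal sketch of the sub-head/sub-tail bookkeeping described in publication 20150193366, assuming plain C arrays stand in for the FIFO message queues and integer indices stand in for the hardware pointers; the threshold value and sequence numbers are invented for illustration, not taken from the patent.

```c
#include <stdio.h>

#define CAP        8
#define THRESHOLD  5   /* assumed "threshold capacity" of the first FIFO */

int main(void) {
    /* First FIFO message queue: packet sequence numbers as received.
       Packet 3 is missing, so 4 onward are out of sequence. */
    int fifo1[CAP] = {0, 1, 2, 4, 5, 6, 7, 8};
    int count1 = 8;
    int fifo2[CAP];
    int count2 = 0;

    int expected = 0;          /* next in-sequence packet number */
    int sub_head = 0, sub_tail = 0;

    /* Increment the sub-tail pointer from the sub-head pointer until an
       out-of-sequence packet is encountered. */
    while (sub_tail < count1 && fifo1[sub_tail] == expected + (sub_tail - sub_head))
        sub_tail++;

    /* Consume packets between sub-head and sub-tail, incrementing the
       sub-head pointer with the consumption of each packet. */
    while (sub_head < sub_tail) {
        printf("consumed packet %d\n", fifo1[sub_head]);
        expected = fifo1[sub_head] + 1;
        sub_head++;
    }

    /* sub-head == sub-tail: the next in-sequence packet (3) is not in the
       first FIFO.  If that queue exceeds the threshold capacity, copy its
       remaining contents into the second FIFO message queue. */
    if (sub_head == sub_tail && count1 > THRESHOLD) {
        for (int i = sub_head; i < count1; i++)
            fifo2[count2++] = fifo1[i];
        count1 = 0;
        printf("copied %d out-of-sequence packets to second FIFO\n", count2);
    }
    return 0;
}
```
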
  • Publication number: 20150193283
    Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
    Type: Application
    Filed: May 28, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
  • Publication number: 20150193364
    Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing, from a sub-head pointer, a sub-tail pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, including incrementing, with the consumption of each packet, the sub-head pointer until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
    Type: Application
    Filed: January 7, 2014
    Publication date: July 9, 2015
    Applicant: International Business Machines Corporation
    Inventor: Michael A. Blocksome
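
Publication 20150193364 above covers the companion step: once the sub-head pointer has caught up with the sub-tail pointer, the sub-pointers are reset either to the head pointer or to the slot holding the next in-sequence packet. A hedged sketch follows, with a hypothetical reset_sub_pointers helper and an index-based model of the FIFO.

```c
#include <stdio.h>

/* Choose where to reset the sub-head/sub-tail pointers once sub-head has
   caught up with sub-tail, following the rule in publication 20150193364.
   'fifo' holds packet sequence numbers; indices stand in for pointers. */
static int reset_sub_pointers(const int *fifo, int count, int head, int next_seq) {
    if (head < count && fifo[head] == next_seq)
        return head;                       /* head already points at it */
    for (int i = 0; i < count; i++)        /* otherwise find that packet */
        if (fifo[i] == next_seq)
            return i;
    return -1;                             /* not present in this FIFO   */
}

int main(void) {
    int fifo[] = {5, 3, 4};                /* packet 3 arrived late      */
    int head = 0, next_seq = 3;
    int idx = reset_sub_pointers(fifo, 3, head, next_seq);
    printf("sub-head = sub-tail = index %d\n", idx);   /* prints 1 */
    return 0;
}
```
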
  • Patent number: 9075759
    Abstract: Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: July 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
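
The four FENCE patents in this listing (9075759, 9069631, 9052974, and 9047150) share one idea: because the underlying channel delivers operations deterministically and in order, an active FENCE needs no per-transfer accounting; it completes once everything initiated before it has drained. A rough sketch under that assumption, with an in-process queue standing in for the PAMI and its data communications resources; the op_t type and the queue are illustrative, not the PAMI API.

```c
#include <stdio.h>

#define MAX_OPS 16

typedef enum { OP_SEND, OP_FENCE } op_kind;

typedef struct { op_kind kind; int id; } op_t;

/* An ordered, deterministic channel: operations complete strictly in the
   order they were initiated, so a FENCE that reaches the front of the
   queue knows every earlier SEND/DMA transfer has already completed --
   no per-transfer FENCE accounting is kept. */
int main(void) {
    op_t queue[MAX_OPS];
    int n = 0;

    for (int i = 0; i < 3; i++)
        queue[n++] = (op_t){ OP_SEND, i };     /* ordered active SENDs  */
    queue[n++] = (op_t){ OP_FENCE, 99 };       /* active FENCE          */

    for (int i = 0; i < n; i++) {
        if (queue[i].kind == OP_SEND)
            printf("transfer %d delivered deterministically\n", queue[i].id);
        else
            printf("FENCE complete: all %d prior transfers are done\n", i);
    }
    return 0;
}
```
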
  • Patent number: 9069631
    Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: June 30, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
  • Patent number: 9052974
    Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: June 9, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
  • Patent number: 9047091
    Abstract: Collective operation protocol selection in a parallel computer that includes compute nodes may be carried out by calling a collective operation with operating parameters; selecting a protocol for executing the operation and executing the operation with the selected protocol. Selecting a protocol includes: iteratively, until a prospective protocol meets predetermined performance criteria: providing, to a protocol performance function for the prospective protocol, the operating parameters; determining whether the prospective protocol meets predefined performance criteria by evaluating a predefined performance fit equation, calculating a measure of performance of the protocol for the operating parameters; determining that the prospective protocol meets predetermined performance criteria and selecting the protocol for executing the operation only if the calculated measure of performance is greater than a predefined minimum performance threshold.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: June 2, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
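
A hedged sketch of the selection loop in patent 9047091: iterate over prospective protocols, evaluate a performance fit function against the call's operating parameters, and select a protocol only if its calculated measure of performance exceeds its minimum threshold. The protocol_t structure, the fit equations, and the threshold values are invented for illustration; the patented performance fit equations are not reproduced here.

```c
#include <stdio.h>

/* Operating parameters of the collective call (illustrative fields only). */
typedef struct { int message_bytes; int num_ranks; } params_t;

/* A candidate protocol: a name, a performance-fit function, and the
   minimum measure of performance it must exceed to be selected. */
typedef struct {
    const char *name;
    double (*fit)(const params_t *p);
    double min_threshold;
} protocol_t;

/* Hypothetical fit equations; real ones would be calibrated per machine. */
static double fit_short(const params_t *p) { return p->message_bytes <= 256 ? 1.0 : 0.1; }
static double fit_tree(const params_t *p)  { return p->num_ranks > 8 ? 0.9 : 0.3; }

int main(void) {
    protocol_t candidates[] = {
        { "short-message", fit_short, 0.5 },
        { "binomial-tree", fit_tree,  0.5 },
    };
    params_t call = { .message_bytes = 4096, .num_ranks = 64 };

    /* Iterate until a prospective protocol meets the performance criteria. */
    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        double measure = candidates[i].fit(&call);
        if (measure > candidates[i].min_threshold) {
            printf("selected %s (measure %.2f)\n", candidates[i].name, measure);
            return 0;
        }
    }
    printf("no protocol met the performance criteria\n");
    return 0;
}
```
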
  • Patent number: 9047150
    Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
    Type: Grant
    Filed: November 15, 2012
    Date of Patent: June 2, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Amith R. Mamidala
  • Patent number: 9027033
    Abstract: Administering message acknowledgements in a parallel computer that includes compute nodes, with each compute node including a processor and a messaging accelerator, includes: storing in a list, by a processor of a compute node, a message descriptor describing a message and an acknowledgement request descriptor describing a request for an acknowledgement of receipt of the message; processing, by a messaging accelerator of the compute node, the list, including transmitting, to a target compute node, the message described by the message descriptor and transmitting, to the target compute node, the request described by the acknowledgement request descriptor; receiving, by the messaging accelerator from the target compute node, an acknowledgement of receipt of the message, including notifying the processor of receipt of the acknowledgement; and removing, by the processor from the list, the message descriptor and the acknowledgment request descriptor.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: May 5, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
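
The descriptor-list flow in patent 9027033 might look roughly like the following sketch, where a simple loop stands in for the messaging accelerator and the descriptor_t fields are assumptions, not the actual hardware descriptor format: the processor stores a message descriptor and an acknowledgement-request descriptor in a list, the accelerator transmits both, and the descriptors are removed once the acknowledgement arrives.

```c
#include <stdio.h>
#include <stdbool.h>

typedef enum { DESC_MESSAGE, DESC_ACK_REQUEST } desc_kind;

/* A list entry shared between the processor and the (simulated)
   messaging accelerator.  Field names are illustrative. */
typedef struct {
    desc_kind kind;
    int       target_node;
    bool      in_list;
} descriptor_t;

int main(void) {
    /* Processor stores both descriptors in the list. */
    descriptor_t list[2] = {
        { DESC_MESSAGE,     7, true },
        { DESC_ACK_REQUEST, 7, true },
    };

    /* Accelerator processes the list: transmit the message, then the
       acknowledgement request, to the target compute node. */
    for (int i = 0; i < 2; i++)
        printf("accelerator transmits %s to node %d\n",
               list[i].kind == DESC_MESSAGE ? "message" : "ack request",
               list[i].target_node);

    /* Acknowledgement arrives; the accelerator notifies the processor,
       which removes both descriptors from the list. */
    bool ack_received = true;
    if (ack_received) {
        list[0].in_list = false;
        list[1].in_list = false;
        printf("ack received: descriptors removed from the list\n");
    }
    return 0;
}
```
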
  • Patent number: 8990450
    Abstract: Managing a direct memory access (‘DMA’) injection first-in-first-out (‘FIFO’) messaging queue in a parallel computer, including: inserting, by a messaging unit management module, a DMA message descriptor into the injection FIFO messaging queue; determining, by the messaging unit management module, the number of extra slots in an immediate messaging queue required to store DMA message data associated with the DMA message descriptor; and responsive to determining that the number of extra slots in the immediate message queue required to store the DMA message data is greater than one, inserting, by the messaging unit management module, a number of DMA dummy message descriptors into the injection FIFO messaging queue, wherein the number of DMA dummy message descriptors is at least as many as the number of extra slots in the immediate messaging queue that are required to store the DMA message data.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Todd A. Inglett, Patrick J. McCarthy, Joseph D. Ratterman, Brian E. Smith
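
A small sketch of the slot arithmetic behind patent 8990450, assuming (hypothetically) a 64-byte immediate-queue slot and a one-to-one pairing between injection FIFO entries and immediate-queue slots; it only shows how many dummy descriptors would be inserted alongside the real one, not the actual messaging unit management module.

```c
#include <stdio.h>

#define SLOT_BYTES 64   /* assumed capacity of one immediate-queue slot */

/* Number of injection FIFO entries needed for one DMA message: the real
   descriptor, plus dummy descriptors whenever the DMA message data needs
   more than one extra immediate-queue slot (at least one dummy per slot). */
static int descriptors_to_insert(int dma_data_bytes) {
    int extra_slots = (dma_data_bytes + SLOT_BYTES - 1) / SLOT_BYTES;
    int dummies = (extra_slots > 1) ? extra_slots : 0;
    return 1 + dummies;
}

int main(void) {
    int sizes[] = { 32, 64, 200 };
    for (int i = 0; i < 3; i++)
        printf("%3d bytes of DMA data -> %d injection FIFO entries\n",
               sizes[i], descriptors_to_insert(sizes[i]));
    return 0;
}
```
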
  • Publication number: 20150063100
    Abstract: Data communications may be carried out in a distributed computing environment that includes computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). Such data communications may be carried out by: issuing, by a sender to a receiver, an eager SEND data communications instruction to transfer SEND data, the instruction including information describing data location at the sender and data size; transmitting, by the sender to the receiver, the SEND data as eager data packets; discarding, by the receiver in dependence upon data flow conditions, eager data packets as they are received from the sender; and transferring, in dependence upon the data flow conditions, by the receiver from the sender's data location to a receive buffer by remote direct memory access (“RDMA”), the SEND data.
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
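
The receiver-side decision in publication 20150063100 can be sketched as follows, with a boolean flag standing in for the data flow conditions and memcpy standing in for both eager packet delivery and the RDMA read; the eager_send_t structure is an assumption for illustration, not the AMI interface.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define DATA_BYTES 32

/* What the eager SEND instruction advertises about the sender side. */
typedef struct {
    const char *data_location;   /* where the SEND data lives at the sender */
    int         data_size;
} eager_send_t;

int main(void) {
    char sender_buffer[DATA_BYTES] = "eager SEND payload";
    char receive_buffer[DATA_BYTES] = {0};
    eager_send_t send = { sender_buffer, DATA_BYTES };

    /* Data flow conditions at the receiver (an assumed flag). */
    bool receiver_congested = true;

    if (receiver_congested) {
        /* Discard the eager data packets as they arrive from the sender ... */
        printf("congested: discarding eager data packets\n");
        /* ... and instead pull the SEND data from the sender's advertised
           data location into the receive buffer; memcpy stands in for the
           remote direct memory access (RDMA) read. */
        memcpy(receive_buffer, send.data_location, (size_t)send.data_size);
    } else {
        /* Uncongested case: the eager packets themselves are copied into
           the receive buffer as they arrive. */
        memcpy(receive_buffer, send.data_location, (size_t)send.data_size);
    }
    printf("receive buffer: %s\n", receive_buffer);
    return 0;
}
```
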
  • Publication number: 20150067067
    Abstract: Data communications may be carried out in a distributed computing environment that includes a plurality of computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). In such an environment, data communications may include: issuing, by a sender to a receiver, an eager SEND data communications instruction to transfer SEND data, the instruction including information describing a location and size of a send buffer in which the SEND data is stored; transmitting, by the sender to the receiver, the SEND data as eager data packets; issuing, by the receiver to the sender in dependence upon data flow conditions, a STOP instruction, the STOP instruction including an order to stop transmitting the eager data packets; and transferring the SEND data by the receiver from the sender's data location to a receive buffer by remote direct memory access (“RDMA”).
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
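
Publication 20150067067 adds an explicit STOP instruction from the receiver. A hedged sketch of that flow, where a flag models the STOP order and memcpy models both the eager packets and the later RDMA transfer; the packet count, packet size, and trigger condition are invented for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define PACKETS      8
#define PACKET_BYTES 4

int main(void) {
    char send_buffer[PACKETS * PACKET_BYTES] = "0123456789abcdefghijklmnopqrstu";
    char receive_buffer[PACKETS * PACKET_BYTES] = {0};
    bool stop_issued = false;
    int  received = 0;

    /* Sender streams eager data packets until the receiver issues STOP. */
    for (int p = 0; p < PACKETS && !stop_issued; p++) {
        memcpy(receive_buffer + p * PACKET_BYTES,
               send_buffer + p * PACKET_BYTES, PACKET_BYTES);
        received++;
        if (received == 2)           /* flow conditions trigger the STOP */
            stop_issued = true;
    }

    /* Receiver transfers the rest of the SEND data from the sender's
       send buffer; memcpy stands in for the RDMA transfer. */
    memcpy(receive_buffer + received * PACKET_BYTES,
           send_buffer + received * PACKET_BYTES,
           (size_t)((PACKETS - received) * PACKET_BYTES));

    printf("received: %.*s\n", PACKETS * PACKET_BYTES, receive_buffer);
    return 0;
}
```
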
  • Publication number: 20150067068
    Abstract: Data communications may be carried out in a distributed computing environment that includes a plurality of computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). In such a distributed computing environment, data communications may include: receiving in the AMI from an application an eager SEND instruction that describes the location and size of send data in an application SEND buffer; copying by the AMI the send data from the application SEND buffer to a temporary AMI buffer; advising the application of completion of the SEND instruction before sending the SEND data to the receiver; and after advising the application of completion of the SEND instruction, sending the SEND data by the sender to the receiver.
    Type: Application
    Filed: August 27, 2013
    Publication date: March 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
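
A minimal sketch of the early-completion behaviour in publication 20150067068, assuming an in-process copy models the AMI: the send data is copied to a temporary buffer, the application is told the SEND is complete (and may reuse its buffer), and only afterwards is the data sent on. Buffer names are illustrative, not the actual AMI interface.

```c
#include <stdio.h>
#include <string.h>

#define SEND_BYTES 24

int main(void) {
    char application_send_buffer[SEND_BYTES] = "application send data";
    char ami_temp_buffer[SEND_BYTES];
    char receiver_buffer[SEND_BYTES];

    /* 1. Copy the send data from the application SEND buffer into a
          temporary AMI buffer. */
    memcpy(ami_temp_buffer, application_send_buffer, SEND_BYTES);

    /* 2. Advise the application of completion before actually sending,
          so it may immediately reuse (here: clear) its own buffer. */
    printf("SEND instruction reported complete; application buffer reusable\n");
    memset(application_send_buffer, 0, SEND_BYTES);

    /* 3. Afterwards, send the data to the receiver from the temporary
          buffer (a local copy stands in for the transmission). */
    memcpy(receiver_buffer, ami_temp_buffer, SEND_BYTES);
    printf("receiver got: %s\n", receiver_buffer);
    return 0;
}
```
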
  • Publication number: 20150058657
    Abstract: Methods, apparatuses, and computer program products for adaptive clock throttling for event processing are provided. Embodiments include an event processing system receiving a plurality of events from one or more components of a distributed processing system. Embodiments also include the event processing system determining that an arrival attribute of the plurality of events exceeds an arrival threshold. Embodiments also include the event processing system adjusting, in response to determining that the arrival attribute of the plurality of events exceeds the arrival threshold, a clock speed of at least one of the event processing system and a component of the distributed processing system.
    Type: Application
    Filed: August 22, 2013
    Publication date: February 26, 2015
    Applicant: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
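
The throttling rule in publication 20150058657 reduces to a comparison and an adjustment, sketched below. The events-per-second arrival attribute, the threshold value, the fixed clock step, and the choice to raise (rather than lower) the clock are all assumptions for illustration; the abstract only says the clock speed is adjusted.

```c
#include <stdio.h>

#define ARRIVAL_THRESHOLD 1000.0   /* events per second (assumed units) */
#define CLOCK_STEP_MHZ     200     /* illustrative adjustment step       */

/* If the arrival attribute of the incoming events exceeds the threshold,
   adjust the clock speed of the event processing system (here: raise it;
   the direction of the adjustment is an assumption). */
static int adjust_clock(double events_per_second, int clock_mhz) {
    if (events_per_second > ARRIVAL_THRESHOLD)
        return clock_mhz + CLOCK_STEP_MHZ;
    return clock_mhz;
}

int main(void) {
    int clock_mhz = 1600;
    double rates[] = { 400.0, 1500.0 };
    for (int i = 0; i < 2; i++) {
        clock_mhz = adjust_clock(rates[i], clock_mhz);
        printf("arrival rate %.0f/s -> clock %d MHz\n", rates[i], clock_mhz);
    }
    return 0;
}
```
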
  • Patent number: 8966224
    Abstract: A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, a deterministic reduction operation includes: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is a root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with processors and configured to store the associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing the contribution data in a predefined order, only after receipt of contribution data from all processors in the branched tree topology.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
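
A sketch of the ordering property shared by patents 8966224 and 8949577: contributions may arrive in any order and receipt is tracked per child, but the reduction itself runs in a predefined child order only after all contributions are present. The arrival order, contribution values, and use of a sum as the reduction are illustrative assumptions.

```c
#include <stdio.h>
#include <stdbool.h>

#define CHILDREN 4   /* processors that are children of the root CAU */

int main(void) {
    int  receive_buffer[CHILDREN];       /* one receive element per child */
    bool arrived[CHILDREN] = { false };
    int  pending = CHILDREN;

    /* Contributions arrive in an arbitrary order; track receipt of each. */
    int arrival_order[CHILDREN] = { 2, 0, 3, 1 };
    for (int i = 0; i < CHILDREN; i++) {
        int child = arrival_order[i];
        receive_buffer[child] = (child + 1) * 10;   /* contribution data  */
        arrived[child] = true;
        pending--;
    }

    /* Reduce only after every contribution has been received, and do it in
       a predefined order (child 0, 1, 2, ...), which is what makes the
       result deterministic regardless of arrival order. */
    if (pending == 0) {
        int sum = 0;
        for (int child = 0; child < CHILDREN; child++)
            if (arrived[child])
                sum += receive_buffer[child];
        printf("deterministic reduction result: %d\n", sum);
    }
    return 0;
}
```
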
  • Patent number: 8959172
    Abstract: Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer that include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor for transmitting a message portion to the target node, the target RGET descriptor specifying an origin RGET descriptor on the origin node that specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.
    Type: Grant
    Filed: July 27, 2007
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventor: Michael A. Blocksome
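
The self-pacing chain in patent 8959172 can be caricatured as a loop in which each pass first processes a target RGET descriptor (staging the descriptors for the next message portion) and then a DMA transfer descriptor (sending the current portion). This print-only sketch models just the pacing; injection FIFOs, RGET semantics, and real descriptor formats are deliberately abstracted away, and the portion count is an assumption.

```c
#include <stdio.h>

#define PORTIONS 4   /* message split into this many DMA transfer portions */

int main(void) {
    int portion = 0;

    printf("origin sends RTS for a %d-portion message\n", PORTIONS);
    while (portion < PORTIONS) {
        /* Processing the target RGET descriptor stages the descriptors
           for the *next* portion (skipped after the last portion). */
        if (portion + 1 < PORTIONS)
            printf("process target RGET descriptor: stage descriptors for portion %d\n",
                   portion + 1);
        /* Processing the DMA transfer descriptor sends the *current* portion. */
        printf("process DMA transfer descriptor: send portion %d\n", portion);
        portion++;   /* the chain advances one portion per pass (self-pacing) */
    }
    return 0;
}
```
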
  • Patent number: 8949453
    Abstract: Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.
    Type: Grant
    Filed: November 30, 2010
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
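
A back-of-the-envelope sketch of the fan-out step in patent 8949453: the RTS message advertises the size of the transfer data, and the first target endpoint assigns a separate, contiguous portion to each of the target endpoints. The even split, the portion count, and the byte total are assumptions; the patent does not prescribe how portions are assigned.

```c
#include <stdio.h>

#define TARGETS        4     /* the plurality of target endpoints          */
#define TRANSFER_BYTES 1024  /* size advertised in the RTS message         */

int main(void) {
    /* The RTS message advises the first target endpoint of the location
       and size of the transfer data (the location is symbolic here). */
    int transfer_size = TRANSFER_BYTES;

    /* The first target endpoint assigns a separate portion of the transfer
       data to each target endpoint. */
    int portion = transfer_size / TARGETS;
    for (int t = 0; t < TARGETS; t++) {
        int offset = t * portion;
        printf("target endpoint %d receives bytes [%d, %d)\n",
               t, offset, offset + portion);
    }
    return 0;
}
```
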
  • Patent number: 8949577
    Abstract: A parallel computer that includes compute nodes having computer processors and a CAU (Collectives Acceleration Unit) that couples processors to one another for data communications. In embodiments of the present invention, a deterministic reduction operation includes: organizing processors of the parallel computer and a CAU into a branched tree topology, where the CAU is a root of the branched tree topology and the processors are children of the root CAU; establishing a receive buffer that includes receive elements associated with processors and configured to store the associated processor's contribution data; receiving, in any order from the processors, each processor's contribution data; tracking receipt of each processor's contribution data; and reducing the contribution data in a predefined order, only after receipt of contribution data from all processors in the branched tree topology.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
  • Patent number: 8930956
    Abstract: Methods, apparatuses, and computer program products for utilizing a kernel administration hardware thread of a multi-threaded, multi-core compute node of a parallel computer are provided. Embodiments include a kernel assigning a memory space of a hardware thread of an application processing core to a kernel administration hardware thread of a kernel processing core. A kernel administration hardware thread is configured to advance the hardware thread to a next memory space associated with the hardware thread in response to the assignment of the kernel administration hardware thread to the memory space of the hardware thread. Embodiments also include the kernel administration hardware thread executing an instruction within the assigned memory space.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Todd A. Inglett, Patrick J. McCarthy, Joseph D. Ratterman, Brian E. Smith
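
A toy model of the hand-off in patent 8930956, using plain structs in one process: assigning the application hardware thread's memory space to the kernel administration hardware thread causes the application thread to advance to its next associated memory space, after which the administration thread executes within the assigned space. Thread scheduling, real memory spaces, and kernel internals are deliberately omitted; all names are illustrative assumptions.

```c
#include <stdio.h>

#define SPACES 3   /* memory spaces associated with the application thread */

typedef struct {
    int current_space;          /* index of the memory space in use        */
} hw_thread_t;

int main(void) {
    const char *memory_space[SPACES] = { "space-A", "space-B", "space-C" };
    hw_thread_t app_thread   = { .current_space = 0 };
    hw_thread_t admin_thread = { .current_space = -1 };

    /* The kernel assigns the application thread's memory space to the
       kernel administration hardware thread ... */
    admin_thread.current_space = app_thread.current_space;

    /* ... which causes the application thread to advance to the next
       memory space associated with it. */
    app_thread.current_space = (app_thread.current_space + 1) % SPACES;

    /* The administration thread then executes within the assigned space
       (a print stands in for executing an instruction there). */
    printf("admin thread executes in %s; app thread now uses %s\n",
           memory_space[admin_thread.current_space],
           memory_space[app_thread.current_space]);
    return 0;
}
```
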