Patents by Inventor Michael Blocksome
Michael Blocksome has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20120137294
Abstract: Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (‘RTS’) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.
Type: Application
Filed: November 30, 2010
Publication date: May 31, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
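As an informal illustration of the portion-assignment step described in this abstract, the C sketch below divides an RTS-advertised transfer into contiguous chunks for several target endpoints. The struct names and the even-split policy are assumptions for illustration, not the claimed PAMI implementation.

    /* Hedged sketch: split 'total' bytes of RTS-advertised transfer data
     * among 'n' target endpoints; policy and names are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        size_t offset;   /* offset into the sender's transfer buffer */
        size_t length;   /* number of bytes this target endpoint pulls */
    } portion_t;

    static void assign_portions(size_t total, int n, portion_t *out)
    {
        size_t base = total / (size_t)n;
        size_t rem  = total % (size_t)n;
        size_t off  = 0;
        for (int i = 0; i < n; i++) {
            out[i].offset = off;
            out[i].length = base + (i < (int)rem ? 1 : 0);
            off += out[i].length;
        }
    }

    int main(void)
    {
        enum { TARGETS = 4 };
        portion_t p[TARGETS];
        assign_portions(1000000, TARGETS, p);   /* 1 MB advertised in the RTS */
        for (int i = 0; i < TARGETS; i++)
            printf("endpoint %d: offset %zu, length %zu\n",
                   i, p[i].offset, p[i].length);
        return 0;
    }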
-
Publication number: 20120124249
Abstract: Data communications with reduced latency, including: writing, by a producer, a descriptor and message data into at least two descriptor slots of a descriptor buffer, the descriptor buffer comprising allocated computer memory segmented into descriptor slots, each descriptor slot having a fixed size, the descriptor buffer having a header pointer that identifies a next descriptor slot to be processed by a DMA controller, the descriptor buffer having a tail pointer that identifies a descriptor slot for entry of a next descriptor in the descriptor buffer; recording, by the producer, in the descriptor a value signifying that message data has been written into descriptor slots; and setting, by the producer, in dependence upon the recorded value, a tail pointer to point to a next open descriptor slot.
Type: Application
Filed: November 16, 2010
Publication date: May 17, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blocksome, Jeffrey J. Parker
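The sketch below models the descriptor-buffer idea in plain C: a producer writes a descriptor plus inline message data into fixed-size slots, records how many slots the entry spans, and then advances the tail pointer so a consumer (standing in for the DMA controller reading the head pointer) can process it. Slot layout, field names, and the single-chunk copy are illustrative assumptions, not the claimed implementation.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define SLOT_SIZE  64
    #define NUM_SLOTS  16

    typedef struct {
        uint32_t slots_used;              /* value recorded by the producer     */
        uint8_t  payload[SLOT_SIZE - 4];
    } slot_t;

    static slot_t ring[NUM_SLOTS];
    static unsigned head;                 /* next slot the DMA engine processes */
    static unsigned tail;                 /* next open slot for the producer    */

    /* Write one descriptor with inline data; returns 0 on success, -1 if full. */
    static int post_descriptor(const void *data, size_t len)
    {
        size_t per_slot = sizeof(ring[0].payload);
        unsigned data_slots = (unsigned)((len + per_slot - 1) / per_slot);
        unsigned needed = 1 + data_slots;      /* descriptor slot + data slots  */
        unsigned free_slots = (head + NUM_SLOTS - tail - 1) % NUM_SLOTS;
        if (needed > free_slots)
            return -1;                         /* not enough room in the ring   */

        ring[tail].slots_used = needed;        /* value the consumer reads back */
        /* For brevity only the first chunk is copied; a full implementation
         * would spread the data across all data slots and handle wrap-around. */
        memcpy(ring[(tail + 1) % NUM_SLOTS].payload, data,
               len < per_slot ? len : per_slot);
        tail = (tail + needed) % NUM_SLOTS;    /* publish: advance the tail     */
        return 0;
    }

    int main(void)
    {
        const char msg[] = "small message carried inline with its descriptor";
        if (post_descriptor(msg, sizeof msg) == 0)
            printf("descriptor posted, tail now at slot %u\n", tail);
        return 0;
    }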
-
Publication number: 20120117361
Abstract: Processing data communications events in a parallel active messaging interface (‘PAMI’) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
Type: Application
Filed: November 10, 2010
Publication date: May 10, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
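The C sketch below illustrates the wait-state pattern with a POSIX condition variable: the advance function sleeps when no events are actionable and is awakened when an event is posted. The actual PAMI wait mechanism is not specified here; the function and variable names are illustrative assumptions.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  event_ready = PTHREAD_COND_INITIALIZER;
    static int pending_events = 0;

    /* Called by a communication thread when an event arrives for the context. */
    void post_event(void)
    {
        pthread_mutex_lock(&lock);
        pending_events++;
        pthread_cond_signal(&event_ready);   /* wake the waiting advance thread */
        pthread_mutex_unlock(&lock);
    }

    /* Advance: if nothing is actionable, wait instead of spinning. */
    void advance(void)
    {
        pthread_mutex_lock(&lock);
        while (pending_events == 0)
            pthread_cond_wait(&event_ready, &lock);  /* thread enters wait state */
        pending_events--;                            /* process one event        */
        pthread_mutex_unlock(&lock);
        printf("processed one data communications event\n");
    }

    int main(void)
    {
        post_event();   /* simulate an incoming event */
        advance();
        return 0;
    }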
-
Publication number: 20120117138
Abstract: Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
Type: Application
Filed: November 5, 2010
Publication date: May 10, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blocksome, Amith R. Mamidala
-
Publication number: 20120117137
Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
Type: Application
Filed: November 5, 2010
Publication date: May 10, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blocksome, Amith R. Mamidala
-
Publication number: 20120117281
Abstract: Fencing direct memory access (‘DMA’) data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
Type: Application
Filed: November 5, 2010
Publication date: May 10, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blocksome, Amith R. Mamidala
-
Publication number: 20120117211
Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
Type: Application
Filed: November 5, 2010
Publication date: May 10, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blocksome, Amith R. Mamidala
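The C sketch below illustrates the "no FENCE accounting" idea shared by the four fencing entries above: when a channel delivers operations deterministically (in order), a FENCE can simply be enqueued behind the earlier SENDs and is complete once it drains, with no per-transfer counters. The queue model and names are illustrative assumptions rather than the claimed mechanism.

    #include <stdio.h>

    typedef enum { OP_SEND, OP_FENCE } op_kind;
    typedef struct { op_kind kind; int id; } op_t;

    #define MAX_OPS 8

    int main(void)
    {
        /* Ordered sequence: three SENDs followed by a FENCE. */
        op_t queue[MAX_OPS] = {
            { OP_SEND, 1 }, { OP_SEND, 2 }, { OP_SEND, 3 }, { OP_FENCE, 0 }
        };
        int count = 4;

        /* A deterministic channel processes operations strictly in order, so
         * by the time the FENCE is reached every earlier SEND has completed. */
        for (int i = 0; i < count; i++) {
            if (queue[i].kind == OP_SEND)
                printf("SEND %d completed\n", queue[i].id);
            else
                printf("FENCE completed: all prior SENDs are done\n");
        }
        return 0;
    }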
-
Patent number: 8161307
Abstract: Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Type: Grant
Filed: October 20, 2011
Date of Patent: April 17, 2012
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Amanda E. Peters, Joseph D. Ratterman, Brian E. Smith
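The C sketch below illustrates the power-throttling pattern described in this abstract (and in the related entries 20120036384 and 8095811 further down): each node lowers the power state of idle hardware when it enters a blocking collective and restores it once all nodes have arrived. The throttle functions are placeholders, an MPI barrier stands in for the blocking operation, and nothing here is the claimed implementation.

    #include <mpi.h>
    #include <stdio.h>

    static void reduce_power(void)  { printf("lowering power to idle components\n"); }
    static void restore_power(void) { printf("restoring full power\n"); }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        reduce_power();               /* entered the blocking operation: throttle */
        MPI_Barrier(MPI_COMM_WORLD);  /* blocks until all ranks arrive            */
        restore_power();              /* all nodes have begun the operation       */

        MPI_Finalize();
        return 0;
    }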
-
Publication number: 20120079165
Abstract: Paging memory from random access memory (‘RAM’) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
Type: Application
Filed: September 28, 2010
Publication date: March 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Todd A. Inglett, Joseph D. Ratterman, Brian E. Smith
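The C sketch below illustrates the swap-to-remote-node idea: when a page is evicted on the first node, its contents are shipped to a second node that acts as backing storage. MPI send/receive stands in for whatever transport the backing node actually exposes; the ranks, tag, and page size are illustrative assumptions.

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE    4096
    #define BACKING_RANK 1        /* second compute node provides backing storage */
    #define TAG_PAGE_OUT 100

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char page[PAGE_SIZE];

        if (rank == 0) {
            /* First node: evict a page from RAM by sending it to the backing node. */
            memset(page, 0xAB, sizeof page);
            MPI_Send(page, PAGE_SIZE, MPI_BYTE, BACKING_RANK, TAG_PAGE_OUT,
                     MPI_COMM_WORLD);
            printf("page swapped out to node %d\n", BACKING_RANK);
        } else if (rank == BACKING_RANK) {
            /* Backing node: hold the evicted page in its own RAM. */
            MPI_Recv(page, PAGE_SIZE, MPI_BYTE, 0, TAG_PAGE_OUT,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("backing node stored one %d-byte page\n", PAGE_SIZE);
        }

        MPI_Finalize();
        return 0;
    }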
-
Publication number: 20120079035
Abstract: Administering truncated receive functions in a parallel messaging interface (‘PMI’) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
Type: Application
Filed: September 28, 2010
Publication date: March 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
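The C sketch below illustrates a truncated receive: the messaging layer drains the full incoming quantity, hands the application only the portion it asked for, and drops the rest. The function and buffer names are illustrative assumptions, not the PMI interface itself.

    #include <stdio.h>
    #include <string.h>

    /* Receive 'incoming_len' bytes but deliver at most 'wanted_len' to the caller.
     * Returns the number of bytes actually delivered. */
    static size_t truncated_recv(const char *incoming, size_t incoming_len,
                                 char *app_buf, size_t wanted_len)
    {
        size_t deliver = incoming_len < wanted_len ? incoming_len : wanted_len;
        memcpy(app_buf, incoming, deliver);  /* portion the application receives */
        /* bytes [deliver, incoming_len) are consumed off the network and dropped */
        return deliver;
    }

    int main(void)
    {
        const char wire[] = "HEADER:payload-the-application-does-not-want";
        char buf[8];
        size_t got = truncated_recv(wire, sizeof wire - 1, buf, 7);
        buf[got] = '\0';
        printf("application received %zu bytes: \"%s\"\n", got, buf);
        return 0;
    }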
-
Publication number: 20120079133
Abstract: Routing data communications packets in a parallel computer that includes compute nodes organized for collective operations, each compute node including an operating system kernel and a system-level messaging module that is a module of automated computing machinery that exposes a messaging interface to applications, each compute node including a routing table that specifies, for each of a multiplicity of route identifiers, a data communications path through the compute node, including: receiving in a compute node a data communications packet that includes a route identifier value; retrieving from the routing table a specification of a data communications path through the compute node; and routing, by the compute node, the data communications packet according to the data communications path identified by the compute node's routing table entry for the data communications packet's route identifier value.
Type: Application
Filed: September 28, 2010
Publication date: March 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Todd A. Inglett, Joseph D. Ratterman, Brian E. Smith
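The C sketch below illustrates per-node routing by route identifier: the packet carries a route ID, the node looks the ID up in its routing table, and forwards along the path recorded there. The table layout and link names are illustrative assumptions.

    #include <stdio.h>

    typedef struct {
        unsigned route_id;
        const char *outgoing_link;  /* the data communications path through this node */
    } route_entry_t;

    static const route_entry_t routing_table[] = {
        { 0, "torus+x" },
        { 1, "torus-y" },
        { 2, "collective-tree" },
    };

    static const char *lookup_route(unsigned route_id)
    {
        for (size_t i = 0; i < sizeof routing_table / sizeof routing_table[0]; i++)
            if (routing_table[i].route_id == route_id)
                return routing_table[i].outgoing_link;
        return NULL;
    }

    int main(void)
    {
        unsigned packet_route_id = 2;            /* value carried in the packet */
        const char *path = lookup_route(packet_route_id);
        if (path)
            printf("routing packet via %s\n", path);
        else
            printf("no entry for route id %u\n", packet_route_id);
        return 0;
    }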
-
Patent number: 8140373
Abstract: A method, system and article of manufacture for workflow processing and, more particularly, for managing creation and execution of data driven dynamic workflows. One embodiment provides a computer-implemented method for managing execution of workflow instances. The method comprises providing a parent process template and providing a child process template. The child process template is configured to implement an arbitrary number of workflow operations for a given workflow instance, and the parent process template is configured to instantiate child processes on the basis of the child process template to implement a desired workflow. The method further comprises receiving a workflow configuration and instantiating an instance of the workflow on the basis of the workflow configuration. The instantiating comprises instantiating a parent process on the basis of the parent process template and instantiating, by the parent process template, one or more child processes on the basis of the child process template.
Type: Grant
Filed: April 7, 2005
Date of Patent: March 20, 2012
Assignee: International Business Machines Corporation
Inventors: Melissa Aron, Michael A. Blocksome, David G. Herbeck, Todd E. Johnson
-
Patent number: 8140704
Abstract: Methods, apparatus, and products are disclosed for pacing network traffic among a plurality of compute nodes connected using a data communications network. The network has a plurality of network regions, and the plurality of compute nodes are distributed among these network regions. Pacing network traffic among a plurality of compute nodes connected using a data communications network includes: identifying, by a compute node for each region of the network, a roundtrip time delay for communicating with at least one of the compute nodes in that region; determining, by the compute node for each region, a pacing algorithm for that region in dependence upon the roundtrip time delay for that region; and transmitting, by the compute node, network packets to at least one of the compute nodes in at least one of the network regions in dependence upon the pacing algorithm for that region.
Type: Grant
Filed: July 2, 2008
Date of Patent: March 20, 2012
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
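The C sketch below illustrates roundtrip-based pacing: measure a roundtrip delay per network region, then derive a per-region interval between packet injections so that slower regions are paced more conservatively. The sample delays and the simple "half the roundtrip time" rule are illustrative assumptions, not the patented pacing algorithm.

    #include <stdio.h>

    #define REGIONS 3

    int main(void)
    {
        /* Measured roundtrip delays to a representative node in each region (us). */
        double rtt_us[REGIONS] = { 5.0, 20.0, 80.0 };
        double pace_us[REGIONS];

        for (int r = 0; r < REGIONS; r++) {
            /* Example pacing rule: wait half the roundtrip time between packets. */
            pace_us[r] = rtt_us[r] / 2.0;
            printf("region %d: rtt %.1f us -> inject one packet every %.1f us\n",
                   r, rtt_us[r], pace_us[r]);
        }
        return 0;
    }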
-
Publication number: 20120066284
Abstract: Send-side matching of data communications messages in a distributed computing system comprising a plurality of compute nodes organized for collective operations, including: issuing by a receiving node to source nodes a receive message that specifies receipt of a single message to be sent from any source node, the receive message including message matching information, a specification of a hardware-level mutual exclusion device, and an identification of a receive buffer; matching by two or more of the source nodes the receive message with pending send messages in the two or more source nodes; operating by one of the source nodes having a matching send message the mutual exclusion device, excluding messages from other source nodes with matching send messages and identifying to the receiving node the source node operating the mutual exclusion device; and sending to the receiving node from the source node operating the mutual exclusion device a matched pending message.
Type: Application
Filed: September 14, 2010
Publication date: March 15, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
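The C sketch below illustrates the send-side matching race: several source nodes hold a matching send, but only the one that wins a mutual-exclusion device may deliver its message. A local C11 atomic flag stands in for the hardware-level exclusion device, and the single-process simulation is an illustrative assumption.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_flag receive_claimed = ATOMIC_FLAG_INIT;

    /* Returns true if this source node won the right to send its matched message. */
    static bool try_claim_and_send(int source_rank)
    {
        if (atomic_flag_test_and_set(&receive_claimed))
            return false;                   /* another source already claimed it */
        printf("source %d won the exclusion device and sends its message\n",
               source_rank);
        return true;
    }

    int main(void)
    {
        /* Three sources with matching pending sends race for the single receive. */
        for (int rank = 0; rank < 3; rank++)
            if (!try_claim_and_send(rank))
                printf("source %d keeps its message pending\n", rank);
        return 0;
    }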
-
Publication number: 20120066310
Abstract: Systems, methods and articles of manufacture are disclosed for performing a vector collective operation on a parallel computing system that includes multiple compute nodes and a network connecting the compute nodes that includes an ALU. A collective operation may be performed to determine displacements for the vector collective operation. Descriptors for the vector collective operation may be generated based on the displacements. The vector collective operation may then be performed using the descriptors.
Type: Application
Filed: September 15, 2010
Publication date: March 15, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
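The C sketch below illustrates the preparation step for a vector collective: gather each rank's local element count, compute displacements by prefix sum, and use them to drive a vector collective (here MPI_Allgatherv). The MPI calls stand in for the network-ALU collective and descriptor generation named in the abstract; the buffer names and counts are illustrative assumptions.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int my_count = rank + 1;          /* each rank contributes a different length */
        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));

        /* Collective step: learn every rank's count. */
        MPI_Allgather(&my_count, 1, MPI_INT, counts, 1, MPI_INT, MPI_COMM_WORLD);

        /* Compute displacements (prefix sums) that a descriptor generator would use. */
        int total = 0;
        for (int i = 0; i < size; i++) { displs[i] = total; total += counts[i]; }

        int *sendbuf = malloc(my_count * sizeof(int));
        for (int i = 0; i < my_count; i++) sendbuf[i] = rank;
        int *recvbuf = malloc(total * sizeof(int));

        /* Vector collective performed with the computed counts and displacements. */
        MPI_Allgatherv(sendbuf, my_count, MPI_INT,
                       recvbuf, counts, displs, MPI_INT, MPI_COMM_WORLD);

        if (rank == 0)
            printf("gathered %d total elements\n", total);

        free(sendbuf); free(recvbuf); free(counts); free(displs);
        MPI_Finalize();
        return 0;
    }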
-
Publication number: 20120036384
Abstract: Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Type: Application
Filed: October 20, 2011
Publication date: February 9, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles J. Archer, Michael A. Blocksome, Amanda E. Peters, Joseph D. Ratterman, Brian E. Smith
-
Patent number: 8112559
Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Type: Grant
Filed: September 30, 2008
Date of Patent: February 7, 2012
Assignee: International Business Machines Corporation
Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
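The C sketch below models the overflow-relief idea in ordinary user-space code: when a fixed-size queue cannot accept a descriptor, overflow descriptors are parked in a separately allocated block and re-injected on a later advance cycle. The interrupt-driven DMA details are omitted and the names and sizes are illustrative assumptions.

    #include <stdio.h>
    #include <stdlib.h>

    #define QUEUE_CAP 4

    typedef struct { int id; } descriptor_t;

    static descriptor_t queue[QUEUE_CAP];
    static int queue_len;

    static descriptor_t *overflow;     /* allocated block holding spilled descriptors */
    static int overflow_len, overflow_cap;

    static void inject(descriptor_t d)
    {
        if (queue_len < QUEUE_CAP) {
            queue[queue_len++] = d;    /* normal path: queue has room */
            return;
        }
        if (overflow_len == overflow_cap) {        /* grow the spill block */
            overflow_cap = overflow_cap ? overflow_cap * 2 : QUEUE_CAP;
            overflow = realloc(overflow, overflow_cap * sizeof *overflow);
        }
        overflow[overflow_len++] = d;  /* park descriptor instead of deadlocking */
    }

    /* Advance cycle: drain the queue, then retry parked descriptors. */
    static void advance_cycle(void)
    {
        queue_len = 0;                 /* pretend the DMA engine consumed the queue */
        while (overflow_len > 0 && queue_len < QUEUE_CAP)
            queue[queue_len++] = overflow[--overflow_len];
    }

    int main(void)
    {
        for (int i = 0; i < 7; i++)
            inject((descriptor_t){ .id = i });
        printf("queued %d, parked %d\n", queue_len, overflow_len);
        advance_cycle();
        printf("after advance: queued %d, parked %d\n", queue_len, overflow_len);
        free(overflow);
        return 0;
    }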
-
Patent number: 8108467
Abstract: Methods, apparatus, and products are disclosed for load balanced data processing performed on an application message transmitted between compute nodes of a parallel computer that include: identifying, by an origin compute node, an application message for transmission to a target compute node, the message to be processed by a data processing operation; determining, by the origin compute node, origin sub-operations used to carry out a portion of the data processing operation on the origin compute node; determining, by the origin compute node, target sub-operations used to carry out a remaining portion of the data processing operation on the target compute node; processing, by the origin compute node, the message using the origin sub-operations; and transmitting, by the origin compute node, the processed message to the target compute node for processing using the target sub-operations.
Type: Grant
Filed: June 26, 2008
Date of Patent: January 31, 2012
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome
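The C sketch below illustrates splitting one data processing operation between sender and receiver: the origin applies its sub-operations (here, byte-swapping the first half of the elements) before transmitting, and the target finishes the remainder on arrival. The half/half split, the byte-swap operation, and the single-process simulation are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t byteswap32(uint32_t v)
    {
        return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
               ((v << 8) & 0x00FF0000u) | (v << 24);
    }

    #define N 8

    int main(void)
    {
        uint32_t msg[N] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        int split = N / 2;                  /* origin handles the first half */

        /* Origin sub-operations: process elements [0, split) before sending. */
        for (int i = 0; i < split; i++)
            msg[i] = byteswap32(msg[i]);

        /* ... message transmitted to the target compute node ... */

        /* Target sub-operations: process the remaining elements [split, N). */
        for (int i = split; i < N; i++)
            msg[i] = byteswap32(msg[i]);

        printf("first element after both halves: 0x%08x\n", (unsigned)msg[0]);
        return 0;
    }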
-
Patent number: 8095811
Abstract: Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Type: Grant
Filed: May 29, 2008
Date of Patent: January 10, 2012
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Amanda A. Peters, Joseph D. Ratterman, Brian E. Smith
-
Patent number: 8082424
Abstract: Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
Type: Grant
Filed: August 1, 2007
Date of Patent: December 20, 2011
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
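The C sketch below illustrates the counter-based exit test: the barrier counter starts at the set size, is decremented once per barrier control packet received, and the node exits when the counter reaches the exit value. This single-process simulation with a decrement-per-packet rule is an illustrative assumption, not the DMA-engine implementation.

    #include <stdio.h>

    #define NODES      8
    #define EXIT_VALUE 0

    int main(void)
    {
        int barrier_counter = NODES;   /* configured on entering the barrier */

        /* Each arriving node broadcasts a control packet; one packet per node
         * drives the counter toward the exit value. */
        for (int packets = 0; packets < NODES; packets++) {
            barrier_counter--;         /* modify counter per received control packet */
            if (barrier_counter == EXIT_VALUE)
                printf("all %d nodes arrived: exiting the barrier\n", NODES);
        }
        return 0;
    }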