Patents by Inventor Michael A. Blocksome
Michael A. Blocksome has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9503515
Abstract: In a distributed computing environment that includes hosts that execute a VMM, where each VMM supports execution of one or more VMs, administering VMs may include: assigning, by a VMM manager, the VMMs of the distributed computing environment to a logical tree topology, including assigning one of the VMMs as a root VMM of the tree topology; and executing, amongst the VMMs of the tree topology, a broadcast operation, including: pausing, by the root VMM, execution of one or more VMs supported by the root VMM; sending, by the root VMM, to other VMMs in the tree topology, a message indicating a pending transfer of the paused VMs; and transferring the paused VMs from the root VMM to the other VMMs.
Type: Grant
Filed: April 25, 2014
Date of Patent: November 22, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
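A minimal sketch of the tree-structured notification step the abstract describes, not the patented implementation; the tree shape, VMM count, and function names are all hypothetical.

```c
/* Hypothetical sketch: arrange VMM ranks in a binary tree and let the
 * root announce a pending transfer of paused VMs to its descendants. */
#include <stdio.h>

#define NUM_VMMS 7

/* Children of rank r in a binary tree rooted at rank 0. */
static void children_of(int rank, int *left, int *right) {
    *left  = 2 * rank + 1;
    *right = 2 * rank + 2;
}

/* Placeholder for the real "message indicating a pending transfer". */
static void send_transfer_notice(int from, int to) {
    printf("VMM %d -> VMM %d: pending transfer of paused VMs\n", from, to);
}

static void broadcast_notice(int rank) {
    int left, right;
    children_of(rank, &left, &right);
    if (left  < NUM_VMMS) { send_transfer_notice(rank, left);  broadcast_notice(left); }
    if (right < NUM_VMMS) { send_transfer_notice(rank, right); broadcast_notice(right); }
}

int main(void) {
    /* Rank 0 plays the role of the root VMM: pause VMs (omitted), then notify. */
    broadcast_notice(0);
    return 0;
}
```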
-
Patent number: 9459934
Abstract: Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
Type: Grant
Filed: November 21, 2012
Date of Patent: October 4, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
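A single-node sketch, under stated assumptions, of the local-barrier/master-task idea in this abstract: non-master threads join a node-local barrier, and the master proceeds to the (omitted) global barrier only after all others have arrived. Thread counts and names are hypothetical.

```c
#include <pthread.h>
#include <stdio.h>

#define TASKS_PER_NODE 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int arrived  = 0;   /* non-master tasks that have joined the local barrier */
static int released = 0;   /* set by the master once the global barrier is done   */

static void *task(void *arg) {
    long rank = (long)arg;
    pthread_mutex_lock(&lock);
    if (rank != 0) {                              /* non-master: join local barrier */
        arrived++;
        pthread_cond_broadcast(&cond);
        while (!released)
            pthread_cond_wait(&cond, &lock);
    } else {                                      /* master: wait for all others first */
        while (arrived < TASKS_PER_NODE - 1)
            pthread_cond_wait(&cond, &lock);
        printf("master: joining global barrier on behalf of this node\n");
        /* the inter-node global barrier would happen here */
        released = 1;
        pthread_cond_broadcast(&cond);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[TASKS_PER_NODE];
    for (long i = 0; i < TASKS_PER_NODE; i++)
        pthread_create(&t[i], NULL, task, (void *)i);
    for (int i = 0; i < TASKS_PER_NODE; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```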
-
Patent number: 9459917
Abstract: Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.
Type: Grant
Filed: March 4, 2013
Date of Patent: October 4, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Amanda E. Randles, Joseph D. Ratterman, Brian E. Smith
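A rough sketch of selecting the next thread by a per-thread power characteristic, as the abstract describes; the power figures, selection policy (lowest estimated draw), and struct fields are assumptions for illustration only.

```c
#include <stdio.h>

struct thread_info {
    int    id;
    double est_power_watts;   /* hypothetical per-thread power characteristic */
    int    runnable;
};

/* Select the runnable thread with the lowest estimated power draw. */
static int select_next_thread(const struct thread_info *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!t[i].runnable) continue;
        if (best < 0 || t[i].est_power_watts < t[best].est_power_watts)
            best = i;
    }
    return best;
}

int main(void) {
    struct thread_info threads[] = {
        {0, 3.2, 1}, {1, 1.7, 1}, {2, 2.4, 0}, {3, 2.9, 1}
    };
    int criteria_met = 1;   /* stand-in for the context-switch criteria check */
    if (criteria_met) {
        int next = select_next_thread(threads, 4);
        printf("context switch to thread %d\n", threads[next].id);
    }
    return 0;
}
```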
-
Patent number: 9336071
Abstract: Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a messaging descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that the message has been sent.
Type: Grant
Filed: January 6, 2014
Date of Patent: May 10, 2016
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
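A minimal sketch of the descriptor flag handling described above: a "sent" flag is set when the message goes out and cleared on a NACK so the message is again treated as incomplete. The struct layout and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

struct msg_descriptor {
    int  msg_id;
    bool sent;         /* set when the accelerator transmits the message */
};

static void on_transmit(struct msg_descriptor *d) {
    d->sent = true;
}

static void on_nack(struct msg_descriptor *d) {
    d->sent = false;   /* message not received: mark it incomplete again */
    printf("msg %d NACKed; will be retransmitted\n", d->msg_id);
}

int main(void) {
    struct msg_descriptor d = { .msg_id = 42, .sent = false };
    on_transmit(&d);
    on_nack(&d);       /* destination reported the message as missing */
    return 0;
}
```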
-
Patent number: 9317637
Abstract: Distributed hardware device simulation, including: identifying a plurality of hardware components of the hardware device; providing software components simulating the functionality of each hardware component, wherein the software components are installed on compute nodes of a distributed processing system; receiving, in at least one of the software components, one or more messages representing an input to the hardware component; simulating the operation of the hardware component with the software component, thereby generating an output of the software component representing the output of the hardware component; and sending, from the software component to at least one other software component, one or more messages representing the output of the hardware component.
Type: Grant
Filed: January 14, 2011
Date of Patent: April 19, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
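A toy sketch of the receive/simulate/forward loop for one software component; the modeled component (a simple inverter) and the message format are invented for illustration and are not from the patent.

```c
#include <stdio.h>

struct sim_msg { int signal; };

/* Stand-in for the modeled hardware behavior (here: a simple inverter). */
static struct sim_msg simulate_component(struct sim_msg in) {
    struct sim_msg out = { .signal = !in.signal };
    return out;
}

/* Stand-in for sending a message to the software component on another node. */
static void send_to_next_component(struct sim_msg m) {
    printf("forwarding simulated output signal=%d\n", m.signal);
}

int main(void) {
    struct sim_msg input = { .signal = 1 };       /* message representing an input */
    struct sim_msg output = simulate_component(input);
    send_to_next_component(output);
    return 0;
}
```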
-
Patent number: 9286145
Abstract: Processing data communications events in a parallel active messaging interface (‘PAMI’) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
Type: Grant
Filed: November 8, 2012
Date of Patent: March 15, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
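A sketch, not the PAMI implementation, of the wait-state behavior the abstract outlines: an "advance" routine sleeps when its context has no actionable events and wakes when one is posted from another thread. The synchronization primitives and names are assumptions.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ctx_cond = PTHREAD_COND_INITIALIZER;
static int pending_events = 0;

static void *advance(void *arg) {
    (void)arg;
    pthread_mutex_lock(&ctx_lock);
    while (pending_events == 0)                   /* nothing actionable: wait */
        pthread_cond_wait(&ctx_cond, &ctx_lock);
    pending_events--;                             /* awakened: process the event */
    pthread_mutex_unlock(&ctx_lock);
    printf("advance: processed one data communications event\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, advance, NULL);
    sleep(1);                                     /* let advance() reach the wait */
    pthread_mutex_lock(&ctx_lock);
    pending_events++;                             /* post a subsequent event */
    pthread_cond_signal(&ctx_cond);
    pthread_mutex_unlock(&ctx_lock);
    pthread_join(t, NULL);
    return 0;
}
```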
-
Patent number: 9253107
Abstract: Data communications may be carried out in a distributed computing environment that includes computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). Such data communications may be carried out by: issuing, by a sender to a receiver, an eager SEND data communications instruction to transfer SEND data, the instruction including information describing data location at the sender and data size; transmitting, by the sender to the receiver, the SEND data as eager data packets; discarding, by the receiver in dependence upon data flow conditions, eager data packets as they are received from the sender; and transferring, in dependence upon the data flow conditions, by the receiver from the sender's data location to a receive buffer by remote direct memory access (“RDMA”), the SEND data.
Type: Grant
Filed: August 27, 2013
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
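A receiver-side sketch of the eager/RDMA decision described above, with the RDMA read simulated by a local copy; the flow-condition check, structs, and function names are placeholders, not the AMI's API.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct eager_send {
    const char *sender_buf;   /* data location at the sender (from the instruction) */
    size_t      size;         /* data size (from the instruction) */
};

/* Stand-in for a remote direct memory access read of the sender's buffer. */
static void rdma_read(char *dst, const struct eager_send *s) {
    memcpy(dst, s->sender_buf, s->size);
}

static void receive(char *recv_buf, const struct eager_send *s,
                    const char *eager_packets, bool congested) {
    if (!congested) {
        memcpy(recv_buf, eager_packets, s->size);   /* accept the eager data */
    } else {
        /* flow conditions bad: eager packets were discarded on arrival */
        rdma_read(recv_buf, s);                     /* pull the SEND data instead */
    }
}

int main(void) {
    char sender_data[] = "payload";
    struct eager_send s = { sender_data, sizeof sender_data };
    char recv_buf[sizeof sender_data];
    receive(recv_buf, &s, sender_data, true);
    printf("received: %s\n", recv_buf);
    return 0;
}
```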
-
Patent number: 9250948
Abstract: A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
Type: Grant
Filed: September 13, 2011
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith, Hanhong Xue
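An illustrative sketch of deriving an endpoint group from a virtual representation: the user names a slice of a (task x endpoint) grid rather than listing individual endpoints, and the group is computed from that slice. Grid sizes and the particular slice rule are assumptions.

```c
#include <stdio.h>

#define NUM_TASKS           4
#define ENDPOINTS_PER_TASK  3

/* Virtual representation: endpoint (t, e) has global id t*ENDPOINTS_PER_TASK + e. */
static int endpoint_id(int task, int ep) {
    return task * ENDPOINTS_PER_TASK + ep;
}

/* User specification: "endpoint index `ep` of every task" -- no endpoint is named. */
static int define_group(int ep, int *group) {
    int n = 0;
    for (int t = 0; t < NUM_TASKS; t++)
        group[n++] = endpoint_id(t, ep);
    return n;
}

int main(void) {
    int group[NUM_TASKS];
    int n = define_group(0, group);        /* group of the 0th endpoint of each task */
    for (int i = 0; i < n; i++)
        printf("group member: endpoint %d\n", group[i]);
    return 0;
}
```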
-
Patent number: 9250987
Abstract: Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a messaging descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that the message has been sent.
Type: Grant
Filed: May 5, 2014
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
-
Patent number: 9250949
Abstract: A parallel computer executes a number of tasks; each task includes a number of endpoints, and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.
Type: Grant
Filed: November 30, 2012
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith, Hanhong Xue
-
Publication number: 20160011996
Abstract: A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate, and it supports DMA functionality allowing for parallel message passing.
Type: Application
Filed: April 30, 2015
Publication date: January 14, 2016
Inventors: Sameh Asaad, Ralph E. Bellofatto, Michael A. Blocksome, Matthias A. Blumrich, Peter Boyle, Jose R. Brunheroto, Dong Chen, Chen-Yong Cher, George L. Chiu, Norman Christ, Paul W. Coteus, Kristan D. Davis, Gabor J. Dozsa, Alexandre E. Eichenberger, Noel A. Eisley, Matthew R. Ellavsky, Kahn C. Evans, Bruce M. Fleischer, Thomas W. Fox, Alan Gara, Mark E. Giampapa, Thomas M. Gooding, Michael K. Gschwind, John A. Gunnels, Shawn A. Hall, Rudolf A. Haring, Philip Heidelberger, Todd A. Inglett, Brant L. Knudson, Gerard V. Kopcsay, Sameer Kumar, Amith R. Mamidala, James A. Marcella, Mark G. Megerian, Douglas R. Miller, Samuel J. Miller, Adam J. Muff, Michael B. Mundy, John K. O'Brien, Kathryn M. O'Brien, Martin Ohmacht, Jeffrey J. Parker, Ruth J. Poole, Joseph D. Ratterman, Valentina Salapura, David L. Satterfield, Robert M. Senger, Burkhard Steinmacher-Burow, William M. Stockdell, Craig B. Stunkel, Krishnan Sugavanam, Yutaka Sugawara, Todd E. Takken, Barry M. Trager, James L. Van Oosten, Charles D. Wait, Robert E. Walkup, Alfred T. Watson, Robert W. Wisniewski, Peng Wu
-
Patent number: 9170966
Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing a sub-tail pointer from a sub-head pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, incrementing the sub-head pointer with the consumption of each packet until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
Type: Grant
Filed: May 6, 2014
Date of Patent: October 27, 2015
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
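A simplified sketch of the sub-head/sub-tail scan: advance a sub-tail over in-sequence packets, consume that run, then re-anchor the window at the next in-sequence packet. The array-backed FIFO and sequence numbers are illustrative assumptions, not the adapter's data structures.

```c
#include <stdio.h>

#define QLEN 8

int main(void) {
    /* Sequence numbers of packets sitting in a FIFO; packet 5 arrives late. */
    int fifo[QLEN] = { 1, 2, 3, 6, 4, 5, 7, 8 };
    int expected = 1;
    int sub_head = 0, sub_tail = 0;

    /* Advance sub_tail until an out-of-sequence packet is encountered. */
    while (sub_tail < QLEN && fifo[sub_tail] == expected + (sub_tail - sub_head))
        sub_tail++;

    /* Consume packets between sub_head and sub_tail, advancing sub_head. */
    while (sub_head < sub_tail) {
        printf("consume packet %d\n", fifo[sub_head]);
        expected++;
        sub_head++;
    }

    /* sub_head == sub_tail: re-anchor at the next in-sequence packet. */
    for (int i = sub_tail; i < QLEN; i++) {
        if (fifo[i] == expected) {
            sub_head = sub_tail = i;
            printf("re-anchored window at index %d (packet %d)\n", i, fifo[i]);
            break;
        }
    }
    return 0;
}
```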
-
Patent number: 9170865
Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
Type: Grant
Filed: May 28, 2014
Date of Patent: October 27, 2015
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
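A single-process sketch of the chunked gather just described: each task's send buffer is split into fixed-size chunks, and the root stores every received chunk at the offset it occupied in the sender's buffer. Buffer and chunk sizes are arbitrary, and the per-chunk "transfer" is simulated with a memcpy.

```c
#include <stdio.h>
#include <string.h>

#define NUM_TASKS   2
#define BUF_SIZE    8
#define CHUNK_SIZE  4

/* Stand-in for the per-chunk transfer a data communications thread would do. */
static void send_chunk_to_root(char *root_recv, int task, size_t offset,
                               const char *chunk) {
    memcpy(root_recv + (size_t)task * BUF_SIZE + offset, chunk, CHUNK_SIZE);
}

int main(void) {
    char send_bufs[NUM_TASKS][BUF_SIZE + 1] = { "AAAABBBB", "CCCCDDDD" };
    char root_recv[NUM_TASKS * BUF_SIZE + 1] = { 0 };

    for (int t = 0; t < NUM_TASKS; t++)
        for (size_t off = 0; off < BUF_SIZE; off += CHUNK_SIZE)
            send_chunk_to_root(root_recv, t, off, &send_bufs[t][off]);

    printf("root receive buffer: %s\n", root_recv);
    return 0;
}
```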
-
Patent number: 9164792
Abstract: Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer.
Type: Grant
Filed: January 6, 2014
Date of Patent: October 20, 2015
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
-
Patent number: 9158718
Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing a sub-tail pointer from a sub-head pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, incrementing the sub-head pointer with the consumption of each packet until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines whether the head pointer is pointing to the next in-sequence packet. If the head pointer is pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the head pointer. If the head pointer is not pointing to the next in-sequence packet, the DMA adapter resets the sub-head pointer and the sub-tail pointer to the next in-sequence packet.
Type: Grant
Filed: January 7, 2014
Date of Patent: October 13, 2015
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
-
Patent number: 9152590
Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing a sub-tail pointer from a sub-head pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, incrementing the sub-head pointer with the consumption of each packet until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA controller copies the contents of the first FIFO message queue into the second FIFO message queue.
Type: Grant
Filed: May 14, 2014
Date of Patent: October 6, 2015
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
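A sketch of the overflow step in this variant: when the awaited in-sequence packet is absent and the first FIFO is past a threshold, its contents are copied into a second FIFO so arrivals can continue. Capacities, the threshold, and the struct are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define FIFO_CAP   8
#define THRESHOLD  6

struct fifo { int pkts[FIFO_CAP]; int count; };

static bool contains(const struct fifo *f, int seq) {
    for (int i = 0; i < f->count; i++)
        if (f->pkts[i] == seq) return true;
    return false;
}

int main(void) {
    struct fifo first  = { { 2, 3, 4, 5, 6, 7 }, 6 };   /* packet 1 never arrived */
    struct fifo second = { { 0 }, 0 };
    int next_in_sequence = 1;

    if (!contains(&first, next_in_sequence) && first.count >= THRESHOLD) {
        memcpy(second.pkts, first.pkts, sizeof(int) * (size_t)first.count);
        second.count = first.count;
        first.count = 0;                                 /* first FIFO drained */
        printf("copied %d packets into the second FIFO\n", second.count);
    }
    return 0;
}
```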
-
Patent number: 9146886
Abstract: Deterministic message processing in a direct memory access (DMA) adapter includes the DMA adapter incrementing a sub-tail pointer from a sub-head pointer until encountering an out-of-sequence packet. The DMA adapter also consumes packets between the sub-head pointer and the sub-tail pointer, incrementing the sub-head pointer with the consumption of each packet until determining that the sub-head pointer is equal to the sub-tail pointer. In response to determining that the sub-head pointer is equal to the sub-tail pointer, the DMA adapter determines that the next in-sequence packet is not in the first FIFO message queue. In response to determining that the next in-sequence packet is not in the first FIFO message queue and that the first FIFO message queue exceeds a threshold capacity, the DMA controller copies the contents of the first FIFO message queue into the second FIFO message queue.
Type: Grant
Filed: January 6, 2014
Date of Patent: September 29, 2015
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
-
Patent number: 9122514
Abstract: Administering message acknowledgements in a parallel computer that includes compute nodes, with each compute node including a processor and a messaging accelerator, includes: storing in a list, by a processor of a compute node, a message descriptor describing a message and an acknowledgement request descriptor describing a request for an acknowledgement of receipt of the message; processing, by a messaging accelerator of the compute node, the list, including transmitting, to a target compute node, the message described by the message descriptor and transmitting, to the target compute node, the request described by the acknowledgement request descriptor; receiving, by the messaging accelerator from the target compute node, an acknowledgement of receipt of the message, including notifying the processor of receipt of the acknowledgement; and removing, by the processor from the list, the message descriptor and the acknowledgment request descriptor.
Type: Grant
Filed: January 7, 2014
Date of Patent: September 1, 2015
Assignee: International Business Machines Corporation
Inventor: Michael A. Blocksome
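A compact sketch of the descriptor list flow: the processor queues a message descriptor plus an ack-request descriptor, the accelerator "transmits" both, and the pair is removed when the acknowledgement arrives. The enum, struct, and removal-by-flag scheme are assumptions for illustration.

```c
#include <stdio.h>

enum desc_kind { MSG_DESC, ACK_REQ_DESC };

struct descriptor { enum desc_kind kind; int msg_id; int active; };

#define LIST_LEN 2

static void accelerator_process(struct descriptor *list, int n) {
    for (int i = 0; i < n; i++)
        if (list[i].active)
            printf("accelerator: transmitting %s for msg %d\n",
                   list[i].kind == MSG_DESC ? "message" : "ack request",
                   list[i].msg_id);
}

static void on_acknowledgement(struct descriptor *list, int n, int msg_id) {
    for (int i = 0; i < n; i++)
        if (list[i].msg_id == msg_id)
            list[i].active = 0;        /* processor removes both descriptors */
    printf("ack received for msg %d; descriptors removed\n", msg_id);
}

int main(void) {
    struct descriptor list[LIST_LEN] = {
        { MSG_DESC,     7, 1 },
        { ACK_REQ_DESC, 7, 1 },
    };
    accelerator_process(list, LIST_LEN);
    on_acknowledgement(list, LIST_LEN, 7);
    return 0;
}
```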
-
Patent number: 9116750
Abstract: Methods, apparatuses, and computer program products for optimizing collective communications within a parallel computer comprising a plurality of hardware threads for executing software threads of a parallel application are provided. Embodiments include a processor of a parallel computer determining, for each software thread, an affinity of the software thread to a particular hardware thread. Each affinity indicates an assignment of a software thread to a particular hardware thread. The processor also generates one or more affinity domains based on the affinities of the software threads. Embodiments also include a processor generating, for each affinity domain, a topology of the affinity domain based on the affinities of the software threads to the hardware threads. According to embodiments of the present application, a processor also performs, based on the generated topologies of the affinity domains, a collective operation on one or more software threads.
Type: Grant
Filed: August 8, 2012
Date of Patent: August 25, 2015
Assignee: International Business Machines Corporation
Inventors: Charles J. Archer, Michael A. Blocksome, Joseph D. Ratterman, Brian E. Smith
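A small sketch of grouping software threads into affinity domains by the hardware thread each one is assigned to; the thread counts and the affinity table are invented for illustration, and the collective itself is left as a comment.

```c
#include <stdio.h>

#define NUM_SW_THREADS 6
#define NUM_HW_THREADS 3

int main(void) {
    /* affinity[i] = hardware thread assigned to software thread i */
    int affinity[NUM_SW_THREADS] = { 0, 1, 0, 2, 1, 2 };

    /* One affinity domain per hardware thread; list its software threads. */
    for (int hw = 0; hw < NUM_HW_THREADS; hw++) {
        printf("affinity domain %d:", hw);
        for (int sw = 0; sw < NUM_SW_THREADS; sw++)
            if (affinity[sw] == hw)
                printf(" sw-thread %d", sw);
        printf("\n");
    }
    /* A collective operation would then run over each domain's topology. */
    return 0;
}
```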
-
Patent number: 9104512
Abstract: Fencing data transfers in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
Type: Grant
Filed: November 15, 2012
Date of Patent: August 11, 2015
Assignee: International Business Machines Corporation
Inventors: Michael A. Blocksome, Amith R. Mamidala
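A toy sketch of the ordering property the FENCE relies on: over a deterministic (in-order) channel, a FENCE marker placed after a run of SENDs completes only once every earlier SEND has drained, with no per-SEND accounting. The queue model and names are assumptions, not the PAMI API.

```c
#include <stdio.h>

enum op_kind { OP_SEND, OP_FENCE };

int main(void) {
    /* Ordered sequence of active-message operations between two endpoints. */
    enum op_kind sequence[] = { OP_SEND, OP_SEND, OP_SEND, OP_FENCE };
    int n = (int)(sizeof sequence / sizeof sequence[0]);
    int sends_completed = 0;

    /* A deterministic network delivers in order, so draining the queue
     * front-to-back models completion order. */
    for (int i = 0; i < n; i++) {
        if (sequence[i] == OP_SEND) {
            sends_completed++;
            printf("SEND %d completed\n", sends_completed);
        } else {
            printf("FENCE completed after %d SENDs\n", sends_completed);
        }
    }
    return 0;
}
```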