Patents by Inventor Thomas Gooding

Thomas Gooding has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
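Short, illustrative code sketches of several of the techniques summarized in the abstracts follow the listing.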

  • Publication number: 20240020172
    Abstract: A first device may obtain metrics information associated with a second device, the metrics information indicating a measurement of a performance of a component of the second device, and the metrics information being obtained via a first network. The first device may determine a load of a processing unit of the second device based on the metrics information. The first device may determine, based on the load of the processing unit, whether the second device is capable of executing a portion of a job via a second network different from the first network. The first device may cause the second device to execute the portion of the job via the second network based on determining that the second device is capable of executing the portion of the job via the second network.
    Type: Application
    Filed: July 14, 2022
    Publication date: January 18, 2024
    Applicant: International Business Machines Corporation
    Inventor: Thomas Gooding
  • Patent number: 11301165
    Abstract: A data management system and method for accelerating shared file checkpointing. Written application data is aggregated in an application data file created in a local burst buffer memory at a compute node, and an associated data mapping index is built to maintain information about the offsets in a shared file at which segments of the application data are to be stored in a parallel file system, and about where in the buffer those segments are located. The node asynchronously transfers a data file containing the application data and the associated data mapping index to a file server for shared file storage. The data management system and method further accelerate shared file checkpointing on the read side: a shared file, together with a map file that specifies how the shared file is to be distributed, is asynchronously transferred to the local burst buffer memories at the nodes to accelerate reading of the shared file.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Thomas Gooding, Pierre Lemarinier, Bryan S. Rosenburg
  • Publication number: 20190332318
    Abstract: A data management system and method for accelerating shared file checkpointing. Written application data is aggregated in an application data file created in a local burst buffer memory at a compute node, and an associated data mapping index is built to maintain information about the offsets in a shared file at which segments of the application data are to be stored in a parallel file system, and about where in the buffer those segments are located. The node asynchronously transfers a data file containing the application data and the associated data mapping index to a file server for shared file storage. The data management system and method further accelerate shared file checkpointing on the read side: a shared file, together with a map file that specifies how the shared file is to be distributed, is asynchronously transferred to the local burst buffer memories at the nodes to accelerate reading of the shared file.
    Type: Application
    Filed: April 26, 2018
    Publication date: October 31, 2019
    Inventors: Thomas Gooding, Pierre Lemarinier, Bryan S. Rosenburg
  • Patent number: 8631086
    Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: January 14, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
  • Patent number: 8112559
    Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: February 7, 2012
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
  • Publication number: 20110173287
    Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
    Type: Application
    Filed: September 30, 2008
    Publication date: July 14, 2011
    Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
  • Publication number: 20100082848
    Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
  • Publication number: 20070234294
    Abstract: Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
    Type: Application
    Filed: February 23, 2006
    Publication date: October 4, 2007
    Applicant: International Business Machines Corporation
    Inventor: Thomas Gooding
  • Publication number: 20060089829
    Abstract: The present invention enhances the Direct Access Stimulus (DAS) interface presently employed within a logic simulation hardware emulator to provide efficient random access to all logic arrays present within a logic model while the emulator is actively cycling. The present invention achieves this by introducing a set of special DAS array port logic within the logic model. This new port logic interfaces with a set of connections on the DAS card interface and provides the control program with efficient random access to all arrays within the design under test (i.e., the logic model).
    Type: Application
    Filed: October 21, 2004
    Publication date: April 27, 2006
    Applicant: International Business Machines Corporation
    Inventors: Thomas Gooding, Roy Musselman
  • Publication number: 20060047493
    Abstract: An apparatus and method are disclosed for reducing power consumption in a computing system by moving pages allocated in real memory portions to other real memory portions. When a real memory portion contains no pages, that memory portion can be put into a Deep Power Down (DPD) state that has lower power consumption than when that memory portion is in normal operation. The computing system can also be aware of power consumption of each real memory portion, and, with such awareness, the invention teaches consolidation of pages from real memory portions having relatively higher power consumption to real memory portions having relatively lower power consumption.
    Type: Application
    Filed: September 2, 2004
    Publication date: March 2, 2006
    Applicant: International Business Machines Corporation
    Inventor: Thomas Gooding
  • Publication number: 20050256696
    Abstract: A method, apparatus and program product are provided for increasing the usable memory capacity of a logic simulation hardware emulator. The present invention performs an additional logic synthesis operation during model build to transform an original logical array within a logic model into a transformed logical array, such that a row within the transformed logical array includes a plurality of merged logical array rows from the original logical array. The invention further modifies read and write port logic surrounding the transformed logical array during the logic synthesis operation to support read and write accesses during model emulation run time.
    Type: Application
    Filed: May 13, 2004
    Publication date: November 17, 2005
    Applicant: International Business Machines Corporation
    Inventors: Thomas Gooding, Roy Musselman
  • Publication number: 20050171756
    Abstract: The present invention provides a method, apparatus and program product for a self-healing, reconfigurable logic emulation system, wherein if a signal wire becomes faulty in an emulation cable during an emulation run, the runtime software can automatically reconfigure the emulator to reroute the data destined for the faulty signal wire across a spare wire. Such a feature enables a user to restart the emulation run without having to recompile the simulation model to account for the hardware fault.
    Type: Application
    Filed: January 15, 2004
    Publication date: August 4, 2005
    Applicant: International Business Machines Corporation
    Inventors: Thomas Gooding, Roy Musselman
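
Illustrative code sketches

The sketches below are informal illustrations of the techniques summarized in the abstracts above. They are not taken from the patent filings; every name, constant, and interface in them is an assumption made for readability.

The entry for publication 20240020172 above describes a first device that gathers performance metrics from a second device over one network, estimates the load on the second device's processing unit, and offloads part of a job to it over a different network only when the load allows. The following Python sketch illustrates that decision flow; the Metrics fields, the 50% load threshold, and the send callback are invented for illustration and are not taken from the filing.

    from dataclasses import dataclass

    @dataclass
    class Metrics:
        """Hypothetical per-component measurements reported over the first (management) network."""
        cpu_busy_percent: float
        running_tasks: int

    LOAD_THRESHOLD = 50.0  # assumed cutoff; the filing does not name a value

    def processing_load(metrics: Metrics) -> float:
        # Reduce the reported measurements to a single load figure for the processing unit.
        return metrics.cpu_busy_percent

    def can_offload(metrics: Metrics) -> bool:
        # Offload only when the second device's processing unit has spare capacity.
        return processing_load(metrics) < LOAD_THRESHOLD

    def maybe_offload(job_portion, metrics: Metrics, data_network_send) -> bool:
        """Send the job portion over the second (data) network if the peer can take it."""
        if can_offload(metrics):
            data_network_send(job_portion)
            return True
        return False

    if __name__ == "__main__":
        sent = maybe_offload("render tiles 0-63",
                             Metrics(cpu_busy_percent=22.5, running_tasks=3),
                             data_network_send=lambda portion: print("offloading:", portion))
        print("offloaded" if sent else "kept local")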
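
Patent 11301165 and its application 20190332318 above describe aggregating checkpoint writes in a node-local burst buffer while building a data mapping index that records, for each buffered segment, its target offset in the shared file and its location in the buffer, with the data and index later drained asynchronously to the parallel file system. This is a minimal sketch of that bookkeeping, assuming invented names and an in-memory buffer; the described system transfers the file asynchronously via a file server.

    import io
    from dataclasses import dataclass, field

    @dataclass
    class MapEntry:
        shared_file_offset: int   # where the segment belongs in the shared file
        buffer_offset: int        # where the segment currently sits in the burst buffer
        length: int

    @dataclass
    class BurstBufferCheckpoint:
        buffer: io.BytesIO = field(default_factory=io.BytesIO)
        index: list = field(default_factory=list)

        def write(self, shared_file_offset: int, data: bytes) -> None:
            # Append the segment to the local buffer and record where it must land
            # in the shared file and where it sits in the buffer.
            buffer_offset = self.buffer.tell()
            self.buffer.write(data)
            self.index.append(MapEntry(shared_file_offset, buffer_offset, len(data)))

        def drain_to(self, shared_file) -> None:
            # In the described system this transfer happens asynchronously via a file
            # server; here it is a plain loop for clarity.
            raw = self.buffer.getvalue()
            for entry in self.index:
                shared_file.seek(entry.shared_file_offset)
                shared_file.write(raw[entry.buffer_offset:entry.buffer_offset + entry.length])

    if __name__ == "__main__":
        checkpoint = BurstBufferCheckpoint()
        checkpoint.write(4096, b"rank-1 state")
        checkpoint.write(0, b"rank-0 state")
        with open("shared.ckpt", "wb") as f:
            checkpoint.drain_to(f)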
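
The four message-queue entries above (patents 8631086 and 8112559 and their two published applications) describe the same mechanism: when a DMA injection queue fills, an interrupt handler stops the DMA, moves the queued descriptors into a larger queue or a freshly allocated memory block, restarts the DMA, and lets a later advance cycle re-inject the parked descriptors. The toy Python model below captures only that control flow; the queue capacity, class names, and advance loop are assumptions for illustration, not the hardware interface.

    from collections import deque

    class DmaModel:
        """Toy model of a DMA injection queue that can fill up."""

        def __init__(self, capacity: int = 4):
            self.capacity = capacity
            self.queue = deque()
            self.parked = []          # side memory block holding overflow descriptors
            self.running = True

        def inject(self, descriptor) -> None:
            if len(self.queue) >= self.capacity:
                self._queue_full_interrupt()
            self.queue.append(descriptor)

        def _queue_full_interrupt(self) -> None:
            # Interrupt handler: stop the DMA, move every descriptor out of the full
            # queue into a separately allocated block, then restart the DMA.
            self.running = False
            self.parked.extend(self.queue)
            self.queue.clear()
            self.running = True

        def advance(self) -> list:
            # Normal advance cycle: re-inject parked descriptors as space allows,
            # then process (here: simply return) whatever is in the queue.
            while self.parked and len(self.queue) < self.capacity:
                self.queue.append(self.parked.pop(0))
            processed = list(self.queue)
            self.queue.clear()
            return processed

    if __name__ == "__main__":
        dma = DmaModel(capacity=2)
        for i in range(5):
            dma.inject(f"desc-{i}")
        print(dma.advance())   # parked descriptors are re-injected over later cycles
        print(dma.advance())
        print(dma.advance())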
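
Publication 20070234294 above describes debugging a high performance computing program by collecting the addresses of calling instructions for each thread, grouping threads that share the same addresses, and displaying the groups so that defective (outlier) threads stand out. The grouping step might look like this sketch; the sample call chains are invented.

    from collections import defaultdict

    def group_threads(call_chains):
        """Group thread ids by identical call-chain addresses."""
        groups = defaultdict(list)
        for thread_id, chain in call_chains.items():
            groups[chain].append(thread_id)
        return dict(groups)

    if __name__ == "__main__":
        # Hypothetical calling-instruction addresses collected from four threads.
        chains = {
            0: (0x40011A, 0x4003F0),
            1: (0x40011A, 0x4003F0),
            2: (0x40011A, 0x4003F0),
            3: (0x40011A, 0x400B22),   # the outlier is the likely defective thread
        }
        for chain, members in sorted(group_threads(chains).items(), key=lambda kv: len(kv[1])):
            print([hex(addr) for addr in chain], "-> threads", members)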
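
Publication 20060089829 above describes adding DAS array port logic so that a control program can randomly read or write any array in the design under test while the emulator keeps cycling. The hardware details are beyond what a short sketch can capture; the toy model below only illustrates the kind of request a control program would issue through such a port, with all names and array contents invented.

    from dataclasses import dataclass

    @dataclass
    class ArrayPortRequest:
        """Hypothetical request a control program would place on a DAS array port."""
        array_name: str
        row: int
        write: bool = False
        data: int = 0

    class EmulatedArrays:
        """Toy stand-in for the logic arrays inside the design under test."""

        def __init__(self):
            self._arrays = {"L1_TAG": [0] * 8, "REGFILE": [0] * 16}

        def access(self, request: ArrayPortRequest) -> int:
            # The real port logic services this while the emulator keeps cycling;
            # here it is an ordinary indexed read or write.
            array = self._arrays[request.array_name]
            if request.write:
                array[request.row] = request.data
            return array[request.row]

    if __name__ == "__main__":
        port = EmulatedArrays()
        port.access(ArrayPortRequest("REGFILE", row=3, write=True, data=0xBEEF))
        print(hex(port.access(ArrayPortRequest("REGFILE", row=3))))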
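
Publication 20060047493 above describes moving allocated pages off a real memory portion so that the emptied portion can enter a Deep Power Down (DPD) state, and consolidating pages onto the portions with the lowest power consumption. A minimal sketch of that consolidation policy follows; the wattage figures, capacities, and data structures are invented for illustration.

    def consolidate(portions):
        """Pack pages onto the lowest-power portions; return the portions that end up
        empty and can therefore be put into Deep Power Down (DPD)."""
        order = sorted(portions, key=lambda name: portions[name]["watts"])
        remaining = sum(p["pages"] for p in portions.values())
        for name in order:
            placed = min(portions[name]["capacity"], remaining)
            portions[name]["pages"] = placed
            remaining -= placed
        return [name for name, p in portions.items() if p["pages"] == 0]

    if __name__ == "__main__":
        memory = {
            "DIMM0": {"watts": 3.0, "capacity": 1000, "pages": 400},
            "DIMM1": {"watts": 5.0, "capacity": 1000, "pages": 300},
            "DIMM2": {"watts": 4.0, "capacity": 1000, "pages": 200},
        }
        print("portions eligible for DPD:", consolidate(memory))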
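
Publication 20050256696 above describes packing several rows of an original logical array into one wider row of a transformed array during model build, with the read and write port logic adjusted to select the correct slice at emulation run time. The address arithmetic that such port logic performs might look like the sketch below; the merge factor of 4 and the 32-bit row width are arbitrary assumptions.

    MERGE_FACTOR = 4       # assumed number of original rows packed into one transformed row
    ROW_WIDTH_BITS = 32    # assumed width of one original row

    def transformed_address(original_row: int):
        """Map an original row number to (transformed row, bit offset of its slice)."""
        merged_row = original_row // MERGE_FACTOR
        slot = original_row % MERGE_FACTOR
        return merged_row, slot * ROW_WIDTH_BITS

    def read_slice(wide_row_value: int, bit_offset: int) -> int:
        # Extract the 32-bit slice that corresponds to the original row.
        return (wide_row_value >> bit_offset) & ((1 << ROW_WIDTH_BITS) - 1)

    if __name__ == "__main__":
        row, offset = transformed_address(original_row=10)
        print(row, offset)                        # row 2, slice at bits 64..95
        print(hex(read_slice(0xDEADBEEF << offset, offset)))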
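
Publication 20050171756 above describes detecting a faulty signal wire in an emulation cable and having the runtime software remap the affected signal onto a spare wire so the emulation run can restart without recompiling the model. A minimal sketch of such a remapping table follows; the signal names, wire numbers, and spare count are invented.

    class CableMap:
        """Maps logical signals to physical wires, holding spare wires in reserve."""

        def __init__(self, signals, spare_wires):
            # Initially, signal i rides on physical wire i.
            self.route = {signal: i for i, signal in enumerate(signals)}
            self.spares = list(spare_wires)

        def mark_faulty(self, wire: int) -> None:
            # Reroute whichever signal used the faulty wire onto the next spare wire,
            # so the run can restart without recompiling the model.
            if not self.spares:
                raise RuntimeError("no spare wires left")
            for signal, current in self.route.items():
                if current == wire:
                    self.route[signal] = self.spares.pop(0)
                    return

    if __name__ == "__main__":
        cable = CableMap(signals=["clk", "data0", "data1"], spare_wires=[62, 63])
        cable.mark_faulty(1)        # the wire carrying "data0" has gone bad
        print(cable.route)          # data0 now rides on spare wire 62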