Patents by Inventor Thomas Gooding
Thomas Gooding has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240020172
Abstract: A first device may obtain metrics information associated with a second device, the metrics information indicating a measurement of a performance of a component of the second device, and the metrics information being obtained via a first network. The first device may determine a load of a processing unit of the second device based on the metrics information. The first device may determine, based on the load of the processing unit, whether the second device is capable of executing a portion of a job via a second network different from the first network. The first device may cause the second device to execute the portion of the job via the second network based on determining that the second device is capable of executing the portion of the job via the second network.
Type: Application
Filed: July 14, 2022
Publication date: January 18, 2024
Applicant: International Business Machines Corporation
Inventor: Thomas Gooding
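The decision logic this abstract describes can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the metric names, the load formula, and the 0.75 threshold are all invented for the example.

```python
# Illustrative sketch: a first device inspects metrics received from a
# second device over a first network, estimates the second device's
# processing-unit load, and decides whether to offload part of a job
# over a second network. All names and thresholds are assumptions.

def can_offload(metrics: dict, load_threshold: float = 0.75) -> bool:
    """Estimate processing-unit load from metrics and decide offload."""
    busy = metrics.get("cpu_busy_cycles", 0)
    total = metrics.get("cpu_total_cycles", 1)
    return (busy / total) < load_threshold

def schedule(job_portion, metrics, execute_via_second_network) -> bool:
    """Offload the job portion only if the second device has headroom."""
    if can_offload(metrics):
        execute_via_second_network(job_portion)
        return True
    return False
```

The key point the abstract makes is that the metrics travel over one network while the offloaded work executes over another; the sketch models only the capability check that gates that handoff.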
-
Patent number: 11301165
Abstract: A data management system and method for accelerating shared file checkpointing. Written application data is aggregated in an application data file created in a local burst buffer memory at a compute node, and an associated data mapping index is built to maintain information related to the offsets into a shared file at which segments of the application data are to be stored in a parallel file system, and where in the buffer those segments are located. The node asynchronously transfers a data file containing the application data and the associated data mapping index to a file server for shared file storage. The data management system and method further accelerates shared file checkpointing in which a shared file, together with a map file that specifies how the shared file is to be distributed, is asynchronously transferred to local burst buffer memories at the nodes to accelerate reading of the shared file.
Type: Grant
Filed: April 26, 2018
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Thomas Gooding, Pierre Lemarinier, Bryan S. Rosenburg
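The aggregation-plus-index idea in this abstract can be sketched in a few lines. This is a simplified software analogue under assumed names; the real system works against burst buffer hardware and a parallel file system, neither of which is modeled here.

```python
# Illustrative sketch: writes destined for arbitrary offsets in a shared
# file are appended sequentially to a local burst buffer, while an index
# records (shared-file offset, buffer offset, length) for each segment so
# a file server can later place the segments correctly. Names are
# hypothetical, not from the patent.

class BurstBuffer:
    def __init__(self):
        self.data = bytearray()   # local aggregation buffer
        self.index = []           # (shared_file_offset, buffer_offset, length)

    def write(self, shared_file_offset: int, payload: bytes) -> None:
        """Append payload locally; remember where it belongs in the file."""
        self.index.append((shared_file_offset, len(self.data), len(payload)))
        self.data.extend(payload)

    def materialize(self, file_size: int) -> bytearray:
        """Reconstruct the shared file from the buffer and the index."""
        out = bytearray(file_size)
        for file_off, buf_off, length in self.index:
            out[file_off:file_off + length] = self.data[buf_off:buf_off + length]
        return out
```

The design choice the abstract emphasizes is asynchrony: the compute node appends locally at memory speed and ships the buffer plus index to the file server later, so checkpoint writes do not stall on the parallel file system.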
-
Publication number: 20190332318
Abstract: A data management system and method for accelerating shared file checkpointing. Written application data is aggregated in an application data file created in a local burst buffer memory at a compute node, and an associated data mapping index is built to maintain information related to the offsets into a shared file at which segments of the application data are to be stored in a parallel file system, and where in the buffer those segments are located. The node asynchronously transfers a data file containing the application data and the associated data mapping index to a file server for shared file storage. The data management system and method further accelerates shared file checkpointing in which a shared file, together with a map file that specifies how the shared file is to be distributed, is asynchronously transferred to local burst buffer memories at the nodes to accelerate reading of the shared file.
Type: Application
Filed: April 26, 2018
Publication date: October 31, 2019
Inventors: Thomas Gooding, Pierre Lemarinier, Bryan S. Rosenburg
-
Patent number: 8631086
Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Type: Grant
Filed: September 30, 2008
Date of Patent: January 14, 2014
Assignee: International Business Machines Corporation
Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
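The spill-and-retry scheme in this abstract can be modeled in plain software. This is a hedged sketch of the second alternative the abstract describes (moving descriptors into an allocated memory block, then re-injecting them during the advance cycle); it does not model the DMA hardware or interrupt machinery, and every name here is invented.

```python
# Illustrative model of the deadlock-avoidance scheme: when a messaging
# queue fills, the "interrupt handler" drains its descriptors into a
# spill block so the queue can keep accepting work, and a later advance
# cycle re-injects the spilled descriptors into queues with free space.
from collections import deque

class MessageQueues:
    def __init__(self, capacity: int, n_queues: int):
        self.capacity = capacity
        self.queues = [deque() for _ in range(n_queues)]
        self.spill = []  # memory block holding overflow descriptors

    def inject(self, qid: int, descriptor) -> None:
        q = self.queues[qid]
        if len(q) >= self.capacity:      # queue full: the "interrupt" fires
            self.spill.extend(q)         # move descriptors to the spill block
            q.clear()                    # queue drains; "DMA" restarts
        q.append(descriptor)

    def advance(self) -> None:
        """Normal advance cycle: retry spilled descriptors elsewhere."""
        remaining = []
        for d in self.spill:
            target = min(self.queues, key=len)   # least-loaded queue
            if len(target) < self.capacity:
                target.append(d)
            else:
                remaining.append(d)
        self.spill = remaining
```

The point of the scheme is that a full queue never blocks the producer (which is what creates the deadlock); descriptors are parked and retried until all have been processed.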
-
Patent number: 8112559
Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Type: Grant
Filed: September 30, 2008
Date of Patent: February 7, 2012
Assignee: International Business Machines Corporation
Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
-
Publication number: 20110173287
Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Type: Application
Filed: September 30, 2008
Publication date: July 14, 2011
Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
-
Publication number: 20100082848
Abstract: Embodiments of the invention may be used to manage message queues in a parallel computing environment to prevent message queue deadlock. A direct memory access controller of a compute node may determine when a messaging queue is full. In response, the DMA may generate an interrupt. An interrupt handler may stop the DMA and swap all descriptors from the full messaging queue into a larger queue (or enlarge the original queue). The interrupt handler then restarts the DMA. Alternatively, the interrupt handler stops the DMA, allocates a memory block to hold queue data, and then moves descriptors from the full messaging queue into the allocated memory block. The interrupt handler then restarts the DMA. During a normal messaging advance cycle, a messaging manager attempts to inject the descriptors in the memory block into other messaging queues until the descriptors have all been processed.
Type: Application
Filed: September 30, 2008
Publication date: April 1, 2010
Inventors: Michael A. Blocksome, Dong Chen, Thomas Gooding, Philip Heidelberger, Jeff Parker
-
Publication number: 20070234294
Abstract: Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
Type: Application
Filed: February 23, 2006
Publication date: October 4, 2007
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Thomas Gooding
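The grouping step this abstract describes can be sketched directly. This is an illustrative analogue under invented data: real tools would harvest the call addresses from live stack traces, whereas here the traces are synthetic.

```python
# Hedged sketch of the debugging approach: gather, for each thread, the
# addresses of its calling instructions (a stack trace), group threads
# whose address lists match, and display the groups. In a large parallel
# job, a small outlier group stands out as the likely defective threads.
from collections import defaultdict

def group_threads(traces: dict) -> dict:
    """Map each distinct call-address tuple to the threads sharing it."""
    groups = defaultdict(list)
    for thread_id, addresses in traces.items():
        groups[tuple(addresses)].append(thread_id)
    return dict(groups)

# Synthetic example: three threads wait at the same barrier, one is stuck
# on a different call path (all addresses made up for illustration).
traces = {
    0: [0x4005A0, 0x400730],
    1: [0x4005A0, 0x400730],
    2: [0x4005A0, 0x400730],
    3: [0x4005A0, 0x400991],   # the outlier, likely defective
}
groups = group_threads(traces)
```

With thousands of threads, collapsing identical traces into groups reduces the display from one entry per thread to one entry per distinct call path, which is what makes the defective thread visible.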
-
Publication number: 20060089829
Abstract: The present invention enhances the Direct Access Stimulus (DAS) interface presently employed within a logic simulation hardware emulator to provide efficient random access to all logic arrays present within a logic model while the emulator is actively cycling. The present invention achieves this by introducing a set of special DAS array port logic within the logic model. This new port logic interfaces with a set of connections on the DAS card interface and provides the control program with efficient random accessibility to all arrays within the design under test (i.e., the logic model).
Type: Application
Filed: October 21, 2004
Publication date: April 27, 2006
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Thomas Gooding, Roy Musselman
-
Publication number: 20060047493
Abstract: An apparatus and method are disclosed for reducing power consumption in a computing system by moving pages allocated in real memory portions to other real memory portions. When a real memory portion contains no pages, that memory portion can be put into a Deep Power Down (DPD) state that has lower power consumption than when that memory portion is in normal operation. The computing system can also be aware of power consumption of each real memory portion, and, with such awareness, the invention teaches consolidation of pages from real memory portions having relatively higher power consumption to real memory portions having relatively lower power consumption.
Type: Application
Filed: September 2, 2004
Publication date: March 2, 2006
Applicant: International Business Machines Corporation
Inventor: Thomas Gooding
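The consolidation policy in this abstract can be sketched as a simple packing pass. This is a hypothetical illustration: the portion names, power figures, and capacity model are invented, and real page migration involves remapping virtual-to-physical translations that the sketch omits.

```python
# Illustrative sketch: migrate pages toward the real memory portions
# that cost the least power to keep active, so higher-power portions
# empty out and become candidates for the Deep Power Down (DPD) state.

def consolidate(portions: dict, capacity: int) -> list:
    """Pack pages into lowest-power portions; return emptied portions.

    portions maps name -> {"power": watts, "pages": [page ids]}.
    capacity is the page capacity of each portion (uniform here).
    """
    # Fill portions in ascending order of active power consumption.
    order = sorted(portions, key=lambda p: portions[p]["power"])
    all_pages = [pg for p in order for pg in portions[p]["pages"]]
    for p in order:
        portions[p]["pages"] = all_pages[:capacity]
        all_pages = all_pages[capacity:]
    # Portions left with no pages can enter DPD.
    return [p for p in portions if not portions[p]["pages"]]
```

The sketch captures both ideas in the abstract: emptying portions (so they can enter DPD) and preferring low-power portions as the destination for the surviving pages.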
-
Publication number: 20050256696
Abstract: A method, apparatus and program product are provided for increasing the usable memory capacity of a logic simulation hardware emulator. The present invention performs an additional logic synthesis operation during model build to transform an original logical array within a logic model into a transformed logical array, such that a row within the transformed logical array includes a plurality of merged logical array rows from the original logical array. The invention further modifies read and write port logic surrounding the transformed logical array during the logic synthesis operation to support read and write accesses during model emulation run time.
Type: Application
Filed: May 13, 2004
Publication date: November 17, 2005
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Thomas Gooding, Roy Musselman
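The row-merging transform this abstract describes can be illustrated with a small address-splitting model. This is a sketch under assumed parameters (4 rows merged per transformed row, 8 bits per original row); the actual synthesis operates on hardware array ports, not Python integers.

```python
# Illustrative sketch: several rows of the original logical array are
# packed into one wider row of the transformed array. The modified port
# logic splits an original address into (merged row, slot) so reads and
# writes still behave as if the original narrow array existed.

MERGE = 4     # original rows packed per transformed row (assumed)
WIDTH = 8     # bits per original row (assumed)
MASK = (1 << WIDTH) - 1

def write(mem: list, addr: int, value: int) -> None:
    """Write one original-array row into the merged representation."""
    row, slot = divmod(addr, MERGE)
    shift = slot * WIDTH
    mem[row] = (mem[row] & ~(MASK << shift)) | ((value & MASK) << shift)

def read(mem: list, addr: int) -> int:
    """Read one original-array row back out of the merged representation."""
    row, slot = divmod(addr, MERGE)
    return (mem[row] >> (slot * WIDTH)) & MASK
```

The payoff is capacity: the transformed array needs a quarter as many rows (at four times the width), which is what lets the emulator's physical memory hold a larger logical model.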
-
Publication number: 20050171756
Abstract: The present invention provides a method, apparatus and program product for a self-healing, reconfigurable logic emulation system, wherein if a signal wire becomes faulty in an emulation cable during an emulation run, the runtime software can automatically reconfigure the emulator to reroute the data destined for the faulty signal wire across a spare wire. Such a feature enables a user to restart the emulation run without having to recompile the simulation model to account for the hardware fault.
Type: Application
Filed: January 15, 2004
Publication date: August 4, 2005
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Thomas Gooding, Roy Musselman
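The reroute step this abstract describes amounts to maintaining a signal-to-wire routing table with a pool of spares. The sketch below is a minimal software analogue with invented wire numbering; the real system reconfigures emulator hardware at run time.

```python
# Minimal sketch of the self-healing reroute: when a signal wire in an
# emulation cable tests faulty, the runtime remaps whichever signal used
# that wire onto a spare wire, instead of forcing a model recompile.

class Cable:
    def __init__(self, n_wires: int, spares: list):
        self.route = {s: s for s in range(n_wires)}  # signal -> wire
        self.spares = list(spares)

    def mark_faulty(self, wire: int) -> int:
        """Reroute the signal carried by `wire` onto a spare; return it."""
        if not self.spares:
            raise RuntimeError("no spare wires left; recompile required")
        spare = self.spares.pop(0)
        for signal, w in self.route.items():
            if w == wire:
                self.route[signal] = spare
                return spare
        return spare  # no signal used the wire; spare is still consumed
```

The benefit the abstract calls out is operational: the remap happens in runtime software, so the user restarts the run instead of waiting for a model recompile.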