Patents by Inventor Derek E. Williams
Derek E. Williams has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12204832
Abstract: A first plurality of hardware description language (HDL) files describe a hierarchical integrated circuit design utilizing a simplified HDL syntax that omits specification of logical clock connections for at least some entities in the hierarchical integrated circuit design. The hierarchical integrated circuit design as described by the first plurality of HDL files is processed to automatically add logical clock connections for entities in the hierarchical integrated circuit design for which specification of logical clock connections is omitted in the first plurality of HDL files. Based on the processing, a second plurality of HDL files defining the hierarchical integrated circuit design is generated.
Type: Grant
Filed: September 7, 2021
Date of Patent: January 21, 2025
Assignee: International Business Machines Corporation
Inventors: Ali S. El-Zein, Viresh Paruthi, Alvan Wing Ng, Benedikt Geukes, Klaus-Dieter Schubert, Robert Alan Cargnoni, Michael Hemsley Wood, Stephen Gerard Shuma, Wolfgang Roesner, Chung-Lung K. Shum, Edward Armayor McQuade, Derek E. Williams
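As a rough illustration of the clock-insertion flow described in this abstract, here is a minimal Python sketch that rewrites a toy, HDL-like entity declaration to add a clock port when one is missing. The simplified `entity NAME port(...)` syntax and the `clk` signal name are assumptions made for illustration only, not the syntax or procedure the patent specifies.

```python
import re

def add_clock_connections(hdl_text: str, clock_signal: str = "clk") -> str:
    """Insert an explicit clock port into any entity declaration that omits one.

    Assumes a toy, VHDL-like 'entity NAME port(a, b, c);' syntax; real HDL
    elaboration operates on parsed designs and is far more involved.
    """
    def patch(match: re.Match) -> str:
        name, ports = match.group(1), match.group(2)
        port_list = [p.strip() for p in ports.split(",") if p.strip()]
        if clock_signal not in port_list:
            port_list.insert(0, clock_signal)  # add the omitted logical clock
        return f"entity {name} port({', '.join(port_list)});"

    return re.sub(r"entity\s+(\w+)\s+port\(([^)]*)\);", patch, hdl_text)

# The second entity omits the clock and gets it added automatically.
source = "entity top port(clk, d, q); entity latch port(d, q);"
print(add_clock_connections(source))
```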
-
Patent number: 12056050
Abstract: A data processing system includes a master, a central request agent, and a plurality of snoopers communicatively coupled to a system fabric for communicating requests subject to retry. The master issues on the system fabric a multicast request intended for the plurality of snoopers. The central request agent receives the multicast request on the system fabric, assigns the multicast request to a particular state machine among a plurality of state machines in the central request agent, and provides the master a coherence response indicating successful completion of the multicast request. The central request agent repetitively issues on the system fabric a multicast request in association with a machine identifier identifying the particular state machine until a coherence response indicates the multicast request is successfully received by all of the plurality of snoopers.
Type: Grant
Filed: December 21, 2022
Date of Patent: August 6, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Luke Murray, Guy L. Guthrie, Hugh Shen
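A minimal Python sketch of the retry behavior described above, under the assumption that snoopers can transiently reject (retry) a snooped request. The class and method names are illustrative stand-ins, not interfaces from the patent.

```python
import random

class Snooper:
    """A snooper that may transiently retry a request (e.g., while its queues are full)."""
    def snoop(self, tag: int) -> bool:
        return random.random() < 0.7          # accepts ~70% of the time

class CentralRequestAgent:
    def __init__(self, snoopers):
        self.snoopers = snoopers
        self.next_machine_id = 0

    def handle_multicast(self, master) -> None:
        tag = self.next_machine_id            # assign the request to a state machine
        self.next_machine_id += 1
        master.coherence_response = "success"  # master is done; the agent owns the retries
        pending = set(range(len(self.snoopers)))
        while pending:                         # reissue, tagged with the machine id,
            pending = {i for i in pending      # until every snooper has accepted
                       if not self.snoopers[i].snoop(tag)}

class Master:
    coherence_response = None

master = Master()
CentralRequestAgent([Snooper() for _ in range(4)]).handle_multicast(master)
print(master.coherence_response)
```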
-
Patent number: 12050798
Abstract: A destination host includes a processor core, a system fabric, a memory system, and a link controller communicatively coupled to the system fabric and configured to be communicatively coupled, via a communication link, to a source host with which the destination host is non-coherent. The destination host migrates, via the communication link, a state of a logical partition from the source host to the destination host and page table entries for translating addresses of a dataset of the logical partition from the source host to the destination host. After migrating the state and page table entries, the destination host initiates execution of the logical partition on the processor core while at least a portion of the dataset of the logical partition resides in the memory system of the source host and migrates, via the communication link, the dataset of the logical partition to the memory system of the destination host.
Type: Grant
Filed: July 29, 2021
Date of Patent: July 30, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, William J. Starke, Jeffrey A. Stuecheli
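A simplified Python sketch of the migration ordering described above: partition state and page-table entries move first, execution starts on the destination, and the dataset follows while the source still holds it. The `Host` objects and dictionaries are stand-ins for the hardware and hypervisor mechanisms in the patent, assumed purely for illustration.

```python
class Host:
    def __init__(self, name):
        self.name = name
        self.memory = {}          # page number -> page contents

def migrate_partition(source, destination, lpar_state, page_table):
    destination.lpar_state = dict(lpar_state)    # 1. migrate the LPAR's state
    destination.page_table = dict(page_table)    # 2. migrate its page table entries
    print(f"LPAR now executing on {destination.name}")   # 3. start execution early
    for page, data in list(source.memory.items()):       # 4. dataset follows over the link
        destination.memory[page] = data                  #    while the LPAR already runs
        del source.memory[page]

src, dst = Host("source"), Host("destination")
src.memory = {0: b"text", 1: b"heap"}
migrate_partition(src, dst, {"regs": [0] * 32}, {0x1000: 0, 0x2000: 1})
print(sorted(dst.memory), src.memory)            # [0, 1] {}  -- dataset fully migrated
```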
-
Publication number: 20240220418
Abstract: A data processing system includes a master and multiple snoopers communicatively coupled to a system fabric for communicating requests, where the master and snoopers are distributed among a plurality of nodes. The data processing system maintains logical partition (LPAR) information for each of a plurality of LPARs, wherein the LPAR information indicates, for each of the plurality of LPARs, which of the plurality of nodes includes at least one snooper among the plurality of snoopers that holds an address translation entry for that LPAR. Based on the LPAR information, the master selects a broadcast scope of a multicast request on the system fabric, where the broadcast scope includes fewer than all of the plurality of nodes. The master repetitively issues, on the system fabric, the multicast request utilizing the selected broadcast scope until the multicast request is successfully received by all of the plurality of snoopers within the broadcast scope.
Type: Application
Filed: December 30, 2022
Publication date: July 4, 2024
Inventors: Derek E. WILLIAMS, Florian Auernhammer
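A small Python sketch of the scope-selection idea in this abstract: the LPAR information narrows the multicast to the nodes that may actually hold a translation, and the request is reissued within that scope until accepted. The node names and the always-accepting snoop callback are assumptions for brevity.

```python
def select_broadcast_scope(lpar_info, lpar_id, all_nodes):
    """Return only the nodes that may hold an address translation entry for the
    LPAR, so the multicast need not reach every node in the system."""
    return lpar_info.get(lpar_id, set(all_nodes))

def issue_multicast(scope, snoop):
    """Reissue the request within the chosen scope until every node in the
    scope has accepted it (snoopers may retry transiently)."""
    pending = set(scope)
    while pending:
        pending = {node for node in pending if not snoop(node)}

lpar_info = {7: {"node0", "node2"}}               # LPAR 7's translations live on two nodes
scope = select_broadcast_scope(lpar_info, 7, {"node0", "node1", "node2", "node3"})
issue_multicast(scope, snoop=lambda node: True)   # always-accepting snoop for brevity
print(sorted(scope))                              # ['node0', 'node2']
```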
-
Publication number: 20240211398
Abstract: A data processing system includes a master, a central request agent, and a plurality of snoopers communicatively coupled to a system fabric for communicating requests subject to retry. The master issues on the system fabric a multicast request intended for the plurality of snoopers. The central request agent receives the multicast request on the system fabric, assigns the multicast request to a particular state machine among a plurality of state machines in the central request agent, and provides the master a coherence response indicating successful completion of the multicast request. The central request agent repetitively issues on the system fabric a multicast request in association with a machine identifier identifying the particular state machine until a coherence response indicates the multicast request is successfully received by all of the plurality of snoopers.
Type: Application
Filed: December 21, 2022
Publication date: June 27, 2024
Inventors: Derek E. WILLIAMS, Luke MURRAY, Guy L. GUTHRIE, Hugh SHEN
-
Patent number: 12001343
Abstract: A data processing system includes a plurality of processing nodes communicatively coupled to a system fabric. Each of the processing nodes includes a respective plurality of processor cores. Logical partition (LPAR) information for each of a plurality of LPARs is maintained in a register set in each of the processor cores, where the LPAR information indicates, for each of the LPARs, which of the processing nodes may hold an address translation entry for each LPAR. Based on the LPAR information, a first processor core selects a broadcast scope for a multicast request on the system fabric that includes fewer than all of the plurality of processing nodes and issues the multicast request with the selected broadcast scope. The first processor core updates the LPAR information in the register set of a second processor core in another of the plurality of processing nodes via an inter-processor interrupt.
Type: Grant
Filed: December 30, 2022
Date of Patent: June 4, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Florian Auernhammer
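This record differs from the related publication above mainly in keeping the LPAR information in per-core registers and refreshing it through inter-processor interrupts. A hedged Python sketch of that update path, with a toy `Core` class and IPI handler that are assumptions for illustration:

```python
class Core:
    def __init__(self, node_id):
        self.node_id = node_id
        self.lpar_registers = {}     # LPAR id -> set of node ids that may hold translations

    def take_ipi(self, lpar_id, nodes):
        """Inter-processor interrupt handler: refresh this core's LPAR register set."""
        self.lpar_registers[lpar_id] = set(nodes)

def update_lpar_info(all_cores, lpar_id, nodes):
    """A core that changes where an LPAR's translations may live pushes the new
    information to the other cores via IPIs, so later multicasts can use a
    broadcast scope narrower than the full system."""
    for core in all_cores:
        core.take_ipi(lpar_id, nodes)

cores = [Core(node_id=i // 2) for i in range(4)]   # two cores per node
update_lpar_info(cores, lpar_id=3, nodes={0})
print(cores[2].lpar_registers[3])                  # {0}: fewer than all nodes
```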
-
Patent number: 11915045
Abstract: In at least some embodiments, a store-type operation is received and buffered within a store queue entry of a store queue associated with a cache memory of a processor core capable of executing multiple simultaneous hardware threads. A thread identifier indicating a particular hardware thread among the multiple hardware threads that issued the store-type operation is recorded. An indication of whether the store queue entry is a most recently allocated store queue entry for buffering store-type operations of the hardware thread is also maintained. While the indication indicates the store queue entry is a most recently allocated store queue entry for buffering store-type operations of the particular hardware thread, the store queue extends a duration of a store gathering window applicable to the store queue entry. For example, the duration may be extended by decreasing a rate at which the store gathering window applicable to the store queue entry ends.
Type: Grant
Filed: June 18, 2021
Date of Patent: February 27, 2024
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen
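A minimal Python sketch of the gathering-window behavior from the abstract's example: while a store queue entry is still the most recently allocated one for its thread, the window ages more slowly. The window size, aging rates, and class shape are illustrative assumptions, not values from the patent.

```python
class StoreQueueEntry:
    def __init__(self, thread_id: int, window: int = 8):
        self.thread_id = thread_id
        self.window = window          # cycles remaining in the gathering window
        self.most_recent = True       # is this the newest entry for its thread?

    def tick(self) -> None:
        """Age the gathering window by one cycle.  While the entry is still the
        most recently allocated one for its thread, age it at half rate so that
        later stores from the same thread have more time to gather into it."""
        self.window -= 1 if self.most_recent else 2
        self.window = max(self.window, 0)

entry = StoreQueueEntry(thread_id=0)
open_cycles = 0
while entry.window > 0:
    entry.tick()
    open_cycles += 1
    if open_cycles == 3:
        entry.most_recent = False     # a newer entry for this thread was allocated
print(open_cycles)                    # the window stayed open longer while most recent
```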
-
Patent number: 11748267
Abstract: A plurality of entries including address translation information are buffered in a data structure in a processor core. At least first and second translation entry invalidation requests specifying different first and second addresses are checked against all of the entries in the data structure. The checking includes accessing and checking at least a first entry in the data structure for an address match with the first address but not the second address, thereafter concurrently checking at least a second entry for an address match with both the first and second addresses, and thereafter completing checking for the first address and accessing and checking the first entry for an address match with the second address but not the first address. The processor core invalidates any entry in the data structure for which the checking detects an address match.
Type: Grant
Filed: August 4, 2022
Date of Patent: September 5, 2023
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Luke Murray, Hugh Shen
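The checking order in this abstract amounts to overlapping two invalidation requests against the same structure in a staggered fashion. A hedged Python sketch of that staggering, with a toy entry type and three-pass structure assumed only to make the order concrete:

```python
from collections import namedtuple

Entry = namedtuple("Entry", "addr")

def pipelined_invalidate(entries, first_addr, second_addr):
    """Staggered check of two invalidation requests: the second request starts
    before the first finishes, so most entries see both addresses in one pass."""
    matched = set()
    # Pass 1: only the first request is in flight for the first entry.
    if entries[0].addr == first_addr:
        matched.add(0)
    # Pass 2: both requests in flight; remaining entries see both addresses.
    for i, entry in enumerate(entries[1:], start=1):
        if entry.addr in (first_addr, second_addr):
            matched.add(i)
    # Pass 3: first request has finished; revisit entry 0 for the second address.
    if entries[0].addr == second_addr:
        matched.add(0)
    return matched   # indices of entries to invalidate

print(pipelined_invalidate([Entry(0x10), Entry(0x20), Entry(0x30)], 0x20, 0x10))
```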
-
Patent number: 11693788
Abstract: An arbiter gathers translation invalidation requests assigned to state machines of a lower-level cache into a set for joint handling in a processor core. The gathering includes selection of one of the set of gathered translation invalidation requests as an end-of-sequence (EOS) request. The arbiter issues to the processor core a sequence of the gathered translation invalidation requests terminating with the EOS request. Based on receipt of each of the gathered requests, the processor core invalidates any translation entries providing translation for the addresses specified by the translation invalidation requests and marks memory-referent requests dependent on the invalidated translation entries. Based on receipt of the EOS request and in response to all of the marked memory-referent requests draining from the processor core, the processor core issues a completion request to the lower-level cache indicating completion of servicing by the processor core of the set of gathered translation invalidation requests.
Type: Grant
Filed: June 7, 2022
Date of Patent: July 4, 2023
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Luke Murray, Hugh Shen
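A compact Python sketch of the core-side handling of a gathered sequence ending in an EOS request: invalidate and mark per request, drain and reply once. The `CoreStub` interface and request tuple are hypothetical, standing in for the hardware interactions in the patent.

```python
from collections import namedtuple

Req = namedtuple("Req", "address")

class CoreStub:
    """Placeholder for the processor core's translation and load/store machinery."""
    def invalidate_translations(self, addr): pass
    def mark_dependent_memory_ops(self, addr): pass
    def drain_marked_ops(self): pass

def service_gathered_invalidations(core, gathered_requests):
    """Handle a sequence of gathered invalidation requests whose last element is
    the end-of-sequence (EOS) request: invalidate and mark as each one arrives,
    then drain the marked operations and send a single completion for the set."""
    for i, request in enumerate(gathered_requests):
        is_eos = (i == len(gathered_requests) - 1)
        core.invalidate_translations(request.address)
        core.mark_dependent_memory_ops(request.address)
        if is_eos:
            core.drain_marked_ops()    # wait for the marked ops to leave the core
            return "completion"        # one completion request for the whole set

print(service_gathered_invalidations(CoreStub(), [Req(0x100), Req(0x200), Req(0x300)]))
```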
-
Patent number: 11693776
Abstract: A processing unit includes a processor core and an associated cache memory. The cache memory establishes a reservation of a hardware thread of the processor core for a store target address and services a store-conditional request of the processor core by conditionally updating the shared memory with store data based on whether the hardware thread has a reservation for the store target address. The cache memory receives a hint associated with the store-conditional request indicating an intent of the store-conditional request. The cache memory protects the store target address against access by any conflicting memory access request during a protection window extension following servicing of the store-conditional request. The cache memory establishes a first duration for the protection window extension based on the hint having a first value and establishes a different second duration for the protection window extension based on the hint having a different second value.
Type: Grant
Filed: June 18, 2021
Date of Patent: July 4, 2023
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Guy L. Guthrie, Hugh Shen, Jeffrey A. Stuecheli
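A hedged Python sketch of the hint-driven protection window: the store-conditional succeeds only if the reservation survives, and the hint value picks how long the line stays protected afterwards. The window lengths, hint string, and `Cache` class are illustrative assumptions.

```python
SHORT_WINDOW, LONG_WINDOW = 4, 64     # cycles; purely illustrative values

def protection_window_cycles(hint: str) -> int:
    """Choose how long to protect the store target after a successful
    store-conditional, based on the hint carried with the request."""
    return LONG_WINDOW if hint == "more-updates-coming" else SHORT_WINDOW

class Cache:
    def __init__(self):
        self.reservations = set()     # (thread, address) pairs
        self.protected = {}           # address -> cycles of protection remaining
        self.data = {}

    def store_conditional(self, thread, address, value, hint):
        if (thread, address) not in self.reservations:
            return False                              # reservation lost: SC fails
        self.data[address] = value                    # conditional update succeeds
        self.protected[address] = protection_window_cycles(hint)
        return True

cache = Cache()
cache.reservations.add((0, 0x80))
print(cache.store_conditional(0, 0x80, 42, hint="more-updates-coming"),
      cache.protected[0x80])
```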
-
Patent number: 11663381
Abstract: A processor receives, as input, a first hardware description language (HDL) file defining an entity of a modular circuit design. The first HDL file instantiates, by a storage element declaration in a hardware description language, a storage element within the entity. The first HDL file omits a port map for the storage element. Based on the first HDL file, the processor automatically fully elaborates a port map for the storage element. The processor stores, in data storage, a derived second HDL file defining the entity and including the port map.
Type: Grant
Filed: September 7, 2021
Date of Patent: May 30, 2023
Assignee: International Business Machines Corporation
Inventors: Stephen Gerard Shuma, Ali S. El-Zein, Wolfgang Roesner, Viresh Paruthi, Benedikt Geukes, Klaus-Dieter Schubert, Birgit Schubert, Stephen John Barnfield, Derek E. Williams
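To make the idea of elaborating an omitted port map concrete, here is a minimal Python sketch that derives a port map for a storage element from a toy declaration. The dictionary-based declaration format and the D/Q/CLK port names are assumptions for illustration; real elaboration works on parsed HDL and wires far more than this.

```python
def elaborate_port_map(storage_decl: dict) -> dict:
    """Derive a port map for a storage element declared without one, using the
    signals the entity already names in the (toy) declaration."""
    return {
        "D":   storage_decl["data_in"],
        "Q":   storage_decl["data_out"],
        "CLK": storage_decl.get("clock", "clk"),   # default clock if none given
    }

latch = {"name": "state_reg", "data_in": "next_state", "data_out": "state"}
print(elaborate_port_map(latch))
```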
-
Patent number: 11635968
Abstract: The present disclosure may include a processor that uses idle caches as a backing store for boot code. The processor designates a boot core and an active cache from a plurality of cores and a plurality of caches. The processor configures remaining caches from the plurality of caches to act as a backing store memory. The processor modifies the active cache to convert cast outs to a system memory into lateral cast outs to the backing store memory. The processor copies a boot image to the backing store memory, and the boot core executes the boot image.
Type: Grant
Filed: September 15, 2021
Date of Patent: April 25, 2023
Assignee: International Business Machines Corporation
Inventors: Bernard C. Drerup, Guy L. Guthrie, Joseph John McGill, IV, Alexander Michael Taft, Derek E. Williams
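A hedged Python sketch of the configuration step described above: one cache stays active for the boot core, the others become a backing store, and the active cache's cast-outs are redirected laterally into that backing store instead of to system memory. The `Cache` class and the modulo placement of lines are stand-ins invented for illustration.

```python
class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}              # line index -> data

def configure_boot(caches, boot_image):
    """Designate the first cache as active, turn the rest into a backing store,
    copy the boot image into the backing store, and redirect cast-outs laterally."""
    active, backing = caches[0], caches[1:]
    for offset, chunk in enumerate(boot_image):          # copy image into backing store
        backing[offset % len(backing)].lines[offset] = chunk

    def lateral_cast_out(line, data):                    # replaces cast-out to system memory
        backing[line % len(backing)].lines[line] = data

    active.cast_out = lateral_cast_out
    return active, backing

active, backing = configure_boot([Cache(f"L3_{i}") for i in range(4)], [b"boot0", b"boot1"])
active.cast_out(5, b"evicted-line")
print([c.lines for c in backing])
```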
-
Patent number: 11615024
Abstract: A multiprocessor data processing system includes multiple vertical cache hierarchies supporting a plurality of processor cores, a system memory, and an interconnect fabric coupled to the system memory and the multiple vertical cache hierarchies. Based on a request of a requesting processor core among the plurality of processor cores, a master in the multiprocessor data processing system issues, via the interconnect fabric, a read-type memory access request. The master receives via the interconnect fabric at least one beat of conditional data issued speculatively on the interconnect fabric by a controller of the system memory prior to receipt by the controller of a systemwide coherence response for the read-type memory access request. The master forwards the at least one beat of conditional data to the requesting processor core.
Type: Grant
Filed: August 4, 2021
Date of Patent: March 28, 2023
Assignee: International Business Machines Corporation
Inventors: Derek E. Williams, Michael S. Siegel, Guy L. Guthrie, Bernard C. Drerup
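A minimal Python sketch of the conditional-data flow: the memory controller supplies data beats speculatively, the master forwards them to the core, and the data is discarded if the eventual coherence response says memory was not the right source. The class shapes, beat format, and response string are illustrative assumptions.

```python
class MemoryController:
    def speculative_beats(self, address):
        """Send data beats before the systemwide (combined) response is known."""
        return [f"beat{i}@{hex(address)}" for i in range(4)]

class Master:
    def __init__(self):
        self.core_buffer = []

    def forward_to_core(self, beat):
        self.core_buffer.append(beat)        # the core may start using the data early

    def combined_response(self, address):
        return "memory-sourced"              # stand-in for the real coherence result

def read_with_conditional_data(master, memory_controller, address):
    for beat in memory_controller.speculative_beats(address):   # conditional data
        master.forward_to_core(beat)
    if master.combined_response(address) != "memory-sourced":
        master.core_buffer.clear()           # speculation was wrong: discard the beats

master = Master()
read_with_conditional_data(master, MemoryController(), 0x1000)
print(len(master.core_buffer))               # 4 beats delivered ahead of the response
```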
-
Publication number: 20230078861
Abstract: The present disclosure may include a processor that uses idle caches as a backing store for boot code. The processor designates a boot core and an active cache from a plurality of cores and a plurality of caches. The processor configures remaining caches from the plurality of caches to act as a backing store memory. The processor modifies the active cache to convert cast outs to a system memory into lateral cast outs to the backing store memory. The processor copies a boot image to the backing store memory, and the boot core executes the boot image.
Type: Application
Filed: September 15, 2021
Publication date: March 16, 2023
Inventors: Bernard C. Drerup, Guy L. Guthrie, Joseph John McGill, IV, Alexander Michael Taft, Derek E. Williams
-
Publication number: 20230075770
Abstract: A processor receives, as input, a first hardware description language (HDL) file defining an entity of a modular circuit design. The first HDL file instantiates, by a storage element declaration in a hardware description language, a storage element within the entity. The first HDL file omits a port map for the storage element. Based on the first HDL file, the processor automatically fully elaborates a port map for the storage element. The processor stores, in data storage, a derived second HDL file defining the entity and including the port map.
Type: Application
Filed: September 7, 2021
Publication date: March 9, 2023
Inventors: Stephen Gerard Shuma, Ali S. El-Zein, Wolfgang Roesner, Viresh Paruthi, Benedikt Geukes, Klaus-Dieter Schubert, Birgit Schubert, Stephen John Barnfield, Derek E. Williams
-
Publication number: 20230070516
Abstract: A first plurality of hardware description language (HDL) files describe a hierarchical integrated circuit design utilizing a simplified HDL syntax that omits specification of logical clock connections for at least some entities in the hierarchical integrated circuit design. The hierarchical integrated circuit design as described by the first plurality of HDL files is processed to automatically add logical clock connections for entities in the hierarchical integrated circuit design for which specification of logical clock connections is omitted in the first plurality of HDL files. Based on the processing, a second plurality of HDL files defining the hierarchical integrated circuit design is generated.
Type: Application
Filed: September 7, 2021
Publication date: March 9, 2023
Inventors: Ali S. El-Zein, Viresh Paruthi, Alvan Wing Ng, Benedikt Geukes, Klaus-Dieter Schubert, Robert Alan Cargnoni, Michael Hemsley Wood, Stephen Gerard Shuma, Wolfgang Roesner, Chung-Lung K. Shum, Edward Armayor McQuade, Derek E. Williams
-
Publication number: 20230072735
Abstract: A processor receives an expression of design refinement intent with regard to an entity forming a part of a modular circuit design. The entity is defined by a hardware description language (HDL) file, and the expression of design refinement intent identifies an intent region within an implementation of the entity and specifies replacement logic for the region. Based on the expression of design refinement intent, the processor automatically modifies the HDL file by replacing logic within the intent region with the replacement logic. The processor then performs logical synthesis to generate a gate list representation of the modular circuit design as modified.
Type: Application
Filed: September 7, 2021
Publication date: March 9, 2023
Inventors: Ali S. El-Zein, Wolfgang Roesner, Stephen Gerard Shuma, Robert Lowell Kanzelman, Michael Hemsley Wood, Chung-Lung K. Shum, Gabor Bobok, Robert James Shadowen, Viresh Paruthi, Derek E. Williams
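As a rough illustration of replacing logic inside a named intent region, here is a Python sketch that swaps out the text between region-marker comments in an HDL snippet. The `-- region NAME` / `-- end NAME` comment convention is an assumption made for this example, not a mechanism described in the publication.

```python
import re

def apply_refinement(hdl_text: str, region_name: str, replacement: str) -> str:
    """Replace the logic inside a named intent region with the replacement logic.
    Assumes the region is delimited by '-- region NAME' / '-- end NAME' comments."""
    pattern = re.compile(
        rf"(-- region {region_name}\n).*?(\n-- end {region_name})", re.DOTALL)
    return pattern.sub(rf"\g<1>{replacement}\g<2>", hdl_text)

entity = "q <= d;\n-- region bypass\nq2 <= d;\n-- end bypass\n"
print(apply_refinement(entity, "bypass", "q2 <= d when en = '1' else q2;"))
```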
-
Publication number: 20230063992
Abstract: A coherent data processing system includes a system fabric communicatively coupling a plurality of coherence participants and fabric control logic. The fabric control logic quantifies congestion on the system fabric based on coherence messages associated with commands issued on the system fabric. Based on the congestion on the system fabric, the fabric control logic determines a rate of request issuance applicable to a set of coherence participants among the plurality of coherence participants. The fabric control logic issues at least one rate command to set a rate of request issuance to the system fabric of the set of coherence participants.
Type: Application
Filed: August 18, 2021
Publication date: March 2, 2023
Inventors: HUGH SHEN, GUY L. GUTHRIE, JEFFREY A. STUECHELI, LUKE MURRAY, ALEXANDER MICHAEL TAFT, BERNARD C. DRERUP, DEREK E. WILLIAMS
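A hedged Python sketch of the throttling loop this abstract describes: congestion is quantified from coherence responses (here, the fraction that were retries, which is an assumed metric), a rate is chosen, and a rate command is applied to the whole set of participants. The thresholds and rate values are illustrative only.

```python
class Participant:
    issue_rate = 16            # requests allowed per window; full rate by default

def choose_issue_rate(retry_fraction: float) -> int:
    """Map observed congestion to a request-issue rate.  Thresholds are illustrative."""
    if retry_fraction > 0.50:
        return 1               # heavy congestion: throttle hard
    if retry_fraction > 0.20:
        return 4
    return 16                  # fabric is quiet: allow full rate

def observe_and_throttle(participants, responses):
    retries = sum(r == "retry" for r in responses) / max(len(responses), 1)
    rate = choose_issue_rate(retries)
    for p in participants:     # the rate command applies to the whole set
        p.issue_rate = rate

group = [Participant() for _ in range(3)]
observe_and_throttle(group, ["ack", "retry", "ack", "ack", "retry"])
print(group[0].issue_rate)     # 4: moderate congestion observed
```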
-
Publication number: 20230040617
Abstract: A data processing system includes a plurality of snoopers, a processing unit including a master, and a system fabric communicatively coupling the master and the plurality of snoopers. The master sets a retry operating mode for an interconnect operation in one of alternative first and second operating modes. The first operating mode is associated with a first type of snooper, and the second operating mode is associated with a different second type of snooper. The master issues a memory access request of the interconnect operation on the system fabric of the data processing system. Based on receipt of a combined response representing a systemwide coherence response to the request, the master delays an interval having a duration dependent on the retry operating mode and thereafter reissues the memory access request on the system fabric.
Type: Application
Filed: August 4, 2021
Publication date: February 9, 2023
Inventors: DEREK E. WILLIAMS, ALEXANDER MICHAEL TAFT, GUY L. GUTHRIE, BERNARD C. DRERUP
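A small Python sketch of mode-dependent retry back-off: the interval inserted before a reissue depends on which retry operating mode the master selected. The mode names, cycle counts, and attempt limit are assumptions for illustration; the publication does not specify these values.

```python
import random

MODE_DELAY_CYCLES = {          # illustrative back-off intervals per retry operating mode
    "first-mode":  2,          # e.g., snoopers that free up quickly: retry soon
    "second-mode": 32,         # e.g., snoopers that stay busy longer: back off more
}

def issue_with_mode_delay(issue, mode: str, max_attempts: int = 8):
    """Issue a memory access request; on each retry combined response, wait an
    interval whose duration depends on the retry operating mode, then reissue."""
    waited = 0
    for _ in range(max_attempts):
        if issue() == "success":
            return True, waited
        waited += MODE_DELAY_CYCLES[mode]   # delay inserted before the reissue
    return False, waited

print(issue_with_mode_delay(lambda: random.choice(["retry", "success"]), "second-mode"))
```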
-
Publication number: 20230042778
Abstract: A multiprocessor data processing system includes multiple vertical cache hierarchies supporting a plurality of processor cores, a system memory, and an interconnect fabric coupled to the system memory and the multiple vertical cache hierarchies. Based on a request of a requesting processor core among the plurality of processor cores, a master in the multiprocessor data processing system issues, via the interconnect fabric, a read-type memory access request. The master receives via the interconnect fabric at least one beat of conditional data issued speculatively on the interconnect fabric by a controller of the system memory prior to receipt by the controller of a systemwide coherence response for the read-type memory access request. The master forwards the at least one beat of conditional data to the requesting processor core.
Type: Application
Filed: August 4, 2021
Publication date: February 9, 2023
Inventors: DEREK E. WILLIAMS, MICHAEL S. SIEGEL, GUY L. GUTHRIE, BERNARD C. DRERUP