Patents by Inventor Brian W. Thompto

Brian W. Thompto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10379857
    Abstract: A technique for operating a processor includes allocating an entry in a prefetch filter queue (PFQ) for a cache line address (CLA) in response to the CLA missing in an instruction cache. In response to the CLA subsequently hitting in the instruction cache, an associated prefetch value for the entry in the PFQ is updated. In response to the entry being aged-out of the PFQ, an entry in a backing array for the CLA and the associated prefetch value is allocated. In response to subsequently determining that prefetching is required for the CLA, the backing array is accessed to determine the associated prefetch value for the CLA. A cache line at the CLA and a number of sequential cache lines specified by the associated prefetch value in the backing array are then prefetched into the instruction cache.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Richard J. Eickemeyer, Sheldon B. Levenstein, David S. Levitan, Mauricio J. Serrano, Brian W. Thompto
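
As an illustration of the entry above, the following Python sketch models a prefetch filter queue (PFQ) and backing array at a behavioral level. All class, method, and field names are invented, and the update rule for the prefetch value is an assumption; the patent abstract does not specify these details.

    from collections import OrderedDict

    LINE = 128  # assumed instruction cache line size in bytes

    class PrefetchFilter:
        def __init__(self, pfq_entries=8):
            self.pfq = OrderedDict()   # cache line address -> prefetch value
            self.backing = {}          # values preserved for aged-out entries
            self.pfq_entries = pfq_entries

        def icache_miss(self, cla):
            # Allocate a PFQ entry for the missing cache line address (CLA).
            if cla not in self.pfq:
                if len(self.pfq) >= self.pfq_entries:
                    old_cla, value = self.pfq.popitem(last=False)  # age out the oldest entry
                    self.backing[old_cla] = value                  # keep its prefetch value
                self.pfq[cla] = 0

        def icache_hit(self, addr):
            # Assumed update rule: hits to lines shortly after a tracked CLA raise
            # that entry's prefetch value to the observed sequential distance.
            for cla, value in self.pfq.items():
                distance = (addr - cla) // LINE
                if 0 < distance <= 7:
                    self.pfq[cla] = max(value, distance)

        def prefetch(self, cla):
            # When prefetching is required for a CLA, consult the backing array
            # (or a live PFQ entry) and fetch that many sequential lines after it.
            depth = self.backing.get(cla, self.pfq.get(cla, 0))
            return [cla + i * LINE for i in range(depth + 1)]
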
  • Publication number: 20190213133
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices. Operation of such a multi-slice processor includes: determining, by a hypervisor, that consumption of memory controller resources, by a plurality of processing threads, is above a threshold quantity, wherein respective processing threads of the plurality of processing threads control respective prefetch settings; and responsive to determining that the consumption of the memory controller resources is above the threshold quantity, modifying individual memory controller usage of at least one of the plurality of processing threads such that the consumption of the memory controller resources is reduced below the threshold quantity.
    Type: Application
    Filed: March 20, 2019
    Publication date: July 11, 2019
    Inventors: Bradly G. Frey, George W. Rohrbaugh, III, Brian W. Thompto
  • Patent number: 10331566
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices. Operation of such a multi-slice processor includes: determining, by a hypervisor, that consumption of memory controller resources, by a plurality of processing threads, is above a threshold quantity, wherein respective processing threads of the plurality of processing threads control respective prefetch settings; and responsive to determining that the consumption of the memory controller resources is above the threshold quantity, modifying individual memory controller usage of at least one of the plurality of processing threads such that the consumption of the memory controller resources is reduced below the threshold quantity.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: June 25, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, George W. Rohrbaugh, III, Brian W. Thompto
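
The two entries above describe the same throttling idea. The Python sketch below is a loose behavioral model of it: when aggregate memory controller consumption exceeds a threshold, the heaviest consumer's prefetch setting is weakened until consumption falls back under the threshold. The data layout, the 10% estimate of prefetch-driven traffic, and the one-step throttle are assumptions for illustration only.

    def rebalance(threads, threshold):
        """threads: list of dicts with 'usage' (bandwidth share) and 'prefetch_depth'."""
        total = sum(t["usage"] for t in threads)
        while total > threshold:
            # Throttle the heaviest consumer that still has prefetching enabled.
            heavy = max((t for t in threads if t["prefetch_depth"] > 0),
                        key=lambda t: t["usage"], default=None)
            if heavy is None:
                break                        # nothing left to throttle
            heavy["prefetch_depth"] -= 1     # weaken its prefetch setting
            saved = heavy["usage"] * 0.10    # assume ~10% of its traffic was prefetch-driven
            heavy["usage"] -= saved
            total -= saved
        return threads

    threads = [{"usage": 0.5, "prefetch_depth": 4}, {"usage": 0.4, "prefetch_depth": 2}]
    rebalance(threads, threshold=0.8)        # trims usage until it drops below 0.8
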
  • Publication number: 20190188133
    Abstract: Techniques are disclosed for performing issue queue snooping for an asynchronous flush and restore of a history buffer (HB) in a processing unit. One technique includes identifying an entry of the HB to restore to a register file in the processing unit. A restore ITAG of the HB entry is sent to the register file via a first restore bus, and restore data of the HB entry and the restore ITAG are sent to the register file via a second restore bus. After the restore ITAG and restore data are sent, an instruction is dispatched before the register file obtains the restore data. After it is determined that the restore data is still available via the second restore bus, a snooping operation is performed to obtain the restore data from the second restore bus for the dispatched instruction.
    Type: Application
    Filed: December 18, 2017
    Publication date: June 20, 2019
    Inventors: David R. Terry, Dung Q. Nguyen, Brian W. Thompto, Joshua W. Bowman, Steven J. Battle, Sundeep Chadha, Brian D. Barrick, Albert J. Van Norstrand, Jr.
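
A minimal Python sketch of the snooping step described above: an instruction dispatched before the register file has been written can pick up its source value directly from the second restore bus if the matching ITAG and data are still being driven. The bus and register-file structures here are invented stand-ins, not the patented hardware.

    class RestoreBus2:
        """Second restore bus: carries both the restore ITAG and the restore data."""
        def __init__(self):
            self.itag = None
            self.data = None
        def drive(self, itag, data):
            self.itag, self.data = itag, data
        def clear(self):
            self.itag = self.data = None

    def dispatch(src_itag, bus, register_file):
        # If the needed value is still on the second restore bus, snoop it there;
        # otherwise read the register file (which, in the real design, may not
        # have been written yet).
        if bus.itag == src_itag and bus.data is not None:
            return bus.data                  # obtained by snooping the restore bus
        return register_file.get(src_itag)

    bus = RestoreBus2()
    bus.drive(itag=7, data=0xDEADBEEF)
    print(dispatch(7, bus, register_file={}))   # value snooped off the second bus
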
  • Publication number: 20190187995
    Abstract: Techniques are disclosed for performing a flush and restore of a history buffer (HB) in a processing unit. One technique includes identifying one or more entries of the HB to restore to a register file in the processing unit. For each of the one or more HB entries, a determination is made whether to send the HB entry to the register file via a first restore bus or via a second restore bus, different from the first restore bus, based on contents of the HB entry. Each of the one or more HB entries is then sent to the register file via one of the first restore bus or the second restore bus, based on the determination.
    Type: Application
    Filed: December 18, 2017
    Publication date: June 20, 2019
    Inventors: David R. Terry, Dung Q. Nguyen, Brian W. Thompto, Joshua W. Bowman, Steven J. Battle, Brian D. Barrick, Sundeep Chadha, Albert J. Van Norstrand, Jr.
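
The entry above leaves the bus-selection rule to the claims; the short Python sketch below assumes, purely for illustration, that entries carrying restore data go to the second (data-carrying) bus while ITAG-only entries go to the first.

    def select_restore_bus(hb_entry):
        # hb_entry: dict with 'itag', 'has_data', and 'data' fields (invented layout).
        if hb_entry.get("has_data"):
            return "restore_bus_2"   # carries the ITAG together with the restore data
        return "restore_bus_1"       # ITAG-only restore path

    entries = [{"itag": 3, "has_data": True,  "data": 42},
               {"itag": 5, "has_data": False, "data": None}]
    for entry in entries:
        print(entry["itag"], "->", select_restore_bus(entry))
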
  • Publication number: 20190163486
    Abstract: Aspects include monitoring a number of instructions of a first type dispatched to a first shared port of an issue queue of a processor and determining whether the number of instructions of the first type dispatched to the first shared port exceeds a port selection threshold. An instruction of a third type is dispatched to a second shared port of the issue queue associated with a plurality of instructions of a second type based on determining that the number of instructions of the first type dispatched to the first shared port exceeds the port selection threshold. The instruction of the third type is dispatched to the first shared port of the issue queue associated with a plurality of instructions of the first type based on determining that the number of instructions of the first type dispatched to the first shared port does not exceed the port selection threshold.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: Balaram Sinharoy, Joel A. Silberman, Brian W. Thompto
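
A rough Python sketch of the port-selection rule above. The counter, the threshold value, and the port names are invented; only the decision shape (type-3 instructions avoid the first shared port once too many type-1 instructions have been dispatched to it) follows the abstract.

    PORT_SELECTION_THRESHOLD = 16    # illustrative value

    class PortSelector:
        def __init__(self):
            self.type1_on_port1 = 0  # type-1 instructions dispatched to the first shared port

        def dispatch(self, instr_type):
            if instr_type == 1:
                self.type1_on_port1 += 1
                return "shared_port_1"
            if instr_type == 2:
                return "shared_port_2"
            # Type 3: move to the second shared port once the first looks congested.
            if self.type1_on_port1 > PORT_SELECTION_THRESHOLD:
                return "shared_port_2"
            return "shared_port_1"

    selector = PortSelector()
    print([selector.dispatch(t) for t in (1, 1, 3)])   # type 3 stays on port 1 below threshold
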
  • Patent number: 10235181
    Abstract: An out-of-order (OOO) processor includes ready logic that provides a signal indicating an instruction is ready when all operands for the instruction are ready, or when all operands are either ready or are marked back-to-back to a current instruction. By marking a second instruction that consumes an operand as ready when it is back-to-back with a first instruction that produces the operand, but the first instruction has not yet produced the operand, latency due to missed cycles in executing back-to-back instructions is minimized.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: March 19, 2019
    Assignee: International Business Machines Corporation
    Inventor: Brian W. Thompto
  • Patent number: 10223126
    Abstract: An out-of-order (OOO) processor includes ready logic that provides a signal indicating an instruction is ready when all operands for the instruction are ready, or when all operands are either ready or are marked back-to-back to a current instruction. By marking a second instruction that consumes an operand as ready when it is back-to-back with a first instruction that produces the operand, but the first instruction has not yet produced the operand, latency due to missed cycles in executing back-to-back instructions is minimized.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventor: Brian W. Thompto
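
Both entries above cover the same back-to-back ready condition. The Python fragment below states that condition at a behavioral level, with invented field names: an operand counts toward readiness if its value is ready or if it is marked back-to-back with a producer issuing in the current cycle, so the consumer can issue without losing a cycle.

    def is_ready(operands):
        """operands: list of dicts with 'ready' and 'back_to_back' flags (invented)."""
        return all(op["ready"] or op["back_to_back"] for op in operands)

    # Consumer whose second source will be forwarded by a producer issuing this cycle:
    consumer = [{"ready": True,  "back_to_back": False},
                {"ready": False, "back_to_back": True}]
    print(is_ready(consumer))   # True: the consumer issues back-to-back with its producer
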
  • Patent number: 10223257
    Abstract: An apparatus for garbage collection is disclosed herein. The apparatus includes a processor that includes a load-monitored region register. A memory stores program code which, when executed on the processor, performs an operation for garbage collection. The operation includes specifying a load-monitored region within a memory managed by a runtime environment; enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region; performing a garbage collection process in the background without pausing execution in the runtime environment; executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region; and, responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
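
The Python sketch below is a software-level caricature of the flow in the entry above: a load of a pointer into the load-monitored region triggers a handler (standing in for the event-based branch) that relocates the object before the load completes. Region bounds, the relocation policy, and all names are assumptions.

    LMR_BASE, LMR_SIZE = 0x1000, 0x1000          # assumed load-monitored region bounds

    def in_load_monitored_region(addr):
        return LMR_BASE <= addr < LMR_BASE + LMR_SIZE

    def ebb_handler(heap, ptr):
        # Handler for the event-based branch: move the object to a location
        # outside the load-monitored region and return the updated pointer.
        new_addr = LMR_BASE + LMR_SIZE + 0x10 * len(heap)
        heap[new_addr] = heap.pop(ptr)
        return new_addr

    def monitored_load(heap, ptr):
        # Stands in for the "first type" of CPU load instruction from the abstract.
        if in_load_monitored_region(ptr):
            ptr = ebb_handler(heap, ptr)          # event-based branch taken
        return ptr, heap[ptr]

    heap = {0x1040: "object-A"}
    print(monitored_load(heap, 0x1040))           # object relocated, then loaded
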
  • Patent number: 10223266
    Abstract: A load store unit (LSU) in a processor core detects that new data produced by the processor core is ready to be drained to an L2 cache. In response to the LSU detecting that an earlier version of the new data is not stored in L1 cache, a memory controller sends the new data as L1 cache missed data to a store queue (STQ), where the STQ makes data available for deallocation from the STQ to the L2 cache. In response to determining that there is no newer data waiting to be stored in the STQ, or no cache line invalidate to the line containing the store data in the STQ that misses the cache, the memory controller maintains the new data in the STQ with a zombie state bit that indicates that the new data is a zombie store entry that can be utilized by the processor core.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Robert A. Cordes, Hung Q. Le, Brian W. Thompto
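
A very rough Python model of the "zombie" bookkeeping described above, with invented names: a store that missed the L1 is drained to the L2 but kept in the store queue with its zombie bit set, so the core can still read the value from the queue.

    class StoreQueueEntry:
        def __init__(self, address, data):
            self.address, self.data = address, data
            self.zombie = False

    class StoreQueue:
        def __init__(self):
            self.entries = []

        def drain_to_l2(self, entry, l2, newer_store_pending, line_invalidated):
            l2[entry.address] = entry.data
            if not newer_store_pending and not line_invalidated:
                entry.zombie = True          # keep it around as a zombie store entry
            else:
                self.entries.remove(entry)   # normal deallocation from the STQ

        def lookup(self, address):
            # A later read by the core can be satisfied from a zombie entry.
            for entry in reversed(self.entries):
                if entry.address == address:
                    return entry.data
            return None

    stq, l2 = StoreQueue(), {}
    entry = StoreQueueEntry(0x80, 0x1234)
    stq.entries.append(entry)
    stq.drain_to_l2(entry, l2, newer_store_pending=False, line_invalidated=False)
    print(hex(stq.lookup(0x80)))             # 0x1234 served from the zombie entry
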
  • Publication number: 20190042239
    Abstract: Managing an issue queue for fused instructions and paired instructions in a microprocessor including dispatching a fused instruction to a first entry in a double issue queue; dispatching two paired instructions to a second entry in the double issue queue; issuing the fused instruction during a single cycle to an execution unit in response to determining, by the issue queue logic, that the fused instruction is ready to issue; and issuing, by the issue queue logic, the first instruction of the two paired instructions to the execution unit in response to determining, by the issue queue logic, that the first instruction of the two paired instructions is ready to issue.
    Type: Application
    Filed: October 27, 2017
    Publication date: February 7, 2019
    Inventors: Michael J. Genden, Hung Q. Le, Dung Q. Nguyen, Brian W. Thompto
  • Publication number: 20190042238
    Abstract: Managing an issue queue for fused instructions and paired instructions in a microprocessor including dispatching a fused instruction to a first entry in a double issue queue; dispatching two paired instructions to a second entry in the double issue queue; issuing the fused instruction during a single cycle to an execution unit in response to determining, by the issue queue logic, that the fused instruction is ready to issue; and issuing, by the issue queue logic, the first instruction of the two paired instructions to the execution unit in response to determining, by the issue queue logic, that the first instruction of the two paired instructions is ready to issue.
    Type: Application
    Filed: August 2, 2017
    Publication date: February 7, 2019
    Inventors: Michael J. Genden, Hung Q. Le, Dung Q. Nguyen, Brian W. Thompto
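
The two entries above share one abstract. The Python sketch below models the double issue queue behavior they describe: a fused instruction occupies a single entry and issues in one step, while a paired entry holds two instructions that issue separately as each becomes ready. The structures and the readiness callback are illustrative only.

    class DoubleIssueQueueEntry:
        def __init__(self, instrs, fused):
            self.instrs = list(instrs)   # one fused instruction, or two paired ones
            self.fused = fused

    class IssueQueue:
        def __init__(self):
            self.entries = []

        def dispatch_fused(self, instr):
            self.entries.append(DoubleIssueQueueEntry([instr], fused=True))

        def dispatch_paired(self, first, second):
            self.entries.append(DoubleIssueQueueEntry([first, second], fused=False))

        def issue_ready(self, ready):
            issued = []
            for entry in list(self.entries):
                if entry.fused and ready(entry.instrs[0]):
                    issued.append(entry.instrs[0])        # whole fused op in one issue
                    self.entries.remove(entry)
                elif not entry.fused and entry.instrs and ready(entry.instrs[0]):
                    issued.append(entry.instrs.pop(0))    # first of the pair issues alone
                    if not entry.instrs:
                        self.entries.remove(entry)
            return issued

    iq = IssueQueue()
    iq.dispatch_fused("fused-load-add")
    iq.dispatch_paired("add r1", "add r2")
    print(iq.issue_ready(lambda instr: True))   # fused op plus the first paired instruction
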
  • Patent number: 10191845
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to and not likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to prefetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on the number of confirmations observed in the stream. The request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Requests to prefetch data in the plurality of requests that are associated with respective streams with a low prefetch utilization are deprioritized. Doing so allows a memory controller to determine whether to drop at least the first request based on the confidence level, prefetch utilization, and memory resource utilization.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: January 29, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bernard C. Drerup, Richard J. Eickemeyer, Guy L. Guthrie, Mohit Karve, George W. Rohrbaugh, III, Brian W. Thompto
  • Patent number: 10191847
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to and not likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to prefetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on the number of confirmations observed in the stream. The request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Requests to prefetch data in the plurality of requests that are associated with respective streams with a low prefetch utilization are deprioritized. Doing so allows a memory controller to determine whether to drop at least the first request based on the confidence level, prefetch utilization, and memory resource utilization.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: January 29, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bernard C. Drerup, Richard J. Eickemeyer, Guy L. Guthrie, Mohit Karve, George W. Rohrbaugh, III, Brian W. Thompto
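
A small Python sketch of the confidence policy in the two entries above. The confirmation threshold, the utilization cutoff, and the exact shape of the drop rule are assumptions; the abstract only ties dropping to confidence, prefetch utilization, and memory resource utilization.

    CONFIDENCE_THRESHOLD = 3        # confirmations needed for the confident state (assumed)
    LOW_UTILIZATION = 0.25          # fraction of prefetched lines actually used (assumed)

    def classify(request, stream):
        request["confident"] = stream["confirmations"] > CONFIDENCE_THRESHOLD
        return request

    def should_drop(request, stream, memory_busy):
        deprioritized = stream["prefetch_utilization"] < LOW_UTILIZATION
        return memory_busy and deprioritized and not request["confident"]

    stream = {"confirmations": 1, "prefetch_utilization": 0.1}
    request = classify({"addr": 0x4000}, stream)
    print(should_drop(request, stream, memory_busy=True))   # True: dropped under pressure
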
  • Publication number: 20190026113
    Abstract: Fast issuance and execution of a multi-width instruction across multiple slices in a parallel slice processor core is supported in part through the use of an early notification signal passed between issue logic associated with multiple slices handling that multi-width instruction coupled with an issuance of a different instruction by the originating issue logic for the early notification signal.
    Type: Application
    Filed: September 25, 2018
    Publication date: January 24, 2019
    Inventors: Salma Ayub, Jeffrey C. Brownscheidle, Sundeep Chadha, Dung Q. Nguyen, Tu-An T. Nguyen, Salim A. Shah, Brian W. Thompto
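
The entry above is a one-sentence summary, so the Python sketch below can only suggest the timing: the originating issue logic notifies the partner slice a cycle early and uses its own slot that cycle for a different instruction, with the multi-width instruction issuing across both slices on the next cycle. All structures and names here are invented.

    class Slice:
        def __init__(self, name):
            self.name = name
            self.queue = []          # other instructions waiting to issue on this slice

    def schedule(originating, partner, multiwidth_op):
        log = []
        # Cycle N: notify the partner early; since the multi-width op will not issue
        # until next cycle, the originating slice issues a different ready instruction.
        if originating.queue:
            log.append(f"cycle N: {originating.name} issues {originating.queue.pop(0)}")
        log.append(f"cycle N: early notification sent to {partner.name}")
        # Cycle N+1: both slices issue their halves of the multi-width instruction.
        log.append(f"cycle N+1: {originating.name} and {partner.name} issue {multiwidth_op}")
        return log

    s0, s1 = Slice("slice0"), Slice("slice1")
    s0.queue.append("add r1,r2,r3")
    for line in schedule(s0, s1, "128-bit vector op"):
        print(line)
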
  • Publication number: 20190018679
    Abstract: Converting program instructions for two-stage processors including receiving, by a preprocessing unit, a group of program instructions; determining, by the preprocessing unit, that at least two of the group of program instructions can be converted into a single combined instruction; converting, by the preprocessing unit, the at least two program instructions into the single combined instruction comprising an extension opcode, wherein the extension opcode indicates, to an execution unit, a format of the single combined instruction; and sending, by the preprocessing unit, the single combined instruction to the execution unit.
    Type: Application
    Filed: October 27, 2017
    Publication date: January 17, 2019
    Inventors: Giles R. Frazier, Hung Q. Le, Jose E. Moreira, Brian W. Thompto
  • Publication number: 20190018677
    Abstract: Converting program instructions for two-stage processors including receiving, by a preprocessing unit, a group of program instructions; determining, by the preprocessing unit, that at least two of the group of program instructions can be converted into a single combined instruction; converting, by the preprocessing unit, the at least two program instructions into the single combined instruction comprising an extension opcode, wherein the extension opcode indicates, to an execution unit, a format of the single combined instruction; and sending, by the preprocessing unit, the single combined instruction to the execution unit.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventors: Giles R. Frazier, Hung Q. Le, Jose E. Moreira, Brian W. Thompto
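
The two entries above describe the same conversion step. In the Python sketch below, the recognized pattern (an addis followed by a dependent ld, folded into one combined instruction tagged with an extension opcode) is an assumption chosen for concreteness; the abstract does not name which instruction pairs are combined.

    def try_combine(first, second):
        # first/second: dicts with 'op', 'dst', 'src', and 'imm' fields (invented layout).
        if (first["op"] == "addis" and second["op"] == "ld"
                and second["src"] == first["dst"]):
            return {"op": "combined",
                    "extension_opcode": "FUSED_ADDIS_LD",   # tells the execution unit the format
                    "dst": second["dst"],
                    "base": first["src"],
                    "imm_hi": first["imm"],
                    "imm_lo": second["imm"]}
        return None                                         # not a combinable pair

    pair = ({"op": "addis", "dst": "r4", "src": "r2", "imm": 0x1234},
            {"op": "ld",    "dst": "r5", "src": "r4", "imm": 0x0058})
    print(try_combine(*pair))
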
  • Patent number: 10176038
    Abstract: Embodiments described herein include a computing system that permits partial writes into a memory element—e.g., a register on a processor. For example, the data to be written into the memory element may be spread across multiple sources. The register may receive data from two different sources at different times and perform two separate partial write commands to store the data. Embodiments herein generate an ECC value for each of the partial writes. That is, when storing the data of the first partial write, the computing system generates a first ECC value for the data in the first partial write and stores this value in the memory element. Later, when performing the second partial write, the computing system generates a second ECC value for this data which is also stored in the memory element.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: January 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Dhivya Jeganathan, Dung Q. Nguyen, Jose A. Paredes, David R. Terry, Brian W. Thompto
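
A Python toy model of the per-partial-write ECC idea above: each half of a register arrives from a different source at a different time, and a check value is generated and stored for each partial write. The XOR "ECC" is only a placeholder for a real SEC/DED code, and the two-granule layout is assumed.

    def ecc(data_bytes):
        # Placeholder check value: XOR of the bytes (a real design would use SEC/DED).
        value = 0
        for b in data_bytes:
            value ^= b
        return value

    class RegisterWithECC:
        def __init__(self):
            self.halves = {"lo": None, "hi": None}   # partial-write granules (assumed layout)
            self.ecc = {"lo": None, "hi": None}      # one ECC value per partial write

        def partial_write(self, half, data_bytes):
            self.halves[half] = bytes(data_bytes)
            self.ecc[half] = ecc(data_bytes)         # ECC generated for this partial write

    reg = RegisterWithECC()
    reg.partial_write("lo", b"\x01\x02\x03\x04\x05\x06\x07\x08")   # data from the first source
    reg.partial_write("hi", b"\x11\x12\x13\x14\x15\x16\x17\x18")   # data arriving later
    print(reg.ecc)
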
  • Patent number: 10169228
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined if the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
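
A loose, software-level Python sketch of the decision in the entry above: if the collector is working on the address being loaded, the load is diverted to a routine that first moves the object out of the collected block; otherwise the load simply continues. The heap and collector state are invented.

    def gc_active_at(gc_state, addr):
        lo, hi = gc_state["block"]
        return gc_state["running"] and lo <= addr < hi

    def load(heap, gc_state, addr):
        if gc_active_at(gc_state, addr):
            # Divert: relocate the object outside the memory block being collected.
            new_addr = gc_state["block"][1] + 0x10 * len(heap)
            heap[new_addr] = heap.pop(addr)
            addr = new_addr
        return heap[addr]                    # then continue the load as usual

    heap = {0x100: "object"}
    gc_state = {"running": True, "block": (0x000, 0x200)}
    print(load(heap, gc_state, 0x100))       # object moved out of the block, then loaded
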
  • Patent number: 10169046
    Abstract: An instruction sequencing unit in an out-of-order (OOO) processor includes a Most Favored Instruction (MFI) mechanism that designates an instruction as an MFI. The processing queues in the processor identify when they contain the MFI and assure that the MFI is processed. The MFI remains the MFI until it is completed or flushed, at which time the MFI mechanism selects the next MFI.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Maarten J. Boersma, Robert A. Cordes, David A. Hrusecky, Jennifer L. Molnar, Brian W. Thompto, Albert J. Van Norstrand, Jr., Kenneth L. Ward
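
A compact Python sketch of the Most Favored Instruction idea above, with an invented queue structure: one in-flight instruction is designated the MFI, queues service it ahead of other work, and a new MFI is selected when the current one completes or is flushed.

    class ProcessingQueue:
        def __init__(self, name):
            self.name, self.items = name, []

        def pick_next(self, mfi):
            if mfi in self.items:
                self.items.remove(mfi)       # the MFI is always serviced first
                return mfi
            return self.items.pop(0) if self.items else None

    class MFIMechanism:
        def __init__(self):
            self.mfi = None

        def designate(self, instruction):
            self.mfi = instruction

        def complete_or_flush(self, instruction, candidates):
            if instruction == self.mfi:
                self.mfi = candidates[0] if candidates else None   # select the next MFI

    issue_queue = ProcessingQueue("issue")
    issue_queue.items = ["older-load", "stalled-divide", "young-add"]
    mechanism = MFIMechanism()
    mechanism.designate("stalled-divide")
    print(issue_queue.pick_next(mechanism.mfi))   # the MFI is serviced ahead of older work
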