Patents by Inventor Brian W. Thompto

Brian W. Thompto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170060678
    Abstract: Embodiments described herein include a computing system that permits partial writes into a memory element—e.g., a register on a processor. For example, the data to be written into the memory element may be spread across multiple sources. The register may receive data from two different sources at different times and perform two separate partial write commands to store the data. Embodiments herein generate an ECC value for each of the partial writes. That is, when storing the data of the first partial write, the computing system generates a first ECC value for the data in the first partial write and stores this value in the memory element. Later, when performing the second partial write, the computing system generates a second ECC value for this data which is also stored in the memory element.
    Type: Application
    Filed: September 1, 2015
    Publication date: March 2, 2017
    Inventors: Dhivya JEGANATHAN, Dung Q. NGUYEN, Jose A. PAREDES, David R. TERRY, Brian W. THOMPTO
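
The abstract above (publication 20170060678) describes generating a separate ECC value for each partial write into a register. A minimal software sketch of that idea follows; the 64-bit register split into two 32-bit halves and the XOR-fold check function are assumptions standing in for the actual register width and SEC-DED code, which the abstract does not specify.

```c
/* Minimal software sketch (not from the patent): a 64-bit register that
 * accepts two independent 32-bit partial writes, each protected by its
 * own check value.  A simple XOR fold stands in for a real SEC-DED ECC. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t half[2];   /* data from two different sources           */
    uint8_t  ecc[2];    /* one check value per partial write         */
    uint8_t  valid[2];  /* which halves have been written so far     */
} protected_reg;

static uint8_t toy_ecc(uint32_t d) {          /* stand-in for SEC-DED */
    d ^= d >> 16; d ^= d >> 8;
    return (uint8_t)(d & 0xFF);
}

static void partial_write(protected_reg *r, int idx, uint32_t data) {
    r->half[idx]  = data;
    r->ecc[idx]   = toy_ecc(data);            /* ECC generated per partial write */
    r->valid[idx] = 1;
}

static int check(const protected_reg *r, int idx) {
    return r->valid[idx] && r->ecc[idx] == toy_ecc(r->half[idx]);
}

int main(void) {
    protected_reg r = {0};
    partial_write(&r, 0, 0xDEADBEEF);         /* first source, first ECC   */
    partial_write(&r, 1, 0x12345678);         /* second source, second ECC */
    printf("half0 ok=%d half1 ok=%d\n", check(&r, 0), check(&r, 1));
    return 0;
}
```
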
  • Publication number: 20170060677
    Abstract: Embodiments described herein include a computing system that permits partial writes into a memory element—e.g., a register on a processor. For example, the data to be written into the memory element may be spread across multiple sources. The register may receive data from two different sources at different times and perform two separate partial write commands to store the data. Embodiments herein generate an ECC value for each of the partial writes. That is, when storing the data of the first partial write, the computing system generates a first ECC value for the data in the first partial write and stores this value in the memory element. Later, when performing the second partial write, the computing system generates a second ECC value for this data which is also stored in the memory element.
    Type: Application
    Filed: September 29, 2015
    Publication date: March 2, 2017
    Inventors: Dhivya JEGANATHAN, Dung Q. NGUYEN, Jose A. PAREDES, David R. TERRY, Brian W. THOMPTO
  • Publication number: 20170060679
    Abstract: Embodiments described herein include a computing system that permits partial writes into a memory element—e.g., a register on a processor. For example, the data to be written into the memory element may be spread across multiple sources. The register may receive data from two different sources at different times and perform two separate partial write commands to store the data. Embodiments herein generate an ECC value for each of the partial writes. That is, when storing the data of the first partial write, the computing system generates a first ECC value for the data in the first partial write and stores this value in the memory element. Later, when performing the second partial write, the computing system generates a second ECC value for this data which is also stored in the memory element.
    Type: Application
    Filed: September 1, 2015
    Publication date: March 2, 2017
    Inventors: Dhivya JEGANATHAN, Dung Q. NGUYEN, Jose A. PAREDES, David R. TERRY, Brian W. THOMPTO
  • Publication number: 20170031417
    Abstract: A method for adjusting a frequency of a multi-core processor is disclosed herein. In one embodiment, the method includes determining a total current and a temperature of the multi-core processor and estimating a leakage current for the multi-core processor. The method also includes calculating a switching current by subtracting the leakage current from the total current and calculating an effective switching capacitance based at least in part on the switching current. The method also includes calculating a workload activity factor by dividing the effective switching capacitance by a predetermined effective switching capacitance stored in vital product data, and enforcing a turbo frequency limit of the multi-core processor based on the workload activity factor.
    Type: Application
    Filed: August 24, 2015
    Publication date: February 2, 2017
    Inventors: Malcolm S. ALLEN-WARE, Michael S. FLOYD, Joshua D. FRIEDRICH, Charles R. LEFURGY, Kirk D. PETERSON, Karthick RAJAMANI, Srinivasan RAMANI, Todd J. ROSEDAHL, Gregory S. STILL, Brian W. THOMPTO, Victor ZYUBAN
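
The abstract above (publication 20170031417) walks through a chain of arithmetic: switching current is the total current minus the estimated leakage, an effective switching capacitance is derived from the switching current, and the workload activity factor is that capacitance divided by the nominal value from vital product data. The sketch below works that arithmetic end to end; the leakage model, the Ceff = Isw / (V * f) relation, and the turbo-derating rule are assumptions for illustration, not taken from the patent.

```c
/* Illustrative sketch (not from the patent) of the arithmetic described
 * in the abstract.  The effective-capacitance formula and the turbo
 * derating rule are assumptions made for this example only. */
#include <stdio.h>

static double estimate_leakage(double temp_c) {
    /* placeholder leakage model: grows with temperature (hypothetical) */
    return 5.0 + 0.1 * temp_c;                 /* amps */
}

int main(void) {
    double i_total  = 90.0;    /* measured total current (A)               */
    double temp_c   = 70.0;    /* measured temperature (deg C)             */
    double volts    = 1.0;     /* operating voltage (V), assumed           */
    double freq     = 3.5e9;   /* current frequency (Hz), assumed          */
    double ceff_vpd = 2.2e-8;  /* nominal Ceff from vital product data (F) */

    double i_leak = estimate_leakage(temp_c);
    double i_sw   = i_total - i_leak;          /* switching current             */
    double ceff   = i_sw / (volts * freq);     /* effective switching capacitance */
    double act    = ceff / ceff_vpd;           /* workload activity factor       */

    /* hypothetical turbo limit: derate from 4.0 GHz as activity rises */
    double turbo_hz = (act <= 1.0) ? 4.0e9 : 4.0e9 / act;
    printf("activity=%.2f turbo limit=%.2f GHz\n", act, turbo_hz / 1e9);
    return 0;
}
```
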
  • Publication number: 20170031814
    Abstract: Method and apparatus for managing memory is disclosed herein. In one embodiment, the method includes specifying a first load-monitored region within a memory, configuring a performance monitor to count object pointer accessed events associated with the first load-monitored region, executing a CPU instruction to load a pointer that points to a first location in the memory, responsive to determining that the first location is within the first load-monitored region, triggering an object pointer accessed event, updating a count of object pointer accessed events in the performance monitor, and performing garbage collection on the first load-monitored region based on the count of object pointer accessed events.
    Type: Application
    Filed: August 24, 2015
    Publication date: February 2, 2017
    Inventors: Giles R. FRAZIER, Michael Karl GSCHWIND, Younes MANTON, Karl M. TAYLOR, Brian W. THOMPTO
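
Publications 20170031814 and 20170031812 above describe counting "object pointer accessed" events for a load-monitored region and collecting the region based on that count. The sketch below mimics the flow in plain software; in the patent the monitoring and counting are done in hardware (the load-monitored region and performance monitor), and the threshold value here is an assumption.

```c
/* Software-only sketch (the patent describes a hardware mechanism):
 * count loads of pointers that land in a monitored region and collect
 * that region once the count crosses a threshold. */
#include <stdio.h>

#define GC_THRESHOLD 3

static char region[1024];                 /* the load-monitored region   */
static unsigned pointer_access_events;    /* performance-monitor counter */

static void collect_region(void) {
    printf("garbage collecting monitored region after %u events\n",
           pointer_access_events);
    pointer_access_events = 0;
}

/* stand-in for the monitored load instruction */
static void *monitored_load(void **slot) {
    void *p = *slot;
    if ((char *)p >= region && (char *)p < region + sizeof region) {
        if (++pointer_access_events >= GC_THRESHOLD)   /* event counted */
            collect_region();
    }
    return p;
}

int main(void) {
    void *slots[4] = { region, region + 8, region + 16, region + 24 };
    for (int i = 0; i < 4; i++)
        monitored_load(&slots[i]);
    return 0;
}
```
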
  • Publication number: 20170031415
    Abstract: A system for adjusting a frequency of a multi-core processor is disclosed herein. The system includes a processor and a memory, where the memory stores a program that, when executed, performs operations to adjust a frequency of the multi-core processor. The operations include determining a total current and a temperature of the multi-core processor and estimating a leakage current for the multi-core processor. The operations also include calculating a switching current by subtracting the leakage current from the total current and calculating an effective switching capacitance based at least in part on the switching current. The operations also include calculating a workload activity factor by dividing the effective switching capacitance by a predetermined effective switching capacitance stored in vital product data, and enforcing a turbo frequency limit of the multi-core processor based on the workload activity factor.
    Type: Application
    Filed: July 31, 2015
    Publication date: February 2, 2017
    Inventors: Malcolm S. ALLEN-WARE, Michael S. FLOYD, Joshua D. FRIEDRICH, Charles R. LEFURGY, Kirk D. PETERSON, Karthick RAJAMANI, Srinivasan RAMANI, Todd J. ROSEDAHL, Gregory S. STILL, Brian W. THOMPTO, Victor ZYUBAN
  • Publication number: 20170031813
    Abstract: Apparatus for garbage collection is disclosed herein. The apparatus includes a processor that includes a load-monitored region register. A memory stores program code which, when executed on the processor, performs an operation for garbage collection. The operation includes specifying a load-monitored region within a memory managed by a runtime environment; enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region; performing a garbage collection process in the background without pausing execution in the runtime environment; executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region; and, responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.
    Type: Application
    Filed: July 27, 2015
    Publication date: February 2, 2017
    Inventors: Giles R. FRAZIER, Michael Karl GSCHWIND, Younes MANTON, Karl M. TAYLOR, Brian W. THOMPTO
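
Publications 20170031813 and 20170031817 above describe a load-monitored event-based branch that fires when a load returns a pointer into the monitored region, letting a handler move the object while collection runs in the background. The sketch below imitates that with an ordinary function call standing in for the event-based branch; the object size and the from/to space layout are assumptions.

```c
/* Software-only sketch of the read-barrier idea in the abstract above: a
 * "monitored load" that, on hitting the region being collected, branches
 * to a handler which relocates the object and fixes the reference.  The
 * event-based branch is a hardware feature; the function call stands in. */
#include <stdio.h>
#include <string.h>

#define OBJ_SIZE 32
static char from_space[1024], to_space[1024];
static size_t to_top;

static void *relocate_handler(void *old) {        /* handler moves the object */
    void *new_addr = to_space + to_top;
    memcpy(new_addr, old, OBJ_SIZE);
    to_top += OBJ_SIZE;
    return new_addr;
}

static void *monitored_load(void **slot) {
    void *p = *slot;
    if ((char *)p >= from_space && (char *)p < from_space + sizeof from_space) {
        p = relocate_handler(p);                   /* "event-based branch" fires */
        *slot = p;                                 /* update the loaded pointer  */
    }
    return p;
}

int main(void) {
    void *field = from_space + 64;                 /* reference into the old space */
    void *obj = monitored_load(&field);
    printf("object now lives in to_space: %d\n",
           (char *)obj >= to_space && (char *)obj < to_space + sizeof to_space);
    return 0;
}
```
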
  • Publication number: 20170031812
    Abstract: Method and apparatus for managing memory is disclosed herein. In one embodiment, the method includes specifying a first load-monitored region within a memory, configuring a performance monitor to count object pointer accessed events associated with the first load-monitored region, executing a CPU instruction to load a pointer that points to a first location in the memory, responsive to determining that the first location is within the first load-monitored region, triggering an object pointer accessed event, updating a count of object pointer accessed events in the performance monitor, and performing garbage collection on the first load-monitored region based on the count of object pointer accessed events.
    Type: Application
    Filed: July 27, 2015
    Publication date: February 2, 2017
    Inventors: Giles R. FRAZIER, Michael Karl GSCHWIND, Younes MANTON, Karl M. TAYLOR, Brian W. THOMPTO
  • Publication number: 20170031817
    Abstract: A method and apparatus for garbage collection is disclosed herein. The method includes specifying a load-monitored region within a memory managed by a run-time environment, enabling a load-monitored event-based branch configured to occur responsive to executing a first type of CPU instruction to load a pointer that points to a first location in the load-monitored region, performing a garbage collection process in the background without pausing execution in the run-time environment, executing a CPU instruction of the first type to load a pointer that points to the first location in the load-monitored region, and, responsive to triggering a load-monitored event-based branch, moving an object pointed to by the pointer with a handler from the first location in memory to a second location in memory.
    Type: Application
    Filed: August 24, 2015
    Publication date: February 2, 2017
    Inventors: Giles R. FRAZIER, Michael Karl GSCHWIND, Younes MANTON, Karl M. TAYLOR, Brian W. THOMPTO
  • Publication number: 20170004075
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined whether the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Application
    Filed: August 24, 2015
    Publication date: January 5, 2017
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
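
Publications 20170004075 and 20170004072 above describe diverting a load when garbage collection is executing at the loaded address, and letting it continue otherwise. A small software rendering of that decision follows; the block sizes, the relocation target, and the copied object size are assumptions.

```c
/* Software sketch (the patent describes a hardware-assisted mechanism) of
 * the load-time decision in the abstract: if collection is currently
 * running on the block the loaded pointer refers to, divert to a routine
 * that evacuates the object; otherwise the load simply continues. */
#include <stdio.h>
#include <string.h>

static char gc_block[512];                /* memory block being collected */
static char safe_block[512];
static int  gc_in_progress = 1;

static int in_gc_block(const void *p) {
    return (const char *)p >= gc_block && (const char *)p < gc_block + sizeof gc_block;
}

static void *evacuate(const void *p) {    /* move object out of the block */
    memcpy(safe_block, p, 64);            /* toy object size: 64 bytes    */
    return safe_block;
}

static void *guarded_load(void **slot) {
    void *p = *slot;
    if (gc_in_progress && in_gc_block(p)) /* GC executing at this address? */
        *slot = p = evacuate(p);          /* divert: move object, fix slot */
    return p;                             /* otherwise the load continues  */
}

int main(void) {
    void *field = gc_block + 32;
    void *obj = guarded_load(&field);
    printf("object moved out of GC block: %d\n", !in_gc_block(obj));
    return 0;
}
```
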
  • Publication number: 20170004072
    Abstract: The embodiments relate to a method for managing a garbage collection process. The method includes executing a garbage collection process on a memory block of user address space. A load instruction is run. Running the load instruction includes loading content of a storage location into a processor. The loaded content corresponds to a memory address. It is determined whether the garbage collection process is being executed at the memory address. The load instruction is diverted to a process to move an object at the memory address to a location outside of the memory block in response to determining that the garbage collection process is being executed at the memory address. The load instruction is continued in response to determining that the garbage collection process is not being executed at the memory address.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 5, 2017
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael Karl Gschwind, Younes Manton, Karl M. Taylor, Brian W. Thompto
  • Patent number: 9519479
    Abstract: Techniques for increasing vector processing utilization and efficiency through use of unmasked lanes of predicated vector instructions for executing non-conflicting instructions are provided. In one aspect, a method of vector lane predication for a processor is provided which includes the steps of: fetching predicated vector instructions from a memory; decoding the predicated vector instructions; determining if a mask value of the predicated vector instructions is available and, if the mask value of the predicated vector instructions is not available, predicting the mask value of the predicated vector instructions; and dispatching the predicated vector instructions to only masked vector lanes.
    Type: Grant
    Filed: November 18, 2013
    Date of Patent: December 13, 2016
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Hung Q. Le, Jose E. Moreira, Pratap C. Pattnaik, Brian W. Thompto, Jessica H. Tseng
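
Patent 9519479 above covers predicting the mask of a predicated vector instruction when the real mask is not yet available and dispatching only to the enabled lanes. The sketch below uses a trivial last-value predictor as a stand-in; the patent does not say which prediction scheme is used, so that choice and the recovery action on a mispredict are assumptions.

```c
/* Minimal sketch (not the patented hardware) of dispatching a predicated
 * vector instruction with a predicted lane mask when the real mask is not
 * yet available.  A "last mask seen" predictor stands in for whatever the
 * design actually uses. */
#include <stdint.h>
#include <stdio.h>

#define LANES 8
static uint8_t last_mask = 0xFF;          /* trivial last-value predictor */

static void dispatch(uint8_t mask, const int *a, const int *b, int *out) {
    for (int lane = 0; lane < LANES; lane++)
        if (mask & (1u << lane))          /* issue only to enabled lanes  */
            out[lane] = a[lane] + b[lane];
}

int main(void) {
    int a[LANES] = {1,2,3,4,5,6,7,8}, b[LANES] = {8,7,6,5,4,3,2,1}, out[LANES] = {0};
    int mask_available = 0;               /* real mask still in flight    */
    uint8_t real_mask = 0x0F;

    uint8_t mask = mask_available ? real_mask : last_mask;   /* predict   */
    dispatch(mask, a, b, out);

    if (!mask_available && mask != real_mask)
        printf("misprediction: would flush and re-dispatch with mask 0x%02X\n", real_mask);
    last_mask = real_mask;                /* train the predictor          */
    return 0;
}
```
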
  • Patent number: 9483276
    Abstract: Embodiments relate to management of shared transactional resources. A system includes a transactional facility configured to support transactions that effectively delay committing stores to memory or results to an architectural state until transaction completion. The system includes a processor configured to perform an allocation or arbitration of processing resources to instructions of a transaction within a thread. The processor detects that the transaction has exceeded a manageable capacity of a resource or that a potential collision of a transactional instruction storage access has occurred, resulting in a transaction abort. A transaction abort reason and a current configuration are examined to determine whether the transaction abort was based on an initiating program exceeding a restricted limit on the manageable capacity of the resource or an allocation. A processor state is updated to increase a likelihood of success upon retrying the transaction.
    Type: Grant
    Filed: April 23, 2013
    Date of Patent: November 1, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi Y. Busaba, Brian W. Thompto
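
Patent 9483276 above describes examining the abort reason and the current configuration to decide whether a capacity abort came from a restricted per-thread allocation rather than from the program, and updating processor state before a retry. The sketch below captures that decision in miniature; the resource being a simple entry count and the "expand to the full allocation" policy are assumptions.

```c
/* Illustrative sketch (not the hardware) of the decision in the abstract:
 * classify a transaction abort and, for capacity aborts that stem from a
 * restricted per-thread allocation rather than the program itself, adjust
 * state before retrying.  Names and policies are assumptions. */
#include <stdio.h>

enum abort_reason { ABORT_CAPACITY, ABORT_CONFLICT, ABORT_OTHER };

struct config { int per_thread_entries; int max_entries; };

static int should_expand_allocation(enum abort_reason r, const struct config *c) {
    /* capacity abort while the thread holds less than the full resource:
     * the abort is attributable to the allocation, not the program */
    return r == ABORT_CAPACITY && c->per_thread_entries < c->max_entries;
}

int main(void) {
    struct config cfg = { .per_thread_entries = 16, .max_entries = 64 };
    enum abort_reason r = ABORT_CAPACITY;

    if (should_expand_allocation(r, &cfg)) {
        cfg.per_thread_entries = cfg.max_entries;   /* update processor state */
        printf("retry with full allocation (%d entries)\n", cfg.per_thread_entries);
    } else {
        printf("abort attributed to the program; report to software\n");
    }
    return 0;
}
```
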
  • Patent number: 9430235
    Abstract: A method and information processing system manage load and store operations that can be executed out-of-order. At least one of a load instruction and a store instruction is executed. A determination is made that an operand store compare hazard has been encountered. An entry within an operand store compare hazard prediction table is created based on the determination. The entry includes at least an instruction address of the instruction that has been executed and a hazard indicating flag associated with the instruction. The hazard indicating flag indicates that the instruction has encountered the operand store compare hazard. When a load instruction is associated with the hazard indicating flag, the load instruction becomes dependent upon all store instructions associated with a substantially similar hazard indicating flag.
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: August 30, 2016
    Assignee: International Business Machines Corporation
    Inventors: Gregory W. Alexander, Khary J. Alexander, Brian Curran, Jonathan T. Hsieh, Christian Jacobi, James R. Mitchell, Brian R. Prasky, Brian W. Thompto
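
Patent 9430235 above describes an operand store compare (OSC) hazard prediction table keyed by instruction address, with a flag that makes a flagged load wait on similarly flagged stores. A minimal sketch of such a table follows; the direct-mapped organization, table size, and stall condition are assumptions.

```c
/* Minimal sketch (not the actual pipeline logic) of an operand-store-compare
 * hazard prediction table: instruction addresses that previously hit the
 * hazard are flagged, and a flagged load is made to wait on flagged stores. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 64

typedef struct { uint64_t iaddr; int hazard_flag; int valid; } osc_entry;
static osc_entry table[TABLE_SIZE];

static osc_entry *lookup(uint64_t iaddr) {
    return &table[(iaddr >> 2) % TABLE_SIZE];    /* direct-mapped, assumed */
}

static void record_hazard(uint64_t iaddr) {      /* called when an OSC hazard is detected */
    osc_entry *e = lookup(iaddr);
    e->iaddr = iaddr; e->hazard_flag = 1; e->valid = 1;
}

static int load_must_wait(uint64_t load_iaddr, int flagged_stores_in_flight) {
    osc_entry *e = lookup(load_iaddr);
    return e->valid && e->iaddr == load_iaddr && e->hazard_flag
        && flagged_stores_in_flight > 0;
}

int main(void) {
    record_hazard(0x1000);                        /* this load hit the hazard before */
    printf("stall load at 0x1000? %d\n", load_must_wait(0x1000, /*flagged stores*/ 2));
    printf("stall load at 0x2000? %d\n", load_must_wait(0x2000, 2));
    return 0;
}
```
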
  • Patent number: 9400657
    Abstract: Embodiments relate to dynamic management of a transaction retry indication. One aspect is a system that includes a transactional facility configured to support transactions that effectively delay committing stores to memory or results to an architectural state until transaction completion, and a processor configured to identify a transaction abort reason associated with an aborted transaction of an initiating program. Transaction success and transaction abort history are tracked. Based on determining by the processor that the transaction abort reason was caused by the initiating program, a retry indication is assigned based on a static mapping of the transaction abort reason. Based on determining by the processor that the transaction abort reason was not caused by the initiating program, the retry indication is assigned based on a retry process using the transaction abort reason, the transaction abort history, and a current processor configuration.
    Type: Grant
    Filed: April 23, 2013
    Date of Patent: July 26, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi Y. Busaba, Brian W. Thompto
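
Patent 9400657 above describes two paths for producing a retry indication: a static mapping when the abort was caused by the initiating program, and a history- and configuration-aware process otherwise. The sketch below shows that branch; the reason codes, the static mapping, and the retry heuristic are assumptions.

```c
/* Illustrative sketch of the two-path retry decision in the abstract.
 * The reason codes, the static mapping, and the heuristic for the
 * non-program path are assumptions made for this example. */
#include <stdio.h>

enum reason { R_PROGRAM_ABORT_INSN, R_FOOTPRINT_OVERFLOW, R_CONFLICT, R_INTERRUPT };
enum retry  { RETRY_NO, RETRY_YES };

static int caused_by_program(enum reason r) {
    return r == R_PROGRAM_ABORT_INSN || r == R_FOOTPRINT_OVERFLOW;
}

static enum retry static_map(enum reason r) {
    /* program-caused aborts map statically: retrying would just repeat them */
    (void)r;
    return RETRY_NO;
}

static enum retry dynamic_decision(enum reason r, int recent_aborts, int smt_threads) {
    if (r == R_INTERRUPT)                 /* transient cause: worth one retry */
        return RETRY_YES;
    /* conflicts: retry unless this transaction keeps aborting on a busy core */
    return (recent_aborts > 3 && smt_threads > 2) ? RETRY_NO : RETRY_YES;
}

int main(void) {
    enum reason r = R_CONFLICT;
    enum retry ind = caused_by_program(r)
                   ? static_map(r)
                   : dynamic_decision(r, /*abort history*/ 1, /*SMT threads*/ 4);
    printf("retry indication: %s\n", ind == RETRY_YES ? "retry" : "do not retry");
    return 0;
}
```
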
  • Publication number: 20160092231
    Abstract: Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads.
    Type: Application
    Filed: September 30, 2014
    Publication date: March 31, 2016
    Inventors: Sam G. Chu, Markus Kaltenbach, Hung Q. Le, Jentje Leenstra, Jose E. Moreira, Dung Q. Nguyen, Brian W. Thompto
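
Publications 20160092231 and 20160092276 above describe dispatch queues feeding an even number of parallel execution slices through a routing network, with each slice holding its own register file. The sketch below models that structure in software; the thread-modulo routing policy and the slice count are assumptions.

```c
/* Minimal software sketch (not the hardware) of the structure in the
 * abstract: dispatched instructions are routed to one of an even number
 * of execution slices, and each slice keeps its own register file. */
#include <stdio.h>

#define SLICES 4                       /* even number of parallel slices   */
#define REGS   32

struct slice { int regfile[REGS]; };   /* per-slice physical register file */
struct insn  { int thread; int dst; int imm; };

static struct slice slices[SLICES];

static int route(const struct insn *i) {
    return i->thread % SLICES;         /* simple routing policy, assumed   */
}

static void execute(const struct insn *i) {
    struct slice *s = &slices[route(i)];
    s->regfile[i->dst] = i->imm;       /* slice executes independently     */
}

int main(void) {
    struct insn a = { .thread = 0, .dst = 3, .imm = 42 };
    struct insn b = { .thread = 1, .dst = 3, .imm = 7 };
    execute(&a);
    execute(&b);
    printf("slice0 r3=%d slice1 r3=%d\n", slices[0].regfile[3], slices[1].regfile[3]);
    return 0;
}
```
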
  • Publication number: 20160092276
    Abstract: Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads.
    Type: Application
    Filed: September 29, 2015
    Publication date: March 31, 2016
    Inventors: Sam G. Chu, Markus Kaltenbach, Hung Q. Le, Jentje Leenstra, Jose E. Moreira, Dung Q. Nguyen, Brian W. Thompto
  • Patent number: 9298469
    Abstract: Embodiments relate to implementing processor management of transactions. An aspect includes receiving an instruction from a thread. The instruction includes an instruction type, and executes within a transaction. The transaction effectively delays committing stores to memory until the transaction has completed. A processor manages transaction nesting for the instruction based on the instruction type of the instruction. The transaction nesting includes a maximum processor capacity. The transaction nesting management enables executing a sequence of nested transactions within a transaction, supports multiple nested transactions in a processor pipeline, or generates and maintains a set of effective controls for controlling a pipeline. The processor prevents the transaction nesting from exceeding the maximum processor capacity.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: March 29, 2016
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Brian W. Thompto
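
Patent 9298469 above concerns managing transaction nesting so it never exceeds a maximum processor capacity. The sketch below tracks nesting depth against a limit; the specific limit and the flattening of inner transactions into the outer one are assumptions.

```c
/* Illustrative sketch of nesting-depth tracking with a capacity limit, as
 * described in the abstract.  The depth limit and the flattening of inner
 * transactions are assumptions made for this example. */
#include <stdio.h>

#define MAX_NEST_DEPTH 16              /* maximum processor capacity, assumed */
static int nest_depth;

static int tbegin(void) {
    if (nest_depth >= MAX_NEST_DEPTH)  /* would exceed capacity: abort        */
        return -1;
    nest_depth++;                      /* inner transactions are flattened    */
    return 0;
}

static void tend(void) {
    if (nest_depth > 0 && --nest_depth == 0)
        printf("outermost transaction commits; stores become visible\n");
}

int main(void) {
    tbegin();                          /* outer transaction  */
    tbegin();                          /* nested transaction */
    tend();
    tend();
    return 0;
}
```
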
  • Patent number: 9098653
    Abstract: A simulation environment verifies processor-sparing functions in a simulated processor core. The simulation environment executes a first simulation for a simulated processor core. During the simulation, the simulation environment creates a simulation model dump file. At a later point in time, the simulation environment executes a second simulation for the simulated processor core. The simulation environment saves the state of the simulated processor core. The simulation environment then replaces the state of the simulated processor core by loading the previously created simulation model dump file. The simulation environment then sets the state of the simulated processor core to execute processor-sparing code and resumes the second simulation.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: August 4, 2015
    Assignee: International Business Machines Corporation
    Inventors: Stefan Letz, Joerg Deutschle, Bodo Hoppe, Erica Stuecheli, Brian W. Thompto
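
Patent 9098653 above describes dumping the state of a simulated core in one run, then in a later run saving the live state, loading the dump in its place, and resuming at the processor-sparing code. The toy sketch below mirrors that sequence with plain structs; the state layout and entry address are hypothetical.

```c
/* Toy sketch of the verification flow in the abstract: replace the state
 * of a running simulated core with a previously dumped state and resume
 * at the processor-sparing code.  All values here are hypothetical. */
#include <stdio.h>

struct core_state { unsigned long pc; unsigned long regs[8]; };

static struct core_state live, dump_file;      /* dump from the first run  */

static void checkpoint(void)  { dump_file = live; }   /* first simulation  */
static void load_dump(void)   { live = dump_file; }   /* second simulation */
static void start_sparing(unsigned long entry) { live.pc = entry; }

int main(void) {
    live.pc = 0x100; live.regs[0] = 7;
    checkpoint();                               /* first run: create the dump */

    live.pc = 0x900; live.regs[0] = 99;         /* second run progresses      */
    struct core_state saved = live;             /* save the current state     */
    load_dump();                                /* replace it with the dump   */
    start_sparing(0x2000);                      /* point at sparing code      */
    printf("resume at pc=0x%lx (saved pc was 0x%lx)\n", live.pc, saved.pc);
    return 0;
}
```
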
  • Publication number: 20150143083
    Abstract: Techniques for increasing vector processing utilization and efficiency through use of unmasked lanes of predicated vector instructions for executing non-conflicting instructions are provided. In one aspect, a method of vector lane predication for a processor is provided which includes the steps of: fetching predicated vector instructions from a memory; decoding the predicated vector instructions; determining if a mask value of the predicated vector instructions is available and, if the mask value of the predicated vector instructions is not available, predicting the mask value of the predicated vector instructions; and dispatching the predicated vector instructions to only masked vector lanes.
    Type: Application
    Filed: November 18, 2013
    Publication date: May 21, 2015
    Applicant: International Business Machines Corporation
    Inventors: Hung Q. Le, Jose E. Moreira, Pratap C. Pattnaik, Brian W. Thompto, Jessica H. Tseng