Patents by Inventor David A. Hrusecky

David A. Hrusecky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230367595
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions, each of which executes in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks two parts of the unaligned load to be gathered at independent times, wherein said partial result stored at said buffer storage element comprises a first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Application
    Filed: July 25, 2023
    Publication date: November 16, 2023
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David Campbell
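    As a rough illustration of the two-phase gather-and-merge described in the abstract above, the following C sketch models a gather buffer in software. It is illustrative only: the entry count, load width, and all structure and function names are assumptions rather than details from the patent.

      /* Illustrative software model of a gather buffer that collects the two
       * phases of an unaligned load. Entry count, width, and names are
       * hypothetical, not taken from the patent. */
      #include <stdbool.h>
      #include <stdint.h>

      #define GATHER_ENTRIES 8
      #define LOAD_BYTES     16   /* widest load handled by the buffer */

      struct gather_entry {
          bool     valid;              /* entry holds a pending phase-1 result */
          uint32_t itag;               /* instruction tag the entry belongs to */
          uint8_t  data[LOAD_BYTES];   /* partially collected writeback bytes  */
          uint16_t byte_valid;         /* one bit per byte already gathered    */
      };

      static struct gather_entry gbuf[GATHER_ENTRIES];

      /* Phase 1: park the bytes returned by the first cache access. */
      void gather_phase1(uint32_t itag, const uint8_t *bytes, uint16_t byte_mask)
      {
          for (int i = 0; i < GATHER_ENTRIES; i++) {
              if (!gbuf[i].valid) {
                  gbuf[i].valid = true;
                  gbuf[i].itag = itag;
                  gbuf[i].byte_valid = byte_mask;
                  for (int b = 0; b < LOAD_BYTES; b++)
                      if (byte_mask & (1u << b))
                          gbuf[i].data[b] = bytes[b];
                  return;
              }
          }
      }

      /* Phase 2: merge the second access at byte granularity, free the entry,
       * and deliver the fully satisfied load result. */
      bool gather_phase2(uint32_t itag, const uint8_t *bytes, uint16_t byte_mask,
                         uint8_t result[LOAD_BYTES])
      {
          for (int i = 0; i < GATHER_ENTRIES; i++) {
              if (gbuf[i].valid && gbuf[i].itag == itag) {
                  for (int b = 0; b < LOAD_BYTES; b++)
                      result[b] = (byte_mask & (1u << b)) ? bytes[b]
                                                          : gbuf[i].data[b];
                  gbuf[i].valid = false;
                  return true;
              }
          }
          return false;   /* no phase-1 result parked for this instruction tag */
      }

    The per-byte mask lets the second access overlay only the bytes it supplies, which is the byte-granularity merge the abstract refers to.
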
  • Patent number: 11755324
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions, each of which executes in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks two parts of the unaligned load to be gathered at independent times, wherein said partial result stored at said buffer storage element comprises a first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: September 12, 2023
    Assignee: International Business Machines Corporation
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David A. Campbell
  • Patent number: 11748104
    Abstract: Technology for fusing certain load instructions and compare-immediate instructions in a computer processor having a load-store architecture with respect to transferring data between memory and registers of the computer processor. In some embodiments the load and compare-immediate instructions are consecutive. In some embodiments, the instructions are only merged if: (i) the respective RA and RT fields of the two instructions match; (ii) the immediate field of the compare-immediate instruction has a certain value, or falls within a range of certain values; and/or (iii) the instructions are received in a consecutive manner.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: September 5, 2023
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, David A. Hrusecky, Sundeep Chadha, Dung Q. Nguyen, Christian Gerhard Zoellin, Brian W. Thompto, Sheldon Bernard Levenstein, Phillip G. Williams
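    The fusion conditions summarized in the abstract above lend themselves to a simple decode-time predicate. The C sketch below is a hedged illustration: the opcode values, the accepted immediate range, and the reading of condition (i) as "the compare's RA equals the load's RT" are assumptions, not taken from the patent.

      /* Hypothetical decode-time check of the load / compare-immediate fusion
       * conditions; field names and constants are invented for illustration. */
      #include <stdbool.h>
      #include <stdint.h>

      struct decoded_insn {
          uint8_t opcode;   /* simplified opcode id                  */
          uint8_t rt;       /* target register field                 */
          uint8_t ra;       /* base/source register field            */
          int32_t imm;      /* immediate field (compare-imm only)    */
      };

      #define OP_LOAD    0x20
      #define OP_CMP_IMM 0x0B

      /* Returns true when a load and the next compare-immediate may be merged
       * into a single fused internal operation. */
      bool may_fuse(const struct decoded_insn *load,
                    const struct decoded_insn *cmpi,
                    bool consecutive)
      {
          if (!consecutive)                          /* condition (iii) */
              return false;
          if (load->opcode != OP_LOAD || cmpi->opcode != OP_CMP_IMM)
              return false;
          if (cmpi->ra != load->rt)                  /* condition (i): register
                                                        fields must match      */
              return false;
          return cmpi->imm >= -1 && cmpi->imm <= 1;  /* condition (ii): immediate
                                                        in a small range, chosen
                                                        here only for illustration */
      }
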
  • Publication number: 20230063976
    Abstract: A computer system, processor, programming instructions and/or method for managing operations of a gather buffer for a processor core load storage unit. The processor core includes a processing pipeline having one or more execution units for processing unaligned load instructions, each of which executes in two phases to be satisfied. A buffer storage element is provided having a plurality of entries for temporarily collecting partial writeback results retrieved from memory that are associated with first-phase accesses for each of a plurality of unaligned load instructions. An associated logic controller device tracks two parts of the unaligned load to be gathered at independent times, wherein said partial result stored at said buffer storage element comprises a first part of an unaligned load. The second-phase load access for the same instruction is performed independently and later merged with the first part of the load data at byte granularity to satisfy the load.
    Type: Application
    Filed: August 31, 2021
    Publication date: March 2, 2023
    Inventors: Kimberly M. Fernsler, Bryan Lloyd, David A. Hrusecky, David A. Campbell
  • Patent number: 11379241
    Abstract: System includes at least one computer processor having a load store execution unit (LSU) for processing load and store instructions, wherein the LSU includes (a) a store queue having a plurality of entries for storing data, each store queue entry having a data field for storing the data, the data field having a width for storing the data; and (b) a gather buffer for holding data, wherein the processor is configured to: process oversize data larger than the width of the data field of the store queue, and process an oversize load instruction for oversize data by executing two passes through the LSU, a first pass through the LSU configured to store a first portion of the oversize data in the gather buffer and a second pass through the LSU configured to merge the first portion of the oversize data with a second portion of the oversize data.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: July 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Brian Chen, Kimberly M. Fernsler, Robert A. Cordes, David A. Hrusecky
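    A minimal software model of the two-pass handling of oversize data described above might look like the following C sketch; the store-queue width and all names are invented for illustration.

      /* Simplified model of loading a datum wider than one store-queue data
       * field by making two passes through the LSU and merging in a gather
       * buffer. Widths and names are illustrative only. */
      #include <stdint.h>
      #include <string.h>

      #define SQ_DATA_BYTES   8                 /* width of one store-queue data field */
      #define OVERSIZE_BYTES (2 * SQ_DATA_BYTES)

      struct gather_buffer {
          uint8_t bytes[OVERSIZE_BYTES];
      };

      /* Pass 1: only half of the oversize data fits in one pass; park it. */
      void oversize_load_pass1(struct gather_buffer *gb, const uint8_t *lo_half)
      {
          memcpy(gb->bytes, lo_half, SQ_DATA_BYTES);
      }

      /* Pass 2: fetch the remaining half and merge it with the parked portion
       * to produce the full oversize result. */
      void oversize_load_pass2(struct gather_buffer *gb, const uint8_t *hi_half,
                               uint8_t result[OVERSIZE_BYTES])
      {
          memcpy(gb->bytes + SQ_DATA_BYTES, hi_half, SQ_DATA_BYTES);
          memcpy(result, gb->bytes, OVERSIZE_BYTES);
      }
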
  • Patent number: 11321088
    Abstract: A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache, where each first Address Directory entry is addressed by a row and a Way and each row has a plurality of Ways; a store reorder queue for tracking store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way, where the Way bit field acts as a proxy for a larger address, e.g., a real page number.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Samuel David Kirchhoff, Brian Chen, Kimberly M. Fernsler, David A. Hrusecky
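    The Way-bit-field proxy described above can be modeled compactly in C. In the sketch below the queue-entry layout and field widths are assumptions; the point is only that a small row/Way pair stands in for a full real page number when checking whether two accesses touch the same cache line.

      /* Illustrative load-reorder-queue entry that records a small row/Way pair
       * as a proxy for the full real address; widths here are assumptions. */
      #include <stdbool.h>
      #include <stdint.h>

      struct lrq_entry {
          bool     valid;
          uint16_t row;    /* data-cache row (congruence class)              */
          uint8_t  way;    /* Way within the row, fewer than six bits needed */
      };

      /* A later access that hits the same row and Way is treated as touching
       * the same cache line, without comparing full real page numbers. */
      bool lrq_same_line(const struct lrq_entry *e, uint16_t row, uint8_t way)
      {
          return e->valid && e->row == row && e->way == way;
      }

    Comparing a few Way bits plus the row index instead of a wide real address keeps each queue entry small, which appears to be the point of the sub-six-bit field the abstract mentions.
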
  • Patent number: 11314510
    Abstract: A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache, where each first Address Directory entry is addressed by a row and a Way and each row has a plurality of Ways; a store reorder queue for tracking store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way, where the Way bit field acts as a proxy for a larger address, e.g., a real page number.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bryan Lloyd, Samuel David Kirchhoff, Brian Chen, Kimberly M. Fernsler, David A. Hrusecky
  • Patent number: 11263151
    Abstract: Translation lookaside buffer (TLB) invalidation using virtual addresses is provided. A cache is searched for a virtual address matching the input virtual address. Based on a matching virtual address in the cache, the corresponding cache entry is invalidated. The load/store queue is searched for a set and a way corresponding to the set and the way of the invalidated cache entry. Based on an entry in the load/store queue matching the set and the way of the invalidated cache entry, the entry in the load/store queue is marked as pending. Indicating a completion of the TLB invalidate instruction is delayed until all pending entries in the load/store queues are complete.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: March 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Bryan Lloyd, David A. Hrusecky, Kimberly M. Fernsler, Jeffrey A. Stuecheli, Guy L. Guthrie, Samuel David Kirchhoff, Robert A. Cordes, Michael J. Mack, Brian Chen
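    The invalidation flow in the abstract above can be sketched as two C routines over toy data structures; the cache and queue geometries and all names here are assumptions.

      /* Sketch of the TLB-invalidate-by-virtual-address flow over toy data
       * structures; sizes and names are invented for illustration. */
      #include <stdbool.h>
      #include <stdint.h>

      #define CACHE_SETS  64
      #define CACHE_WAYS   4
      #define LSQ_ENTRIES 32

      struct cache_line { bool valid; uint64_t vaddr; };
      struct lsq_entry  { bool valid; bool pending; uint16_t set; uint8_t way; };

      static struct cache_line dcache[CACHE_SETS][CACHE_WAYS];
      static struct lsq_entry  lsq[LSQ_ENTRIES];

      /* Invalidate any cache entry whose virtual address matches, then mark
       * load/store-queue entries that reference the same set and way as pending. */
      void tlb_invalidate_va(uint64_t vaddr)
      {
          for (uint16_t set = 0; set < CACHE_SETS; set++) {
              for (uint8_t way = 0; way < CACHE_WAYS; way++) {
                  if (dcache[set][way].valid && dcache[set][way].vaddr == vaddr) {
                      dcache[set][way].valid = false;
                      for (int i = 0; i < LSQ_ENTRIES; i++)
                          if (lsq[i].valid && lsq[i].set == set && lsq[i].way == way)
                              lsq[i].pending = true;
                  }
              }
          }
      }

      /* Completion of the TLB-invalidate instruction is reported only after
       * every entry marked pending has drained. */
      bool tlb_invalidate_complete(void)
      {
          for (int i = 0; i < LSQ_ENTRIES; i++)
              if (lsq[i].valid && lsq[i].pending)
                  return false;
          return true;
      }
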
  • Publication number: 20220050680
    Abstract: A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache, where each first Address Directory entry is addressed by a row and a Way and each row has a plurality of Ways; a store reorder queue for tracking store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way, where the Way bit field acts as a proxy for a larger address, e.g., a real page number.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 17, 2022
    Inventors: Bryan Lloyd, Samuel David Kirchhoff, Brian Chen, Kimberly M. Fernsler, David A. Hrusecky
  • Publication number: 20220050681
    Abstract: A computer system, processor, and/or load-store unit has a data cache for storing data, the data cache having a plurality of entries to store the data, each data cache entry addressed by a row and a Way, each data cache row having a plurality of the data cache Ways; a first Address Directory organized and arranged the same as the data cache, where each first Address Directory entry is addressed by a row and a Way and each row has a plurality of Ways; a store reorder queue for tracking store instructions; and a load reorder queue for tracking load instructions. Each of the load and store reorder queues has a Way bit field, preferably less than six bits, for identifying the data cache Way and/or a first Address Directory Way, where the Way bit field acts as a proxy for a larger address, e.g., a real page number.
    Type: Application
    Filed: August 25, 2020
    Publication date: February 17, 2022
    Inventors: Bryan Lloyd, Samuel David Kirchhoff, Brian Chen, Kimberly M. Fernsler, David A. Hrusecky
  • Publication number: 20220035631
    Abstract: System includes at least one computer processor having a load store execution unit (LSU) for processing load and store instructions, wherein the LSU includes (a) a store queue having a plurality of entries for storing data, each store queue entry having a data field for storing the data, the data field having a width for storing the data; and (b) a gather buffer for holding data, wherein the processor is configured to: process oversize data larger than the width of the data field of the store queue, and process an oversize load instruction for oversize data by executing two passes through the LSU, a first pass through the LSU configured to store a first portion of the oversize data in the gather buffer and a second pass through the LSU configured to merge the first portion of the oversize data with a second portion of the oversize data.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 3, 2022
    Inventors: Bryan Lloyd, Brian Chen, Kimberly M. Fernsler, Robert A. Cordes, David A. Hrusecky
  • Publication number: 20220035634
    Abstract: Technology for fusing certain load instructions and compare-immediate instructions in a computer processor having a load-store architecture with respect to transferring data between memory and registers of the computer processor. In some embodiments the load and compare-immediate instructions are consecutive. In some embodiments, the instructions are only merged if: (i) the respective RA and RT fields of the two instructions match; (ii) the immediate field of the compare-immediate instruction has a certain value, or falls within a range of certain values; and/or (iii) the instructions are received in a consecutive manner.
    Type: Application
    Filed: July 29, 2020
    Publication date: February 3, 2022
    Inventors: Bryan Lloyd, David A. Hrusecky, Sundeep Chadha, Dung Q. Nguyen, Christian Gerhard Zoellin, Brian W. Thompto, Sheldon Bernard Levenstein, Phillip G. Williams
  • Publication number: 20220035748
    Abstract: Translation lookaside buffer (TLB) invalidation using virtual addresses is provided. A cache is searched for a virtual address matching the input virtual address. Based on a matching virtual address in the cache, the corresponding cache entry is invalidated. The load/store queue is searched for a set and a way corresponding to the set and the way of the invalidated cache entry. Based on an entry in the load/store queue matching the set and the way of the invalidated cache entry, the entry in the load/store queue is marked as pending. Indicating a completion of the TLB invalidate instruction is delayed until all pending entries in the load/store queues are complete.
    Type: Application
    Filed: July 29, 2020
    Publication date: February 3, 2022
    Inventors: David Campbell, Bryan Lloyd, David A. Hrusecky, Kimberly M. Fernsler, Jeffrey A. Stuecheli, Guy L. Guthrie, Samuel David Kirchhoff, Robert A. Cordes, Michael J. Mack, Brian Chen
  • Patent number: 11061810
    Abstract: A system and method of stopping program execution includes tagging an entry in a virtual cache with an indicator bit where the virtual address of the entry corresponds to a virtual address range in a breakpoint register; in response to a second virtual cache data access demand matching the entry tagged with the indicator bit, determining whether the second data access demand matches the virtual address range of the breakpoint register; and in response to the second data access demand matching the virtual address range of the breakpoint register, flagging an exception and stopping execution of the program. In an embodiment, the method or system enters a slow mode in response to the second data access demand matching the virtual cache entry with the indicator bit, and performs a full comparison between the second data access demand and the breakpoint register virtual address range.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: July 13, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Campbell, Dwain A. Hicks, David A. Hrusecky, Bryan Lloyd
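    A toy C model of the indicator-bit scheme described above is shown below. Treating the tag as an overlap test and the fallback as a full range compare is one reading of the abstract; the structure names and the slow-path placement are assumptions.

      /* Toy model of the breakpoint indicator bit: a virtual-cache entry that
       * overlaps the breakpoint range is tagged, and only tagged hits pay for
       * the full range compare. All names here are hypothetical. */
      #include <stdbool.h>
      #include <stdint.h>

      struct bp_register { uint64_t lo, hi; };   /* breakpoint VA range */

      struct vcache_entry {
          bool     valid;
          bool     bp_hint;                      /* indicator bit        */
          uint64_t base;                         /* line base address    */
          uint64_t size;                         /* bytes covered        */
      };

      /* Set the indicator bit when the entry's address range overlaps the
       * breakpoint register range (done when the entry is installed). */
      void tag_entry(struct vcache_entry *e, const struct bp_register *bp)
      {
          e->bp_hint = e->valid && e->base <= bp->hi && bp->lo < e->base + e->size;
      }

      /* On a data access that hits a tagged entry, fall back to the full
       * ("slow mode") comparison; return true to flag an exception and stop
       * execution of the program. */
      bool access_raises_breakpoint(const struct vcache_entry *e,
                                    uint64_t access_va,
                                    const struct bp_register *bp)
      {
          if (!e->bp_hint)
              return false;                      /* fast path: no full compare */
          return access_va >= bp->lo && access_va <= bp->hi;
      }
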
  • Patent number: 10884742
    Abstract: Handling unaligned load operations, including: receiving a request to load data stored within a range of addresses; determining that the range of addresses includes addresses associated with a plurality of caches, wherein each of the plurality of caches is associated with a distinct processor slice; issuing, to each distinct processor slice, a request to load data stored within a cache associated with the distinct processor slice, wherein the request to load data stored within the cache associated with the distinct processor slice includes a portion of the range of addresses; executing, by each distinct processor slice, the request to load data stored within the cache associated with the distinct processor slice; and receiving, over a plurality of data communications busses, execution results from each distinct processor slice, wherein each data communications bus is associated with one of the distinct processor slices.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Sundeep Chadha, Robert A. Cordes, David A. Hrusecky, Hung Q. Le, Jentje Leenstra, Dung Q. Nguyen, Brian W. Thompto, Albert J. Van Norstrand, Jr.
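    The slice-splitting behaviour described above can be approximated with the following C sketch, which assumes (purely for illustration) that ownership of the address space alternates between two slices every SLICE_SPAN bytes and that a request crosses at most one slice boundary.

      /* Illustrative decomposition of a load whose address range spans the
       * caches of two processor slices. The slice geometry, the backing
       * arrays, and the helper are stand-ins invented for this sketch. */
      #include <stddef.h>
      #include <stdint.h>

      #define SLICE_SPAN 16u   /* toy rule: slice ownership alternates every 16 bytes */

      static uint8_t slice_mem[2][1024];   /* toy backing storage, one array per slice */

      /* Stand-in for a per-slice cache port: copy bytes from the slice's storage. */
      static void slice_cache_read(unsigned slice, uint64_t addr, size_t len,
                                   uint8_t *out)
      {
          for (size_t i = 0; i < len; i++)
              out[i] = slice_mem[slice][(addr + i) % sizeof slice_mem[slice]];
      }

      /* Split a request at the slice boundary, issue one partial request per
       * slice, and stitch the returned portions back together. Assumes the
       * range crosses at most one boundary. */
      void unaligned_load(uint64_t addr, size_t len, uint8_t *out)
      {
          uint64_t boundary = (addr / SLICE_SPAN + 1) * SLICE_SPAN;

          if (addr + len <= boundary) {                      /* single slice */
              slice_cache_read((unsigned)(addr / SLICE_SPAN) & 1u, addr, len, out);
              return;
          }
          size_t lo_len = (size_t)(boundary - addr);         /* first slice's share */
          slice_cache_read((unsigned)(addr / SLICE_SPAN) & 1u, addr, lo_len, out);
          slice_cache_read((unsigned)(boundary / SLICE_SPAN) & 1u, boundary,
                           len - lo_len, out + lo_len);
      }
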
  • Patent number: 10831481
    Abstract: Handling unaligned load operations, including: receiving a request to load data stored within a range of addresses; determining that the range of addresses includes addresses associated with a plurality of caches, wherein each of the plurality of caches is associated with a distinct processor slice; issuing, to each distinct processor slice, a request to load data stored within a cache associated with the distinct processor slice, wherein the request to load data stored within the cache associated with the distinct processor slice includes a portion of the range of addresses; executing, by each distinct processor slice, the request to load data stored within the cache associated with the distinct processor slice; and receiving, over a plurality of data communications busses, execution results from each distinct processor slice, wherein each data communications bus is associated with one of the distinct processor slices.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Sundeep Chadha, Robert A. Cordes, David A. Hrusecky, Hung Q. Le, Jentje Leenstra, Dung Q. Nguyen, Brian W. Thompto, Albert J. Van Norstrand
  • Patent number: 10761854
    Abstract: Preventing hazard flushes in an instruction sequencing unit of a multi-slice processor including receiving a load instruction in a load reorder queue, wherein the load instruction is an instruction to load data from a memory location; subsequent to receiving the load instruction, receiving a store instruction in a store reorder queue, wherein the store instruction is an instruction to store data in the memory location; determining that the store instruction causes a hazard against the load instruction; preventing a flush of the load reorder queue based on a state of the load instruction; and re-executing the load instruction.
    Type: Grant
    Filed: April 19, 2016
    Date of Patent: September 1, 2020
    Assignee: International Business Machines Corporation
    Inventors: Robert A. Cordes, David A. Hrusecky, Elizabeth A. McGlone
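    One way to read the flush-avoidance decision above is as a small state-dependent choice between flushing and re-executing the load. The C sketch below is speculative: the load states and the exact condition for avoiding the flush are assumptions, not details from the patent.

      /* Sketch of the flush-avoidance decision for a store-hit-load hazard;
       * states and names are invented for illustration. */
      #include <stdbool.h>
      #include <stdint.h>

      enum load_state { LOAD_ISSUED, LOAD_DATA_RETURNED, LOAD_FINISHED };

      struct lrq_entry {
          bool            valid;
          uint64_t        addr;        /* memory location being loaded */
          enum load_state state;
      };

      enum hazard_action { NO_HAZARD, FLUSH_PIPELINE, REEXECUTE_LOAD };

      /* Called when a store to 'store_addr' enters the store reorder queue and
       * is checked against a load already tracked in the load reorder queue. */
      enum hazard_action resolve_store_hit_load(const struct lrq_entry *load,
                                                uint64_t store_addr)
      {
          if (!load->valid || load->addr != store_addr)
              return NO_HAZARD;
          /* If the load has not yet delivered its result to dependents, the
           * flush can be avoided: the load is simply re-executed so it picks
           * up the store's data. */
          if (load->state != LOAD_FINISHED)
              return REEXECUTE_LOAD;
          return FLUSH_PIPELINE;
      }
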
  • Publication number: 20200272557
    Abstract: A system and method of stopping program execution includes tagging an entry in a virtual cache with an indicator bit where the virtual address of the entry corresponds to a virtual address range in a breakpoint register; in response to a second virtual cache data access demand matching the entry tagged with the indicator bit, determining whether the second data access demand matches the virtual address range of the breakpoint register; and in response to the second data access demand matching the virtual address range of the breakpoint register, flagging an exception and stopping execution of the program. In an embodiment, the method or system enters a slow mode in response to the second data access demand matching the virtual cache entry with the indicator bit, and performs a full comparison between the second data access demand and the breakpoint register virtual address range.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: David Campbell, Dwain A. Hicks, David A. Hrusecky, Bryan Lloyd
  • Patent number: 10606600
    Abstract: Techniques are disclosed for receiving an instruction for processing data that includes a plurality of sectors. A method includes decoding the instruction to determine which of the plurality of sectors are needed to process the instruction and fetching at least one of the plurality of sectors from memory. The method includes determining whether each sector that is needed to process the instruction has been fetched. If all sectors needed to process the instruction have been fetched, the method includes transmitting a sector valid signal and processing the instruction. If all sectors needed to process the instruction have not been fetched, the method includes blocking a data valid signal from being transmitted, fetching an additional one or more of the plurality of sectors until all sectors needed to process the instruction have been fetched, transmitting a sector valid signal, and reissuing and processing the instruction using the fetched sectors.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventor: David A. Hrusecky
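    The sector-gating logic described above reduces to a pair of mask operations. The sketch below is illustrative; the sector count and signal names are invented.

      /* Toy model of gating instruction processing on a per-sector valid mask;
       * the sector count and names are illustrative only. */
      #include <stdbool.h>
      #include <stdint.h>

      #define NUM_SECTORS 4

      struct line_state {
          uint8_t fetched_mask;   /* bit i set once sector i has been fetched */
      };

      /* Record the arrival of one more sector from memory. */
      void sector_arrived(struct line_state *line, unsigned sector)
      {
          if (sector < NUM_SECTORS)
              line->fetched_mask |= (uint8_t)(1u << sector);
      }

      /* Decide whether the instruction can proceed. 'needed_mask' comes from
       * decoding the instruction; the data-valid indication is suppressed
       * until every needed sector is present, after which a sector-valid
       * signal is sent and the instruction is reissued. */
      bool sectors_ready(const struct line_state *line, uint8_t needed_mask)
      {
          return (line->fetched_mask & needed_mask) == needed_mask;
      }
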
  • Patent number: 10564978
    Abstract: Operation of a multi-slice processor that includes a plurality of execution slices and a plurality of load/store slices, where each load/store slice includes a load miss queue and a load reorder queue, includes: receiving, at a load reorder queue, a load instruction requesting data; responsive to the data not being stored in a data cache, determining whether a previous load instruction is pending a fetch of a cache line comprising the data; if the cache line does not comprise the data, allocating an entry for the load instruction in the load miss queue; and if the cache line does comprise the data: merging, in the load reorder queue, the load instruction with an entry for the previous load instruction.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: February 18, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kimberly M. Fernsler, David A. Hrusecky, Hung Q. Le, Elizabeth A. McGlone, Brian W. Thompto
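    The miss-handling decision in the abstract above maps naturally onto a small C routine: a new miss either merges with an outstanding line fill or allocates a load-miss-queue entry. Sizes and names below are assumptions made only for this sketch.

      /* Sketch of the miss-handling decision: a load that misses the data
       * cache either rides an already-pending line fill or allocates its own
       * load-miss-queue entry. All structures are simplified stand-ins. */
      #include <stdbool.h>
      #include <stdint.h>

      #define LMQ_ENTRIES 8
      #define LINE_BYTES  128

      struct lmq_entry { bool valid; uint64_t line_addr; };

      static struct lmq_entry lmq[LMQ_ENTRIES];

      static uint64_t line_of(uint64_t addr)
      {
          return addr & ~(uint64_t)(LINE_BYTES - 1);   /* cache-line base address */
      }

      /* Returns true if the load was merged with an outstanding fetch of the
       * same cache line (no new LMQ entry needed); otherwise allocates one. */
      bool handle_dcache_miss(uint64_t load_addr)
      {
          uint64_t line = line_of(load_addr);

          for (int i = 0; i < LMQ_ENTRIES; i++)
              if (lmq[i].valid && lmq[i].line_addr == line)
                  return true;             /* merge with the previous load's fetch */

          for (int i = 0; i < LMQ_ENTRIES; i++) {
              if (!lmq[i].valid) {         /* allocate a fresh miss-queue entry */
                  lmq[i].valid = true;
                  lmq[i].line_addr = line;
                  break;
              }
          }
          return false;
      }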