Patents Assigned to Ventana Micro Systems Inc.
  • Patent number: 12282430
    Abstract: A microprocessor includes a macro-op (MOP) cache (MOC) comprising a set-associative MOC tag RAM (MTR) and a MOC data RAM (MDR) managed as a pool of MDR entries. A MOC entry (ME) comprises one MTR entry and one or more MDR entries that hold the MOPs of the ME. The MDR entries of the ME have a program order. Each MDR entry holds MOPs and a next MDR entry pointer. Each MTR entry holds initial MDR entry pointers and specifies the number of the MDR entries of the ME. During ME allocation, the MOC populates the MDR entry pointers to point to the MDR entries based on the program order. In response to an access that hits upon an MTR entry, the MOC fetches the MDR entries according to the program order initially using the initial pointers and subsequently using the next pointers.
    Type: Grant
    Filed: October 13, 2023
    Date of Patent: April 22, 2025
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael
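The linked MDR-entry scheme in the abstract above can be sketched in software. This is a minimal illustration, not the patented implementation: all class and field names are invented, the set-associative MTR is reduced to a dictionary, and the multiple initial pointers are simplified to a single one. The real structures are hardware RAMs, not Python objects.

```python
class MDREntry:
    def __init__(self, mops, next_ptr=None):
        self.mops = mops          # macro-ops held by this entry
        self.next_ptr = next_ptr  # pool index of the next MDR entry in program order

class MTREntry:
    def __init__(self, tag, initial_ptrs, num_mdr_entries):
        self.tag = tag
        self.initial_ptrs = initial_ptrs        # first MDR entry pointer(s)
        self.num_mdr_entries = num_mdr_entries  # how many MDR entries the ME spans

class MOC:
    def __init__(self, mdr_pool_size=8):
        self.mtr = {}                           # tag -> MTREntry (associativity elided)
        self.mdr = [None] * mdr_pool_size       # MDR managed as a pool of entries
        self.free = list(range(mdr_pool_size))  # free list of MDR slots

    def allocate(self, tag, mop_groups):
        """Allocate one ME: one MTR entry plus linked MDR entries in program order."""
        slots = [self.free.pop(0) for _ in mop_groups]
        for i, (slot, mops) in enumerate(zip(slots, mop_groups)):
            nxt = slots[i + 1] if i + 1 < len(slots) else None
            self.mdr[slot] = MDREntry(mops, nxt)
        self.mtr[tag] = MTREntry(tag, slots[:1], len(slots))

    def fetch(self, tag):
        """On an MTR hit, walk the MDR entries: initial pointer first, then next pointers."""
        me = self.mtr.get(tag)
        if me is None:
            return None  # MOC miss
        mops, ptr = [], me.initial_ptrs[0]
        for _ in range(me.num_mdr_entries):
            entry = self.mdr[ptr]
            mops.extend(entry.mops)
            ptr = entry.next_ptr
        return mops
```

The pool-plus-pointer arrangement lets MEs of different sizes share one data RAM without fragmenting it into fixed-size slots per tag.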
  • Patent number: 12253951
    Abstract: A microprocessor includes execution units that execute macro-operations (MOPs), a decode unit that decodes architectural instructions into MOPs, an instruction fetch unit (IFU) having an instruction cache that caches architectural instructions and a macro-operation cache (MOC) that caches MOPs into which the architectural instructions are decoded. A prediction unit (PRU) predicts a series of fetch blocks (FBs) in a program instruction stream to be fetched by the IFU from the MOC on a hit or from the instruction cache otherwise. A branch target buffer (BTB) caches information about previously fetched and decoded FBs. A counter of each BTB entry is incremented when the entry predicts the associated FB is present again. For each FB in the series, the PRU indicates whether the counter has exceeded a threshold, for use in deciding whether to allocate the MOPs into the MOC in response to an instance of decoding the instructions into the MOPs.
    Type: Grant
    Filed: August 30, 2023
    Date of Patent: March 18, 2025
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael
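The BTB counter policy described above amounts to a hotness filter: only fetch blocks seen repeatedly earn a slot in the MOC. A minimal sketch, with an assumed threshold value and invented names (the real counter lives in each BTB entry in hardware):

```python
HOT_THRESHOLD = 2  # assumed value, for illustration only

class BTBEntry:
    def __init__(self):
        self.count = 0   # incremented each time this FB is predicted again

class BTB:
    def __init__(self):
        self.entries = {}

    def predict(self, fb_addr):
        """Look up a fetch block, bump its repeat counter, and report
        whether it is now hot enough to allocate its MOPs into the MOC."""
        entry = self.entries.setdefault(fb_addr, BTBEntry())
        entry.count += 1
        return entry.count > HOT_THRESHOLD

btb = BTB()
decisions = [btb.predict(0x1000) for _ in range(4)]
```

Filtering cold code this way avoids polluting the MOC with MOPs that will never be re-fetched.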
  • Patent number: 12229252
    Abstract: A processor and a method are disclosed that mitigate side channel attacks (SCAs) that exploit store-to-load forwarding operations. In one embodiment, the processor detects a translation context change from a first translation context (TC) to a second TC and responsively disallows store-to-load forwarding until all store instructions older than the TC change are committed. The TC comprises an address space identifier (ASID), a virtual machine identifier (VMID), a privilege mode (PM) or a combination of two or more of the ASID, VMID and PM, or a derivative thereof, such as a TC hash, TC generation value, or a RobID associated with the last TC-updating instruction. In other embodiments, TC generation values of load and store instructions are compared or RobIDs of the load and store instructions are compared with the RobID associated with the last TC-updating instruction. If the instructions' RobIDs straddle the TC boundary, store-to-load forwarding is not allowed.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: February 18, 2025
    Assignee: Ventana Micro Systems Inc.
    Inventor: John G. Favor
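The first embodiment above (disallow forwarding after a TC change until older stores commit) can be sketched with a generation counter. All names and the queue structure are hypothetical; the point is only the gating condition:

```python
class LSU:
    def __init__(self):
        self.tc_gen = 0        # translation-context generation value
        self.store_queue = []  # in-flight stores: {"tc_gen": ..., "committed": ...}

    def change_tc(self):
        """A TC change (e.g. new ASID, VMID, or privilege mode) bumps the generation."""
        self.tc_gen += 1

    def execute_store(self):
        self.store_queue.append({"tc_gen": self.tc_gen, "committed": False})

    def commit_oldest_store(self):
        for s in self.store_queue:
            if not s["committed"]:
                s["committed"] = True
                return

    def may_forward(self):
        """Store-to-load forwarding is allowed only if every uncommitted
        store belongs to the current translation context; stores older
        than the TC change must commit first."""
        return all(s["committed"] or s["tc_gen"] == self.tc_gen
                   for s in self.store_queue)
```

This closes the SCA window in which a load in one context could observe forwarded data from a store in another.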
  • Patent number: 12216583
    Abstract: A microprocessor includes a macro-op (MOP) cache (MOC) comprising a set-associative MOC tag RAM (MTR) and a MOC data RAM (MDR) managed as a pool of MDR entries. A MOC entry (ME) comprises one MTR entry and one or more MDR entries that hold the MOPs of the ME. The MDR entries of the ME have a program order. Each MDR entry holds MOPs and a next MDR entry pointer. Each MTR entry holds initial MDR entry pointers and specifies the number of the MDR entries of the ME. During ME allocation, the MOC populates the MDR entry pointers to point to the MDR entries based on the program order. In response to an access that hits upon an MTR entry, the MOC fetches the MDR entries according to the program order initially using the initial pointers and subsequently using the next pointers.
    Type: Grant
    Filed: October 13, 2023
    Date of Patent: February 4, 2025
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael
  • Publication number: 20250021155
    Abstract: A power management system provides differentiated power scaling using multiple activity monitors and a differentiated power manager. Each activity monitor counts performance benefit estimation events of each of multiple devices to determine performance benefit estimate values, each a measure of the relative power utilization efficiency of the corresponding device. The differentiated power manager periodically evaluates the performance benefit estimate values and dynamically adjusts the relative power provided to each device to achieve differentiated power scaling based on those values. Additional power available during a higher performance mode may be distributed only to the devices with higher performance benefit estimate values. During any mode of operation, power provided to devices with lower performance benefit estimate values may be redirected to devices with higher values.
    Type: Application
    Filed: August 16, 2023
    Publication date: January 16, 2025
    Applicant: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
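One way to read the redistribution policy above is as a proportional split of a fixed power budget by benefit estimate. This sketch is an assumption about the policy shape, not the patented algorithm; device names, values, and the proportional rule are all invented:

```python
def scale_power(budget, benefit):
    """Split `budget` across devices in proportion to each device's
    performance-benefit estimate; power that would go to zero-benefit
    devices is redirected to the others."""
    total = sum(benefit.values())
    if total == 0:
        even = budget / len(benefit)   # no information: split evenly
        return {dev: even for dev in benefit}
    return {dev: budget * b / total for dev, b in benefit.items()}

# Hypothetical benefit estimates gathered by the activity monitors:
alloc = scale_power(100.0, {"core0": 30, "core1": 60, "uncore": 10})
```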
  • Patent number: 12182019
    Abstract: An L2 set-associative cache is inclusive of an L1 cache. Each entry of a load queue holds a load physical address proxy (PAP) for a load physical memory line address (PMLA) rather than the load PMLA itself. The load PAP comprises the set index and the way that uniquely identifies the L2 entry that holds a memory line specified by the load PMLA. Each load queue entry indicates whether the load instruction has completed execution. The microprocessor removes a memory line at a removal PMLA from an L2 entry and forms a removal PAP as a proxy for the removal PMLA. The removal PAP comprises a set index and a way that uniquely identifies the removed entry. The microprocessor snoops the load queue with the removal PAP to determine whether the removal PAP matches one or more load PAPs in one or more load queue entries associated with one or more load instructions that have completed execution and, if so, signals an abort request.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: December 31, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
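The PAP idea above is worth making concrete: because the L2 is inclusive, the pair (set index, way) uniquely names the L2 entry holding a line, so it can stand in for the full physical memory line address everywhere comparisons are needed. A sketch with invented geometry (the real field widths and queue format are not given in the abstract):

```python
L2_SETS, L2_WAYS = 1024, 8   # assumed L2 geometry, for illustration

def make_pap(set_index, way):
    """Pack set index and way into a small proxy that uniquely
    identifies one L2 entry (and hence one resident memory line)."""
    assert 0 <= set_index < L2_SETS and 0 <= way < L2_WAYS
    return set_index * L2_WAYS + way

def snoop_load_queue(load_queue, removal_pap):
    """On removing a line from the L2, snoop the load queue: a match
    against a *completed* load means that load may have consumed stale
    data, so an abort request is signaled."""
    return any(e["pap"] == removal_pap and e["completed"] for e in load_queue)

lq = [{"pap": make_pap(5, 2), "completed": True},
      {"pap": make_pap(9, 0), "completed": False}]
```

Comparing, say, 13-bit PAPs instead of 40-plus-bit physical addresses is the area and timing win the proxy buys.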
  • Patent number: 12118360
    Abstract: A microprocessor includes a prediction unit (PRU) comprising a branch target buffer (BTB). Each BTB entry is associated with a fetch block (FBlk) (sequential set of instructions starting at a fetch address (FA)) having a length (no longer than a predetermined maximum length) and termination type. The termination type is from a list comprising: a sequential termination type indicating that a FA of a next FBlk in program order is sequential to a last instruction of the FBlk, and one or more non-sequential termination types. The PRU uses the FA of a current FBlk to generate a current BTB lookup value, looks up the current BTB lookup value, and in response to a miss, predicts the current FBlk has the predetermined maximum length and sequential termination type. An instruction fetch unit uses the current FA and predicted predetermined maximum length to fetch the current FBlk from an instruction cache.
    Type: Grant
    Filed: January 5, 2023
    Date of Patent: October 15, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael
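The miss-path policy above (assume maximum length and sequential termination when the BTB has no entry) is simple enough to sketch. The constant, the entry format, and the termination names are assumptions for illustration:

```python
MAX_FBLK_LEN = 64  # assumed maximum fetch-block length in bytes

def predict(btb, fetch_addr):
    """Predict the current fetch block's length, termination type, and
    the next fetch address. On a BTB miss, fall back to the default:
    maximum length, sequential termination."""
    entry = btb.get(fetch_addr)
    if entry is None:
        return {"length": MAX_FBLK_LEN, "term": "sequential",
                "next_fa": fetch_addr + MAX_FBLK_LEN}
    if entry["term"] == "sequential":
        next_fa = fetch_addr + entry["length"]
    else:
        next_fa = entry["target"]   # non-sequential: use cached target
    return {"length": entry["length"], "term": entry["term"], "next_fa": next_fa}
```

The default keeps the fetch pipeline streaming on cold code instead of stalling for decode to discover the block boundary.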
  • Patent number: 12117937
    Abstract: A microprocessor includes a virtually-indexed L1 data cache that has an allocation policy that permits multiple synonyms to be co-resident. Each L2 entry is uniquely identified by a set index and a way number. A store unit, during a store instruction execution, receives a store physical address proxy (PAP) for a store physical memory line address (PMLA) from an L1 entry hit upon by a store virtual address, and writes the store PAP to a store queue entry. The store PAP comprises the set index and the way number of an L2 entry that holds a line specified by the store PMLA. The store unit, during the store commit, reads the store PAP from the store queue, looks up the store PAP in the L1 to detect synonyms, writes the store data to one or more of the detected synonyms, and evicts the non-written detected synonyms.
    Type: Grant
    Filed: January 8, 2024
    Date of Patent: October 15, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
  • Patent number: 12118076
    Abstract: A physically-tagged data cache memory mitigates side channel attacks by using a translation context (TC). With each entry allocation, control logic uses the received TC to perform the allocation, and with each access uses the received TC in a hit determination. The TC includes an address space identifier (ASID), virtual machine identifier (VMID), a privilege mode (PM) or translation regime (TR), or combination thereof. The TC is included in a tag of the allocated entry. Alternatively, or additionally, the TC is included in the set index to select a set of entries of the cache memory. Also, the TC may be hashed with address index bits to generate a small tag also included in the allocated entry used to generate an access early miss indication and way select.
    Type: Grant
    Filed: April 3, 2023
    Date of Patent: October 15, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12106111
    Abstract: A prediction unit includes a first predictor that provides an output comprising a hashed fetch address of a current fetch block in response to an input. The first predictor input comprises a hashed fetch address of a previous fetch block that immediately precedes the current fetch block in program execution order. A second predictor provides an output comprising a fetch address of a next fetch block that immediately succeeds the current fetch block in program execution order in response to an input. The second predictor input comprises the hashed fetch address of the current fetch block output by the first predictor.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: October 1, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael
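The two-stage chain above (hash-to-hash in the first predictor, hash-to-address in the second) can be shown in miniature. The hash function, table layout, and training interface here are all invented; only the chaining of the first predictor's output into the second predictor's input follows the abstract:

```python
def hash_fa(fa):
    """Small illustrative hash of a fetch address (real width unknown)."""
    return (fa ^ (fa >> 7)) & 0xFF

class PredictionUnit:
    def __init__(self):
        self.first = {}    # hash(previous FB) -> hash(current FB)
        self.second = {}   # hash(current FB)  -> next FB fetch address

    def train(self, prev_fa, cur_fa, next_fa):
        self.first[hash_fa(prev_fa)] = hash_fa(cur_fa)
        self.second[hash_fa(cur_fa)] = next_fa

    def predict_next(self, prev_fa):
        """Chain: previous block's hash selects the current block's
        hash, which in turn selects the next block's fetch address."""
        cur_hash = self.first.get(hash_fa(prev_fa))
        if cur_hash is None:
            return None
        return self.second.get(cur_hash)
```

Keying the first stage on a small hash rather than a full address is what lets it produce an output in a single cycle ahead of the larger second-stage lookup.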
  • Patent number: 12099448
    Abstract: A cache memory subsystem includes virtually-indexed L1 and PIPT L2 set-associative caches having an inclusive allocation policy such that: when a first copy of a memory line specified by a physical memory line address (PMLA) is allocated into an L1 entry, a second copy of the line is also allocated into an L2 entry; when the second copy is evicted, the first copy is also evicted. For each value of the PMLA, the second copy can be allocated into only one L2 set, and an associated physical address proxy (PAP) for the PMLA includes a set index and way number that uniquely identifies the entry. For each value of the PMLA there exist two or more different L1 sets into which the first copy can be allocated, and when the L2 evicts the second copy, the L1 uses the PAP of the PMLA to evict the first copy.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: September 24, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
  • Patent number: 12093179
    Abstract: A microprocessor includes a load/store unit that performs store-to-load forwarding, a PIPT L2 set-associative cache, a store queue having store entries, and a load queue having load entries. Each L2 entry is uniquely identified by a set index and a way. Each store/load entry holds, for an associated store/load instruction, a store/load physical address proxy (PAP) for a store/load physical memory line address (PMLA). The store/load PAP specifies the set index and the way of the L2 entry into which a cache line specified by the store/load PMLA is allocated. Each load entry also holds associated load instruction store-to-load forwarding information. The load/store unit compares the store PAP with the load PAP of each valid load entry whose associated load instruction is younger in program order than the store instruction and uses the comparison and associated forwarding information to check store-to-load forwarding correctness with respect to each younger load instruction.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 17, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
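The commit-time check above can be sketched as a PAP comparison against younger loads. This is deliberately simplified: it treats "same line" and "should have forwarded" as equivalent, ignoring byte overlap and intervening stores, and every field name is invented:

```python
def check_forwarding(store, load_queue):
    """At store commit, compare the store's PAP against every younger
    load's PAP. Return the ROB IDs of loads whose recorded forwarding
    behavior is inconsistent with the committing store (simplified:
    same-line and forwarded-from-this-store must agree)."""
    bad = []
    for load in load_queue:
        if load["rob_id"] <= store["rob_id"]:
            continue  # only loads younger in program order are checked
        same_line = load["pap"] == store["pap"]
        forwarded = load["fwd_from"] == store["rob_id"]
        if same_line != forwarded:
            bad.append(load["rob_id"])
    return bad
```

A non-empty result would trigger recovery (e.g. re-executing the offending loads), since they consumed or missed store data incorrectly.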
  • Patent number: 12086245
    Abstract: A processor for mitigating side channel attacks includes units that perform fetch, decode, and execution of instructions and pipeline control logic. The processor performs speculative and out-of-order execution of the instructions. The units detect and notify the pipeline control logic of events that cause a change from a first translation context (TC) to a second TC. In response, the pipeline control logic prevents speculative execution of instructions that are dependent in their execution on the change to the second TC until all instructions that are dependent on the first TC have completed execution, which may involve stalling their dispatch until all first-TC-dependent instructions have at least completed execution, or tagging them and dispatching them to execution schedulers but preventing them from starting execution until all first-TC-dependent instructions have at least completed execution.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: September 10, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, David S. Oliver
  • Patent number: 12086063
    Abstract: Each load/store queue entry holds a load/store physical address proxy (PAP) for use as a proxy for a load/store physical memory line address (PMLA). The load/store PAP comprises a set index and a way that uniquely identifies an L2 cache entry holding a memory line at the load/store PMLA when an L1 cache provides the load/store PAP during the load/store instruction execution. The microprocessor removes a line at a removal PMLA from an L2 entry, forms a removal PAP as a proxy for the removal PMLA that comprises a set index and a way, snoops the load/store queue with the removal PAP to determine whether the removal PAP is being used as a proxy for the removal PMLA, fills the removed entry with a line at a fill PMLA, and prevents the removal PAP from being used as a proxy for the removal PMLA and the fill PMLA concurrently.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 10, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
  • Patent number: 12079126
    Abstract: A microprocessor includes a cache memory, a store queue, and a load/store unit. Each entry of the store queue holds store data associated with a store instruction. The load/store unit, during execution of a load instruction, makes a determination that an entry of the store queue holds store data that includes some but not all bytes of load data requested by the load instruction, cancels execution of the load instruction in response to the determination, and writes to an entry of a structure from which the load instruction is subsequently issuable for re-execution an identifier of a store instruction that is older in program order than the load instruction and an indication that the load instruction is not eligible to re-execute until the identified older store instruction updates the cache memory with store data.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 3, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
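The partial-overlap case above reduces to a three-way outcome per store-queue entry, sketched here with byte sets standing in for address/size comparisons (the representation and names are illustrative only):

```python
def try_load(load_bytes, store_entry):
    """Compare the bytes a load requests against the bytes a store-queue
    entry holds. Returns:
      ("forward", data)     - store covers all requested bytes
      ("cancel", store_id)  - partial overlap: cancel the load; it may
                              re-execute only after this store updates
                              the cache
      ("miss", None)        - no overlap with this entry"""
    overlap = load_bytes & store_entry["bytes"]
    if not overlap:
        return ("miss", None)
    if overlap == load_bytes:
        return ("forward", store_entry["data"])
    return ("cancel", store_entry["id"])
```

Recording the older store's identifier with the cancelled load is what lets the scheduler hold the load ineligible until exactly that store has written the cache, rather than replaying it blindly.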
  • Patent number: 12079129
    Abstract: A microprocessor includes a physically-indexed, physically-tagged second-level set-associative cache. A set index and a way uniquely identify each entry. A load/store unit, during store/load instruction execution, detects that first and second portions of the store/load data are to be written/read to/from different first and second memory lines specified by first and second physical memory line addresses, and writes to a store/load queue entry first and second physical address proxies (PAPs) for those addresses (and, in the store execution case, all of the store data). The first and second PAPs comprise respective set indexes and ways that uniquely identify the respective second-level cache entries holding copies of the respective first and second memory lines. The store queue entries have no storage for holding the first and second physical memory line addresses themselves.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: September 3, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12073220
    Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: August 27, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12061555
    Abstract: A load/store circuit performs a first lookup of a load virtual address in a virtually-indexed, virtually-tagged first-level data cache (VIVTFLDC) that misses and generates a fill request that causes translation of the load virtual address into a load physical address, receives a response that indicates the load physical address is in a non-cacheable memory region and is without data from the load physical address, allocates a VIVTFLDC data-less entry that includes an indication that the data-less entry is associated with a non-cacheable memory region, performs a second lookup of the load virtual address in the VIVTFLDC and determines the load virtual address hits on the data-less entry, determines from the hit data-less entry it is associated with a non-cacheable memory region, and generates a read request to read data from a processor bus at the load physical address rather than providing data from the hit data-less entry.
    Type: Grant
    Filed: May 19, 2023
    Date of Patent: August 13, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
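The data-less-entry flow above (miss, allocate a tag-only entry, re-lookup, then read the processor bus on every hit) can be traced in a small sketch. The class, entry format, and translate callback are all invented; only the control flow follows the abstract:

```python
class VIVTFLDC:
    """Toy virtually-indexed, virtually-tagged first-level data cache
    supporting data-less entries for non-cacheable regions."""
    def __init__(self):
        self.entries = {}    # virtual line address -> entry
        self.bus_reads = []  # processor-bus reads issued (for observation)

    def load(self, vaddr, translate):
        entry = self.entries.get(vaddr)
        if entry is None:
            # Miss: the fill response reports a non-cacheable region and
            # carries no data, so allocate a data-less entry, then look
            # the address up again.
            paddr, noncacheable = translate(vaddr)
            assert noncacheable, "normal (cacheable) fill path elided in this sketch"
            self.entries[vaddr] = {"noncacheable": True, "paddr": paddr}
            return self.load(vaddr, translate)
        # Hit on a data-less entry: generate a processor-bus read at the
        # recorded physical address instead of returning cached data.
        self.bus_reads.append(entry["paddr"])
        return entry["paddr"]
```

The payoff is that repeated loads to the region skip address translation after the first miss while still observing fresh (uncached) data on every access.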
  • Patent number: 12045619
    Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: July 23, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Srivatsan Srinivasan
  • Patent number: 12020032
    Abstract: A prediction unit includes a single-cycle predictor (SCP) configured to provide a series of outputs associated with a respective series of fetch blocks on a first respective series of clock cycles and a fetch block prediction unit (FBPU) configured to use the series of SCP outputs to provide, on a second respective series of clock cycles, a respective series of fetch block descriptors that describe the respective series of fetch blocks. The fetch block descriptors are useable by an instruction fetch unit to fetch the series of fetch blocks from an instruction cache. The second respective series of clock cycles follows the first respective series of clock cycles in a pipelined fashion by a latency of the FBPU.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: June 25, 2024
    Assignee: Ventana Micro Systems Inc.
    Inventors: John G. Favor, Michael N. Michael