Patents by Inventor Srivatsan Srinivasan
Srivatsan Srinivasan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 12099448
  Abstract: A cache memory subsystem includes virtually-indexed L1 and PIPT L2 set-associative caches having an inclusive allocation policy such that: when a first copy of a memory line specified by a physical memory line address (PMLA) is allocated into an L1 entry, a second copy of the line is also allocated into an L2 entry; when the second copy is evicted, the first copy is also evicted. For each value of the PMLA, the second copy can be allocated into only one L2 set, and an associated physical address proxy (PAP) for the PMLA includes a set index and way number that uniquely identify the entry. For each value of the PMLA there exist two or more different L1 sets into which the first copy can be allocated, and when the L2 evicts the second copy, the L1 uses the PAP of the PMLA to evict the first copy.
  Type: Grant
  Filed: May 24, 2022
  Date of Patent: September 24, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
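A minimal sketch (Python, not the patent's hardware) of how a PAP built from an L2 set index and way can stand in for the full physical line address during inclusive eviction. The set count, line size, and data-structure layout below are illustrative assumptions.

```python
L2_SETS = 512      # assumed L2 set count
LINE_BYTES = 64    # assumed cache line size

def l2_set_index(pmla):
    """The L2 set index is a fixed slice of the PMLA, so a given PMLA can live in only one L2 set."""
    return pmla % L2_SETS

def make_pap(l2_set, l2_way):
    """A PAP is just (set index, way): it names exactly one L2 entry."""
    return (l2_set, l2_way)

class InclusiveHierarchy:
    """Toy inclusive L1/L2: each L1 line records the PAP of its backing L2 entry,
    so an L2 eviction can find and evict the L1 copy without a physical tag."""
    def __init__(self):
        self.l2 = {}   # PAP -> PMLA held in that L2 entry
        self.l1 = {}   # L1 location -> PAP of the backing L2 entry

    def allocate(self, pmla, l2_way, l1_location):
        pap = make_pap(l2_set_index(pmla), l2_way)
        self.l2[pap] = pmla            # second copy, in the L2
        self.l1[l1_location] = pap     # first copy, in the L1, tagged by PAP
        return pap

    def evict_l2(self, pap):
        """Inclusive policy: evicting the L2 copy also evicts any L1 copy with a matching PAP."""
        self.l2.pop(pap, None)
        for loc, l1_pap in list(self.l1.items()):
            if l1_pap == pap:
                del self.l1[loc]

h = InclusiveHierarchy()
pap = h.allocate(pmla=0x12345, l2_way=3, l1_location=("l1_set7", "way1"))
h.evict_l2(pap)
assert ("l1_set7", "way1") not in h.l1   # the L1 copy left with the L2 copy
```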
- Patent number: 12093179
  Abstract: A microprocessor includes a load/store unit that performs store-to-load forwarding, a PIPT L2 set-associative cache, a store queue having store entries, and a load queue having load entries. Each L2 entry is uniquely identified by a set index and a way. Each store/load entry holds, for an associated store/load instruction, a store/load physical address proxy (PAP) for a store/load physical memory line address (PMLA). The store/load PAP specifies the set index and the way of the L2 entry into which a cache line specified by the store/load PMLA is allocated. Each load entry also holds store-to-load forwarding information for the associated load instruction. The load/store unit compares the store PAP with the load PAP of each valid load entry whose associated load instruction is younger in program order than the store instruction and uses the comparison and associated forwarding information to check store-to-load forwarding correctness with respect to each younger load instruction.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: September 17, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
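One possible reading of the correctness check, sketched in Python: the store's PAP is compared against every valid, younger load-queue entry and against the forwarding that actually happened. The entry fields, the age encoding, and the line-granularity comparison are assumptions, not the patent's exact logic.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LoadQueueEntry:
    valid: bool
    age: int                         # larger = younger in program order (assumed encoding)
    pap: Tuple[int, int]             # (L2 set index, L2 way) recorded at load execution
    forwarded_from: Optional[int]    # store-queue id the load forwarded from, if any

def check_forwarding_correctness(store_age, store_pap, store_sq_id, load_queue):
    """Return younger loads whose recorded forwarding disagrees with the PAP compare."""
    suspects = []
    for ld in load_queue:
        if not (ld.valid and ld.age > store_age):
            continue                            # only younger loads are checked
        same_line = (ld.pap == store_pap)       # PAP compare instead of a full physical compare
        did_forward = (ld.forwarded_from == store_sq_id)
        if same_line != did_forward:            # forwarded when it shouldn't have, or missed a forward
            suspects.append(ld)
    return suspects
```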
- Patent number: 12086063
  Abstract: Each load/store queue entry holds a load/store physical address proxy (PAP) for use as a proxy for a load/store physical memory line address (PMLA). The load/store PAP comprises a set index and a way that uniquely identifies an L2 cache entry holding a memory line at the load/store PMLA when an L1 cache provides the load/store PAP during the load/store instruction execution. The microprocessor removes a line at a removal PMLA from an L2 entry, forms a removal PAP as a proxy for the removal PMLA that comprises a set index and a way, snoops the load/store queue with the removal PAP to determine whether the removal PAP is being used as a proxy for the removal PMLA, fills the removed entry with a line at a fill PMLA, and prevents the removal PAP from being used as a proxy for the removal PMLA and the fill PMLA concurrently.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: September 10, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
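A rough sketch of the reuse hazard being guarded against: once the L2 removes a line, the same (set, way) PAP will soon name the fill line, so the queues are snooped to ensure no in-flight entry still uses that PAP for the old line. The stall-until-drained response and the data layout are assumptions.

```python
def pap_still_in_use(removal_pap, load_queue_paps, store_queue_paps):
    """True if any load/store queue entry still uses this PAP as a proxy for the removed line."""
    return removal_pap in load_queue_paps or removal_pap in store_queue_paps

def fill_l2_entry(removal_pap, fill_pmla, load_queue_paps, store_queue_paps, l2):
    if pap_still_in_use(removal_pap, load_queue_paps, store_queue_paps):
        # Assumed policy: hold off the fill so the PAP never proxies two PMLAs at once.
        return False
    l2[removal_pap] = fill_pmla   # from here on the PAP proxies only the fill line
    return True
```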
- Patent number: 12079126
  Abstract: A microprocessor includes a cache memory, a store queue, and a load/store unit. Each entry of the store queue holds store data associated with a store instruction. The load/store unit, during execution of a load instruction, determines that an entry of the store queue holds store data that includes some but not all bytes of the load data requested by the load instruction, cancels execution of the load instruction in response to the determination, and writes, to an entry of a structure from which the load instruction is subsequently issuable for re-execution, an identifier of a store instruction that is older in program order than the load instruction and an indication that the load instruction is not eligible to re-execute until the identified older store instruction updates the cache memory with its store data.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: September 3, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
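A small sketch of the partial-overlap case: byte masks stand in for the address/size comparison, and the "structure" the abstract mentions is modeled as a scheduler entry that gates re-issue on the older store's cache update. All names here are illustrative.

```python
def execute_load(load_byte_mask, store_byte_mask, store_sq_id, scheduler_entry):
    """Byte masks are ints with one bit per byte of the line (an assumed encoding)."""
    overlap = load_byte_mask & store_byte_mask
    if overlap and overlap != load_byte_mask:
        # The store queue holds some but not all requested bytes: cancel and park the load.
        scheduler_entry["wait_for_store"] = store_sq_id
        scheduler_entry["eligible"] = False
        return "cancelled"
    return "completed"   # full forward or no overlap; both simplified away here

def on_store_updates_cache(committed_sq_id, scheduler_entry):
    """Once the identified older store has written the cache, the load may re-issue."""
    if scheduler_entry.get("wait_for_store") == committed_sq_id:
        scheduler_entry["eligible"] = True
```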
- Patent number: 12079129
  Abstract: A microprocessor includes a physically-indexed physically-tagged second-level set-associative cache. A set index and a way uniquely identify each entry. A load/store unit, during store/load instruction execution, detects that first and second portions of store/load data are to be written/read to/from different first and second lines of memory specified by first and second store physical memory line addresses, and writes to a store/load queue entry first and second store physical address proxies (PAPs) for the first and second store physical memory line addresses (and, in the store execution case, all of the store data). The first and second store PAPs comprise respective set indexes and ways that uniquely identify the respective entries of the second-level cache that hold the respective copies of the respective first and second lines of memory. The entries of the store queue are absent storage for holding the first and second store physical memory line addresses.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: September 3, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
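A sketch of the line-crossing bookkeeping: an access that straddles two memory lines records one PAP per line in its queue entry instead of two full physical line addresses. The 64-byte line size and the pap_of_line lookup callback are assumptions.

```python
LINE_BYTES = 64   # assumed line size

def lines_touched(paddr, size):
    """Return the one or two line addresses an access covers."""
    first = paddr // LINE_BYTES
    last = (paddr + size - 1) // LINE_BYTES
    return (first,) if first == last else (first, last)

def record_store(paddr, size, data, pap_of_line, store_queue):
    """pap_of_line maps a physical line address to its (set, way) PAP (assumed helper)."""
    entry = {
        "paps": [pap_of_line(line) for line in lines_touched(paddr, size)],
        "data": data,   # all of the store data is kept in the entry (store case)
    }
    store_queue.append(entry)
    return entry
```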
- Patent number: 12073220
  Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: August 27, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
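One way to picture the commit-time audit, as a Python sketch: execution records what the store wrote, what the load read, and whether forwarding happened, and store commit replays those records to confirm the decision. The byte-set overlap test is a simplification of the recorded address and size information.

```python
def bytes_touched(addr, size):
    return set(range(addr, addr + size))

def audit_forwarding_at_commit(store_record, load_record):
    """store_record/load_record: dicts with 'addr' and 'size'; load_record also
    carries 'forwarded', the behavior recorded at load execution (assumed layout)."""
    overlap = (bytes_touched(store_record["addr"], store_record["size"])
               & bytes_touched(load_record["addr"], load_record["size"]))
    should_have_forwarded = bool(overlap)   # simplified rule: any overlap means forward
    return load_record["forwarded"] == should_have_forwarded
```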
- Patent number: 12061555
  Abstract: A load/store circuit performs a first lookup of a load virtual address in a virtually-indexed, virtually-tagged first-level data cache (VIVTFLDC) that misses and generates a fill request that causes translation of the load virtual address into a load physical address. It receives a response that indicates the load physical address is in a non-cacheable memory region and is without data from the load physical address, and allocates a data-less VIVTFLDC entry that includes an indication that the entry is associated with a non-cacheable memory region. It then performs a second lookup of the load virtual address in the VIVTFLDC, determines that the load virtual address hits on the data-less entry, determines from the hit data-less entry that it is associated with a non-cacheable memory region, and generates a read request to read data from a processor bus at the load physical address rather than providing data from the hit data-less entry.
  Type: Grant
  Filed: May 19, 2023
  Date of Patent: August 13, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
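A toy model of the data-less entry: the VIVT L1 remembers only that the virtual line maps to a non-cacheable region (plus the translated address), so a second lookup hits the entry and goes to the processor bus instead of re-walking the miss path. The class layout and bus_read callback are assumptions.

```python
class VivtL1:
    def __init__(self, bus_read):
        self.entries = {}          # virtual line address -> entry dict
        self.bus_read = bus_read   # callback that performs a read on the processor bus

    def fill_noncacheable(self, vline, paddr):
        """Fill response said the region is non-cacheable and carried no data."""
        self.entries[vline] = {"noncacheable": True, "paddr": paddr, "data": None}

    def load(self, vline):
        entry = self.entries.get(vline)
        if entry is None:
            return ("miss", None)   # would generate a fill request
        if entry["noncacheable"]:
            # Hit on a data-less entry: read the bus, never the (absent) cached data.
            return ("hit", self.bus_read(entry["paddr"]))
        return ("hit", entry["data"])
```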
- Patent number: 12045619
  Abstract: A microprocessor includes a load queue, a store queue, and a load/store unit that, during execution of a store instruction, records store information to a store queue entry. The store information comprises store address and store size information about store data to be stored by the store instruction. The load/store unit, during execution of a load instruction that is younger in program order than the store instruction, performs forwarding behavior with respect to forwarding or not forwarding the store data from the store instruction to the load instruction and records load information to a load queue entry, which comprises load address and load size information about load data to be loaded by the load instruction, and records the forwarding behavior in the load queue entry. The load/store unit, during commit of the store instruction, uses the recorded store information and the recorded load information and the recorded forwarding behavior to check correctness of the forwarding behavior.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: July 23, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
- Patent number: 12001337
  Abstract: A microprocessor includes a physically-indexed physically-tagged second-level set-associative cache. A set index and a way uniquely identify each entry. A load/store unit, during store/load instruction execution, detects that first and second portions of store/load data are to be written/read to/from different first and second lines of memory specified by first and second store physical memory line addresses, and writes to a store/load queue entry first and second store physical address proxies (PAPs) for the first and second store physical memory line addresses (and, in the store execution case, all of the store data). The first and second store PAPs comprise respective set indexes and ways that uniquely identify the respective entries of the second-level cache that hold the respective copies of the respective first and second lines of memory. The entries of the store queue are absent storage for holding the first and second store physical memory line addresses.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: June 4, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
- Patent number: 11989285
  Abstract: A method and system for mitigating side channel attacks (SCA) that exploit speculative store-to-load forwarding is described. The method comprises ensuring that the physical load and store addresses match and/or that permissions are present before speculative store-to-load forwarding is performed. Various improvements maintain a short load-store pipeline, including use of a virtual level-one data cache (DL1), use of an inclusive physical level-two data cache (DL2), storage and lookup of physical data address equivalents in the DL1, and use of a memory dependence predictor (MDP) to speed up or replace store queue CAM comparisons (camming) of load data addresses against store data addresses.
  Type: Grant
  Filed: September 10, 2021
  Date of Patent: May 21, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
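A minimal sketch of the gating condition, with the MDP shortcut folded in: forwarding happens only when physical identity and permissions are established, and an MDP prediction can narrow the store-queue search to one candidate. The field names and the single-prediction shape are assumptions.

```python
def may_forward(load_phys_id, store_phys_id, load_has_permission):
    """Speculative store-to-load forwarding only when the physical match and permissions are proven."""
    return load_phys_id == store_phys_id and load_has_permission

def find_forwarding_store(load, store_queue, mdp_prediction=None):
    """If the MDP names a store, check just that entry; otherwise CAM the whole queue."""
    candidates = [mdp_prediction] if mdp_prediction is not None else store_queue
    for store in candidates:
        if may_forward(load["phys_id"], store["phys_id"], load["permission_ok"]):
            return store
    return None
```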
- Patent number: 11989286
  Abstract: A method and system for mitigating side channel attacks (SCA) that exploit speculative store-to-load forwarding is described. The method comprises conditioning store-to-load forwarding on the memory dependence predictor (MDP) being trained for that load instruction. Training involves identifying situations in which store-to-load forwarding could have been performed but wasn't, and conversely, situations in which store-to-load forwarding was performed but resulted in an error.
  Type: Grant
  Filed: January 13, 2022
  Date of Patent: May 21, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
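A sketch of the training rule as a per-load saturating counter: missed forwarding opportunities push the counter up, bad forwards reset it, and forwarding is only allowed once the counter saturates. The counter width and indexing by load PC are assumptions.

```python
class ForwardingMDP:
    def __init__(self, max_count=3):
        self.counters = {}         # load PC -> small saturating counter (assumed indexing)
        self.max_count = max_count

    def trained(self, load_pc):
        return self.counters.get(load_pc, 0) >= self.max_count

    def record_missed_opportunity(self, load_pc):
        """Forwarding could have been performed but wasn't: train toward forwarding."""
        self.counters[load_pc] = min(self.counters.get(load_pc, 0) + 1, self.max_count)

    def record_bad_forward(self, load_pc):
        """Forwarding was performed but resulted in an error: train away from forwarding."""
        self.counters[load_pc] = 0

def allow_speculative_forward(mdp, load_pc):
    """Store-to-load forwarding is conditioned on the MDP being trained for this load."""
    return mdp.trained(load_pc)
```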
- Patent number: 11934519
  Abstract: A method and system for mitigating side channel attacks (SCA) that exploit speculative store-to-load forwarding is described. The method comprises conditioning store-to-load forwarding on the memory dependence predictor (MDP) being trained for that load instruction. Training involves identifying situations in which store-to-load forwarding could have been performed but wasn't, and conversely, situations in which store-to-load forwarding was performed but resulted in an error.
  Type: Grant
  Filed: January 13, 2022
  Date of Patent: March 19, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
- Patent number: 11907369
  Abstract: An out-of-order and speculative execution microprocessor that mitigates side channel attacks includes a cache memory and fill request generation logic that generates a request to fill the cache memory with a cache line implicated by a memory address that misses in the cache memory. At least one execution pipeline receives first and second load operations, detects a condition in which the first load generates a need for an architectural exception, the second load misses in the cache memory, and the second load is newer in program order than the first load, and prevents state of the cache memory from being affected by the miss of the second load by inhibiting the fill request generation logic from generating a fill request for the second load or by canceling the fill request for the second load if the fill request generation logic has already generated the fill request for the second load.
  Type: Grant
  Filed: August 27, 2020
  Date of Patent: February 20, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
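A rough sketch of the fill suppression: if an older load already needs an architectural exception, a younger load's miss must not touch cache state, so its fill request is inhibited or cancelled. The age encoding and the request table are illustrative.

```python
def handle_load_miss(load_id, load_age, miss_line, excepting_load_age, fill_requests):
    """excepting_load_age is the age of the older load that needs an exception,
    or None if there is no such load (assumed encoding; larger age = younger)."""
    if excepting_load_age is not None and load_age > excepting_load_age:
        fill_requests.pop(load_id, None)   # cancel a fill request already generated
        return "fill_inhibited"            # and do not generate a new one
    fill_requests[load_id] = miss_line     # normal path: request the fill
    return "fill_requested"
```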
- Patent number: 11880721
  Abstract: A method, including receiving, from a client, a unified query, and extracting, from the unified query, an endpoint query for a first data source on a first server and an endpoint query for a second data source on a second server. The extracted endpoint query for the first data source is forwarded to the first server. Upon receiving a response to the endpoint query forwarded to the first server, one or more parameters are extracted from the response. The endpoint query for the second data source is updated so as to include the extracted one or more parameters, and the updated endpoint query for the second data source is forwarded to the second server. Upon receiving, from the second server, a response to the forwarded endpoint query, a result for the received unified query is generated based on the received responses, and the generated result is conveyed to the client.
  Type: Grant
  Filed: May 25, 2022
  Date of Patent: January 23, 2024
  Assignee: R SOFTWARE INC.
  Inventors: Iddo Gino, Andrey Bukati, Srivatsan Srinivasan
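A sketch of the two-hop flow in plain Python; the endpoint URLs, the "user_id" parameter, and the way the result is combined are invented for illustration and are not the patent's API.

```python
import json
import urllib.request

def run_unified_query(first_endpoint_url, second_endpoint_template):
    # 1. Forward the first extracted endpoint query and read the response.
    with urllib.request.urlopen(first_endpoint_url) as resp:
        first_response = json.load(resp)
    # 2. Extract one or more parameters from that response (field name assumed).
    user_id = first_response["user_id"]
    # 3. Update the second endpoint query with the extracted parameter and forward it.
    with urllib.request.urlopen(second_endpoint_template.format(user_id=user_id)) as resp:
        second_response = json.load(resp)
    # 4. Generate the unified result from the received responses and return it to the client.
    return {"first": first_response, "second": second_response}
```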
- Patent number: 11868263
  Abstract: A microprocessor includes a virtually-indexed L1 data cache that has an allocation policy that permits multiple synonyms to be co-resident. Each L2 entry is uniquely identified by a set index and a way number. A store unit, during a store instruction execution, receives a store physical address proxy (PAP) for a store physical memory line address (PMLA) from an L1 entry hit upon by a store virtual address, and writes the store PAP to a store queue entry. The store PAP comprises the set index and the way number of an L2 entry that holds a line specified by the store PMLA. The store unit, during the store commit, reads the store PAP from the store queue, looks up the store PAP in the L1 to detect synonyms, writes the store data to one or more of the detected synonyms, and evicts the non-written detected synonyms.
  Type: Grant
  Filed: May 24, 2022
  Date of Patent: January 9, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
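One reading of the commit path, sketched below: the store's PAP is looked up across the L1, every co-resident synonym is found, the data goes to one of them, and the rest are evicted. The flat entry list is a toy stand-in for the virtually-indexed arrays.

```python
def commit_store(store_pap, store_data, l1_entries):
    """l1_entries: list of dicts with 'pap' and 'data' (assumed toy layout)."""
    synonyms = [e for e in l1_entries if e["pap"] == store_pap]   # same PAP, different virtual index
    if not synonyms:
        return
    synonyms[0]["data"] = store_data        # write the store data to one detected synonym
    for extra in synonyms[1:]:
        l1_entries.remove(extra)            # evict the non-written synonyms
```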
- Patent number: 11868469
  Abstract: A microprocessor that mitigates side channel attacks includes a front end that processes instructions in program order and a back end that performs speculative execution of instructions out of program order in a superscalar fashion. Producing execution units produce architectural register results during execution of instructions. Consuming execution units consume the produced architectural register results during execution of instructions. The producing and consuming execution units may be the same or different execution units. Control logic detects that, during execution by a producing execution unit, an architectural register result producing instruction causes a need for an architectural exception and consequent flush of all instructions younger in program order than the producing instruction and prevents all instructions within the back end that are dependent upon the producing instruction from consuming the architectural register result produced by the producing instruction.
  Type: Grant
  Filed: March 17, 2021
  Date of Patent: January 9, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
- Patent number: 11860794
  Abstract: Each PIPT L2 cache entry is uniquely identified by a set index and a way and holds a generational identifier (GENID). The L2 detects a miss of a physical memory line address (PMLA). An L2 set index is obtained from the PMLA. The L2 picks a way for replacement, increments the GENID held in the entry in the picked way of the selected set, and forms a physical address proxy (PAP) for the PMLA with the obtained set index and the picked way. The PAP uniquely identifies the picked L2 entry. The L2 forms a generational PAP (GPAP) for the PMLA with the PAP and the incremented GENID. A load/store unit makes available the GPAP as a proxy of the PMLA for comparisons with GPAPs of other PMLAs, rather than making comparisons of the PMLA itself with the other PMLAs, to determine whether the PMLA matches the other PMLAs.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: January 2, 2024
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
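A sketch of the generational PAP: a small per-entry counter is incremented on replacement, so (set, way, GENID) distinguishes the old and new occupants of the same L2 entry when GPAPs are compared in place of full physical line addresses. The two-bit width is an assumption.

```python
GENID_BITS = 2   # assumed GENID width

def replace_and_form_gpap(l2_genids, set_index, picked_way):
    """Increment the GENID of the picked entry and return the GPAP for the new occupant."""
    key = (set_index, picked_way)
    l2_genids[key] = (l2_genids.get(key, 0) + 1) % (1 << GENID_BITS)
    pap = (set_index, picked_way)          # PAP uniquely identifies the picked L2 entry
    return (pap, l2_genids[key])           # GPAP = PAP plus the incremented GENID

def gpaps_match(gpap_a, gpap_b):
    """Proxy comparison: treated as the same line only if both the PAP and the generation agree."""
    return gpap_a == gpap_b
```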
- Patent number: 11853424
  Abstract: A microprocessor for mitigating side channel attacks includes a memory subsystem that receives a load operation that specifies a load address. The memory subsystem includes a virtually-indexed, virtually-tagged data cache memory (VIVTDCM) comprising entries that hold translation information. The memory subsystem also includes a data translation lookaside buffer (DTLB) comprising entries that hold physical address translations and translation information. The processor performs speculative execution of instructions and executes instructions out of program order. The memory subsystem allows non-inclusion with respect to translation information between the VIVTDCM and the DTLB such that, at some instances in time, translation information associated with the load address is present in the VIVTDCM and absent from the DTLB.
  Type: Grant
  Filed: October 6, 2020
  Date of Patent: December 26, 2023
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
- Patent number: 11841802
  Abstract: A microprocessor prevents same-address load-load ordering violations. Each load queue entry holds a load physical memory line address (PMLA) and an indication of whether the load instruction has completed execution. The microprocessor fills a line specified by a fill PMLA into a cache entry and snoops the load queue with the fill PMLA, either before the fill or in a manner atomic with the fill with respect to the ability of the filled entry to be hit upon by any load instruction, to determine whether the fill PMLA matches load PMLAs in load queue entries associated with load instructions that have completed execution while other load instructions in the load queue have not completed execution. If that condition is true, the microprocessor flushes at least the load instructions in the load queue that have not completed execution.
  Type: Grant
  Filed: May 18, 2022
  Date of Patent: December 12, 2023
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan
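A sketch of the ordering check: on a fill, the load queue is snooped with the fill PMLA; a match against completed loads while other loads remain incomplete triggers a flush of (at least) the incomplete loads. The queue layout and the returned flush set are assumptions.

```python
def snoop_load_queue_on_fill(fill_pmla, load_queue):
    """load_queue: list of dicts with 'pmla' and 'completed' (assumed layout).
    Returns the loads to flush, or an empty list if the ordering condition is not met."""
    completed_match = any(ld["completed"] and ld["pmla"] == fill_pmla for ld in load_queue)
    incomplete = [ld for ld in load_queue if not ld["completed"]]
    if completed_match and incomplete:
        return incomplete   # flush the loads that have not completed execution
    return []
```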
- Patent number: 11836080
  Abstract: An L2 cache is set associative, has N ways, and is inclusive of a virtual L1 cache such that, when a virtual address misses in the L1: a portion of the virtual address is translated into a physical memory line address (PMLA), the memory line at the PMLA is allocated into an L2 entry, and a physical address proxy (PAP) for the PMLA is allocated into an L1 entry. The PAP for the PMLA includes a set and a way that uniquely identify the L2 entry. The L2 receives a physical memory line address for allocation, uses a set index portion of the PMLA and, for each L2 way, forms a PAP corresponding to that way. The L1, for each PAP, generates a corresponding indicator of whether the PAP is L1-resident. The L2 selects, for replacement, a way whose indicator indicates the PAP is not resident in the L1.
  Type: Grant
  Filed: May 24, 2022
  Date of Patent: December 5, 2023
  Assignee: Ventana Micro Systems Inc.
  Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
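A sketch of the replacement filter: the L2 forms one candidate PAP per way of the indexed set, the L1 reports which of those PAPs it holds, and a way whose PAP is not L1-resident is preferred as the victim. The way count and the fallback when every way is resident are assumptions.

```python
N_WAYS = 8   # assumed L2 associativity

def pick_l2_victim(set_index, l1_resident_paps):
    """l1_resident_paps: set of PAPs the L1 reports as resident (its per-way indicators)."""
    candidate_paps = [(set_index, way) for way in range(N_WAYS)]
    for way, pap in enumerate(candidate_paps):
        if pap not in l1_resident_paps:
            return way       # preferred victim: evicting it costs nothing in the L1
    return 0                 # every way is L1-resident: fall back (assumed policy)
```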