Patents by Inventor Pete Michael Hippleheuser

Pete Michael Hippleheuser has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143516
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
    Type: Application
    Filed: January 8, 2024
    Publication date: May 2, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
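    As an illustration of the clean-line allocation flow this abstract describes, the following Python sketch models a main cache spilling unmodified lines into a victim cache; the class and method names (VictimCacheSystem, access) and the eviction choice are hypothetical, a software analogue rather than the patented hardware.

      class VictimCacheSystem:
          def __init__(self, main_capacity=4):
              self.main = {}       # first cache storage: address -> (data, dirty flag)
              self.victim = {}     # second cache storage (victim cache): address -> data
              self.main_capacity = main_capacity

          def access(self, address, data):
              """Allocate `address` in the main cache, spilling a clean victim if needed."""
              if address not in self.main and len(self.main) >= self.main_capacity:
                  # Stand-in eviction choice: the oldest entry plays the role of the line
                  # the incoming memory operation evicts from the first cache storage.
                  victim_addr, (victim_data, dirty) = next(iter(self.main.items()))
                  del self.main[victim_addr]
                  if not dirty:
                      # Unmodified relative to extended memory: keep it in the victim cache.
                      self.victim[victim_addr] = victim_data
                  # A dirty line would instead be written back to extended memory (not modeled).
              self.main[address] = (data, False)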
  • Patent number: 11940929
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to reduce read-modify-write cycles for non-aligned writes. An example apparatus includes a memory that includes a plurality of memory banks, an interface configured to be coupled to a central processing unit, the interface to obtain a write operation from the central processing unit, wherein the write operation is to write a subset of the plurality of memory banks, and bank processing logic coupled to the interface and to the memory, the bank processing logic to determine the subset of the plurality of memory banks to write based on the write operation, and determine whether to cause a read operation to be performed in response to the write operation based on whether a number of addresses in the subset of the plurality of memory banks to write satisfies a threshold.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 26, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
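    The bank-coverage test below is a minimal Python sketch of the decision this abstract describes: a read cycle is issued only for banks the write covers partially. The bank width and function names are assumptions chosen for illustration.

      BANK_WIDTH = 8  # assumed bytes per memory bank

      def bank_coverage(offset, length, bank_width=BANK_WIDTH):
          """Return, for each bank the write touches, how many of its bytes are written."""
          coverage = {}
          for byte_addr in range(offset, offset + length):
              bank = byte_addr // bank_width
              coverage[bank] = coverage.get(bank, 0) + 1
          return coverage

      def needs_read(offset, length, bank_width=BANK_WIDTH):
          """Only partially covered banks require a read-modify-write cycle."""
          return any(n < bank_width for n in bank_coverage(offset, length, bank_width).values())

      assert needs_read(offset=8, length=16) is False   # two fully covered banks: no read
      assert needs_read(offset=2, length=4) is True     # partial bank: old bytes must be read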
  • Patent number: 11940930
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic operation in victim cache are disclosed. An example system includes a first cache storage to store a first set of data; a second cache storage to store a second set of data that has been evicted from the first cache storage; and a storage queue coupled to the first cache storage and the second cache storage, the storage queue including: an arithmetic component to: receive the second set of data from the second cache storage in response to a memory operation; and perform an arithmetic operation on the second set of data to produce a third set of data; and an arbitration manager to store the third set of data in the second cache storage.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: March 26, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
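    A minimal Python model of the atomic flow this abstract describes: the store-queue arithmetic reads the value held in the victim cache, operates on it, and writes the result back. The names (VictimCacheAtomics, atomic_add) are hypothetical.

      class VictimCacheAtomics:
          def __init__(self):
              self.victim = {}   # second cache storage: address -> value evicted from L1

          def atomic_add(self, address, operand):
              old = self.victim.get(address, 0)      # second set of data, read for the op
              self.victim[address] = old + operand   # third set of data, committed back
              return old

      cache = VictimCacheAtomics()
      cache.victim[0x100] = 41
      cache.atomic_add(0x100, 1)
      assert cache.victim[0x100] == 42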
  • Publication number: 20240095164
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to evict in a dual datapath victim cache system. An example apparatus includes a cache storage, a cache controller operable to receive a first memory operation and a second memory operation concurrently, comparison logic operable to identify if the first and second memory operations missed in the cache storage, and a replacement policy component operable to, when at least one of the first and second memory operations corresponds to a miss in the cache storage, reserve an entry in the cache storage to evict based on the first and second memory operations.
    Type: Application
    Filed: September 15, 2022
    Publication date: March 21, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
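    One way to picture the reservation step is the sketch below, which reserves a victim-cache entry that neither of two concurrent accesses is using; the entry count, the random fallback choice, and all names are assumptions for illustration only.

      import random

      class DualPathReplacement:
          def __init__(self, num_entries=8):
              self.tags = [None] * num_entries   # one tag per victim-cache entry

          def reserve_victim(self, addr_a, addr_b):
              hit_a, hit_b = addr_a in self.tags, addr_b in self.tags
              if hit_a and hit_b:
                  return None   # both concurrent operations hit: nothing to evict
              # At least one miss: reserve an entry neither in-flight access touches,
              # so the eviction cannot collide with the other datapath's hit.
              candidates = [i for i, t in enumerate(self.tags) if t not in (addr_a, addr_b)]
              return random.choice(candidates)

      policy = DualPathReplacement()
      policy.tags[3] = 0x200
      print(policy.reserve_victim(0x200, 0x300))   # 0x300 misses; entry 3 stays protected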
  • Publication number: 20240045803
    Abstract: An apparatus includes a CPU core and an L1 cache subsystem including an L1 main cache, an L1 victim cache, and an L1 controller. The apparatus includes an L2 cache subsystem including an L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and an L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 8, 2024
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria, Pete Michael Hippleheuser
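    The single-transaction idea can be sketched as a request record that piggybacks both L1 bookkeeping updates, which the L2 controller uses to keep its shadow copies current; the dataclass layout and coherence-state strings below are illustrative assumptions, not the actual protocol.

      from dataclasses import dataclass

      @dataclass
      class LineInfo:
          address: int
          state: str            # e.g. "MODIFIED" or "SHARED" (assumed encoding)

      @dataclass
      class ReadRequest:
          read_address: int
          line_a: LineInfo      # moved from L1 main cache into L1 victim cache
          line_b: LineInfo      # removed from L1 victim cache to make room for line A

      class L2Controller:
          def __init__(self):
              self.shadow_l1_main = {}
              self.shadow_l1_victim = {}

          def handle_read(self, req):
              # One transaction updates both shadow structures before the read is serviced.
              self.shadow_l1_main.pop(req.line_a.address, None)
              self.shadow_l1_victim[req.line_a.address] = req.line_a.state
              self.shadow_l1_victim.pop(req.line_b.address, None)
              self.shadow_l1_main[req.read_address] = "SHARED"  # returned line allocates in L1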
  • Publication number: 20240028523
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic compare and swap in cache for a coherent level 1 data cache system are disclosed. An example system includes a cache storage; a cache controller coupled to the cache storage wherein the cache controller is operable to: receive a memory operation that specifies a key, a memory address, and a first set of data; retrieve a second set of data corresponding to the memory address; compare the second set of data to the key; based on the second set of data corresponding to the key, cause the first set of data to be stored at the memory address; and based on the second set of data not corresponding to the key, complete the memory operation without causing the first set of data to be stored at the memory address.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 25, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
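    In software terms, the operation reduces to the familiar compare-and-swap shown below, executed against the cached copy; the class and method names are hypothetical.

      class CacheCAS:
          def __init__(self):
              self.lines = {}   # cache storage: address -> data

          def compare_and_swap(self, address, key, new_data):
              current = self.lines.get(address)   # second set of data at the address
              if current == key:                  # matches the key: commit the swap
                  self.lines[address] = new_data
              return current                      # the operation completes either way

      c = CacheCAS()
      c.lines[0x40] = 7
      assert c.compare_and_swap(0x40, key=7, new_data=9) == 7 and c.lines[0x40] == 9
      assert c.compare_and_swap(0x40, key=7, new_data=11) == 9 and c.lines[0x40] == 9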
  • Publication number: 20240020242
    Abstract: Methods, apparatus, systems and articles of manufacture to reduce bank pressure using aggressive write merging are disclosed. An example apparatus includes a first cache storage; a second cache storage; a store queue coupled to at least one of the first cache storage and the second cache storage and operable to: receive a first memory operation specifying a first set of data; process the first memory operation for storing the first set of data in at least one of the first cache storage and the second cache storage; receive a second memory operation; and prior to storing the first set of data in the at least one of the first cache storage and the second cache storage, merge the first memory operation and the second memory operation.
    Type: Application
    Filed: July 31, 2023
    Publication date: January 18, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
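    The merging step can be pictured with the toy store queue below, where a younger write to the same line is folded into the still-pending older one so the cache banks are touched only once; the byte-map representation and names are assumptions.

      class MergingStoreQueue:
          def __init__(self):
              self.pending = {}   # line address -> {byte offset: value} not yet written

          def write(self, line_addr, offset, data_bytes):
              entry = self.pending.setdefault(line_addr, {})
              # Later bytes overwrite overlapping earlier bytes before either reaches the
              # cache storage, so the banks see a single combined write.
              for i, b in enumerate(data_bytes):
                  entry[offset + i] = b

          def drain(self, cache):
              for line_addr, byte_map in self.pending.items():
                  cache.setdefault(line_addr, {}).update(byte_map)
              self.pending.clear()

      sq, cache = MergingStoreQueue(), {}
      sq.write(0x80, 0, b"\xaa\xbb")
      sq.write(0x80, 1, b"\xcc\xdd")   # merged with the pending write before any drain
      sq.drain(cache)
      assert cache[0x80] == {0: 0xAA, 1: 0xCC, 2: 0xDD}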
  • Patent number: 11868272
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: January 9, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Publication number: 20230401162
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate fully pipelined read-modify-write support in level 1 data cache using store queue and data forwarding. An example apparatus includes a first storage, a second storage, a store queue coupled to the first storage and the second storage, the store queue operable to receive a first memory operation specifying a first set of data, process the first memory operation for storing the first set of data in at least one of the first storage and the second storage, receive a second memory operation, and prior to storing the first set of data in the at least one of the first storage and the second storage, feed back the first set of data for use in the second memory operation.
    Type: Application
    Filed: August 28, 2023
    Publication date: December 14, 2023
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
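    The forwarding path can be sketched as below: a younger read-modify-write consumes the in-flight store data instead of waiting for it to reach the storage arrays. All names here are hypothetical illustrations.

      class ForwardingStoreQueue:
          def __init__(self, backing):
              self.backing = backing   # the first/second storage behind the queue
              self.inflight = {}       # data accepted by the queue but not yet committed

          def store(self, address, data):
              self.inflight[address] = data          # first memory operation enters the queue

          def read_modify_write(self, address, fn):
              # Second operation: the in-flight value is fed back so the pipeline
              # need not stall until the first store lands in the storage.
              value = self.inflight.get(address, self.backing.get(address, 0))
              self.inflight[address] = fn(value)

          def commit(self):
              self.backing.update(self.inflight)
              self.inflight.clear()

      ram = {}
      q = ForwardingStoreQueue(ram)
      q.store(0x10, 5)
      q.read_modify_write(0x10, lambda v: v + 1)   # sees 5 via forwarding, not stale memory
      q.commit()
      assert ram[0x10] == 6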
  • Publication number: 20230333991
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate write miss caching in a cache system are disclosed. An example apparatus includes a first cache storage; a second cache storage, wherein the second cache storage includes a first portion operable to store a first set of data evicted from the first cache storage and a second portion; a cache controller coupled to the first cache storage and the second cache storage and operable to: receive a write operation; determine that the write operation produces a miss in the first cache storage; and in response to the miss in the first cache storage, provide write miss information associated with the write operation to the second cache storage for storing in the second portion.
    Type: Application
    Filed: June 19, 2023
    Publication date: October 19, 2023
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
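    A toy Python split of the second cache storage into its two portions is sketched below; the structure names and the dictionary/list representation are assumptions, meant only to show where write-miss information lands.

      class WriteMissCache:
          def __init__(self):
              self.l1 = {}            # first cache storage
              self.victim_lines = {}  # second storage, first portion: lines evicted from L1
              self.write_misses = []  # second storage, second portion: write-miss records

          def write(self, address, data):
              if address in self.l1:
                  self.l1[address] = data
              else:
                  # Write miss in the first cache: buffer the miss information rather than
                  # allocating the line, so it can be handled or merged downstream.
                  self.write_misses.append({"address": address, "data": data})

      c = WriteMissCache()
      c.write(0x1000, 0xFF)
      assert c.write_misses == [{"address": 0x1000, "data": 0xFF}]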
  • Patent number: 11789868
    Abstract: An apparatus includes a CPU core and an L1 cache subsystem including an L1 main cache, an L1 victim cache, and an L1 controller. The apparatus includes an L2 cache subsystem including an L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and an L2 controller configured to receive a read request from the L1 controller as a single transaction. The read request includes a read address, a first indication of an address and a coherence state of a cache line A to be moved from the L1 main cache to the L1 victim cache to allocate space for data returned in response to the read request, and a second indication of an address and a coherence state of a cache line B to be removed from the L1 victim cache in response to the cache line A being moved to the L1 victim cache.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: October 17, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria, Pete Michael Hippleheuser
  • Patent number: 11775446
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic compare and swap in cache for a coherent level 1 data cache system are disclosed. An example system includes a cache storage; a cache controller coupled to the cache storage wherein the cache controller is operable to: receive a memory operation that specifies a key, a memory address, and a first set of data; retrieve a second set of data corresponding to the memory address; compare the second set of data to the key; based on the second set of data corresponding to the key, cause the first set of data to be stored at the memory address; and based on the second set of data not corresponding to the key, complete the memory operation without causing the first set of data to be stored at the memory address.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: October 3, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Publication number: 20230281126
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a victim cache. An example apparatus includes a first storage coupled to a controller, a second storage coupled to the controller and parallel coupled to the first storage, and a storage queue coupled to the first storage, the second storage, and to the controller, the storage queue to obtain a memory operation from the controller indicating an address and a first set of data, obtain a second set of data associated with the address from at least one of the first storage and the second storage, merge the first set of data and the second set of data to produce a third set of data, and provide the third set of data for writing to at least one of the first storage and the second storage.
    Type: Application
    Filed: May 1, 2023
    Publication date: September 7, 2023
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
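    The merge itself is essentially the byte-splice below: the write data (first set) is laid over the line fetched from storage (second set) to form the line written back (third set). The helper name and 8-byte line are illustrative assumptions.

      def merge_partial_write(existing_line, write_data, offset):
          """Overlay the partial write onto the line read from storage."""
          merged = bytearray(existing_line)
          merged[offset:offset + len(write_data)] = write_data
          return bytes(merged)

      line = bytes(range(8))   # line fetched from the first or second storage
      assert merge_partial_write(line, b"\xee\xff", offset=3) == bytes([0, 1, 2, 0xEE, 0xFF, 5, 6, 7])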
  • Patent number: 11741020
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate fully pipelined read-modify-write support in level 1 data cache using store queue and data forwarding. An example apparatus includes a first storage, a second storage, a store queue coupled to the first storage and the second storage, the store queue operable to receive a first memory operation specifying a first set of data, process the first memory operation for storing the first set of data in at least one of the first storage and the second storage, receive a second memory operation, and prior to storing the first set of data in the at least one of the first storage and the second storage, feed back the first set of data for use in the second memory operation.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 29, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Patent number: 11714760
    Abstract: Methods, apparatus, systems and articles of manufacture to reduce bank pressure using aggressive write merging are disclosed. An example apparatus includes a first cache storage; a second cache storage; a store queue coupled to at least one of the first cache storage and the second cache storage and operable to: receive a first memory operation specifying a first set of data; process the first memory operation for storing the first set of data in at least one of the first cache storage and the second cache storage; receive a second memory operation; and prior to storing the first set of data in the at least one of the first cache storage and the second cache storage, merge the first memory operation and the second memory operation.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: August 1, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Publication number: 20230236974
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a coherent victim cache with parallel data paths. An example apparatus includes a random-access memory configured to be coupled to a central processing unit via a first interface and a second interface, the random-access memory configured to obtain a read request indicating a first address to read via a snoop interface, an address encoder coupled to the random-access memory, the address encoder to, when the random-access memory indicates a hit of the read request, generate a second address corresponding to a victim cache based on the first address, and a multiplexer coupled to the victim cache to transmit a response including data obtained from the second address of the victim cache.
    Type: Application
    Filed: April 3, 2023
    Publication date: July 27, 2023
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
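    The hit path can be pictured as the three steps below: a tag lookup on the snooped address, an encoder that turns the hit into a victim-cache index, and a mux that returns that entry's data. The list-based tag/data arrays and names are assumptions.

      class SnoopVictimPath:
          def __init__(self, num_entries=4):
              self.tag_ram = [None] * num_entries    # addresses currently in the victim cache
              self.data_ram = [None] * num_entries   # victim-cache data, one entry per tag

          def snoop_read(self, address):
              if address not in self.tag_ram:
                  return None                        # snoop miss: no data in the response
              index = self.tag_ram.index(address)    # address encoder: hit -> entry index
              return self.data_ram[index]            # mux selects the victim-cache read data

      path = SnoopVictimPath()
      path.tag_ram[2], path.data_ram[2] = 0x2000, b"snooped line"
      assert path.snoop_read(0x2000) == b"snooped line"
      assert path.snoop_read(0x3000) is None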
  • Patent number: 11693790
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate write miss caching in a cache system are disclosed. An example apparatus includes a first cache storage; a second cache storage, wherein the second cache storage includes a first portion operable to store a first set of data evicted from the first cache storage and a second portion; a cache controller coupled to the first cache storage and the second cache storage and operable to: receive a write operation; determine that the write operation produces a miss in the first cache storage; and in response to the miss in the first cache storage, provide write miss information associated with the write operation to the second cache storage for storing in the second portion.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 4, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Patent number: 11640357
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a victim cache. An example apparatus includes a first storage coupled to a controller, a second storage coupled to the controller and parallel coupled to the first storage, and a storage queue coupled to the first storage, the second storage, and to the controller, the storage queue to obtain a memory operation from the controller indicating an address and a first set of data, obtain a second set of data associated with the address from at least one of the first storage and the second storage, merge the first set of data and the second set of data to produce a third set of data, and provide the third set of data for writing to at least one of the first storage and the second storage.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 2, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Patent number: 11636040
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to forward and invalidate inflight data in a store queue. An example apparatus includes a cache storage, a cache controller coupled to the cache storage and operable to receive a first memory operation, determine that the first memory operation corresponds to a read miss in the cache storage, determine a victim address in the cache storage to evict in response to the read miss, issue a read-invalidate command that specifies the victim address, compare the victim address to a set of addresses associated with a set of memory operations being processed by the cache controller, and in response to the victim address matching a first address of the set of addresses corresponding to a second memory operation of the set of memory operations, provide data associated with the second memory operation.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 25, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
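    The comparison-and-forward step reduces to the check sketched below: if the victim address of a read-invalidate matches an in-flight store, that store's data is supplied directly. The dictionary model and names are hypothetical.

      class InflightForwarding:
          def __init__(self):
              self.inflight = {}   # address -> data of stores still being processed

          def read_invalidate(self, victim_address):
              # The victim line is being evicted; if an in-flight store targets the same
              # address, forward its data so the newest value is not lost or re-fetched.
              if victim_address in self.inflight:
                  return self.inflight.pop(victim_address)
              return None          # otherwise the data comes from the cache storage itself

      ctrl = InflightForwarding()
      ctrl.inflight[0x4000] = 123
      assert ctrl.read_invalidate(0x4000) == 123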
  • Publication number: 20230108306
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate an atomic operation and/or a histogram operation in a cache pipeline are disclosed. An example system includes a cache storage coupled to an arithmetic component; and a cache controller coupled to the cache storage, wherein the cache controller is operable to: receive a memory operation that specifies a set of data; retrieve the set of data from the cache storage; utilize the arithmetic component to determine a set of counts of respective values in the set of data; generate a vector representing the set of counts; and provide the vector.
    Type: Application
    Filed: November 22, 2022
    Publication date: April 6, 2023
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
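    Functionally, the histogram step is the counting loop below applied to the data retrieved from the cache storage; the bin count and example values are assumptions for illustration.

      def cache_histogram(values, num_bins):
          """Count occurrences of each value and return the vector of counts."""
          counts = [0] * num_bins
          for v in values:
              counts[v] += 1
          return counts

      # e.g. histogramming a line of small pixel values fetched from the cache storage
      assert cache_histogram([1, 3, 3, 7, 1, 1], num_bins=16)[:8] == [0, 3, 0, 2, 0, 0, 0, 1]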