Patents by Inventor Naveen Bhoria

Naveen Bhoria has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200379757
    Abstract: A digital data processor includes an instruction memory storing instructions each specifying a data processing operation and at least one data operand field, an instruction decoder coupled to the instruction memory for sequentially recalling instructions from the instruction memory and determining the data processing operation and the at least one data operand, and at least one operational unit coupled to a data register file and to the instruction decoder to perform a data processing operation upon at least one operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The operational unit is configured to increment histogram values in response to a histogram instruction by incrementing a bin entry at a specified location in a specified number of at least one histogram.
    Type: Application
    Filed: September 13, 2019
    Publication date: December 3, 2020
    Inventors: Naveen BHORIA, Duc BUI, Rama VENKATASUBRAMANIAN, Dheera Balasubramanian SAMUDRALA, Alan DAVIS
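    Illustrative sketch: a minimal C model of the histogram-increment behavior this abstract describes; the histogram count, bin count, lane mapping, and function name are assumptions for illustration, not the patented design.

      #include <stdint.h>

      #define NUM_HISTOGRAMS 4    /* assumed number of parallel histograms */
      #define BINS_PER_HIST  256  /* assumed bins per histogram            */

      static uint32_t hist[NUM_HISTOGRAMS][BINS_PER_HIST];

      /* Each vector lane supplies a bin index; lane i updates histogram
       * (i % NUM_HISTOGRAMS), mirroring "a bin entry at a specified location
       * in a specified number of at least one histogram". */
      void histogram_increment(const uint8_t *bin_index, int lanes)
      {
          for (int i = 0; i < lanes; i++)
              hist[i % NUM_HISTOGRAMS][bin_index[i]]++;
      }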
  • Publication number: 20200379762
    Abstract: A digital data processor includes an instruction memory storing instructions specifying data processing operations and a data operand field, an instruction decoder coupled to the instruction memory for recalling instructions from the instruction memory and determining the operation and the data operand, and an operational unit coupled to a data register file and the instruction decoder to perform an operation upon an operand corresponding to an instruction decoded by the instruction decoder and storing results of the operation. The operational unit is configured to perform a table recall in response to a look up table read instruction by recalling data elements from a specified location and adjacent location to the specified location, in a specified number of at least one table and storing the recalled data elements in successive slots in a destination register. Recalled data elements include at least one interpolated data element in the adjacent location.
    Type: Application
    Filed: September 13, 2019
    Publication date: December 3, 2020
    Inventors: Naveen BHORIA, Dheera Balasubramanian SAMUDRALA, Duc BUI, Alan DAVIS
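    Illustrative sketch: a minimal C model of a look up table read that recalls the element at the requested location plus its adjacent element into successive destination slots; table count, table length, and names are assumed for illustration only.

      #include <stdint.h>

      #define NUM_TABLES 4    /* assumed number of parallel look up tables */
      #define TABLE_LEN  256  /* assumed entries per table                 */

      static int16_t lut[NUM_TABLES][TABLE_LEN];

      /* For each table, recall the element at `index` plus the element in the
       * adjacent location, packing the pair into successive destination slots
       * so the caller can interpolate between them. */
      void lut_read_pair(int index, int16_t *dest)
      {
          int adjacent = (index + 1 < TABLE_LEN) ? index + 1 : index;
          for (int t = 0; t < NUM_TABLES; t++) {
              dest[2 * t]     = lut[t][index];
              dest[2 * t + 1] = lut[t][adjacent];
          }
      }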
  • Publication number: 20200379761
    Abstract: A digital data processor includes an instruction memory storing instructions each specifying a data processing operation and at least one data operand field, an instruction decoder coupled to the instruction memory for sequentially recalling instructions from the instruction memory and determining the data processing operation and the at least one data operand, and at least one operational unit coupled to a data register file and to the instruction decoder to perform a data processing operation upon at least one operand corresponding to an instruction decoded by the instruction decoder and storing results of the data processing operation. The at least one operational unit is configured to perform a table write in response to a look up table write instruction by writing at least one data element from a source data register to a specified location in a specified number of at least one table.
    Type: Application
    Filed: September 13, 2019
    Publication date: December 3, 2020
    Inventors: Naveen BHORIA, Duc BUI, Dheera Balasubramanian SAMUDRALA
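    Illustrative sketch: a minimal C model of the look up table write behavior; the table geometry and function name are assumptions, not the patented implementation.

      #include <stdint.h>

      #define NUM_TABLES 4    /* assumed number of parallel look up tables */
      #define TABLE_LEN  256  /* assumed entries per table                 */

      static int16_t lut[NUM_TABLES][TABLE_LEN];

      /* Write one element from a source-register image into the specified
       * location of each of the first `num_tables` tables. */
      void lut_write(int index, const int16_t *src, int num_tables)
      {
          for (int t = 0; t < num_tables && t < NUM_TABLES; t++)
              lut[t][index] = src[t];
      }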
  • Publication number: 20200380035
    Abstract: A digital data processor includes a multi-stage butterfly network, which is configured to, in response to a look up table read instruction, receive look up table data from an intermediate register, reorder the look up table data based on control signals comprising look up table configuration register data, and write the reordered look up table data to a destination register specified by the look up table read instruction.
    Type: Application
    Filed: September 13, 2019
    Publication date: December 3, 2020
    Inventors: Naveen BHORIA, Duc BUI, Dheera Balasubramanian SAMUDRALA, Rama VENKATASUBRAMANIAN
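    Illustrative sketch: a software model of a multi-stage butterfly reorder as described in this abstract; real hardware carries a control bit per element pair derived from the configuration register, which this sketch simplifies to one assumed control bit per stage.

      #include <stdint.h>

      /* At stage s, elements whose indices differ only in bit s are
       * conditionally swapped under the stage's control bit in `ctrl`. */
      void butterfly_reorder(uint8_t *v, int n /* power of two */, uint32_t ctrl)
      {
          for (int s = 0; (1 << s) < n; s++) {
              if (!((ctrl >> s) & 1u))
                  continue;                       /* stage passes data through */
              int stride = 1 << s;
              for (int i = 0; i < n; i++) {
                  if ((i & stride) == 0) {        /* lower index of the pair */
                      uint8_t tmp   = v[i];
                      v[i]          = v[i + stride];
                      v[i + stride] = tmp;
                  }
              }
          }
      }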
  • Publication number: 20200371946
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to reduce read-modify-write cycles for non-aligned writes. An example apparatus includes a memory that includes a plurality of memory banks, an interface configured to be coupled to a central processing unit, the interface to obtain a write operation from the central processing unit, wherein the write operation is to write a subset of the plurality of memory banks, and bank processing logic coupled to the interface and to the memory, the bank processing logic to determine the subset of the plurality of memory banks to write based on the write operation, and determine whether to cause a read operation to be performed in response to the write operation based on whether a number of addresses in the subset of the plurality of memory banks to write satisfies a threshold.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
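    Illustrative sketch: a minimal C model of the decision the bank processing logic makes; the bank width and the "fully covered bank skips the read" threshold are assumptions used to illustrate the idea of avoiding read-modify-write cycles for non-aligned writes.

      #include <stdbool.h>
      #include <stdint.h>

      #define BANK_WIDTH_BYTES 8   /* assumed bytes per memory bank */

      /* A bank needs the read phase of read-modify-write only when the write
       * covers it partially; a non-aligned write can leave at most its first
       * and last banks partially covered. */
      bool write_needs_rmw(uint32_t addr, uint32_t len)
      {
          bool head_partial = (addr % BANK_WIDTH_BYTES) != 0;
          bool tail_partial = ((addr + len) % BANK_WIDTH_BYTES) != 0;
          return head_partial || tail_partial;
      }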
  • Publication number: 20200371948
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to forward and invalidate inflight data in a store queue. An example apparatus includes a cache storage, a cache controller coupled to the cache storage and operable to receive a first memory operation, determine that the first memory operation corresponds to a read miss in the cache storage, determine a victim address in the cache storage to evict in response to the read miss, issue a read-invalidate command that specifies the victim address, compare the victim address to a set of addresses associated with a set of memory operations being processed by the cache controller, and in response to the victim address matching a first address of the set of addresses corresponding to a second memory operation of the set of memory operations, provide data associated with the second memory operation.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
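    Illustrative sketch: a minimal C model of forwarding and invalidating in-flight store data when a read-invalidate targets a victim address; the queue depth, structure layout, and names are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      #define SQ_DEPTH 8   /* assumed store-queue depth */

      struct sq_entry { bool valid; uint32_t addr; uint64_t data; };
      static struct sq_entry store_queue[SQ_DEPTH];

      /* Service a read-invalidate for a victim address: compare it against
       * every in-flight store; on a match, forward that store's data and drop
       * the entry so the eviction uses the newest value. */
      bool read_invalidate(uint32_t victim_addr, uint64_t *data_out)
      {
          for (int i = 0; i < SQ_DEPTH; i++) {
              if (store_queue[i].valid && store_queue[i].addr == victim_addr) {
                  *data_out = store_queue[i].data;  /* forward in-flight data */
                  store_queue[i].valid = false;     /* invalidate the entry   */
                  return true;
              }
          }
          return false;
      }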
  • Publication number: 20200371964
    Abstract: A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache, and a cache controller configured to receive two or more cache commands, determine a conflict exists between the received two or more cache commands, determine a conflict resolution between the received two or more cache commands, and send the two or more cache commands to the first sub-cache and the second sub-cache.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen BHORIA, Timothy David ANDERSON, Pete HIPPLEHEUSER
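    Illustrative sketch: one plausible conflict check for two simultaneously received cache commands; treating same-line access with at least one write as a conflict, and serializing such a pair, is an assumption of this sketch, not the resolution the patent claims.

      #include <stdbool.h>
      #include <stdint.h>

      #define LINE_SHIFT 6   /* assumed 64-byte cache lines */

      struct cache_cmd { uint32_t addr; bool is_write; };

      /* Conflicting commands are dispatched to the sub-caches in order
       * instead of in parallel. */
      bool cmds_conflict(const struct cache_cmd *a, const struct cache_cmd *b)
      {
          return (a->addr >> LINE_SHIFT) == (b->addr >> LINE_SHIFT) &&
                 (a->is_write || b->is_write);
      }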
  • Publication number: 20200371939
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate fully pipelined read-modify-write support in level 1 data cache using store queue and data forwarding. An example apparatus includes a first storage, a second storage, a store queue coupled to the first storage and the second storage, the store queue operable to receive a first memory operation specifying a first set of data, process the first memory operation for storing the first set of data in at least one of the first storage and the second storage, receive a second memory operation, and prior to storing the first set of data in the at least one of the first storage and the second storage, feedback the first set of data for use in the second memory operation.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
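    Illustrative sketch: a minimal C model of the feedback (store-to-load forwarding) idea, reduced to a single pending entry; the names and the function-pointer read path are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      struct pending_store { bool valid; uint32_t addr; uint64_t data; };
      static struct pending_store pending;   /* one-entry queue for brevity */

      /* If a later operation hits the address of a store that has not yet
       * been written to the cache storage, feed the pending data back to it
       * instead of reading stale contents, keeping the pipeline full. */
      uint64_t read_with_forwarding(uint32_t addr, uint64_t (*ram_read)(uint32_t))
      {
          if (pending.valid && pending.addr == addr)
              return pending.data;   /* forwarded from the store queue */
          return ram_read(addr);     /* normal read path               */
      }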
  • Publication number: 20200371956
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a victim cache. An example apparatus includes a first storage coupled to a controller, a second storage coupled to the controller and parallel coupled to the first storage, and a storage queue coupled to the first storage, the second storage, and to the controller, the storage queue to obtain a memory operation from the controller indicating an address and a first set of data, obtain a second set of data associated with the address from at least one of the first storage and the second storage, merge the first set of data and the second set of data to produce a third set of data, and provide the third set of data for writing to at least one of the first storage and the second storage.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
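    Illustrative sketch: a minimal C model of the merge step of a read-modify-write, assuming a 64-bit word with a byte-enable mask; the interface is illustrative, not the patented storage queue.

      #include <stdint.h>

      /* Combine the new bytes (selected by a byte-enable mask) with the data
       * read from the cache, producing the value that is written back. */
      uint64_t rmw_merge(uint64_t old_data, uint64_t new_data, uint8_t byte_enable)
      {
          uint64_t mask = 0;
          for (int b = 0; b < 8; b++)
              if (byte_enable & (1u << b))
                  mask |= 0xFFull << (8 * b);
          return (old_data & ~mask) | (new_data & mask);
      }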
  • Publication number: 20200371930
    Abstract: A system includes a non-coherent component; a coherent, non-caching component; a coherent, caching component; and a level two (L2) cache subsystem coupled to the non-coherent component, the coherent, non-caching component, and the coherent, caching component. The L2 cache subsystem includes a L2 cache; a shadow level one (L1) main cache; a shadow L1 victim cache; and a L2 controller. The L2 controller is configured to receive and process a first transaction from the non-coherent component; receive and process a second transaction from the coherent, non-caching component; and receive and process a third transaction from the coherent, caching component.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Abhijeet Ashok CHACHAD, David Matthew THOMPSON, Naveen BHORIA
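    Illustrative sketch: a minimal C dispatch model of the three transaction classes the L2 controller serves; the enum names and the per-class handling noted in comments are assumptions used to illustrate the architecture, not the patented controller.

      #include <stdbool.h>
      #include <stdint.h>

      enum initiator {
          NON_COHERENT,           /* no coherency handling required          */
          COHERENT_NON_CACHING,   /* must see L1 data but keeps no copies    */
          COHERENT_CACHING        /* keeps copies, so it can also be snooped */
      };

      /* Each transaction carries its initiator class, which selects how much
       * coherency work the L2 controller performs before answering. */
      void l2_process(enum initiator who, uint32_t addr, bool is_write)
      {
          switch (who) {
          case NON_COHERENT:
              /* serve from the L2 cache or memory; no snoop filtering needed */
              break;
          case COHERENT_NON_CACHING:
              /* consult the shadow L1 main/victim caches before responding   */
              break;
          case COHERENT_CACHING:
              /* additionally track and snoop the initiator's cached copies   */
              break;
          }
          (void)addr;
          (void)is_write;
      }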
  • Publication number: 20200371932
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate write miss caching in a cache system are disclosed. An example apparatus includes a first cache storage; a second cache storage, wherein the second cache storage includes a first portion operable to store a first set of data evicted from the first cache storage and a second portion; a cache controller coupled to the first cache storage and the second cache storage and operable to: receive a write operation; determine that the write operation produces a miss in the first cache storage; and in response to the miss in the first cache storage, provide write miss information associated with the write operation to the second cache storage for storing in the second portion.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
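    Illustrative sketch: a minimal C model of parking write-miss information in the reserved portion of the second cache storage; the buffer size, entry layout, and fallback behavior are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      #define WRITE_MISS_SLOTS 16   /* assumed size of the write-miss portion */

      struct write_miss { bool valid; uint32_t addr; uint64_t data; };
      static struct write_miss miss_buffer[WRITE_MISS_SLOTS];  /* 2nd portion */

      /* A write that misses the first cache storage is not allocated there;
       * its address and data are parked in the reserved portion of the second
       * cache storage so later writes to the same line can merge with it. */
      bool record_write_miss(uint32_t addr, uint64_t data)
      {
          for (int i = 0; i < WRITE_MISS_SLOTS; i++) {
              if (!miss_buffer[i].valid) {
                  miss_buffer[i].valid = true;
                  miss_buffer[i].addr  = addr;
                  miss_buffer[i].data  = data;
                  return true;
              }
          }
          return false;   /* portion full: pass the write on instead */
      }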
  • Publication number: 20200371938
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for read-modify-write support in multi-banked data RAM cache for bank arbitration. An example data cache system includes a store queue including a plurality of bank queues including a first bank queue having a write and read port configured to receive a respective write and read operation, storage coupled to the store queue including a plurality of data banks including a first data bank having a first port configured to receive the write or the read operation, first through third multiplexers, and bank arbitration logic including first arbiters including a first arbiter and second arbiters including a second arbiter, the first arbiter coupled to the second arbiter, the second and third multiplexers, the second arbiter coupled to the first multiplexer.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
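    Illustrative sketch: a minimal C model of per-bank arbitration between a read and a write presented in the same cycle; the read-first priority here is an assumption, not the patented arbitration rule.

      #include <stdbool.h>

      enum bank_grant { GRANT_NONE, GRANT_READ, GRANT_WRITE };

      /* Each data bank has a single port, so when the store queue presents
       * both a read and a write for the same bank, an arbiter grants one and
       * stalls the other; requests to different banks proceed in parallel. */
      enum bank_grant arbitrate_bank(bool read_req, bool write_req)
      {
          if (read_req)
              return GRANT_READ;
          if (write_req)
              return GRANT_WRITE;
          return GRANT_NONE;
      }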
  • Publication number: 20200371915
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate an atomic operation and/or a histogram operation in a cache pipeline are disclosed. An example system includes a cache storage coupled to an arithmetic component; and a cache controller coupled to the cache storage, wherein the cache controller is operable to: receive a memory operation that specifies a set of data; retrieve the set of data from the cache storage; utilize the arithmetic component to determine a set of counts of respective values in the set of data; generate a vector representing the set of counts; and provide the vector.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
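    Illustrative sketch: a minimal C model of the histogram step performed near the cache, assuming byte-valued data and a 256-entry count vector; names and sizes are illustrative only.

      #include <stdint.h>
      #include <string.h>

      /* Count how often each byte value occurs in the data retrieved from the
       * cache storage and return the counts as a vector, so the CPU receives
       * the histogram rather than the raw data. */
      void count_values(const uint8_t *data, int len, uint32_t counts[256])
      {
          memset(counts, 0, 256 * sizeof(uint32_t));
          for (int i = 0; i < len; i++)
              counts[data[i]]++;
      }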
  • Publication number: 20200371920
    Abstract: An apparatus including a CPU core and a L1 cache subsystem coupled to the CPU core. The L1 cache subsystem includes a L1 main cache, a L1 victim cache, and a L1 controller. The apparatus includes a L2 cache subsystem coupled to the L1 cache subsystem. The L2 cache subsystem includes a L2 main cache, a shadow L1 main cache, a shadow L1 victim cache, and a L2 controller. The L2 controller receives an indication from the L1 controller that a cache line A is being relocated from the L1 main cache to the L1 victim cache; in response to the indication, updates the shadow L1 main cache to reflect that the cache line A is no longer located in the L1 main cache; and in response to the indication, updates the shadow L1 victim cache to reflect that the cache line A is located in the L1 victim cache.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Abhijeet Ashok CHACHAD, David Matthew THOMPSON, Naveen BHORIA
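    Illustrative sketch: a minimal C model of the L2 controller's shadow bookkeeping when a line moves from the L1 main cache to the L1 victim cache; the shadow sizes, direct-mapped main shadow, and names are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      #define SHADOW_MAIN_SETS   256   /* assumed shadow L1 main capacity   */
      #define SHADOW_VICTIM_WAYS 8     /* assumed shadow L1 victim capacity */

      struct shadow_entry { bool valid; uint32_t tag; };
      static struct shadow_entry shadow_main[SHADOW_MAIN_SETS];
      static struct shadow_entry shadow_victim[SHADOW_VICTIM_WAYS];

      /* Clear the shadow main entry for the relocated line and record the
       * line in the shadow victim cache. */
      void on_l1_relocation(uint32_t set, uint32_t tag, uint32_t victim_way)
      {
          if (shadow_main[set].valid && shadow_main[set].tag == tag)
              shadow_main[set].valid = false;      /* no longer in L1 main */
          shadow_victim[victim_way].valid = true;  /* now in L1 victim     */
          shadow_victim[victim_way].tag   = tag;
      }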
  • Publication number: 20200371916
    Abstract: Techniques for caching data are provided that include receiving, by a caching system, a first write memory command for a memory address, the first write memory command associated with a first color tag, determining, by a first sub-cache of the caching system, that the memory address is not cached in the first sub-cache, determining, by a second sub-cache of the caching system, that the memory address is not cached in the second sub-cache, storing first data associated with the first write memory command in a cache line of the second sub-cache, storing the first color tag in the second sub-cache, receiving a second write memory command for the cache line, the second write memory command associated with a second color tag, merging the second color tag with the first color tag, storing the merged color tag, and evicting the cache line based on the merged color tag.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen BHORIA, Timothy David ANDERSON, Pete HIPPLEHEUSER
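    Illustrative sketch: a minimal C model of color-tag merging, assuming the tag is a small bitmask with one bit per color; the metadata layout and eviction query are illustrative assumptions.

      #include <stdint.h>

      struct line_meta { uint8_t color_tags; };   /* one bit per color */

      /* A second write that hits a line already held in the sub-cache has its
       * color tag OR-merged with the stored tag, so the line remembers every
       * color that touched it; an eviction pass keyed on a color can then
       * flush exactly the lines carrying that color. */
      void merge_color(struct line_meta *line, uint8_t new_color_tag)
      {
          line->color_tags |= new_color_tag;
      }

      int line_has_color(const struct line_meta *line, uint8_t color_tag)
      {
          return (line->color_tags & color_tag) != 0;
      }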
  • Publication number: 20200371919
    Abstract: A method includes determining, by a level one (L1) controller, to change a size of a L1 main cache; servicing, by the L1 controller, pending read requests and pending write requests from a central processing unit (CPU) core; stalling, by the L1 controller, new read requests and new write requests from the CPU core; writing back and invalidating, by the L1 controller, the L1 main cache. The method also includes receiving, by a level two (L2) controller, an indication that the L1 main cache has been invalidated and, in response, flushing a pipeline of the L2 controller; in response to the pipeline being flushed, stalling, by the L2 controller, requests received from any master; reinitializing, by the L2 controller, a shadow L1 main cache. Reinitializing includes clearing previous contents of the shadow L1 main cache and changing the size of the shadow L1 main cache.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Abhijeet Ashok CHACHAD, Naveen BHORIA, David Matthew THOMPSON, Neelima MURALIDHARAN
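    Illustrative sketch: the ordered resize handshake described in the abstract, written as C with stand-in stubs; every helper below is a placeholder name for the corresponding hardware action, not a real interface.

      /* Stand-in stubs for the hardware steps named in the abstract. */
      static void l1_service_pending_requests(void)        {}
      static void l1_stall_new_cpu_requests(void)          {}
      static void l1_writeback_and_invalidate(void)        {}
      static void l2_flush_pipeline(void)                  {}
      static void l2_stall_all_masters(void)               {}
      static void l2_reinit_shadow_l1_main(unsigned size)  { (void)size; }
      static void l1_resume(unsigned size)                 { (void)size; }

      /* Ordered model of the L1 main cache resize sequence. */
      void resize_l1_main_cache(unsigned new_size)
      {
          l1_service_pending_requests();        /* drain accepted reads/writes  */
          l1_stall_new_cpu_requests();          /* hold off new CPU traffic     */
          l1_writeback_and_invalidate();        /* push dirty lines out         */
          l2_flush_pipeline();                  /* L2 reacts to the invalidation */
          l2_stall_all_masters();               /* quiesce every master         */
          l2_reinit_shadow_l1_main(new_size);   /* clear shadow, apply new size */
          l1_resume(new_size);                  /* release the stalls           */
      }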
  • Publication number: 20200371911
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to facilitate read-modify-write support in a coherent victim cache with parallel data paths. An example apparatus includes a random-access memory configured to be coupled to a central processing unit via a first interface and a second interface, the random-access memory configured to obtain a read request indicating a first address to read via a snoop interface, an address encoder coupled to the random-access memory, the address encoder to, when the random-access memory indicates a hit of the read request, generate a second address corresponding to a victim cache based on the first address, and a multiplexer coupled to the victim cache to transmit a response including data obtained from the second address of the victim cache.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
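    Illustrative sketch: a minimal C model of the address encoder idea, assuming a small fully associative victim cache whose matching way index serves as the second address; sizes and names are illustrative only.

      #include <stdbool.h>
      #include <stdint.h>

      #define VICTIM_WAYS 16   /* assumed fully associative victim cache */

      static bool     victim_valid[VICTIM_WAYS];
      static uint32_t victim_tag[VICTIM_WAYS];

      /* A snoop read presents a first (memory) address; on a hit, the matching
       * way index becomes the second address used to read the victim cache's
       * data RAM, whose contents are then multiplexed into the response. */
      bool encode_victim_address(uint32_t snoop_addr, uint32_t *victim_addr)
      {
          for (uint32_t way = 0; way < VICTIM_WAYS; way++) {
              if (victim_valid[way] && victim_tag[way] == snoop_addr) {
                  *victim_addr = way;
                  return true;
              }
          }
          return false;   /* miss: no victim-cache read needed */
      }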
  • Publication number: 20200371927
    Abstract: In described examples, a coherent memory system includes a central processing unit (CPU) and first and second level caches. The CPU is arranged to execute program instructions to manipulate data in at least a first or second secure context. Each of the first and second caches stores a secure code for indicating the at least first or second secure contexts by which data for a respective cache line is received. The first and second level caches maintain coherency in response to comparing the secure codes of respective lines of cache and executing a cache coherency operation in response.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Naveen Bhoria
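    Illustrative sketch: a minimal C model of the secure-code comparison that drives the coherency operation; the metadata layout and the "invalidate on mismatch" response noted in the comment are assumptions for illustration.

      #include <stdbool.h>
      #include <stdint.h>

      struct line_copy { bool valid; uint32_t tag; uint8_t secure_code; };

      /* When both cache levels hold the same line, compare the secure codes
       * recorded with each copy; a mismatch means the copies were filled under
       * different secure contexts, which triggers a cache coherency operation
       * (for example, invalidating the stale copy). */
      bool needs_coherency_op(const struct line_copy *l1, const struct line_copy *l2)
      {
          return l1->valid && l2->valid &&
                 l1->tag == l2->tag &&
                 l1->secure_code != l2->secure_code;
      }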
  • Publication number: 20200371949
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic operations in a victim cache are disclosed. An example system includes a first cache storage to store a first set of data; a second cache storage to store a second set of data that has been evicted from the first cache storage; and a storage queue coupled to the first cache storage and the second cache storage, the storage queue including: an arithmetic component to: receive the second set of data from the second cache storage in response to a memory operation; and perform an arithmetic operation on the second set of data to produce a third set of data; and an arbitration manager to store the third set of data in the second cache storage.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
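    Illustrative sketch: a minimal C model of an atomic add performed next to the victim cache, assuming 64-bit data and an add operation; the capacity and names are illustrative, and the real arithmetic component may support other operations.

      #include <stdint.h>

      #define VICTIM_LINES 64   /* assumed victim-cache capacity */

      static uint64_t victim_data[VICTIM_LINES];

      /* The arithmetic component reads the value already held in the victim
       * cache, applies the operation, and the result is written straight back,
       * so the CPU never round-trips the data for the atomic update. */
      uint64_t victim_atomic_add(unsigned line, uint64_t operand)
      {
          uint64_t old_value = victim_data[line];    /* second set of data */
          victim_data[line]  = old_value + operand;  /* third set of data  */
          return old_value;
      }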
  • Publication number: 20200371921
    Abstract: Methods, apparatus, systems and articles of manufacture to reduce bank pressure using aggressive write merging are disclosed. An example apparatus includes a first cache storage; a second cache storage; a store queue coupled to at least one of the first cache storage and the second cache storage and operable to: receive a first memory operation specifying a first set of data; process the first memory operation for storing the first set of data in at least one of the first cache storage and the second cache storage; receive a second memory operation; and prior to storing the first set of data in the at least one of the first cache storage and the second cache storage, merge the first memory operation and the second memory operation.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 26, 2020
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
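    Illustrative sketch: a minimal C model of aggressive write merging, reduced to a single pending entry with a byte-enable mask; the entry layout and single-entry simplification are assumptions for illustration, not the patented store queue.

      #include <stdbool.h>
      #include <stdint.h>

      struct pending_write {
          bool     valid;
          uint32_t addr;         /* aligned word address           */
          uint64_t data;
          uint8_t  byte_enable;  /* which bytes of `data` are live */
      };
      static struct pending_write wq;   /* one-entry queue for brevity */

      /* If a new write hits the same word as a write that has not yet drained
       * to the cache banks, fold its bytes into the pending entry so the two
       * operations cost a single bank access. */
      bool try_merge_write(uint32_t addr, uint64_t data, uint8_t be)
      {
          if (!wq.valid || wq.addr != addr)
              return false;                      /* nothing to merge with */
          for (int b = 0; b < 8; b++) {
              if (be & (1u << b)) {
                  uint64_t m = 0xFFull << (8 * b);
                  wq.data = (wq.data & ~m) | (data & m);
              }
          }
          wq.byte_enable |= be;
          return true;                           /* merged: one bank write */
      }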