Patents by Inventor Shubhendu Sekhar Mukherjee

Shubhendu Sekhar Mukherjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10558577
    Abstract: Managing memory access requests to a cache system including one or more cache levels that are configured to store cache lines that correspond to memory blocks in a main memory includes: storing stream information identifying recognized streams that were recognized based on previously received memory access requests, where one or more of the recognized streams comprise strided streams that each have an associated strided prefetch result corresponding to a stride that is larger than or equal to a size of a single cache line; and determining whether or not a next cache line prefetch request corresponding to a particular memory access request will be made based at least in part on whether or not the particular memory access request matches a strided prefetch result for at least one strided stream, and a history of past next cache line prefetch requests.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: February 11, 2020
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, David Albert Carlson, Srilatha Manne
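    Illustrative sketch: the abstract above describes deciding, per memory access, whether to issue a next-cache-line prefetch based on whether the access matches a strided prefetch result and on a history of past next-cache-line prefetch requests. The behavioral Python sketch below is not the patented implementation; the names (NextLinePrefetchFilter, CACHE_LINE, HISTORY_BITS) and the specific suppression policy are assumptions.

      # Behavioral sketch, not the patented design: decide per access whether
      # to issue a next-cache-line prefetch.
      CACHE_LINE = 64     # bytes per cache line (assumed)
      HISTORY_BITS = 8    # length of the next-line-prefetch history (assumed)

      class StridedStream:
          """A recognized stream whose stride is at least one cache line."""
          def __init__(self, last_addr, stride):
              self.stride = stride                       # stride >= CACHE_LINE
              self.prefetch_addr = last_addr + stride    # strided prefetch result

          def advance(self):
              self.prefetch_addr += self.stride

      class NextLinePrefetchFilter:
          def __init__(self):
              self.streams = []    # recognized strided streams
              self.history = []    # 1 = a past next-line prefetch was issued

          def should_prefetch_next_line(self, addr):
              # If the access matches a strided prefetch result, the strided
              # stream already covers it, so a next-line prefetch is redundant.
              matches_stride = False
              for s in self.streams:
                  if addr // CACHE_LINE == s.prefetch_addr // CACHE_LINE:
                      matches_stride = True
                      s.advance()
              # Consult the history of past next-cache-line prefetch requests:
              # back off if the last HISTORY_BITS decisions all issued one.
              recent = self.history[-HISTORY_BITS:]
              throttled = len(recent) == HISTORY_BITS and all(recent)
              decision = (not matches_stride) and not throttled
              self.history.append(1 if decision else 0)
              return decision

      # Tiny demo: one strided stream with a 128-byte stride.
      f = NextLinePrefetchFilter()
      f.streams.append(StridedStream(last_addr=0x1000, stride=128))
      for a in (0x1080, 0x1100, 0x2000):
          print(hex(a), "-> next-line prefetch:", f.should_prefetch_next_line(a))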
  • Patent number: 10540181
    Abstract: Instructions are executed in a pipeline of a processor, where each instruction is associated with a particular context. A first storage stores branch prediction information characterizing results of branch instructions previously executed. The first storage is dynamically partitioned into partitions of one or more entries. Dynamically partitioning includes updating a partition to include an additional entry by associating the additional entry with a particular subset of one or more contexts. A predicted branch result is determined based on at least a portion of the branch prediction information. An actual branch result provided based on an executed branch instruction is used to update the branch prediction information. Providing a predicted branch result for a first branch instruction includes retrieving a first entry from a first partition based at least in part on an identified first subset of one or more contexts associated with the first branch instruction.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: January 21, 2020
    Assignee: Marvell World Trade Ltd.
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, David Kravitz, Edward McLellan, Rabin Sugumar
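    Illustrative sketch: a minimal behavioral model of branch-prediction storage that is dynamically partitioned into per-context-subset partitions, as the abstract above describes. The 2-bit counters, the PC-modulo indexing, and the partition sizes are assumptions, not the patent's design.

      # Context-partitioned branch-prediction storage (illustrative only).
      class PartitionedBranchPredictor:
          def __init__(self):
              # Map: frozenset of context IDs -> list of 2-bit saturating counters.
              self.partitions = {}

          def add_partition(self, contexts, entries):
              """Dynamically add storage for a particular subset of contexts."""
              self.partitions[frozenset(contexts)] = [1] * entries  # weakly not-taken

          def _table(self, context):
              for contexts, table in self.partitions.items():
                  if context in contexts:
                      return table
              raise KeyError("no partition holds context %r" % context)

          def predict(self, context, pc):
              table = self._table(context)
              return table[pc % len(table)] >= 2    # predict taken if counter >= 2

          def update(self, context, pc, taken):
              table = self._table(context)
              i = pc % len(table)
              table[i] = min(3, table[i] + 1) if taken else max(0, table[i] - 1)

      bp = PartitionedBranchPredictor()
      bp.add_partition({"guest0", "guest1"}, 512)   # one partition, two contexts
      bp.add_partition({"hypervisor"}, 512)         # a separate partition
      bp.update("guest0", 0x40, taken=True)
      bp.update("guest0", 0x40, taken=True)
      print(bp.predict("guest0", 0x40), bp.predict("hypervisor", 0x40))  # True False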
  • Patent number: 10445096
    Abstract: Managing lock and unlock operations for a first thread executing on a first processor core includes, for each instruction included in the first thread and identified as being associated with: (1) a lock operation corresponding to a particular lock, in response to determining that the particular lock has already been acquired, continuing to perform the lock operation for multiple attempts during which the first processor core is not able to execute threads other than the first thread, or (2) an unlock operation corresponding to a particular lock, releasing the particular lock from the first thread. Prioritization of selected messages sent over interconnection circuitry configured to connect each processor core to a memory system of the processor is preserved. The selected messages associated with instructions identified as being associated with an unlock operation are prioritized over messages associated with instructions identified as being associated with a lock operation.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: October 15, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
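    Illustrative sketch: the prioritization aspect only, i.e. messages associated with identified unlock operations being arbitrated ahead of messages associated with identified lock operations. The two-queue arbiter, its method names, and the message strings are assumptions.

      # Unlock-before-lock message arbitration (illustrative only).
      from collections import deque

      class InterconnectArbiter:
          def __init__(self):
              self.unlock_q = deque()   # messages for identified unlock operations
              self.lock_q = deque()     # messages for identified lock operations

          def send(self, msg, kind):
              (self.unlock_q if kind == "unlock" else self.lock_q).append(msg)

          def next_message(self):
              # Pending unlock messages are always selected before lock messages.
              if self.unlock_q:
                  return self.unlock_q.popleft()
              if self.lock_q:
                  return self.lock_q.popleft()
              return None

      arb = InterconnectArbiter()
      arb.send("core0: try-acquire lock L", "lock")
      arb.send("core1: try-acquire lock L", "lock")
      arb.send("core2: release lock L", "unlock")
      while (m := arb.next_message()) is not None:
          print(m)   # core2's release is delivered first, although it arrived last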
  • Publication number: 20190227802
    Abstract: A predicted branch result is determined based on at least a portion of branch prediction information, which is updated based on an actual branch result, which is provided based on an executed branch instruction. For a first execution of a first branch instruction, the updating includes: computing a randomized value and storing the randomized value in association with an identified subset of one or more contexts that includes a context associated with the first branch instruction, obfuscating the actual branch result based at least in part on the randomized value, and storing a resulting obfuscated value in the branch prediction information. Providing a predicted branch result for a second execution of the first branch instruction includes: retrieving the obfuscated value from the branch prediction information, retrieving the randomized value, and de-obfuscating the obfuscated value using the randomized value to recover the actual branch result as the predicted branch result.
    Type: Application
    Filed: April 26, 2018
    Publication date: July 25, 2019
    Inventors: Richard Eugene Kessler, Wilson P. Snyder, II, Shubhendu Sekhar Mukherjee
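    Illustrative sketch: obfuscating stored branch outcomes with a randomized value kept per subset of contexts, and de-obfuscating with the same value when predicting, as the abstract above describes. The 1-bit XOR mask, the table layout, and the indexing are assumptions for illustration.

      # Obfuscated branch-outcome storage (illustrative only).
      import secrets

      class ObfuscatedBranchTable:
          def __init__(self, entries=256):
              self.table = [0] * entries   # stores obfuscated outcome bits
              self.masks = {}              # context subset -> randomized mask bit

          def _mask(self, context):
              # Compute and store a randomized value for this subset of contexts.
              return self.masks.setdefault(context, secrets.randbits(1))

          def update(self, context, pc, taken):
              # Obfuscate the actual result before writing it to shared storage.
              self.table[pc % len(self.table)] = int(taken) ^ self._mask(context)

          def predict(self, context, pc):
              # De-obfuscate with the same randomized value to recover the result.
              return bool(self.table[pc % len(self.table)] ^ self._mask(context))

      t = ObfuscatedBranchTable()
      t.update("procA", 0x80, taken=True)
      print(t.predict("procA", 0x80))   # True: recovered via procA's own mask
      print(t.predict("procB", 0x80))   # procB's mask differs at random, so the
                                        # stored bit is not reliably recovered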
  • Publication number: 20190227803
    Abstract: Instructions are executed in a pipeline of a processor, where each instruction is associated with a particular context. A first storage stores branch prediction information characterizing results of branch instructions previously executed. The first storage is dynamically partitioned into partitions of one or more entries. Dynamically partitioning includes updating a partition to include an additional entry by associating the additional entry with a particular subset of one or more contexts. A predicted branch result is determined based on at least a portion of the branch prediction information. An actual branch result provided based on an executed branch instruction is used to update the branch prediction information. Providing a predicted branch result for a first branch instruction includes retrieving a first entry from a first partition based at least in part on an identified first subset of one or more contexts associated with the first branch instruction.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 25, 2019
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, David Kravitz, Edward McLellan, Rabin Sugumar
  • Publication number: 20190227804
    Abstract: Instructions are executed in a pipeline. Storage accessible to the pipeline stores branch prediction information characterizing results of branch instructions previously executed. A predicted branch result is provided, for at least some branch instructions, based on a selected predictor of multiple predictors. An actual branch result is provided based on an executed branch instruction, and the branch prediction information is updated based on the actual branch result. The predictors include: a first predictor that determines the predicted branch result based on at least a portion of the branch prediction information; and a second predictor that determines the predicted branch result independently from the branch prediction information.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 25, 2019
    Inventors: Shubhendu Sekhar Mukherjee, David Kravitz, Edward J. McLellan
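    Illustrative sketch: selecting between a predictor driven by stored branch prediction information and one that works independently of it (here a static backward-taken heuristic). The selection rule, which uses the history-based predictor only once its entry has been trained, is an assumption.

      # Two predictors with a simple selection rule (illustrative only).
      class TwoPredictorBranchUnit:
          def __init__(self, entries=256):
              self.counters = [1] * entries     # branch prediction information
              self.trained = [False] * entries  # has the entry been updated yet?

          def _dynamic(self, pc):
              return self.counters[pc % len(self.counters)] >= 2

          @staticmethod
          def _static(pc, target):
              return target < pc                # backward branches predicted taken

          def predict(self, pc, target):
              i = pc % len(self.counters)
              # The first predictor uses the stored information; the second
              # (static) predictor needs none of it.
              return self._dynamic(pc) if self.trained[i] else self._static(pc, target)

          def update(self, pc, taken):
              i = pc % len(self.counters)
              c = self.counters[i]
              self.counters[i] = min(3, c + 1) if taken else max(0, c - 1)
              self.trained[i] = True

      bu = TwoPredictorBranchUnit()
      print(bu.predict(0x200, target=0x100))   # untrained: static, backward -> True
      bu.update(0x200, taken=False)
      print(bu.predict(0x200, target=0x100))   # trained: counter-based -> False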
  • Patent number: 10339054
    Abstract: Execution of memory instructions is managed using memory management circuitry that includes a first cache storing a plurality of mappings from a page table, and a second cache storing entries based on virtual addresses. The memory management circuitry executes operations from one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in-progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in-progress operation depends on either or both of: (1) which of the one or more modules initiated the particular in-progress operation, or (2) whether or not the particular in-progress operation provides results to the first cache or the second cache.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: July 2, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Albert Ma, Mike Bertone
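    Illustrative sketch: selectively ordering in-progress operations relative to an invalidating operation, with each operation's position depending on which module initiated it and on whether it provides results to either translation-related cache. The concrete two-level sort key below is an assumed policy, not the patented ordering.

      # Ordering in-progress operations around an invalidation (illustrative only).
      from dataclasses import dataclass

      @dataclass
      class InProgressOp:
          name: str
          initiator: str        # e.g. "page-table-walker", "core", "io"
          fills_cache: bool     # provides results to the first or second cache?

      def order_for_invalidation(ops):
          # Operations that would install translations into either cache are
          # drained ahead of the others; ties are broken by initiating module.
          module_rank = {"page-table-walker": 0, "core": 1, "io": 2}
          return sorted(ops, key=lambda op: (not op.fills_cache,
                                             module_rank.get(op.initiator, 3)))

      pending = [
          InProgressOp("load A", "core", fills_cache=False),
          InProgressOp("walk for VA x", "page-table-walker", fills_cache=True),
          InProgressOp("store B", "core", fills_cache=True),
      ]
      for op in order_for_invalidation(pending):
          print(op.name)   # walk for VA x, store B, load A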
  • Patent number: 10331500
    Abstract: Managing lock and unlock operations for a first thread executing on a first processor core includes, for each instruction included in the first thread and identified as being associated with: (1) a lock operation corresponding to a particular lock stored in a particular memory location, in response to determining that the particular lock has already been acquired, continuing to perform the lock operation for multiple attempts using associated operation messages for accessing the particular memory location, or (2) an unlock operation corresponding to a particular lock stored in a particular memory location, releasing the particular lock from the first thread using an associated operation message for accessing the particular memory location. Selected operation messages associated with an unlock operation are prioritized over operation messages associated with a lock operation.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: June 25, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
  • Publication number: 20190138232
    Abstract: A method for managing an observed order of instructions in a computing system includes utilizing an overloaded memory barrier instruction to specify whether a global ordering constraint or a local ordering constraint is enforced.
    Type: Application
    Filed: January 4, 2019
    Publication date: May 9, 2019
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, Mike Bertone, Chris Comis, Bryan Chin
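    Illustrative sketch: an overloaded barrier whose operand selects whether a global or a local ordering constraint is enforced. The operand encoding, the scope-bit name, and the idea of returning the set of observers are assumptions used only to make the distinction concrete.

      # Decoding an overloaded memory barrier's scope (illustrative only).
      GLOBAL_SCOPE_BIT = 0x1   # assumed operand bit: 1 = global, 0 = local

      def barrier_scope(operand, issuing_core, all_cores):
          """Return the set of observers the ordering constraint applies to."""
          if operand & GLOBAL_SCOPE_BIT:
              # Global constraint: prior memory operations must be observed by
              # every agent before later operations become visible.
              return set(all_cores)
          # Local constraint: ordering is enforced only with respect to the
          # issuing core, avoiding system-wide synchronization.
          return {issuing_core}

      cores = ["core0", "core1", "core2", "core3"]
      print(sorted(barrier_scope(0x1, "core2", cores)))   # global: all cores
      print(sorted(barrier_scope(0x0, "core2", cores)))   # local: just core2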
  • Patent number: 10282299
    Abstract: Partition information includes entries that each include an entity identifier and associated cache configuration information. A controller manages memory requests originating from processor cores, including: comparing at least a portion of an address included in a memory request with tags stored in a cache to determine whether the memory request results in a hit or a miss, and comparing an entity identifier included in the memory request with stored entity identifiers to determine a matched entry. The cache configuration information associated with the entity identifier in a matched entry is updated based at least in part on a hit or miss result. The associated cache configuration information includes cache usage information that tracks usage of the cache by an entity associated with the particular entity identifier, and partition descriptors that each define a different group of one or more regions of the cache.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: May 7, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, David Asher, Wilson P. Snyder, II
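    Illustrative sketch: a cache controller that looks up partition information by the entity identifier carried in each memory request, updates that entry's usage counters on a hit or miss, and fills only into the regions allowed by the entity's partition descriptor. The ways-as-regions model and the counter fields are simplifying assumptions.

      # Entity-keyed cache partitioning and usage tracking (illustrative only).
      class PartitionedCacheController:
          def __init__(self, num_ways=8):
              self.ways = [None] * num_ways   # each way holds one tag (toy model)
              # entity id -> {"ways": allowed ways (partition descriptor),
              #               "hits"/"misses": cache usage information}
              self.partition_info = {}

          def configure(self, entity_id, allowed_ways):
              self.partition_info[entity_id] = {"ways": list(allowed_ways),
                                                "hits": 0, "misses": 0}

          def access(self, entity_id, tag):
              entry = self.partition_info[entity_id]   # match on the entity id
              hit = tag in self.ways                   # compare against stored tags
              if hit:
                  entry["hits"] += 1
              else:
                  entry["misses"] += 1
                  # Fill only into ways this entity's partition descriptor allows.
                  victim = entry["ways"][entry["misses"] % len(entry["ways"])]
                  self.ways[victim] = tag
              return hit

      c = PartitionedCacheController()
      c.configure("vm0", allowed_ways=[0, 1, 2, 3])
      c.configure("vm1", allowed_ways=[4, 5, 6, 7])
      c.access("vm0", tag=0xAAA)                                   # miss, fills a vm0 way
      print(c.access("vm0", tag=0xAAA), c.partition_info["vm0"])   # hit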
  • Patent number: 10248420
    Abstract: Managing instructions on a processor includes: executing threads having access to a stored library of operations. For a first thread executing on the first processor core, for each instruction included in the first thread and identified as being associated with a lock operation corresponding to a particular lock, the managing includes determining if the particular lock has already been acquired for another thread executing on a processor core other than the first processor core, and if so, continuing to perform the lock operation for multiple attempts using a hardware lock operation different from the lock operation in the stored library, and if not, acquiring the particular lock for the first thread. The hardware lock operation performs a modified atomic operation that changes a result of the hardware lock operation for failed attempts to acquire the particular lock relative to a result of the lock operation in the stored library.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: April 2, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
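    Illustrative sketch: a retry path that, once a lock is observed to be held, switches from a library-style atomic swap to a modified operation whose result on failed attempts differs (it reports failure without writing the lock word). The Python lock word, method names, and attempt bound are assumptions; hardware atomicity is only imitated with a threading.Lock.

      # Library lock op vs. modified hardware retry op (illustrative only).
      import threading

      class ToyLock:
          def __init__(self):
              self.word = 0                     # 0 = free, 1 = held
              self._atomic = threading.Lock()   # stands in for hardware atomicity

          def library_swap_acquire(self):
              """Library-style lock operation: always writes (atomic swap of 1)."""
              with self._atomic:
                  old, self.word = self.word, 1
              return old == 0                   # acquired if it was previously free

          def hw_retry_acquire(self):
              """Modified operation used for repeated attempts: on failure it does
              not write the lock word, so failed spins do not disturb the line."""
              with self._atomic:
                  if self.word == 0:
                      self.word = 1
                      return True
                  return False                  # different result on failed attempts

          def acquire(self, max_attempts=100):
              if self.library_swap_acquire():
                  return True
              for _ in range(max_attempts):     # already held: switch operations
                  if self.hw_retry_acquire():
                      return True
              return False

          def release(self):
              with self._atomic:
                  self.word = 0

      lk = ToyLock()
      print(lk.acquire())    # True: acquired on the first, library-style attempt
      print(lk.acquire(5))   # False: held, retries with the modified op, gives up
      lk.release()
      print(lk.acquire())    # True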
  • Patent number: 10223279
    Abstract: A translation lookaside buffer stores information indicating respective page sizes for different translations. A virtual-address cache module manages entries, where each entry stores a memory block in association with a virtual address and a code representing at least one page size of a memory page on which the memory block is located. The managing includes: receiving a translation lookaside buffer invalidation instruction for invalidating at least one translation lookaside buffer entry in the translation lookaside buffer, where the translation lookaside buffer invalidation instruction includes at least one invalid virtual address; comparing selected bits of the invalid virtual address with selected bits of each of a plurality of virtual addresses associated with respective entries in the virtual-address cache module, based on the codes; and invalidating one or more entries in the virtual-address cache module based on the comparing.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: March 5, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Michael Bertone, David Albert Carlson
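    Illustrative sketch: invalidating virtual-address-cache entries by comparing only the address bits above the page offset, where a per-entry code selects the page size used for the comparison. The code-to-page-size table (4 KB / 2 MB / 1 GB) and the entry fields are assumptions.

      # Page-size-aware invalidation of a virtual-address cache (illustrative only).
      PAGE_SHIFT = {0: 12, 1: 21, 2: 30}   # assumed codes: 4 KB, 2 MB, 1 GB pages

      class VACacheEntry:
          def __init__(self, vaddr, size_code, data):
              self.vaddr = vaddr
              self.size_code = size_code    # code for the page the block lies on
              self.data = data

      def invalidate(entries, invalid_vaddr):
          """Drop every entry whose page (per its own size code) contains the
          invalidated virtual address; only selected upper bits are compared."""
          kept = []
          for e in entries:
              shift = PAGE_SHIFT[e.size_code]
              if (e.vaddr >> shift) != (invalid_vaddr >> shift):
                  kept.append(e)
          return kept

      cache = [
          VACacheEntry(0x0000_1040, size_code=0, data="A"),   # 4 KB page at 0x1000
          VACacheEntry(0x0000_2040, size_code=0, data="B"),   # 4 KB page at 0x2000
          VACacheEntry(0x0020_0040, size_code=1, data="C"),   # 2 MB page at 0x200000
      ]
      cache = invalidate(cache, invalid_vaddr=0x0000_1000)
      print([e.data for e in cache])   # ['B', 'C']: only A shared the invalidated page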
  • Patent number: 10216430
    Abstract: A method for managing an observed order of instructions in a computing system includes utilizing an overloaded memory barrier instruction to specify whether a global ordering constraint or a local ordering constraint is enforced.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: February 26, 2019
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, Richard Eugene Kessler, Mike Bertone, Chris Comis, Bryan Chin
  • Publication number: 20180373635
    Abstract: Partition information includes entries that each include an entity identifier and associated cache configuration information. A controller manages memory requests originating from processor cores, including: comparing at least a portion of an address included in a memory request with tags stored in a cache to determine whether the memory request results in a hit or a miss, and comparing an entity identifier included in the memory request with stored entity identifiers to determine a matched entry. The cache configuration information associated with the entity identifier in a matched entry is updated based at least in part on a hit or miss result. The associated cache configuration information includes cache usage information that tracks usage of the cache by an entity associated with the particular entity identifier, and partition descriptors that each define a different group of one or more regions of the cache.
    Type: Application
    Filed: June 23, 2017
    Publication date: December 27, 2018
    Inventors: Shubhendu Sekhar Mukherjee, David Asher, Wilson P. Snyder, II
  • Publication number: 20180300247
    Abstract: Managing memory access requests to a cache system including one or more cache levels that are configured to store cache lines that correspond to memory blocks in a main memory includes: storing stream information identifying recognized streams that were recognized based on previously received memory access requests, where one or more of the recognized streams comprise strided streams that each have an associated strided prefetch result corresponding to a stride that is larger than or equal to a size of a single cache line; and determining whether or not a next cache line prefetch request corresponding to a particular memory access request will be made based at least in part on whether or not the particular memory access request matches a strided prefetch result for at least one strided stream, and a history of past next cache line prefetch requests.
    Type: Application
    Filed: June 18, 2018
    Publication date: October 18, 2018
    Inventors: Shubhendu Sekhar Mukherjee, David Albert Carlson, Srilatha Manne
  • Publication number: 20180293113
    Abstract: Managing instructions on a processor includes: executing threads having access to a stored library of operations. For a first thread executing on the first processor core, for each instruction included in the first thread and identified as being associated with a lock operation corresponding to a particular lock, the managing includes determining if the particular lock has already been acquired for another thread executing on a processor core other than the first processor core, and if so, continuing to perform the lock operation for multiple attempts using a hardware lock operation different from the lock operation in the stored library, and if not, acquiring the particular lock for the first thread. The hardware lock operation performs a modified atomic operation that changes a result of the hardware lock operation for failed attempts to acquire the particular lock relative to a result of the lock operation in the stored library.
    Type: Application
    Filed: May 31, 2017
    Publication date: October 11, 2018
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
  • Publication number: 20180293100
    Abstract: Managing lock and unlock operations for a first thread executing on a first processor core includes, for each instruction included in the first thread and identified as being associated with: (1) a lock operation corresponding to a particular lock, in response to determining that the particular lock has already been acquired, continuing to perform the lock operation for multiple attempts during which the first processor core is not able to execute threads other than the first thread, or (2) an unlock operation corresponding to a particular lock, releasing the particular lock from the first thread. Prioritization of selected messages sent over interconnection circuitry configured to connect each processor core to a memory system of the processor is preserved. The selected messages associated with instructions identified as being associated with an unlock operation are prioritized over messages associated with instructions identified as being associated with a lock operation.
    Type: Application
    Filed: May 31, 2017
    Publication date: October 11, 2018
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
  • Publication number: 20180293070
    Abstract: Managing instructions on a processor includes: identifying selected instructions as being associated with operations from a stored library of operations. The identifying includes, for instructions included in a particular thread executing on the processor, identifying a first subset of the instructions as being associated with a lock operation and a second subset of the instructions as being associated with an unlock operation, based on predetermined characteristics of the instructions. Managing lock and unlock operations associated with the selected instructions that are issued on a first processor core includes, for each instruction included in a first thread and identified as being associated with a lock operation corresponding to a particular lock, in response to determining that the particular lock has already been acquired, continuing to attempt to acquire the particular lock for multiple attempts using a lock operation different from the lock operation in the stored library.
    Type: Application
    Filed: May 31, 2017
    Publication date: October 11, 2018
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
  • Publication number: 20180293114
    Abstract: Managing lock and unlock operations for a first thread executing on a first processor core includes, for each instruction included in the first thread and identified as being associated with: (1) a lock operation corresponding to a particular lock stored in a particular memory location, in response to determining that the particular lock has already been acquired, continuing to perform the lock operation for multiple attempts using associated operation messages for accessing the particular memory location, or (2) an unlock operation corresponding to a particular lock stored in a particular memory location, releasing the particular lock from the first thread using an associated operation message for accessing the particular memory location. Selected operation messages associated with an unlock operation are prioritized over operation messages associated with a lock operation.
    Type: Application
    Filed: September 7, 2017
    Publication date: October 11, 2018
    Inventors: Shubhendu Sekhar Mukherjee, Isam Wadih Akkawi, David Asher, Michael Bertone, David Albert Carlson, Bradley Dobbie, Richard Eugene Kessler
  • Patent number: 10013360
    Abstract: Address translation and caching are managed using a processor that includes at least one CPU configured to run a hypervisor at a first access level and at least one guest operating system at a second access level. The managing includes: at the second access level, translating from virtual addresses to intermediate physical addresses; at the second access level, determining reuse information for ranges of virtual addresses based on estimated reuse of data stored within a virtual address space; at the first access level, translating from the intermediate physical addresses to physical addresses; at the first access level, determining reuse information for ranges of intermediate physical addresses based on estimated reuse of data stored within an intermediate physical address space; and processing reuse information determined at different access levels to store cache lines in selected portions of a first cache.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: July 3, 2018
    Assignee: Cavium, Inc.
    Inventor: Shubhendu Sekhar Mukherjee
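    Illustrative sketch: combining reuse information determined at the guest level (virtual to intermediate physical) and at the hypervisor level (intermediate physical to physical) to choose which portion of a cache receives a line. The page granularity, the 0-3 hint values, and the take-the-minimum combining rule are assumptions for illustration.

      # Two-stage translation with combined reuse hints (illustrative only).
      PAGE = 4096

      # Stage 1 (guest): VA page -> (IPA page, estimated-reuse hint 0..3)
      stage1 = {0x1000: (0x8000, 3), 0x2000: (0x9000, 1)}
      # Stage 2 (hypervisor): IPA page -> (PA page, estimated-reuse hint 0..3)
      stage2 = {0x8000: (0x40000, 2), 0x9000: (0x41000, 3)}

      def translate_with_reuse(vaddr):
          va_page, offset = vaddr & ~(PAGE - 1), vaddr & (PAGE - 1)
          ipa_page, guest_hint = stage1[va_page]
          pa_page, hyp_hint = stage2[ipa_page]
          # Combine the reuse estimates from both access levels; low combined
          # reuse is steered into a small streaming portion of the first cache.
          combined = min(guest_hint, hyp_hint)
          portion = "streaming-portion" if combined <= 1 else "main-portion"
          return pa_page + offset, portion

      for va in (0x1040, 0x2040):
          pa, portion = translate_with_reuse(va)
          print(hex(va), "->", hex(pa), portion)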