Patents by Inventor Mohit S. Karve

Mohit S. Karve has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10942747
    Abstract: Aspects of the invention include tracking relative ages of instructions in a first-in-first-out (FIFO) issue queue of an out-of-order (OoO) processor. The FIFO issue queue is configured to add instructions to the issue queue in a sequential order and to remove instructions from the issue queue in any order including a non-sequential order. The tracking of relative ages of instructions includes maintaining a head pointer to a location of an oldest instruction in the issue queue and a tail pointer to a location of a last instruction added to the issue queue. It is determined periodically whether the tail pointer is pointing to a location that includes a valid instruction. The tail pointer is updated to point to a previous sequential location in the issue queue based at least in part on determining that the tail pointer is not pointing to a location that corresponds to a valid instruction.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: March 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
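    To make the head/tail scheme in the abstract above concrete, here is a minimal C++ sketch of a software model. The queue size, the member names (FifoIssueQueue, retract_tail), and the head-advance step are illustrative assumptions; the patent describes a hardware issue queue, not this code.

        #include <array>
        #include <cstdint>
        #include <optional>

        // Simplified software model of head/tail age tracking in a FIFO issue queue.
        struct FifoIssueQueue {
            static constexpr int N = 8;                            // assumed queue size
            std::array<std::optional<std::uint32_t>, N> slots{};   // nullopt = invalid entry
            int head = 0;   // location of the oldest instruction
            int tail = 0;   // location of the last instruction added
            int count = 0;

            bool add(std::uint32_t instr) {                        // adds are strictly sequential
                int next = (count == 0) ? head : (tail + 1) % N;
                if (slots[next]) return false;                     // next sequential slot still in use
                slots[next] = instr;
                tail = next;
                ++count;
                return true;
            }

            void issue(int pos) {                                  // removal may be non-sequential
                if (!slots[pos]) return;
                slots[pos].reset();
                --count;
                while (count > 0 && !slots[head])                  // keep head on the oldest entry
                    head = (head + 1) % N;
            }

            // Periodic check from the abstract: if the tail slot no longer holds a
            // valid instruction, retract the tail to the previous sequential location.
            void retract_tail() {
                while (count > 0 && !slots[tail]) tail = (tail - 1 + N) % N;
            }
        };

    For example, after adding two instructions and issuing the one at the tail, a call to retract_tail() moves the tail back onto the newest remaining valid entry.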
  • Patent number: 10922087
    Abstract: Aspects of the invention include tracking relative ages of instructions in an issue queue of an OoO processor. The tracking includes grouping entries in the issue queue into a pool of blocks, each block containing two or more entries that are configured to be allocated and deallocated as a single unit, each entry configured to store an instruction. Blocks are selected in any order from the pool of blocks for allocation. The selected blocks are allocated and the relative ages of the allocated blocks are tracked based at least in part on the order in which the blocks are allocated. Each allocated block is configured as a first-in-first-out (FIFO) queue of entries, configured to add instructions to the block in a sequential order, and configured to remove instructions from the block in any order including a non-sequential order. The relative ages of instructions within each allocated block are tracked.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: February 16, 2021
    Assignee: International Business Machines Corporation
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
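    The following C++ sketch illustrates the block-pool idea from the abstract above. The block count, block size, and the use of standard containers (a free list plus an allocation-order list) are assumptions made for illustration; the patent describes hardware structures rather than this code.

        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <deque>
        #include <vector>

        // Each block holds a small FIFO of entries; instructions are added at the
        // back in sequential order but may be removed from any position.
        struct Block {
            std::deque<std::uint32_t> entries;
        };

        struct BlockIssueQueue {
            static constexpr std::size_t NUM_BLOCKS = 8;
            static constexpr std::size_t ENTRIES_PER_BLOCK = 4;
            std::vector<Block> pool = std::vector<Block>(NUM_BLOCKS);
            std::vector<std::size_t> free_list{0, 1, 2, 3, 4, 5, 6, 7};
            std::vector<std::size_t> alloc_order;   // oldest allocated block first

            // Blocks may be taken from the free pool in any order; relative block
            // age is tracked by the order in which blocks are allocated.
            bool allocate(std::size_t block_id) {
                auto it = std::find(free_list.begin(), free_list.end(), block_id);
                if (it == free_list.end()) return false;
                free_list.erase(it);
                alloc_order.push_back(block_id);
                return true;
            }

            bool add(std::size_t block_id, std::uint32_t instr) {   // sequential add within a block
                Block &b = pool[block_id];
                if (b.entries.size() == ENTRIES_PER_BLOCK) return false;
                b.entries.push_back(instr);
                return true;
            }

            void remove(std::size_t block_id, std::size_t pos) {    // any-order removal
                Block &b = pool[block_id];
                b.entries.erase(b.entries.begin() + static_cast<std::ptrdiff_t>(pos));
                if (b.entries.empty()) deallocate(block_id);         // whole block freed as a unit
            }

            void deallocate(std::size_t block_id) {
                alloc_order.erase(std::remove(alloc_order.begin(), alloc_order.end(), block_id),
                                  alloc_order.end());
                free_list.push_back(block_id);
            }
        };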
  • Patent number: 10915446
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to prefetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on a number of confirmations observed in the stream. The first request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Doing so allows a memory controller to determine whether to drop at least the first request based on the confidence level and a memory resource utilization threshold.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Richard J. Eickemeyer, John B. Griswell, Jr., Mohit S. Karve
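    A compact C++ sketch of the confidence-gated drop decision described in the abstract above. The threshold constants, field names, and the utilization model are assumptions made for illustration and are not taken from the patent.

        #include <cstdint>

        struct PrefetchRequest {
            std::uint64_t addr;
            unsigned confirmations;                    // confirmations observed in this stream so far
        };

        constexpr unsigned CONFIDENCE_THRESHOLD = 2;   // assumed "specified value"

        // The prefetcher derives confidence from the number of confirmations.
        bool is_confident(const PrefetchRequest &req) {
            return req.confirmations > CONFIDENCE_THRESHOLD;
        }

        // Memory-controller side: drop a non-confident prefetch when memory
        // resources are already heavily utilized.
        bool should_drop(const PrefetchRequest &req, double memory_utilization,
                         double utilization_threshold = 0.75) {
            return !is_confident(req) && memory_utilization > utilization_threshold;
        }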
  • Patent number: 10901744
    Abstract: Aspects of the invention include buffered instruction dispatching to an issue queue. A non-limiting example includes dispatching from a dispatch unit of a processor a first group of instructions selected from a first plurality of instructions to a first issue queue partition of the processor in a first cycle. A second group of instructions selected from the first plurality of instructions is passed to an issue queue buffer of the processor in the first cycle. The second group of instructions is passed from the issue queue buffer to the first issue queue partition in a second cycle. A third group of instructions selected from a second plurality of instructions is dispatched to a second issue queue partition in the second cycle.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
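    A cycle-by-cycle C++ sketch of the buffered dispatch flow in the abstract above. Group sizes, the two-partition layout, and all names are illustrative assumptions rather than a description of the actual dispatch hardware.

        #include <cstdint>
        #include <optional>
        #include <vector>

        using Group = std::vector<std::uint32_t>;       // a group of instructions

        struct IssueQueuePartition {
            std::vector<std::uint32_t> entries;
            void accept(const Group &g) { entries.insert(entries.end(), g.begin(), g.end()); }
        };

        struct DispatchUnit {
            IssueQueuePartition part0, part1;
            std::optional<Group> issue_queue_buffer;    // holds the overflow group for one cycle

            // Cycle 1: dispatch the first group to partition 0 and stash the
            // second group in the issue queue buffer.
            void cycle1(const Group &first, const Group &second) {
                part0.accept(first);
                issue_queue_buffer = second;
            }

            // Cycle 2: drain the buffered group into partition 0 while a third
            // group (from a second set of instructions) goes to partition 1.
            void cycle2(const Group &third) {
                if (issue_queue_buffer) {
                    part0.accept(*issue_queue_buffer);
                    issue_queue_buffer.reset();
                }
                part1.accept(third);
            }
        };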
  • Patent number: 10776275
    Abstract: A method comprises a cache manager receiving reference attributes associated with network data and selecting a replacement data location of a cache to store cache-line data associated with the network data. The replacement data location is selected based on the reference attributes and an order of reference states stored in a replacement stack of the cache. The stored reference states are associated with respective cached-data stored in the cache and based on reference attributes associated with respective cached-data. The reference states are stored in the replacement stack based on a set of the reference attributes and the stored reference states. In response to receiving reference attributes, the cache manager can modify a stored reference state, determine a second order of the state locations, and store a reference state in the replacement stack based on the second order. A system can comprise a network computing element having a cache, a cache manager, and a replacement stack.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 15, 2020
    Assignee: International Business Machines Corporation
    Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
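    A rough C++ sketch of a replacement stack ordered by reference states, in the spirit of the abstract above. The scoring function, the state fields, and the full re-sort used to produce the "second order" are assumptions for illustration only.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        struct ReferenceAttributes { bool demand_hit; bool prefetched; unsigned reuse_hint; };

        struct ReferenceState {
            std::size_t data_location;   // which cache data location this state describes
            unsigned score;              // derived from reference attributes
        };

        struct ReplacementStack {
            std::vector<ReferenceState> states;   // front = best eviction candidate

            static unsigned score_of(const ReferenceAttributes &a) {
                return (a.demand_hit ? 4u : 0u) + (a.prefetched ? 1u : 0u) + a.reuse_hint;
            }

            // Select the replacement data location: the lowest-ordered state.
            // (Assumes the stack is non-empty.)
            std::size_t select_victim() const { return states.front().data_location; }

            // On new reference attributes, modify the stored state for that location
            // and re-derive the order of the state locations.
            void update(std::size_t data_location, const ReferenceAttributes &attrs) {
                for (auto &s : states)
                    if (s.data_location == data_location) { s.score = score_of(attrs); break; }
                std::sort(states.begin(), states.end(),
                          [](const ReferenceState &a, const ReferenceState &b) {
                              return a.score < b.score;   // lowest score is evicted first
                          });
            }
        };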
  • Patent number: 10671539
    Abstract: A method comprises receiving input reference attributes from a data reference interface and selecting a replacement data location of a cache to store data. The replacement data location is selected based on the input reference attributes and reference states associated with cached-data stored in data locations of the cache and an order of state locations of a replacement stack storing the reference states. The reference states are based on reference attributes associated with the cached-data and can include a probability count. The order of state locations is based on the reference states and the reference attributes. In response to receiving some input reference attributes, reference states stored in the state locations can be modified and a second order of the state locations can be determined. A reference state can be stored in the replacement stack based on the second order.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: June 2, 2020
    Assignee: International Business Machines Corporation
    Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
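    One detail this abstract adds over the previous entry is a probability count carried in the reference state. Below is a minimal C++ sketch of one way such a count could behave; the counter width, decay rule, and placement heuristic are assumptions, not details taken from the patent.

        #include <cstdint>

        // Saturating counter kept in a reference state to estimate reuse probability.
        struct ProbabilityCount {
            std::uint8_t value = 0;
            static constexpr std::uint8_t MAX = 7;               // assumed 3-bit counter

            void on_reference()  { if (value < MAX) ++value; }   // reuse observed
            void on_aging_pass() { if (value > 0)  --value; }    // decays over time
        };

        // Higher reuse probability keeps the line further from the eviction end
        // of the replacement stack.
        unsigned stack_depth_hint(const ProbabilityCount &p, unsigned stack_size) {
            return (p.value * (stack_size - 1u)) / ProbabilityCount::MAX;
        }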
  • Publication number: 20200117607
    Abstract: A method comprises receiving input reference attributes from a data reference interface and selecting a replacement data location of a cache to store data. The replacement data location is selected based on the input reference attributes and reference states associated with cached-data stored in data locations of the cache and an order of state locations of a replacement stack storing the reference states. The reference states are based on reference attributes associated with the cached-data and can include a probability count. The order of state locations is based on the reference states and the reference attributes. In response to receiving some input reference attributes, reference states stored in the state locations can be modified and a second order of the state locations can be determined. A reference state can be stored in the replacement stack based on the second order.
    Type: Application
    Filed: October 15, 2018
    Publication date: April 16, 2020
    Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
  • Publication number: 20200117608
    Abstract: A cache comprises data locations in a storage medium and a set of reference states corresponding to the data locations and based on reference attributes associated with data stored in the data locations. The cache receives reference information associated with a data reference and selects a data location to store the data reference based on a reference state corresponding to the data location. The cache modifies reference states based on reference attributes associated with data references. A method of managing a cache includes receiving reference information associated with a data reference and selecting a data location in a storage medium to store data based on reference attributes associated with data stored in the selected data location. The method can include modifying reference states in response to receiving reference information. The cache and the method can include a count, based on reference attributes, in a reference state.
    Type: Application
    Filed: January 31, 2019
    Publication date: April 16, 2020
    Inventors: Brian W. Thompto, Bernard C. Drerup, Mohit S. Karve
  • Publication number: 20190163485
    Abstract: Aspects of the invention include buffered instruction dispatching to an issue queue. A non-limiting example includes dispatching from a dispatch unit of a processor a first group of instructions selected from a first plurality of instructions to a first issue queue partition of the processor in a first cycle. A second group of instructions selected from the first plurality of instructions is passed to an issue queue buffer of the processor in the first cycle. The second group of instructions is passed from the issue queue buffer to the first issue queue partition in a second cycle. A third group of instructions selected from a second plurality of instructions is dispatched to a second issue queue partition in the second cycle.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
  • Publication number: 20190163488
    Abstract: Aspects of the invention include tracking relative ages of instructions in a first-in-first-out (FIFO) issue queue of an out-of-order (OoO) processor. The FIFO issue queue is configured to add instructions to the issue queue in a sequential order and to remove instructions from the issue queue in any order including a non-sequential order. The tracking of relative ages of instructions includes maintaining a head pointer to a location of an oldest instruction in the issue queue and a tail pointer to a location of a last instruction added to the issue queue. It is determined periodically whether the tail pointer is pointing to a location that includes a valid instruction. The tail pointer is updated to point to a previous sequential location in the issue queue based at least in part on determining that the tail pointer is not pointing to a location that corresponds to a valid instruction.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
  • Publication number: 20190163489
    Abstract: Aspects of the invention include tracking relative ages of instructions in an issue queue of an OoO processor. The tracking includes grouping entries in the issue queue into a pool of blocks, each block containing two or more entries that are configured to be allocated and deallocated as a single unit, each entry configured to store an instruction. Blocks are selected in any order from the pool of blocks for allocation. The selected blocks are allocated and the relative ages of the allocated blocks are tracked based at least in part on the order in which the blocks are allocated. Each allocated block is configured as a first-in-first-out (FIFO) queue of entries, configured to add instructions to the block in a sequential order, and configured to remove instructions from the block in any order including a non-sequential order. The relative ages of instructions within each allocated block are tracked.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: Mohit S. Karve, Joel A. Silberman, Balaram Sinharoy
  • Publication number: 20170147493
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to prefetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on a number of confirmations observed in the stream. The first request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Doing so allows a memory controller to determine whether to drop at least the first request based on the confidence level and a memory resource utilization threshold.
    Type: Application
    Filed: November 23, 2015
    Publication date: May 25, 2017
    Inventors: Richard J. Eickemeyer, John B. Griswell, Jr., Mohit S. Karve