Patents by Inventor Debasish Chandra

Debasish Chandra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12175248
    Abstract: Disclosed techniques relate to re-use of speculative results from an incorrect execution path. In some embodiments, when a control transfer instruction is mispredicted, a load instruction may have been executed on the wrong path. In disclosed embodiments, result storage circuitry records information that indicates destination registers of speculatively-executed load instructions including a first load instruction. Control flow tracker circuitry may store information indicating a reconvergence point for the control transfer instruction.
    Type: Grant
    Filed: April 21, 2023
    Date of Patent: December 24, 2024
    Assignee: Apple Inc.
    Inventors: Yuan C. Chou, Deepankar Duggal, Debasish Chandra, Niket K. Choudhary, Richard F. Russo
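    A minimal sketch of the general idea in the abstract above, written as a toy Python model: result storage remembers wrong-path load results, a control flow tracker knows the reconvergence point, and a recorded result becomes a re-use candidate once fetch passes that point. All class and field names here are illustrative assumptions, not the patented circuitry.

      class ResultStorage:
          """Records destination registers and values of speculatively executed loads."""
          def __init__(self):
              self.entries = {}          # load PC -> (dest_reg, address, value)

          def record(self, pc, dest_reg, address, value):
              self.entries[pc] = (dest_reg, address, value)

          def lookup(self, pc, address):
              entry = self.entries.get(pc)
              # same load PC and same address: candidate for re-use
              return entry if entry and entry[1] == address else None

      class ControlFlowTracker:
          """Stores the reconvergence point for a mispredicted control transfer."""
          def __init__(self):
              self.reconvergence = {}    # branch PC -> reconvergence PC

          def set_reconvergence(self, branch_pc, reconv_pc):
              self.reconvergence[branch_pc] = reconv_pc

          def past_reconvergence(self, branch_pc, current_pc):
              return current_pc >= self.reconvergence.get(branch_pc, float("inf"))

      # A load at PC 0x40 executed on the wrong path is recorded; after the pipeline
      # redirects and fetch reaches the reconvergence point, the same load can re-use
      # the recorded value instead of re-executing.
      storage, tracker = ResultStorage(), ControlFlowTracker()
      tracker.set_reconvergence(branch_pc=0x20, reconv_pc=0x38)
      storage.record(pc=0x40, dest_reg=5, address=0x1000, value=42)
      if tracker.past_reconvergence(0x20, current_pc=0x40):
          print("re-usable wrong-path result:", storage.lookup(0x40, 0x1000))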
  • Patent number: 12159142
    Abstract: Techniques are disclosed relating to predicting values for load operations. In some embodiments, front-end circuitry is configured to predict values of load operations based on multiple value tagged geometric length predictor (VTAGE) prediction tables (based on program counter information and branch history information). Training circuitry may adjust multiple VTAGE learning tables based on completed load operations. Control circuitry may pre-compute access information (e.g., an index) for a VTAGE learning table for a load based on branch history information that is available to the front-end circuitry but unavailable to the training circuitry, store the pre-computed access information in storage circuitry, and provide it from that storage circuitry to the training circuitry to access the VTAGE learning table upon completion of the load. This may facilitate VTAGE training without pipelining the branch history information.
    Type: Grant
    Filed: May 2, 2023
    Date of Patent: December 3, 2024
    Assignee: Apple Inc.
    Inventors: Yuan C. Chou, Chang Xu, Deepankar Duggal, Debasish Chandra
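    The abstract above turns on pre-computing the learning-table index in the front end, where branch history exists, and replaying it at training time. Below is a minimal sketch of that flow under assumed table sizes and a toy index hash; none of the names or parameters come from the patent.

      TABLE_SIZE = 1024

      def vtage_index(pc, branch_history):
          # toy hash folding the PC with global branch history
          return (pc ^ branch_history ^ (branch_history >> 5)) % TABLE_SIZE

      class PrecomputedIndexStore:
          """Per in-flight load, remembers the index computed at prediction time."""
          def __init__(self):
              self.pending = {}                    # load tag -> learning-table index

          def save(self, load_tag, index):
              self.pending[load_tag] = index

          def pop(self, load_tag):
              return self.pending.pop(load_tag)

      class VtageLearningTable:
          def __init__(self):
              self.values = [None] * TABLE_SIZE
              self.confidence = [0] * TABLE_SIZE

          def train(self, index, actual_value):
              # bump confidence on a match, otherwise replace the value and reset
              if self.values[index] == actual_value:
                  self.confidence[index] = min(self.confidence[index] + 1, 7)
              else:
                  self.values[index] = actual_value
                  self.confidence[index] = 0

      store, table = PrecomputedIndexStore(), VtageLearningTable()
      # Front end at prediction time (branch history available):
      store.save(load_tag=17, index=vtage_index(pc=0x4000, branch_history=0b1011))
      # Training circuitry at load completion (no branch history needed):
      table.train(store.pop(17), actual_value=123)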
  • Publication number: 20240362027
    Abstract: Techniques are disclosed relating to load value prediction. In some embodiments, a processor includes load address prediction circuitry and load value prediction circuitry. Training circuitry may train loads in a given entry and may include a first entry configured to store first predicted load address information along with an indication of confidence that the predicted address information is correct, and a second entry configured to store first predicted load value information along with an indication of confidence that the predicted value information is correct (note that a given entry may be used for address or value prediction at different times). Control circuitry may, in response to an entry in the training circuitry reaching a threshold level of confidence, allocate a corresponding entry in either the load value prediction circuitry or the load address prediction circuitry.
    Type: Application
    Filed: July 5, 2024
    Publication date: October 31, 2024
    Inventors: Yuan C. Chou, Debasish Chandra, Mridul Agarwal, Haoyan Jia
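    As a rough illustration of the confidence-based promotion the abstract describes, the sketch below keeps separate address and value confidences in one training entry and allocates into whichever predictor crosses a threshold first. The threshold, the promotion policy, and every name are assumptions made for the example.

      PROMOTE_THRESHOLD = 4

      class TrainingEntry:
          def __init__(self, pc):
              self.pc = pc
              self.pred_addr, self.addr_conf = None, 0
              self.pred_value, self.value_conf = None, 0

          def train(self, addr, value):
              # confidence grows while the observed address/value keeps repeating
              self.addr_conf = self.addr_conf + 1 if addr == self.pred_addr else 0
              self.value_conf = self.value_conf + 1 if value == self.pred_value else 0
              self.pred_addr, self.pred_value = addr, value

      def maybe_promote(entry, addr_predictor, value_predictor):
          # allocate into whichever predictor's confidence crossed the threshold
          if entry.value_conf >= PROMOTE_THRESHOLD:
              value_predictor[entry.pc] = entry.pred_value
          elif entry.addr_conf >= PROMOTE_THRESHOLD:
              addr_predictor[entry.pc] = entry.pred_addr

      addr_pred, value_pred = {}, {}
      entry = TrainingEntry(pc=0x100)
      for _ in range(6):                 # the same load keeps returning value 7
          entry.train(addr=0x2000, value=7)
          maybe_promote(entry, addr_pred, value_pred)
      print(value_pred)                  # {256: 7} once confidence is high enough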
  • Publication number: 20240354111
    Abstract: Disclosed techniques relate to re-use of speculative results from an incorrect execution path. In some embodiments, when a first control transfer instruction is mispredicted, a second control transfer instruction may have been executed on the wrong path because of the misprediction. Result storage circuitry may record information indicating a determined direction for the second control transfer instruction. Control flow tracker circuitry may store, for the first control transfer instruction, information indicating a reconvergence point. Re-use control circuitry may track registers written by instructions prior to the reconvergence point, determine, based on the tracked registers, that the second control transfer instruction does not depend on data from any instruction between the first control transfer instruction and the reconvergence point, and use the recorded determined direction for the second control transfer instruction, notwithstanding the misprediction of the first control transfer instruction.
    Type: Application
    Filed: April 21, 2023
    Publication date: October 24, 2024
    Inventors: Yuan C. Chou, Deepankar Duggal, Debasish Chandra, Niket K. Choudhary, Richard F. Russo
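    The key check in the abstract above is a register-dependence test: the resolved direction of a branch seen on the wrong path may be re-used only if none of its source registers are written between the mispredicted branch and the reconvergence point. A minimal sketch of that test, with purely illustrative register numbers:

      def can_reuse_direction(regs_written_before_reconvergence, branch_src_regs):
          # safe to re-use only if the wrong-path branch reads none of the registers
          # produced by instructions that were skipped over by the misprediction
          return not (set(branch_src_regs) & set(regs_written_before_reconvergence))

      recorded_direction = {"pc": 0x58, "taken": True}   # resolved on the wrong path
      written_regs = {3, 7}    # written between the first branch and reconvergence
      if can_reuse_direction(written_regs, branch_src_regs=[5, 9]):
          print("re-use recorded outcome:", recorded_direction["taken"])
      else:
          print("re-predict the branch")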
  • Publication number: 20240354109
    Abstract: Disclosed techniques relate to re-use of speculative results from an incorrect execution path. In some embodiments, when a control transfer instruction is mispredicted, a load instruction may have been executed on the wrong path. In disclosed embodiments, result storage circuitry records information that indicates destination registers of speculatively-executed load instructions including a first load instruction. Control flow tracker circuitry may store information indicating a reconvergence point for the control transfer instruction.
    Type: Application
    Filed: April 21, 2023
    Publication date: October 24, 2024
    Inventors: Yuan C. Chou, Deepankar Duggal, Debasish Chandra, Niket K. Choudhary, Richard F. Russo
  • Patent number: 12067398
    Abstract: Techniques are disclosed relating to load value prediction. In some embodiments, a processor includes learning table circuitry that is shared for both address and value prediction. Loads may be trained for value prediction when they are eligible for both value and address prediction. Entries in the learning table may be promoted to an address prediction table or a load value prediction table for prediction, e.g., when they reach a threshold confidence level in the training table. In some embodiments, the learning table stores a hash of a predicted load value and control circuitry uses a probing load to retrieve the actual predicted load value for the value prediction table.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: August 20, 2024
    Assignee: Apple Inc.
    Inventors: Yuan C. Chou, Debasish Chandra, Mridul Agarwal, Haoyan Jia
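    The hash-plus-probe detail in this abstract can be sketched as follows: the learning table stores only a hash of the predicted value, and a probing load fetches the full value at promotion time, with the hash used as a consistency check. The hash function, the memory stand-in, and all names below are assumptions for illustration only.

      def value_hash(v):
          return (v * 0x9E3779B1) & 0xFFFF          # toy 16-bit hash

      memory = {0x3000: 0xDEADBEEF}                 # stand-in for the data cache

      learning_entry = {"pc": 0x200, "addr": 0x3000,
                        "value_hash": value_hash(0xDEADBEEF), "confidence": 5}

      def promote_with_probe(entry, value_table):
          # issue a probing load to recover the full value the hash was trained on
          probed = memory[entry["addr"]]
          if value_hash(probed) == entry["value_hash"]:
              value_table[entry["pc"]] = probed     # store the full value, not the hash
          return value_table

      print(promote_with_probe(learning_entry, {}))  # {512: 3735928559}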
  • Patent number: 10108548
    Abstract: In one aspect, a processor has a register file, a private Level 1 (L1) cache, and an interface to a shared memory hierarchy (e.g., a Level 2 (L2) cache and so on). The processor has a Load Store Unit (LSU) that handles decoded load and store instructions. The processor may support out-of-order and multi-threaded execution. As store instructions arrive at the LSU for processing, the LSU determines whether a counter, from a set of counters, is allocated to the cache line affected by each store. If not, the LSU allocates a counter; if so, the LSU updates the counter. Also, in response to a store instruction affecting a cache line that neighbors a cache line whose counter meets a criterion, the LSU characterizes that store instruction as one to be effected without obtaining ownership of the affected cache line, and provides that store to be serviced by an element of the shared memory hierarchy.
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: October 23, 2018
    Assignee: MIPS Tech, LLC
    Inventors: Ranjit J. Rozario, Era Nangia, Debasish Chandra, Ranganathan Sudhakar
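    A toy model of the per-line store counters described above: the LSU allocates or updates a counter for each stored-to cache line, and once a neighboring line's counter suggests streaming-store behavior, a new store is handed to the shared memory hierarchy without first acquiring ownership of its line. The line size, threshold, and class names are illustrative assumptions, not the patented design.

      LINE_SIZE = 64
      STREAM_THRESHOLD = 8        # counter value treated as "whole line was written"

      class LoadStoreUnit:
          def __init__(self):
              self.counters = {}  # cache line address -> number of stores observed

          def handle_store(self, addr):
              line = addr // LINE_SIZE * LINE_SIZE
              # allocate a counter for the line if needed, otherwise update it
              self.counters[line] = self.counters.get(line, 0) + 1
              neighbor = line - LINE_SIZE
              if self.counters.get(neighbor, 0) >= STREAM_THRESHOLD:
                  return "send to shared hierarchy without obtaining ownership"
              return "normal store (acquire ownership of the line)"

      lsu = LoadStoreUnit()
      for offset in range(0, 64, 8):        # eight stores fill one 64-byte line
          lsu.handle_store(0x1000 + offset)
      print(lsu.handle_store(0x1040))       # next line takes the no-ownership path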
  • Patent number: 9940168
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Grant
    Filed: January 12, 2017
    Date of Patent: April 10, 2018
    Assignee: MIPS Tech, LLC
    Inventor: Debasish Chandra
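    The same scheduling idea appears in several entries on this page: an operation that needs a shared resource twice reserves cycle X and cycle X+N up front, and any other operation aimed at cycle X+N is rescheduled. Below is a minimal sketch under an assumed N and a trivial slide-later policy; it is not the patented scheduler.

      N = 3

      class SharedResourceSchedule:
          def __init__(self):
              self.busy = {}                   # cycle -> operation using the resource

          def reserve_dual_use(self, op, cycle_x):
              # first aspect of the operation at X, second aspect at X + N
              self.busy[cycle_x] = op
              self.busy[cycle_x + N] = op

          def schedule(self, op, cycle):
              while cycle in self.busy:        # slide later until a free cycle
                  cycle += 1
              self.busy[cycle] = op
              return cycle

      sched = SharedResourceSchedule()
      sched.reserve_dual_use("op_A", cycle_x=10)     # op_A holds cycles 10 and 13
      landed = sched.schedule("op_B", 13)            # op_B is pushed back to cycle 14
      print(sorted(sched.busy.items()), landed)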
  • Publication number: 20170123853
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Application
    Filed: January 12, 2017
    Publication date: May 4, 2017
    Inventor: Debasish Chandra
  • Patent number: 9563476
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: February 7, 2017
    Assignee: Imagination Technologies, LLC
    Inventor: Debasish Chandra
  • Publication number: 20160055083
    Abstract: In one aspect, a processor has a register file, a private Level 1 (L1) cache, and an interface to a shared memory hierarchy (e.g., a Level 2 (L2) cache and so on). The processor has a Load Store Unit (LSU) that handles decoded load and store instructions. The processor may support out-of-order and multi-threaded execution. As store instructions arrive at the LSU for processing, the LSU determines whether a counter, from a set of counters, is allocated to the cache line affected by each store. If not, the LSU allocates a counter; if so, the LSU updates the counter. Also, in response to a store instruction affecting a cache line that neighbors a cache line whose counter meets a criterion, the LSU characterizes that store instruction as one to be effected without obtaining ownership of the affected cache line, and provides that store to be serviced by an element of the shared memory hierarchy.
    Type: Application
    Filed: August 18, 2015
    Publication date: February 25, 2016
    Inventors: Ranjit J. Rozario, Era Nangia, Debasish Chandra, Ranganathan Sudhakar
  • Publication number: 20150370605
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Application
    Filed: August 27, 2015
    Publication date: December 24, 2015
    Inventor: Debasish Chandra
  • Patent number: 9135067
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Grant
    Filed: February 28, 2013
    Date of Patent: September 15, 2015
    Assignee: MIPS Technologies, Inc.
    Inventor: Debasish Chandra
  • Publication number: 20140258697
    Abstract: A processor includes a multiple stage pipeline with a scheduler with a wakeup block and select logic. The wakeup block is configured to wake, in a first cycle, all instructions dependent upon a first selected instruction to form a wake instruction set. In a second cycle, the wakeup block wakes instructions dependent upon the wake instruction set to augment the wake instruction set. The select logic selects instructions from the wake instruction set based upon program order.
    Type: Application
    Filed: March 7, 2013
    Publication date: September 11, 2014
    Applicant: MIPS TECHNOLOGIES, INC.
    Inventors: Ranganathan Sudhakar, Debasish Chandra, Qian Wang
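    The wakeup/select flow in the abstract above can be sketched in a few lines: the first cycle wakes every instruction directly dependent on the selected instruction, the second cycle adds the dependents of that set, and select logic then picks from the woken set in program order. The dependence encoding and issue width below are assumptions for the example, not the patented logic.

      # instruction id -> ids of instructions it depends on (program order = id order)
      deps = {0: [], 1: [0], 2: [1], 3: [0], 4: [2, 3]}

      def dependents_of(wake_set):
          return {i for i, srcs in deps.items() if any(s in wake_set for s in srcs)}

      def two_cycle_wakeup(selected):
          wake = dependents_of({selected})     # cycle 1: direct dependents
          wake |= dependents_of(wake)          # cycle 2: dependents of the wake set
          return wake

      def select_in_program_order(wake_set, width=2):
          return sorted(wake_set)[:width]      # oldest-first selection

      woken = two_cycle_wakeup(selected=0)     # instruction 0 was just selected
      print(woken, "->", select_in_program_order(woken))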
  • Publication number: 20140245317
    Abstract: Methods and systems are provided that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
    Type: Application
    Filed: February 28, 2013
    Publication date: August 28, 2014
    Applicant: MIPS Technologies, Inc.
    Inventor: Debasish Chandra
  • Patent number: 8627044
    Abstract: The described embodiments include a processor that determines instructions that can be issued based on unresolved data dependencies. In an issue unit in the processor, the processor keeps a record of each instruction that is directly or indirectly dependent on a base instruction. Upon determining that the base instruction has been deferred, the processor monitors instructions that are being issued from an issue queue to an execution unit for execution. Upon determining that an instruction from the record has reached a head of the issue queue, the processor immediately issues the instruction from the issue queue.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: January 7, 2014
    Assignee: Oracle International Corporation
    Inventors: Shailender Chaudhry, Richard Thuy Van, Robert E. Cypher, Debasish Chandra
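    A simplified model of the issue-queue behavior described above: the processor records everything directly or indirectly dependent on a deferred base instruction, and when one of those instructions reaches the head of the issue queue it is issued immediately instead of being left to block the queue. The queue contents and readiness flags below are illustrative assumptions, not the actual design.

      from collections import deque

      def dependence_closure(base, deps):
          """All instructions directly or indirectly dependent on `base`."""
          record, changed = {base}, True
          while changed:
              changed = False
              for insn, srcs in deps.items():
                  if insn not in record and any(s in record for s in srcs):
                      record.add(insn)
                      changed = True
          return record - {base}

      deps = {10: [], 11: [10], 12: [11], 13: []}   # 11 and 12 depend on deferred 10
      record = dependence_closure(10, deps)         # {11, 12}

      ready = {11: False, 12: False, 13: True}      # dependents of 10 lack operands
      issue_queue, log = deque([11, 13, 12]), []
      while issue_queue:
          head = issue_queue[0]
          if ready[head] or head in record:
              # recorded dependents issue immediately even though the deferred base
              # has not produced their source value yet
              log.append((issue_queue.popleft(),
                          "normal" if ready[head] else "issued-while-deferred"))
          else:
              break                                 # would otherwise block the queue
      print(log)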
  • Publication number: 20120089819
    Abstract: The described embodiments include a processor that determines instructions that can be issued based on unresolved data dependencies. In an issue unit in the processor, the processor keeps a record of each instruction that is directly or indirectly dependent on a base instruction. Upon determining that the base instruction has been deferred, the processor monitors instructions that are being issued from an issue queue to an execution unit for execution. Upon determining that an instruction from the record has reached a head of the issue queue, the processor immediately issues the instruction from the issue queue.
    Type: Application
    Filed: October 6, 2010
    Publication date: April 12, 2012
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Shailender Chaudhry, Richard Thuy Van, Robert E. Cypher, Debasish Chandra