Patents by Inventor Ramkumar Jayaseelan

Ramkumar Jayaseelan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10223124
    Abstract: A processor employs one or more branch predictors to issue branch predictions for each thread executing at an instruction pipeline. Based on the branch predictions, the processor determines a branch prediction confidence for each of the executing threads, whereby a lower confidence level indicates a smaller likelihood that the corresponding thread will actually take the predicted branch. Because speculative execution of an untaken branch wastes resources of the instruction pipeline, the processor prioritizes threads associated with a higher confidence level for selection at the stages of the instruction pipeline.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: March 5, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ramkumar Jayaseelan, Ravindra N. Bhargava
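The selection policy in the abstract above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the per-thread confidence values, the `ready` flag, and the counter range are all assumptions.

```python
# Hypothetical sketch of confidence-based thread selection: at each pipeline
# stage, favor the ready thread whose branch predictions carry the highest
# confidence, since low-confidence threads are more likely to waste pipeline
# resources on a mispredicted path.

def select_thread(threads):
    """Pick the ready thread with the highest prediction confidence."""
    ready = [t for t in threads if t["ready"]]
    if not ready:
        return None
    return max(ready, key=lambda t: t["confidence"])

threads = [
    {"id": 0, "ready": True, "confidence": 3},   # e.g. a saturating counter
    {"id": 1, "ready": True, "confidence": 7},
    {"id": 2, "ready": False, "confidence": 9},  # stalled, not eligible
]
print(select_thread(threads)["id"])  # 1: highest-confidence ready thread
```

A real core would apply this per stage (fetch, dispatch) with its own tie-breaking; the sketch only shows the priority rule.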
  • Publication number: 20180039518
    Abstract: In an illustrative example, a system includes a resource and a first processor. The first processor is configured to access the resource based on a first physical address space and to generate a request for access to the resource. The request has a first format. The system further includes a second processor configured to access the resource based on a second physical address space. The system also includes a device coupled to the resource and to the first processor. The device is configured to receive the request, to generate a message having a second format based on the request, to send the message to the resource, and to provide a reply to the request to the first processor.
    Type: Application
    Filed: August 2, 2016
    Publication date: February 8, 2018
    Inventors: Ramkumar Jayaseelan, Sadagopan Srinivasan, Thomas Andrew Hartin
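The translation device described above can be modeled as a small relay: it accepts a request in the first processor's format and address space, rewrites it into the resource's native format, and returns the reply. The field names, the fixed address offset, and the stand-in resource below are all assumptions for illustration.

```python
# Illustrative sketch of the format/address-space translation device.

BASE_OFFSET = 0x8000_0000  # assumed offset between the two physical address spaces

def translate_and_forward(request, resource):
    """Convert a first-format request into a second-format message and relay it."""
    message = {
        "addr": request["phys_addr"] - BASE_OFFSET,  # remap into the resource's space
        "op": request["op"],
    }
    reply = resource(message)  # deliver the second-format message to the resource
    return {"req_id": request["req_id"], "data": reply}

def resource(message):  # stand-in for the shared resource
    return 0xABCD if message["op"] == "read" else None

req = {"req_id": 7, "phys_addr": 0x8000_1000, "op": "read"}
print(translate_and_forward(req, resource))
```

The point of the device is that the first processor never needs to know the resource's format or address space; the translation is transparent to it.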
  • Publication number: 20170337084
    Abstract: An apparatus includes a set of one or more processing cores, a thread dispatcher, and an event register of a first compute unit. The set of one or more processing cores is configured to execute a set of threads. The thread dispatcher is coupled to the set of one or more processing cores and is configured to select threads of the set of threads for execution by the set of one or more processing cores. The thread dispatcher is further configured to refrain from selecting a first thread of the set of threads for execution in response to a first value of one or more bits of the event register and to select the first thread for execution in response to a second value of the one or more bits.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: Ramkumar Jayaseelan, Raghuram S. Tupuri, Sadagopan Srinivasan, Thomas Andrew Hartin
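The event-register gating above can be sketched as one bit per waiting thread: the dispatcher skips a thread while its bit holds one value and selects it once the bit flips. The bit-per-thread layout and polarity here are assumptions, not the claimed encoding.

```python
# Minimal sketch of event-register gated thread dispatch.

class ThreadDispatcher:
    def __init__(self):
        self.event_register = 0  # one gating bit per waiting thread (assumed layout)

    def eligible(self, thread_id):
        """A waiting thread is dispatchable only when its event bit is set."""
        return (self.event_register >> thread_id) & 1 == 1

d = ThreadDispatcher()
print(d.eligible(0))        # False: bit clear, dispatcher refrains from selecting
d.event_register |= 1 << 0  # the awaited event fires
print(d.eligible(0))        # True: thread becomes selectable again
```

This lets a thread wait on an external event without spinning on a core.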
  • Publication number: 20160077565
    Abstract: A processing device includes one or more queues to convey data between a producing processor unit in a first timing domain and a consuming processor unit in a second timing domain that is asynchronous with the first timing domain. A system management unit configures a first operating frequency of the producing processor unit and a second operating frequency of the consuming processor unit based on a power constraint for the processing device and a target size of the one or more queues.
    Type: Application
    Filed: September 17, 2014
    Publication date: March 17, 2016
    Inventor: Ramkumar Jayaseelan
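The management policy above can be illustrated as a search over frequency pairs: keep the producer and consumer rates matched (so queue occupancy stays near its target) while the combined power fits the budget. The quadratic power model, the candidate list, and the exact-match rate condition are simplifying assumptions.

```python
# Toy sketch of frequency selection under a power constraint and a queue target.

def pick_frequencies(candidates, power_budget):
    """Return the fastest matched (f_prod, f_cons) pair within the power budget."""
    best = None
    for f_prod in candidates:
        for f_cons in candidates:
            power = f_prod**2 + f_cons**2  # assumed simple power-vs-frequency model
            # matched rates keep the queue from draining or overflowing
            if f_prod == f_cons and power <= power_budget:
                if best is None or f_prod > best[0]:
                    best = (f_prod, f_cons)
    return best

print(pick_frequencies([1, 2, 3, 4], power_budget=20))  # (3, 3): 9 + 9 = 18 <= 20
```

A real system management unit would also weigh per-unit workloads rather than requiring identical frequencies; the sketch only shows the constraint structure.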
  • Patent number: 9223705
    Abstract: A processor employs a prefetch prediction module that predicts, for each prefetch request, whether the prefetch request is likely to be satisfied from (“hit”) the cache. The arbitration priority of prefetch requests that are predicted to hit the cache is reduced relative to demand requests or other prefetch requests that are predicted to miss in the cache. Accordingly, an arbiter for the cache is less likely to select prefetch requests that hit the cache, thereby improving processor throughput.
    Type: Grant
    Filed: April 1, 2013
    Date of Patent: December 29, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ramkumar Jayaseelan, John Kalamatianos
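The arbitration rule in the abstract above reduces to a priority ordering: demand requests first, then predicted-miss prefetches, then predicted-hit prefetches (whose data is likely already in the cache). The numeric priority values below are illustrative only.

```python
# Sketch of hit-prediction-aware prefetch arbitration.

def priority(req):
    if req["kind"] == "demand":
        return 2
    # A prefetch predicted to hit would fetch data already present,
    # so the arbiter should prefer other work.
    return 0 if req["predicted_hit"] else 1

reqs = [
    {"id": "pf_hit", "kind": "prefetch", "predicted_hit": True},
    {"id": "pf_miss", "kind": "prefetch", "predicted_hit": False},
    {"id": "demand", "kind": "demand", "predicted_hit": False},
]
winner = max(reqs, key=priority)
print(winner["id"])  # demand: it wins arbitration over both prefetches
```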
  • Patent number: 8909866
    Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining that the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the prefetch request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
    Type: Grant
    Filed: November 6, 2012
    Date of Patent: December 9, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan
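The MAB-fullness rule above can be sketched as a demotion loop: each cache level owns a quota of miss address buffer slots, and when a level's slots are over a threshold, the incoming prefetch is pushed down to the next lower level instead. The slot counts and the 75% threshold are assumptions for illustration.

```python
# Sketch of MAB-fullness-based prefetch demotion across cache levels.

THRESHOLD = 0.75  # assumed fraction of a level's MAB slots considered "full"

def target_level(level, used, quota):
    """Return the cache level that should service the prefetch."""
    while level < len(quota) - 1 and used[level] / quota[level] > THRESHOLD:
        level += 1  # demote the prefetch to the next lower level (L1 -> L2 -> ...)
    return level

quota = [4, 8, 16]  # MAB slots assigned to L1, L2, L3 (illustrative)
used = [4, 5, 2]    # L1's slots are exhausted, L2 has headroom
print(target_level(0, used, quota))  # 1: the prefetch is transferred from L1 to L2
```

The abstract's second mechanism, demotion by prefetch confidence, would plug into the same loop with a confidence comparison instead of a fullness check.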
  • Publication number: 20140297965
    Abstract: A processor employs a prefetch prediction module that predicts, for each prefetch request, whether the prefetch request is likely to be satisfied from (“hit”) the cache. The arbitration priority of prefetch requests that are predicted to hit the cache is reduced relative to demand requests or other prefetch requests that are predicted to miss in the cache. Accordingly, an arbiter for the cache is less likely to select prefetch requests that hit the cache, thereby improving processor throughput.
    Type: Application
    Filed: April 1, 2013
    Publication date: October 2, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Ramkumar Jayaseelan, John Kalamatianos
  • Publication number: 20140297996
    Abstract: A processor includes storage elements to store a first and second value, as well as a plurality of hash units coupled to the storage elements. Each hash unit performs a hash operation using the first value and the second value to generate a corresponding hash result value. The processor further includes selection logic to select a hash result value from the hash result values generated by the plurality of hash units responsive to a selection input generated from another hash operation performed using the first value and the second value. A method includes predicting whether a branch instruction is taken based on a prediction value stored at an entry of a branch prediction table indexed by an index value selected from a plurality of values concurrently generated from an address value of the branch instruction and a branch history value representing a history of branch directions at the processor.
    Type: Application
    Filed: April 1, 2013
    Publication date: October 2, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Ramkumar Jayaseelan
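The indexing scheme above can be modeled with a few parallel hash functions plus a selector hash over the same inputs. The particular hash functions, table size, and selector formula below are stand-ins, not the patented ones.

```python
# Toy model of multi-hash branch-prediction-table indexing: several hash
# units concurrently combine the branch address with the branch history,
# and a separate hash of the same two values selects which result is used.

TABLE_BITS = 10
MASK = (1 << TABLE_BITS) - 1

def concurrent_hashes(addr, history):
    """Hash results produced in parallel by the hash units (illustrative)."""
    return [
        (addr ^ history) & MASK,
        (addr + history) & MASK,
        (addr ^ (history << 3)) & MASK,
    ]

def table_index(addr, history):
    results = concurrent_hashes(addr, history)
    selector = (addr * 31 + history) % len(results)  # the selecting hash
    return results[selector]

idx = table_index(addr=0x4F21, history=0b1011010)
print(0 <= idx <= MASK)  # True: the selected result fits the prediction table
```

Computing the candidates concurrently keeps the selection off the critical path: the slower selector hash only has to pick among results that are already available.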
  • Publication number: 20140201507
    Abstract: A processor employs one or more branch predictors to issue branch predictions for each thread executing at an instruction pipeline. Based on the branch predictions, the processor determines a branch prediction confidence for each of the executing threads, whereby a lower confidence level indicates a smaller likelihood that the corresponding thread will actually take the predicted branch. Because speculative execution of an untaken branch wastes resources of the instruction pipeline, the processor prioritizes threads associated with a higher confidence level for selection at the stages of the instruction pipeline.
    Type: Application
    Filed: January 11, 2013
    Publication date: July 17, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Ramkumar Jayaseelan, Ravindra N. Bhargava
  • Publication number: 20140129772
    Abstract: A processor transfers prefetch requests from their targeted cache to another cache in a memory hierarchy based on a fullness of a miss address buffer (MAB) or based on confidence levels of the prefetch requests. Each cache in the memory hierarchy is assigned a number of slots at the MAB. In response to determining that the fullness of the slots assigned to a cache is above a threshold when a prefetch request to the cache is received, the processor transfers the prefetch request to the next lower level cache in the memory hierarchy. In response, the data targeted by the prefetch request is prefetched to the next lower level cache in the memory hierarchy, and is therefore available for subsequent provision to the cache. In addition, the processor can transfer a prefetch request to lower level caches based on a confidence level of a prefetch request.
    Type: Application
    Filed: November 6, 2012
    Publication date: May 8, 2014
    Applicant: Advanced Micro Devices, Inc.
    Inventors: John Kalamatianos, Ravindra Nath Bhargava, Ramkumar Jayaseelan