Patents by Inventor Aneesh Aggarwal

Aneesh Aggarwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104022
    Abstract: An example of an apparatus may include a first cache organized as two or more portions, a second cache, and circuitry coupled to the first cache and the second cache to determine a designated portion allocation for data transferred from the first cache to the second cache, and track the designated portion allocation for the data transferred from the first cache to the second cache. Other examples are disclosed and claimed.
    Type: Application
    Filed: September 27, 2022
    Publication date: March 28, 2024
    Applicant: Intel Corporation
    Inventors: Aneesh Aggarwal, Georgii Tkachuk, Subhiksha Ravisundar, Youngsoo Choi, Niall McDonnell
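    The abstract above describes circuitry that remembers which portion of the first cache a line was allocated in when that line is transferred to the second cache. The Python sketch below is a minimal, hypothetical model of that bookkeeping (class and method names such as SecondCache.install are illustrative and not taken from the patent): evictions carry their designated portion into the second cache so a later refill can reinstall the line in the same portion.
      # Minimal sketch (illustrative names, not from the patent text): a two-portion
      # first cache whose evictions are installed in a second cache along with a
      # tag recording which portion the line came from.

      class SecondCache:
          def __init__(self):
              self.lines = {}            # address -> (data, designated_portion)

          def install(self, address, data, designated_portion):
              # Track the portion allocation for data transferred from the first cache.
              self.lines[address] = (data, designated_portion)

          def lookup(self, address):
              return self.lines.get(address)   # None on miss


      class FirstCache:
          def __init__(self, second_cache, lines_per_portion=4):
              self.portions = [dict(), dict()]   # two portions, each address -> data
              self.capacity = lines_per_portion
              self.second = second_cache

          def fill(self, address, data, portion):
              target = self.portions[portion]
              if len(target) >= self.capacity:
                  # Evict a victim and hand it to the second cache, recording
                  # which portion it was allocated in.
                  victim_addr, victim_data = target.popitem()
                  self.second.install(victim_addr, victim_data, portion)
              target[address] = data

          def refill_from_second(self, address):
              hit = self.second.lookup(address)
              if hit is not None:
                  data, portion = hit
                  # Reinstall the line into its previously designated portion.
                  self.fill(address, data, portion)
                  return data
              return None


      if __name__ == "__main__":
          l2 = SecondCache()
          l1 = FirstCache(l2, lines_per_portion=1)
          l1.fill(0x100, "A", portion=0)
          l1.fill(0x140, "B", portion=0)      # evicts 0x100 into the second cache
          print(l2.lookup(0x100))             # ('A', 0) -- portion allocation tracked
          print(l1.refill_from_second(0x100)) # 'A', reinstalled in portion 0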
  • Patent number: 9552032
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes instruction memory and a branch prediction unit. The branch prediction unit is configured to use information from the instruction memory to selectively power up the branch prediction unit from a powered-down state when fetched instruction data includes a branch instruction and maintain the branch prediction unit in the powered-down state when the fetched instruction data does not include a branch instruction in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 24, 2017
    Assignee: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck, Paul Wasson
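    The abstract above gates branch-predictor power on a per-fetch hint taken from the instruction memory. The sketch below is a simplified, assumed model (the has_branch hint and all names are illustrative, not from the patent): the predictor is powered up only when the fetched block is marked as containing a branch, and otherwise fetch proceeds sequentially with the predictor powered down.
      # Hedged sketch: gate branch-predictor lookups on a per-fetch-block
      # "contains a branch" hint assumed to be stored alongside the instruction memory.

      class BranchPredictionUnit:
          def __init__(self):
              self.powered_up = False

          def power_up(self):
              self.powered_up = True

          def power_down(self):
              self.powered_up = False

          def predict(self, fetch_address):
              assert self.powered_up, "BPU must be powered up to make a prediction"
              return fetch_address + 16          # placeholder prediction


      def fetch_block(instruction_memory, address, bpu):
          data, has_branch = instruction_memory[address]
          if has_branch:
              bpu.power_up()                     # only pay for prediction when needed
              next_address = bpu.predict(address)
          else:
              bpu.power_down()                   # keep the BPU dark on branch-free blocks
              next_address = address + 16        # sequential fetch
          return data, next_address


      if __name__ == "__main__":
          imem = {0x00: ("add; add; add; add", False),
                  0x10: ("add; beq; add; add", True)}
          bpu = BranchPredictionUnit()
          _, nxt = fetch_block(imem, 0x00, bpu)  # BPU stays powered down
          print(bpu.powered_up, hex(nxt))
          _, nxt = fetch_block(imem, 0x10, bpu)  # branch present -> BPU powered up
          print(bpu.powered_up, hex(nxt))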
  • Patent number: 9547358
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes a branch prediction unit. The branch prediction unit is configured to track the presence of branches in instruction data that is fetched from an instruction memory after a redirection at the target of a predicted taken branch. The branch prediction unit is selectively powered up from a powered-down state when the fetched instruction data includes a branch instruction and is maintained in the powered-down state when the fetched instruction data does not include a branch instruction, in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 17, 2017
    Assignee: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Paul Wasson
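    This entry differs from the one above in that branch presence is tracked for instruction data fetched after a redirection to the target of a predicted taken branch. The sketch below is a hypothetical illustration of that idea (the per-target table and its defaults are assumptions, not the patented design): the recorded history for a redirect target decides whether the branch prediction unit needs to be powered up the next time fetch is redirected there.
      # Illustrative sketch: remember, per redirect target, whether the instruction
      # data fetched after that target contained a branch, and use that record to
      # gate branch-predictor power on the next redirect to the same target.

      class RedirectBranchTracker:
          def __init__(self):
              self.seen_branch_after = {}        # redirect target -> bool

          def record(self, target, fetched_has_branch):
              self.seen_branch_after[target] = fetched_has_branch

          def expect_branch(self, target):
              # Default to True (power up) when there is no history for this target.
              return self.seen_branch_after.get(target, True)


      def on_redirect(target, fetched_has_branch, tracker):
          """Decide whether the BPU should be powered up after redirecting fetch to target."""
          power_up = tracker.expect_branch(target)
          tracker.record(target, fetched_has_branch)
          return power_up


      if __name__ == "__main__":
          tracker = RedirectBranchTracker()
          print(on_redirect(0x200, fetched_has_branch=False, tracker=tracker))  # True: no history yet
          print(on_redirect(0x200, fetched_has_branch=False, tracker=tracker))  # False: no branch seen last time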
  • Patent number: 9396117
    Abstract: In one embodiment, a method for controlling an instruction cache including a least-recently-used bits array, a tag array, and a data array includes looking up, in the least-recently-used bits array, the least-recently-used bits for each of a plurality of cacheline sets in the instruction cache; determining a most-recently-used way in a designated cacheline set of the plurality of cacheline sets based on the least-recently-used bits for the designated cacheline set; looking up, in the tag array, tags for one or more ways in the designated cacheline set; looking up, in the data array, data stored in the most-recently-used way in the designated cacheline set; and, if there is a cache hit in the most-recently-used way, retrieving the data stored in the most-recently-used way from the data array.
    Type: Grant
    Filed: January 9, 2012
    Date of Patent: July 19, 2016
    Assignee: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck
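    The abstract above describes reading only the most-recently-used way's data alongside the tag lookup, which avoids reading every way of the set on a likely hit. The Python model below is an illustrative sketch under simplified assumptions (modulo set indexing, two ways, a single MRU pointer standing in for the LRU bits; all names are made up): a hit in the MRU way returns data without probing the other ways, and only on an MRU miss are the remaining ways checked.
      # Sketch under simplified assumptions: a set-associative instruction cache that
      # reads only the most-recently-used way's data alongside the tag compare.

      class MRUFirstICache:
          def __init__(self, num_sets=4, num_ways=2):
              self.num_sets = num_sets
              self.num_ways = num_ways
              self.tags = [[None] * num_ways for _ in range(num_sets)]
              self.data = [[None] * num_ways for _ in range(num_sets)]
              self.mru_way = [0] * num_sets      # stands in for the LRU-bits array

          def fill(self, address, line):
              set_idx = address % self.num_sets
              tag = address // self.num_sets
              way = (self.mru_way[set_idx] + 1) % self.num_ways   # replace a non-MRU way
              self.tags[set_idx][way] = tag
              self.data[set_idx][way] = line
              self.mru_way[set_idx] = way

          def read(self, address):
              set_idx = address % self.num_sets
              tag = address // self.num_sets
              mru = self.mru_way[set_idx]
              # Read tags for the set, but only the MRU way's data array entry.
              if self.tags[set_idx][mru] == tag:
                  return self.data[set_idx][mru]            # low-power hit in the MRU way
              for way in range(self.num_ways):              # fall back to the other ways
                  if self.tags[set_idx][way] == tag:
                      self.mru_way[set_idx] = way
                      return self.data[set_idx][way]
              return None                                   # miss


      if __name__ == "__main__":
          cache = MRUFirstICache()
          cache.fill(0x12, "fetch group A")
          print(cache.read(0x12))    # hit in the MRU way without probing other ways
          print(cache.read(0x34))    # miss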
  • Patent number: 9378127
    Abstract: Mechanisms for predicting whether a memory access may be a page hit or a page miss and applying different page policies (e.g., an open page policy or a close page policy) based on the prediction are disclosed. A counter may be used to determine a hit rate (e.g., a percentage or a ratio of the number of memory accesses that are page hits). The processing device may apply different page policies based on the hit rate. A memory access history (that includes data indicating a sequence or list of memory accesses) may be used to identify a counter from a plurality of counters. The processing device may apply different page policies based on the value of the counter (e.g., based on whether the counter is greater than a threshold).
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: June 28, 2016
    Assignee: Intel Corporation
    Inventors: Aneesh Aggarwal, Tameesh Suri
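    The abstract above describes selecting between an open-page and a close-page memory policy using counters indexed by a short memory-access history. The sketch below is a minimal model with assumed parameters (history length, counter width, and threshold are illustrative choices, not values from the patent): page hits and misses train the counter selected by the recent access pattern, and that counter's value picks the policy.
      # Minimal sketch of counter-based page-policy selection: a history of recent
      # hit/miss outcomes indexes a table of counters, and the chosen counter decides
      # between open-page and close-page policies.

      from collections import deque

      HISTORY_LEN = 4          # assumed history length
      NUM_COUNTERS = 16
      THRESHOLD = 8            # assumed threshold for choosing the open-page policy
      COUNTER_MAX = 15

      class PagePolicyPredictor:
          def __init__(self):
              self.history = deque(maxlen=HISTORY_LEN)    # 1 = page hit, 0 = page miss
              self.counters = [COUNTER_MAX // 2] * NUM_COUNTERS

          def _index(self):
              # Fold the recent hit/miss pattern into a counter index.
              idx = 0
              for outcome in self.history:
                  idx = ((idx << 1) | outcome) % NUM_COUNTERS
              return idx

          def choose_policy(self):
              counter = self.counters[self._index()]
              return "open_page" if counter >= THRESHOLD else "close_page"

          def update(self, was_page_hit):
              idx = self._index()
              if was_page_hit:
                  self.counters[idx] = min(COUNTER_MAX, self.counters[idx] + 1)
              else:
                  self.counters[idx] = max(0, self.counters[idx] - 1)
              self.history.append(1 if was_page_hit else 0)


      if __name__ == "__main__":
          predictor = PagePolicyPredictor()
          for hit in [True, True, True, True, False, True, True, True]:
              predictor.update(hit)
          print(predictor.choose_policy())   # "open_page" after a mostly-hit access stream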
  • Publication number: 20140379987
    Abstract: Mechanisms for predicting whether a memory access may be a page hit or a page miss and applying different page policies (e.g., an open page policy or a close page policy) based on the prediction are disclosed. A counter may be used to determine a hit rate (e.g., a percentage or a ratio of the number of memory accesses that are page hits). The processing device may apply different page policies based on the hit rate. A memory access history (that includes data indicating a sequence or list of memory accesses) may be used to identify a counter from a plurality of counters. The processing device may apply different page policies based on the value of the counter (e.g., based on whether the counter is greater than a threshold).
    Type: Application
    Filed: June 21, 2013
    Publication date: December 25, 2014
    Inventors: Aneesh Aggarwal, Tameesh Suri
  • Publication number: 20130290676
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes a branch prediction unit. The branch prediction unit is configured to track the presence of branches in instruction data that is fetched from an instruction memory after a redirection at the target of a predicted taken branch. The branch prediction unit is selectively powered up from a powered-down state when the fetched instruction data includes a branch instruction and is maintained in the powered-down state when the fetched instruction data does not include a branch instruction, in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Applicant: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Paul Wasson
  • Publication number: 20130290640
    Abstract: In one embodiment, a microprocessor is provided. The microprocessor includes instruction memory and a branch prediction unit. The branch prediction unit is configured to use information from the instruction memory to selectively power up the branch prediction unit from a powered-down state when fetched instruction data includes a branch instruction and maintain the branch prediction unit in the powered-down state when the fetched instruction data does not include a branch instruction in order to reduce power consumption of the microprocessor during instruction fetch operations.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Applicant: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck, Paul Wasson
  • Publication number: 20130179640
    Abstract: In one embodiment, a method for controlling an instruction cache including a least-recently-used bits array, a tag array, and a data array includes looking up, in the least-recently-used bits array, the least-recently-used bits for each of a plurality of cacheline sets in the instruction cache; determining a most-recently-used way in a designated cacheline set of the plurality of cacheline sets based on the least-recently-used bits for the designated cacheline set; looking up, in the tag array, tags for one or more ways in the designated cacheline set; looking up, in the data array, data stored in the most-recently-used way in the designated cacheline set; and, if there is a cache hit in the most-recently-used way, retrieving the data stored in the most-recently-used way from the data array.
    Type: Application
    Filed: January 9, 2012
    Publication date: July 11, 2013
    Applicant: NVIDIA Corporation
    Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck
  • Publication number: 20030237079
    Abstract: A system and method that establishes a list of one or more possible field pairs, each comprising an array field and an integer field of an object included in a computer program. A portion of the computer program is then scanned for references to possible field pairs included in the list. Each possible field pair corresponding to an invalid combination of references is removed from the list. An invalid combination of references precludes confirmation of an invariant relationship for a given possible field pair. The field pairs remaining on the list after this removal process are considered actual field pairs. Next, the invariant relationship of the field pairs remaining on the list is confirmed. Machine code is then generated for the computer program such that the array bounds checks corresponding to a given field pair are not included in the machine code if the invariant relationship is confirmed.
    Type: Application
    Filed: January 31, 2003
    Publication date: December 25, 2003
    Inventors: Aneesh Aggarwal, Keith H. Randall
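    The abstract above describes a compiler analysis that pairs an array field with an integer field, confirms an invariant between them, and then omits array bounds checks for code governed by that pair. The Python sketch below is a loose, illustrative model over a toy intermediate representation (the reference kinds and the removal rule are simplifying assumptions, not the patented method): pairs touched by references that could break the invariant are dropped, and bounds checks are elided only for the pairs that remain.
      # Hedged sketch of the pair-pruning analysis over a toy IR; all structures
      # and rules here are illustrative simplifications.

      from dataclasses import dataclass

      @dataclass
      class FieldRef:
          kind: str        # "paired_update", "array_store", or "int_store"
          pair: tuple      # (array_field_name, int_field_name)

      def find_confirmable_pairs(candidate_pairs, refs):
          """Start from all possible (array field, int field) pairs, then drop any pair
          whose references cannot confirm the invariant int_field <= len(array_field)."""
          remaining = set(candidate_pairs)
          for ref in refs:
              if ref.pair not in remaining:
                  continue
              # A lone store to either field (outside a paired update) could break
              # the invariant, so the pair is removed from the list.
              if ref.kind in ("array_store", "int_store"):
                  remaining.discard(ref.pair)
          return remaining

      def emit_loop(loop_pair, confirmed_pairs):
          # Bounds checks for a loop bounded by the int field of a confirmed pair
          # are omitted from the generated code; otherwise they are kept.
          if loop_pair in confirmed_pairs:
              return "loop body without bounds checks"
          return "loop body with bounds checks"

      if __name__ == "__main__":
          pairs = [("elems", "count"), ("buf", "size")]
          refs = [
              FieldRef("paired_update", ("elems", "count")),   # maintains the invariant
              FieldRef("int_store", ("buf", "size")),          # may break the invariant
          ]
          confirmed = find_confirmable_pairs(pairs, refs)
          print(emit_loop(("elems", "count"), confirmed))   # bounds checks elided
          print(emit_loop(("buf", "size"), confirmed))      # bounds checks kept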