Patents by Inventor John Gregory Favor

John Gregory Favor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11822487
    Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB. The unified TLB is configured to determine an order in which to perform a TLB lookup for at least a subset of page sizes of the plurality of page sizes based on the page size residency metadata.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: November 21, 2023
    Assignee: Ampere Computing LLC
    Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
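    Illustrative sketch: the lookup-ordering idea in the abstract above can be modeled in a few lines of Python. This is a behavioral sketch only, not the patented hardware; the UnifiedTLB class, the example page sizes, and the policy of probing the most-recently-used page size first and the remaining sizes by resident-page count are assumptions made for illustration.

        class UnifiedTLB:
            """Behavioral model of a unified TLB with per-page-size residency metadata."""

            PAGE_SIZES = (4 << 10, 64 << 10, 2 << 20, 1 << 30)  # example sizes in bytes

            def __init__(self):
                self.entries = {}   # (page size, virtual page number) -> physical frame number
                self.resident = {s: 0 for s in self.PAGE_SIZES}  # counter per page size
                self.mru_size = self.PAGE_SIZES[0]               # most recently used page size

            def insert(self, vaddr, paddr, page_size):
                self.entries[(page_size, vaddr // page_size)] = paddr // page_size
                self.resident[page_size] += 1

            def probe_order(self):
                # Probe the MRU page size first, then the rest ordered by how many
                # pages of each size are currently resident.
                rest = sorted((s for s in self.PAGE_SIZES if s != self.mru_size),
                              key=lambda s: self.resident[s], reverse=True)
                return [self.mru_size] + rest

            def lookup(self, vaddr):
                for size in self.probe_order():
                    if self.resident[size] == 0:
                        continue                 # skip sizes with no resident pages
                    frame = self.entries.get((size, vaddr // size))
                    if frame is not None:
                        self.mru_size = size     # update the MRU metadata on a hit
                        return frame * size + vaddr % size
                return None                      # TLB miss

        tlb = UnifiedTLB()
        tlb.insert(0x7F0000123000, 0x80000000, 4 << 10)
        print(hex(tlb.lookup(0x7F0000123ABC)))   # hits the 4 KiB entry -> 0x80000abc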
  • Patent number: 11386016
    Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB. The unified TLB is configured to determine an order in which to perform a TLB lookup for at least a subset of page sizes of the plurality of page sizes based on the page size residency metadata.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: July 12, 2022
    Assignee: Ampere Computing LLC
    Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
  • Publication number: 20220206845
    Abstract: Aspects disclosed in the detailed description include multi-level instruction scheduling in a processor. Related methods and systems are also disclosed. In one exemplary aspect, an apparatus is provided that comprises a scheduler circuit comprising a scheduling group circuit, a first selection circuit, and a second selection circuit. The scheduling group circuit comprises a plurality of groups of scheduling entries, each scheduling entry among the groups of scheduling entries comprising an instruction portion and a ready portion, and each group configured to have its scheduling entries written in-order. The scheduling group circuit is further configured to maintain group age information associated with each group of the plurality of groups. The first selection circuit is configured to select a first in-order ready entry from each group. The second selection circuit is configured to select the first in-order ready entry belonging to the oldest group based on the group age information for scheduling.
    Type: Application
    Filed: December 31, 2020
    Publication date: June 30, 2022
    Inventors: Sean Philip Mirkes, John Gregory Favor
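    Illustrative sketch: the two-level selection in the abstract above can be approximated with a small software model. The SchedulingGroup class, the list-based entries, and the age-by-allocation-order convention are assumptions for illustration; they are not the claimed circuits.

        class SchedulingGroup:
            def __init__(self, age):
                self.age = age           # lower value = older group
                self.entries = []        # in-order [instruction, ready] pairs

            def first_ready(self):
                # First-level selection: the first in-order ready entry in this group.
                for instruction, ready in self.entries:
                    if ready:
                        return instruction
                return None

        def pick_next(groups):
            # Second-level selection: among each group's first ready entry,
            # schedule the one belonging to the oldest group.
            candidates = [(g.age, g.first_ready()) for g in groups
                          if g.first_ready() is not None]
            return min(candidates)[1] if candidates else None

        g0, g1 = SchedulingGroup(age=0), SchedulingGroup(age=1)
        g0.entries = [["add r1, r2, r3", False], ["mul r4, r1, r1", True]]
        g1.entries = [["ld r5, [r6]", True]]
        print(pick_next([g0, g1]))   # -> "mul r4, r1, r1", the oldest group's first ready entry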
  • Publication number: 20220091997
    Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB.
    Type: Application
    Filed: December 1, 2021
    Publication date: March 24, 2022
    Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
  • Publication number: 20220004501
    Abstract: An apparatus configured to provide just-in-time synonym handling, and related systems, methods, and computer-readable media, are disclosed. The apparatus includes a first cache comprising a translation lookaside buffer (TLB) and a hit/miss block. The first cache is configured to form a miss request associated with an access to the first cache and provide the miss request to a second cache. The miss request comprises a physical address provided by the TLB and miss information provided by the hit/miss block. The first cache is further configured to receive, from the second cache, previously-stored metadata associated with an entry in the second cache. The entry in the second cache is associated with the miss request.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 6, 2022
    Inventors: John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Kjeld Svendsen, Bret Leslie Toll
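    Illustrative sketch: the abstract above describes an interface between two cache levels rather than a complete algorithm, so the model below only shows the message flow it names: a miss request carrying a physical address from the TLB plus hit/miss information, answered with previously-stored metadata from the second cache. The field names and the synonym metadata are assumptions for illustration.

        from dataclasses import dataclass

        @dataclass
        class MissRequest:
            physical_address: int   # provided by the first cache's TLB
            miss_info: dict         # provided by the hit/miss block

        class SecondCache:
            def __init__(self):
                # Metadata previously stored alongside second-level entries,
                # keyed by 64-byte-aligned physical address (assumed line size).
                self.metadata = {0x80000000: {"synonym_of_virtual_index": 0x3}}

            def handle_miss(self, request):
                # Return the stored metadata for the entry the miss maps to, so the
                # first cache can resolve a synonym just in time.
                return self.metadata.get(request.physical_address & ~0x3F)

        l2 = SecondCache()
        request = MissRequest(physical_address=0x80000000, miss_info={"set": 3})
        print(l2.handle_miss(request))   # -> {'synonym_of_virtual_index': 3}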
  • Publication number: 20210191877
    Abstract: A memory management unit (MMU) including a unified translation lookaside buffer (TLB) supporting a plurality of page sizes is disclosed. In one aspect, the MMU is further configured to store and dynamically update page size residency metadata associated with each of the plurality of page sizes. The page size residency metadata may include most recently used (MRU) page size data and/or a counter for each page size indicating how many pages of that page size are resident in the unified TLB.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Inventors: George Van Horn Leming, III, John Gregory Favor, Stephan Jean Jourdan, Jonathan Christopher Perry, Bret Leslie Toll
  • Patent number: 10372615
    Abstract: Various aspects provide for managing data associated with a cache memory. For example, a system can include a cache memory and a memory controller. The cache memory stores data. The memory controller maintains a history profile for the data stored in the cache memory. In an implementation, the memory controller includes a filter component, a tagging component and a data management component. The filter component determines whether the data is previously stored in the cache memory based on a filter associated with a probabilistic data structure. The tagging component tags the data as recurrent data in response to a determination by the filter component that the data is previously stored in the cache memory. The data management component retains the data in the cache memory in response to the tagging of the data as the recurrent data.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: August 6, 2019
    Assignee: Ampere Computing LLC
    Inventors: Kjeld Svendsen, John Gregory Favor
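    Illustrative sketch: the "filter associated with a probabilistic data structure" in the abstract above can be pictured as a small Bloom filter. The sizes, the SHA-256 hashing, and the retain-on-recurrence policy below are assumptions for illustration, not the patented design.

        import hashlib

        class BloomFilter:
            def __init__(self, bits=1024, hashes=3):
                self.bits, self.hashes = bits, hashes
                self.array = bytearray(bits // 8)

            def _positions(self, key):
                digest = hashlib.sha256(key.encode()).digest()
                return [int.from_bytes(digest[4 * i:4 * i + 4], "little") % self.bits
                        for i in range(self.hashes)]

            def add(self, key):
                for p in self._positions(key):
                    self.array[p // 8] |= 1 << (p % 8)

            def maybe_contains(self, key):
                return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(key))

        def on_cache_fill(bloom, address, line):
            # If the filter says the address was cached before, tag the line as
            # recurrent so the replacement policy prefers to retain it.
            line["recurrent"] = bloom.maybe_contains(address)
            bloom.add(address)
            return line

        bloom = BloomFilter()
        print(on_cache_fill(bloom, "0x1000", {})["recurrent"])   # False: first time seen
        print(on_cache_fill(bloom, "0x1000", {})["recurrent"])   # True: seen before, so retained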
  • Patent number: 10348281
    Abstract: Various aspects provide for mitigating voltage droop associated with a microprocessor (e.g., by controlling a clock associated with the microprocessor). For example, a system can include a microprocessor and a controller. The microprocessor can receive a clock provided by a clock buffer. The controller can control frequency of the clock provided by the clock buffer based on a voltage associated with the microprocessor. In an aspect, the controller can reduce the frequency of the clock in response to a determination that the voltage satisfies a defined criterion. Additionally, the controller can incrementally increase the frequency of the clock in response to another determination that the voltage satisfies another defined criterion after satisfying the defined criterion.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: July 9, 2019
    Assignee: Ampere Computing LLC
    Inventors: David S. Oliver, Matthew W. Ashcraft, Luca Ravezzi, Alfred Yeung, John Gregory Favor
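    Illustrative sketch: the droop-mitigation loop in the abstract above reduces the clock frequency when the supply voltage dips and restores it in small steps once the voltage recovers. The thresholds, step size, and sampling model below are assumptions for illustration.

        def control_clock(voltage_samples, nominal_mhz=3000, droop_threshold=0.85,
                          recovery_threshold=0.95, drop_factor=0.5, step_mhz=100):
            frequency, trace = nominal_mhz, []
            for v in voltage_samples:          # voltage as a fraction of nominal
                if v < droop_threshold:
                    # The droop criterion is satisfied: cut the clock frequency quickly.
                    frequency = min(frequency, int(nominal_mhz * drop_factor))
                elif v >= recovery_threshold and frequency < nominal_mhz:
                    # The recovery criterion is satisfied: raise the frequency incrementally.
                    frequency = min(nominal_mhz, frequency + step_mhz)
                trace.append(frequency)
            return trace

        # The supply dips, then recovers; the clock drops at once and climbs back in steps.
        print(control_clock([1.0, 0.82, 0.84, 0.96, 0.97, 0.98, 1.0]))
        # -> [3000, 1500, 1500, 1600, 1700, 1800, 1900]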
  • Patent number: 9798672
    Abstract: Various aspects provide for managing data associated with a cache memory. For example, a system can include a cache memory and a memory controller. The cache memory stores data. The memory controller maintains a history profile for the data stored in the cache memory. In an implementation, the memory controller includes a filter component, a tagging component and a data management component. The filter component determines whether the data is previously stored in the cache memory based on a filter associated with a probabilistic data structure. The tagging component tags the data as recurrent data in response to a determination by the filter component that the data is previously stored in the cache memory. The data management component retains the data in the cache memory in response to the tagging of the data as the recurrent data.
    Type: Grant
    Filed: April 14, 2016
    Date of Patent: October 24, 2017
    Assignee: MACOM CONNECTIVITY SOLUTIONS, LLC
    Inventors: Kjeld Svendsen, John Gregory Favor
  • Patent number: 9280479
    Abstract: A memory system having increased throughput is disclosed. Specifically, the memory system includes a first level write combining queue that reduces the number of data transfers between a level one cache and a level two cache. In addition, a second level write merging buffer can further reduce the number of data transfers within the memory system. The first level write combining queue receives data from the level one cache. The second level write merging buffer receives data from the first level write combining queue. The level two cache receives data from both the first level write combining queue and the second level write merging buffer. Specifically, the first level write combining queue combines multiple store transactions from the load store units to associated addresses. In addition, the second level write merging buffer merges data from the first level write combining queue.
    Type: Grant
    Filed: May 22, 2012
    Date of Patent: March 8, 2016
    Assignee: Applied Micro Circuits Corporation
    Inventors: David A. Kruckemyer, John Gregory Favor, Matthew W. Ashcraft
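    Illustrative sketch: the two-level write combining in the abstract above can be modeled as a first-level queue that coalesces stores to the same cache line and a second-level buffer that merges those lines before they reach the level two cache. The 64-byte line size and the dictionary-based structures are assumptions for illustration.

        LINE = 64   # assumed cache line size in bytes

        def combine(stores):
            """First-level write combining queue: merge byte stores into per-line records."""
            lines = {}
            for addr, value in stores:
                line = lines.setdefault(addr // LINE * LINE, {})
                line[addr % LINE] = value   # a later store to the same byte overwrites the earlier one
            return lines

        def merge_into_l2(l2, combined):
            """Second-level write merging buffer: fold combined lines into the L2, one transfer each."""
            transfers = 0
            for base, byte_map in combined.items():
                l2.setdefault(base, {}).update(byte_map)
                transfers += 1
            return transfers

        stores = [(0x100, 0xAA), (0x101, 0xBB), (0x100, 0xCC), (0x240, 0x11)]
        l2 = {}
        print(merge_into_l2(l2, combine(stores)), "line transfers instead of", len(stores), "stores")
        # -> 2 line transfers instead of 4 stores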
  • Patent number: 9213643
    Abstract: Various aspects provide for implementing a cache coherence protocol. A system comprises at least one processing component and a centralized controller. The at least one processing component comprises a cache controller. The cache controller is configured to manage a cache memory associated with a processor. The centralized controller is configured to communicate with the cache controller based on a power state of the processor.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: December 15, 2015
    Assignee: Applied Micro Circuits Corporation
    Inventors: David Alan Kruckemyer, John Gregory Favor
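    Illustrative sketch: the abstract above is high level, so the model below shows only one possible reading of "communicate with the cache controller based on a power state of the processor": the centralized controller skips snooping the caches of powered-down processors. That policy is an assumption made for illustration, not the claimed protocol.

        class CacheController:
            def __init__(self, core_id):
                self.core_id = core_id

            def snoop(self, address):
                return f"core {self.core_id} snooped {hex(address)}"

        class CentralizedController:
            def __init__(self):
                self.controllers = {}    # core id -> CacheController
                self.power_state = {}    # core id -> "on" | "off"

            def register(self, controller, state="on"):
                self.controllers[controller.core_id] = controller
                self.power_state[controller.core_id] = state

            def broadcast_snoop(self, address):
                # Communicate with each cache controller according to its processor's power state.
                return [c.snoop(address) for core_id, c in self.controllers.items()
                        if self.power_state[core_id] == "on"]

        central = CentralizedController()
        central.register(CacheController(0), "on")
        central.register(CacheController(1), "off")
        print(central.broadcast_snoop(0x1000))   # only the powered-on core is snooped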
  • Patent number: 9058284
    Abstract: Method and apparatus for performing table lookup are disclosed. In one embodiment, the method includes providing a lookup table, where the lookup table includes a plurality of translation modes and each translation mode includes a corresponding translation table tree supporting a plurality of page sizes. The method further includes receiving a search request from a requester, determining a translation table tree for conducting the search request, determining a lookup sequence based on the translation table tree, generating a search output using the lookup sequence, and transmitting the search output to the requester. The plurality of translation modes includes a first set of page sizes for 32-bit operating system software and a second set of page sizes for 64-bit operating system software. The plurality of page sizes includes non-global pages, global pages, and both non-global and global pages.
    Type: Grant
    Filed: March 16, 2012
    Date of Patent: June 16, 2015
    Assignee: Applied Micro Circuits Corporation
    Inventors: Amos Ben-Meir, John Gregory Favor
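    Illustrative sketch: the abstract above ties each translation mode to a translation table tree and derives a lookup sequence from it. The mode names, page-size sets, and largest-page-first ordering below are assumptions that only loosely mirror the 32-bit/64-bit split the abstract mentions.

        TRANSLATION_MODES = {
            "32-bit": [4 << 10, 1 << 20],            # e.g. 4 KiB and 1 MiB pages
            "64-bit": [4 << 10, 2 << 20, 1 << 30],   # e.g. 4 KiB, 2 MiB and 1 GiB pages
        }

        def lookup_sequence(mode, prefer_global=False):
            # Build the order of (page size, globalness) probes for this mode's table tree,
            # trying larger pages first so a big mapping ends the walk early.
            sizes = sorted(TRANSLATION_MODES[mode], reverse=True)
            kinds = ["global", "non-global"] if prefer_global else ["non-global", "global"]
            return [(size, kind) for size in sizes for kind in kinds]

        def search(table, mode, vaddr):
            for size, kind in lookup_sequence(mode):
                entry = table.get((size, kind, vaddr // size))
                if entry is not None:
                    return entry
            return None

        table = {(2 << 20, "non-global", 0x7F000): "PTE for a 2 MiB page"}
        print(search(table, "64-bit", 0x7F000 * (2 << 20) + 0x123))   # -> PTE for a 2 MiB page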
  • Patent number: 8949581
    Abstract: A load scheduler capable of limited issuing of out of order load instructions is disclosed. The load scheduler uses a max skipping threshold, which limits the number of skipping load instructions, and a max skipped threshold, which limits the number of skipped load instructions. An address tag for a skipping load instruction is stored in a skipping load instruction tracking unit when the skipping load instruction is issued. When a skipped load instruction issues, the address tag of the skipped load instruction is compared to the address tag of the skipping load instruction to determine whether the out of order issuing of the skipping load instruction caused a hazard that requires a flush.
    Type: Grant
    Filed: May 9, 2011
    Date of Patent: February 3, 2015
    Assignee: Applied Micro Circuits Corporation
    Inventors: Matthew W. Ashcraft, John Gregory Favor
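    Illustrative sketch: the skipping/skipped bookkeeping in the abstract above can be modeled with two thresholds, a list of tracked address tags, and a tag comparison when a skipped load finally issues. The threshold values and the software representation are assumptions for illustration.

        MAX_SKIPPING = 2   # limit on loads issued out of order (ahead of older loads)
        MAX_SKIPPED = 4    # limit on older loads that may be bypassed

        class LoadScheduler:
            def __init__(self):
                self.skipping_tags = []   # address tags held by the tracking unit

            def can_issue_out_of_order(self, skipped_count):
                return (len(self.skipping_tags) < MAX_SKIPPING
                        and skipped_count <= MAX_SKIPPED)

            def issue_skipping(self, address_tag):
                # Record the skipping load's address tag when it issues out of order.
                self.skipping_tags.append(address_tag)

            def issue_skipped(self, address_tag):
                # A skipped (older) load issues: compare its tag against every tracked
                # skipping load; a match means the reordering created a hazard.
                return "flush" if address_tag in self.skipping_tags else "ok"

        scheduler = LoadScheduler()
        if scheduler.can_issue_out_of_order(skipped_count=1):
            scheduler.issue_skipping(address_tag=0x40)
        print(scheduler.issue_skipped(address_tag=0x40))   # -> flush (address tags match)
        print(scheduler.issue_skipped(address_tag=0x80))   # -> ok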
  • Patent number: 8850121
    Abstract: A load/store unit with an outstanding load miss buffer and a load miss result buffer is configured to read data from a memory system having a level one cache. Missed load instructions are stored in the outstanding load miss buffer. The load/store unit retrieves data for multiple dependent missed load instructions using a single cache access and stores the data in the load miss result buffer. The outstanding load miss buffer stores a first missed load instruction in a first primary entry. Additional missed load instructions that are dependent on the first missed load instruction are stored in dependent entries of the first primary entry or in shared entries. If a shared entry is used for a missed load instruction, the shared entry is associated with the primary entry.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: September 30, 2014
    Assignee: Applied Micro Circuits Corporation
    Inventors: Matthew W. Ashcraft, John Gregory Favor, David A. Kruckemyer
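    Illustrative sketch: the primary/dependent/shared entry bookkeeping in the abstract above can be modeled as one record per missed cache line, with overflow loads parked in shared entries and a single fill satisfying every load waiting on that line. The entry counts and the fill routine are assumptions for illustration.

        class OutstandingLoadMissBuffer:
            def __init__(self, dependents_per_primary=2):
                self.limit = dependents_per_primary
                self.primaries = {}   # line address -> {"loads": [...], "shared": [...]}

            def record_miss(self, line_addr, load):
                entry = self.primaries.get(line_addr)
                if entry is None:
                    # The first miss to a line allocates a primary entry.
                    self.primaries[line_addr] = {"loads": [load], "shared": []}
                elif len(entry["loads"]) <= self.limit:
                    # Later misses to the same line become dependent entries.
                    entry["loads"].append(load)
                else:
                    # Overflow goes to a shared entry associated with the primary.
                    entry["shared"].append(load)

            def fill(self, line_addr, line_data, result_buffer):
                # One cache access satisfies the primary and all of its dependents: the
                # returned line is written once into the load miss result buffer.
                entry = self.primaries.pop(line_addr)
                result_buffer[line_addr] = line_data
                return entry["loads"] + entry["shared"]   # loads that can now reissue

        olmb, results = OutstandingLoadMissBuffer(), {}
        for load in ("ld r1", "ld r2", "ld r3"):
            olmb.record_miss(0x2000, load)
        print(olmb.fill(0x2000, b"\x00" * 64, results))   # all three reissue from one fill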
  • Publication number: 20140281275
    Abstract: Various aspects provide for implementing a cache coherence protocol. A system comprises at least one processing component and a centralized controller. The at least one processing component comprises a cache controller. The cache controller is configured to manage a cache memory associated with a processor. The centralized controller is configured to communicate with the cache controller based on a power state of the processor.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: Applied Micro Circuits Corporation
    Inventors: David Alan Kruckemyer, John Gregory Favor
  • Patent number: 8806135
    Abstract: A load/store unit with an outstanding load miss buffer and a load miss result buffer is configured to read data from a memory system having a level one cache. Missed load instructions are stored in the outstanding load miss buffer. The load/store unit retrieves data for multiple dependent missed load instructions using a single cache access and stores the data in the load miss result buffer. When missed load instructions are reissued from the outstanding load miss buffer, data for the missed load instructions are read from the load miss result buffer rather than the level one cache. Because the data is stored in the load miss result buffer, other instructions that may change the data in level one cache do not cause data hazards with the missed load instructions.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: August 12, 2014
    Assignee: Applied Micro Circuits Corporation
    Inventors: Matthew W. Ashcraft, John Gregory Favor, David A. Kruckemyer
  • Patent number: 8793435
    Abstract: A load/store unit with an outstanding load miss buffer and a load miss result buffer is configured to read data from a memory system having a level one cache. Missed load instructions are stored in the outstanding load miss buffer. The load/store unit retrieves data for multiple dependent missed load instructions using a single memory access and stores the data in the load miss result buffer. The load miss result buffer includes dependent data lines, dependent data selection circuits, shared data lines and shared data selection circuits. The dependent data selection circuits are configured to select a subset of data from the memory system for storing in an associated dependent data line. Similarly, the shared data selection circuits are configured to select a subset of data from the memory system for storing in an associated shared data line.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: July 29, 2014
    Assignee: Applied Micro Circuits Corporation
    Inventors: Matthew W. Ashcraft, John Gregory Favor, David A. Kruckemyer
  • Patent number: 8543843
    Abstract: A virtual core management system including one or more physical cores, a virtual core including a collection of logical states associated with the execution of a program, and a virtual core management component configured to map the virtual core to one of the one or more physical cores based upon power management considerations.
    Type: Grant
    Filed: October 31, 2007
    Date of Patent: September 24, 2013
    Assignees: Sun Microsystems, Inc., Sun Microsystems Technology Ltd.
    Inventors: Yu Qing Cheng, John Gregory Favor, Peter N. Glaskowsky, Carlos Puchol, Seungyoon Peter Song
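    Illustrative sketch: the abstract above separates a program's logical state (the virtual core) from the physical core that runs it and maps between them with power in mind. The "prefer the lowest-power idle core unless high performance is requested" policy below is an assumption for illustration.

        class VirtualCore:
            def __init__(self, name, registers=None):
                self.name = name
                self.registers = registers or {}   # logical state carried across migrations

        def map_virtual_core(vcore, physical_cores, needs_high_performance=False):
            # physical_cores: dicts such as {"id": 0, "busy": False, "power_cost": 1.0}
            idle = [p for p in physical_cores if not p["busy"]]
            if not idle:
                return None
            key = ((lambda p: -p["power_cost"]) if needs_high_performance
                   else (lambda p: p["power_cost"]))
            target = min(idle, key=key)
            target["busy"] = True
            return target["id"]

        cores = [{"id": 0, "busy": False, "power_cost": 1.0},
                 {"id": 1, "busy": False, "power_cost": 0.4}]
        vcore = VirtualCore("vc0", {"pc": 0x1000})
        print(map_virtual_core(vcore, cores))   # -> 1, the lower-power physical core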
  • Patent number: 8499293
    Abstract: A method and apparatus for optimizing a sequence of operations adapted for execution by a processor is disclosed to include associating with each register a symbolic expression selected from a set of possible symbolic expressions, locating an operation, if any, that is next within the sequence of operations and setting that operation to be a working operation, where the working operation has associated therewith a destination register and zero or more source registers, and processing the working operation when the working operation and any symbolic expressions of its source registers, if any, match at least one of a set of rules, where each rule specifies that the working operation must match a subset of the operation set, where each rule also specifies that the symbolic expressions, if any, of any source registers of the working operation must match a subset of the possible symbolic expressions, and where the rule also specifies a result, then posting the result as the symbolic expression of the destination register.
    Type: Grant
    Filed: November 16, 2007
    Date of Patent: July 30, 2013
    Assignee: Oracle America, Inc.
    Inventors: Matthew William Ashcraft, John Gregory Favor, Christopher Patrick Nelson, Ivan Pavle Radivojevic, Joseph Byron Rowlands, Richard Win Thaik
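    Illustrative sketch: the abstract above walks a sequence of operations, keeps a symbolic expression per register, and posts a rule's result to the destination register when the working operation and its sources' expressions match the rule. The tiny rule set below (constant folding and add-of-zero) is an assumption for illustration.

        def is_const(expr):
            return isinstance(expr, int)

        RULES = [
            # (opcode predicate, source-expression predicate, result function)
            (lambda op: op == "add", lambda a, b: is_const(a) and is_const(b), lambda a, b: a + b),
            (lambda op: op == "add", lambda a, b: b == 0,                      lambda a, b: a),
        ]

        def process(operation, reg_exprs):
            opcode, dst, src1, src2 = operation
            a, b = reg_exprs.get(src1, src1), reg_exprs.get(src2, src2)
            for opcode_matches, sources_match, result in RULES:
                if opcode_matches(opcode) and sources_match(a, b):
                    reg_exprs[dst] = result(a, b)   # post the result as dst's symbolic expression
                    return reg_exprs
            reg_exprs[dst] = (opcode, a, b)         # no rule matched: keep an opaque expression
            return reg_exprs

        exprs = {"r1": 5, "r2": 7, "r3": 0}
        process(("add", "r4", "r1", "r2"), exprs)   # both sources are constants: folds to 12
        process(("add", "r5", "r4", "r3"), exprs)   # 12 and 0 are constants too: folds to 12
        print(exprs["r4"], exprs["r5"])             # -> 12 12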
  • Patent number: 8370576
    Abstract: An embodiment of the present invention includes a circuit for tracking memory operations with trace-based execution. Each trace includes a sequence of operations that includes zero or more of the memory operations. At least some of the active memory operations access the memory in an execution order that is different from the program order. The circuit includes a first memory that caches data accessed by the memory operations. This memory is partitioned into N banks. Checkpoint entries, which are stored in a second memory also partitioned into N banks, are associated with each trace. Each entry refers to a checkpoint location in the first memory. A sub-circuit receives rollback requests and responds by overwriting checkpoint locations. Each of the N memory units consisting of a bank in the first memory and the corresponding bank in the second memory may be rolled back independently and concurrently with other memory units.
    Type: Grant
    Filed: February 13, 2008
    Date of Patent: February 5, 2013
    Assignee: Oracle America, Inc.
    Inventors: John Gregory Favor, Paul G. Chan, Graham Ricketson Murphy, Joseph Byron Rowlands
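    Illustrative sketch: the abstract above splits both the data memory and the checkpoint memory into N matching banks so each bank can be rolled back independently. The undo-log representation, the bank count, and the address-to-bank mapping below are assumptions for illustration.

        N_BANKS = 4

        class BankedCheckpointMemory:
            def __init__(self):
                self.data = [dict() for _ in range(N_BANKS)]         # first memory: cached data
                self.checkpoints = [list() for _ in range(N_BANKS)]  # second memory: checkpoint entries

            def write(self, addr, value):
                bank = addr % N_BANKS
                # Record a checkpoint entry (location and prior value) before overwriting.
                self.checkpoints[bank].append((addr, self.data[bank].get(addr)))
                self.data[bank][addr] = value

            def commit(self):
                # The trace completed: discard every bank's checkpoint entries.
                self.checkpoints = [list() for _ in range(N_BANKS)]

            def rollback(self, bank):
                # Undo this bank's writes in reverse order; other banks are untouched.
                for addr, old in reversed(self.checkpoints[bank]):
                    if old is None:
                        self.data[bank].pop(addr, None)
                    else:
                        self.data[bank][addr] = old
                self.checkpoints[bank] = []

        memory = BankedCheckpointMemory()
        memory.write(0, "speculative")   # lands in bank 0
        memory.write(1, "kept")          # lands in bank 1
        memory.rollback(0)               # only bank 0 is rolled back
        print(memory.data[0], memory.data[1])   # -> {} {1: 'kept'}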