Patents by Inventor DEREK ROBERT HOWER

DEREK ROBERT HOWER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240061612
    Abstract: Disclosed are techniques for processing uncommitted writes in a store queue. In an aspect, an apparatus comprises a processor and a dual store queue having an in-order queue (IOQ) for storing uncommitted writes and an uncommitted data gather queue (UGQ) for gathering uncommitted data. The dual store queue receives, from a processor, a first write instruction for writing first data to at least a portion of memory at a first memory address, allocates an IOQ entry corresponding to the first write instruction, and allocates or updates a UGQ entry associated with the first memory address to contain the first data.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Cerine Marguerite HILL, Derek Robert HOWER
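To illustrate the dual store queue described in the abstract above, here is a minimal functional sketch in Python. The class and member names (DualStoreQueue, ioq, ugq) are hypothetical, and the byte-level gathering of partial writes is simplified to whole-entry data per address; the patent covers a hardware structure, so this is only an approximation of the described behavior.

```python
from collections import OrderedDict

class DualStoreQueue:
    """Functional sketch of a dual store queue: an in-order queue (IOQ) of
    uncommitted writes plus an uncommitted data gather queue (UGQ) that
    holds the newest uncommitted data per memory address."""

    def __init__(self):
        self.ioq = []              # in-order list of (address, data) entries
        self.ugq = OrderedDict()   # address -> gathered (most recent) data

    def write(self, address, data):
        # Allocate an IOQ entry for the write instruction (program order kept).
        self.ioq.append((address, data))
        # Allocate or update the UGQ entry for this address with the new data.
        self.ugq[address] = data

    def forward(self, address):
        # A younger load can read the gathered uncommitted data directly.
        return self.ugq.get(address)

    def commit_oldest(self, memory):
        # Retire the oldest write in program order into memory.
        address, data = self.ioq.pop(0)
        memory[address] = data
        # Drop the UGQ entry only if no younger write to the address remains.
        if all(a != address for a, _ in self.ioq):
            self.ugq.pop(address, None)

# Example: two writes to the same address gather in the UGQ.
dsq, mem = DualStoreQueue(), {}
dsq.write(0x100, b"\x01")
dsq.write(0x100, b"\x02")
assert dsq.forward(0x100) == b"\x02"
dsq.commit_oldest(mem)      # oldest write retires; UGQ still holds newest data
assert dsq.forward(0x100) == b"\x02"
```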
  • Patent number: 11243772
    Abstract: Certain aspects of the present disclosure provide techniques for training load value predictors, comprising: determining if a prediction has been made by one or more of a plurality of load value predictors; determining a misprediction has been made by one or more load value predictors of the plurality of load value predictors; training each of the one or more load value predictors that made the misprediction; and resetting a confidence value associated with each of the one or more load value predictors that made the misprediction.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: February 8, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Rami Mohammad A. Al Sheikh, Derek Robert Hower
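The training flow summarized in the abstract above, in which each load value predictor that mispredicted is trained and has its confidence reset, can be approximated in software as below. The LoadValuePredictor class, its last-value prediction policy, and the confidence threshold are illustrative assumptions, not the patented microarchitecture.

```python
class LoadValuePredictor:
    """Toy last-value predictor with a saturating confidence counter."""

    def __init__(self, confidence_threshold=3):
        self.table = {}          # load PC -> (predicted value, confidence)
        self.threshold = confidence_threshold

    def predict(self, pc):
        value, confidence = self.table.get(pc, (None, 0))
        # Only make a prediction once confidence has reached the threshold.
        return value if confidence >= self.threshold else None

    def update(self, pc, actual_value):
        value, confidence = self.table.get(pc, (None, 0))
        predicted = self.predict(pc)
        if predicted is not None and predicted != actual_value:
            # Misprediction: retrain the predictor and reset its confidence.
            self.table[pc] = (actual_value, 0)
        elif value == actual_value:
            # The stored value matched: build confidence.
            self.table[pc] = (value, min(confidence + 1, self.threshold))
        else:
            # No confident prediction and the value changed: train quietly.
            self.table[pc] = (actual_value, 0)

def train_on_commit(predictors, pc, actual_value):
    # Each predictor updates itself; those that mispredicted are retrained
    # and their confidence is reset, as in the abstract above.
    for predictor in predictors:
        predictor.update(pc, actual_value)
```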
  • Publication number: 20200364055
    Abstract: Certain aspects of the present disclosure provide techniques for training load value predictors, comprising: determining if a prediction has been made by one or more of a plurality of load value predictors; determining a misprediction has been made by one or more load value predictors of the plurality of load value predictors; training each of the one or more load value predictors that made the misprediction; and resetting a confidence value associated with each of the one or more load value predictors that made the misprediction.
    Type: Application
    Filed: May 16, 2019
    Publication date: November 19, 2020
    Inventors: Rami Mohammad A. AL SHEIKH, Derek Robert HOWER
  • Publication number: 20200356372
    Abstract: Providing early instruction execution in a processor, and related apparatuses, methods, and computer-readable media are disclosed. In one aspect, an apparatus comprises an early execution engine communicatively coupled to a front-end instruction pipeline and a local register file. The apparatus may be configured to use value prediction so that all input values of an instruction are available early in the pipeline (in the front end), even before the value producers have executed, providing an opportunity to execute such instructions early and to avoid sending them to the power-hungry out-of-order engine, which may improve both performance and energy efficiency. In other aspects, the front end of the pipeline is augmented with a component for early executing simple operations (e.g.
    Type: Application
    Filed: May 8, 2019
    Publication date: November 12, 2020
    Inventors: Rami Mohammad A. AL SHEIKH, Derek Robert HOWER
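As a rough illustration of the early-execution idea in the abstract above: if value prediction makes all of an instruction's inputs available in the front end, a simple operation can be executed there and never sent to the out-of-order engine. The function and the small instruction encoding below are hypothetical.

```python
# Hypothetical front-end pass: execute simple operations whose inputs are
# already known (e.g. via value prediction) and skip dispatching them to the
# out-of-order back end.
SIMPLE_OPS = {"add": lambda a, b: a + b, "and": lambda a, b: a & b}

def front_end_early_execute(instructions, predicted_values):
    """instructions: list of (dest, op, src1, src2); predicted_values: reg -> value."""
    to_dispatch = []
    for dest, op, src1, src2 in instructions:
        inputs = [predicted_values.get(src1), predicted_values.get(src2)]
        if op in SIMPLE_OPS and None not in inputs:
            # All inputs available early: execute now, record the result
            # locally, and avoid sending the instruction downstream.
            predicted_values[dest] = SIMPLE_OPS[op](*inputs)
        else:
            to_dispatch.append((dest, op, src1, src2))
    return to_dispatch

remaining = front_end_early_execute(
    [("r3", "add", "r1", "r2"), ("r4", "mul", "r3", "r1")],
    {"r1": 5, "r2": 7},
)
print(remaining)  # only the multiply still needs the out-of-order engine
```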
  • Patent number: 10831254
    Abstract: Allocating power between multiple central processing units (CPUs) in a multi-CPU processor based on total current availability and individual CPU quality-of-service (QoS) requirements is disclosed. Current from a power rail is allocated to the CPUs by a global current manager (GCM) circuit based on performance criteria set by the CPUs. The CPUs can request increased current allocation from the GCM circuit, such as in response to executing a higher performance task. If the increased current allocation request keeps total current on the power rail within its maximum rail current limit, the GCM circuit approves the request to allow the CPU increased current allocation. This can allow CPUs executing higher performance tasks to have a larger current allocation than CPUs executing lower performance tasks without the maximum rail current limit being exceeded, and without necessarily having to lower the voltage of the power rail, which could unnecessarily lower performance of all CPUs.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: November 10, 2020
    Assignee: Qualcomm Incorporated
    Inventors: Shivam Priyadarshi, SeyedMajid Zahedi, Derek Robert Hower, Carl Alan Waldspurger, Jeffrey Todd Bridges, Sanjay Bhikhubhai Patel, Gabriel Martel Tarr, Chih Kang Lin, Ryan Donovan Wells, Harold Wade Cain, III
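A minimal software model of the global current manager (GCM) behavior described above: a CPU's request for additional current is approved only if the new total on the power rail stays within the maximum rail current limit. The class name, milliampere units, and interface are illustrative assumptions.

```python
class GlobalCurrentManager:
    """Toy model of a GCM circuit that allocates per-CPU current from a
    shared power rail without exceeding the rail's maximum current limit."""

    def __init__(self, max_rail_current_ma, initial_allocations_ma):
        self.max_rail_current = max_rail_current_ma
        self.alloc = dict(initial_allocations_ma)   # CPU id -> allocated mA

    def request_increase(self, cpu, extra_ma):
        # Approve only if the new total stays within the rail current limit.
        if sum(self.alloc.values()) + extra_ma <= self.max_rail_current:
            self.alloc[cpu] += extra_ma
            return True
        return False            # denied: the CPU keeps its current allocation

    def release(self, cpu, freed_ma):
        # A CPU finishing a high-performance task can return current headroom.
        self.alloc[cpu] = max(0, self.alloc[cpu] - freed_ma)

gcm = GlobalCurrentManager(4000, {0: 1000, 1: 1000, 2: 1000})
print(gcm.request_increase(0, 800))   # True: 3800 mA <= 4000 mA
print(gcm.request_increase(1, 500))   # False: would exceed the rail limit
```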
  • Patent number: 10678690
    Abstract: Providing fine-grained Quality of Service (QoS) control using interpolation for partitioned resources in processor-based systems is disclosed. In this regard, in one aspect, a processor-based system provides a partitioned resource (such as a system cache or memory access bandwidth to a shared system memory) that is subdivided into a plurality of partitions, and that is configured to service a plurality of resource clients. A resource allocation agent of the processor-based system provides a plurality of allocation indicators corresponding to each combination of resource client and partition, and indicating an allocation of each partition for each resource client. The resource allocation agent allocates the partitioned resource among the resource clients based on an interpolation of the plurality of allocation indicators.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: June 9, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Derek Robert Hower, Carl Alan Waldspurger, Vikramjit Sethi
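The interpolation in the abstract above can be pictured as follows: each (resource client, partition) pair has an allocation indicator, and the client's effective allocation is interpolated from those indicators rather than being limited to whole partitions. The function, the [0, 1] indicator range, and the cache sizes in the example are assumptions made for illustration.

```python
def interpolated_allocation(indicators, partition_sizes):
    """indicators[client][partition] in [0.0, 1.0] says how much of that
    partition the client may use; the client's effective allocation is the
    interpolation (weighted sum) of the indicators over partition sizes."""
    shares = {}
    for client, per_partition in indicators.items():
        shares[client] = sum(
            per_partition.get(p, 0.0) * size
            for p, size in partition_sizes.items()
        )
    return shares

# Example: an 8-partition cache with 512 KB per partition, two clients.
sizes = {p: 512 for p in range(8)}                        # KB per partition
indicators = {
    "client_a": {**{p: 1.0 for p in range(4)}, 4: 0.5},   # 4.5 partitions
    "client_b": {p: 1.0 for p in range(4, 8)},            # 4 partitions
}
print(interpolated_allocation(indicators, sizes))
# {'client_a': 2304.0, 'client_b': 2048.0} -> finer-grained than whole partitions
```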
  • Publication number: 20190086982
    Abstract: Allocating power between multiple central processing units (CPUs) in a multi-CPU processor based on total current availability and individual CPU quality-of-service (QoS) requirements is disclosed. Current from a power rail is allocated to the CPUs by a global current manager (GCM) circuit based on performance criteria set by the CPUs. The CPUs can request increased current allocation from the GCM circuit, such as in response to executing a higher performance task. If the increased current allocation request keeps total current on the power rail within its maximum rail current limit, the GCM circuit approves the request to allow the CPU increased current allocation. This can allow CPUs executing higher performance tasks to have a larger current allocation than CPUs executing lower performance tasks without the maximum rail current limit being exceeded, and without necessarily having to lower the voltage of the power rail, which could unnecessarily lower performance of all CPUs.
    Type: Application
    Filed: September 12, 2018
    Publication date: March 21, 2019
    Inventors: Shivam Priyadarshi, SeyedMajid Zahedi, Derek Robert Hower, Carl Alan Waldspurger, Jeffrey Todd Bridges, Sanjay Bhikhubhai Patel, Gabriel Martel Tarr, Chih Kang Lin, Ryan Donovan Wells, Harold Wade Cain, III
  • Publication number: 20190065374
    Abstract: Providing fine-grained Quality of Service (QoS) control using interpolation for partitioned resources in processor-based systems is disclosed. In this regard, in one aspect, a processor-based system provides a partitioned resource (such as a system cache or memory access bandwidth to a shared system memory) that is subdivided into a plurality of partitions, and that is configured to service a plurality of resource clients. A resource allocation agent of the processor-based system provides a plurality of allocation indicators corresponding to each combination of resource client and partition, and indicating an allocation of each partition for each resource client. The resource allocation agent allocates the partitioned resource among the resource clients based on an interpolation of the plurality of allocation indicators.
    Type: Application
    Filed: August 29, 2017
    Publication date: February 28, 2019
    Inventors: Derek Robert Hower, Carl Alan Waldspurger, Vikramjit Sethi
  • Patent number: 9697126
    Abstract: Generating approximate usage measurements for shared cache memory systems is disclosed. In one aspect, a cache memory system is provided. The cache memory system comprises a shared cache memory system. A subset of the shared cache memory system comprises a Quality of Service identifier (QoSID) tracking tag configured to store a QoSID tracking indicator for a QoS class. The shared cache memory system further comprises a cache controller configured to receive a memory access request comprising a QoSID, and is configured to access a cache line corresponding to the memory access request. The cache controller is also configured to determine whether the QoSID of the memory access request corresponds to a cache line assigned to the QoSID. If so, the cache controller is additionally configured to update the QoSID tracking tag.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: July 4, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Derek Robert Hower, Harold Wade Cain, III
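A loose sketch of the approximate-usage idea from the abstract above: only a sampled subset of cache lines carries QoSID tracking state, and per-QoS-class occupancy is estimated by scaling the counts from that subset. The sampling policy and class below are illustrative simplifications, not the tracking-tag hardware described in the patent.

```python
class ApproxUsageCache:
    """Toy cache where only a sampled subset of lines carries a QoSID
    tracking tag, giving approximate per-QoS-class usage measurements."""

    def __init__(self, num_lines, sample_every=8):
        self.lines = [None] * num_lines        # line index -> owning QoSID
        self.sample_every = sample_every       # subset of lines that are tracked
        self.usage = {}                        # QoSID -> sampled line count

    def fill(self, line_index, qosid):
        old = self.lines[line_index]
        self.lines[line_index] = qosid
        if line_index % self.sample_every == 0:     # tracked subset only
            if old is not None:
                self.usage[old] = self.usage.get(old, 1) - 1
            self.usage[qosid] = self.usage.get(qosid, 0) + 1

    def approx_occupancy(self, qosid):
        # Scale the sampled count back up to an approximate total.
        return self.usage.get(qosid, 0) * self.sample_every

cache = ApproxUsageCache(num_lines=64)
for i in range(32):
    cache.fill(i, qosid=1)         # QoS class 1 fills half the cache
print(cache.approx_occupancy(1))   # approximately 32
```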
  • Patent number: 9678875
    Abstract: Providing shared cache memory allocation control in shared cache memory systems is disclosed. In one aspect, a cache controller of a shared cache memory system comprising a plurality of cache lines is provided. The cache controller comprises a cache allocation circuit providing a minimum mapping bitmask for mapping a Quality of Service (QoS) class to a minimum partition of the cache lines, and a maximum mapping bitmask for mapping the QoS class to a maximum partition of the cache lines. The cache allocation circuit receives a memory access request comprising a QoS identifier (QoSID) of the QoS class, and is configured to determine whether the memory access request corresponds to a cache line of the plurality of cache lines. If not, the cache allocation circuit selects, as a target partition, the minimum partition mapped to the QoS class or the maximum partition mapped to the QoS class.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: June 13, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Derek Robert Hower, Harold Wade Cain, III
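The minimum/maximum bitmask scheme in the abstract above can be sketched as follows: each QoS class has a bitmask of cache ways it is guaranteed (minimum partition) and a bitmask it may additionally use (maximum partition), and a fill is steered into one of the two. The selection policy and names below are assumptions for illustration.

```python
def select_victim_way(qosid, min_mask, max_mask, occupancy, num_ways=16):
    """Pick a way for a cache fill on behalf of the given QoS class.

    min_mask[qosid] / max_mask[qosid] are bitmasks over the ways: bits set in
    the minimum mask are ways reserved for the class, bits set in the maximum
    mask are ways it may additionally use once its reservation is full."""
    # Prefer the guaranteed minimum partition; fall back to the maximum one.
    if occupancy[qosid] < bin(min_mask[qosid]).count("1"):
        mask = min_mask[qosid]
    else:
        mask = max_mask[qosid]
    allowed = [w for w in range(num_ways) if mask & (1 << w)]
    return allowed[0] if allowed else None          # e.g. lowest allowed way

min_mask = {0: 0b0000_0000_0000_1111, 1: 0b0000_1111_0000_0000}
max_mask = {0: 0b0000_0000_1111_1111, 1: 0b1111_1111_0000_0000}
occupancy = {0: 2, 1: 6}
print(select_victim_way(0, min_mask, max_mask, occupancy))   # 0: stays in its minimum partition
print(select_victim_way(1, min_mask, max_mask, occupancy))   # 8: spills into its maximum partition
```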
  • Publication number: 20170147249
    Abstract: Systems and methods relate to distributed allocation of bandwidth for accessing a shared memory. A memory controller which controls access to the shared memory, receives requests for bandwidth for accessing the shared memory from a plurality of requesting agents. The memory controller includes a saturation monitor to determine a saturation level of the bandwidth for accessing the shared memory. A request rate governor at each requesting agent determines a target request rate for the requesting agent based on the saturation level and a proportional bandwidth share allocated to the requesting agent, the proportional share based on a Quality of Service (QoS) class of the requesting agent.
    Type: Application
    Filed: June 24, 2016
    Publication date: May 25, 2017
    Inventors: Derek Robert HOWER, Harold Wade CAIN, III, Carl Alan WALDSPURGER
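A minimal sketch of the distributed bandwidth control described above: the memory controller's saturation monitor reports a saturation level, and each requesting agent's request rate governor sets its target request rate from that level and its proportional QoS share. The function name, the 0.9 saturation threshold, and the linear back-off policy are illustrative assumptions.

```python
def target_request_rate(saturation, qos_share, peak_rate):
    """Compute an agent's target request rate (requests per time window).

    saturation: memory-bandwidth saturation level in [0.0, 1.0] reported by
    the memory controller's saturation monitor.
    qos_share:  the agent's proportional bandwidth share in [0.0, 1.0],
    derived from its QoS class.
    peak_rate:  the agent's unthrottled request rate."""
    if saturation < 0.9:
        return peak_rate                      # bandwidth not saturated: run free
    # Under saturation, throttle toward the agent's proportional share.
    return max(1.0, peak_rate * qos_share * (1.0 - saturation) * 10.0)

# Two agents sharing saturated memory bandwidth (95% saturation).
print(target_request_rate(saturation=0.95, qos_share=0.75, peak_rate=100))  # 37.5
print(target_request_rate(saturation=0.95, qos_share=0.25, peak_rate=100))  # 12.5
```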
  • Publication number: 20170091117
    Abstract: A cache fill line is received, including an index, a thread identifier, and cache fill line data. The cache is probed, using the index and a different thread identifier, for a potential duplicate cache line. The potential duplicate cache line includes cache line data and the different thread identifier. Upon the cache fill line data matching the cache line data, duplication is identified. The potential duplicate cache line is set as a shared resident cache line, and the thread share permission tag is set to a permission state.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Inventors: Harold Wade CAIN, III, Derek Robert HOWER, Raguram DAMODARAN, Thomas Andrew SARTORIUS
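The deduplication flow in the abstract above (probe the same index under a different thread identifier, compare the data, and on a match mark the resident line as shared) can be sketched like this. The dictionary-based cache and the shared flag standing in for a thread share permission tag are illustrative simplifications.

```python
def install_fill(cache, index, thread_id, fill_data):
    """cache maps (index, thread_id) -> {"data": bytes, "shared": bool}.

    On a fill, probe the same index under other thread identifiers; if a
    resident line holds identical data, mark it as a shared resident line
    instead of storing a private duplicate copy."""
    for (idx, other_thread), line in cache.items():
        if idx == index and other_thread != thread_id and line["data"] == fill_data:
            line["shared"] = True                 # set thread-share permission
            return (idx, other_thread)            # reuse the duplicate line
    cache[(index, thread_id)] = {"data": fill_data, "shared": False}
    return (index, thread_id)

cache = {}
install_fill(cache, index=5, thread_id=0, fill_data=b"lib_code")
owner = install_fill(cache, index=5, thread_id=1, fill_data=b"lib_code")
print(owner, cache[owner]["shared"])   # (5, 0) True: one shared resident line
```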
  • Publication number: 20170060750
    Abstract: Method and apparatus for cache way prediction using a plurality of partial tags are provided. In a cache comprising a plurality of sets, each having a plurality of ways or lines, one of the sets is selected by indexing with the cache-block address, and a plurality of distinct partial tags are identified for the selected set. A determination is made as to whether a partial tag for a new line collides with any of the partial tags for current resident lines in the selected set. If the partial tag for the new line does not collide with any of the partial tags for the current resident lines, then there is no aliasing. If the partial tag for the new line collides with any of the partial tags for the current resident lines, then aliasing may be avoided by reading the full tag array and updating the partial tags.
    Type: Application
    Filed: September 2, 2015
    Publication date: March 2, 2017
    Inventors: Anil KRISHNA, Gregory Michael WRIGHT, Derek Robert HOWER
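The partial-tag collision check in the abstract above can be sketched as follows: a short partial tag is kept for each resident line in the selected set, the new line's partial tag is compared against them, and only a collision forces a read of the full tag array. The 8-bit partial tag width and the low-order-bits hash are assumptions for illustration.

```python
def partial_tag(full_tag, bits=8):
    # Keep only the low-order bits of the full tag as the partial tag.
    return full_tag & ((1 << bits) - 1)

def check_new_line(resident_full_tags, new_full_tag, bits=8):
    """Return 'no_alias' if the new line's partial tag is distinct from every
    resident line's partial tag in the selected set, else 'collision'
    (which would require reading the full tag array to resolve)."""
    new_partial = partial_tag(new_full_tag, bits)
    residents = {partial_tag(t, bits) for t in resident_full_tags}
    return "collision" if new_partial in residents else "no_alias"

resident = [0x1A2B00, 0x33C411, 0x0F0F22]          # full tags of resident lines
print(check_new_line(resident, 0x77EE33))          # no_alias: partial tags differ
print(check_new_line(resident, 0x99AA11))          # collision: low byte matches 0x33C411
```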
  • Publication number: 20160147656
    Abstract: Providing shared cache memory allocation control in shared cache memory systems is disclosed. In one aspect, a cache controller of a shared cache memory system comprising a plurality of cache lines is provided. The cache controller comprises a cache allocation circuit providing a minimum mapping bitmask for mapping a Quality of Service (QoS) class to a minimum partition of the cache lines, and a maximum mapping bitmask for mapping the QoS class to a maximum partition of the cache lines. The cache allocation circuit receives a memory access request comprising a QoS identifier (QoSID) of the QoS class, and is configured to determine whether the memory access request corresponds to a cache line of the plurality of cache lines. If not, the cache allocation circuit selects, as a target partition, the minimum partition mapped to the QoS class or the maximum partition mapped to the QoS class.
    Type: Application
    Filed: September 22, 2015
    Publication date: May 26, 2016
    Inventors: Derek Robert Hower, Harold Wade Cain, III
  • Publication number: 20160147655
    Abstract: Generating approximate usage measurements for shared cache memory systems is disclosed. In one aspect, a cache memory system is provided. The cache memory system comprises a shared cache memory system. A subset of the shared cache memory system comprises a Quality of Service identifier (QoSID) tracking tag configured to store a QoSID tracking indicator for a QoS class. The shared cache memory system further comprises a cache controller configured to receive a memory access request comprising a QoSID, and is configured to access a cache line corresponding to the memory access request. The cache controller is also configured to determine whether the QoSID of the memory access request corresponds to a cache line assigned to the QoSID. If so, the cache controller is additionally configured to update the QoSID tracking tag.
    Type: Application
    Filed: September 22, 2015
    Publication date: May 26, 2016
    Inventors: Derek Robert Hower, Harold Wade Cain, III
  • Publication number: 20150363322
    Abstract: A method and system includes generating a cache entry comprising cache line data for a plurality of cache clients and receiving a cache invalidate instruction from a first of the plurality of cache clients. In response to the cache invalidate instruction, the data valid/invalid state is changed for the first cache client to an invalid state without modifying the data valid/invalid state for the other of the plurality of cache clients from the valid state. A read instruction may be received from a second of the plurality of cache clients and in response to the read instruction, a value stored in the cache line data is returned to the second cache client while the data valid/invalid state for the first cache client is in the invalid state and the data valid/invalid state for the second cache client is in the valid state.
    Type: Application
    Filed: July 21, 2014
    Publication date: December 17, 2015
    Inventors: LEE WILLIAM HOWES, BENEDICT RUEBEN GASTER, DEREK ROBERT HOWER
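The per-client validity scheme in this last entry keeps one copy of the cache line data with a separate valid/invalid state per cache client, so one client's invalidate does not disturb the others' reads. The class and method names below are hypothetical.

```python
class MultiClientCacheLine:
    """One cache line's data shared by several clients, each with its own
    valid/invalid state, so one client's invalidate leaves the rest untouched."""

    def __init__(self, data, clients):
        self.data = data
        self.valid = {c: True for c in clients}   # per-client valid/invalid state

    def invalidate(self, client):
        # Only the requesting client's state flips to invalid.
        self.valid[client] = False

    def read(self, client):
        if not self.valid[client]:
            raise LookupError(f"line invalid for {client}; refetch required")
        return self.data

line = MultiClientCacheLine(data=0xDEADBEEF, clients=["cpu", "gpu"])
line.invalidate("cpu")
print(hex(line.read("gpu")))   # the GPU still reads the value while the CPU's state is invalid
```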