Patents by Inventor David Scott Ray
David Scott Ray has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11550723
Abstract: An apparatus, method, and system for memory bandwidth aware data prefetching is presented. The method may comprise monitoring a number of request responses received in an interval at a current prefetch request generation rate, comparing the number of request responses received in the interval to at least a first threshold, and adjusting the current prefetch request generation rate to an updated prefetch request generation rate by selecting the updated prefetch request generation rate from a plurality of prefetch request generation rates, based on the comparison. The request responses may be NACK or RETRY responses. The method may further comprise either retaining a current prefetch request generation rate or selecting a maximum prefetch request generation rate as the updated prefetch request generation rate in response to an indication that prefetching is accurate.
Type: Grant
Filed: August 27, 2018
Date of Patent: January 10, 2023
Assignee: Qualcomm Incorporated
Inventors: Niket Choudhary, David Scott Ray, Thomas Philip Speier, Eric Robinson, Harold Wade Cain, III, Nikhil Narendradev Sharma, Joseph Gerald McDonald, Brian Michael Stempel, Garrett Michael Drapala
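The rate-adjustment loop in this abstract can be illustrated with a small sketch. Everything here is an assumption for illustration: the rate table, the threshold values, and the function names are invented, not taken from the patent; only the overall shape (count NACK/RETRY responses per interval, compare against thresholds, pick a new rate from a fixed set, and keep or maximize the rate when prefetching is known accurate) follows the abstract.

```python
# Hypothetical rate table: prefetch requests per interval, lowest to highest.
RATES = [0, 1, 2, 4, 8]

def update_rate(current_idx, nack_retry_count, low=2, high=8, accurate=False):
    """Return the index of the updated prefetch request generation rate.

    nack_retry_count: NACK/RETRY responses observed in the last interval.
    low/high: illustrative thresholds, not values from the patent.
    """
    if accurate:
        # Prefetching known accurate: select the maximum generation rate.
        return len(RATES) - 1
    if nack_retry_count >= high:
        # Heavy memory-system back-pressure: throttle down one step.
        return max(current_idx - 1, 0)
    if nack_retry_count <= low:
        # Little back-pressure: ramp up one step.
        return min(current_idx + 1, len(RATES) - 1)
    # In between the thresholds: retain the current rate.
    return current_idx
```

A caller would run this once per monitoring interval and use `RATES[idx]` as the generation rate for the next interval.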
-
Patent number: 11061822
Abstract: A method, apparatus, and system for reducing pipeline stalls due to address translation misses is presented. An apparatus comprises a memory access instruction pipeline, a translation lookaside buffer coupled to the memory access instruction pipeline, and a TLB miss queue coupled to both the TLB and the memory access instruction pipeline. The TLB miss queue is configured to selectively store a first memory access instruction that has been removed from the memory access instruction pipeline as a result of the first memory access instruction missing in the TLB along with information associated with the first memory access instruction. The TLB miss queue is further configured to reintroduce the first memory access instruction to the memory access instruction pipeline associated with a return of an address translation related to the first memory access instruction.
Type: Grant
Filed: August 27, 2018
Date of Patent: July 13, 2021
Assignee: Qualcomm Incorporated
Inventors: Pritha Ghoshal, Niket Choudhary, Ravi Rajagopalan, Patrick Eibl, Brian Stempel, David Scott Ray, Thomas Philip Speier
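A toy model can make the TLB-miss-queue flow concrete: an instruction that misses in the TLB is removed from the pipeline and parked, then reintroduced when its translation returns. The class, its fields, and the page-table representation below are all invented for the sketch; the real design operates on hardware structures, not Python dicts.

```python
class TlbMissQueue:
    """Illustrative model of parking TLB-missing instructions off-pipeline."""

    def __init__(self, tlb):
        self.tlb = tlb        # assumed TLB model: virtual page -> physical page
        self.parked = {}      # virtual page -> instructions awaiting translation

    def issue(self, insn, vpage, pipeline):
        if vpage in self.tlb:
            # TLB hit: the instruction proceeds with its translation.
            pipeline.append((insn, self.tlb[vpage]))
        else:
            # TLB miss: park the instruction instead of stalling the pipeline.
            self.parked.setdefault(vpage, []).append(insn)

    def fill(self, vpage, ppage, pipeline):
        """Translation returned: reintroduce any parked instructions."""
        self.tlb[vpage] = ppage
        for insn in self.parked.pop(vpage, []):
            pipeline.append((insn, ppage))
```

The key property the abstract claims is visible here: a younger instruction that hits in the TLB is not blocked behind an older one that missed.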
-
Publication number: 20200175560
Abstract: A system, method, computer program product, and mobile receiving station for receiving and processing leaf tobacco at a location of a tobacco farmer, including processing the tobacco leaf at the location, including means for weighing and grading of the received tobacco leaf and, optionally, determining moisture content and transmitting information relating to the optionally determined moisture content, the weight, and the grade of the received tobacco leaf to a tobacco product manufacturing facility over a communications link.
Type: Application
Filed: February 10, 2020
Publication date: June 4, 2020
Applicant: Altria Client Services LLC
Inventors: Hal L. Teegarden, David Scott Ray, John Stewart Livesay, Isidro Gomez
-
Publication number: 20200065247
Abstract: An apparatus, method, and system for memory bandwidth aware data prefetching is presented. The method may comprise monitoring a number of request responses received in an interval at a current prefetch request generation rate, comparing the number of request responses received in the interval to at least a first threshold, and adjusting the current prefetch request generation rate to an updated prefetch request generation rate by selecting the updated prefetch request generation rate from a plurality of prefetch request generation rates, based on the comparison. The request responses may be NACK or RETRY responses. The method may further comprise either retaining a current prefetch request generation rate or selecting a maximum prefetch request generation rate as the updated prefetch request generation rate in response to an indication that prefetching is accurate.
Type: Application
Filed: August 27, 2018
Publication date: February 27, 2020
Inventors: Niket CHOUDHARY, David Scott RAY, Thomas Philip SPEIER, Eric ROBINSON, Harold Wade CAIN, III, Nikhil Narendradev SHARMA, Joseph Gerald MCDONALD, Brian Michael STEMPEL, Garrett Michael DRAPALA
-
Publication number: 20200065260
Abstract: A method, apparatus, and system for reducing pipeline stalls due to address translation misses is presented. An apparatus comprises a memory access instruction pipeline, a translation lookaside buffer coupled to the memory access instruction pipeline, and a TLB miss queue coupled to both the TLB and the memory access instruction pipeline. The TLB miss queue is configured to selectively store a first memory access instruction that has been removed from the memory access instruction pipeline as a result of the first memory access instruction missing in the TLB along with information associated with the first memory access instruction. The TLB miss queue is further configured to reintroduce the first memory access instruction to the memory access instruction pipeline associated with a return of an address translation related to the first memory access instruction.
Type: Application
Filed: August 27, 2018
Publication date: February 27, 2020
Inventors: Pritha GHOSHAL, Niket CHOUDHARY, Ravi RAJAGOPALAN, Patrick EIBL, Brian STEMPEL, David Scott RAY, Thomas Philip SPEIER
-
Patent number: 10572920
Abstract: A system, method, computer program product, and mobile receiving station for receiving and processing leaf tobacco at a location of a tobacco farmer, including processing the tobacco leaf at the location, including means for weighing and grading of the received tobacco leaf and, optionally, determining moisture content and transmitting information relating to the optionally determined moisture content, the weight, and the grade of the received tobacco leaf to a tobacco product manufacturing facility over a communications link.
Type: Grant
Filed: December 18, 2014
Date of Patent: February 25, 2020
Assignee: PHILIP MORRIS USA INC.
Inventors: Hal L. Teegarden, David Scott Ray, John Stewart Livesay, Isidro Gomez
-
Publication number: 20190370176
Abstract: Adaptively predicting usefulness of prefetches generated by hardware prefetch engines of processor-based devices is disclosed. In this regard, a processor-based device provides a hardware prefetch engine including a sampler circuit and a predictor circuit. The sampler circuit stores data related to demand requests and prefetch requests directed to memory addresses corresponding to a subset of sets of a cache of the processor-based device. The predictor circuit includes a plurality of confidence counters that correspond to the memory addresses tracked by the sampler circuit, and that indicate a level of confidence in the usefulness of the corresponding memory addresses. The confidence counters provided by the predictor circuit are trained in response to demand request hits and misses (and, in some aspects, prefetch misses) on the memory addresses tracked by the sampler circuit. The predictor circuit may then use the confidence counters to generate usefulness predictions for subsequent prefetch requests.
Type: Application
Filed: June 1, 2018
Publication date: December 5, 2019
Inventors: Shivam Priyadarshi, Niket Choudhary, David Scott Ray, Thomas Philip Speier
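The confidence-counter training described here can be sketched in a few lines. The counter width, threshold, default for unseen addresses, and class name below are all assumptions made for the sketch; the abstract specifies only that saturating-style counters per sampled address are trained on demand hits and misses and then consulted to predict prefetch usefulness.

```python
class UsefulnessPredictor:
    """Illustrative predictor: one saturating counter per sampled address."""

    def __init__(self, max_count=7, threshold=4):
        self.counters = {}            # sampled address -> confidence counter
        self.max_count = max_count    # saturation ceiling (assumed 3-bit)
        self.threshold = threshold    # assumed "useful" cutoff

    def train(self, addr, useful):
        """Bump confidence on a useful event (e.g. demand hit), decay otherwise."""
        c = self.counters.get(addr, 0)
        self.counters[addr] = (
            min(c + 1, self.max_count) if useful else max(c - 1, 0)
        )

    def predict_useful(self, addr):
        # Untracked addresses default to "useful" so new streams are not starved.
        return self.counters.get(addr, self.threshold) >= self.threshold
```

A prefetch engine would consult `predict_useful` before issuing a prefetch for a candidate address and drop low-confidence requests.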
-
Publication number: 20150178802
Abstract: A system, method, computer program product, and mobile receiving station for receiving and processing leaf tobacco at a location of a tobacco farmer, including processing the tobacco leaf at the location, including means for weighing and grading of the received tobacco leaf and, optionally, determining moisture content and transmitting information relating to the optionally determined moisture content, the weight, and the grade of the received tobacco leaf to a tobacco product manufacturing facility over a communications link.
Type: Application
Filed: December 18, 2014
Publication date: June 25, 2015
Inventors: Hal L. TEEGARDEN, David Scott RAY, John Stewart LIVESAY, Isidro GOMEZ
-
Patent number: 8156287
Abstract: A data processing system includes a processor, a unit that includes a multi-level cache, a prefetch system and a memory. The data processing system can operate in a first mode and a second mode. The prefetch system can change behavior in response to a desired power consumption policy set by an external agent or automatically via hardware based on on-chip power/performance thresholds.
Type: Grant
Filed: January 15, 2009
Date of Patent: April 10, 2012
Assignee: International Business Machines Corporation
Inventors: Pradip Bose, Alper Buyuktosunoglu, Miles Robert Dooley, Michael Stephen Floyd, David Scott Ray, Bruce Joseph Ronchetti
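One way the "change behavior" in this abstract could play out is by scaling prefetch aggressiveness with the active power policy. The sketch below is purely illustrative: the mode names, the degree values, and the single power-threshold check are invented, and a real implementation would act in hardware on richer power/performance telemetry.

```python
def prefetch_degree(mode, on_chip_power, power_budget):
    """Choose how many cache lines ahead to prefetch for the current policy.

    mode: policy set by an external agent ("performance" or "low-power").
    on_chip_power / power_budget: stand-ins for on-chip threshold checks.
    """
    if mode == "low-power" or on_chip_power > power_budget:
        # Conservative prefetching to save power, whether the policy was
        # requested externally or triggered by an on-chip threshold.
        return 1
    # Aggressive prefetching when operating in the performance mode.
    return 4
```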
-
Patent number: 8082423
Abstract: A method and apparatus are provided for detecting and handling an instruction flush in a microprocessor system. A flush mechanism is provided that is distributed across all of the execution units in a data processing system. The flush mechanism does not require a central collection point to re-distribute the flush signals to the execution units. Each unit generates a flush vector to all other execution units which is used to block register updates for the flushed instructions.
Type: Grant
Filed: August 11, 2005
Date of Patent: December 20, 2011
Assignee: International Business Machines Corporation
Inventors: Christopher Michael Abernathy, Kurt Alan Feiste, David Scott Ray, David Shippy, Albert James Van Norstrand, Jr.
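The distributed-flush idea can be modeled with a toy broadcast: each unit sends its flush vector (here, a set of instruction tags) directly to every peer, and each peer blocks register updates for flushed tags, with no central collection point. The class and function names, the tag representation, and the write-back interface are all invented for the sketch.

```python
class ExecUnit:
    """Toy execution unit that blocks register updates for flushed tags."""

    def __init__(self, name):
        self.name = name
        self.flushed = set()      # instruction tags whose updates are blocked

    def receive_flush(self, flush_vector):
        self.flushed |= flush_vector

    def writeback(self, tag, regfile, reg, value):
        if tag in self.flushed:
            return False          # blocked: this instruction was flushed
        regfile[reg] = value
        return True

def broadcast_flush(source, units, flush_vector):
    """Point-to-point broadcast to every other unit, with no central hub."""
    for unit in units:
        if unit is not source:
            unit.receive_flush(flush_vector)
```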
-
Patent number: 7953960
Abstract: A pipeline processor has circuits to detect the presence of a register access instruction in an issue stage of the pipeline. A load-miss occurring at a later stage may cause the register access instruction to be marked with an associated bit. The register access instruction progresses down the pipeline and when the flush stage is reached, the processor checks the associated bit and flushes the register access instruction.
Type: Grant
Filed: October 18, 2005
Date of Patent: May 31, 2011
Assignee: International Business Machines Corporation
Inventors: Kurt Alan Feiste, David Scott Ray, David Shippy, Albert James Van Norstrand, Jr.
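The mark-then-flush flow here is simple enough to render directly: a bit is set on a register access instruction when a downstream load miss is detected, and the flush stage later checks that bit. The dictionary representation and field names below are invented stand-ins for the hardware bit described in the abstract.

```python
def mark_on_load_miss(insn, load_missed):
    """Set the flush bit on a register access instruction when a load misses."""
    if insn["is_reg_access"] and load_missed:
        insn["flush_bit"] = True
    return insn

def flush_stage(insn):
    """True if the instruction must be flushed when it reaches the flush stage."""
    return insn.get("flush_bit", False)
```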
-
Patent number: 7769985
Abstract: The present invention provides for a method for a load address dependency mechanism in a high frequency, low power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the memory address is identified. The load instruction is stored in a miss queue. And the load instruction is tagged as a local miss.
Type: Grant
Filed: February 4, 2008
Date of Patent: August 3, 2010
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai
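The dependency check in this abstract reduces to a small decision: an incoming load whose address matches an older, unexecuted instruction is stored in a miss queue and tagged as a local miss rather than executed. The function signature, the set-of-pending-addresses representation, and the entry format are invented for the sketch.

```python
def handle_load(load_addr, pending_addrs, miss_queue):
    """Decide whether an incoming load can execute or must wait.

    pending_addrs: addresses of unexecuted preceding instructions (assumed set).
    miss_queue: list standing in for the hardware miss queue.
    """
    if load_addr in pending_addrs:
        # An older instruction to the same address is still in flight:
        # park the load in the miss queue and tag it as a local miss.
        miss_queue.append({"addr": load_addr, "local_miss": True})
        return "local-miss"
    return "execute"
```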
-
Publication number: 20100180081
Abstract: A data processing system includes a processor, a unit that includes a multi-level cache, a prefetch system and a memory. The data processing system can operate in a first mode and a second mode. The prefetch system can change behavior in response to a desired power consumption policy set by an external agent or automatically via hardware based on on-chip power/performance thresholds.
Type: Application
Filed: January 15, 2009
Publication date: July 15, 2010
Inventors: Pradip Bose, Alper Buyuktosunoglu, Miles Robert Dooley, Michael Stephen Floyd, David Scott Ray, Bruce Joseph Ronchetti
-
Patent number: 7730290
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Grant
Filed: February 25, 2008
Date of Patent: June 1, 2010
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
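The ordering rule this abstract describes, that even a cache hit is set aside when an earlier load to the same address is already queued, can be sketched as follows. The function shape, the cache-as-dict model, and the queue-of-addresses representation are assumptions for illustration only.

```python
def execute_load(addr, cache, load_queue):
    """Model the hit-ignore rule: queue behind an earlier same-address load.

    cache: assumed model, address -> data. load_queue: addresses of queued loads.
    """
    hit = addr in cache
    if addr in load_queue:
        # A previous load to the same address is queued: ignore the cache
        # hit signal and queue this load behind it to preserve ordering.
        load_queue.append(addr)
        return "queued"
    if hit:
        return cache[addr]       # no conflict: the hit completes normally
    load_queue.append(addr)      # ordinary miss: wait in the queue
    return "queued"
```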
-
Publication number: 20090234709
Abstract: A system, method, computer program product, and mobile receiving station for receiving and processing leaf tobacco at a location of a tobacco farmer, including processing the tobacco leaf at the location, including means for weighing and grading of the received tobacco leaf and, optionally, determining moisture content and transmitting information relating to the optionally determined moisture content, the weight, and the grade of the received tobacco leaf to a tobacco product manufacturing facility over a communications link.
Type: Application
Filed: November 20, 2008
Publication date: September 17, 2009
Applicant: PHILIP MORRIS USA INC.
Inventors: Hal L. TEEGARDEN, David Scott RAY, John Stewart LIVESAY, Isidro GOMEZ, II
-
Patent number: 7464242
Abstract: A method, an apparatus, and a computer program product are provided for detecting load/store dependency in a memory system by dynamically changing the address width for comparison. An incoming load/store operation must be compared to the operations in the pipeline and the queues to avoid address conflicts. Overall, the present invention introduces a cache hit or cache miss input into the load/store dependency logic. If the incoming load operation is a cache hit, then the quadword boundary address value is used for detection. If the incoming load operation is a cache miss, then the cacheline boundary address value is used for detection. This invention enhances the performance of LHS and LHR operations in a memory system.
Type: Grant
Filed: February 3, 2005
Date of Patent: December 9, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Dwain Alan Hicks, Takeki Osanai, David Scott Ray
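The variable-width compare is easy to show numerically: on a cache hit the match is done at quadword granularity, on a miss at cacheline granularity, so a miss conflicts with a wider range of in-flight addresses. The 16-byte quadword and 128-byte cacheline sizes below are typical values assumed for the sketch, not taken from the patent.

```python
QUADWORD = 16      # assumed quadword size in bytes
CACHELINE = 128    # assumed cacheline size in bytes

def addresses_conflict(incoming, in_flight, cache_hit):
    """Compare two addresses at a granularity chosen by the hit/miss outcome."""
    width = QUADWORD if cache_hit else CACHELINE
    # Two addresses conflict when they fall in the same aligned block.
    return (incoming // width) == (in_flight // width)
```

Using the narrower quadword compare on hits avoids falsely serializing loads that merely share a cacheline, which is the performance gain the abstract claims for load-hit-store (LHS) and load-hit-reload (LHR) handling.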
-
Publication number: 20080148017
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Application
Filed: February 25, 2008
Publication date: June 19, 2008
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
-
Publication number: 20080141014
Abstract: The present invention provides for a method for a load address dependency mechanism in a high frequency, low power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the memory address is identified. The load instruction is stored in a miss queue. And the load instruction is tagged as a local miss.
Type: Application
Filed: February 4, 2008
Publication date: June 12, 2008
Inventors: Brian David Barrick, Kimberly Marie Fensler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai
-
Patent number: 7376816
Abstract: A method is disclosed for executing a load instruction. Address information of the load instruction is used to generate an address of needed data, and the address is used to search a cache memory for the needed data. If the needed data is found in the cache memory, a cache hit signal is generated. At least a portion of the address is used to search a queue for a previous load instruction specifying the same address. If a previous load instruction specifying the same address is found, the cache hit signal is ignored and the load instruction is stored in the queue. A load/store unit, and a processor implementing the method, are also described.
Type: Grant
Filed: November 12, 2004
Date of Patent: May 20, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fernsler, Dwain A. Hicks, Takeki Osanai, David Scott Ray
-
Patent number: 7363468
Abstract: The present invention provides for a method for a load address dependency mechanism in a high frequency, low power processor. A load instruction corresponding to a memory address is received. At least one unexecuted preceding instruction corresponding to the memory address is identified. The load instruction is stored in a miss queue. And the load instruction is tagged as a local miss.
Type: Grant
Filed: November 18, 2004
Date of Patent: April 22, 2008
Assignee: International Business Machines Corporation
Inventors: Brian David Barrick, Kimberly Marie Fensler, Dwain Alan Hicks, David Scott Ray, David Shippy, Takeki Osanai