Patents by Inventor Liqun Cheng

Liqun Cheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10303604
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
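The abstract above describes steering infrequently accessed data into a lower level of a cache hierarchy rather than the highest level, so cold data does not evict hot data. The Python sketch below illustrates only the general idea; the class names, the access-count threshold, and the dictionary-based cache model are hypothetical and are not taken from the patent.

```python
from collections import OrderedDict, defaultdict

class MultiLevelCache:
    """Toy cache hierarchy; level 1 is treated as the highest level (closest to the core)."""

    def __init__(self, capacities, target_level, infrequency_threshold):
        # One LRU-ordered dict per cache level, keyed by level number.
        self.levels = {lvl: OrderedDict() for lvl in capacities}
        self.capacities = capacities
        self.target_level = target_level          # where infrequently accessed data may be placed
        self.infrequency_threshold = infrequency_threshold
        self.access_counts = defaultdict(int)     # per-address access history

    def _fill(self, level, addr, value):
        cache = self.levels[level]
        cache[addr] = value
        cache.move_to_end(addr)
        if len(cache) > self.capacities[level]:
            cache.popitem(last=False)             # evict the LRU entry

    def load(self, addr, memory):
        """Serve a request, choosing a fill level based on access frequency."""
        self.access_counts[addr] += 1
        for lvl, cache in self.levels.items():
            if addr in cache:
                cache.move_to_end(addr)
                return cache[addr]                # cache hit
        value = memory[addr]                      # miss: fetch from memory (not a cache)
        if self.access_counts[addr] < self.infrequency_threshold:
            # Infrequently accessed: fill at the target level only, below the top of the hierarchy.
            self._fill(self.target_level, addr, value)
        else:
            self._fill(1, addr, value)            # frequently accessed data goes to the top level
        return value

memory = {addr: addr * 10 for addr in range(32)}
cache = MultiLevelCache(capacities={1: 4, 2: 8, 3: 16},
                        target_level=3, infrequency_threshold=3)
for addr in [1, 2, 1, 1, 7]:
    cache.load(addr, memory)
print(sorted(cache.levels[1]), sorted(cache.levels[3]))   # cold data sits only in level 3
```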
  • Publication number: 20190155658
    Abstract: Methods, systems, and computer storage media storing instructions for managing processing system efficiency. One of the methods includes obtaining data splitting a plurality of general-purpose processing units in a processing system into a high-priority domain and a low-priority domain, wherein the general-purpose processing units in the high-priority domain are assigned to perform one or more tasks comprising one or more high-priority tasks, and the general-purpose processing units in the low-priority domain are assigned to perform one or more low-priority tasks; and during runtime of the processing system, obtaining memory usage measurements that characterize usage of system memory by the high-priority domain and the low-priority domain; and adjusting, based on the memory usage measurements, a configuration of (i) the high-priority domain, (ii) the low-priority domain, or (iii) both to adjust utilization of the system memory by the general-purpose processing units.
    Type: Application
    Filed: November 21, 2018
    Publication date: May 23, 2019
    Inventors: Liqun Cheng, Rama Krishna Govindaraju, Haishan Zhu, David Lo, Parthasarathy Ranganathan, Nishant Patil
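The application above describes splitting general-purpose processing units into a high-priority and a low-priority domain and adjusting their configuration based on runtime memory-usage measurements. The control loop below is a hypothetical reading of that idea; the threshold values, the bandwidth-based measurement, and the function names are illustrative assumptions, not the claimed mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Domain:
    name: str
    cpus: list = field(default_factory=list)

def rebalance(high: Domain, low: Domain, usage: dict,
              pressure_threshold: float = 0.8, relief_threshold: float = 0.5):
    """Shrink the low-priority domain when shared-memory pressure threatens the
    high-priority tasks, and grow it back when pressure subsides."""
    total = usage["high_bandwidth_gbps"] + usage["low_bandwidth_gbps"]
    utilization = total / usage["memory_bandwidth_capacity_gbps"]
    if utilization > pressure_threshold and len(low.cpus) > 1:
        # Reassign one CPU out of the low-priority domain so high-priority
        # work gets more headroom on the shared memory system.
        high.cpus.append(low.cpus.pop())
    elif utilization < relief_threshold and len(high.cpus) > 1:
        # Pressure is low again: give a CPU back to best-effort work.
        low.cpus.append(high.cpus.pop())
    return high, low

high = Domain("high-priority", cpus=[0, 1, 2, 3])
low = Domain("low-priority", cpus=[4, 5, 6, 7])
measurement = {"high_bandwidth_gbps": 60.0,
               "low_bandwidth_gbps": 45.0,
               "memory_bandwidth_capacity_gbps": 120.0}
rebalance(high, low, measurement)
print(high.cpus, low.cpus)   # one CPU has moved out of the low-priority domain
```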
  • Patent number: 10216636
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Grant
    Filed: July 26, 2018
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
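This grant describes requesting data from an input-output device, learning the memory addresses the data will occupy, and having the processor pre-fetch or inject only a subset of those addresses into its cache while the remainder is written to main memory. The model below mimics only that split; the "first few addresses" policy and the dictionary stand-ins for the cache and DRAM are assumptions made for illustration.

```python
def split_addresses(addresses, cache_lines=4):
    """Pick the subset of addresses worth injecting into the CPU cache.
    The policy here is simply 'the first few addresses'; a real system
    might choose the portion the processor will touch first."""
    return set(addresses[:cache_lines])

def receive_io_data(addresses, payload, cpu_cache, main_memory):
    """Place incoming I/O data: the chosen subset is injected into the
    processor cache, everything else goes to main memory."""
    inject = split_addresses(addresses)
    for addr, data in zip(addresses, payload):
        if addr in inject:
            cpu_cache[addr] = data      # hot head of the buffer: inject into cache
        else:
            main_memory[addr] = data    # remainder: ordinary write to DRAM
    return inject

cpu_cache, main_memory = {}, {}
addresses = list(range(0x1000, 0x1000 + 16))
payload = [f"chunk-{i}" for i in range(16)]
receive_io_data(addresses, payload, cpu_cache, main_memory)
print(len(cpu_cache), "lines injected;", len(main_memory), "lines written to main memory")
```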
  • Patent number: 10218779
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for machine level resource distribution are disclosed. In one aspect, a method is implemented in a data processing apparatus, which includes, for each server computer in a set of two or more server computers within a data center, wherein each server computer includes a plurality of processing cores, receiving wear data describing, for each processing core of the server computer, a wear level for the processing core that is indicative of accumulated wear of the processing core, and moderating accumulation of wear in the processing cores based on the wear level of the processing cores from at least two different server computers.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Liqun Cheng, Rama Krishna Govindaraju, Parthasarathy Ranganathan
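The grant above covers collecting per-core wear data across multiple server computers and moderating further wear accumulation based on those wear levels. One plausible, but hypothetical, reading is a scheduler that steers new work toward the least-worn cores in the whole fleet; the sketch below implements that simple policy, with the wear numbers and data structures invented for illustration.

```python
import heapq

def least_worn_cores(wear_by_server, count):
    """Return (server, core) pairs with the lowest accumulated wear across all
    servers in the set, so new work is steered away from heavily worn cores."""
    all_cores = [(wear, server, core)
                 for server, cores in wear_by_server.items()
                 for core, wear in cores.items()]
    return [(server, core) for wear, server, core in heapq.nsmallest(count, all_cores)]

# Hypothetical wear levels (e.g., normalized accumulated stress per core).
wear_by_server = {
    "server-a": {0: 0.91, 1: 0.42, 2: 0.77, 3: 0.30},
    "server-b": {0: 0.15, 1: 0.88, 2: 0.51, 3: 0.64},
}
print(least_worn_cores(wear_by_server, count=3))
# -> the least-worn cores, drawn from at least two different servers
```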
  • Patent number: 10191672
    Abstract: An example method includes during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: January 29, 2019
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin
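The grant above describes offloading memory copies to a copy processor that runs asynchronously while the main processor keeps executing the application, and that selectively notifies the application on completion. Python has no user-space copy engine, so the sketch below stands in with a background thread; the CopyManager name, the callback-based notification, and the bytearray "memory" are all assumptions.

```python
import threading

class CopyManager:
    """User-space facade for an asynchronous copy engine. A worker thread
    plays the role of the separate copy processor."""

    def submit(self, src, dst, length, on_complete=None):
        def _do_copy():
            dst[:length] = src[:length]          # the offloaded copy itself
            if on_complete is not None:
                on_complete()                     # selective completion notification
        worker = threading.Thread(target=_do_copy)
        worker.start()
        return worker                             # handle the caller may join on

done = threading.Event()
src = bytearray(b"x" * 1024)
dst = bytearray(1024)
manager = CopyManager()
handle = manager.submit(src, dst, len(src), on_complete=done.set)
# The "main processor" keeps executing application code here...
handle.join()
print("copy complete:", done.is_set(), dst[:4])
```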
  • Publication number: 20180336137
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Application
    Filed: July 26, 2018
    Publication date: November 22, 2018
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
  • Patent number: 10055350
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Grant
    Filed: November 5, 2014
    Date of Patent: August 21, 2018
    Assignee: Google LLC
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
  • Publication number: 20170153977
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Application
    Filed: February 10, 2017
    Publication date: June 1, 2017
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Publication number: 20170109082
    Abstract: An example method includes during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
    Type: Application
    Filed: October 16, 2015
    Publication date: April 20, 2017
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin
  • Patent number: 9600417
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Grant
    Filed: April 29, 2015
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Patent number: 9594687
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for pre-fetching content. One of the systems includes a pre-fetcher configured to perform operations including determining, for a virtual machine executing on a device and using a first virtual machine physical address associated with the virtual machine, a second virtual machine physical address for data to pre-fetch for the execution of the virtual machine on the device, determining, using the second virtual machine physical address and an address mapping that associates virtual machine physical addresses for the virtual machine with device physical addresses for the device, a device physical address for the data, and requesting the data from a memory using the device physical address.
    Type: Grant
    Filed: April 14, 2015
    Date of Patent: March 14, 2017
    Assignee: Google Inc.
    Inventors: Richard Yoo, Liqun Cheng, Parthasarathy Ranganathan, Rama Krishna Govindaraju
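This grant covers a pre-fetcher that starts from one virtual-machine physical address, derives a second VM physical address to pre-fetch, and translates it through a mapping from VM physical addresses to device physical addresses before issuing the request. In the sketch below, the fixed-stride choice of the second address, the page-granular mapping table, and all names are hypothetical.

```python
PAGE_SIZE = 4096

def translate(vm_phys_addr, page_map):
    """Map a VM 'physical' address to a device physical address using a
    page-granular mapping table (guest frame -> host frame)."""
    page, offset = divmod(vm_phys_addr, PAGE_SIZE)
    return page_map[page] * PAGE_SIZE + offset

def prefetch_next(vm_phys_addr, page_map, memory, stride=64):
    """From the address just accessed, pick the next address to pre-fetch
    (a fixed stride here), translate it, and fetch it from memory."""
    next_vm_addr = vm_phys_addr + stride          # second VM physical address
    device_addr = translate(next_vm_addr, page_map)
    return memory.get(device_addr)                # request data at the device address

page_map = {0: 7, 1: 3}                            # guest page -> host page
memory = {translate(a, page_map): f"line@{a:#x}" for a in range(0, 2 * PAGE_SIZE, 64)}
print(prefetch_next(0x100, page_map, memory))      # pre-fetches the line at VM address 0x140
```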
  • Publication number: 20160321176
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Application
    Filed: April 29, 2015
    Publication date: November 3, 2016
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Publication number: 20160306743
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for pre-fetching content. One of the systems includes a pre-fetcher configured to perform operations including determining, for a virtual machine executing on a device and using a first virtual machine physical address associated with the virtual machine, a second virtual machine physical address for data to pre-fetch for the execution of the virtual machine on the device, determining, using the second virtual machine physical address and an address mapping that associates virtual machine physical addresses for the virtual machine with device physical addresses for the device, a device physical address for the data, and requesting the data from a memory using the device physical address.
    Type: Application
    Filed: April 14, 2015
    Publication date: October 20, 2016
    Inventors: Richard Yoo, Liqun Cheng, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Patent number: 9436258
    Abstract: Methods, systems, and apparatus for dynamic service level objective power control in a datacenter. In one aspect, a method includes determining a current service level value that measures a current performance of the service by a set of processing devices performing the service, the service having an associated service level objective value; and for each processing device: when the current service level value does not meet the service level objective value of the service, generating a first control signal that causes a processing device performing the service to operate at a first power consumption level; and when the current service level value does meet the service level objective value of the service, generating a control signal that causes the processing device to operate at a reduced power consumption level that is less than the first power consumption level.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: September 6, 2016
    Assignee: Google Inc.
    Inventors: David Lo, Liqun Cheng, Rama Krishna Govindaraju
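The grant above describes raising a processing device to a first power level when the measured service level misses the objective and dropping it to a reduced level when the objective is met. The two-level controller below is only a cartoon of that idea; the latency-based service metric and the power-level constants are assumptions.

```python
HIGH_POWER = "full-power"
LOW_POWER = "reduced-power"

def power_control(current_service_level, service_level_objective):
    """Return the power state for devices serving the workload.
    Here the service level is a p99 latency in ms, so lower is better."""
    if current_service_level > service_level_objective:
        # The objective is being missed: run at the first (higher) power level.
        return HIGH_POWER
    # The objective is met: save energy at a reduced power level.
    return LOW_POWER

for measured_p99_ms in (12.0, 8.5):
    print(measured_p99_ms, "->", power_control(measured_p99_ms, service_level_objective=10.0))
```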
  • Publication number: 20150324293
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for pre-fetching data. The methods, systems, and apparatus include actions of providing a request for data to an input-output device and receiving a set of memory addresses for the requested data. Additional actions include determining a subset of the memory addresses, providing a request for a processor to pre-fetch or inject data corresponding to the subset of the memory addresses, and receiving the requested data and the set of memory addresses. Additional actions include determining that the received data includes data for the subset of memory addresses that has been requested to be pre-fetched or injected, storing the data for the subset of memory addresses in a cache of the processor, and storing remaining data of the received data for the memory addresses in a main memory.
    Type: Application
    Filed: November 5, 2014
    Publication date: November 12, 2015
    Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan
  • Patent number: 9015415
    Abstract: An apparatus is described that includes a plurality of processors, a plurality of cache slices and respective cache agents. Each of the cache agents has a buffer to store requests from the processors. The apparatus also includes a network between the processors and the cache slices to carry traffic of transactions that invoke the processors and/or said cache agents. The apparatus also includes communication resources between the processors and the cache agents reserved to transport one or more warnings from one or more of the cache agents to the processors that the one or more cache agents' respective buffers have reached a storage capacity threshold.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: April 21, 2015
    Assignee: Intel Corporation
    Inventors: Ankush Varma, Adrian C. Moga, Liqun Cheng
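This patent describes cache agents, each fronting a cache slice, that warn the processors over reserved communication resources once their request buffers reach a storage capacity threshold, so the processors can throttle before requests are dropped. The queue-plus-callback model below is an invented software stand-in for that hardware mechanism.

```python
from collections import deque

class CacheAgent:
    """Cache agent with a bounded request buffer. When occupancy crosses the
    threshold, it sends a warning to the processors over a reserved channel
    (modeled here as a plain callback)."""

    def __init__(self, name, capacity, threshold, warn_channel):
        self.name = name
        self.buffer = deque()
        self.capacity = capacity
        self.threshold = threshold
        self.warn_channel = warn_channel

    def enqueue(self, request):
        if len(self.buffer) >= self.capacity:
            return False                      # buffer full: request rejected
        self.buffer.append(request)
        if len(self.buffer) >= self.threshold:
            self.warn_channel(self.name, len(self.buffer))
        return True

def warn_processors(agent_name, occupancy):
    print(f"warning from {agent_name}: buffer at {occupancy} entries, please throttle")

agent = CacheAgent("slice-0", capacity=8, threshold=6, warn_channel=warn_processors)
for i in range(7):
    agent.enqueue(f"req-{i}")   # a warning fires whenever occupancy is at or above the threshold
```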
  • Patent number: 8412885
    Abstract: In an embodiment of the present invention, a method includes: sending a request for data to a memory controller; arranging the request for data by order of importance or priority; identifying a source of the request for data; if the source is an input/output device, masking off P ways in a cache; and allocating ways in filling the cache. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 12, 2009
    Date of Patent: April 2, 2013
    Assignee: Intel Corporation
    Inventors: Liqun Cheng, Zhen Fang, Jeffrey Wilder, Sadagopan Srinivasan, Ravishankar Iyer, Donald Newell
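The grant above describes identifying whether a memory request came from an input/output device and, if so, masking off P of the cache's ways so that I/O fills can only allocate into the remaining ways. The set-associative model and the specific mask choice below are illustrative assumptions, not the patented design.

```python
import random

class SetAssociativeCache:
    """One cache set with NUM_WAYS ways. Fills from I/O sources may only
    allocate into ways whose index is not masked off."""

    NUM_WAYS = 8
    MASKED_IO_WAYS = 4      # P ways reserved for CPU data, off-limits to I/O fills

    def __init__(self):
        self.ways = [None] * self.NUM_WAYS   # each entry holds an address or None

    def allocate(self, addr, source):
        if source == "io":
            candidates = range(self.MASKED_IO_WAYS, self.NUM_WAYS)
        else:
            candidates = range(self.NUM_WAYS)
        # Prefer an empty way; otherwise replace a random candidate way.
        empty = [w for w in candidates if self.ways[w] is None]
        victim = empty[0] if empty else random.choice(list(candidates))
        self.ways[victim] = addr
        return victim

cache_set = SetAssociativeCache()
print("cpu fill way:", cache_set.allocate(0xAA00, source="cpu"))
print("io fill way:", cache_set.allocate(0xBB00, source="io"))   # lands in ways 4..7 only
```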
  • Publication number: 20120079186
    Abstract: An apparatus is described that includes a plurality of processors, a plurality of cache slices and respective cache agents. Each of the cache agents has a buffer to store requests from the processors. The apparatus also includes a network between the processors and the cache slices to carry traffic of transactions that invoke the processors and/or said cache agents. The apparatus also includes communication resources between the processors and the cache agents reserved to transport one or more warnings from one or more of the cache agents to the processors that the one or more cache agents' respective buffers have reached a storage capacity threshold.
    Type: Application
    Filed: September 24, 2010
    Publication date: March 29, 2012
    Inventors: Ankush Varma, Adrian C. Moga, Liqun Cheng
  • Patent number: 8131944
    Abstract: In one embodiment, the present invention includes a method for receiving a cache coherency message in an interconnect router from a caching agent, mapping the message to a criticality level according to a predetermined mapping, and appending the criticality level to each flow control unit of the message, which can be transmitted from the interconnect router based at least in part on the criticality level. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: March 6, 2012
    Assignee: Intel Corporation
    Inventors: Zhen Fang, Liqun Cheng, Sriram R. Vangal
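This patent covers an interconnect router that maps an incoming cache-coherency message to a criticality level via a predetermined mapping, appends that level to every flow control unit (flit) of the message, and lets the level influence how the flits are transmitted. The mapping table, flit format, and priority arbitration below are invented to illustrate the idea, not the claimed design.

```python
from dataclasses import dataclass

# Hypothetical predetermined mapping from coherence message class to criticality level.
CRITICALITY = {"snoop_response": 2, "writeback": 1, "prefetch_hint": 0}

@dataclass
class Flit:
    message_id: int
    payload: bytes
    criticality: int = 0

def packetize(message_id, msg_class, data, flit_bytes=8):
    """Split a coherence message into flits and append the criticality level
    (from the predetermined mapping) to every flit."""
    level = CRITICALITY[msg_class]
    return [Flit(message_id, data[i:i + flit_bytes], level)
            for i in range(0, len(data), flit_bytes)]

def arbitrate(flit_queue):
    """Transmit higher-criticality flits first (a simple priority policy)."""
    return sorted(flit_queue, key=lambda f: f.criticality, reverse=True)

queue = (packetize(1, "prefetch_hint", b"A" * 16)
         + packetize(2, "snoop_response", b"B" * 16))
for flit in arbitrate(queue):
    print(flit.message_id, flit.criticality, flit.payload)
```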
  • Publication number: 20110113198
    Abstract: The present invention discloses a method comprising: sending a request for data to a memory controller; arranging the request for data by order of importance or priority; identifying a source of the request for data; if the source is an input/output device, masking off P ways in a cache; and allocating ways in filling the cache. The method further includes extending cache allocation logic to control a tag comparison operation by using a bit to provide a hint from IO devices that certain ways will not have requested data.
    Type: Application
    Filed: November 12, 2009
    Publication date: May 12, 2011
    Inventors: Liqun Cheng, Zhen Fang, Jeffrey Wilder, Sadagopan Srinivasan, Ravishankar Iyer, Donald Newell