Patents by Inventor Liqun Cheng
Liqun Cheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11960936
Abstract: The subject matter described herein provides systems and techniques to address the challenges of growing hardware and workload heterogeneity using a Warehouse-Scale Computer (WSC) design that improves the efficiency and utilization of WSCs. The WSC design may include an abstraction layer and an efficiency layer in the software stack of the WSC. The abstraction layer and the efficiency layer may be designed to improve job scheduling, simplify resource management, and drive hardware-software co-optimization using machine learning techniques and automation in order to customize the WSC for applications at scale. The abstraction layer may embrace platform/hardware and workload diversity through greater coordination between hardware and higher layers of the WSC software stack in the WSC design. The efficiency layer may employ machine learning techniques at scale to realize hardware/software co-optimizations as a part of the autonomous WSC design.
Type: Grant
Filed: January 15, 2021
Date of Patent: April 16, 2024
Assignee: Google LLC
Inventors: David Lo, Liqun Cheng, Parthasarathy Ranganathan, Sundar Jayakumar Dev
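One way to picture the abstraction layer described above is as a shim that normalizes heterogeneous platforms into common capacity units so the scheduler above it stays platform-agnostic. The sketch below is purely illustrative and not the patented design; the platform names and speedup factors are assumptions.

```python
# Hypothetical speedup factors normalizing heterogeneous CPU platforms
# to a common capacity unit (all values assumed for illustration).
PLATFORM_SPEEDUP = {"skylake": 1.0, "milan": 1.35, "arm-n1": 1.1}

def normalized_capacity(platform, physical_cores):
    """Express a machine's free cores in platform-independent units."""
    return physical_cores * PLATFORM_SPEEDUP[platform]

def place_job(job_units, machines):
    """Best-fit placement: pick the feasible machine with the least
    normalized free capacity, leaving larger machines for larger jobs."""
    feasible = [m for m in machines
                if normalized_capacity(m["platform"], m["free_cores"]) >= job_units]
    if not feasible:
        return None
    return min(feasible,
               key=lambda m: normalized_capacity(m["platform"], m["free_cores"]))
```

A scheduler written against `normalized_capacity` never needs to know which platform generation a machine belongs to, which is the kind of decoupling the abstract attributes to the abstraction layer.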
-
Publication number: 20240037373
Abstract: Aspects of the disclosure are directed to jointly searching machine learning model architectures and hardware architectures in a combined space of models, hardware, and mapping strategies. A search strategy is utilized where all models, hardware, and mappings are evaluated together at once via weight sharing and a supernetwork. A multi-objective reward function is utilized with objectives for quality, performance, power, and area.
Type: Application
Filed: July 28, 2022
Publication date: February 1, 2024
Inventors: Sheng Li, Norman Paul Jouppi, Garrett Axel Andersen, Quoc V. Le, Liqun Cheng, Parthasarathy Ranganathan
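A multi-objective reward of the kind the abstract mentions is often built by scaling a quality score with soft-constraint penalty factors for each hardware objective. The following is a minimal sketch under assumed targets and exponents, not the reward function from the publication.

```python
def combined_reward(quality, latency_ms, power_w, area_mm2,
                    latency_target=5.0, power_target=10.0, area_target=100.0,
                    w_latency=-0.07, w_power=-0.05, w_area=-0.05):
    """Multiplicative soft-constraint reward: quality is scaled down by
    (value / target) ** w for each objective, with w < 0 so exceeding a
    target shrinks the reward. All constants here are assumptions."""
    r = quality
    for value, target, w in ((latency_ms, latency_target, w_latency),
                             (power_w, power_target, w_power),
                             (area_mm2, area_target, w_area)):
        r *= (value / target) ** w
    return r
```

Because the penalties are multiplicative, a single scalar can drive a search over models, hardware, and mappings at once; the exponents set how sharply each objective trades off against quality.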
-
Publication number: 20230297580
Abstract: According to various implementations, generally disclosed herein is a hybrid and hierarchical neural architecture search (NAS) approach. The approach includes performing a search space partitioning scheme to divide the search space into sub-search spaces. The approach further includes performing a first type of NAS, such as a Multi-trial NAS, to cover a search across the sub-search spaces. The approach also includes performing a second type of NAS, such as a One-Shot NAS, to cover each sub-search space. The approach further includes automatically stopping the second type of NAS based on one or more early stopping criteria.
Type: Application
Filed: April 15, 2022
Publication date: September 21, 2023
Inventors: Sheng Li, Garrett Axel Andersen, Norman Paul Jouppi, Quoc V. Le, Liqun Cheng, Parthasarathy Ranganathan, Julian Paul Grady, Yang Li, Martin Wicke, Yifeng Lu, Yun Ni, Kun Wang
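The control flow the abstract outlines — partition the space, run a multi-trial loop across sub-spaces, run a cheaper search within each, and stop early — can be sketched roughly as below. This is a toy skeleton under assumed interfaces (the `one_shot_eval` callable stands in for an entire one-shot NAS), not the patented procedure.

```python
import random

def hybrid_nas(search_space, num_partitions, multi_trial_budget,
               one_shot_eval, patience=3):
    """Toy hybrid/hierarchical search: multi-trial sampling across
    sub-search-spaces, a stand-in 'one-shot' evaluation within each,
    and early stopping after `patience` trials without improvement."""
    random.seed(0)
    # Partitioning scheme (assumed): round-robin split into sub-spaces.
    partitions = [search_space[i::num_partitions] for i in range(num_partitions)]
    best, best_score, stale = None, float("-inf"), 0
    for _ in range(multi_trial_budget):
        sub = random.choice(partitions)        # multi-trial step across sub-spaces
        cand = max(sub, key=one_shot_eval)     # 'one-shot' search inside a sub-space
        score = one_shot_eval(cand)
        if score > best_score:
            best, best_score, stale = cand, score, 0
        else:
            stale += 1
            if stale >= patience:              # early-stopping criterion
                break
    return best, best_score
```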
-
Patent number: 11704158
Abstract: Methods, systems, and computer storage media storing instructions for managing processing system efficiency. One of the methods includes obtaining data splitting a plurality of general-purpose processing units in a processing system into a high-priority domain and a low-priority domain, wherein the general-purpose processing units in the high-priority domain are assigned to perform one or more tasks comprising one or more high-priority tasks, and the general-purpose processing units in the low-priority domain are assigned to perform one or more low-priority tasks; and during runtime of the processing system, obtaining memory usage measurements that characterize usage of system memory by the high-priority domain and the low-priority domain; and adjusting, based on the memory usage measurements, a configuration of (i) the high-priority domain, (ii) the low-priority domain, or (iii) both to adjust utilization of the system memory by the general-purpose processing units.
Type: Grant
Filed: January 29, 2021
Date of Patent: July 18, 2023
Assignee: Google LLC
Inventors: Liqun Cheng, Rama Krishna Govindaraju, Haishan Zhu, David Lo, Parthasarathy Ranganathan, Nishant Patil
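The runtime adjustment the abstract describes is, in spirit, a feedback controller: measure memory pressure from the high-priority domain, then resize or throttle the low-priority domain. A minimal single-step sketch, with all thresholds assumed for illustration:

```python
def adjust_low_priority_cores(low_cores, hp_mem_bw_util,
                              hi_mark=0.85, lo_mark=0.50,
                              min_cores=1, max_cores=16):
    """One step of a hypothetical feedback controller: shrink the
    low-priority domain when the high-priority domain's memory
    bandwidth utilization is high, and grow it back when pressure
    subsides. Returns the new low-priority core count."""
    if hp_mem_bw_util > hi_mark:
        return max(min_cores, low_cores - 1)   # relieve memory pressure
    if hp_mem_bw_util < lo_mark:
        return min(max_cores, low_cores + 1)   # reclaim idle capacity
    return low_cores                           # hysteresis band: no change
```

The gap between `lo_mark` and `hi_mark` provides hysteresis so the controller does not oscillate when utilization hovers near a single threshold.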
-
Publication number: 20230222000
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scheduling operations represented as a computational graph on a distributed computing network. A method includes: receiving data representing operations to be executed in order to perform a job on a plurality of hardware accelerators of a plurality of different accelerator types; generating, for the job and from at least the data representing the operations, features that represent a predicted performance for the job on hardware accelerators of the plurality of different accelerator types; generating, from the features, a respective predicted performance metric for the job for each of the plurality of different accelerator types according to a performance objective function; and providing, to a scheduling system, one or more recommendations for scheduling the job on one or more recommended types of hardware accelerators.
Type: Application
Filed: December 30, 2022
Publication date: July 13, 2023
Inventors: Sheng Li, Brian Zhang, Liqun Cheng, Norman Paul Jouppi, Yun Ni
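The recommendation step in the abstract reduces to: score the job once per accelerator type with a per-type performance predictor, then return the top-scoring types. A minimal sketch with stand-in predictors (the accelerator names and scoring functions below are assumptions):

```python
def recommend_accelerators(job_features, predictors, top_k=1):
    """Rank accelerator types for a job.

    predictors: {accelerator_type: callable(features) -> predicted metric},
    where a higher metric is better under the performance objective.
    Returns the top_k accelerator types by predicted metric."""
    scored = sorted(((fn(job_features), t) for t, fn in predictors.items()),
                    reverse=True)
    return [t for _, t in scored[:top_k]]
```

In a real system the predictors would be learned models over the job's computational-graph features; here simple callables keep the ranking logic visible.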
-
Publication number: 20230108177
Abstract: Aspects of the disclosure provide for hardware-aware progressive training of machine learning models. A training system trains a model in accordance with a training process and different values specified in a training schedule for both hardware-level and model-level performance settings. Hardware-level performance settings can cause hardware features of computing resources used to train the model to be enabled, disabled, or modified at various points during training. Model-level performance settings can take on a variety of values to adjust characteristics of the machine learning model being trained or of the training process, during different stages of training. The training system can identify and apply complementary values of hardware- and model-level performance settings to generate training schedules that improve model training speed at earlier stages of training, while improving model quality at later stages of training.
Type: Application
Filed: August 31, 2022
Publication date: April 6, 2023
Inventors: Sheng Li, Mingxing Tan, Norman Paul Jouppi, Quoc V. Le, Liqun Cheng, Ruoming Pang, Parthasarathy Ranganathan
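A training schedule of the kind described — cheap, fast settings early and quality-oriented settings late — can be represented as a staged lookup. The stage boundaries and the particular settings below (image size, dropout as model-level knobs; numeric precision as a hardware-level knob) are illustrative assumptions, not the publication's schedule.

```python
# Each stage: (first step it applies to, settings in effect from that step).
SCHEDULE = [
    (0,      {"image_size": 128, "dropout": 0.0, "precision": "bfloat16"}),
    (10_000, {"image_size": 192, "dropout": 0.1, "precision": "bfloat16"}),
    (20_000, {"image_size": 224, "dropout": 0.2, "precision": "float32"}),
]

def settings_for_step(step, schedule=SCHEDULE):
    """Return the hardware- and model-level settings in effect at a
    training step: the last stage whose start is <= step."""
    current = schedule[0][1]
    for start, settings in schedule:
        if step >= start:
            current = settings
    return current
```

Early stages trade quality for throughput (small inputs, low precision, no regularization); late stages reverse the trade, matching the speed-then-quality progression the abstract describes.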
-
Patent number: 11544105
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scheduling operations represented as a computational graph on a distributed computing network. A method includes: receiving data representing operations to be executed in order to perform a job on a plurality of hardware accelerators of a plurality of different accelerator types; generating, for the job and from at least the data representing the operations, features that represent a predicted performance for the job on hardware accelerators of the plurality of different accelerator types; generating, from the features, a respective predicted performance metric for the job for each of the plurality of different accelerator types according to a performance objective function; and providing, to a scheduling system, one or more recommendations for scheduling the job on one or more recommended types of hardware accelerators.
Type: Grant
Filed: October 11, 2019
Date of Patent: January 3, 2023
Assignee: Google LLC
Inventors: Sheng Li, Brian Zhang, Liqun Cheng, Norman Paul Jouppi, Yun Ni
-
Publication number: 20220229698
Abstract: The subject matter described herein provides systems and techniques to address the challenges of growing hardware and workload heterogeneity using a Warehouse-Scale Computer (WSC) design that improves the efficiency and utilization of WSCs. The WSC design may include an abstraction layer and an efficiency layer in the software stack of the WSC. The abstraction layer and the efficiency layer may be designed to improve job scheduling, simplify resource management, and drive hardware-software co-optimization using machine learning techniques and automation in order to customize the WSC for applications at scale. The abstraction layer may embrace platform/hardware and workload diversity through greater coordination between hardware and higher layers of the WSC software stack in the WSC design. The efficiency layer may employ machine learning techniques at scale to realize hardware/software co-optimizations as a part of the autonomous WSC design.
Type: Application
Filed: January 15, 2021
Publication date: July 21, 2022
Inventors: David Lo, Liqun Cheng, Parthasarathy Ranganathan, Sundar Jayakumar Dev
-
Publication number: 20220230048
Abstract: Methods, systems, and apparatus, including computer-readable media, for scaling neural network architectures on hardware accelerators. A method includes receiving training data and information specifying target computing resources, and performing, using the training data, a neural architecture search over a search space to identify an architecture for a base neural network. A plurality of scaling parameter values for scaling the base neural network can be identified, which can include repeatedly selecting a plurality of candidate scaling parameter values, and determining a measure of performance for the base neural network scaled according to the plurality of candidate scaling parameter values, in accordance with a plurality of second objectives including a latency objective. An architecture for a scaled neural network can be determined using the architecture of the base neural network scaled according to the plurality of scaling parameter values.
Type: Application
Filed: February 12, 2021
Publication date: July 21, 2022
Inventors: Andrew Li, Sheng Li, Mingxing Tan, Ruoming Pang, Liqun Cheng, Quoc V. Le, Norman Paul Jouppi
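The scaling search the abstract describes — repeatedly propose scaling parameter values, measure the scaled network, keep the best candidate meeting a latency objective — can be sketched as a sweep over (depth, width, resolution) multipliers. The cost and quality models below are deliberately toy stand-ins for real measurement, and all multiplier grids are assumptions.

```python
import itertools

def search_scaling(base_latency_ms, base_quality, latency_budget_ms):
    """Exhaustive sweep over candidate scaling multipliers.

    Latency is modeled as growing linearly with depth and quadratically
    with width and resolution (a common rough approximation); quality is
    modeled as growing slowly with total scale. Both models are toys
    standing in for real measurements of the scaled network."""
    best = None
    for d, w, r in itertools.product([1.0, 1.2, 1.4], repeat=3):
        latency = base_latency_ms * d * (w ** 2) * (r ** 2)
        quality = base_quality + 0.01 * (d + w + r - 3.0)
        if latency <= latency_budget_ms and (best is None or quality > best[0]):
            best = (quality, {"depth": d, "width": w, "resolution": r})
    return best  # (quality, scaling params) or None if budget infeasible
```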
-
Publication number: 20220083493
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for performing asymmetric data communication at a host-device interface of a system. The methods include identifying devices coupled to a host of the system and generating a system topology that identifies a connectivity of the devices and identifies bus lanes that enable data transfers at the system. The host determines that a first connection between the host and a first device of the multiple devices has an asymmetric bandwidth requirement. The host configures a set of bus lanes of a data bus connecting the first device and the host to allocate a different number of the bus lanes to data egress from the host than to data ingress to the host. The bus lanes are configured to allocate the differing number of bus lanes based on the asymmetric bandwidth requirement of the first connection.
Type: Application
Filed: November 29, 2021
Publication date: March 17, 2022
Inventors: Nishant Patil, Liqun Cheng
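The lane-configuration decision the abstract describes amounts to splitting a fixed pool of lanes between egress and ingress in proportion to their required bandwidths. A minimal arithmetic sketch (the proportional policy and the minimum-lane floor are assumptions, not the patented allocation method):

```python
def allocate_lanes(total_lanes, egress_gbps, ingress_gbps, min_lanes=1):
    """Split `total_lanes` between egress and ingress in proportion to
    the asymmetric bandwidth requirement, keeping at least `min_lanes`
    in each direction. Returns (egress_lanes, ingress_lanes)."""
    if total_lanes < 2 * min_lanes:
        raise ValueError("not enough lanes for both directions")
    egress = round(total_lanes * egress_gbps / (egress_gbps + ingress_gbps))
    egress = max(min_lanes, min(total_lanes - min_lanes, egress))
    return egress, total_lanes - egress
```

For example, a 16-lane bus feeding a device that reads three times as much as it writes back would get 12 egress and 4 ingress lanes, rather than the symmetric 8/8 split.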
-
Publication number: 20220019869
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining an architecture for a task neural network that is configured to perform a particular machine learning task on a target set of hardware resources. When deployed on a target set of hardware, such as a collection of datacenter accelerators, the task neural network may be capable of performing the particular machine learning task with enhanced accuracy and speed.
Type: Application
Filed: September 30, 2020
Publication date: January 20, 2022
Inventors: Sheng Li, Norman Paul Jouppi, Quoc V. Le, Mingxing Tan, Ruoming Pang, Liqun Cheng, Andrew Li
-
Patent number: 11188494
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for performing asymmetric data communication at a host-device interface of a system. The methods include identifying devices coupled to a host of the system and generating a system topology that identifies a connectivity of the devices and identifies bus lanes that enable data transfers at the system. The host determines that a first connection between the host and a first device of the multiple devices has an asymmetric bandwidth requirement. The host configures a set of bus lanes of a data bus connecting the first device and the host to allocate a different number of the bus lanes to data egress from the host than to data ingress to the host. The bus lanes are configured to allocate the differing number of bus lanes based on the asymmetric bandwidth requirement of the first connection.
Type: Grant
Filed: July 29, 2019
Date of Patent: November 30, 2021
Assignee: Google LLC
Inventors: Nishant Patil, Liqun Cheng
-
Publication number: 20210224129
Abstract: Methods, systems, and computer storage media storing instructions for managing processing system efficiency. One of the methods includes obtaining data splitting a plurality of general-purpose processing units in a processing system into a high-priority domain and a low-priority domain, wherein the general-purpose processing units in the high-priority domain are assigned to perform one or more tasks comprising one or more high-priority tasks, and the general-purpose processing units in the low-priority domain are assigned to perform one or more low-priority tasks; and during runtime of the processing system, obtaining memory usage measurements that characterize usage of system memory by the high-priority domain and the low-priority domain; and adjusting, based on the memory usage measurements, a configuration of (i) the high-priority domain, (ii) the low-priority domain, or (iii) both to adjust utilization of the system memory by the general-purpose processing units.
Type: Application
Filed: January 29, 2021
Publication date: July 22, 2021
Inventors: Liqun Cheng, Rama Krishna Govindaraju, Haishan Zhu, David Lo, Parthasarathy Ranganathan, Nishant Patil
-
Publication number: 20210073028
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scheduling operations represented as a computational graph on a distributed computing network. A method includes: receiving data representing operations to be executed in order to perform a job on a plurality of hardware accelerators of a plurality of different accelerator types; generating, for the job and from at least the data representing the operations, features that represent a predicted performance for the job on hardware accelerators of the plurality of different accelerator types; generating, from the features, a respective predicted performance metric for the job for each of the plurality of different accelerator types according to a performance objective function; and providing, to a scheduling system, one or more recommendations for scheduling the job on one or more recommended types of hardware accelerators.
Type: Application
Filed: October 11, 2019
Publication date: March 11, 2021
Inventors: Sheng Li, Brian Zhang, Liqun Cheng, Norman Paul Jouppi, Yun Ni
-
Patent number: 10908964
Abstract: Methods, systems, and computer storage media storing instructions for managing processing system efficiency. One of the methods includes obtaining data splitting a plurality of general-purpose processing units in a processing system into a high-priority domain and a low-priority domain, wherein the general-purpose processing units in the high-priority domain are assigned to perform one or more tasks comprising one or more high-priority tasks, and the general-purpose processing units in the low-priority domain are assigned to perform one or more low-priority tasks; and during runtime of the processing system, obtaining memory usage measurements that characterize usage of system memory by the high-priority domain and the low-priority domain; and adjusting, based on the memory usage measurements, a configuration of (i) the high-priority domain, (ii) the low-priority domain, or (iii) both to adjust utilization of the system memory by the general-purpose processing units.
Type: Grant
Filed: November 21, 2018
Date of Patent: February 2, 2021
Assignee: Google LLC
Inventors: Liqun Cheng, Rama Krishna Govindaraju, Haishan Zhu, David Lo, Parthasarathy Ranganathan, Nishant Patil
-
Patent number: 10884928
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, and in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
Type: Grant
Filed: April 9, 2019
Date of Patent: January 5, 2021
Assignee: Google LLC
Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
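The mechanism described above can be illustrated with a toy insertion policy: track per-address access counts, treat data under a count threshold as infrequent, and insert it at the target level rather than the highest level. Here level numbers grow downward in the hierarchy (1 = highest/L1), and the threshold and level choices are assumptions, not the patented condition.

```python
class InfrequentDataCachePolicy:
    """Toy sketch of the infrequency condition: data accessed fewer than
    `threshold` times is inserted at `target_level` (a level below the
    highest), keeping it from evicting hot data in the upper levels."""

    def __init__(self, threshold=3, target_level=2, highest_level=1):
        self.threshold = threshold
        self.target_level = target_level
        self.highest_level = highest_level
        self.counts = {}  # per-address access counts

    def insertion_level(self, addr):
        """Record an access and return the cache level to insert at."""
        self.counts[addr] = self.counts.get(addr, 0) + 1
        if self.counts[addr] < self.threshold:   # infrequency condition holds
            return self.target_level             # store at or below target level
        return self.highest_level                # frequent data: normal insertion
```

The payoff is that a streaming scan of cold data fills only the lower level, leaving the small upper level for data that has proven itself frequently accessed.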
-
Publication number: 20200371984
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, are described for performing asymmetric data communication at a host-device interface of a system. The methods include identifying devices coupled to a host of the system and generating a system topology that identifies a connectivity of the devices and identifies bus lanes that enable data transfers at the system. The host determines that a first connection between the host and a first device of the multiple devices has an asymmetric bandwidth requirement. The host configures a set of bus lanes of a data bus connecting the first device and the host to allocate a different number of the bus lanes to data egress from the host than to data ingress to the host. The bus lanes are configured to allocate the differing number of bus lanes based on the asymmetric bandwidth requirement of the first connection.
Type: Application
Filed: July 29, 2019
Publication date: November 26, 2020
Inventors: Nishant Patil, Liqun Cheng
-
Patent number: 10481811
Abstract: An example method includes: during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
Type: Grant
Filed: January 8, 2019
Date of Patent: November 19, 2019
Assignee: Google LLC
Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin
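The programming model described above — submit a copy, keep executing, get notified on completion — can be emulated in user space with a worker thread standing in for the dedicated copy processor. This sketch shows only the model, not the hardware offload itself; the `CopyEngine` name and callback interface are assumptions.

```python
import threading

class CopyEngine:
    """Emulates an asynchronous copy offload: a worker thread plays the
    role of the separate copy processor, and the caller is notified via
    a completion event plus an optional callback."""

    def start_copy(self, src, dst, on_done=None):
        done = threading.Event()

        def run():
            dst[:len(src)] = src    # the 'copy processor' performs the copy
            if on_done:
                on_done()           # selective notification of the application
            done.set()

        threading.Thread(target=run, daemon=True).start()
        return done                 # caller may poll is_set() or wait()

src = bytearray(b"hello")
dst = bytearray(5)
done = CopyEngine().start_copy(src, dst)
# ... the application keeps executing on the main processor here ...
done.wait(timeout=1.0)
```

The key property, matching the abstract, is that the requesting thread is free between `start_copy` and the completion notification, rather than blocking for the duration of the copy.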
-
Publication number: 20190236010
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, and in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
Type: Application
Filed: April 9, 2019
Publication date: August 1, 2019
Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
-
Publication number: 20190163381
Abstract: An example method includes: during execution of a software application by a processor, receiving, by a copy processor separate from the processor, a request for an asynchronous data copy operation to copy data within a memory accessible by the copy processor, wherein the request is received from a copy manager accessible by the software application in a user space of an operating system managing execution of the software application; in response to the request, initiating, by the copy processor, the asynchronous data copy operation; continuing execution of the software application by the processor; determining, by the copy processor, that the asynchronous data copy operation has completed; and in response to determining that the asynchronous copy operation has completed, selectively notifying, by the copy processor, the software application that the asynchronous copy operation has completed.
Type: Application
Filed: January 8, 2019
Publication date: May 30, 2019
Inventors: Rama Krishna Govindaraju, Liqun Cheng, Parthasarathy Ranganathan, Michael R. Marty, Andrew Gallatin