Patents by Inventor Yanping Huang
Yanping Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240117520
Abstract: The present disclosure discloses a gradient single-crystal positive electrode material, which has a chemical formula of LiNixCoyA1-x-yO2@mLiaZbOc, wherein 0<x<1, 0<y<1, 0<x+y<1, 0<m<0.05, 0.3<a≤10, 1≤b<4, and 1≤c<15, A is at least one of Mn, Zr, Sr, Ba, W, Ti, Al, Mg, Y, and Nb, and Z is at least one of B, Al, Co, W, Ti, Zr, and Si. The atomic ratio of the content of Co on the surface of the single-crystal positive electrode material particle to the content of Ni+Co+A on the surface is greater than 0.4 and less than 0.8, and the atomic ratio of Co at a depth of 10% of the radius from the surface of the single-crystal positive electrode material particle is not less than 0.3; and the single-crystal positive electrode material particle has a roundness of greater than 0.4 and is free from sharp corners.
Type: Application
Filed: November 11, 2022
Publication date: April 11, 2024
Inventors: Jinsuo LI, Di CHENG, Yunjun XU, Gaofeng ZUO, Jing HUANG, Xiaojing LI, Danfeng CHEN, Wanchao WEN, Yanping WANG, Zhengzhong YIN
-
Publication number: 20240112027
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing neural architecture search for machine learning models. In one aspect, a method comprises receiving training data for a machine learning task, generating a plurality of candidate neural networks for performing the machine learning task, wherein each candidate neural network comprises a plurality of instances of a layer block composed of a plurality of layers, for each candidate neural network, selecting a respective type for each of the plurality of layers from a set of layer types, training the candidate neural network and evaluating performance scores for the trained candidate neural networks as applied to the machine learning task, and determining a final neural network for performing the machine learning task based at least on the performance scores for the candidate neural networks.
Type: Application
Filed: September 28, 2023
Publication date: April 4, 2024
Inventors: Yanqi Zhou, Yanping Huang, Yifeng Lu, Andrew M. Dai, Siamak Shakeri, Zhifeng Chen, James Laudon, Quoc V. Le, Da Huang, Nan Du, David Richard So, Daiyi Peng, Yingwei Cui, Jeffrey Adgate Dean, Chang Lan
-
Publication number: 20230259784
Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
Type: Application
Filed: April 27, 2023
Publication date: August 17, 2023
Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
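The evolutionary loop this abstract describes (sample a few candidates, mutate the fittest of the sample into a child, and evict the least recently trained member of the population) can be sketched as below. This is a minimal illustration, not the patented method: `mutate` and `evaluate` are hypothetical stand-ins for architecture mutation and for training a network and measuring its fitness.

```python
import collections
import random

def aging_evolution(initial_population, mutate, evaluate, steps, sample_size):
    """Evolutionary search with aging: the parent is the fittest member of a
    random sample, the child is a mutation of the parent, and the candidate
    removed is always the one trained least recently (the oldest)."""
    population = collections.deque()          # leftmost = least recently trained
    for arch in initial_population:
        population.append((arch, evaluate(arch)))
    history = list(population)                # every candidate ever evaluated
    for _ in range(steps):
        sample = random.sample(list(population), sample_size)
        parent, _ = max(sample, key=lambda pair: pair[1])
        child = mutate(parent)
        child_fitness = evaluate(child)
        population.append((child, child_fitness))
        history.append((child, child_fitness))
        population.popleft()                  # evict the oldest, not the worst
    return max(history, key=lambda pair: pair[1])
```

Evicting by age rather than by fitness is the distinctive choice here: a high-fitness candidate survives only by producing descendants, which regularizes the search toward architectures that retrain well.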
-
Publication number: 20230222318
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes an attention neural network configured to perform the machine learning task, the attention neural network including one or more attention layers, each attention layer comprising an attention sub-layer and a feed-forward sub-layer. Some or all of the attention layers have a feed-forward sub-layer that applies conditional computation to the inputs to the sub-layer.
Type: Application
Filed: June 30, 2021
Publication date: July 13, 2023
Inventors: Dmitry Lepikhin, Yanping Huang, Orhan Firat, Maxim Krikun, Dehao Chen, Noam M. Shazeer, HyoukJoong Lee, Yuanzhong Xu, Zhifeng Chen
-
Patent number: 11669744
Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
Type: Grant
Filed: September 14, 2021
Date of Patent: June 6, 2023
Assignee: Google LLC
Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
-
Publication number: 20220237435
Abstract: Systems and methods for routing in mixture-of-expert models. In some aspects of the technology, a transformer may have at least one Mixture-of-Experts (“MoE”) layer in each of its encoder and decoder, with the at least one MoE layer of the encoder having a learned gating function configured to route each token of a task to two or more selected expert feed-forward networks, and the at least one MoE layer of the decoder having a learned gating function configured to route each task to two or more selected expert feed-forward networks.
Type: Application
Filed: January 27, 2021
Publication date: July 28, 2022
Applicant: Google LLC
Inventors: Yanping Huang, Dmitry Lepikhin, Maxim Krikun, Orhan Firat, Ankur Bapna, Thang Luong, Sneha Kudugunta
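A learned gating function that routes each token to two experts can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in for the patented system: the gate is a single weight matrix followed by a softmax, and the "expert feed-forward networks" are plain scalings rather than trained networks.

```python
import numpy as np

def top2_route(tokens, gate_w, experts):
    """Route each token to its two highest-probability experts and combine
    the expert outputs, weighted by the renormalized gate probabilities."""
    logits = tokens @ gate_w                              # [n_tokens, n_experts]
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)            # softmax over experts
    top2 = np.argsort(probs, axis=-1)[:, -2:]             # indices of the 2 best
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        e1, e2 = top2[i]
        w = probs[i, [e1, e2]]
        w = w / w.sum()                                   # renormalize over the pair
        out[i] = w[0] * experts[e1](tok) + w[1] * experts[e2](tok)
    return out
```

Because only two of the experts run per token, the layer's parameter count can grow with the number of experts while the per-token compute stays roughly constant.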
-
Publication number: 20220211662
Abstract: Disclosed is an application of Epigallocatechin Gallate (EGCG), or a compound synthesized using it as a lead compound, in combination with one or more tyrosine kinase inhibitors in the preparation of a cancer treatment drug, wherein the cancer is an Epidermal Growth Factor Receptor (EGFR) wild-type tumor. This combined use may significantly inhibit the growth of the EGFR wild-type tumor and reduce the toxic and side effects of an anticancer drug.
Type: Application
Filed: May 7, 2019
Publication date: July 7, 2022
Inventors: Jun Sheng, Xuanjun Wang, Yanping Huang, Chengting Zi, Zemin Xiang, Yewei Huang, Yunli Zhao, Xiangdan Cuan, Huanhuan Xu, Rui Luo
-
Publication number: 20220121945
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Application
Filed: January 3, 2022
Publication date: April 21, 2022
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
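The micro-batch schedule this abstract describes (split the mini-batch, run the forward pass for every micro-batch through the sequence of composite layers, then run the backward pass from the last composite layer back to the first) can be sketched as below. `LinearStage` is a hypothetical stand-in for a composite layer, with manual backprop so the gradient accumulation across micro-batches is visible.

```python
import numpy as np

class LinearStage:
    """A toy composite layer: one linear map with manual backprop and
    gradient accumulation across micro-batches."""
    def __init__(self, w):
        self.w = w
        self.grad_w = np.zeros_like(w)

    def forward(self, x):
        return x @ self.w

    def backward(self, inp, grad_out):
        self.grad_w += inp.T @ grad_out       # accumulate over micro-batches
        return grad_out @ self.w.T            # gradient w.r.t. the stage input

def pipeline_train_step(stages, minibatch, n_micro):
    """One step: forward every micro-batch through all stages (caching the
    per-stage activations), then backward every micro-batch, last stage first."""
    activations = []
    for mb in np.array_split(minibatch, n_micro):
        acts = [mb]
        for stage in stages:
            acts.append(stage.forward(acts[-1]))
        activations.append(acts)
    for acts in activations:
        grad = np.ones_like(acts[-1])         # stand-in for dLoss/dOutput
        for stage, inp in zip(reversed(stages), reversed(acts[:-1])):
            grad = stage.backward(inp, grad)
```

For these linear stages the accumulated gradients are identical whether the mini-batch is processed whole or in micro-batches; the benefit of the split is that different composite layers (on different devices) can work on different micro-batches concurrently instead of idling.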
-
Publication number: 20220085344
Abstract: Disclosed are a button battery and a manufacturing method therefor. The button battery includes an upper cover plate, a pole, an insulating sleeve, a bottom shell, a battery winding core, and a sealing ball. The upper cover plate is provided with a liquid injection hole and a stepped through hole with a small upper portion and a large lower portion; the upper and lower ends of the pole are respectively a cylindrical portion and a head portion; the insulating sleeve is sheathed on the pole, which penetrates through the stepped through hole, with the head and cylindrical portions corresponding to the lower and upper ends of the stepped through hole, respectively; the battery winding core is arranged in an inner cavity formed by the upper cover plate and the bottom shell; and the sealing ball is arranged on the upper cover plate and seals the liquid injection hole.
Type: Application
Filed: October 29, 2020
Publication date: March 17, 2022
Inventors: Yundong Lu, Huaigai Kuang, Keshun Chen, Yanping Huang, Xiaobin Yang, Taichun Tang, Cheng Li, Shidong Huang
-
Patent number: 11232356
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 25, 2022
Assignee: Google LLC
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
-
Publication number: 20220004879
Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
Type: Application
Filed: September 14, 2021
Publication date: January 6, 2022
Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
-
Patent number: 11144831
Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
Type: Grant
Filed: June 19, 2020
Date of Patent: October 12, 2021
Assignee: Google LLC
Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
-
Publication number: 20210042620
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
Type: Application
Filed: August 10, 2020
Publication date: February 11, 2021
Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
-
Publication number: 20200320399
Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
Type: Application
Filed: June 19, 2020
Publication date: October 8, 2020
Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
-
Patent number: 10130650
Abstract: Compositions and methods are provided for the treatment and/or prevention of an inflammatory and/or autoimmune disease or disorder.
Type: Grant
Filed: January 27, 2015
Date of Patent: November 20, 2018
Assignees: The Children's Hospital of Philadelphia, The Trustees of The University of Pennsylvania
Inventors: Yanping Huang, Janis K. Burkhardt, Taku Kambayashi
-
Publication number: 20160339053
Abstract: Compositions and methods are provided for the treatment and/or prevention of an inflammatory and/or autoimmune disease or disorder.
Type: Application
Filed: January 27, 2015
Publication date: November 24, 2016
Inventors: Yanping Huang, Janis K. Burkhardt, Taku Kambayashi
-
Patent number: 9378277
Abstract: Disclosed are various embodiments for a search query segmentation application. Search queries are broken into segments. Each of the segments is assigned a taxonomy node from a catalog of items. Search results are generated as those items included in the taxonomy nodes assigned to the search query segments.
Type: Grant
Filed: February 8, 2013
Date of Patent: June 28, 2016
Assignee: Amazon Technologies, Inc.
Inventors: Lam Duy Nguyen, Nigel St. John Pope, Yanping Huang
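The segmentation step described in this abstract can be illustrated with a minimal sketch. The greedy longest-match strategy and the `taxonomy` mapping of phrases to catalog nodes below are hypothetical illustrations, not the patented method:

```python
def segment_query(query, taxonomy):
    """Greedily split a search query into the longest phrases that match a
    taxonomy node; words matching nothing become single-word segments with
    no assigned node."""
    words = query.lower().split()
    segments = []
    i = 0
    while i < len(words):
        # Try the longest remaining phrase first, shrinking until a match
        # (or until only a single word remains).
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in taxonomy or j == i + 1:
                segments.append((phrase, taxonomy.get(phrase)))
                i = j
                break
    return segments
```

Once each segment carries a taxonomy node, the result set is simply the items under those nodes, which is what the abstract describes as the final step.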
-
Publication number: 20110160582
Abstract: A wireless ultrasonic scanning system comprises an ultrasonic sensor, a motor, an ultrasonic transceiver, a high-speed data sampling module, a motor controller and a master control module. The ultrasonic sensor, which moves in accordance with control of the motor controller, is mounted on the motor. The ultrasonic transceiver activates the ultrasonic sensor and amplifies a received ultrasonic signal. The high-speed data sampling module wirelessly transmits radio-frequency ultrasonic data to the master control module. The master control module sets a scanning mode, initiates a scanning process, and wirelessly transmits control signals and control parameters to the high-speed data sampling module.
Type: Application
Filed: April 28, 2009
Publication date: June 30, 2011
Inventors: Yongping Zheng, Xin Chen, James Chungwai Cheung, Junfeng He, Yanping Huang, Zhengming Huang
-
Patent number: D913652
Type: Grant
Filed: June 2, 2020
Date of Patent: March 23, 2021
Inventor: Yanping Huang