Patents by Inventor Yanping Huang

Yanping Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240117520
    Abstract: The present disclosure discloses a gradient single-crystal positive electrode material, which has a chemical formula of LiNixCoyA1-x-yO2@mLiaZbOc, wherein 0<x<1, 0<y<1, 0<x+y<1, 0<m<0.05, 0.3<a≤10, 1≤b<4, and 1≤c<15, A is at least one of Mn, Zr, Sr, Ba, W, Ti, Al, Mg, Y, and Nb, and Z is at least one of B, Al, Co, W, Ti, Zr, and Si. The atomic ratio of the content of Co on the surface of the single-crystal positive electrode material particle to the content of Ni+Co+A on the surface is greater than 0.4 and less than 0.8, and the atomic ratio of Co at a depth of 10% of the radius from the surface of the single-crystal positive electrode material particle is not less than 0.3; and the single-crystal positive electrode material particle has a roundness of greater than 0.4 and is free from sharp corners.
    Type: Application
    Filed: November 11, 2022
    Publication date: April 11, 2024
    Inventors: Jinsuo LI, Di CHENG, Yunjun XU, Gaofeng ZUO, Jing HUANG, Xiaojing LI, Danfeng CHEN, Wanchao WEN, Yanping WANG, Zhengzhong YIN
  • Publication number: 20240112027
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing neural architecture search for machine learning models. In one aspect, a method comprises receiving training data for a machine learning task; generating a plurality of candidate neural networks for performing the machine learning task, wherein each candidate neural network comprises a plurality of instances of a layer block composed of a plurality of layers; for each candidate neural network, selecting a respective type for each of the plurality of layers from a set of layer types; training the candidate neural networks and evaluating performance scores for the trained candidate neural networks as applied to the machine learning task; and determining a final neural network for performing the machine learning task based at least on the performance scores for the candidate neural networks.
    Type: Application
    Filed: September 28, 2023
    Publication date: April 4, 2024
    Inventors: Yanqi Zhou, Yanping Huang, Yifeng Lu, Andrew M. Dai, Siamak Shakeri, Zhifeng Chen, James Laudon, Quoc V. Le, Da Huang, Nan Du, David Richard So, Daiyi Peng, Yingwei Cui, Jeffrey Adgate Dean, Chang Lan
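The candidate-generation and scoring loop in the abstract above can be sketched as a simple random search over per-layer type assignments (a toy illustration: the function names, the layer-type vocabulary, and the use of a scoring callback in place of actual training and evaluation are all assumptions, not the patented method):

```python
import random

def search_layer_types(layer_types, num_layers, num_candidates, evaluate):
    """Sketch of block-level architecture search: each candidate assigns
    a type to every layer in the block; the best-scoring candidate wins."""
    best, best_score = None, float("-inf")
    for _ in range(num_candidates):
        # One candidate = one type choice per layer in the block.
        candidate = [random.choice(layer_types) for _ in range(num_layers)]
        score = evaluate(candidate)  # stands in for "train, then evaluate"
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

In practice `evaluate` would train each candidate network and measure task performance; here it is any scoring function.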
  • Publication number: 20230259784
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Application
    Filed: April 27, 2023
    Publication date: August 17, 2023
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
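The maintain/mutate/evict cycle described in this abstract follows an aging-evolution pattern, which can be sketched as follows (a minimal toy version; the helper names and encoding are hypothetical, and real candidates would be trained neural networks rather than directly scored objects):

```python
import random

def evolve(population_size, cycles, sample_size, random_arch, mutate, fitness):
    """Aging evolution: mutate the fittest of a random sample, add the
    child to the population, and evict the least recently added member."""
    population = [random_arch() for _ in range(population_size)]  # oldest first
    history = list(population)
    for _ in range(cycles):
        sample = random.sample(population, sample_size)
        parent = max(sample, key=fitness)  # best measure of fitness in sample
        child = mutate(parent)             # new candidate architecture
        population.append(child)           # add to the population
        population.pop(0)                  # remove the oldest (least recent)
        history.append(child)
    return max(history, key=fitness)       # best candidate ever seen
```

Evicting by age rather than by worst fitness is what regularizes the search: every candidate must eventually be rediscovered to survive.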
  • Publication number: 20230222318
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes an attention neural network configured to perform the machine learning task, the attention neural network including one or more attention layers, each attention layer comprising an attention sub-layer and a feed-forward sub-layer. Some or all of the attention layers have a feed-forward sub-layer that applies conditional computation to the inputs to the sub-layer.
    Type: Application
    Filed: June 30, 2021
    Publication date: July 13, 2023
    Inventors: Dmitry Lepikhin, Yanping Huang, Orhan Firat, Maxim Krikun, Dehao Chen, Noam M. Shazeer, HyoukJoong Lee, Yuanzhong Xu, Zhifeng Chen
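Conditional computation in a feed-forward sub-layer means each input activates only part of the sub-layer's parameters. A minimal sketch (assuming a per-token router that selects a single expert feed-forward block; all names are illustrative, not the claimed design):

```python
def conditional_ffn(tokens, router, experts):
    """Conditional computation: each token runs through only the one
    expert feed-forward block its router selects, so per-token compute
    stays constant as more experts (parameters) are added."""
    outputs = []
    for t in tokens:
        scores = router(t)                              # one score per expert
        k = max(range(len(scores)), key=scores.__getitem__)
        outputs.append(experts[k](t))                   # only expert k runs
    return outputs
```

Total parameter count grows with the number of experts, but each token's cost is a single expert evaluation.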
  • Patent number: 11669744
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
  • Publication number: 20220237435
    Abstract: Systems and methods for routing in mixture-of-expert models. In some aspects of the technology, a transformer may have at least one Mixture-of-Experts (“MoE”) layer in each of its encoder and decoder, with the at least one MoE layer of the encoder having a learned gating function configured to route each token of a task to two or more selected expert feed-forward networks, and the at least one MoE layer of the decoder having a learned gating function configured to route each task to two or more selected expert feed-forward networks.
    Type: Application
    Filed: January 27, 2021
    Publication date: July 28, 2022
    Applicant: Google LLC
    Inventors: Yanping Huang, Dmitry Lepikhin, Maxim Krikun, Orhan Firat, Ankur Bapna, Thang Luong, Sneha Kudugunta
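The two-expert routing in this abstract can be sketched with a softmax gate that picks the top-2 experts for a token and mixes their outputs by renormalized gate scores (a minimal single-token illustration with a plain linear-softmax gate; the names and this particular renormalization are assumptions, not the claimed gating function):

```python
import math

def top2_moe(x, gate_weights, experts):
    """Route one token to its top-2 experts; combine the two expert
    outputs weighted by their renormalized softmax gate scores."""
    # Linear gate logits, then a numerically stable softmax.
    logits = [sum(w * v for w, v in zip(row, x)) for row in gate_weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    probs = [e / sum(exps) for e in exps]
    top2 = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:2]
    norm = probs[top2[0]] + probs[top2[1]]
    out = [0.0] * len(x)
    for i in top2:                         # only two experts ever execute
        y = experts[i](x)
        out = [o + (probs[i] / norm) * v for o, v in zip(out, y)]
    return out
```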
  • Publication number: 20220211662
    Abstract: Disclosed is an application of Epigallocatechin Gallate (EGCG), or a compound synthesized using it as a lead compound, in combination with one or more tyrosine kinase inhibitors in the preparation of a cancer treatment drug, wherein the cancer is an Epidermal Growth Factor Receptor (EGFR) wild-type tumor. This combined use may significantly inhibit the growth of the EGFR wild-type tumor and reduce the toxicity and side effects of an anticancer drug.
    Type: Application
    Filed: May 7, 2019
    Publication date: July 7, 2022
    Inventors: Jun Sheng, Xuanjun Wang, Yanping Huang, Chengting Zi, Zemin Xiang, Yewei Huang, Yunli Zhao, Xiangdan Cuan, Huanhuan Xu, Rui Luo
  • Publication number: 20220121945
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
    Type: Application
    Filed: January 3, 2022
    Publication date: April 21, 2022
    Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
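The micro-batch schedule in this abstract can be illustrated with scalar "composite layers" that each multiply by a single weight: forward passes store per-micro-batch activations, the backward pass accumulates gradients across micro-batches, and the weights update once per mini-batch. This is a toy sketch under those assumptions, not the patented pipeline-parallel implementation; one property it demonstrates is that the accumulated gradient does not depend on the number of micro-batches:

```python
def gpipe_step(weights, mini_batch, targets, n_micro, lr=0.01):
    """One pipeline-style step: split the mini-batch into micro-batches,
    run forward then backward per micro-batch, accumulate gradients,
    and apply a single weight update at the end."""
    m = len(mini_batch) // n_micro
    grads = [0.0] * len(weights)
    for b in range(n_micro):
        xs = mini_batch[b * m:(b + 1) * m]
        ys = targets[b * m:(b + 1) * m]
        # Forward pass: keep each composite layer's output activations.
        acts = [xs]
        for w in weights:
            acts.append([w * a for a in acts[-1]])
        # Backward pass on loss 0.5 * sum((out - y)^2) for this micro-batch.
        dout = [o - y for o, y in zip(acts[-1], ys)]
        for k in reversed(range(len(weights))):
            grads[k] += sum(d * a for d, a in zip(dout, acts[k]))
            dout = [weights[k] * d for d in dout]
    return [w - lr * g for w, g in zip(weights, grads)]
```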
  • Publication number: 20220085344
    Abstract: Disclosed are a button battery and a manufacturing method therefor. The button battery includes an upper cover plate, a pole, an insulating sleeve, a bottom shell, a battery winding core, and a sealing ball. The upper cover plate is provided with a liquid injection hole and a stepped through hole with a small upper portion and a large lower portion; the upper and lower ends of the pole are a cylindrical portion and a head portion, respectively; the insulating sleeve is sheathed on the pole, which penetrates through the stepped through hole, with the head and cylindrical portions corresponding to the lower and upper ends of the stepped through hole, respectively; the battery winding core is arranged in an inner cavity formed by the upper cover plate and the bottom shell; and the sealing ball is arranged on the upper cover plate and seals the liquid injection hole.
    Type: Application
    Filed: October 29, 2020
    Publication date: March 17, 2022
    Inventors: Yundong Lu, Huaigai Kuang, Keshun Chen, Yanping Huang, Xiaobin Yang, Taichun Tang, Cheng Li, Shidong Huang
  • Patent number: 11232356
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: January 25, 2022
    Assignee: Google LLC
    Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
  • Publication number: 20220004879
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Application
    Filed: September 14, 2021
    Publication date: January 6, 2022
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
  • Patent number: 11144831
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: October 12, 2021
    Assignee: Google LLC
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
  • Publication number: 20210042620
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training giant neural networks. One of the methods includes obtaining data specifying a partitioning of the neural network into N composite layers that form a sequence of composite layers, wherein each composite layer comprises a distinct plurality of layers from the multiple network layers of the neural network; obtaining data assigning each of the N composite layers to one or more computing devices from a set of N computing devices; partitioning a mini-batch of training examples into a plurality of micro-batches; and training the neural network, comprising: performing a forward pass through the neural network until output activations have been computed for each micro-batch for a final composite layer in the sequence, and performing a backward pass through the neural network until output gradients have been computed for each micro-batch for the first composite layer in the sequence.
    Type: Application
    Filed: August 10, 2020
    Publication date: February 11, 2021
    Inventors: Zhifeng Chen, Yanping Huang, Youlong Cheng, HyoukJoong Lee, Dehao Chen, Jiquan Ngiam
  • Publication number: 20200320399
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Application
    Filed: June 19, 2020
    Publication date: October 8, 2020
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
  • Patent number: 10130650
    Abstract: Compositions and methods are provided for the treatment and/or prevention of an inflammatory and/or autoimmune disease or disorder.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: November 20, 2018
    Assignees: The Children's Hospital of Philadelphia, The Trustees of The University of Pennsylvania
    Inventors: Yanping Huang, Janis K. Burkhardt, Taku Kambayashi
  • Publication number: 20160339053
    Abstract: Compositions and methods are provided for the treatment and/or prevention of an inflammatory and/or autoimmune disease or disorder.
    Type: Application
    Filed: January 27, 2015
    Publication date: November 24, 2016
    Inventors: Yanping Huang, Janis K. Burkhardt, Taku Kambayashi
  • Patent number: 9378277
    Abstract: Disclosed are various embodiments for a search query segmentation application. Search queries are broken into segments. Each of the segments is assigned a taxonomy node from a catalog of items. Search results are generated as those items included in the taxonomy nodes assigned to the search query segments.
    Type: Grant
    Filed: February 8, 2013
    Date of Patent: June 28, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Lam Duy Nguyen, Nigel St. John Pope, Yanping Huang
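The segment-then-assign flow in this abstract can be sketched with a greedy longest-match segmenter over a phrase-to-taxonomy-node dictionary (the dictionary, the greedy strategy, and the node labels are illustrative assumptions, not the claimed segmentation method):

```python
def segment_query(query, phrase_to_node):
    """Break a search query into segments by greedy longest match
    against known phrases, assigning each segment its taxonomy node
    (or None for words with no known node)."""
    words = query.lower().split()
    segments = []
    i = 0
    while i < len(words):
        for j in range(len(words), i, -1):          # try longest match first
            phrase = " ".join(words[i:j])
            if phrase in phrase_to_node:
                segments.append((phrase, phrase_to_node[phrase]))
                i = j
                break
        else:
            segments.append((words[i], None))       # no taxonomy node found
            i += 1
    return segments
```

Search results would then be the items under the taxonomy nodes assigned to the segments.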
  • Publication number: 20110160582
    Abstract: A wireless ultrasonic scanning system comprises an ultrasonic sensor, a motor, an ultrasonic transceiver, a high-speed data sampling module, a motor controller and a master control module. The ultrasonic sensor, which moves in accordance with control of the motor controller, is mounted on the motor. The ultrasonic transceiver activates the ultrasonic sensor and amplifies a received ultrasonic signal. The high-speed data sampling module wirelessly transmits radio-frequency ultrasonic data to the master control module. The master control module sets a scanning mode, initiates a scanning process, and wirelessly transmits control signals and control parameters to the high-speed data sampling module.
    Type: Application
    Filed: April 28, 2009
    Publication date: June 30, 2011
    Inventors: Yongping Zheng, Xin Chen, James Chungwai Cheung, Junfeng He, Yanping Huang, Zhengming Huang
  • Patent number: D913652
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: March 23, 2021
    Inventor: Yanping Huang