Patents by Inventor Mingxing Tan

Mingxing Tan has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240378509
    Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventors: Xianzhi Du, Yin Cui, Tsung-Yi Lin, Quoc V. Le, Pengchong Jin, Mingxing Tan, Golnaz Ghiasi, Xiaodan Song
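The search described in this abstract can be sketched in a few lines. This is an illustrative toy, not the patented method: the block names, the scoring proxy, and the search budget are all assumptions, and a real system would train and evaluate a model for each candidate permutation.

```python
import itertools
import random

# Candidate feature blocks, each with a respective scale (names and
# scales are illustrative assumptions).
CANDIDATE_BLOCKS = [("b1", 1), ("b2", 2), ("b3", 4), ("b4", 8)]

def evaluate(permutation):
    # Toy proxy objective that rewards interleaving of scales; the real
    # method would score each candidate by training/evaluating a model.
    return sum(abs(a[1] - b[1]) for a, b in zip(permutation, permutation[1:]))

def search(num_iterations=20, seed=0):
    # The search space is the set of permutations of the candidate blocks;
    # each iteration samples a candidate, and the best-scoring one wins.
    rng = random.Random(seed)
    search_space = list(itertools.permutations(CANDIDATE_BLOCKS))
    return max((rng.choice(search_space) for _ in range(num_iterations)),
               key=evaluate)

best = search()
print([name for name, _ in best])
```

Random sampling stands in here for whatever search algorithm the claim covers; the structure (define a permutation search space, iterate, select) is the part mirrored from the abstract.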
  • Patent number: 12131244
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining an architecture for a task neural network that is configured to perform a particular machine learning task on a target set of hardware resources. When deployed on a target set of hardware, such as a collection of datacenter accelerators, the task neural network may be capable of performing the particular machine learning task with enhanced accuracy and speed.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: October 29, 2024
    Assignee: Google LLC
    Inventors: Sheng Li, Norman Paul Jouppi, Quoc V. Le, Mingxing Tan, Ruoming Pang, Liqun Cheng, Andrew Li
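One simple way to rank candidate architectures for a target set of hardware, sketched below as an assumption rather than the patented technique, is to keep only the candidates that are Pareto-optimal in (accuracy, latency) as measured on the target accelerators. The candidate names and numbers are illustrative.

```python
def pareto_front(candidates):
    # A candidate survives if no other candidate is at least as accurate
    # AND at least as fast, with at least one strict improvement.
    front = []
    for c in candidates:
        dominated = any(
            o["accuracy"] >= c["accuracy"]
            and o["latency_ms"] <= c["latency_ms"]
            and (o["accuracy"] > c["accuracy"] or o["latency_ms"] < c["latency_ms"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

candidates = [
    {"name": "a", "accuracy": 0.76, "latency_ms": 5.0},
    {"name": "b", "accuracy": 0.80, "latency_ms": 9.0},
    {"name": "c", "accuracy": 0.79, "latency_ms": 12.0},  # dominated by "b"
]
print([c["name"] for c in pareto_front(candidates)])  # → ['a', 'b']
```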
  • Publication number: 20240355109
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining one or more neural network architectures of a neural network for performing a video processing neural network task. In one aspect, a method comprises: at each of a plurality of iterations: selecting a parent neural network architecture from a set of neural network architectures; training a neural network having the parent neural network architecture to perform the video processing neural network task, comprising determining trained values of connection weight parameters of the parent neural network architecture; generating a new neural network architecture based at least in part on the trained values of the connection weight parameters of the parent neural network architecture; and adding the new neural network architecture to the set of neural network architectures.
    Type: Application
    Filed: June 18, 2024
    Publication date: October 24, 2024
    Inventors: Michael Sahngwon Ryoo, Anthony Jacob Piergiovanni, Mingxing Tan, Anelia Angelova
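The iteration loop in this abstract (select a parent, train it, derive a child from the trained connection weights, add the child to the population) can be sketched as follows. Everything below is a toy stand-in: architectures are lists of named connections, "training" assigns random weights, and the weight-guided mutation simply prunes the weakest connection.

```python
import random

def train(arch, rng):
    # Stand-in for real training: assign a trained weight to each connection.
    return {conn: rng.random() for conn in arch}

def mutate(arch, weights):
    # Weight-guided mutation: drop the connection with the smallest trained
    # weight, mirroring "generate a new architecture based at least in part
    # on the trained values of the connection weight parameters".
    weakest = min(arch, key=weights.get)
    return [c for c in arch if c != weakest]

def evolve(initial, iterations=3, seed=0):
    rng = random.Random(seed)
    population = [initial]
    for _ in range(iterations):
        parent = rng.choice(population)        # select a parent architecture
        weights = train(parent, rng)           # train it
        population.append(mutate(parent, weights))  # add the new architecture
    return population

population = evolve(["in->conv", "conv->pool", "pool->out", "in->out"])
```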
  • Publication number: 20240355101
    Abstract: Systems and methods of the present disclosure can include a computer-implemented method for efficient machine-learned model training. The method can include obtaining a plurality of training samples for a machine-learned model. The method can include, for one or more first training iterations, training, based at least in part on a first regularization magnitude configured to control a relative effect of one or more regularization techniques, the machine-learned model using one or more respective first training samples of the plurality of training samples. The method can include, for one or more second training iterations, training, based at least in part on a second regularization magnitude greater than the first regularization magnitude, the machine-learned model using one or more respective second training samples of the plurality of training samples.
    Type: Application
    Filed: July 1, 2024
    Publication date: October 24, 2024
    Inventors: Mingxing Tan, Quoc V. Le
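The two-phase idea in this abstract (a weaker first regularization magnitude, then a stronger second one) generalizes naturally to a ramped schedule. The linear ramp and the specific magnitudes below are illustrative assumptions, not the claimed method.

```python
def regularization_magnitude(step, total_steps, start=0.1, end=0.5):
    # Linearly interpolate from the first (weaker) regularization magnitude
    # to the second (stronger) one over the course of training.
    frac = step / max(total_steps - 1, 1)
    return start + frac * (end - start)

schedule = [round(regularization_magnitude(s, 5), 2) for s in range(5)]
print(schedule)  # → [0.1, 0.2, 0.3, 0.4, 0.5]
```

Early training samples are then processed under the small magnitudes at the front of the schedule, later samples under the larger magnitudes at the back, matching the first/second-iteration structure of the claim.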
  • Patent number: 12079695
    Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: September 3, 2024
    Assignee: GOOGLE LLC
    Inventors: Xianzhi Du, Yin Cui, Tsung-Yi Lin, Quoc V. Le, Pengchong Jin, Mingxing Tan, Golnaz Ghiasi, Xiaodan Song
  • Publication number: 20240273336
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory usage, less power consumption), while remaining competitive with or even exceeding the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
    Type: Application
    Filed: February 1, 2024
    Publication date: August 15, 2024
    Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
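A common way to fold a mobile resource constraint into an architecture-search objective, shown here as a hedged sketch rather than the claimed method, is a latency-weighted reward that scales a candidate's accuracy by how far its measured latency sits from a target. The exponent value and the candidate numbers are illustrative.

```python
def reward(accuracy, latency_ms, target_ms=10.0, beta=-0.07):
    # Soft latency penalty: accuracy scaled by (latency / target) ** beta,
    # so slower-than-target candidates are discounted and faster ones
    # get a small bonus.
    return accuracy * (latency_ms / target_ms) ** beta

candidates = [
    {"name": "net_a", "accuracy": 0.78, "latency_ms": 8.0},
    {"name": "net_b", "accuracy": 0.80, "latency_ms": 25.0},
]
best = max(candidates, key=lambda c: reward(c["accuracy"], c["latency_ms"]))
print(best["name"])  # → net_a
```

Under this reward the slightly less accurate but much faster `net_a` wins, which is the trade-off a resource-constrained mobile search is designed to express.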
  • Patent number: 12062227
    Abstract: Systems and methods of the present disclosure can include a computer-implemented method for efficient machine-learned model training. The method can include obtaining a plurality of training samples for a machine-learned model. The method can include, for one or more first training iterations, training, based at least in part on a first regularization magnitude configured to control a relative effect of one or more regularization techniques, the machine-learned model using one or more respective first training samples of the plurality of training samples. The method can include, for one or more second training iterations, training, based at least in part on a second regularization magnitude greater than the first regularization magnitude, the machine-learned model using one or more respective second training samples of the plurality of training samples.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: August 13, 2024
    Assignee: GOOGLE LLC
    Inventors: Mingxing Tan, Quoc V. Le
  • Patent number: 12046025
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining one or more neural network architectures of a neural network for performing a video processing neural network task. In one aspect, a method comprises: at each of a plurality of iterations: selecting a parent neural network architecture from a set of neural network architectures; training a neural network having the parent neural network architecture to perform the video processing neural network task, comprising determining trained values of connection weight parameters of the parent neural network architecture; generating a new neural network architecture based at least in part on the trained values of the connection weight parameters of the parent neural network architecture; and adding the new neural network architecture to the set of neural network architectures.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 23, 2024
    Assignee: Google LLC
    Inventors: Michael Sahngwon Ryoo, Anthony Jacob Piergiovanni, Mingxing Tan, Anelia Angelova
  • Publication number: 20240232647
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model on training data. In one aspect, one of the methods includes: obtaining a training data set comprising a plurality of training inputs; obtaining data defining an original search space of a plurality of candidate data augmentation policies; generating, from the original search space, a compact search space that has one or more global hyperparameters; and training the machine learning model on the training data using one or more final data augmentation policies generated from the compact search space.
    Type: Application
    Filed: October 23, 2023
    Publication date: July 11, 2024
    Inventors: Zhaoqi Leng, Guowang Li, Chenxi Liu, Pei Sun, Tong He, Dragomir Anguelov, Mingxing Tan
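A "compact search space with one or more global hyperparameters" can be modeled in the style of RandAugment, where a single global operation count and a single global magnitude replace a large per-operation policy space. This is an assumed analogy, not the patented construction, and the operation names are illustrative.

```python
import random

OPS = ["rotate", "shear", "color", "translate"]  # illustrative augmentations

def sample_policy(num_ops, magnitude, rng):
    # The whole policy is defined by just two global hyperparameters:
    # how many operations to apply and at what shared magnitude.
    return [(rng.choice(OPS), magnitude) for _ in range(num_ops)]

rng = random.Random(0)
policy = sample_policy(num_ops=2, magnitude=9, rng=rng)
```

Searching this compact space means sweeping two scalars instead of exploring a combinatorial per-operation policy space, which is the efficiency the abstract is pointing at.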
  • Publication number: 20240211764
    Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described.
    Type: Application
    Filed: December 29, 2023
    Publication date: June 27, 2024
    Inventors: Mingxing Tan, Quoc V. Le
  • Publication number: 20240161398
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output that characterizes a scene at a current time step. In one aspect, one of the systems includes: a voxel neural network that generates a current early-stage feature representation of the current point cloud, a fusion subsystem that generates a current fused feature representation at the current time step; a backbone neural network that generates a current late-stage feature representation at the current time step, and an output neural network that generates an output that characterizes a scene at the current time step.
    Type: Application
    Filed: November 16, 2023
    Publication date: May 16, 2024
    Inventors: Tong He, Pei Sun, Zhaoqi Leng, Chenxi Liu, Mingxing Tan
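The four-stage pipeline in this abstract (voxel network, fusion subsystem, backbone, output network) composes naturally. The sketch below replaces each neural network with a trivial arithmetic stand-in just to show the data flow; none of these functions represent the actual networks.

```python
def voxel_features(points):
    # Stand-in for the voxel neural network: early-stage features.
    return [p * 0.5 for p in points]

def fuse(current, history):
    # Stand-in for the fusion subsystem: combine current early-stage
    # features with features carried over from earlier time steps.
    return [c + h for c, h in zip(current, history)]

def backbone(features):
    # Stand-in for the backbone: late-stage features.
    return [f * 2 for f in features]

def output_head(features):
    # Stand-in for the output network: a scene-level scalar.
    return sum(features)

history = [0.0, 0.0, 0.0]     # features from previous time steps
points = [1.0, 2.0, 3.0]      # current point cloud (toy 1-D stand-in)
early = voxel_features(points)
fused = fuse(early, history)
late = backbone(fused)
print(output_head(late))  # → 6.0
```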
  • Publication number: 20240135195
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model on training data. In one aspect, one of the methods includes: obtaining a training data set comprising a plurality of training inputs; obtaining data defining an original search space of a plurality of candidate data augmentation policies; generating, from the original search space, a compact search space that has one or more global hyperparameters; and training the machine learning model on the training data using one or more final data augmentation policies generated from the compact search space.
    Type: Application
    Filed: October 22, 2023
    Publication date: April 25, 2024
    Inventors: Zhaoqi Leng, Guowang Li, Chenxi Liu, Pei Sun, Tong He, Dragomir Anguelov, Mingxing Tan
  • Patent number: 11928574
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory usage, less power consumption), while remaining competitive with or even exceeding the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: March 12, 2024
    Assignee: GOOGLE LLC
    Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
  • Patent number: 11893491
    Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: February 6, 2024
    Assignee: Google LLC
    Inventors: Mingxing Tan, Quoc V. Le
  • Publication number: 20240005129
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for jointly determining neural network architectures and hardware accelerator architectures.
    Type: Application
    Filed: October 1, 2021
    Publication date: January 4, 2024
    Inventors: Yanqi Zhou, Amir Yazdanbakhsh, Berkin Akin, Daiyi Peng, Yuxiong Zhu, Mingxing Tan, Xuanyi Dong
  • Publication number: 20230359862
    Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Zihang Dai, Mingxing Tan, Quoc V. Le, Hanxiao Liu
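The relative attention mechanism described here, "the sum of a static convolution kernel with an adaptive attention matrix," can be written out directly: the pre-softmax logit for positions (i, j) is the input-adaptive dot product q_i·k_j plus a static, translation-invariant term indexed only by the relative position j − i. The dimensions and kernel values below are toy assumptions.

```python
import math

def relative_attention_logits(queries, keys, rel_kernel):
    # logits[i][j] = (q_i . k_j) + w[j - i]: adaptive term plus static
    # convolution-kernel term that depends only on relative position.
    n = len(queries)
    return [
        [sum(a * b for a, b in zip(queries[i], keys[j])) + rel_kernel[j - i]
         for j in range(n)]
        for i in range(n)
    ]

def softmax(row):
    exps = [math.exp(v) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

queries = [[1.0, 0.0], [0.0, 1.0]]
keys = [[1.0, 0.0], [0.0, 1.0]]
rel_kernel = {-1: 0.1, 0: 0.0, 1: 0.1}  # static kernel over relative offsets
attn = [softmax(row) for row in relative_attention_logits(queries, keys, rel_kernel)]
```

Because the kernel term depends only on j − i, it behaves like a convolution (translation invariant), while the q·k term stays input-adaptive; summing them before the softmax gives the hybrid the abstract describes.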
  • Publication number: 20230351691
    Abstract: Methods, systems, and apparatus for processing point clouds using neural networks to perform a machine learning task. In one aspect, a system comprises one or more computers configured to obtain a set of point clouds captured by one or more sensors. Each point cloud includes a respective plurality of three-dimensional points. The one or more computers assign the three-dimensional points to respective voxels in a voxel grid, where the grid of voxels includes non-empty voxels to which one or more points are assigned and empty voxels to which no points are assigned. For each non-empty voxel, the one or more computers generate initial features based on the points that are assigned to the non-empty voxel. The one or more computers generate multi-scale features of the voxel grid, and the one or more computers generate an output for a point cloud processing task using the multi-scale features of the voxel grid.
    Type: Application
    Filed: March 13, 2023
    Publication date: November 2, 2023
    Inventors: Pei Sun, Mingxing Tan, Weiyue Wang, Fei Xia, Zhaoqi Leng, Dragomir Anguelov, Chenxi Liu
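The voxel-assignment step this abstract describes can be sketched concretely: each 3-D point maps to a voxel by integer division of its coordinates by the voxel size, and only non-empty voxels receive initial features. The choice of features (point count and centroid) is an illustrative assumption.

```python
from collections import defaultdict

def voxelize(points, voxel_size=1.0):
    # Assign each (x, y, z) point to a voxel key; voxels never touched
    # stay absent from the dict, i.e., empty.
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return voxels

def initial_features(voxels):
    # Per non-empty voxel: point count and centroid (toy feature choice).
    feats = {}
    for key, pts in voxels.items():
        n = len(pts)
        centroid = tuple(sum(c) / n for c in zip(*pts))
        feats[key] = {"count": n, "centroid": centroid}
    return feats

points = [(0.2, 0.1, 0.9), (0.8, 0.4, 0.3), (2.5, 0.1, 0.0)]
feats = initial_features(voxelize(points))
```

Representing only non-empty voxels keeps the structure sparse, which matters because real point clouds leave most of the voxel grid empty.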
  • Patent number: 11755883
    Abstract: A computer-implemented method for performing computer vision with reduced computational cost and improved accuracy can include obtaining, by a computing system including one or more computing devices, input data comprising an input tensor having one or more dimensions, providing, by the computing system, the input data to a machine-learned convolutional attention network, the machine-learned convolutional attention network including two or more network stages, and, in response to providing the input data to the machine-learned convolutional attention network, receiving, by the computing system, a machine-learning prediction from the machine-learned convolutional attention network. The convolutional attention network can include at least one attention block, wherein the attention block includes a relative attention mechanism, the relative attention mechanism including the sum of a static convolution kernel with an adaptive attention matrix.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: September 12, 2023
    Assignee: GOOGLE LLC
    Inventors: Zihang Dai, Hanxiao Liu, Mingxing Tan, Quoc V. Le
  • Publication number: 20230244904
Abstract: The present disclosure is directed to an automated neural architecture search approach for designing new neural network architectures such as, for example, resource-constrained mobile CNN models. In particular, the present disclosure provides systems and methods to perform neural architecture search using a novel factorized hierarchical search space that permits layer diversity throughout the network, thereby striking the right balance between flexibility and search space size. The resulting neural architectures can run faster and use fewer computing resources (e.g., less processing power, less memory usage, less power consumption), while remaining competitive with or even exceeding the performance (e.g., accuracy) of current state-of-the-art mobile-optimized models.
    Type: Application
    Filed: January 13, 2023
    Publication date: August 3, 2023
    Inventors: Mingxing Tan, Quoc Le, Bo Chen, Vijay Vasudevan, Ruoming Pang
  • Publication number: 20230154161
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using memory-optimized contrastive learning to train image encoder and text encoder neural networks.
    Type: Application
    Filed: November 16, 2022
    Publication date: May 18, 2023
    Inventors: Hieu Hy Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Wei Yu, Mingxing Tan, Quoc V. Le
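The underlying contrastive objective for aligning an image encoder and a text encoder, shown here as a minimal sketch that does not include the memory optimization this publication claims, is a symmetric cross-entropy over pairwise similarities: matching image/text pairs are pulled together and mismatched pairs pushed apart.

```python
import math

def contrastive_loss(image_embs, text_embs, temperature=0.1):
    # Pairwise similarity logits between every image and every text.
    n = len(image_embs)
    sims = [[sum(a * b for a, b in zip(img, txt)) / temperature
             for txt in text_embs] for img in image_embs]

    def ce(logits, target):
        # Cross-entropy of one row of logits against the matching index.
        total = sum(math.exp(v) for v in logits)
        return -math.log(math.exp(logits[target]) / total)

    # Symmetric: image-to-text over rows, text-to-image over columns.
    img_to_txt = sum(ce(sims[i], i) for i in range(n)) / n
    txt_to_img = sum(ce([sims[j][i] for j in range(n)], i) for i in range(n)) / n
    return (img_to_txt + txt_to_img) / 2

aligned = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

When the paired embeddings agree, the loss is near zero; shuffling the text embeddings against the images makes it large, which is the signal the two encoders are trained on.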