Patents by Inventor Thang Minh Luong
Thang Minh Luong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240160857
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a “generator” (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Application
Filed: January 25, 2024
Publication date: May 16, 2024
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
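The masking-and-replacement step described in this abstract can be sketched as follows. This is a toy illustration, not the patented implementation: the generator is a placeholder callable, and all names (`make_rtd_example`, `mask_rate`) are invented for the example.

```python
import random

def make_rtd_example(tokens, generator_sample, mask_rate=0.15, seed=0):
    """Build one replaced-token-detection example: mask ~mask_rate of
    the positions, fill each masked position with a token proposed by
    the generator, and label every position with whether it still
    matches the original input (1 = original, 0 = replaced)."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n_mask)
    corrupted = list(tokens)
    for pos in positions:
        corrupted[pos] = generator_sample(tokens, pos)
    labels = [1 if corrupted[i] == tokens[i] else 0 for i in range(len(tokens))]
    return corrupted, labels

# Toy "generator": always proposes the same plausible alternative.
tokens = ["the", "chef", "cooked", "the", "meal"]
corrupted, labels = make_rtd_example(tokens, lambda toks, i: "ate")
```

The encoder would then be trained as a binary classifier over `labels`, which is what makes the task discriminative rather than generative.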
-
Publication number: 20240112088
Abstract: Systems and methods are provided for vector-quantized image modeling using vision transformers and improved codebook handling. In particular, the present disclosure provides a Vector-quantized Image Modeling (VIM) approach that involves pretraining a machine learning model (e.g., Transformer model) to predict rasterized image tokens autoregressively. The discrete image tokens can be encoded from a learned Vision-Transformer-based VQGAN (example implementations of which can be referred to as ViT-VQGAN). The present disclosure proposes multiple improvements over vanilla VQGAN from architecture to codebook learning, yielding better efficiency and reconstruction fidelity. The improved ViT-VQGAN further improves vector-quantized image modeling tasks, including unconditional image generation, conditioned image generation (e.g., class-conditioned image generation), and unsupervised representation learning.
Type: Application
Filed: November 27, 2023
Publication date: April 4, 2024
Inventors: Jiahui Yu, Xin Li, Han Zhang, Vijay Vasudevan, Alexander Yeong-Shiuh Ku, Jason Michael Baldridge, Yuanzhong Xu, Jing Yu Koh, Thang Minh Luong, Gunjan Baid, Zirui Wang, Yonghui Wu
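The codebook lookup at the heart of vector quantization can be illustrated with a minimal sketch. The names and the tiny 2-D codebook are invented for the example; a real ViT-VQGAN learns the codebook jointly and operates on high-dimensional patch embeddings.

```python
def quantize(patch_vec, codebook):
    """Return the index of the codebook entry closest to the patch
    embedding (squared L2 distance); that index is the discrete image
    token the autoregressive Transformer models."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(patch_vec, codebook[i]))

# Tiny 2-D codebook with three codes; real codebooks are learned and
# far larger and higher-dimensional.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
patches = [[0.9, 0.1], [0.1, 0.1], [0.2, 0.8]]
tokens = [quantize(v, codebook) for v in patches]
```

The resulting `tokens` sequence is what "predict rasterized image tokens autoregressively" refers to: the continuous image is reduced to a sequence of discrete indices.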
-
Patent number: 11922281
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
Type: Grant
Filed: October 31, 2022
Date of Patent: March 5, 2024
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
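The abstract does not spell out the annealing schedule; a common reading of "teacher annealing" is a training target that interpolates from the teacher's soft prediction toward the gold label as training progresses. A minimal sketch under that assumption (the linear schedule and all names are illustrative):

```python
def annealed_target(gold, teacher, step, total_steps):
    """Teacher-annealing target (assumed linear schedule): start from
    the teacher's soft prediction and shift weight toward the gold
    label as training progresses (w goes 0 -> 1)."""
    w = min(1.0, step / total_steps)
    return [w * g + (1.0 - w) * t for g, t in zip(gold, teacher)]

gold = [0.0, 1.0, 0.0]      # one-hot gold label
teacher = [0.2, 0.6, 0.2]   # teacher's soft prediction
early = annealed_target(gold, teacher, step=0, total_steps=100)
late = annealed_target(gold, teacher, step=100, total_steps=100)
```

Early in training the student matches the teacher; by the end it trains on the gold labels alone.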
-
Patent number: 11914969
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a “generator” (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Grant
Filed: September 19, 2022
Date of Patent: February 27, 2024
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20230049747
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
Type: Application
Filed: October 31, 2022
Publication date: February 16, 2023
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20230015737
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a “generator” (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Application
Filed: September 19, 2022
Publication date: January 19, 2023
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20220383206
Abstract: Systems and methods can leverage task-specific unlabeled data to improve downstream performance in data-constrained scenarios. Given a target task, a first technique proposed herein, which can be referred to as task augmentation, uses unlabeled text from the target domain to synthesize a large amount of in-domain training data for an auxiliary task. A second technique provides a self-training algorithm, where a model learns to improve itself using its predictions on unlabeled examples.
Type: Application
Filed: May 27, 2022
Publication date: December 1, 2022
Inventors: Thang Minh Luong, Tu Thanh Vu, Quoc V. Le, Grady Hayes Simon
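The self-training loop of the second technique can be sketched roughly as follows. The confidence-threshold pseudo-labeling shown here is one standard formulation, assumed for illustration; `train_fn` and the toy majority-class model are invented stand-ins.

```python
def self_train(train_fn, labeled, unlabeled, threshold=0.9, rounds=1):
    """Self-training sketch: fit a model on labeled data, pseudo-label
    the unlabeled examples the model is confident about, add them to
    the training set, and refit.

    train_fn(data) must return a callable predict(x) -> (label, confidence).
    """
    data = list(labeled)
    model = train_fn(data)
    for _ in range(rounds):
        for x in unlabeled:
            label, conf = model(x)
            if conf >= threshold:
                data.append((x, label))
        model = train_fn(data)
    return model, data

# Toy stand-in model: predicts the majority training label, with
# confidence equal to that label's frequency in the training set.
def train_fn(data):
    labels = [y for _, y in data]
    top = max(set(labels), key=labels.count)
    return lambda x: (top, labels.count(top) / len(labels))

model, data = self_train(train_fn, [(1, "a"), (2, "a"), (3, "b")], [4, 5],
                         threshold=0.6)
```

The model "learns to improve itself" in the sense that its own confident predictions become additional training signal on the next fit.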
-
Patent number: 11501168
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for structuring and training a recurrent neural network. This disclosure describes a technique that improves the ability to capture long-term dependencies in recurrent neural networks by adding an unsupervised auxiliary loss at one or more anchor points to the original objective. This auxiliary loss forces the network to either reconstruct previous events or predict next events in a sequence, making truncated backpropagation feasible for long sequences and also improving full backpropagation through time.
Type: Grant
Filed: February 11, 2019
Date of Patent: November 15, 2022
Assignee: Google LLC
Inventors: Andrew M. Dai, Quoc V. Le, Hoang Trieu Trinh, Thang Minh Luong
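The anchor-point mechanism can be illustrated with a small helper that picks anchors and the segments the auxiliary loss would operate on. This is a sketch with invented names; the actual method attaches reconstruction or next-event-prediction losses to these segments during training.

```python
import random

def auxiliary_segments(seq_len, n_anchors, segment_len, seed=0):
    """Pick anchor positions in a long sequence; at each anchor the
    training objective gains an extra unsupervised loss over the
    segment_len positions just before it (reconstruct them, or,
    symmetrically, predict the ones after). Returns (anchor, positions)."""
    rng = random.Random(seed)
    anchors = rng.sample(range(segment_len, seq_len), n_anchors)
    return [(a, list(range(a - segment_len, a))) for a in sorted(anchors)]

segs = auxiliary_segments(seq_len=1000, n_anchors=2, segment_len=5)
```

Because each auxiliary loss only spans a short segment, gradients for it can be computed locally, which is what makes truncated backpropagation viable on very long sequences.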
-
Patent number: 11488067
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
Type: Grant
Filed: May 11, 2020
Date of Patent: November 1, 2022
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Patent number: 11481609
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for incorporating a computationally efficient expressive output layer in a neural network. The output layer is configured to map a received hidden state to a probability distribution over a vocabulary of possible outputs by generating, from the hidden state, a respective context embedding for each of a plurality of gates; for each of the possible outputs in the vocabulary, computing a gated logit for the possible output by applying an output embedding for the possible output to a weighted sum of the context embeddings; and generating the probability distribution over the vocabulary of possible outputs by applying a softmax to the gated logits for the possible outputs in the vocabulary.
Type: Grant
Filed: May 13, 2020
Date of Patent: October 25, 2022
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Zhilin Yang
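The per-gate context embeddings, weighted sum, and softmax over gated logits might be sketched as below. This is an illustrative reading of the abstract: the gate weights are passed in directly here, whereas in practice they would themselves be computed from the hidden state, and all names and the tiny dimensions are invented.

```python
import math

def gated_output_layer(hidden, gate_mats, gate_weights, out_embeddings):
    """Sketch of the expressive output layer: one context embedding per
    gate (here a matrix-vector product), a weighted sum of those
    contexts, a gated logit per vocabulary item (dot product with its
    output embedding), and a softmax over the logits."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    contexts = [matvec(M, hidden) for M in gate_mats]
    mixed = [sum(w * c[i] for w, c in zip(gate_weights, contexts))
             for i in range(len(contexts[0]))]
    logits = [sum(e * m for e, m in zip(emb, mixed)) for emb in out_embeddings]
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

hidden = [1.0, 0.0]
gate_mats = [[[1.0, 0.0], [0.0, 1.0]],   # gate 1: identity
             [[0.0, 1.0], [1.0, 0.0]]]   # gate 2: swap
gate_weights = [0.5, 0.5]  # given here; in the patent they depend on the hidden state
out_embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3-word vocabulary
probs = gated_output_layer(hidden, gate_mats, gate_weights, out_embeddings)
```

Using several gated contexts rather than a single projection is what lifts the expressiveness ceiling of a plain softmax output layer at modest extra cost.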
-
Patent number: 11449684
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a “generator” (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Grant
Filed: September 21, 2020
Date of Patent: September 20, 2022
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20220215209
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model. One of the methods includes receiving training data comprising a plurality of unlabeled training inputs and a plurality of labeled training inputs; generating augmented training data, comprising generating, for each of the plurality of unlabeled training inputs, a respective augmented training input by applying a data augmentation technique to the unlabeled training input; and training the machine learning model on the augmented training data. In particular, but not exclusively, the model may be trained for perceptual tasks (e.g. tasks relating to vision or speech).
Type: Application
Filed: April 24, 2020
Publication date: July 7, 2022
Inventors: Thang Minh Luong, Quoc V. Le, Qizhe Xie, Zihang Dai
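Training with augmented unlabeled data is often implemented as a consistency loss between the model's predictions on an input and on its augmented version; a minimal sketch under that assumption. The squared distance here is one possible divergence, and the toy `predict`/`augment` callables are invented for the example.

```python
def consistency_loss(predict, unlabeled, augment):
    """Consistency term for unlabeled inputs: the model's predicted
    distribution on an input and on its augmented version should agree.
    Squared distance between the two distributions, averaged over inputs."""
    total = 0.0
    for x in unlabeled:
        p, q = predict(x), predict(augment(x))
        total += sum((a - b) ** 2 for a, b in zip(p, q))
    return total / len(unlabeled)

# Toy two-class "model" and two augmentations (identity vs. a shift).
predict = lambda x: [x / (x + 1.0), 1.0 / (x + 1.0)]
zero = consistency_loss(predict, [1.0, 2.0], lambda x: x)
shifted = consistency_loss(predict, [1.0, 2.0], lambda x: x + 1.0)
```

The loss is zero exactly when the model is invariant to the augmentation, which is the property the unlabeled data is used to encourage.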
-
Publication number: 20220083840
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, used to implement a self-training technique for generating neural network (NN) models. A first model is generated in response to training a first NN using labeled data. A respective pseudo label is generated for each item of unlabeled data when items of unlabeled data are processed using the first model. A second NN is used to process each item of a combined dataset to train the second NN. The combined dataset includes items of labeled data and a corresponding item for each respective pseudo label. Attributes of items in the combined dataset are modified to inject noise into the combined dataset when the second NN is trained. A second model is generated after the second NN is trained by processing items in the combined dataset, including processing items that represent the noise injected into the combined dataset.
Type: Application
Filed: September 11, 2020
Publication date: March 17, 2022
Inventors: Thang Minh Luong, Quoc V. Le, Qizhe Xie
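The combined-dataset construction can be sketched as follows. `teacher` and `noise` are placeholder callables, and the identity "noise" in the example is only there to keep the output deterministic; a real pipeline would randomize the inputs (augmentation, dropout, and so on).

```python
import random

def noisy_student_data(labeled, unlabeled, teacher, noise, seed=0):
    """Build the second model's training set: keep the labeled items,
    pseudo-label each unlabeled item with the first (teacher) model,
    then pass every input through a noise function before the second
    network trains on it."""
    rng = random.Random(seed)
    combined = list(labeled) + [(x, teacher(x)) for x in unlabeled]
    return [(noise(x, rng), y) for x, y in combined]

# Identity "noise" keeps this example deterministic.
result = noisy_student_data([(1, "a")], [2, 3],
                            teacher=lambda x: "b",
                            noise=lambda x, rng: x)
```

The key point is that the pseudo labels come from clean inputs, while the second network sees noised inputs, forcing it to be more robust than its teacher.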
-
Publication number: 20220067304
Abstract: Systems and methods are provided for training and using energy-based language models such as cloze language models. In particular, one aspect of the present disclosure is directed to an energy-based cloze language model for representation learning over text. In some instances, the models provided herein can be referred to as the “Electric” model. Similar to the BERT model, example models proposed herein can be a conditional generative model of tokens given their contexts. However, example models proposed herein do not mask text or output a full distribution over tokens that could occur in a context. Instead, the example proposed models assign a scalar energy score to each input token. Another aspect of the present disclosure provides techniques to train the proposed models to assign low energies to data tokens and high energies to other ones using an algorithm based on noise-contrastive estimation.
Type: Application
Filed: August 27, 2021
Publication date: March 3, 2022
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
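One common noise-contrastive estimation setup for an energy-based token model scores a token as data with probability sigmoid(-energy - log p_noise); the sketch below uses that form, with the partition term and the noise-sample count omitted for brevity. All names are invented, and this is an assumed formulation rather than the patent's exact training objective.

```python
import math

def nce_loss(energy, data_tokens, noise_tokens, log_noise_prob):
    """Noise-contrastive estimation sketch for an energy-based model:
    model the probability that a token is real data as
    sigmoid(-energy(t) - log_noise_prob(t)), then apply binary
    cross-entropy so data tokens get low energy and noise tokens high."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    loss = 0.0
    for t in data_tokens:
        loss -= math.log(sigmoid(-energy(t) - log_noise_prob(t)))
    for t in noise_tokens:
        loss -= math.log(1.0 - sigmoid(-energy(t) - log_noise_prob(t)))
    return loss

data, noise = ["the"], ["zzz"]
log_pn = lambda t: -1.0                          # toy noise log-probability
low_on_data = lambda t: -2.0 if t in data else 2.0
reversed_energy = lambda t: 2.0 if t in data else -2.0
good = nce_loss(low_on_data, data, noise, log_pn)
bad = nce_loss(reversed_energy, data, noise, log_pn)
```

An energy function that ranks data below noise yields a smaller loss than one that ranks them the other way around, which is the training signal the abstract describes.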
-
Patent number: 11080589
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a target sequence including a respective output at each of multiple output time steps from respective encoded representations of inputs in an input sequence. The method includes, for each output time step, starting from the position, in the input order, of the encoded representation that was selected as a preceding context vector at a preceding output time step, traversing the encoded representations until an encoded representation is selected as a current context vector at the output time step. A decoder neural network processes the current context vector and a preceding output at the preceding output time step to generate a respective output score for each possible output and to update the hidden state of the decoder recurrent neural network. An output is selected for the output time step using the output scores.
Type: Grant
Filed: July 8, 2019
Date of Patent: August 3, 2021
Assignee: Google LLC
Inventors: Ron J. Weiss, Thang Minh Luong, Peter J. Liu, Colin Abraham Raffel, Douglas Eck
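The forward-only traversal can be sketched as a loop that never moves backward through the encoded representations. This is a simplification: `select` stands in for the learned selection mechanism, and real decoding interleaves this traversal with the decoder network at each output step.

```python
def monotonic_contexts(encoded, select):
    """Choose context positions by scanning forward only: each step
    resumes from just past the previously selected position and never
    moves backward through the encoded representations."""
    contexts, pos = [], 0
    while pos < len(encoded):
        # Traverse forward until select() accepts a representation.
        while pos < len(encoded) and not select(encoded[pos]):
            pos += 1
        if pos < len(encoded):
            contexts.append(pos)
            pos += 1  # the next output step starts here
    return contexts

contexts = monotonic_contexts([1, 0, 1, 1, 0], bool)
```

Because the scan resumes from the previous selection, attention is monotonic in the input order, which permits streaming, linear-time decoding.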
-
Publication number: 20210089724
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a “generator” (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Application
Filed: September 21, 2020
Publication date: March 25, 2021
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20200364543
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for incorporating a computationally efficient expressive output layer in a neural network. The output layer is configured to map a received hidden state to a probability distribution over a vocabulary of possible outputs by generating, from the hidden state, a respective context embedding for each of a plurality of gates; for each of the possible outputs in the vocabulary, computing a gated logit for the possible output by applying an output embedding for the possible output to a weighted sum of the context embeddings; and generating the probability distribution over the vocabulary of possible outputs by applying a softmax to the gated logits for the possible outputs in the vocabulary.
Type: Application
Filed: May 13, 2020
Publication date: November 19, 2020
Inventors: Thang Minh Luong, Quoc V. Le, Zhilin Yang
-
Publication number: 20200364617
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
Type: Application
Filed: May 11, 2020
Publication date: November 19, 2020
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Publication number: 20190332919
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a target sequence including a respective output at each of multiple output time steps from respective encoded representations of inputs in an input sequence. The method includes, for each output time step, starting from the position, in the input order, of the encoded representation that was selected as a preceding context vector at a preceding output time step, traversing the encoded representations until an encoded representation is selected as a current context vector at the output time step. A decoder neural network processes the current context vector and a preceding output at the preceding output time step to generate a respective output score for each possible output and to update the hidden state of the decoder recurrent neural network. An output is selected for the output time step using the output scores.
Type: Application
Filed: July 8, 2019
Publication date: October 31, 2019
Inventors: Ron J. Weiss, Thang Minh Luong, Peter J. Liu, Colin Abraham Raffel, Douglas Eck
-
Publication number: 20190251449
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for structuring and training a recurrent neural network. This disclosure describes a technique that improves the ability to capture long-term dependencies in recurrent neural networks by adding an unsupervised auxiliary loss at one or more anchor points to the original objective. This auxiliary loss forces the network to either reconstruct previous events or predict next events in a sequence, making truncated backpropagation feasible for long sequences and also improving full backpropagation through time.
Type: Application
Filed: February 11, 2019
Publication date: August 15, 2019
Inventors: Andrew M. Dai, Quoc V. Le, Hoang Trieu Trinh, Thang Minh Luong