Patents by Inventor Quoc V. Le
Quoc V. Le has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240315217
Abstract: A smart aquaculture growth and health monitoring system and method for monitoring the growth and health of an aquatic species present in an aquaculture growth habitat. The system comprises a georeferenced location beacon for the growth habitat; a sample container that samples water and aquatic species from the growth habitat and is configured to permit an electronic device having a camera, such as a smartphone, to acquire digital visual data on the sample; and a processor communicatively linkable to the electronic device and optionally to a communications network. The processor is operable to receive the digital visual data; determine, based on the digital visual data, growth and/or health parameters of the aquatic species in the sample; and transmit data on the growth and/or health parameters of the aquatic species back to the electronic device.
Type: Application
Filed: November 24, 2020
Publication date: September 26, 2024
Applicant: RYNAN TECHNOLOGIES PTE. LTD.
Inventors: HOANG LUOM PHAM, QUOC TOAN TRAN, THANH TRIEU LE, QUOC CUONG HONG, HOANG PHUONG SON, MY T NGUYEN, NGOC TRANG DONG, DANH V HO, MINH TRUONG DOAN, TAN DAT BUI
-
Patent number: 12100391
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One method includes obtaining an input acoustic sequence, the input acoustic sequence representing an utterance, and the input acoustic sequence comprising a respective acoustic feature representation at each of a first number of time steps; processing the input acoustic sequence using a first neural network to convert the input acoustic sequence into an alternative representation for the input acoustic sequence; processing the alternative representation for the input acoustic sequence using an attention-based Recurrent Neural Network (RNN) to generate, for each position in an output sequence order, a set of substring scores that includes a respective substring score for each substring in a set of substrings; and generating a sequence of substrings that represent a transcription of the utterance.
Type: Grant
Filed: October 7, 2021
Date of Patent: September 24, 2024
Assignee: Google LLC
Inventors: William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Noam M. Shazeer
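A minimal sketch of one attention-based decoding step like the one described above, in plain Python with random stand-in weights (the encoder output, decoder state, scoring matrix, and substring vocabulary here are toy placeholders, not the patented system):

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_decoder_step(encoder_states, decoder_state, score_weights):
    """One decoding step: attend over the encoder's alternative representation
    of the acoustic sequence, then score every substring in the vocabulary."""
    # Dot-product attention energy for each encoder time step.
    energies = [sum(h * s for h, s in zip(state, decoder_state))
                for state in encoder_states]
    weights = softmax(energies)
    # Context vector: attention-weighted sum of the encoder states.
    dim = len(decoder_state)
    context = [sum(w * state[d] for w, state in zip(weights, encoder_states))
               for d in range(dim)]
    features = decoder_state + context
    # One linear scoring row per substring in the vocabulary.
    logits = [sum(w * f for w, f in zip(row, features)) for row in score_weights]
    return softmax(logits)  # substring score distribution for this position

random.seed(0)
T, D = 4, 3
substrings = ["a", "b", "ab", "ba", "</s>"]
enc = [[random.gauss(0, 1) for _ in range(D)] for _ in range(T)]   # alternative representation
dec = [random.gauss(0, 1) for _ in range(D)]                        # current decoder state
W = [[random.gauss(0, 1) for _ in range(2 * D)] for _ in substrings]
scores = attention_decoder_step(enc, dec, W)
```

At each output position the decoder would pick (or beam-search over) substrings according to these scores and feed the choice back in; that outer loop is omitted here.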
-
Patent number: 12080055
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training an image representation neural network.
Type: Grant
Filed: March 17, 2022
Date of Patent: September 3, 2024
Assignee: Google LLC
Inventors: Tsung-Yi Lin, Barret Zoph, Ekin Dogus Cubuk, Golnaz Ghiasi, Quoc V. Le
-
Patent number: 12079695
Abstract: A computer-implemented method of generating scale-permuted models can generate models having improved accuracy and reduced evaluation computational requirements. The method can include defining, by a computing system including one or more computing devices, a search space including a plurality of candidate permutations of a plurality of candidate feature blocks, each of the plurality of candidate feature blocks having a respective scale. The method can include performing, by the computing system, a plurality of search iterations by a search algorithm to select a scale-permuted model from the search space, the scale-permuted model based at least in part on a candidate permutation of the plurality of candidate permutations.
Type: Grant
Filed: October 1, 2020
Date of Patent: September 3, 2024
Assignee: GOOGLE LLC
Inventors: Xianzhi Du, Yin Cui, Tsung-Yi Lin, Quoc V. Le, Pengchong Jin, Mingxing Tan, Golnaz Ghiasi, Xiaodan Song
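The search-iteration loop can be illustrated with a toy random search over permutations of block scales; the reward function below is a stand-in for actually training and evaluating each candidate model, which is the expensive step the patent's search algorithm wraps:

```python
import itertools
import random

def toy_reward(permutation):
    """Stand-in for training and evaluating a candidate scale-permuted model:
    here we simply prefer permutations that interleave coarse and fine scales."""
    return sum(abs(a - b) for a, b in zip(permutation, permutation[1:]))

def search_scale_permutation(block_scales, iterations=20, seed=0):
    """Random-search loop over the search space of candidate permutations,
    keeping the candidate with the best (toy) quality measure."""
    rng = random.Random(seed)
    candidates = list(itertools.permutations(block_scales))
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = rng.choice(candidates)
        score = toy_reward(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Each candidate feature block has a scale (e.g. output strides 4, 8, 16, 32).
best_perm, best_val = search_scale_permutation([4, 8, 16, 32])
```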
-
Publication number: 20240289395
Abstract: Implementations relate to helping a large language model generate factual responses to prompts that request factual content. The large language model may receive a prompt context and a plurality of encoded context passages as input. The large language model is trained to determine whether or not to utilize the encoded context passages in generating the response. Implementations also relate to different methods of fine-tuning the responses generated by the large language model through query refinements, response re-writes, and evaluation of factual accuracy.
Type: Application
Filed: December 4, 2023
Publication date: August 29, 2024
Inventors: Hao Zhou, Shrestha Basu Mallick, Trevor Strohman, Patricia Luisa Romero Domingo, Amirhossein Kiani, Yu Du, Xinying Song, Heng-Tze Cheng, Quoc V. Le, Ed Huai-Hsin Chi, Christopher Jamie Maclean Hall
-
Publication number: 20240273410
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model. One of the methods includes obtaining a training data set for training a machine learning model, the training data set comprising a plurality of training inputs; determining a plurality of data augmentation policies, wherein each data augmentation policy defines a procedure for processing a training input to generate a transformed training input; for each data augmentation policy, training the machine learning model using the data augmentation policy; determining, for each data augmentation policy, a quality measure of the machine learning model that has been trained using the data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the machine learning models.
Type: Application
Filed: December 18, 2023
Publication date: August 15, 2024
Inventors: Jonathon Shlens, Quoc V. Le, Ekin Dogus Cubuk, Barret Zoph
-
Patent number: 12062227
Abstract: Systems and methods of the present disclosure can include a computer-implemented method for efficient machine-learned model training. The method can include obtaining a plurality of training samples for a machine-learned model. The method can include, for one or more first training iterations, training, based at least in part on a first regularization magnitude configured to control a relative effect of one or more regularization techniques, the machine-learned model using one or more respective first training samples of the plurality of training samples. The method can include, for one or more second training iterations, training, based at least in part on a second regularization magnitude greater than the first regularization magnitude, the machine-learned model using one or more respective second training samples of the plurality of training samples.
Type: Grant
Filed: September 13, 2022
Date of Patent: August 13, 2024
Assignee: GOOGLE LLC
Inventors: Mingxing Tan, Quoc V. Le
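The two-phase regularization schedule can be sketched as a function of the training iteration; the halfway split point and both magnitudes below are illustrative values, not taken from the patent:

```python
def regularization_magnitude(iteration, total_iterations,
                             first_magnitude=0.1, second_magnitude=0.5):
    """Return the regularization strength (e.g. a dropout rate or augmentation
    intensity) for one training iteration: a smaller magnitude during the first
    training iterations, then a larger magnitude for the remaining iterations."""
    if iteration < total_iterations // 2:
        return first_magnitude
    return second_magnitude

# Magnitude applied at each of 10 training iterations.
schedule = [regularization_magnitude(i, 10) for i in range(10)]
```

A training loop would read this value each step and scale whatever regularization techniques are in use (dropout, weight decay, augmentation strength) accordingly.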
-
Publication number: 20240249058
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a computer chip placement. One of the methods includes obtaining netlist data for a computer chip; and generating a computer chip placement, comprising placing a respective macro node at each time step in a sequence comprising a plurality of time steps, the placing comprising, for each time step: generating an input representation for the time step; processing the input representation using a node placement neural network having a plurality of network parameters, wherein the node placement neural network is configured to process the input representation in accordance with current values of the network parameters to generate a score distribution over a plurality of positions on the surface of the computer chip; and assigning the macro node to be placed at the time step to a position from the plurality of positions using the score distribution.
Type: Application
Filed: December 22, 2023
Publication date: July 25, 2024
Inventors: Anna Darling Goldie, Azalia Mirhoseini, Ebrahim Songhori, Wenjie Jiang, Shen Wang, Roger David Carpenter, Young-Joon Lee, Mustafa Nazim Yazgan, Chian-min Richard Ho, Quoc V. Le, James Laudon, Jeffrey Adgate Dean, Kavya Srinivasa Setty, Omkar Pathak
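One placement time step can be sketched as follows. The node placement network's scores are replaced by random numbers, greedy selection stands in for using the score distribution more generally, and the grid is a flat list of positions; all of these are simplifications of the patented method:

```python
import math
import random

def place_macro(logits, occupied):
    """One placement time step: drop grid positions that already hold a macro,
    normalize the remaining scores into a distribution over positions, and take
    the highest-probability free position (a greedy stand-in for sampling)."""
    free = {pos: l for pos, l in enumerate(logits) if pos not in occupied}
    top = max(free.values())
    exps = {pos: math.exp(l - top) for pos, l in free.items()}
    total = sum(exps.values())
    probs = {pos: e / total for pos, e in exps.items()}
    return max(probs, key=probs.get), probs

rng = random.Random(0)
grid_positions = 6
occupied, placements = set(), []
for _ in range(3):  # place three macro nodes, one per time step
    scores = [rng.gauss(0, 1) for _ in range(grid_positions)]  # stand-in for the network output
    pos, probs = place_macro(scores, occupied)
    occupied.add(pos)
    placements.append(pos)
```

Masking already-occupied positions before normalizing is what guarantees each macro lands on a distinct position.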
-
Publication number: 20240242125
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model. In one aspect, a method includes: receiving training data for training a machine learning model to perform a particular machine learning task; determining multiple data augmentation policies, comprising, at each of multiple time steps: generating a current data augmentation policy based on quality measures of data augmentation policies generated at previous time steps; training a machine learning model on the training data using the current data augmentation policy; and determining a quality measure of the current data augmentation policy using the machine learning model after it has been trained using the current data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the determined data augmentation policies.
Type: Application
Filed: February 22, 2024
Publication date: July 18, 2024
Inventors: Vijay Vasudevan, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le
-
Publication number: 20240231667
Abstract: Aspects of the disclosure are directed to a heterogeneous machine learning accelerator system with compute and memory nodes connected by high speed chip-to-chip interconnects. While existing remote/disaggregated memory may require memory expansion via remote processing units, aspects of the disclosure add memory nodes into machine learning accelerator clusters via the chip-to-chip interconnects without needing assistance from remote processing units to achieve higher performance, simpler software stack, and/or lower cost. The memory nodes may support prefetch and intelligent compression to enable the use of low cost memory without performance degradation.
Type: Application
Filed: January 10, 2023
Publication date: July 11, 2024
Inventors: Sheng Li, Sridhar Lakshmanamurthy, Norman Paul Jouppi, Martin Guy Dixon, Daniel Stodolsky, Quoc V. Le, Liqun Cheng, Erik Karl Norden, Parthasarathy Ranganathan
-
Patent number: 12033038
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model. In one aspect, a method includes: receiving training data for training a machine learning model to perform a particular machine learning task; determining multiple data augmentation policies, comprising, at each of multiple time steps: generating a current data augmentation policy based on quality measures of data augmentation policies generated at previous time steps; training a machine learning model on the training data using the current data augmentation policy; and determining a quality measure of the current data augmentation policy using the machine learning model after it has been trained using the current data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the determined data augmentation policies.
Type: Grant
Filed: October 1, 2020
Date of Patent: July 9, 2024
Assignee: Google LLC
Inventors: Vijay Vasudevan, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le
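The time-step loop in this abstract, where each new policy is generated from the quality measures of earlier ones, can be sketched as follows. The two-parameter policy (a rotation angle and a brightness shift) and the scoring function are toy stand-ins for training and evaluating a real model with each policy:

```python
import random

def train_and_measure(policy, rng):
    """Stand-in for training the model with `policy` and measuring its quality:
    a noisy score that happens to favor moderate rotation and brightness."""
    rotation, brightness = policy
    return 1.0 - abs(rotation - 15) / 30 - abs(brightness - 0.2) + rng.gauss(0, 0.01)

def search_augmentation_policy(steps=15, seed=0):
    """At each time step, propose a policy informed by the best-scoring policy
    from previous time steps, train and score it, and finally return the best."""
    rng = random.Random(seed)
    history = []  # (quality measure, policy) pairs
    for _ in range(steps):
        if history:
            # Perturb the best policy found at previous time steps.
            _, (rotation, brightness) = max(history)
            policy = (rotation + rng.gauss(0, 3), brightness + rng.gauss(0, 0.05))
        else:
            policy = (rng.uniform(0, 30), rng.uniform(0, 0.4))
        history.append((train_and_measure(policy, rng), policy))
    return max(history)

best_quality, best_policy = search_augmentation_policy()
```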
-
Publication number: 20240211764
Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described.
Type: Application
Filed: December 29, 2023
Publication date: June 27, 2024
Inventors: Mingxing Tan, Quoc V. Le
-
Publication number: 20240202519
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating document vector representations. One of the methods includes obtaining a new document; and determining a vector representation for the new document using a trained neural network system, wherein the trained neural network system has been trained to receive an input document and a sequence of words from the input document and to generate a respective word score for each word in a set of words, wherein each of the respective word scores represents a predicted likelihood that the corresponding word follows a last word in the sequence in the input document, and wherein determining the vector representation for the new document using the trained neural network system comprises iteratively providing each of the plurality of sequences of words to the trained neural network system to determine the vector representation for the new document using gradient descent.
Type: Application
Filed: December 22, 2023
Publication date: June 20, 2024
Inventor: Quoc V. Le
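The inference step, determining a new document's vector by gradient descent while the trained network stays frozen, can be sketched with a tiny softmax word-scoring model. The vocabulary, dimensions, learning rate, and frozen weights below are illustrative stand-ins for a trained system:

```python
import math
import random

random.seed(0)
vocab = ["the", "cat", "sat", "mat"]
DIM = 3
# Frozen word-scoring weights, standing in for the trained network.
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in vocab]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def next_word_probs(doc_vec):
    """Predicted likelihood of each vocabulary word given the document vector."""
    return softmax([sum(w * d for w, d in zip(row, doc_vec)) for row in W])

def infer_doc_vector(observed_next_words, steps=500, lr=0.1):
    """Hold the trained weights fixed and fit only the document vector by
    gradient descent so it predicts the observed next words."""
    d = [0.0] * DIM
    for _ in range(steps):
        grad = [0.0] * DIM
        for target in observed_next_words:
            probs = next_word_probs(d)
            for j, row in enumerate(W):
                err = probs[j] - (1.0 if j == target else 0.0)
                for k in range(DIM):
                    grad[k] += err * row[k]
        d = [dk - lr * g / len(observed_next_words) for dk, g in zip(d, grad)]
    return d

targets = [1, 2]  # "cat" and "sat" are the observed next words
doc_vec = infer_doc_vector(targets)
loss_before = -sum(math.log(next_word_probs([0.0] * DIM)[t]) for t in targets)
loss_after = -sum(math.log(next_word_probs(doc_vec)[t]) for t in targets)
```

Only `doc_vec` is updated; the word-scoring weights never change, which is what distinguishes this inference procedure from ordinary training.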
-
Publication number: 20240160857
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a "generator" (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Application
Filed: January 25, 2024
Publication date: May 16, 2024
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
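The masking-and-replacement step that produces one training example for the encoder can be sketched directly; the uniform random choice below stands in for sampling from a small masked-language-model generator:

```python
import random

def make_replaced_token_example(tokens, generator_vocab, mask_rate=0.15, seed=0):
    """Build one encoder training example: mask out a subset of the input
    tokens, replace each masked position with a sample from a (stand-in)
    generator, and label every position as original (0) or replaced (1)."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_rate))
    masked_positions = set(rng.sample(range(len(tokens)), n_mask))
    corrupted, labels = [], []
    for i, tok in enumerate(tokens):
        if i in masked_positions:
            sample = rng.choice(generator_vocab)  # a real generator is a small masked LM
            corrupted.append(sample)
            labels.append(0 if sample == tok else 1)  # a lucky sample counts as original
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

tokens = ["the", "chef", "cooked", "the", "meal", "tonight", "at", "home"]
corrupted, labels = make_replaced_token_example(tokens, ["the", "ate", "meal", "dog"])
```

The encoder is then trained as a per-token binary classifier on `(corrupted, labels)` pairs, rather than reconstructing the masked tokens themselves.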
-
Publication number: 20240127791
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
Type: Application
Filed: November 21, 2023
Publication date: April 18, 2024
Inventors: Samuel Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
-
Publication number: 20240127058
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network using a priority queue. One of the methods includes maintaining data identifying a set of K output sequences that were previously generated; selecting at least one of the output sequences from the set of output sequences; for each selected output sequence, determining a respective score; determining, for each selected sequence, a respective first update to the current values of the controller parameters; generating a batch of new output sequences using the controller neural network; obtaining a respective reward for each of the new output sequences; determining, from the new output sequences and the output sequences in the maintained data, the K output sequences that have the highest rewards; and modifying the maintained data.
Type: Application
Filed: September 21, 2023
Publication date: April 18, 2024
Inventors: Mohammad Norouzi, Daniel Aaron Abolafia, Quoc V. Le
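The priority-queue update, merging a new batch with the maintained data and keeping the K highest-reward output sequences, can be sketched with a small helper; the sequences and reward function below are toy stand-ins for the controller's outputs and their rewards:

```python
import heapq

def update_top_k(maintained, new_sequences, reward_fn, k=3):
    """Merge newly generated output sequences with the maintained set and
    keep only the K sequences with the highest rewards (deduplicated)."""
    pool = {seq: reward_fn(seq) for seq in maintained}
    for seq in new_sequences:
        pool[seq] = reward_fn(seq)
    best = heapq.nlargest(k, pool.items(), key=lambda kv: kv[1])
    return [seq for seq, _ in best]

reward = lambda seq: sum(seq)  # toy reward: larger elements are better
maintained = [(1, 1), (2, 2), (0, 1)]            # previously generated sequences
new_batch = [(5, 5), (1, 0), (3, 3)]             # batch from the controller network
top = update_top_k(maintained, new_batch, reward)
```

The controller's parameter updates (the gradient steps on selected sequences) are a separate piece of the method and are omitted here.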
-
Patent number: 11954442
Abstract: The present disclosure is directed to systems and methods for performing reading comprehension with machine learning. More specifically, the present disclosure is directed to a Neural Symbolic Reader (example implementations of which may be referred to as NeRd), which includes a reader to encode the passage and question, and a programmer to generate a program for multi-step reasoning. By using operators like span selection, the program can be executed over a natural language text passage to generate an answer to a natural language text question. NeRd is domain-agnostic such that the same neural architecture works for different domains. Further, NeRd is compositional such that complex programs can be generated by compositionally applying the symbolic operators.
Type: Grant
Filed: August 6, 2020
Date of Patent: April 9, 2024
Assignee: GOOGLE LLC
Inventors: Chen Liang, Wei Yu, Quoc V. Le, Xinyun Chen, Dengyong Zhou
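Compositional execution of symbolic operators over a passage can be illustrated with a two-operator toy interpreter; SPAN and COUNT are illustrative operators only, not the patent's operator set, and the "program" here is hand-written rather than generated by a neural programmer:

```python
def execute_program(program, passage_tokens):
    """Execute a tiny symbolic program over a tokenized text passage.
    SPAN(i, j) selects tokens i..j; COUNT(subprogram) counts the words
    in the subprogram's result, showing compositional nesting."""
    op, *args = program
    if op == "SPAN":
        i, j = args
        return " ".join(passage_tokens[i:j + 1])
    if op == "COUNT":
        (subprogram,) = args
        return len(execute_program(subprogram, passage_tokens).split())
    raise ValueError(f"unknown operator: {op}")

passage = "the treaty was signed in Paris in 1898".split()
answer = execute_program(("SPAN", 5, 5), passage)            # span selection
count = execute_program(("COUNT", ("SPAN", 2, 4)), passage)  # composed operators
```

In the patented system the programmer network would emit such programs conditioned on the reader's encoding of the passage and question; executing them symbolically is what yields the final answer.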
-
Publication number: 20240112027
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing neural architecture search for machine learning models. In one aspect, a method comprises receiving training data for a machine learning task; generating a plurality of candidate neural networks for performing the machine learning task, wherein each candidate neural network comprises a plurality of instances of a layer block composed of a plurality of layers; for each candidate neural network, selecting a respective type for each of the plurality of layers from a set of layer types, training the candidate neural network, and evaluating a performance score for the trained candidate neural network as applied to the machine learning task; and determining a final neural network for performing the machine learning task based at least on the performance scores for the candidate neural networks.
Type: Application
Filed: September 28, 2023
Publication date: April 4, 2024
Inventors: Yanqi Zhou, Yanping Huang, Yifeng Lu, Andrew M. Dai, Siamak Shakeri, Zhifeng Chen, James Laudon, Quoc V. Le, Da Huang, Nan Du, David Richard So, Daiyi Peng, Yingwei Cui, Jeffrey Adgate Dean, Chang Lan
-
Patent number: 11922281
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
Type: Grant
Filed: October 31, 2022
Date of Patent: March 5, 2024
Assignee: Google LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
-
Patent number: 11914969
Abstract: Systems and methods are provided that train a machine-learned language encoding model through the use of a contrastive learning task. In particular, the present disclosure describes a contrastive learning task where the encoder learns to distinguish input tokens from plausible alternatives. In some implementations, on each training example the proposed method masks out some subset (e.g., 15%) of the original input tokens, replaces the masked tokens with samples from a "generator" (e.g., which may be a small masked language model), and then trains the encoder to predict whether each token comes from the original data or is a replacement produced by the generator.
Type: Grant
Filed: September 19, 2022
Date of Patent: February 27, 2024
Assignee: GOOGLE LLC
Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark