Patents by Inventor An V. Le

An V. Le has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10936828
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural translation systems with rare word processing. One of the methods trains a neural network translation system to track the source, in source sentences, of unknown words in target sentences, the sentences being in a source language and a target language, respectively. The method includes deriving alignment data from a parallel corpus, the alignment data identifying, in each pair of source and target language sentences in the parallel corpus, aligned source and target words; annotating the sentences in the parallel corpus according to the alignment data and a rare word model to generate a training dataset of paired source and target language sentences; and training a neural network translation model on the training dataset.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: March 2, 2021
    Assignee: Google LLC
    Inventors: Quoc V. Le, Minh-Thang Luong, Ilya Sutskever, Oriol Vinyals, Wojciech Zaremba
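
The annotation step described in this abstract pairs each out-of-vocabulary target word with its aligned source position so the source of the unknown word can be tracked. Below is a minimal Python sketch of that idea; the vocabulary, the `<unk_...>` token names, and the relative-offset encoding are illustrative assumptions, not the patent's exact rare word model.

```python
def annotate_rare_words(source, target, alignment, vocab, max_offset=7):
    """Replace out-of-vocabulary target words with positional UNK tokens.

    `alignment` maps target indices to aligned source indices, e.g. {2: 3}.
    Each OOV target word becomes an <unk_d> token, where d is the (clipped)
    relative offset to its aligned source word, so the source word can be
    recovered later.
    """
    annotated = []
    for t_idx, word in enumerate(target):
        if word in vocab:
            annotated.append(word)
            continue
        s_idx = alignment.get(t_idx)
        if s_idx is None:
            annotated.append("<unk_null>")          # no aligned source word
        else:
            offset = max(-max_offset, min(max_offset, s_idx - t_idx))
            annotated.append(f"<unk_{offset:+d}>")  # e.g. <unk_-1>, <unk_+0>
    return annotated


if __name__ == "__main__":
    vocab = {"the", "ate", "my", "homework"}
    source = ["le", "chien", "a", "mangé", "mes", "devoirs"]
    target = ["the", "dog", "ate", "my", "homework"]
    alignment = {0: 0, 1: 1, 2: 3, 3: 4, 4: 5}  # target index -> source index
    print(annotate_rare_words(source, target, alignment, vocab))
    # ['the', '<unk_+0>', 'ate', 'my', 'homework']
```

In the full system, a post-processing step can then replace each positional unknown token with (a translation of) the source word it points to.
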
  • Patent number: 10919805
    Abstract: The disclosure features methods of forming composite materials, and the composite materials formed by such methods. The methods include forming a mixture that includes a binder material and a filler material, and applying a pressure of at least 10 MPa to the mixture to form the composite material, where the composite material thus formed includes less than 9% by weight of the binder material, less than 18% by volume of the binder material, or both, and has a flexural strength of at least 3 MPa.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: February 16, 2021
    Assignee: The Regents of the University of California
    Inventors: Yu Qiao, Tze Han Chen, Anh V. Le
  • Patent number: 10922611
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining update rules for training neural networks. One of the methods includes generating, using a controller neural network, a batch of output sequences, each output sequence in the batch defining a respective update rule; for each output sequence in the batch: training a respective instance of a child neural network using the update rule defined by the output sequence; evaluating a performance of the trained instance of the child neural network on a particular neural network task to determine a performance metric for the trained instance on that task; and using the performance metrics for the trained instances of the child neural network to adjust the current values of the controller parameters of the controller neural network.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc V. Le
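
The loop in this abstract, in which a controller proposes update rules, each rule trains a child network, and the resulting performance metrics adjust the controller, can be illustrated with a toy example. The sketch below stands in a quadratic problem for the child network and plain softmax logits with a REINFORCE-style update for the controller; the real system uses a controller neural network and full training runs, so treat every name and number here as an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space for an update rule of the form  w <- w - lr * op(grad).
OPS = {"grad": lambda g: g, "sign": np.sign, "clip": lambda g: np.clip(g, -1, 1)}
LRS = [0.01, 0.1, 0.5]

def train_child(op_name, lr, steps=50):
    """Train a tiny stand-in 'child' problem (a quadratic bowl) with the
    sampled update rule and return a performance metric (higher is better)."""
    w = np.array([3.0, -2.0])
    for _ in range(steps):
        grad = 2.0 * w                       # gradient of ||w||^2
        w = w - lr * OPS[op_name](grad)
    return -float(w @ w)                     # negative final loss

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# "Controller": independent softmax logits for each decision in the rule,
# adjusted with a REINFORCE-style update from the child performance metrics.
op_logits = np.zeros(len(OPS))
lr_logits = np.zeros(len(LRS))

for _ in range(30):
    batch = []
    for _ in range(8):                       # a batch of sampled update rules
        p_op, p_lr = softmax(op_logits), softmax(lr_logits)
        i = rng.choice(len(OPS), p=p_op)
        j = rng.choice(len(LRS), p=p_lr)
        reward = train_child(list(OPS)[i], LRS[j])
        batch.append((i, j, reward, p_op, p_lr))
    baseline = np.mean([r for _, _, r, _, _ in batch])
    for i, j, reward, p_op, p_lr in batch:
        advantage = reward - baseline
        op_logits += 0.05 * advantage * (np.eye(len(OPS))[i] - p_op)
        lr_logits += 0.05 * advantage * (np.eye(len(LRS))[j] - p_lr)

print("preferred op:", list(OPS)[int(np.argmax(op_logits))],
      "preferred learning rate:", LRS[int(np.argmax(lr_logits))])
```
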
  • Patent number: 10917307
    Abstract: Middleboxes include a processor configured to determine a degree of mismatch between a sequence number in a first connection between the middlebox and a client device and a sequence number in a second connection between the middlebox and a server device. A network control module is configured to delay acknowledgment signals from the middlebox on a connection to decrease the degree of mismatch between sequence numbers and to establish a direct connection between the client device and the server device without mediation by the middlebox upon a determination that the degree of mismatch between sequence numbers is zero.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Dakshi Agrawal, Thai V. Le, Erich M. Nahum, Vasileios Pappas
  • Patent number: 10909457
    Abstract: A method for determining a final architecture for a neural network to perform a particular machine learning task is described.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: February 2, 2021
    Assignee: Google LLC
    Inventors: Mingxing Tan, Quoc V. Le
  • Publication number: 20210019658
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model. In one aspect, a method includes: receiving training data for training a machine learning model to perform a particular machine learning task; determining multiple data augmentation policies, comprising, at each of multiple time steps: generating a current data augmentation policy based on quality measures of data augmentation policies generated at previous time steps; training a machine learning model on the training data using the current data augmentation policy; and determining a quality measure of the current data augmentation policy using the machine learning model after it has been trained using the current data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the determined data augmentation policies.
    Type: Application
    Filed: October 1, 2020
    Publication date: January 21, 2021
    Inventors: Vijay Vasudevan, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le
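
The search procedure in this abstract alternates between proposing a data augmentation policy from the quality of earlier policies, training a model with it, and scoring it. A minimal sketch of that outer loop follows; the operation names, the mutation-based proposal step, and the stubbed `train_and_evaluate` are assumptions for illustration (the patent's generator can be a learned controller rather than simple mutation).

```python
import random

random.seed(0)

OPS = ["rotate", "shear_x", "translate_y", "invert", "contrast", "equalize"]

def random_policy(num_sub_policies=3):
    """A policy here is a list of (operation, probability, magnitude) triples."""
    return [(random.choice(OPS), round(random.random(), 2), random.randint(1, 10))
            for _ in range(num_sub_policies)]

def mutate(policy):
    """Derive a new candidate from an existing one by changing one sub-policy."""
    new = list(policy)
    new[random.randrange(len(new))] = random_policy(1)[0]
    return new

def train_and_evaluate(policy):
    """Placeholder for 'train the model with this policy and return a quality
    measure'; a synthetic score keeps the loop runnable end to end."""
    return sum(mag for _, _, mag in policy) / 30.0 + random.uniform(-0.05, 0.05)

history = []                                    # (quality measure, policy) pairs
for step in range(20):
    if history:
        # propose a new policy based on the quality of earlier policies
        _, best_so_far = max(history)
        candidate = mutate(best_so_far)
    else:
        candidate = random_policy()
    quality = train_and_evaluate(candidate)
    history.append((quality, candidate))

best_quality, final_policy = max(history)       # select the final policy
print(f"best quality {best_quality:.3f}: {final_policy}")
```

The final policy is simply the highest-quality candidate seen, matching the selection step at the end of the abstract.
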
  • Patent number: 10896767
    Abstract: A plurality of signal anomalies are identified in a number of tubes in a steam generator. Since the geometry of the steam generator is known, the location of each signal anomaly along each tube is converted into a location within the interior of the steam generator. If a plurality of signal anomalies are at locations within the steam generator that are within a predetermined proximity of one another, such a spatial confluence of signal anomalies is determined to correspond with a loose part situated within the steam generator. Additional methodologies can be employed to confirm the existence of the loose part. Historic tube sheet transition signal data can be retrieved and subtracted from present signals in order to enable the system to ignore the relatively strong eddy current sensor signal of a tube sheet which would mask the relatively weak signal from a loose part at the tube sheet transition.
    Type: Grant
    Filed: January 4, 2012
    Date of Patent: January 19, 2021
    Assignee: Westinghouse Electric Company LLC
    Inventor: Qui V. Le
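
The spatial-confluence test in this abstract amounts to mapping each anomaly from (tube, axial position) into steam generator coordinates and flagging anomalies from different tubes that fall within a predetermined proximity of one another. A small sketch under assumed geometry follows; the tube identifiers, coordinate table, and 0.05 m threshold are hypothetical.

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical geometry table: tube id -> (x, y) position of the tube in the
# steam generator; an anomaly's axial position along the tube supplies z.
TUBE_XY = {"R10C12": (0.10, 0.12), "R10C13": (0.10, 0.13), "R42C07": (0.42, 0.07)}

def to_sg_coordinates(anomaly):
    """Convert (tube id, axial position) into a point inside the generator."""
    x, y = TUBE_XY[anomaly["tube"]]
    return (x, y, anomaly["axial_m"])

def loose_part_candidates(anomalies, proximity_m=0.05):
    """Flag groups of anomalies on different tubes that fall within a
    predetermined proximity of one another (a spatial confluence)."""
    points = [(a, to_sg_coordinates(a)) for a in anomalies]
    candidates = []
    for (a1, p1), (a2, p2) in combinations(points, 2):
        if a1["tube"] != a2["tube"] and dist(p1, p2) <= proximity_m:
            candidates.append((a1, a2))
    return candidates

anomalies = [
    {"tube": "R10C12", "axial_m": 1.50},
    {"tube": "R10C13", "axial_m": 1.52},   # near the first one -> confluence
    {"tube": "R42C07", "axial_m": 6.10},
]
for a1, a2 in loose_part_candidates(anomalies):
    print("possible loose part near", a1["tube"], "and", a2["tube"])
```
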
  • Patent number: 10883932
    Abstract: An FI having an in-situ particle detector and a method for particle detection therein are provided. In one aspect, the FI includes a fan, a substrate support, a particle detector, and an exhaust outlet. The fan, substrate support, and particle detector are arranged such that, in operation, the fan directs air towards the exhaust outlet and over a substrate on the substrate support to create laminar flow. The particle detector, positioned downstream from the substrate support and upstream from the exhaust outlet, analyzes the air and detects particle concentration before the particles are exhausted. The collected particle detection data may be combined with data from other sensors in the FI and used to identify the source of particle contamination. The particle detector may also be incorporated into other system components, including but not limited to, a load-lock or buffer chamber to detect particle concentration therein.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: January 5, 2021
    Assignee: Applied Materials, Inc.
    Inventors: Lin Zhang, Xuesong Lu, Andrew V. Le, Fa Ji, Jang Seok Oh, Patrick L. Smith, Shawyon Jafari, Ralph Peter Antonio
  • Publication number: 20200410396
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for performing machine learning tasks. One method includes receiving (i) a model input, and (ii) data identifying a first machine learning task to be performed on the model input to generate a first type of model output for the model input; augmenting the model input with an identifier for the first machine learning task to generate an augmented model input; and processing the augmented model input using a machine learning model, wherein the machine learning model has been trained on training data to perform a plurality of machine learning tasks including the first machine learning task, and wherein the machine learning model has been configured through training to process the augmented model input to generate a machine learning model output of the first type for the model input.
    Type: Application
    Filed: July 13, 2020
    Publication date: December 31, 2020
    Inventors: Zhifeng Chen, Michael Schuster, Melvin Jose Johnson Premkumar, Yonghui Wu, Quoc V. Le, Maxim Krikun, Thorsten Brants
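
The core mechanism in this abstract is augmenting the model input with a task identifier so one trained model can serve many tasks. The sketch below shows that interface with a stub in place of the trained model; the `<2fr>`-style token format and the routing logic are illustrative assumptions.

```python
def augment_with_task(model_input_tokens, task_id):
    """Prepend an identifier for the requested task so a single multi-task
    model knows which type of output to produce."""
    return [f"<2{task_id}>"] + list(model_input_tokens)

def multitask_model(augmented_tokens):
    """Stand-in for the trained multi-task model: it only routes on the task
    token to show the interface, not the learning."""
    task_token, *tokens = augmented_tokens
    if task_token == "<2fr>":
        return " ".join(tokens) + "  [translated to French]"
    if task_token == "<2summary>":
        return " ".join(tokens[:3]) + " ..."
    raise ValueError(f"unknown task token {task_token!r}")

tokens = "the quick brown fox jumps".split()
print(multitask_model(augment_with_task(tokens, "fr")))
print(multitask_model(augment_with_task(tokens, "summary")))
```
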
  • Publication number: 20200401899
    Abstract: A method for receiving training data for training a neural network to perform a machine learning task and for searching for, using the training data, an optimized neural network architecture for performing the machine learning task is described. Searching for the optimized neural network architecture includes: maintaining population data; maintaining threshold data; and repeatedly performing the following operations: selecting one or more candidate architectures from the population data; generating a new architecture from the one or more selected candidate architectures; for the new architecture: training a neural network having the new architecture until termination criteria for the training are satisfied; and determining a final measure of fitness of the neural network having the new architecture after the training; and adding data defining the new architecture and the final measure of fitness for the neural network having the new architecture to the population data.
    Type: Application
    Filed: June 20, 2019
    Publication date: December 24, 2020
    Inventors: David Martin Dohan, David Richard So, Chen Liang, Quoc V. Le
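
This abstract describes an evolutionary search in which new architectures are generated from selected candidates and each one is trained only until termination criteria (maintained as threshold data) are satisfied. The sketch below mimics that loop with a toy fitness function; the staged training, hard-coded hurdles, and mutation operator are assumptions, not the patent's actual criteria.

```python
import random

random.seed(1)

NUM_STAGES = 3
THRESHOLDS = [0.2, 0.5, 0.0]   # per-stage hurdles; in the abstract these would
                               # come from the maintained threshold data

def mutate(arch):
    """Generate a new architecture from a selected candidate."""
    child = dict(arch)
    child["width"] = max(8, child["width"] + random.choice([-8, 8]))
    child["depth"] = max(1, child["depth"] + random.choice([-1, 1]))
    return child

def train_one_stage(arch, stages_done):
    """Stand-in for one slice of training; returns the fitness so far."""
    ceiling = 1.0 - 1.0 / (arch["width"] * arch["depth"])
    return ceiling * (stages_done + 1) / NUM_STAGES + random.uniform(-0.02, 0.02)

# Population data: (architecture, final measure of fitness) pairs.
population = []
for arch in ({"width": 32, "depth": 2}, {"width": 16, "depth": 4}):
    population.append((arch, train_one_stage(arch, NUM_STAGES - 1)))

for _ in range(10):
    parent, _ = max(population, key=lambda item: item[1])  # select a candidate
    child = mutate(parent)
    fitness = None
    for stage in range(NUM_STAGES):            # train until a termination
        fitness = train_one_stage(child, stage)
        if fitness < THRESHOLDS[stage]:        # criterion (a hurdle) is hit
            break
    population.append((child, fitness))        # add architecture + fitness

best_arch, best_fitness = max(population, key=lambda item: item[1])
print("best architecture:", best_arch, "fitness:", round(best_fitness, 3))
```
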
  • Patent number: 10870806
    Abstract: Systems and methods are provided for upgrading a mixture of catalytic slurry oil and coker bottoms by hydroprocessing. Optionally, the upgrading can further include deasphalting the mixture of catalytic slurry oil and coker bottoms to form a deasphalted oil and a deasphalter residue or rock fraction. The mixture of catalytic slurry oil and coker bottoms and/or the deasphalted oil can then be hydroprocessed to form an upgraded effluent that includes fuels boiling range products. Optionally, in some aspects where the feed mixture is deasphalted prior to hydroprocessing, the feed mixture can further include a portion of a (sour) vacuum resid.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: December 22, 2020
    Assignee: ExxonMobil Research and Engineering Company
    Inventors: Stephen H. Brown, Brian A. Cunningham, Randolph J. Smiley, Samia Ilias, Brenda A. Raich, Tien V. Le
  • Publication number: 20200372076
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining, for each of one or more categorical features, a respective vocabulary of categorical feature values of the categorical feature that should be active during processing of inputs by a machine learning model. In one aspect, a method comprises: generating a batch of output sequences, each output sequence in the batch specifying, for each of the categorical features, a respective vocabulary of categorical feature values of the categorical feature that should be active; for each output sequence in the batch, determining a performance metric of the machine learning model on a machine learning task after the machine learning model has been trained to perform the machine learning task with only the respective vocabulary of categorical feature values of each categorical feature specified by the output sequence being active.
    Type: Application
    Filed: May 20, 2020
    Publication date: November 26, 2020
    Inventors: Cong Li, Jay Adams, Manas Joglekar, Pranav Khaitan, Quoc V. Le, Mei Chen
  • Publication number: 20200364543
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for incorporating a computationally efficient expressive output layer in a neural network. The output layer is configured to map a received hidden state to a probability distribution over a vocabulary of possible outputs by generating, from the hidden state, a respective context embedding for each of a plurality of gates; for each of the possible outputs in the vocabulary, computing a gated logit for the possible output by applying an output embedding for the possible output to a weighted sum of the context embeddings; and generating the probability distribution over the vocabulary of possible outputs by applying a softmax to the gated logits for the possible outputs in the vocabulary.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 19, 2020
    Inventors: Thang Minh Luong, Quoc V. Le, Zhilin Yang
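
The output layer in this abstract computes per-gate context embeddings from the hidden state, combines them into a weighted sum, scores each vocabulary item by applying its output embedding to that sum, and applies a softmax. A numpy sketch follows; the way the gate weights are computed here is a simplification of the patent's scheme, and all shapes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, EMBED, NUM_GATES, VOCAB = 8, 6, 4, 10  # toy sizes

# Illustrative parameters (in the patent these are learned).
W_gate = rng.normal(size=(NUM_GATES, EMBED, HIDDEN))  # per-gate projections
v_prior = rng.normal(size=(NUM_GATES, EMBED))         # gate-weight parameters
output_embeddings = rng.normal(size=(VOCAB, EMBED))   # one per possible output

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_output_layer(hidden_state):
    # 1. A context embedding for each gate, derived from the hidden state.
    contexts = np.tanh(W_gate @ hidden_state)            # (gates, embed)
    # 2. Gate weights (a simplified scheme for this sketch).
    gate_weights = softmax((v_prior * contexts).sum(axis=1))
    # 3. Weighted sum of the per-gate context embeddings.
    weighted_sum = gate_weights @ contexts               # (embed,)
    # 4. Gated logit per possible output: its output embedding applied to
    #    the weighted sum, then a softmax over the vocabulary.
    gated_logits = output_embeddings @ weighted_sum      # (vocab,)
    return softmax(gated_logits)

probs = gated_output_layer(rng.normal(size=HIDDEN))
print(probs.round(3), probs.sum())
```
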
  • Publication number: 20200364540
    Abstract: Generally, the present disclosure is directed to novel machine-learned classification models that operate with hard attention to make discrete attention actions. The present disclosure also provides a self-supervised pre-training procedure that initializes the model to a state with more frequent rewards. Given only the ground truth classification labels for a set of training inputs (e.g., images), the proposed models are able to learn a policy over discrete attention locations that identifies certain portions of the input (e.g., patches of the images) that are relevant to the classification. In such fashion, the models are able to provide high accuracy classifications while also providing an explicit and interpretable basis for the decision.
    Type: Application
    Filed: May 13, 2020
    Publication date: November 19, 2020
    Inventors: Gamaleldin Elsayed, Simon Kornblith, Quoc V. Le
  • Publication number: 20200364617
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a machine learning model using teacher annealing.
    Type: Application
    Filed: May 11, 2020
    Publication date: November 19, 2020
    Inventors: Thang Minh Luong, Quoc V. Le, Kevin Stefan Clark
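
The abstract is terse, so the sketch below rests on an assumption: in work published by these inventors, teacher annealing interpolates between a teacher model's predictions and the gold labels, shifting weight toward the gold labels as training proceeds. Treat the schedule and names as illustrative, not as the patent's exact method.

```python
import numpy as np

def annealed_target(gold_label, teacher_probs, step, total_steps, num_classes):
    """Blend the teacher's predicted distribution with the gold label,
    shifting weight toward the gold label as training proceeds."""
    lam = step / max(1, total_steps)           # anneals 0 -> 1 over training
    gold = np.eye(num_classes)[gold_label]     # one-hot gold distribution
    return lam * gold + (1.0 - lam) * teacher_probs

teacher_probs = np.array([0.7, 0.2, 0.1])      # stand-in teacher prediction
for step in (0, 500, 1000):
    target = annealed_target(gold_label=1, teacher_probs=teacher_probs,
                             step=step, total_steps=1000, num_classes=3)
    print(step, target.round(2))
```
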
  • Patent number: 10817805
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model. In one aspect, a method includes: receiving training data for training a machine learning model to perform a particular machine learning task; determining multiple data augmentation policies, comprising, at each of multiple time steps: generating a current data augmentation policy based on quality measures of data augmentation policies generated at previous time steps; training a machine learning model on the training data using the current data augmentation policy; and determining a quality measure of the current data augmentation policy using the machine learning model after it has been trained using the current data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the determined data augmentation policies.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: October 27, 2020
    Assignee: Google LLC
    Inventors: Vijay Vasudevan, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le
  • Patent number: 10803380
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating document vector representations. One of the methods includes obtaining a new document; selecting a plurality of new document word sets; and determining a vector representation for the new document using a trained neural network system, wherein the trained neural network system comprises: a document embedding layer and a classifier, and wherein determining the vector representation for the new document using the trained neural network system comprises iteratively providing each of the plurality of new document word sets to the trained neural network system to determine the vector representation for the new document using gradient descent.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: October 13, 2020
    Assignee: Google LLC
    Inventors: Andrew M. Dai, Quoc V. Le, Gregory Sean Corrado
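
Inference for a new document in this abstract keeps the trained parameters fixed and fits only the new document's vector by gradient descent over its word sets. The sketch below assumes the simplest variant (predicting the words in each set directly from the document vector); the sizes, learning rate, and frozen weight matrix are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 12, 4
U = rng.normal(scale=0.5, size=(VOCAB, DIM))  # frozen, "trained" output weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def infer_document_vector(word_id_sets, steps=100, lr=0.2):
    """Infer a vector for a new document by gradient descent, holding the
    trained parameters (U) fixed, as in the abstract's inference procedure."""
    d = rng.normal(scale=0.1, size=DIM)        # the only quantity being fitted
    for _ in range(steps):
        for word_ids in word_id_sets:          # iterate over the word sets
            probs = softmax(U @ d)
            grad = np.zeros(DIM)
            for w in word_ids:                 # gradient of -log p(w | d)
                grad += (probs - np.eye(VOCAB)[w]) @ U
            d -= lr * grad / len(word_ids)
    return d

new_doc_word_sets = [[1, 3, 5], [3, 5, 7]]     # ids of words sampled from the doc
print(infer_document_vector(new_doc_word_sets).round(3))
```
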
  • Patent number: 10800753
    Abstract: Tetrahydrothiophene and related heterocyclic analogs and related methods for GABA aminotransferase inactivation.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 13, 2020
    Assignee: Northwestern University
    Inventors: Richard B. Silverman, Hoang V. Le, Dustin D. Hawker
  • Publication number: 20200320399
    Abstract: A method for receiving training data for training a neural network (NN) to perform a machine learning (ML) task and for determining, using the training data, an optimized NN architecture for performing the ML task is described. Determining the optimized NN architecture includes: maintaining population data comprising, for each candidate architecture in a population of candidate architectures, (i) data defining the candidate architecture, and (ii) data specifying how recently a neural network having the candidate architecture has been trained while determining the optimized neural network architecture; and repeatedly performing multiple operations using each of a plurality of worker computing units to generate a new candidate architecture based on a selected candidate architecture having the best measure of fitness, adding the new candidate architecture to the population, and removing from the population the candidate architecture that was trained least recently.
    Type: Application
    Filed: June 19, 2020
    Publication date: October 8, 2020
    Inventors: Yanping Huang, Alok Aggarwal, Quoc V. Le, Esteban Alberto Real
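
The distinctive step in this abstract is aging: workers repeatedly mutate a fit candidate, train and score the child, add it to the population, and remove the architecture trained least recently. A single-process sketch of that loop follows; the tournament sampling, toy fitness function, and population size are assumptions.

```python
import random
from collections import deque

random.seed(0)

def mutate(arch):
    """Generate a new candidate architecture from a selected one."""
    child = dict(arch)
    child["filters"] = max(8, child["filters"] + random.choice([-8, 8]))
    return child

def train_and_measure_fitness(arch):
    """Stand-in for training a network with this architecture and measuring
    its fitness on validation data."""
    return 1.0 - 1.0 / arch["filters"] + random.uniform(-0.01, 0.01)

# Population ordered by how recently each architecture was trained.
population = deque()
for _ in range(5):
    arch = {"filters": random.choice([8, 16, 32])}
    population.append((arch, train_and_measure_fitness(arch)))

def worker_step():
    """One worker iteration: sample, pick the fittest, mutate, train, and
    age out the architecture trained least recently."""
    sample = random.sample(list(population), 3)
    parent, _ = max(sample, key=lambda item: item[1])   # best fitness wins
    child = mutate(parent)
    population.append((child, train_and_measure_fitness(child)))
    population.popleft()                                # remove the oldest

for _ in range(20):
    worker_step()

best_arch, best_fitness = max(population, key=lambda item: item[1])
print("best:", best_arch, round(best_fitness, 3))
```
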
  • Patent number: 10785288
    Abstract: A method includes configuring worker services to operate in a stateless manner and providing support services that enable the worker services to operate in the stateless manner. The support services include (i) a management service for providing notifications of server removal and addition, (ii) a state maintenance service for maintaining state information in a central location, and (iii) a load balancer service for distributing requests among worker services. The method includes altering a number of servers allocated to at least one worker service, responsive to a notification from the management service. A private protocol is used between the worker services and load balancer service (a) to send, from the worker services to the load balancer service, a respective pointer to the state information associated with the requests, and (b) to include the respective pointer in the requests when any of the requests are forwarded to any worker service.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Seraphin B. Calo, Douglas M. Freimuth, Franck V. Le, Erich M. Nahum, Maroun Touma, Dinesh C. Verma
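
The architecture in this abstract keeps workers stateless by holding session state in a central location and passing a pointer to that state inside each request. The sketch below compresses that idea into one process; the management service, server add/remove notifications, and the actual private protocol are omitted, and all names are illustrative.

```python
import random

STATE_STORE = {}  # the central location where state information is maintained

class LoadBalancer:
    """Distributes requests among workers; the private protocol carries a
    pointer to the request's state rather than the state itself."""
    def __init__(self, workers):
        self.workers = list(workers)

    def handle(self, session_id, payload):
        pointer = f"state:{session_id}"
        STATE_STORE.setdefault(pointer, {"count": 0})
        worker = random.choice(self.workers)   # any worker can serve it
        return worker(pointer, payload)

def stateless_worker(pointer, payload):
    """A worker keeps no local session state: it loads state by pointer,
    updates it in the central store, and returns a response."""
    state = STATE_STORE[pointer]
    state["count"] += 1
    return f"handled {payload!r} (request #{state['count']} for {pointer})"

lb = LoadBalancer([stateless_worker, stateless_worker])
for payload in ("a", "b", "c"):
    print(lb.handle(session_id="42", payload=payload))
```

Because no worker keeps local session state, servers can be added or removed, as the abstract's management-service notifications allow, without losing in-flight sessions.
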