Patents by Inventor An V. Le

An V. Le has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200118554
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for speech recognition. One method includes obtaining an input acoustic sequence, the input acoustic sequence representing an utterance, and the input acoustic sequence comprising a respective acoustic feature representation at each of a first number of time steps; processing the input acoustic sequence using a first neural network to convert the input acoustic sequence into an alternative representation for the input acoustic sequence; processing the alternative representation for the input acoustic sequence using an attention-based Recurrent Neural Network (RNN) to generate, for each position in an output sequence order, a set of substring scores that includes a respective substring score for each substring in a set of substrings; and generating a sequence of substrings that represent a transcription of the utterance.
    Type: Application
    Filed: December 13, 2019
    Publication date: April 16, 2020
    Applicant: Google LLC
    Inventors: William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Noam M. Shazeer
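
The abstract above (publication 20200118554) describes scoring substrings with an attention-based RNN decoder over an encoded acoustic sequence. The numpy sketch below illustrates one such decoding step under simplified assumptions; the shapes, the dot-product-style attention, and the parameter names (`W_att`, `W_out`) are invented for illustration and are not taken from the patent.

```python
# A minimal sketch of one attention-based decoding step: attend over the
# "alternative representation", then score every substring in the vocabulary.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_score(encoder_states, decoder_state, W_att, W_out):
    """One output position: attention over the encoded time steps, then substring scores."""
    energies = encoder_states @ (W_att @ decoder_state)        # relevance of each time step
    weights = softmax(energies)                                # attention distribution over time
    context = weights @ encoder_states                         # weighted summary of the encoding
    logits = W_out @ np.concatenate([decoder_state, context])  # one logit per substring
    return softmax(logits)                                     # respective score for each substring

rng = np.random.default_rng(0)
T, enc_dim, dec_dim, vocab = 50, 32, 16, 100    # 100 substrings (e.g. characters / word pieces)
enc = rng.normal(size=(T, enc_dim))             # "alternative representation" from the first network
dec = rng.normal(size=dec_dim)                  # current RNN decoder state
scores = attend_and_score(enc, dec, rng.normal(size=(enc_dim, dec_dim)),
                          rng.normal(size=(vocab, dec_dim + enc_dim)))
print(scores.shape, scores.sum())               # (100,) scores summing to 1
```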
  • Publication number: 20200104710
    Abstract: A method for training a target neural network on a target machine learning task is described.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 2, 2020
    Inventors: Vijay Vasudevan, Ruoming Pang, Quoc V. Le, Daiyi Peng, Jiquan Ngiam, Simon Kornblith
  • Publication number: 20200098350
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
    Type: Application
    Filed: November 26, 2019
    Publication date: March 26, 2020
    Inventors: Samuel Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
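
The abstract above (publication 20200098350) splits text-to-speech into a subsystem that passes a character sequence to a sequence-to-sequence recurrent network and receives back a spectrogram. The toy sketch below mirrors only that interface; the tiny recurrent stand-in, the character set, and the frames-per-character count are made-up assumptions, not the patented model.

```python
# A toy character-to-spectrogram pipeline: subsystem hands characters to a
# sequence-to-sequence stand-in and gets a spectrogram-shaped array back.
import numpy as np

CHARS = "abcdefghijklmnopqrstuvwxyz '"
EMB_DIM, STATE_DIM, N_MEL, FRAMES_PER_CHAR = 8, 16, 80, 3
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(CHARS), EMB_DIM))
W_in = rng.normal(size=(STATE_DIM, EMB_DIM))
W_rec = rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1
W_mel = rng.normal(size=(N_MEL, STATE_DIM))

def seq2seq_spectrogram(text):
    """Encode characters with a simple recurrence, emit a few spectrogram frames each."""
    state, frames = np.zeros(STATE_DIM), []
    for ch in text.lower():
        state = np.tanh(W_in @ emb[CHARS.index(ch)] + W_rec @ state)  # recurrent character read
        for _ in range(FRAMES_PER_CHAR):                              # coarse stand-in for decoding
            frames.append(W_mel @ state)
    return np.stack(frames)                                           # [num_frames, N_MEL]

def subsystem(text):
    # The subsystem's only job: pass the characters in, get the spectrogram out.
    return seq2seq_spectrogram(text)

spec = subsystem("hello world")
print(spec.shape)   # (33, 80): 11 characters x 3 frames, 80 mel channels
```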
  • Patent number: 10599770
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating author vectors. One of the methods includes obtaining a set of sequences of words, the set of sequences of words comprising a plurality of first sequences of words and, for each first sequence of words, a respective second sequence of words that follows the first sequence of words, wherein each first sequence of words and each second sequence of words has been classified as being authored by a first author; and training a neural network system on the first sequences and the second sequences to determine an author vector for the first author, wherein the author vector characterizes the first author.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: March 24, 2020
    Assignee: Google LLC
    Inventors: Brian Patrick Strope, Quoc V. Le
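
The abstract above (patent 10599770) trains an author vector so that, together with a first word sequence, it helps predict the word sequence that follows. The sketch below is a rough, hedged illustration of that idea: the softmax predictor, the fixed word embeddings, and the learning rate are assumptions chosen for brevity, not the patented training procedure.

```python
# Learn an author vector from (first sequence, following sequence) pairs:
# the vector plus the first sequence's mean embedding must predict the next words.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16
word_emb = rng.normal(size=(VOCAB, DIM)) * 0.1       # fixed word embeddings
W_out = rng.normal(size=(VOCAB, 2 * DIM)) * 0.1      # softmax over next words
author_vec = np.zeros(DIM)                           # the vector being learned

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_step(first_seq, second_seq, lr=0.1):
    """One gradient step on the author vector for a (first, second) sequence pair."""
    global author_vec
    context = np.concatenate([word_emb[first_seq].mean(axis=0), author_vec])
    grad = np.zeros(DIM)
    for target in second_seq:                        # predict each word of the second sequence
        probs = softmax(W_out @ context)
        err = probs.copy()
        err[target] -= 1.0                           # d(cross-entropy)/d(logits)
        grad += (W_out.T @ err)[DIM:]                # gradient w.r.t. the author-vector half
    author_vec -= lr * grad / len(second_seq)

pairs = [(rng.integers(0, VOCAB, 8), rng.integers(0, VOCAB, 8)) for _ in range(20)]
for first, second in pairs:
    train_step(first, second)
print(author_vec[:4])                                # the author characterization so far
```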
  • Patent number: 10582946
    Abstract: A hydrodynamic catheter includes a catheter body with a catheter lumen and an infusion tube extending within the catheter body, the infusion tube configured for coupling with a fluid source near the catheter proximal portion. An inflow orifice and an outflow orifice are positioned at locations along a catheter body perimeter. A fluid jet emanator is in fluid communication with the infusion tube, where the fluid jet emanator includes one or more jet orifices configured to direct one or more fluid jets through the catheter lumen from near the inflow orifice toward the outflow orifice. A pivot cylinder located along the catheter body perimeter is positioned distal relative to one or more of the fluid jet emanator, the inflow orifice, or the outflow orifice.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: March 10, 2020
    Assignee: BOSTON SCIENTIFIC LIMITED
    Inventors: Michael J. Bonnette, Hieu V. Le
  • Patent number: 10580626
    Abstract: Embodiments described herein generally relate to a plasma processing chamber and a detection apparatus for arcing events. In one embodiment, an arcing detection apparatus is disclosed herein. The arcing detection apparatus comprises a probe, a detection circuit, and a data log system. The probe is positioned partially exposed to an interior volume of a plasma processing chamber. The detection circuit is configured to receive an analog signal from the probe and output an output signal scaling events present in the analog signal. The data log system is communicatively coupled to receive the output signal from the detection circuit. The data log system is configured to track arcing events occurring in the interior volume.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: March 3, 2020
    Assignee: APPLIED MATERIALS, INC.
    Inventors: Lin Zhang, Rongping Wang, Jian J. Chen, Michael S. Cox, Andrew V. Le
  • Publication number: 20200065689
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining neural network architectures. One of the methods includes generating, using a controller neural network having controller parameters and in accordance with current values of the controller parameters, a batch of output sequences. The method includes, for each output sequence in the batch: generating an instance of a child convolutional neural network (CNN) that includes multiple instances of a first convolutional cell having an architecture defined by the output sequence; training the instance of the child CNN to perform an image processing task; and evaluating a performance of the trained instance of the child CNN on the task to determine a performance metric for the trained instance of the child CNN; and using the performance metrics for the trained instances of the child CNN to adjust current values of the controller parameters of the controller neural network.
    Type: Application
    Filed: November 5, 2019
    Publication date: February 27, 2020
    Inventors: Vijay Vasudevan, Barret Zoph, Jonathon Shlens, Quoc V. Le
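
The abstract above (publication 20200065689) describes a controller that emits output sequences defining a convolutional cell, trains and evaluates a child network per sequence, and adjusts its parameters using the resulting performance metrics. The toy loop below follows that shape with a REINFORCE-style update; the three-decision search space and the stubbed evaluation are invented stand-ins for training real child CNNs.

```python
# Controller loop: sample a batch of output sequences, get a (stubbed) performance
# metric per sequence, and nudge the controller toward better-scoring sequences.
import numpy as np

rng = np.random.default_rng(0)
CHOICES = [3, 4, 2]                       # e.g. kernel size, op type, combine method per cell slot
logits = [np.zeros(n) for n in CHOICES]   # controller parameters (one categorical per decision)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_sequence():
    return [rng.choice(len(lg), p=softmax(lg)) for lg in logits]

def train_and_eval_child(seq):
    # Stand-in for building/training a child CNN with the cell defined by `seq`
    # and measuring validation accuracy; here a fixed "ideal" cell scores best.
    ideal = [1, 3, 0]
    return 1.0 - sum(a != b for a, b in zip(seq, ideal)) / len(seq)

def controller_update(batch, metrics, lr=0.5):
    baseline = np.mean(metrics)
    for seq, m in zip(batch, metrics):
        for i, choice in enumerate(seq):
            p = softmax(logits[i])
            grad = -p
            grad[choice] += 1.0                       # d log pi(choice) / d logits
            logits[i] += lr * (m - baseline) * grad   # reinforce above-baseline sequences

for step in range(50):
    batch = [sample_sequence() for _ in range(8)]
    metrics = [train_and_eval_child(s) for s in batch]
    controller_update(batch, metrics)
print([int(np.argmax(lg)) for lg in logits])          # should drift toward [1, 3, 0]
```

The stub rewards a fixed "ideal" cell so the loop visibly converges; in the patented setting the metric would come from actually training each child CNN on the image processing task.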
  • Patent number: 10573293
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating speech from text. One of the systems includes one or more computers and one or more storage devices storing instructions that when executed by one or more computers cause the one or more computers to implement: a sequence-to-sequence recurrent neural network configured to: receive a sequence of characters in a particular natural language, and process the sequence of characters to generate a spectrogram of a verbal utterance of the sequence of characters in the particular natural language; and a subsystem configured to: receive the sequence of characters in the particular natural language, and provide the sequence of characters as input to the sequence-to-sequence recurrent neural network to obtain as output the spectrogram of the verbal utterance of the sequence of characters in the particular natural language.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: February 25, 2020
    Assignee: Google LLC
    Inventors: Samuel Bengio, Yuxuan Wang, Zongheng Yang, Zhifeng Chen, Yonghui Wu, Ioannis Agiomyrgiannakis, Ron J. Weiss, Navdeep Jaitly, Ryan M. Rifkin, Robert Andrew James Clark, Quoc V. Le, Russell J. Ryan, Ying Xiao
  • Publication number: 20200057941
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining update rules for training neural networks. One of the methods includes generating, using a controller neural network, a batch of output sequences, each output sequence in the batch defining a respective update rule; for each output sequence in the batch: training a respective instance of a child neural network using the update rule defined by the output sequence; evaluating a performance of the trained instance of the child neural network on the particular neural network task to determine a performance metric for the trained instance of the child neural network on the particular neural network task; and using the performance metrics for the trained instances of the child neural network to adjust the current values of the controller parameters of the controller neural network.
    Type: Application
    Filed: October 24, 2019
    Publication date: February 20, 2020
    Inventors: Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc V. Le
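
The abstract above (publication 20200057941) describes output sequences that each define an update rule, with each rule used to train a child network whose performance becomes that rule's metric. The sketch below enumerates a tiny, invented rule vocabulary (operand and step size), trains a toy regression model with each rule, and keeps the best one; the controller network itself is omitted for brevity.

```python
# Each candidate "output sequence" decodes to an update rule; its metric is the
# final error of a toy model trained with that rule.
import numpy as np

rng = np.random.default_rng(0)
OPERANDS = {0: "grad", 1: "sign_grad", 2: "momentum"}
SCALES = {0: 0.005, 1: 0.05}

def apply_rule(seq, w, grad, mom):
    """Decode (operand_id, scale_id) into a weight update."""
    operand = {"grad": grad, "sign_grad": np.sign(grad), "momentum": mom}[OPERANDS[seq[0]]]
    return w - SCALES[seq[1]] * operand

def train_with_rule(seq, steps=200):
    """Train a tiny linear model with the candidate update rule; lower final error is better."""
    X = rng.normal(size=(20, 2))
    w_true = np.array([2.0, -3.0])
    y = X @ w_true
    w, mom = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)       # full-batch squared-error gradient
        mom = 0.9 * mom + grad
        w = apply_rule(seq, w, grad, mom)
    return float(np.mean((w - w_true) ** 2))        # performance metric for this rule

candidates = [(i, j) for i in OPERANDS for j in SCALES]
metrics = {seq: train_with_rule(seq) for seq in candidates}
best = min(metrics, key=metrics.get)
print(best, metrics[best])    # the best-performing update rule in this tiny search space
```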
  • Patent number: 10559300
    Abstract: A recurrent neural network (RNN) system can be configured to perform tasks such as converting recorded speech to a sequence of phonemes that represent the speech, converting an input sequence of graphemes into a target sequence of phonemes, translating an input sequence of words in one language into a corresponding sequence of words in another language, or predicting a target sequence of words that follow an input sequence of words in a language (e.g., a language model). In a speech recognizer, the RNN system may be used to convert speech to a target sequence of phonemes in real time so that a transcription of the speech can be generated and presented to a user, even before the user has finished uttering the entire speech input.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: February 11, 2020
    Assignee: Google LLC
    Inventors: Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Samuel Bengio, Ilya Sutskever
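
The abstract above (patent 10559300) covers RNN sequence-to-sequence transduction such as grapheme-to-phoneme conversion. The sketch below shows only the bare encoder/greedy-decoder loop for that case with untrained, randomly initialized toy parameters; the phoneme set, dimensions, and simple recurrences are placeholders, not the patented system.

```python
# Grapheme-to-phoneme transduction skeleton: encode the input sequence, then
# emit target tokens one at a time until an end-of-sequence token.
import numpy as np

rng = np.random.default_rng(0)
GRAPHEMES = list("abcdefghijklmnopqrstuvwxyz")
PHONEMES = ["AA", "B", "K", "D", "EH", "F", "G", "</s>"]   # tiny made-up phoneme set
DIM = 12
g_emb = rng.normal(size=(len(GRAPHEMES), DIM))
W_enc = rng.normal(size=(DIM, DIM)) * 0.3
W_dec = rng.normal(size=(DIM, DIM)) * 0.3
W_out = rng.normal(size=(len(PHONEMES), DIM))

def encode(word):
    state = np.zeros(DIM)
    for ch in word:                                    # recurrent read of the input sequence
        state = np.tanh(W_enc @ state + g_emb[GRAPHEMES.index(ch)])
    return state

def greedy_decode(word, max_len=10):
    state, out = encode(word), []
    for _ in range(max_len):
        state = np.tanh(W_dec @ state)                 # advance the decoder state
        tok = PHONEMES[int(np.argmax(W_out @ state))]  # highest-scoring phoneme
        if tok == "</s>":
            break
        out.append(tok)
    return out

print(greedy_decode("cab"))    # untrained weights, so the output is arbitrary phonemes
```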
  • Publication number: 20200034435
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural machine translation. One of the systems includes an encoder neural network comprising: an input forward long short-term memory (LSTM) layer configured to process each input token in the input sequence in a forward order to generate a respective forward representation of each input token, an input backward LSTM layer configured to process each input token in a backward order to generate a respective backward representation of each input token and a plurality of hidden LSTM layers configured to process a respective combined representation of each of the input tokens in the forward order to generate a respective encoded representation of each of the input tokens; and a decoder subsystem configured to receive the respective encoded representations and to process the encoded representations to generate an output sequence.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 30, 2020
    Inventors: Mohammad Norouzi, Zhifeng Chen, Yonghui Wu, Michael Schuster, Quoc V. Le
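
The abstract above (publication 20200034435) lays out an encoder with a forward LSTM layer, a backward LSTM layer, a per-token combination of the two, and stacked layers that read the combined representations in forward order. The numpy sketch below wires up that layout with one minimal LSTM cell per layer; the sizes, the single stacked layer, and the slice used to feed the combined vector forward are simplifications, not the patented architecture.

```python
# Forward + backward recurrent reads of the input tokens, per-token combination,
# then one stacked forward layer over the combined representations.
import numpy as np

rng = np.random.default_rng(0)
D = 8                                                  # embedding and hidden size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_cell():
    return rng.normal(size=(4 * D, 2 * D)) * 0.2       # fused input/forget/output/candidate weights

def lstm_run(W, inputs):
    h, c, outs = np.zeros(D), np.zeros(D), []
    for x in inputs:
        z = W @ np.concatenate([x, h])
        i, f, o, g = sigmoid(z[:D]), sigmoid(z[D:2*D]), sigmoid(z[2*D:3*D]), np.tanh(z[3*D:])
        c = f * c + i * g
        h = o * np.tanh(c)
        outs.append(h)
    return outs

tokens = [rng.normal(size=D) for _ in range(5)]        # embedded input tokens
fwd = lstm_run(make_cell(), tokens)                    # forward representation of each token
bwd = lstm_run(make_cell(), tokens[::-1])[::-1]        # backward representation, re-aligned
combined = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
hidden_in = [c[:D] for c in combined]                  # toy projection back to D for the stacked layer
encoded = lstm_run(make_cell(), hidden_in)             # one stacked forward layer
print(len(encoded), encoded[0].shape)                  # 5 encoded representations of size (8,)
```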
  • Publication number: 20200026765
    Abstract: A computer-implemented method is described for training a neural network that is configured to generate a score distribution over a set of multiple output positions. The neural network is configured to process a network input to generate a respective score distribution for each of a plurality of output positions, including a respective score for each token in a predetermined set of tokens that includes n-grams of multiple different sizes. Example methods described herein provide trained neural networks that produce results with improved accuracy compared to the state of the art, e.g., more accurate translations or more accurate speech recognition.
    Type: Application
    Filed: October 3, 2017
    Publication date: January 23, 2020
    Inventors: Navdeep Jaitly, Yu Zhang, Quoc V. Le, William Chan
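
The abstract above (publication 20200026765) scores, at every output position, each token in a predetermined set that mixes n-grams of several sizes. The snippet below simply materializes such a mixed token set and a softmax score distribution over it; the token list, dimensions, and random "decoder states" are illustrative assumptions.

```python
# A score distribution over a predetermined token set containing 1-, 2-, and 3-grams.
import numpy as np

rng = np.random.default_rng(0)
TOKENS = list("abcdefgh") + ["th", "er", "in", "ing", "ion", "the"]   # n-grams of multiple sizes
DIM = 16
W_out = rng.normal(size=(len(TOKENS), DIM))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_distribution(decoder_state):
    """Respective score for each token in the predetermined set at one output position."""
    return softmax(W_out @ decoder_state)

states = [rng.normal(size=DIM) for _ in range(3)]      # stand-in states for 3 output positions
for pos, s in enumerate(states):
    dist = score_distribution(s)
    print(pos, TOKENS[int(np.argmax(dist))], float(dist.max()))
```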
  • Patent number: 10540962
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for speech recognition. One method includes obtaining an input acoustic sequence, the input acoustic sequence representing an utterance, and the input acoustic sequence comprising a respective acoustic feature representation at each of a first number of time steps; processing the input acoustic sequence using a first neural network to convert the input acoustic sequence into an alternative representation for the input acoustic sequence; processing the alternative representation for the input acoustic sequence using an attention-based Recurrent Neural Network (RNN) to generate, for each position in an output sequence order, a set of substring scores that includes a respective substring score for each substring in a set of substrings; and generating a sequence of substrings that represent a transcription of the utterance.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: January 21, 2020
    Inventors: William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Noam M. Shazeer
  • Publication number: 20200012905
    Abstract: Systems and techniques are disclosed for labeling objects within an image. The objects may be labeled by selecting an option from a plurality of options such that each option is a potential label for the object. An option may have an option score associated with it. Additionally, a relation score may be calculated for a first option and a second option corresponding to a second object in the image. The relation score may be based on a frequency, probability, or observed count of the co-occurrence of text associated with the first option and the second option in a text corpus such as the World Wide Web. An option may be selected as a label for an object based on a global score calculated from at least an option score and a relation score associated with the option.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 9, 2020
    Inventors: Samuel Bengio, Jeffrey Adgate Dean, Quoc V. Le, Jonathon Shlens, Yoram Singer
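
The abstract above (publication 20200012905) combines per-option scores with relation scores derived from text co-occurrence to pick labels that maximize a global score. The sketch below runs that selection over two objects with invented scores; the additive global score is an assumption chosen only to make the example concrete.

```python
# Pick label options for two objects by maximizing option scores plus a
# co-occurrence-based relation score.
from itertools import product

option_scores = {
    "object_1": {"horse": 0.6, "dog": 0.4},
    "object_2": {"saddle": 0.7, "collar": 0.3},
}
# Stand-in for co-occurrence statistics mined from a large text corpus.
relation_scores = {
    ("horse", "saddle"): 0.9, ("horse", "collar"): 0.1,
    ("dog", "saddle"): 0.1, ("dog", "collar"): 0.8,
}

def global_score(label_1, label_2):
    return (option_scores["object_1"][label_1]
            + option_scores["object_2"][label_2]
            + relation_scores[(label_1, label_2)])

best = max(product(option_scores["object_1"], option_scores["object_2"]),
           key=lambda pair: global_score(*pair))
print(best, global_score(*best))   # ('horse', 'saddle') wins despite the strong dog/collar relation
```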
  • Patent number: 10528866
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a document classification neural network. One of the methods includes training an autoencoder neural network to autoencode input documents, wherein the autoencoder neural network comprises the one or more LSTM neural network layers and an autoencoder output layer, and wherein training the autoencoder neural network comprises determining pre-trained values of the parameters of the one or more LSTM neural network layers from initial values of the parameters of the one or more LSTM neural network layers; and training the document classification neural network on a plurality of training documents to determine trained values of the parameters of the one or more LSTM neural network layers from the pre-trained values of the parameters of the one or more LSTM neural network layers.
    Type: Grant
    Filed: September 6, 2016
    Date of Patent: January 7, 2020
    Assignee: Google LLC
    Inventors: Andrew M. Dai, Quoc V. Le
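
The abstract above (patent 10528866) pretrains shared layers as an autoencoder and then trains a document classifier starting from those pretrained parameter values. The sketch below reproduces that two-phase flow with a linear bag-of-words autoencoder standing in for the LSTM layers; the data, dimensions, and learning rates are arbitrary choices made for brevity.

```python
# Phase 1: pretrain shared representation parameters as an autoencoder.
# Phase 2: train a classifier that starts from the pretrained parameter values.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, N_DOCS = 30, 6, 40
docs = (rng.random((N_DOCS, VOCAB)) < 0.2).astype(float)     # bag-of-words documents
labels = (docs[:, :5].sum(axis=1) > 1).astype(float)          # toy labels tied to early words

# Phase 1: autoencoder pretraining (reconstruction objective).
W_enc = rng.normal(size=(HIDDEN, VOCAB)) * 0.1
W_dec = rng.normal(size=(VOCAB, HIDDEN)) * 0.1
for _ in range(300):
    h = np.tanh(docs @ W_enc.T)                               # encode
    recon = h @ W_dec.T                                       # decode
    err = recon - docs
    W_dec -= 0.05 * (err.T @ h) / N_DOCS
    W_enc -= 0.05 * (((err @ W_dec) * (1 - h ** 2)).T @ docs) / N_DOCS

# Phase 2: document classification starting from the pretrained encoder values.
w_cls, b = np.zeros(HIDDEN), 0.0
for _ in range(300):
    h = np.tanh(docs @ W_enc.T)                               # pretrained representation
    p = 1 / (1 + np.exp(-(h @ w_cls + b)))
    g = p - labels
    w_cls -= 0.1 * (h.T @ g) / N_DOCS
    b -= 0.1 * g.mean()
    # (W_enc could also be fine-tuned here, starting from its pretrained values.)

print("train accuracy:", float(((p > 0.5) == labels.astype(bool)).mean()))
```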
  • Patent number: 10521729
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining neural network architectures. One of the methods includes generating, using a controller neural network having controller parameters and in accordance with current values of the controller parameters, a batch of output sequences. The method includes, for each output sequence in the batch: generating an instance of a child convolutional neural network (CNN) that includes multiple instances of a first convolutional cell having an architecture defined by the output sequence; training the instance of the child CNN to perform an image processing task; and evaluating a performance of the trained instance of the child CNN on the task to determine a performance metric for the trained instance of the child CNN; and using the performance metrics for the trained instances of the child CNN to adjust current values of the controller parameters of the controller neural network.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: December 31, 2019
    Assignee: Google LLC
    Inventors: Vijay Vasudevan, Barret Zoph, Jonathon Shlens, Quoc V. Le
  • Publication number: 20190394262
    Abstract: A method is provided for using a set of servers to provide deferential services that have a pre-negotiated time for notice to release the servers. The method includes defining a virtual checkpoint frame interval that is constrained to a duration of up to half of the pre-negotiated time for notice to release the servers. The method also includes, responsive to an end of the interval, (i) writing, to a shared state database, a state of processing of the packets and transactions occurring during the interval that are processed by a current one of the servers, and (ii) releasing the packets and transactions occurring during the interval. The method further includes copying the packets and transactions occurring during the interval, and the state, from the current server to another server for subsequent processing, responsive to an indication of an instance loss on the current server.
    Type: Application
    Filed: September 5, 2019
    Publication date: December 26, 2019
    Inventors: Seraphin B. Calo, Douglas M. Freimuth, Franck V. Le, Maroun Touma, Dinesh C. Verma
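
The abstract above (publication 20190394262) defines a checkpoint frame interval of at most half the release-notice period, checkpoints processing state to a shared database at each interval boundary, and hands in-flight work to another server when an instance is lost. The sketch below walks through that sequence with plain Python objects; the data structures and the in-memory "database" are simplified stand-ins, not the patented mechanism.

```python
# Checkpoint-at-interval-boundary plus hand-off on instance loss.
shared_state_db = {}                       # stand-in for the shared state database

NOTICE_SECONDS = 120                       # pre-negotiated time for notice to release
FRAME_INTERVAL = NOTICE_SECONDS // 2       # constrained to at most half the notice time

class Server:
    def __init__(self, name):
        self.name, self.buffer, self.state = name, [], {"processed": 0}

    def process(self, packet):
        self.buffer.append(packet)         # packets/transactions for the current interval
        self.state["processed"] += 1

    def end_of_interval(self):
        # (i) write the interval's processing state to the shared database,
        # (ii) release the interval's packets and transactions.
        shared_state_db[self.name] = dict(self.state)
        released, self.buffer = self.buffer, []
        return released

def handle_instance_loss(current, replacement):
    # Copy the in-flight packets/transactions and the checkpointed state to another
    # server so it can continue processing after the current instance is reclaimed.
    replacement.buffer.extend(current.buffer)
    replacement.state = dict(shared_state_db.get(current.name, current.state))

a, b = Server("server-a"), Server("server-b")
for pkt in ["tx1", "tx2", "tx3"]:
    a.process(pkt)
a.end_of_interval()                        # checkpoint at the frame boundary
a.process("tx4")                           # work arriving in the next interval
handle_instance_loss(a, b)                 # spot-style reclaim of server-a
print(b.buffer, b.state)                   # ['tx4'] {'processed': 3}: resumes from the checkpoint
```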
  • Publication number: 20190392294
    Abstract: A method for determining a placement for machine learning model operations across multiple hardware devices includes receiving data specifying machine learning operations, and determining a placement that assigns each of the operations specified by the data to a respective device from the multiple hardware devices.
    Type: Application
    Filed: August 28, 2019
    Publication date: December 26, 2019
    Inventors: Benoit Steiner, Anna Darling Goldie, Jeffrey Adgate Dean, Hieu Hy Pham, Azalia Mirhoseini, Quoc V. Le
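
The abstract above (publication 20190392294) is about receiving data that specifies machine learning operations and producing a placement that assigns each operation to a hardware device. The snippet below shows only that interface, using a greedy load-balancing heuristic as a placeholder; the patent describes a learned placer, not this heuristic.

```python
# Determine a placement: map each specified operation to one of the hardware devices.
ops = [  # data specifying the operations (name, rough compute cost)
    ("embed", 4), ("lstm_fwd", 10), ("lstm_bwd", 10), ("attention", 6), ("softmax", 3),
]
devices = ["gpu:0", "gpu:1"]

def determine_placement(operations, device_list):
    load = {d: 0 for d in device_list}
    placement = {}
    for name, cost in sorted(operations, key=lambda o: -o[1]):
        target = min(load, key=load.get)      # least-loaded device so far
        placement[name] = target
        load[target] += cost
    return placement

print(determine_placement(ops, devices))
# {'lstm_fwd': 'gpu:0', 'lstm_bwd': 'gpu:1', 'attention': 'gpu:0', 'embed': 'gpu:1', 'softmax': 'gpu:1'}
```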
  • Patent number: 10503837
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for translating terms using numeric representations. One of the methods includes obtaining data that associates each term in a vocabulary of terms in a first language with a respective high-dimensional representation of the term; obtaining data that associates each term in a vocabulary of terms in a second language with a respective high-dimensional representation of the term; receiving a first language term; and determining a translation into the second language of the first language term from the high-dimensional representation of the first language term and the high-dimensional representations of terms in the vocabulary of terms in the second language.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: December 10, 2019
    Assignee: Google LLC
    Inventors: Ilya Sutskever, Tomas Mikolov, Jeffrey Adgate Dean, Quoc V. Le
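
The abstract above (patent 10503837) translates a term by relating its high-dimensional representation to the representations of terms in the target-language vocabulary. The sketch below fits a linear map on a few seed pairs and translates a held-out term by nearest neighbor; the toy vectors (constructed as exact rotations so a linear map exists) and the least-squares fit are illustrative choices, not the patented procedure.

```python
# Translate a term via its vector: map the source-language representation into the
# target space, then pick the closest target-language representation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4
en_vocab = ["dog", "cat", "house", "water", "tree"]
es_vocab = ["perro", "gato", "casa", "agua", "arbol"]
es_vecs = rng.normal(size=(len(es_vocab), DIM))
rotation = np.linalg.qr(rng.normal(size=(DIM, DIM)))[0]   # orthogonal map between the two spaces
en_vecs = es_vecs @ rotation.T                            # toy English vectors as rotated Spanish ones

seed_pairs = [0, 1, 2, 3]                                 # known translations used for fitting
W, *_ = np.linalg.lstsq(en_vecs[seed_pairs], es_vecs[seed_pairs], rcond=None)

def translate(term):
    v = en_vecs[en_vocab.index(term)] @ W                 # map into the target space
    sims = es_vecs @ v / (np.linalg.norm(es_vecs, axis=1) * np.linalg.norm(v))
    return es_vocab[int(np.argmax(sims))]

print(translate("tree"))   # the held-out term maps to "arbol" under the fitted linear map
```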
  • Publication number: 20190354895
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for learning a data augmentation policy for training a machine learning model. In one aspect, a method includes: receiving training data for training a machine learning model to perform a particular machine learning task; determining multiple data augmentation policies, comprising, at each of multiple time steps: generating a current data augmentation policy based on quality measures of data augmentation policies generated at previous time steps; training a machine learning model on the training data using the current data augmentation policy; and determining a quality measure of the current data augmentation policy using the machine learning model after it has been trained using the current data augmentation policy; and selecting a final data augmentation policy based on the quality measures of the determined data augmentation policies.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 21, 2019
    Inventors: Vijay Vasudevan, Barret Zoph, Ekin Dogus Cubuk, Quoc V. Le
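
The abstract above (publication 20190354895) proposes a new augmentation policy at each time step based on the quality of earlier policies, measures its quality after training a model with it, and selects the best policy at the end. The sketch below keeps that loop but replaces the generation step with a simple mutate-the-best rule and the training step with a stubbed quality function; both, along with the policy space, are invented for illustration.

```python
# Iterative augmentation-policy search: propose based on past quality, measure, keep the best.
import random

random.seed(0)
OPS = ["rotate", "shear", "color", "flip", "cutout"]

def random_policy():
    return [(random.choice(OPS), round(random.uniform(0.1, 0.9), 2)) for _ in range(2)]

def train_and_measure_quality(policy):
    # Stand-in for training a model with `policy` and measuring validation accuracy;
    # here we just pretend rotation and cutout at moderate magnitudes help most.
    score = 0.0
    for op, mag in policy:
        score += {"rotate": 0.4, "cutout": 0.3}.get(op, 0.1) * (1 - abs(mag - 0.5))
    return score

history = []                                    # (policy, quality) for past time steps
for step in range(30):
    if history and random.random() < 0.7:
        base, _ = max(history, key=lambda h: h[1])
        policy = [(op, min(0.9, max(0.1, round(mag + random.uniform(-0.1, 0.1), 2))))
                  for op, mag in base]           # propose near the best policy so far
    else:
        policy = random_policy()                 # occasional fresh random proposal
    history.append((policy, train_and_measure_quality(policy)))

final_policy = max(history, key=lambda h: h[1])[0]
print(final_policy)                              # the selected data augmentation policy
```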