Patents by Inventor Noam M. Shazeer

Noam M. Shazeer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144006
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
    Type: Application
    Filed: January 8, 2024
    Publication date: May 2, 2024
    Inventors: Noam M. Shazeer, Aidan Nicholas Gomez, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Llion Owen Jones, Niki J. Parmar, Illia Polosukhin, Ashish Teku Vaswani
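The encoder mechanism this abstract describes is scaled dot-product self-attention: each input position derives a query and attends over every position's keys and values. A minimal numpy sketch follows, assuming single-head attention; the names `w_q`, `w_k`, `w_v` and the shapes are illustrative, not taken from the patent.

```python
import numpy as np

def encoder_self_attention(x, w_q, w_k, w_v):
    # x: (positions, d_model), one encoder subnetwork input per input position.
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values per position
    scores = q @ k.T / np.sqrt(k.shape[-1])          # each query against every key
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over input positions
    return weights @ v                               # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                         # 5 input positions, d_model = 16
w_q, w_k, w_v = (0.1 * rng.normal(size=(16, 16)) for _ in range(3))
out = encoder_self_attention(x, w_q, w_k, w_v)       # shape (5, 16): one output per position
```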
  • Patent number: 11954594
    Abstract: This document generally describes a neural network training system, including one or more computers, that trains a recurrent neural network (RNN) to receive an input, e.g., an input sequence, and to generate a sequence of outputs from the input sequence. In some implementations, training can include, for each position after an initial position in a training target sequence, selecting a preceding output of the RNN to provide as input to the RNN at the position, including determining whether to select as the preceding output (i) a true output in a preceding position in the output order or (ii) a value derived from an output of the RNN for the preceding position in the output order, generated in accordance with current values of the parameters of the recurrent neural network.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam M. Shazeer
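The training choice the abstract describes, between (i) the true preceding token and (ii) the model's own preceding output, can be sketched as below. The coin flip with probability `p_true` (often decayed over training) is an assumption for illustration; the patent abstract does not commit to a particular schedule.

```python
import random

def choose_preceding_output(position, target_seq, model_outputs, p_true):
    # For a position after the initial one, pick what the RNN is fed as its
    # preceding output: (i) the true token from the target sequence, or
    # (ii) the token the RNN itself produced for the preceding position.
    if position == 0:
        raise ValueError("the initial position has no preceding output")
    if random.random() < p_true:                 # p_true schedule is an assumption
        return target_seq[position - 1]          # (i) true output
    return model_outputs[position - 1]           # (ii) model's own prior output

target = ["<s>", "the", "cat", "sat"]            # training target sequence
model_out = ["<s>", "the", "dog", "sat"]         # what the RNN actually emitted so far
prev = choose_preceding_output(2, target, model_out, p_true=0.8)
```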
  • Patent number: 11893483
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: February 6, 2024
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Aidan Nicholas Gomez, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Llion Owen Jones, Niki J. Parmar, Illia Polosukhin, Ashish Teku Vaswani
  • Patent number: 11886998
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. One of the methods includes, at each of a plurality of generation time steps: generating a combined sequence for the generation time step that includes the input sequence followed by the output tokens that have already been generated as of the generation time step; processing the combined sequence using a self-attention decoder neural network to generate a time step output that defines a score distribution over a set of possible output tokens; and selecting, using the time step output, an output token from the set of possible output tokens as the next output token in the output sequence.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: January 30, 2024
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Lukasz Mieczyslaw Kaiser, Etienne Pot, Mohammad Saleh, Ben Goodrich, Peter J. Liu, Ryan Sepassi
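A sketch of the generation loop from the abstract above: at each time step the decoder scores a combined sequence (the input followed by the outputs so far) and a token is appended. `score_tokens` is a hypothetical stand-in for the self-attention decoder network, and greedy selection is an assumption; the abstract says only that a token is selected using the time step output.

```python
def generate(input_tokens, score_tokens, vocab, max_steps=10, eos="</s>"):
    output = []
    for _ in range(max_steps):                 # generation time steps
        combined = input_tokens + output       # input followed by outputs so far
        scores = score_tokens(combined)        # score per possible output token
        next_token = max(vocab, key=lambda t: scores[t])
        output.append(next_token)
        if next_token == eos:
            break
    return output

# toy scorer, purely for shape: prefers "</s>" once the combined sequence is long
def toy_scores(seq):
    return {"ok": 1.0, "</s>": 2.0 if len(seq) > 3 else 0.0}

generate(["hello", "world"], toy_scores, vocab=["ok", "</s>"])  # ['ok', 'ok', '</s>']
```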
  • Publication number: 20230419079
    Abstract: A system includes a neural network that includes a Mixture of Experts (MoE) subnetwork between a first neural network layer and a second neural network layer. The MoE subnetwork includes multiple expert neural networks. Each expert neural network is configured to process a first layer output generated by the first neural network layer to generate a respective expert output. The MoE subnetwork further includes a gating subsystem that selects, based on the first layer output, one or more of the expert neural networks and determines a respective weight for each selected expert neural network, provides the first layer output as input to each of the selected expert neural networks, combines the expert outputs generated by the selected expert neural networks in accordance with the weights for the selected expert neural networks to generate an MoE output, and provides the MoE output as input to the second neural network layer.
    Type: Application
    Filed: September 8, 2023
    Publication date: December 28, 2023
    Inventors: Noam M. Shazeer, Azalia Mirhoseini, Krzysztof Stanislaw Maziarz
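A numpy sketch of the MoE step the abstract describes: gate on the first layer's output, run the selected experts, and combine their outputs by weight. Top-k selection with softmax weights is an assumption; the abstract commits only to selecting "one or more" experts and weighting them.

```python
import numpy as np

def moe_layer(first_layer_out, experts, gate_w, k=2):
    logits = first_layer_out @ gate_w                 # gating score per expert
    top = np.argsort(logits)[-k:]                     # select k experts (assumption)
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                          # softmax weights over selected
    # combine expert outputs in accordance with their weights
    return sum(w * experts[i](first_layer_out) for i, w in zip(top, weights))

rng = np.random.default_rng(1)
mats = [0.1 * rng.normal(size=(8, 8)) for _ in range(4)]
experts = [lambda v, m=m: v @ m for m in mats]        # 4 toy expert networks
out = moe_layer(rng.normal(size=8), experts, rng.normal(size=(8, 4)))
```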
  • Patent number: 11816884
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: November 14, 2023
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Niki J. Parmar, Ashish Teku Vaswani
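The sampling loop the abstract describes, producing one intensity value per (pixel, color channel) pair in a fixed order, can be sketched as below. `decoder` stands in for the local-masked-self-attention network and here just returns a uniform distribution; raster generation order is an assumption.

```python
import numpy as np

def generate_image(height, width, channels, decoder, levels=256, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((height, width, channels), dtype=np.int64)
    for pos in range(height * width * channels):     # generation order positions
        h, rem = divmod(pos, width * channels)       # raster order (assumption)
        w, c = divmod(rem, channels)
        probs = decoder(img, pos)                    # distribution over intensities
        img[h, w, c] = rng.choice(levels, p=probs)   # select an intensity value
    return img

uniform = lambda img, pos: np.full(256, 1 / 256)     # toy decoder stand-in
tiny = generate_image(2, 2, 3, uniform)              # 2x2 RGB image, 12 values total
```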
  • Patent number: 11816114
    Abstract: The present disclosure includes systems and techniques relating to ranking search results of a search query. In general, the subject matter described in this specification can be embodied in a computer-implemented method that includes determining a measure of relevance for a document result within a context of a search query for which the document result is returned, the determining being based on a first number in relation to a second number, the first number corresponding to longer views of the document result, and the second number corresponding to at least shorter views of the document result; and outputting the measure of relevance to a ranking engine for ranking of search results, including the document result, for a new search corresponding to the search query. The subject matter described in this specification can also be embodied in various corresponding computer program products, apparatus and systems.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: November 14, 2023
    Assignee: Google LLC
    Inventors: Hyung-Jin Kim, Simon Tong, Noam M. Shazeer, Michelangelo Diligenti
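One way to read the measure the abstract describes is a ratio of longer views of a document result to its total views. The formula below, with additive smoothing, is purely illustrative; the patent does not commit to this exact form, and "second number" is taken here as total views (long plus short), an assumption.

```python
def relevance_measure(long_views, total_views, smoothing=1.0):
    # First number (longer views) in relation to the second number
    # (here, total views); smoothing keeps sparse counts from saturating.
    return (long_views + smoothing) / (total_views + 2 * smoothing)

# a result with 30 long views out of 40 vs. one with 5 out of 40
relevance_measure(30, 40)   # 31/42, about 0.74
relevance_measure(5, 40)    # 6/42, about 0.14
```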
  • Publication number: 20230351190
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model using a deterministic data pipeline. One of the methods may include receiving a first request to generate a deterministic training dataset and, in response to receiving the first request: transforming raw training examples obtained from a raw data source into pre-processed training examples; assigning a unique index to each pre-processed training example; and caching the pre-processed training examples into the cache directory specified in the received first request; receiving a second request to use the deterministic training dataset to train a machine learning model, the second request specifying a start index; and in response to receiving the second request: reading, from the cache directory, the pre-processed training examples that have indices beginning from the start index; and providing the read training examples in an order of the assigned indices for use in training the machine learning model.
    Type: Application
    Filed: July 7, 2023
    Publication date: November 2, 2023
    Inventors: Gaurav Mishra, Adam Joseph Roberts, Noam M. Shazeer, Maarten Paul Bosma
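A sketch of the two-request protocol the abstract describes, assuming a single JSON cache file as the on-disk format (the patent does not specify one):

```python
import json, os, tempfile

def build_dataset(raw_examples, preprocess, cache_dir):
    # First request: transform raw examples, assign each a unique index,
    # and cache the results under cache_dir.
    os.makedirs(cache_dir, exist_ok=True)
    cached = [{"index": i, "example": preprocess(ex)}
              for i, ex in enumerate(raw_examples)]
    with open(os.path.join(cache_dir, "cache.json"), "w") as f:
        json.dump(cached, f)

def read_dataset(cache_dir, start_index):
    # Second request: read cached examples with indices from start_index on,
    # in index order, so a restarted job resumes deterministically.
    with open(os.path.join(cache_dir, "cache.json")) as f:
        cached = json.load(f)
    return [c["example"] for c in sorted(cached, key=lambda c: c["index"])
            if c["index"] >= start_index]

cache_dir = tempfile.mkdtemp()
build_dataset(["a b", "c d", "e f"], str.split, cache_dir)
read_dataset(cache_dir, start_index=1)   # [['c', 'd'], ['e', 'f']]
```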
  • Publication number: 20230351188
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes a neural network configured to perform the machine learning task, the neural network including one or more switch layers.
    Type: Application
    Filed: July 7, 2023
    Publication date: November 2, 2023
    Inventors: William Bradley Fedus, Barret Zoph, Noam M. Shazeer
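The abstract names switch layers without detailing them; a common reading (an assumption here) is that a switch layer routes each input to exactly one expert network chosen by a learned router, in contrast with the one-or-more selection in the MoE entries above. A numpy sketch under that assumption:

```python
import numpy as np

def switch_layer(x, experts, router_w):
    logits = x @ router_w                            # (tokens, n_experts) router scores
    logits -= logits.max(-1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    out = np.empty_like(x)
    for i, e in enumerate(probs.argmax(-1)):         # exactly one expert per token
        out[i] = probs[i, e] * experts[e](x[i])      # scaled by router probability
    return out

rng = np.random.default_rng(2)
mats = [0.1 * rng.normal(size=(8, 8)) for _ in range(4)]
experts = [lambda v, m=m: v @ m for m in mats]
y = switch_layer(rng.normal(size=(6, 8)), experts, rng.normal(size=(8, 4)))
```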
  • Patent number: 11790214
    Abstract: A system includes a neural network that includes a Mixture of Experts (MoE) subnetwork between a first neural network layer and a second neural network layer. The MoE subnetwork includes multiple expert neural networks. Each expert neural network is configured to process a first layer output generated by the first neural network layer to generate a respective expert output. The MoE subnetwork further includes a gating subsystem that selects, based on the first layer output, one or more of the expert neural networks and determines a respective weight for each selected expert neural network, provides the first layer output as input to each of the selected expert neural networks, combines the expert outputs generated by the selected expert neural networks in accordance with the weights for the selected expert neural networks to generate an MoE output, and provides the MoE output as input to the second neural network layer.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Azalia Mirhoseini, Krzysztof Stanislaw Maziarz
  • Publication number: 20230316082
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model using a deterministic data pipeline. One of the methods may include receiving a first request to generate a deterministic training dataset and, in response to receiving the first request: transforming raw training examples obtained from a raw data source into pre-processed training examples; assigning a unique index to each pre-processed training example; and caching the pre-processed training examples into the cache directory specified in the received first request; receiving a second request to use the deterministic training dataset to train a machine learning model, the second request specifying a start index; and in response to receiving the second request: reading, from the cache directory, the pre-processed training examples that have indices beginning from the start index; and providing the read training examples in an order of the assigned indices for use in training the machine learning model.
    Type: Application
    Filed: April 3, 2023
    Publication date: October 5, 2023
    Inventors: Gaurav Mishra, Adam Joseph Roberts, Noam M. Shazeer, Maarten Paul Bosma
  • Publication number: 20230222318
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes an attention neural network configured to perform the machine learning task, the attention neural network including one or more attention layers, each attention layer comprising an attention sub-layer and a feed-forward sub-layer. Some or all of the attention layers have a feed-forward sub-layer that applies conditional computation to the inputs to the sub-layer.
    Type: Application
    Filed: June 30, 2021
    Publication date: July 13, 2023
    Inventors: Dmitry Lepikhin, Yanping Huang, Orhan Firat, Maxim Krikun, Dehao Chen, Noam M. Shazeer, HyoukJoong Lee, Yuanzhong Xu, Zhifeng Chen
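A sketch of one attention layer as the abstract structures it: an attention sub-layer followed by a feed-forward sub-layer that applies conditional computation, here realized by routing each position to one of several feed-forward experts. Residual connections and top-1 routing are assumptions, not stated in the abstract.

```python
import numpy as np

def attention_layer(x, attend, ffn_experts, gate_w):
    h = x + attend(x)                                # attention sub-layer (residual assumed)
    routes = (h @ gate_w).argmax(-1)                 # conditional computation: one route per position
    ffn_out = np.stack([ffn_experts[r](h[i]) for i, r in enumerate(routes)])
    return h + ffn_out                               # feed-forward sub-layer (residual assumed)

rng = np.random.default_rng(3)
no_attn = lambda z: np.zeros_like(z)                 # toy attention sub-layer stand-in
ffns = [lambda v, s=s: s * v for s in (0.5, 2.0)]    # two toy feed-forward experts
y = attention_layer(rng.normal(size=(4, 8)), no_attn, ffns, rng.normal(size=(8, 2)))
```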
  • Patent number: 11681954
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing parallel generation of output from an autoregressive sequence to sequence model. In one aspect, a blockwise parallel decoding method takes advantage of the fact that some architectures can score sequences in sublinear time. By generating predictions for multiple time steps at once then backing off to a longest prefix validated by the scoring model, the methods can substantially improve the speed of greedy decoding without compromising performance.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: June 20, 2023
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Jakob D. Uszkoreit, Mitchell Thomas Stern
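The blockwise decoding idea in the abstract can be sketched directly: predict several tokens at once, keep the longest prefix the scoring model would itself have produced greedily, and fall back to one greedy token when nothing validates. The `propose` and `score_argmax` interfaces are hypothetical; in the real method the verification scores come from a single parallel scoring pass, which is where the speedup lives.

```python
def blockwise_decode(prefix, propose, score_argmax, block=4, max_len=32):
    out = list(prefix)
    while len(out) < max_len:
        guesses = propose(out, block)              # predict `block` tokens at once
        accepted = 0
        for g in guesses:                          # back off to longest validated prefix
            if score_argmax(out + guesses[:accepted]) != g:
                break
            accepted += 1
        # guaranteed progress: accept nothing -> take one greedy token instead
        out += guesses[:accepted] if accepted else [score_argmax(out)]
    return out

# toy models: the proposer guesses the alphabet; the scorer agrees for 3 steps
propose = lambda seq, n: [chr(ord("a") + len(seq) + i) for i in range(n)]
score_argmax = lambda seq: chr(ord("a") + len(seq)) if len(seq) < 3 else "z"
blockwise_decode([], propose, score_argmax, block=2, max_len=5)
# -> ['a', 'b', 'c', 'z', 'z']: two guesses accepted at once, then greedy fallback
```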
  • Publication number: 20230153613
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. One of the methods includes, at each of a plurality of generation time steps: generating a combined sequence for the generation time step that includes the input sequence followed by the output tokens that have already been generated as of the generation time step; processing the combined sequence using a self-attention decoder neural network to generate a time step output that defines a score distribution over a set of possible output tokens; and selecting, using the time step output, an output token from the set of possible output tokens as the next output token in the output sequence.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Noam M. Shazeer, Lukasz Mieczyslaw Kaiser, Etienne Pot, Mohammad Saleh, Ben Goodrich, Peter J. Liu, Ryan Sepassi
  • Publication number: 20230074406
    Abstract: As part of a dialog session between a user and an automated assistant, implementations can receive a stream of audio data that captures a spoken utterance including an assistant query, determine, based on processing the stream of audio data, a set of assistant outputs that are each predicted to be responsive to the assistant query, process, using large language model (LLM) output(s), the assistant outputs and context of the dialog session to generate a set of modified assistant outputs, and cause given modified assistant output, from among the set of modified assistant outputs, to be provided for presentation to the user in response to the spoken utterance. In some implementations, the LLM output(s) can be generated in an offline manner for subsequent use in an online manner. In additional or alternative implementations, the LLM output(s) can be generated in an online manner when the spoken utterance is received.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 9, 2023
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Vikram Sridar, Daniel De Freitas Adiwardana, Noam M. Shazeer, Quoc Le
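The turn-level flow the abstract describes can be outlined as below; every callable is a stand-in (the patent covers the flow, not these interfaces), and returning the first modified output is an arbitrary placeholder for the "given modified assistant output".

```python
def respond(audio_stream, asr, assistant, llm_modify, context):
    query = asr(audio_stream)                   # spoken utterance -> assistant query
    candidates = assistant(query)               # outputs predicted responsive to it
    modified = llm_modify(candidates, context)  # set of modified assistant outputs
    return modified[0]                          # placeholder pick for presentation

# toy stand-ins, purely for shape
reply = respond(
    b"...",                                     # raw audio bytes
    lambda audio: "weather today?",
    lambda q: ["It is sunny."],
    lambda outs, ctx: [o + " Enjoy!" for o in outs],
    context=[],
)
```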
  • Publication number: 20230076971
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output image. In one aspect, one of the methods includes generating the output image intensity value by intensity value according to a generation order of pixel-color channel pairs from the output image, comprising, for each particular generation order position in the generation order: generating a current output image representation of a current output image, processing the current output image representation using a decoder neural network to generate a probability distribution over possible intensity values for the pixel-color channel pair at the particular generation order position, wherein the decoder neural network includes one or more local masked self-attention sub-layers; and selecting an intensity value for the pixel-color channel pair at the particular generation order position using the probability distribution.
    Type: Application
    Filed: July 18, 2022
    Publication date: March 9, 2023
    Inventors: Noam M. Shazeer, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Niki J. Parmar, Ashish Teku Vaswani
  • Publication number: 20230029590
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating candidate output sequences using language model neural networks. In particular, an auto-regressive language model neural network is used to generate a candidate output sequence. The same auto-regressive language model neural network is used to evaluate the candidate output sequence to determine rating scores for each of one or more criteria. The rating score(s) are then used to determine whether to provide the candidate output sequence.
    Type: Application
    Filed: July 28, 2022
    Publication date: February 2, 2023
    Inventors: Daniel De Freitas Adiwardana, Noam M. Shazeer
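A sketch of the evaluate-with-the-same-model loop from the abstract: one language model generates the candidate and then rates it per criterion. The rating-prompt template and the pass/fail threshold are assumptions; the abstract says only that rating scores decide whether the candidate is provided.

```python
def generate_and_filter(lm_generate, lm_score, prompt, criteria, threshold=0.5):
    candidate = lm_generate(prompt)             # candidate output sequence
    ratings = {                                 # same model rates its own candidate
        c: lm_score(f"{prompt}\n{candidate}\nRate for {c}:") for c in criteria
    }
    # provide the candidate only if every rating score clears the bar
    return candidate if all(r >= threshold for r in ratings.values()) else None

# toy stand-ins: a fixed generator and a scorer that approves everything
out = generate_and_filter(lambda p: "hello", lambda p: 0.9,
                          "greet me", criteria=["safety", "quality"])
```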
  • Patent number: 11556786
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. One of the methods includes, at each of a plurality of generation time steps: generating a combined sequence for the generation time step that includes the input sequence followed by the output tokens that have already been generated as of the generation time step; processing the combined sequence using a self-attention decoder neural network to generate a time step output that defines a score distribution over a set of possible output tokens; and selecting, using the time step output, an output token from the set of possible output tokens as the next output token in the output sequence.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Lukasz Mieczyslaw Kaiser, Etienne Pot, Mohammad Saleh, Ben David Goodrich, Peter J. Liu, Ryan Sepassi
  • Publication number: 20220383119
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing a machine learning task on a network input to generate a network output. One of the systems includes an attention neural network configured to perform the machine learning task. The attention neural network includes one or more attention layers that each include a squared ReLU activation layer, a depth-wise convolution layer, or both.
    Type: Application
    Filed: May 27, 2022
    Publication date: December 1, 2022
    Inventors: David Richard So, Quoc V. Le, Hanxiao Liu, Wojciech Andrzej Manke, Zihang Dai, Noam M. Shazeer
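The two components the abstract names, a squared ReLU activation and a depth-wise convolution layer, are small enough to sketch directly in numpy. How they are wired into the attention layer is not spelled out in the abstract, so treat the composition at the end as an assumption.

```python
import numpy as np

def squared_relu(x):
    # Squared ReLU named in the abstract: relu(x) squared.
    return np.maximum(x, 0.0) ** 2

def depthwise_conv1d(x, kernels):
    # Causal depth-wise convolution over the sequence axis: every feature
    # channel has its own length-k kernel. x: (seq, d), kernels: (d, k).
    n, d = x.shape
    k = kernels.shape[1]
    pad = np.vstack([np.zeros((k - 1, d)), x])       # causal left-padding
    out = np.empty_like(x)
    for i in range(n):
        out[i] = (pad[i:i + k] * kernels.T).sum(axis=0)  # per-channel dot product
    return out

x = np.arange(12, dtype=float).reshape(4, 3)          # 4 positions, 3 channels
y = squared_relu(depthwise_conv1d(x, np.ones((3, 2)) / 2))  # composition is an assumption
```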
  • Patent number: 11494561
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for training a machine learning model to perform multiple machine learning tasks from multiple machine learning domains. One system includes a machine learning model that includes multiple input modality neural networks corresponding to respective different modalities and being configured to map received data inputs of the corresponding modality to mapped data inputs from a unified representation space; an encoder neural network configured to process mapped data inputs from the unified representation space to generate respective encoder data outputs; a decoder neural network configured to process encoder data outputs to generate respective decoder data outputs from the unified representation space; and multiple output modality neural networks corresponding to respective different modalities and being configured to map decoder data outputs to data outputs of the corresponding modality.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Noam M. Shazeer, Aidan Nicholas Gomez, Lukasz Mieczyslaw Kaiser, Jakob D. Uszkoreit, Llion Owen Jones, Niki J. Parmar, Ashish Teku Vaswani
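The architecture this last abstract describes is a shared encoder/decoder flanked by per-modality networks that map into and out of a unified representation space. A sketch with dict-of-callables wiring (an assumption) shows the data path; note the output modality need not match the input modality.

```python
def multi_task_forward(data, in_modality, out_modality,
                       input_nets, encoder, decoder, output_nets):
    unified = input_nets[in_modality](data)     # modality -> unified representation space
    memory = encoder(unified)                   # shared encoder output
    decoded = decoder(memory)                   # shared decoder output, still unified space
    return output_nets[out_modality](decoded)   # unified space -> target modality output

# toy wiring: these lambdas stand in for real modality and encoder/decoder networks
nets_in = {"text": lambda d: [len(t) for t in d], "image": lambda d: d}
nets_out = {"text": lambda u: u, "image": lambda u: u}
multi_task_forward(["a", "bb"], "text", "text",
                   nets_in, lambda u: u, lambda u: u, nets_out)   # -> [1, 2]
```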