Patents by Inventor Andrew W. Senior
Andrew W. Senior has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250232841
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for designing a protein by jointly generating an amino acid sequence and a structure of the protein. In one aspect, a method comprises: generating data defining the amino acid sequence and the structure of the protein using a protein design neural network, comprising, for a plurality of positions in the amino acid sequence: receiving the current representation of the protein as of the current position; processing the current representation of the protein using the protein design neural network to generate design data for the current position that comprises: (i) data identifying an amino acid at the current position, and (ii) a set of structure parameters for the current position; and updating the current representation of the protein using the design data for the current position.
Type: Application
Filed: November 21, 2022
Publication date: July 17, 2025
Inventors: Simon Kohl, John Jumper, Andrew W. Senior, Vinicius Zambaldi, Rosalia Galiazzi Schneider, Russell James Bates, Gabriella Hayley Stanton, Robert David Fergus, David La, David William Saxton, Fabian Bernd Fuchs
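The per-position loop this abstract describes (receive the current representation, run the design network, update the representation with the emitted amino acid and structure parameters) can be sketched as follows. This is a minimal illustration only: the `design_network` stub, the amino-acid alphabet, and the toy structure parameters are all assumptions, not the patented network.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def design_network(representation, position, rng):
    """Stand-in for the protein design neural network: maps the current
    partial-protein representation to design data for one position."""
    amino_acid = rng.choice(AMINO_ACIDS)
    # Toy structure parameters: a 3-D location derived from the position index.
    structure_params = (float(position), 0.0, 0.0)
    return amino_acid, structure_params

def design_protein(num_positions, seed=0):
    """Jointly generate an amino acid sequence and per-position structure
    parameters, one position at a time, updating a running representation."""
    rng = random.Random(seed)
    representation = []  # current representation of the partial protein
    for position in range(num_positions):
        amino_acid, structure_params = design_network(representation, position, rng)
        # Update the current representation with this position's design data.
        representation.append((amino_acid, structure_params))
    sequence = "".join(aa for aa, _ in representation)
    structure = [params for _, params in representation]
    return sequence, structure
```

A seeded generator keeps the sketch deterministic; the real network would condition each position's output on the whole representation so far.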
-
Patent number: 12362036
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a predicted structure of a protein that is specified by an amino acid sequence. In one aspect, a method comprises: obtaining an initial embedding and initial values of structure parameters for each amino acid in the amino acid sequence, wherein the structure parameters for each amino acid comprise location parameters that specify a predicted three-dimensional spatial location of the amino acid in the structure of the protein; and processing a network input comprising the initial embedding and the initial values of the structure parameters for each amino acid in the amino acid sequence using a folding neural network to generate a network output comprising final values of the structure parameters for each amino acid in the amino acid sequence.
Type: Grant
Filed: December 2, 2019
Date of Patent: July 15, 2025
Assignee: DeepMind Technologies Limited
Inventors: John Jumper, Andrew W. Senior, Richard Andrew Evans, Stephan Gouws, Alexander Bridgland
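The data flow here, per-residue embeddings plus initial 3-D location parameters in, refined final location parameters out, can be sketched with a toy "folding" update. The neighbour-averaging rule and the scalar embeddings are invented for illustration; the patented folding neural network is far more elaborate.

```python
def fold(embeddings, init_locations, steps=3):
    """Toy folding network: iteratively refines per-residue 3-D locations
    starting from initial structure parameters and per-residue embeddings."""
    locations = [list(loc) for loc in init_locations]
    for _ in range(steps):
        new = []
        for i, loc in enumerate(locations):
            # Pull each residue toward the mean of its chain neighbours,
            # nudged by its (scalar, toy) embedding.
            neighbours = [locations[j] for j in (i - 1, i + 1) if 0 <= j < len(locations)]
            mean = ([sum(n[d] for n in neighbours) / len(neighbours) for d in range(3)]
                    if neighbours else loc)
            new.append([0.5 * (loc[d] + mean[d]) + 0.01 * embeddings[i] for d in range(3)])
        locations = new
    return locations  # final values of the location parameters
```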
-
Patent number: 12243515
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using neural networks. A feature vector that models audio characteristics of a portion of an utterance is received. Data indicative of latent variables of multivariate factor analysis is received. The feature vector and the data indicative of the latent variables are provided as input to a neural network. A candidate transcription for the utterance is determined based on at least an output of the neural network.
Type: Grant
Filed: March 2, 2023
Date of Patent: March 4, 2025
Assignee: Google LLC
Inventors: Andrew W. Senior, Ignacio L. Moreno
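The input construction the abstract describes, frame-level acoustic features concatenated with utterance-level factor-analysis latent variables (commonly realized as an i-vector), can be sketched as below. The one-layer scoring network is a stand-in assumption, not the patented recognizer.

```python
def make_network_input(feature_vector, latent_variables):
    """Concatenate frame-level acoustic features with utterance-level
    factor-analysis latent variables into a single network input."""
    return list(feature_vector) + list(latent_variables)

def neural_network(inputs, weights):
    """One-layer stand-in for the recognizer's neural network: computes a
    score per output symbol and returns the best-scoring symbol's index,
    from which a candidate transcription would be read off."""
    scores = [sum(w * x for w, x in zip(row, inputs)) for row in weights]
    return scores.index(max(scores))
```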
-
Publication number: 20250068953
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting errors in a computation performed by a quantum computer. In one aspect, a method comprises obtaining error correction data for each of a plurality of time steps during the computation; initializing a decoder state; and for each of the plurality of time steps: generating an intermediate representation; and processing a time step input through a Transformer neural network to update the decoder state for the time step. The method comprises generating a prediction of whether an error occurred in the computation from the decoder state for the last time step of the plurality of time steps.
Type: Application
Filed: August 23, 2023
Publication date: February 27, 2025
Inventors: Andrew W. Senior, Francisco Javier Hernandez Heras, Thomas Bastian Edlich, Alexander James Davies, Johannes Karl Richard Bausch, Yuezhen Niu
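The decoding loop described here, a decoder state initialized once and then updated from each time step's error-correction data, with a final prediction read from the last state, can be sketched as follows. The additive state update and the parity-based prediction are toy assumptions standing in for the Transformer network.

```python
def update_decoder_state(state, syndrome):
    """Stand-in for one Transformer pass: fold this time step's
    error-correction data (syndrome bits) into the running decoder state."""
    return state + sum(syndrome)

def predict_error(syndromes_per_step):
    """Predict whether a logical error occurred in the computation from
    per-time-step error-correction data, via a state updated every step."""
    state = 0  # initialized decoder state
    for syndrome in syndromes_per_step:
        state = update_decoder_state(state, syndrome)
    # Toy readout: odd total syndrome weight is flagged as an error.
    return state % 2 == 1
```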
-
Publication number: 20250068955
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting errors in a computation performed by a quantum computer. In one aspect, a method comprises obtaining error correction data for each of a plurality of time steps during the computation; and processing a respective input for each of a plurality of updating time steps using one or more machine learning decoder models to generate a prediction of whether an error occurred in the computation, wherein each updating time step corresponds to one or more of the time steps and wherein the respective input for each of the plurality of updating time steps is generated from the error correction data for the corresponding one or more time steps.
Type: Application
Filed: August 23, 2023
Publication date: February 27, 2025
Inventors: Andrew W. Senior, Francisco Javier Hernandez Heras, Thomas Bastian Edlich, Alexander James Davies, Johannes Karl Richard Bausch, Kevin Satzinger, Michael Gabriel Newman
-
Publication number: 20250068954
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting errors in a computation performed by a quantum computer. In one aspect, a method comprises obtaining error correction data for each of a plurality of time steps during the computation; initializing a decoder state; and for each of a plurality of updating time steps, wherein each updating time step corresponds to one or more of the time steps: generating an intermediate representation; and processing a time step input through a Transformer neural network to update the decoder state for the updating time step. The method comprises generating a prediction of whether an error occurred in the computation from the decoder state for the last updating time step of the plurality of updating time steps.
Type: Application
Filed: August 23, 2023
Publication date: February 27, 2025
Inventors: Andrew W. Senior, Francisco Javier Hernandez Heras, Thomas Bastian Edlich, Alexander James Davies, Johannes Karl Richard Bausch, Yuezhen Niu
-
Publication number: 20250069705
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing protein structure prediction and protein domain segmentation. In one aspect, a method comprises generating a plurality of predicted structures of a protein, wherein generating a predicted structure of the protein comprises: updating initial values of a plurality of structure parameters of the protein, comprising, at each of a plurality of update iterations: determining a gradient of a quality score for the current values of the structure parameters with respect to the current values of the structure parameters; and updating the current values of the structure parameters using the gradient.
Type: Application
Filed: November 8, 2024
Publication date: February 27, 2025
Inventors: Andrew W. Senior, James Kirkpatrick, Laurent Sifre, Richard Andrew Evans, Hugo Penedones, Chongli Qin, Ruoxi Sun, Karen Simonyan, John Jumper
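The update-iteration loop described here is gradient ascent on a quality score over the structure parameters. A minimal sketch, assuming a toy quadratic quality score `-sum((p - t)^2)` with a hypothetical target `t` (the patented quality score would be a learned potential, not this):

```python
def quality_gradient(params, target):
    """Gradient of the toy quality score -sum((p - t)^2) with respect to
    the current values of the structure parameters."""
    return [-2.0 * (p - t) for p, t in zip(params, target)]

def refine_structure(init_params, target, lr=0.1, iterations=50):
    """Gradient ascent: at each update iteration, compute the gradient of
    the quality score at the current parameter values and step along it."""
    params = list(init_params)
    for _ in range(iterations):
        grad = quality_gradient(params, target)
        params = [p + lr * g for p, g in zip(params, grad)]
    return params
```

With this quadratic score the parameters converge geometrically toward the target, which makes the sketch easy to verify.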
-
Publication number: 20240412809
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a predicted structure of a protein that is specified by an amino acid sequence. In one aspect, a method comprises: obtaining a multiple sequence alignment for the protein; determining, from the multiple sequence alignment and for each pair of amino acids in the amino acid sequence of the protein, a respective initial embedding of the pair of amino acids; processing the initial embeddings of the pairs of amino acids using a pair embedding neural network comprising a plurality of self-attention neural network layers to generate a final embedding of each pair of amino acids; and determining the predicted structure of the protein based on the final embedding of each pair of amino acids.
Type: Application
Filed: August 23, 2024
Publication date: December 12, 2024
Inventors: John Jumper, Andrew W. Senior, Richard Andrew Evans, Russell James Bates, Mikhail Figurnov, Alexander Pritzel, Timothy Frederick Goldie Green
-
Patent number: 12100477
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining a predicted structure of a protein that is specified by an amino acid sequence. In one aspect, a method comprises: obtaining a multiple sequence alignment for the protein; determining, from the multiple sequence alignment and for each pair of amino acids in the amino acid sequence of the protein, a respective initial embedding of the pair of amino acids; processing the initial embeddings of the pairs of amino acids using a pair embedding neural network comprising a plurality of self-attention neural network layers to generate a final embedding of each pair of amino acids; and determining the predicted structure of the protein based on the final embedding of each pair of amino acids.
Type: Grant
Filed: December 1, 2020
Date of Patent: September 24, 2024
Assignee: DeepMind Technologies Limited
Inventors: John Jumper, Andrew W. Senior, Richard Andrew Evans, Russell James Bates, Mikhail Figurnov, Alexander Pritzel, Timothy Frederick Goldie Green
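The core mechanism here, a stack of self-attention layers transforming initial pair embeddings into final ones, can be sketched with scalar-valued embeddings. Real pair embeddings are vectors with learned query/key/value projections; treating each embedding as its own query, key, and value scalar is a simplifying assumption.

```python
import math

def self_attention(embeddings):
    """One scalar-valued self-attention layer: each embedding is replaced
    by a softmax-weighted average of all embeddings, with attention weight
    exp(query * key)."""
    weights = [[math.exp(q * k) for k in embeddings] for q in embeddings]
    return [sum(w * v for w, v in zip(row, embeddings)) / sum(row)
            for row in weights]

def pair_embedding_network(initial_pair_embeddings, num_layers=2):
    """Stack of self-attention layers producing final pair embeddings,
    from which a predicted structure would be read off."""
    embeddings = list(initial_pair_embeddings)
    for _ in range(num_layers):
        embeddings = self_attention(embeddings)
    return embeddings
```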
-
Patent number: 12073823
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining, by a first sequence-training speech model, a first batch of training frames that represent speech features of first training utterances; obtaining, by the first sequence-training speech model, one or more first neural network parameters; determining, by the first sequence-training speech model, one or more optimized first neural network parameters based on (i) the first batch of training frames and (ii) the one or more first neural network parameters; obtaining, by a second sequence-training speech model, a second batch of training frames that represent speech features of second training utterances; obtaining one or more second neural network parameters; and determining, by the second sequence-training speech model, one or more optimized second neural network parameters based on (i) the second batch of training frames and (ii) the one or more second neural network parameters.
Type: Grant
Filed: November 10, 2023
Date of Patent: August 27, 2024
Assignee: Google LLC
Inventors: Georg Heigold, Erik McDermott, Vincent O. Vanhoucke, Andrew W. Senior, Michiel A. U. Bacchiani
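The structure of the claim, two sequence-training models that each obtain their own batch and their own parameters and optimize independently (as in data-parallel training), can be sketched as below. The scalar "training frames" and the mean-chasing update rule are toy assumptions.

```python
def optimize(parameters, batch, lr=0.5):
    """One toy sequence-training update: nudge each parameter toward the
    mean of the batch's (scalar) training frames."""
    mean = sum(batch) / len(batch)
    return [p + lr * (mean - p) for p in parameters]

def train_two_models(params_a, batch_a, params_b, batch_b):
    """Two sequence-training speech models, each with its own batch of
    training frames and its own parameters, optimized independently."""
    return optimize(params_a, batch_a), optimize(params_b, batch_b)
```

In a real data-parallel setup the per-replica results would later be reconciled (e.g. by a parameter server); the claim as quoted covers only the independent per-model updates.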
-
Patent number: 11996088
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for acoustic modeling of audio data. One method includes receiving audio data representing a portion of an utterance, providing the audio data to a trained recurrent neural network that has been trained to indicate the occurrence of a phone at any of multiple time frames within a maximum delay of receiving audio data corresponding to the phone, receiving, within the predetermined maximum delay of providing the audio data to the trained recurrent neural network, output of the trained neural network indicating a phone corresponding to the provided audio data, using output of the trained neural network to determine a transcription for the utterance, and providing the transcription for the utterance.
Type: Grant
Filed: July 1, 2020
Date of Patent: May 28, 2024
Assignee: Google LLC
Inventors: Andrew W. Senior, Hasim Sak, Kanury Kanishka Rao
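The timing contract described, each phone appears in the output stream within a fixed maximum delay of its audio arriving, can be illustrated with a stand-in model that emits every phone exactly `max_delay` steps late, with blanks before that. The fixed-lag behavior and blank symbol are illustrative assumptions.

```python
def delayed_phone_stream(frames, max_delay=2, blank="-"):
    """Output stream of a stand-in recurrent model: each input frame's
    phone is emitted `max_delay` steps after its audio arrives, which
    satisfies the maximum-delay constraint the abstract describes."""
    return [blank] * max_delay + list(frames)

def transcription(stream, blank="-"):
    """Determine the transcription by discarding blank outputs."""
    return [p for p in stream if p != blank]
```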
-
Publication number: 20240120022
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing protein design. In one aspect, a method comprises: processing an input characterizing a target protein structure of a target protein using an embedding neural network having a plurality of embedding neural network parameters to generate an embedding of the target protein structure of the target protein; determining a predicted amino acid sequence of the target protein based on the embedding of the target protein structure, comprising: conditioning a generative neural network having a plurality of generative neural network parameters on the embedding of the target protein structure; and generating, by the generative neural network conditioned on the embedding of the target protein structure, a representation of the predicted amino acid sequence of the target protein.
Type: Application
Filed: January 27, 2022
Publication date: April 11, 2024
Inventors: Andrew W. Senior, Simon Kohl, Jason Yim, Russell James Bates, Catalin-Dumitru Ionescu, Charlie Thomas Curtis Nash, Ali Razavi-Nematollahi, Alexander Pritzel, John Jumper
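The two-stage pipeline here, embed the target structure, then condition a generative model on that embedding to emit a sequence, can be sketched with deterministic stand-ins. The mean-coordinate embedding and the modular indexing "generator" are invented for illustration.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def embed_structure(structure):
    """Stand-in embedding network: summarize a target structure (a list of
    3-D coordinates) as a single 3-vector of mean coordinates."""
    n = len(structure)
    return [sum(coord[d] for coord in structure) / n for d in range(3)]

def generate_sequence(structure_embedding, length):
    """Stand-in generative network conditioned on the structure embedding:
    deterministically emits one amino acid per position."""
    seed = int(sum(structure_embedding) * 1000)
    return "".join(AMINO_ACIDS[(seed + i) % len(AMINO_ACIDS)]
                   for i in range(length))
```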
-
Publication number: 20240087559
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining, by a first sequence-training speech model, a first batch of training frames that represent speech features of first training utterances; obtaining, by the first sequence-training speech model, one or more first neural network parameters; determining, by the first sequence-training speech model, one or more optimized first neural network parameters based on (i) the first batch of training frames and (ii) the one or more first neural network parameters; obtaining, by a second sequence-training speech model, a second batch of training frames that represent speech features of second training utterances; obtaining one or more second neural network parameters; and determining, by the second sequence-training speech model, one or more optimized second neural network parameters based on (i) the second batch of training frames and (ii) the one or more second neural network parameters.
Type: Application
Filed: November 10, 2023
Publication date: March 14, 2024
Applicant: Google LLC
Inventors: Georg Heigold, Erik McDermott, Vincent O. Vanhoucke, Andrew W. Senior, Michiel A. U. Bacchiani
-
Patent number: 11854534
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining, by a first sequence-training speech model, a first batch of training frames that represent speech features of first training utterances; obtaining, by the first sequence-training speech model, one or more first neural network parameters; determining, by the first sequence-training speech model, one or more optimized first neural network parameters based on (i) the first batch of training frames and (ii) the one or more first neural network parameters; obtaining, by a second sequence-training speech model, a second batch of training frames that represent speech features of second training utterances; obtaining one or more second neural network parameters; and determining, by the second sequence-training speech model, one or more optimized second neural network parameters based on (i) the second batch of training frames and (ii) the one or more second neural network parameters.
Type: Grant
Filed: December 20, 2022
Date of Patent: December 26, 2023
Assignee: Google LLC
Inventors: Georg Heigold, Erik McDermott, Vincent O. Vanhoucke, Andrew W. Senior, Michiel A. U. Bacchiani
-
Patent number: 11769493
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training acoustic models and using the trained acoustic models. A connectionist temporal classification (CTC) acoustic model is accessed, the CTC acoustic model having been trained using a context-dependent state inventory generated from approximate phonetic alignments determined by another CTC acoustic model trained without fixed alignment targets. Audio data for a portion of an utterance is received. Input data corresponding to the received audio data is provided to the accessed CTC acoustic model. Data indicating a transcription for the utterance is generated based on output that the accessed CTC acoustic model produced in response to the input data. The data indicating the transcription is provided as output of an automated speech recognition service.
Type: Grant
Filed: May 3, 2022
Date of Patent: September 26, 2023
Assignee: Google LLC
Inventors: Kanury Kanishka Rao, Andrew W. Senior, Hasim Sak
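For readers unfamiliar with CTC: a CTC acoustic model emits one label (or a blank) per audio frame, and the transcription is recovered by collapsing repeats and dropping blanks. This collapsing rule is standard CTC, not specific to the patent:

```python
def ctc_collapse(frame_labels, blank="-"):
    """Collapse a per-frame CTC label path into a transcription:
    merge consecutive repeated labels, then drop blanks."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label  # repeats are only merged when directly adjacent
    return "".join(out)
```

Note that a blank between two identical labels keeps them distinct, which is how CTC can emit doubled letters such as the "ll" in "hello".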
-
Patent number: 11721327
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating representations of acoustic sequences. One of the methods includes: receiving an acoustic sequence, the acoustic sequence comprising a respective acoustic feature representation at each of a plurality of time steps; processing the acoustic feature representation at an initial time step using an acoustic modeling neural network; for each subsequent time step of the plurality of time steps: receiving an output generated by the acoustic modeling neural network for a preceding time step, generating a modified input from the output generated by the acoustic modeling neural network for the preceding time step and the acoustic representation for the time step, and processing the modified input using the acoustic modeling neural network to generate an output for the time step; and generating a phoneme representation for the utterance from the outputs for each of the time steps.
Type: Grant
Filed: January 8, 2021
Date of Patent: August 8, 2023
Assignee: Google LLC
Inventors: Hasim Sak, Andrew W. Senior
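The feedback loop described, each time step's network input is built from the current features plus the previous step's network output, can be sketched as follows. The summing `acoustic_model` stand-in and concatenation as the "modified input" are assumptions for illustration.

```python
def acoustic_model(modified_input):
    """Stand-in acoustic modeling network: one scalar output per input."""
    return sum(modified_input)

def run_sequence(feature_frames):
    """Process an acoustic sequence, feeding each step's features together
    with the previous step's network output back into the network."""
    outputs = []
    prev_output = 0.0
    for frame in feature_frames:
        # Generate the modified input from the preceding output and the
        # current acoustic feature representation.
        modified_input = list(frame) + [prev_output]
        prev_output = acoustic_model(modified_input)
        outputs.append(prev_output)
    return outputs  # a phoneme representation would be derived from these
```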
-
Patent number: 11715486
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying the language of a spoken utterance. One of the methods includes receiving input features of an utterance; and processing the input features using an acoustic model that comprises one or more convolutional neural network (CNN) layers, one or more long short-term memory network (LSTM) layers, and one or more fully connected neural network layers to generate a transcription for the utterance.
Type: Grant
Filed: December 31, 2019
Date of Patent: August 1, 2023
Assignee: Google LLC
Inventors: Tara N. Sainath, Andrew W. Senior, Oriol Vinyals, Hasim Sak
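The layer ordering in this abstract (convolutional, then LSTM, then fully connected, often called a CLDNN) can be sketched with toy stand-ins for each stage; the kernel, the leaky accumulator in place of a real LSTM, and the identity output layer are all assumptions.

```python
def conv1d(xs, kernel):
    """CNN stage: valid-mode 1-D convolution over the input features."""
    k = len(kernel)
    return [sum(kernel[j] * xs[i + j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def simple_recurrent(xs, decay=0.5):
    """LSTM stand-in: a leaky accumulator carrying state across time."""
    state, outputs = 0.0, []
    for x in xs:
        state = decay * state + x
        outputs.append(state)
    return outputs

def fully_connected(xs, weight=1.0, bias=0.0):
    """Fully connected stage applied to each time step's activation."""
    return [weight * x + bias for x in xs]

def cldnn(features):
    """Convolutional, then recurrent, then fully connected layers in series."""
    return fully_connected(simple_recurrent(conv1d(features, [0.5, 0.5])))
```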
-
Publication number: 20230206909
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using neural networks. A feature vector that models audio characteristics of a portion of an utterance is received. Data indicative of latent variables of multivariate factor analysis is received. The feature vector and the data indicative of the latent variables are provided as input to a neural network. A candidate transcription for the utterance is determined based on at least an output of the neural network.
Type: Application
Filed: March 2, 2023
Publication date: June 29, 2023
Applicant: Google LLC
Inventors: Andrew W. Senior, Ignacio L. Moreno
-
Patent number: 11620991
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using neural networks. A feature vector that models audio characteristics of a portion of an utterance is received. Data indicative of latent variables of multivariate factor analysis is received. The feature vector and the data indicative of the latent variables are provided as input to a neural network. A candidate transcription for the utterance is determined based on at least an output of the neural network.
Type: Grant
Filed: January 21, 2021
Date of Patent: April 4, 2023
Assignee: Google LLC
Inventors: Andrew W. Senior, Ignacio L. Moreno
-
Patent number: 11557277
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining, by a first sequence-training speech model, a first batch of training frames that represent speech features of first training utterances; obtaining, by the first sequence-training speech model, one or more first neural network parameters; determining, by the first sequence-training speech model, one or more optimized first neural network parameters based on (i) the first batch of training frames and (ii) the one or more first neural network parameters; obtaining, by a second sequence-training speech model, a second batch of training frames that represent speech features of second training utterances; obtaining one or more second neural network parameters; and determining, by the second sequence-training speech model, one or more optimized second neural network parameters based on (i) the second batch of training frames and (ii) the one or more second neural network parameters.
Type: Grant
Filed: December 15, 2021
Date of Patent: January 17, 2023
Assignee: Google LLC
Inventors: Georg Heigold, Erik McDermott, Vincent O. Vanhoucke, Andrew W. Senior, Michiel A. U. Bacchiani