Patents by Inventor George Edward Dahl

George Edward Dahl has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11537949
    Abstract: A method for reducing idleness in a machine-learning training system can include performing operations by computing devices. A first set of training operations can access and prepare a plurality of training examples of a set of training data. A second set of training operations can train a machine-learned model based at least in part on the set of training data and can include one or more repeat iterations in which at least a portion of the second set of training operations is repeatedly performed such that the training example(s) are repeatedly used to train the machine-learned model. A rate of the repeat iteration(s) can be based at least in part on an echo factor that can be based at least in part on a comparison of a first computational time of the first set of training operations to a second computational time of the second set of training operations.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: December 27, 2022
    Assignee: GOOGLE LLC
    Inventors: Dami Choi, Alexandre Tachard Passos, Christopher James Shallue, George Edward Dahl
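The "echo factor" scheme described in the abstract above can be illustrated with a short sketch. This is a minimal, hedged interpretation, not the patented implementation: the toy training step, the batch format, and the specific formula `echo_factor = round(pipeline_time / step_time)` are assumptions chosen to show the idea of reusing each prepared batch for several consecutive updates when the input pipeline is the bottleneck.

```python
# Sketch of "data echoing": when preparing examples (first set of training
# operations) is slower than a training step (second set), each prepared
# batch is reused for several consecutive updates so the trainer idles less.

def prepare_batch(raw):
    # First set of operations: stand-in for reading + preprocessing examples.
    return [x * 2 for x in raw]

def train_step(model, batch):
    # Second set of operations: stand-in for one gradient update.
    model["updates"] += 1
    model["loss"] = sum(batch) / len(batch)
    return model

def train_with_echoing(raw_batches, pipeline_time, step_time):
    # Echo factor (assumed formula): how many repeat iterations to run per
    # batch, based on comparing the two computational times.
    echo_factor = max(1, round(pipeline_time / step_time))
    model = {"updates": 0, "loss": None}
    for raw in raw_batches:
        batch = prepare_batch(raw)        # slow input pipeline
        for _ in range(echo_factor):      # repeat iterations ("echoes")
            model = train_step(model, batch)
    return model, echo_factor
```

With a pipeline four times slower than the training step, each batch is echoed four times, so two batches yield eight updates.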
  • Publication number: 20200372407
    Abstract: A method for reducing idleness in a machine-learning training system can include performing operations by computing devices. A first set of training operations can access and prepare a plurality of training examples of a set of training data. A second set of training operations can train a machine-learned model based at least in part on the set of training data and can include one or more repeat iterations in which at least a portion of the second set of training operations is repeatedly performed such that the training example(s) are repeatedly used to train the machine-learned model. A rate of the repeat iteration(s) can be based at least in part on an echo factor that can be based at least in part on a comparison of a first computational time of the first set of training operations to a second computational time of the second set of training operations.
    Type: Application
    Filed: May 11, 2020
    Publication date: November 26, 2020
    Inventors: Dami Choi, Alexandre Tachard Passos, Christopher James Shallue, George Edward Dahl
  • Publication number: 20190295688
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing a biological sequence using a neural network. One of the methods includes obtaining data identifying a biological sequence; generating, from the obtained data, an encoding of the biological sequence; processing the encoding using a deep neural network, wherein the deep neural network is configured through training to process the encoding to generate a score distribution over a set of biological labels for the biological sequence; and classifying the biological sequence using the score distribution.
    Type: Application
    Filed: March 25, 2019
    Publication date: September 26, 2019
    Inventors: Mark Andrew DePristo, Akosua Pokua Busia, George Edward Dahl
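The pipeline in the abstract above (encode a biological sequence, score it with a trained network, classify from the score distribution) can be sketched roughly as follows. Everything concrete here is an assumption for illustration: the DNA alphabet, the hypothetical label set, and the single linear layer standing in for the deep neural network.

```python
# Sketch of the described pipeline: one-hot encode a biological sequence,
# score it, and classify via a softmax distribution over labels.
import math

ALPHABET = "ACGT"                      # assumed: DNA; the patent covers
LABELS = ["coding", "non_coding"]      # biological sequences generally

def one_hot_encode(seq):
    # Each base becomes a length-4 indicator vector.
    return [[1.0 if base == a else 0.0 for a in ALPHABET] for base in seq]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(seq, weights):
    # Flatten the encoding and apply one linear scorer per label — a toy
    # stand-in for the trained deep neural network in the abstract.
    flat = [v for row in one_hot_encode(seq) for v in row]
    scores = [sum(w * x for w, x in zip(weights[label], flat))
              for label in LABELS]
    dist = softmax(scores)             # score distribution over labels
    return LABELS[dist.index(max(dist))], dist
```

A real system would replace the linear scorer with a deep network trained on labeled sequences; the encode → score → classify structure is the same.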
  • Publication number: 20190286984
    Abstract: A method of determining a final architecture for a neural network (NN) for performing a particular NN task is described.
    Type: Application
    Filed: March 12, 2019
    Publication date: September 19, 2019
    Applicant: Google LLC
    Inventors: Vijay Vasudevan, Mohammad Norouzi, George Edward Dahl, Manoj Kumar Sivaraj
  • Patent number: 8972253
Abstract: A method is disclosed herein that includes an act of causing a processor to receive a sample, wherein the sample is one of a spoken utterance, an online handwriting sample, or a moving image sample. The method also comprises the act of causing the processor to decode the sample based at least in part upon an output of a combination of a deep structure and a context-dependent Hidden Markov Model (HMM), wherein the deep structure is configured to output a posterior probability of a context-dependent unit. The deep structure is a Deep Belief Network consisting of many layers of nonlinear units with connecting weights between layers, trained by a pretraining step followed by a fine-tuning step.
    Type: Grant
    Filed: September 15, 2010
    Date of Patent: March 3, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Li Deng, Dong Yu, George Edward Dahl
  • Publication number: 20120065976
Abstract: A method is disclosed herein that includes an act of causing a processor to receive a sample, wherein the sample is one of a spoken utterance, an online handwriting sample, or a moving image sample. The method also comprises the act of causing the processor to decode the sample based at least in part upon an output of a combination of a deep structure and a context-dependent Hidden Markov Model (HMM), wherein the deep structure is configured to output a posterior probability of a context-dependent unit. The deep structure is a Deep Belief Network consisting of many layers of nonlinear units with connecting weights between layers, trained by a pretraining step followed by a fine-tuning step.
    Type: Application
    Filed: September 15, 2010
    Publication date: March 15, 2012
    Applicant: Microsoft Corporation
    Inventors: Li Deng, Dong Yu, George Edward Dahl