Patents by Inventor Jason E. Holt

Jason E. Holt has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10552738
    Abstract: The present disclosure provides systems and methods that enable adaptive training of a channel coding model including an encoder model, a channel model positioned structurally after the encoder model, and a decoder model positioned structurally after the channel model. The channel model can have been trained to emulate a communication channel, for example, by training the channel model on example data that has been transmitted via the communication channel. The channel coding model can be trained on a loss function that describes a difference between input data input into the encoder model and output data received from the decoder model. In particular, such a loss function can be backpropagated through the decoder model while modifying the decoder model, backpropagated through the channel model while the channel model is held constant, and then backpropagated through the encoder model while modifying the encoder model.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: February 4, 2020
    Assignee: Google LLC
    Inventors: Jason E. Holt, Marcello Herreshoff
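The training scheme in the abstract above can be sketched in code. This is a minimal illustration, not the patent's actual implementation: the patent does not specify architectures, so linear models and manual NumPy gradients stand in for the encoder, channel model, and decoder. The key step it demonstrates is that the loss gradient flows backward through the decoder (updated), then through the channel model (held constant), then through the encoder (updated).

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 4, 6                                         # input dim, code dim
W_enc = rng.normal(scale=0.1, size=(k, d))          # trainable encoder
W_chan = np.eye(k) + rng.normal(scale=0.5, size=(k, k))  # frozen channel emulator
W_dec = rng.normal(scale=0.1, size=(d, k))          # trainable decoder
W_chan_before = W_chan.copy()

x = rng.normal(size=(d, 32))                        # batch of input vectors
lr = 0.05

def loss(W_enc, W_dec):
    y = W_dec @ W_chan @ W_enc @ x                  # output data from the decoder
    return np.mean((x - y) ** 2)                    # difference between input and output

loss_start = loss(W_enc, W_dec)
for _ in range(1000):
    h = W_enc @ x                                   # encoder output
    c = W_chan @ h                                  # channel model output
    y = W_dec @ c                                   # decoder output
    dL_dy = 2 * (y - x) / x.size
    # backpropagate through the decoder while modifying it
    grad_dec = dL_dy @ c.T
    dL_dc = W_dec.T @ dL_dy
    # backpropagate through the channel model while it is held constant:
    # gradients pass through, but no update to W_chan is computed or applied
    dL_dh = W_chan.T @ dL_dc
    # backpropagate through the encoder while modifying it
    grad_enc = dL_dh @ x.T
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_end = loss(W_enc, W_dec)
```

Holding the channel model constant during this pass is what lets the encoder and decoder adapt to the emulated channel without the emulation itself drifting.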
  • Patent number: 10482379
    Abstract: The present disclosure provides systems and methods that enable training of an encoder model based on a decoder model that performs an inverse transformation relative to the encoder model. In one example, an encoder model can receive a first set of inputs and output a first set of outputs. The encoder model can be a neural network. The decoder model can receive the first set of outputs and output a second set of outputs. A loss function can describe a difference between the first set of inputs and the second set of outputs. According to an aspect of the present disclosure, the loss function can be sequentially backpropagated through the decoder model without modifying the decoder model and then through the encoder model while modifying the encoder model, thereby training the encoder model. Thus, an encoder model can be trained to have enforced consistency relative to the inverse decoder model.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: November 19, 2019
    Assignee: Google LLC
    Inventors: Jason E. Holt, Marcello Mathias Herreshoff
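The scheme in the abstract above can likewise be sketched. Again this is an illustrative toy, not the patented implementation: a fixed invertible linear map plays the role of the decoder, and a trainable linear map plays the encoder. The loss between the first set of inputs and the second set of outputs is backpropagated through the decoder without modifying it, then through the encoder while modifying it, so the encoder learns an approximate inverse of the decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 3
# Fixed decoder performing a known, well-conditioned transform; the encoder
# must learn its (approximate) inverse so that decode(encode(x)) ~ x.
W_dec = 2 * np.eye(d) + rng.normal(scale=0.3, size=(d, d))  # frozen
W_enc = rng.normal(scale=0.1, size=(d, d))                  # trainable

x = rng.normal(size=(d, 64))                                # first set of inputs
lr = 0.1

def loss(W_enc):
    y2 = W_dec @ (W_enc @ x)                                # second set of outputs
    return np.mean((x - y2) ** 2)

loss_start = loss(W_enc)
for _ in range(1000):
    y1 = W_enc @ x                 # encoder: first set of outputs
    y2 = W_dec @ y1                # decoder: second set of outputs
    dL_dy2 = 2 * (y2 - x) / x.size
    # backpropagate through the decoder WITHOUT modifying it...
    dL_dy1 = W_dec.T @ dL_dy2
    # ...then through the encoder while modifying it
    W_enc -= lr * (dL_dy1 @ x.T)

loss_end = loss(W_enc)
```

After training, `W_dec @ W_enc` is close to the identity in this toy setup, i.e., the encoder has enforced consistency relative to the inverse decoder.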
  • Publication number: 20180174050
    Abstract: The present disclosure provides systems and methods that enable adaptive training of a channel coding model including an encoder model, a channel model positioned structurally after the encoder model, and a decoder model positioned structurally after the channel model. The channel model can have been trained to emulate a communication channel, for example, by training the channel model on example data that has been transmitted via the communication channel. The channel coding model can be trained on a loss function that describes a difference between input data input into the encoder model and output data received from the decoder model. In particular, such a loss function can be backpropagated through the decoder model while modifying the decoder model, backpropagated through the channel model while the channel model is held constant, and then backpropagated through the encoder model while modifying the encoder model.
    Type: Application
    Filed: December 15, 2016
    Publication date: June 21, 2018
    Inventors: Jason E. Holt, Marcello Herreshoff
  • Publication number: 20180032871
    Abstract: The present disclosure provides systems and methods that enable training of an encoder model based on a decoder model that performs an inverse transformation relative to the encoder model. In one example, an encoder model can receive a first set of inputs and output a first set of outputs. The encoder model can be a neural network. The decoder model can receive the first set of outputs and output a second set of outputs. A loss function can describe a difference between the first set of inputs and the second set of outputs. According to an aspect of the present disclosure, the loss function can be sequentially backpropagated through the decoder model without modifying the decoder model and then through the encoder model while modifying the encoder model, thereby training the encoder model. Thus, an encoder model can be trained to have enforced consistency relative to the inverse decoder model.
    Type: Application
    Filed: July 29, 2016
    Publication date: February 1, 2018
    Inventors: Jason E. Holt, Marcello Herreshoff