Patents by Inventor Daniel Zorn

Daniel Zorn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11388424
    Abstract: A system implemented by one or more computers comprises a visual encoder component configured to receive as input data representing a sequence of image frames, in particular representing objects in a scene of the sequence, and to output a sequence of corresponding state codes, each state code comprising vectors, one for each of the objects. Each vector represents a respective position and velocity of its corresponding object. The system also comprises a dynamic predictor component configured to take as input a sequence of state codes, for example from the visual encoder, and predict a state code for a next unobserved frame. The system further comprises a state decoder component configured to convert the predicted state code to a state, the state comprising a respective position and velocity vector for each object in the scene. This state may represent a predicted position and velocity vector for each of the objects. (A minimal illustrative code sketch of this encoder/predictor/decoder pipeline appears after this listing.)
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: July 12, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Nicholas Watters, Razvan Pascanu, Peter William Battaglia, Daniel Zorn, Theophane Guillaume Weber
  • Publication number: 20210152835
    Abstract: A system implemented by one or more computers comprises a visual encoder component configured to receive as input data representing a sequence of image frames, in particular representing objects in a scene of the sequence, and to output a sequence of corresponding state codes, each state code comprising vectors, one for each of the objects. Each vector represents a respective position and velocity of its corresponding object. The system also comprises a dynamic predictor component configured to take as input a sequence of state codes, for example from the visual encoder, and predict a state code for a next unobserved frame. The system further comprises a state decoder component configured to convert the predicted state code to a state, the state comprising a respective position and velocity vector for each object in the scene. This state may represent a predicted position and velocity vector for each of the objects.
    Type: Application
    Filed: December 29, 2020
    Publication date: May 20, 2021
    Inventors: Nicholas Watters, Razvan Pascanu, Peter William Battaglia, Daniel Zorn, Theophane Guillaume Weber
  • Patent number: 10887607
    Abstract: A system implemented by one or more computers comprises a visual encoder component configured to receive as input data representing a sequence of image frames, in particular representing objects in a scene of the sequence, and to output a sequence of corresponding state codes, each state code comprising vectors, one for each of the objects. Each vector represents a respective position and velocity of its corresponding object. The system also comprises a dynamic predictor component configured to take as input a sequence of state codes, for example from the visual encoder, and predict a state code for a next unobserved frame. The system further comprises a state decoder component configured to convert the predicted state code to a state, the state comprising a respective position and velocity vector for each object in the scene. This state may represent a predicted position and velocity vector for each of the objects.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: January 5, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Nicholas Watters, Razvan Pascanu, Peter William Battaglia, Daniel Zorn, Theophane Guillaume Weber
  • Patent number: 10860928
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating data items. One of the systems is a neural network system comprising a memory storing a plurality of template data items; one or more processors configured to select a memory address based upon a received input data item, and retrieve a template data item from the memory based upon the selected memory address; an encoder neural network configured to process the received input data item and the retrieved template data item to generate a latent variable representation; and a decoder neural network configured to process the retrieved template data item and the latent variable representation to generate an output data item. (A minimal illustrative code sketch of this memory-templated generative model appears after this listing.)
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: December 8, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Andriy Mnih, Daniel Zorn, Danilo Jimenez Rezende, Jorg Bornschein
  • Publication number: 20200090043
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating data items. One of the systems is a neural network system comprising a memory storing a plurality of template data items; one or more processors configured to select a memory address based upon a received input data item, and retrieve a template data item from the memory based upon the selected memory address; an encoder neural network configured to process the received input data item and the retrieved template data item to generate a latent variable representation; and a decoder neural network configured to process the retrieved template data item and the latent variable representation to generate an output data item.
    Type: Application
    Filed: November 19, 2019
    Publication date: March 19, 2020
    Inventors: Andriy Mnih, Daniel Zorn, Danilo Jimenez Rezende, Jorg Bornschein
  • Publication number: 20200092565
    Abstract: A system implemented by one or more computers comprises a visual encoder component configured to receive as input data representing a sequence of image frames, in particular representing objects in a scene of the sequence, and to output a sequence of corresponding state codes, each state code comprising vectors, one for each of the objects. Each vector represents a respective position and velocity of its corresponding object. The system also comprises a dynamic predictor component configured to take as input a sequence of state codes, for example from the visual encoder, and predict a state code for a next unobserved frame. The system further comprises a state decoder component configured to convert the predicted state code to a state, the state comprising a respective position and velocity vector for each object in the scene. This state may represent a predicted position and velocity vector for each of the objects.
    Type: Application
    Filed: November 18, 2019
    Publication date: March 19, 2020
    Inventors: Nicholas Watters, Razvan Pascanu, Peter William Battaglia, Daniel Zorn, Theophane Guillaume Weber
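
The abstracts for 11388424, 20210152835, 10887607, and 20200092565 all describe the same three-stage pipeline: a visual encoder that turns image frames into per-object state codes, a dynamics predictor that predicts the state code of the next unobserved frame, and a state decoder that converts a state code into per-object position and velocity vectors. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline; the module names, layer choices, dimensions, and the simple MLP predictor are assumptions for illustration and are not taken from the patents.

```python
# Minimal, illustrative sketch of the encoder / predictor / decoder pipeline
# described in the abstracts above. All layer sizes and internals are
# assumptions; the patents do not prescribe this implementation.
import torch
import torch.nn as nn


class VisualEncoder(nn.Module):
    """Maps a sequence of image frames to state codes: one vector per object, per frame."""

    def __init__(self, num_objects: int, code_dim: int):
        super().__init__()
        self.num_objects, self.code_dim = num_objects, code_dim
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_codes = nn.Linear(32, num_objects * code_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) -> codes: (batch, time, num_objects, code_dim)
        b, t = frames.shape[:2]
        feats = self.conv(frames.flatten(0, 1)).flatten(1)
        return self.to_codes(feats).view(b, t, self.num_objects, self.code_dim)


class DynamicsPredictor(nn.Module):
    """Predicts the state code of the next, unobserved frame from a window of state codes."""

    def __init__(self, num_objects: int, code_dim: int, window: int):
        super().__init__()
        self.num_objects, self.code_dim = num_objects, code_dim
        self.net = nn.Sequential(
            nn.Linear(window * num_objects * code_dim, 128), nn.ReLU(),
            nn.Linear(128, num_objects * code_dim),
        )

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: (batch, window, num_objects, code_dim) -> (batch, num_objects, code_dim)
        return self.net(codes.flatten(1)).view(-1, self.num_objects, self.code_dim)


class StateDecoder(nn.Module):
    """Converts a predicted state code into an explicit per-object position and velocity."""

    def __init__(self, code_dim: int, state_dim: int = 4):  # 2-D position + 2-D velocity
        super().__init__()
        self.net = nn.Linear(code_dim, state_dim)

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        # code: (batch, num_objects, code_dim) -> state: (batch, num_objects, state_dim)
        return self.net(code)


if __name__ == "__main__":
    encoder = VisualEncoder(num_objects=3, code_dim=16)
    predictor = DynamicsPredictor(num_objects=3, code_dim=16, window=4)
    decoder = StateDecoder(code_dim=16)

    frames = torch.randn(2, 4, 3, 64, 64)   # 2 clips of 4 frames each
    codes = encoder(frames)                 # (2, 4, 3, 16)
    next_code = predictor(codes)            # predicted code for the next, unobserved frame
    next_state = decoder(next_code)         # (2, 3, 4): per-object position + velocity
    print(next_state.shape)
```

In this sketch the predictor operates on a fixed window of past state codes; a recurrent or object-relational predictor would fit the same interface.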
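
The abstracts for 10860928 and 20200090043 describe a generative model built around a memory of template data items: an address is selected from the input, a template is retrieved, an encoder network maps the input plus template to a latent representation, and a decoder network maps the template plus latent back to an output item. The sketch below is a minimal, hypothetical PyTorch rendering of that description; the nearest-neighbour addressing, Gaussian latent, and MLP sizes are assumptions made for illustration, not details from the patents.

```python
# Minimal, illustrative sketch of the memory-templated generative model
# described in the abstracts above. Addressing scheme, latent distribution,
# and layer sizes are assumptions; the patents do not fix these choices.
import torch
import torch.nn as nn


class TemplateMemoryGenerator(nn.Module):
    def __init__(self, num_templates: int, data_dim: int, latent_dim: int):
        super().__init__()
        # Memory storing a fixed set of template data items (learnable here).
        self.memory = nn.Parameter(torch.randn(num_templates, data_dim))
        # Encoder: input item + retrieved template -> parameters of a latent distribution.
        self.encoder = nn.Sequential(
            nn.Linear(2 * data_dim, 256), nn.ReLU(), nn.Linear(256, 2 * latent_dim)
        )
        # Decoder: retrieved template + latent sample -> output data item.
        self.decoder = nn.Sequential(
            nn.Linear(data_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim)
        )

    def address(self, x: torch.Tensor) -> torch.Tensor:
        # Select a memory address per input item; here, nearest neighbour in
        # data space (one of many possible addressing schemes).
        return torch.cdist(x, self.memory).argmin(dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        addr = self.address(x)
        template = self.memory[addr]                               # retrieved template items
        mu, log_var = self.encoder(torch.cat([x, template], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()      # reparameterised latent sample
        return self.decoder(torch.cat([template, z], dim=1))       # generated output item


if __name__ == "__main__":
    model = TemplateMemoryGenerator(num_templates=10, data_dim=32, latent_dim=8)
    x = torch.randn(4, 32)
    print(model(x).shape)   # torch.Size([4, 32])
```

Conditioning both the encoder and the decoder on the retrieved template lets the latent variable model only the residual between the template and the input, which is the structural idea the abstract describes.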