Patents by Inventor Bryan Andrew Seybold

Bryan Andrew Seybold has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11763466
    Abstract: A system comprising an encoder neural network, a scene structure decoder neural network, and a motion decoder neural network. The encoder neural network is configured to: receive a first image and a second image; and process the first image and the second image to generate an encoded representation of the first image and the second image. The scene structure decoder neural network is configured to process the encoded representation to generate a structure output characterizing a structure of a scene depicted in the first image. The motion decoder neural network is configured to process the encoded representation to generate a motion output characterizing motion between the first image and the second image.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: September 19, 2023
    Assignee: Google LLC
    Inventors: Cordelia Luise Schmid, Sudheendra Vijayanarasimhan, Susanna Maria Ricco, Bryan Andrew Seybold, Rahul Sukthankar, Aikaterini Fragkiadaki
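    The abstract above describes a three-part architecture: a shared encoder that consumes an image pair, a decoder head for scene structure, and a decoder head for motion. The following is a minimal PyTorch sketch of that layout; the layer sizes, the single-channel depth-like structure output, and the 6-value motion output are illustrative assumptions, not the patented design.

```python
import torch
from torch import nn


class StructureAndMotionNet(nn.Module):
    """Sketch: one encoder shared by a scene-structure head and a motion head (sizes assumed)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Encoder: the two RGB images are stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Scene structure decoder: emits a single-channel depth-like map (an assumption).
        self.structure_decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, feat_dim, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_dim, 1, kernel_size=4, stride=2, padding=1),
        )
        # Motion decoder: emits a global 6-DoF motion vector (also an assumption).
        self.motion_decoder = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, 6),
        )

    def forward(self, first_image: torch.Tensor, second_image: torch.Tensor):
        encoded = self.encoder(torch.cat([first_image, second_image], dim=1))
        structure = self.structure_decoder(encoded)  # structure of the scene in the first image
        motion = self.motion_decoder(encoded)        # motion between the two images
        return structure, motion


# Usage with random stand-in images.
net = StructureAndMotionNet()
structure, motion = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```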
  • Patent number: 11669977
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an optical flow object localization system and a novel object localization system. In a first aspect, the optical flow object localization system is trained to process an optical flow image to generate object localization data defining locations of objects depicted in a video frame corresponding to the optical flow image. In a second aspect, a novel object localization system is trained to process a video frame to generate object localization data defining locations of novel objects depicted in the video frame.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: June 6, 2023
    Assignee: Google LLC
    Inventors: Susanna Maria Ricco, Bryan Andrew Seybold
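    As a rough illustration of the first aspect in the abstract above (training a system to map an optical flow image to object locations), here is a hypothetical PyTorch sketch. The two-channel flow input, the single-box output, and the loss are assumptions made for brevity; the patent describes object localization data more generally.

```python
import torch
from torch import nn


class FlowObjectLocalizer(nn.Module):
    """Sketch: maps a 2-channel optical flow image to one bounding box (shapes assumed)."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.box_head = nn.Linear(feat_dim, 4)  # (x, y, width, height), normalized

    def forward(self, flow_image: torch.Tensor) -> torch.Tensor:
        return self.box_head(self.backbone(flow_image))


# One illustrative training step on a random flow image and target box.
model = FlowObjectLocalizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
flow = torch.randn(1, 2, 64, 64)  # horizontal and vertical flow channels
target_box = torch.tensor([[0.2, 0.3, 0.4, 0.5]])
loss = nn.functional.smooth_l1_loss(model(flow), target_box)
loss.backward()
optimizer.step()
```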
  • Publication number: 20220383652
    Abstract: A computing system comprising one or more computing devices can obtain one or more images of an animal. The computing system can determine, using at least one of one or more machine-learned models, a plurality of joint positions associated with the animal based on the one or more images. The computing system can determine a body model for the animal. The computing system can estimate a body pose for the animal based on the one or more images, the plurality of joint positions, and the determined body model.
    Type: Application
    Filed: November 4, 2020
    Publication date: December 1, 2022
    Inventors: Bryan Andrew Seybold, Shan Yang, Bo Hu, Kevin Patrick Murphy, David Alexander Ross
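    The abstract above describes a three-step pipeline: predict joint positions from images with a machine-learned model, determine a body model, then estimate a body pose from all three. The sketch below shows that flow in PyTorch; the joint count, layer sizes, and the stubbed pose-fitting step are assumptions for illustration only.

```python
import torch
from torch import nn


class JointPredictor(nn.Module):
    """Sketch of the joint-position step: an image batch in, 2D joint coordinates out."""

    def __init__(self, num_joints: int = 16):
        super().__init__()
        self.num_joints = num_joints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_joints * 2)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(images)).view(-1, self.num_joints, 2)


def estimate_body_pose(images, joint_positions, body_model):
    """Stand-in for the final step: fit the body model's pose to the predicted joints.
    A real system would optimize pose parameters; this stub just averages over frames."""
    return {"body_model": body_model, "pose": joint_positions.mean(dim=0)}


images = torch.randn(4, 3, 64, 64)    # four frames showing one animal
joints = JointPredictor()(images)     # per-frame joint positions
pose = estimate_body_pose(images, joints, body_model="quadruped-template")
```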
  • Publication number: 20210217197
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an optical flow object localization system and a novel object localization system. In a first aspect, the optical flow object localization system is trained to process an optical flow image to generate object localization data defining locations of objects depicted in a video frame corresponding to the optical flow image. In a second aspect, a novel object localization system is trained to process a video frame to generate object localization data defining locations of novel objects depicted in the video frame.
    Type: Application
    Filed: March 26, 2021
    Publication date: July 15, 2021
    Inventors: Susanna Maria Ricco, Bryan Andrew Seybold
  • Patent number: 10991122
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an optical flow object localization system and a novel object localization system. In a first aspect, the optical flow object localization system is trained to process an optical flow image to generate object localization data defining locations of objects depicted in a video frame corresponding to the optical flow image. In a second aspect, a novel object localization system is trained to process a video frame to generate object localization data defining locations of novel objects depicted in the video frame.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: April 27, 2021
    Assignee: Google LLC
    Inventors: Susanna Maria Ricco, Bryan Andrew Seybold
  • Publication number: 20210118153
    Abstract: A system comprising an encoder neural network, a scene structure decoder neural network, and a motion decoder neural network. The encoder neural network is configured to: receive a first image and a second image; and process the first image and the second image to generate an encoded representation of the first image and the second image. The scene structure decoder neural network is configured to process the encoded representation to generate a structure output characterizing a structure of a scene depicted in the first image. The motion decoder neural network is configured to process the encoded representation to generate a motion output characterizing motion between the first image and the second image.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 22, 2021
    Inventors: Cordelia Luise Schmid, Sudheendra Vijayanarasimhan, Susanna Maria Ricco, Bryan Andrew Seybold, Rahul Sukthankar, Aikaterini Fragkiadaki
  • Patent number: 10878583
    Abstract: A system comprising an encoder neural network, a scene structure decoder neural network, and a motion decoder neural network. The encoder neural network is configured to: receive a first image and a second image; and process the first image and the second image to generate an encoded representation of the first image and the second image. The scene structure decoder neural network is configured to process the encoded representation to generate a structure output characterizing a structure of a scene depicted in the first image. The motion decoder neural network is configured to process the encoded representation to generate a motion output characterizing motion between the first image and the second image.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: December 29, 2020
    Assignee: Google LLC
    Inventors: Cordelia Luise Schmid, Sudheendra Vijayanarasimhan, Susanna Maria Ricco, Bryan Andrew Seybold, Rahul Sukthankar, Aikaterini Fragkiadaki
  • Publication number: 20200349722
    Abstract: A system comprising an encoder neural network, a scene structure decoder neural network, and a motion decoder neural network. The encoder neural network is configured to: receive a first image and a second image; and process the first image and the second image to generate an encoded representation of the first image and the second image. The scene structure decoder neural network is configured to process the encoded representation to generate a structure output characterizing a structure of a scene depicted in the first image. The motion decoder neural network is configured to process the encoded representation to generate a motion output characterizing motion between the first image and the second image.
    Type: Application
    Filed: December 1, 2017
    Publication date: November 5, 2020
    Inventors: Cordelia Luise Schmid, Sudheendra Vijayanarasimhan, Susanna Maria Ricco, Bryan Andrew Seybold, Rahul Sukthankar, Aikaterini Fragkiadaki
  • Publication number: 20200151905
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an optical flow object localization system and a novel object localization system. In a first aspect, the optical flow object localization system is trained to process an optical flow image to generate object localization data defining locations of objects depicted in a video frame corresponding to the optical flow image. In a second aspect, a novel object localization system is trained to process a video frame to generate object localization data defining locations of novel objects depicted in the video frame.
    Type: Application
    Filed: January 31, 2019
    Publication date: May 14, 2020
    Inventors: Susanna Maria Ricco, Bryan Andrew Seybold
  • Patent number: 10566009
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for audio classifiers. In one aspect, a method includes obtaining a plurality of video frames from a plurality of videos, wherein each of the plurality of video frames is associated with one or more image labels of a plurality of image labels determined based on image recognition; obtaining a plurality of audio segments corresponding to the plurality of video frames, wherein each audio segment has a specified duration relative to the corresponding video frame; and generating an audio classifier trained using the plurality of audio segments and the associated image labels as input, wherein the audio classifier is trained such that the one or more groups of audio segments are determined to be associated with respective one or more audio labels.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: February 18, 2020
    Assignee: Google LLC
    Inventors: Sourish Chaudhuri, Achal D. Dave, Bryan Andrew Seybold
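    The abstract above describes transferring labels produced by image recognition on video frames onto the co-occurring audio segments, then training an audio classifier on those segments. A hypothetical PyTorch sketch of that training step follows; the feature dimension, label count, and multi-label loss are assumptions.

```python
import torch
from torch import nn


class AudioClassifier(nn.Module):
    """Sketch: classifies a fixed-length audio feature vector into labels that were
    originally produced by image recognition on the corresponding video frame."""

    def __init__(self, feature_dim: int = 128, num_labels: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        return self.net(audio_features)


# One illustrative training step: random audio features paired with multi-hot labels
# standing in for labels transferred from the labeled video frames.
model = AudioClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
audio_segments = torch.randn(8, 128)                  # 8 segments, 128-dim features each
image_labels = torch.randint(0, 2, (8, 10)).float()   # labels inherited from the frames
loss = nn.functional.binary_cross_entropy_with_logits(model(audio_segments), image_labels)
loss.backward()
optimizer.step()
```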
  • Patent number: 10381022
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for audio classifiers. In one aspect, a method includes obtaining a plurality of video frames from a plurality of videos, wherein each of the plurality of video frames is associated with one or more image labels of a plurality of image labels determined based on image recognition; obtaining a plurality of audio segments corresponding to the plurality of video frames, wherein each audio segment has a specified duration relative to the corresponding video frame; and generating an audio classifier trained using the plurality of audio segments and the associated image labels as input, wherein the audio classifier is trained such that the one or more groups of audio segments are determined to be associated with respective one or more audio labels.
    Type: Grant
    Filed: February 11, 2016
    Date of Patent: August 13, 2019
    Assignee: Google LLC
    Inventors: Sourish Chaudhuri, Achal D. Dave, Bryan Andrew Seybold
  • Patent number: 10311339
    Abstract: A temporal difference model can be trained to receive at least a first state representation and a second state representation that respectively describe a state of an object at two different times and, in response, output a temporal difference representation that encodes changes in the object between the two different times. To train the model, the temporal difference model can be combined with a prediction model that, given the temporal difference representation and the first state representation, seeks to predict or otherwise reconstruct the second state representation. The temporal difference model can be trained on a loss value that represents a difference between the second state representation and the prediction of the second state representation. In such fashion, unlabeled data can be used to train the temporal difference model to provide a temporal difference representation. The present disclosure further provides example uses for such temporal difference models once trained.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: June 4, 2019
    Assignee: Google LLC
    Inventor: Bryan Andrew Seybold
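    The abstract above gives a fairly complete recipe: a temporal difference model encodes the change between two state representations, a prediction model reconstructs the second state from the first state plus that encoding, and the reconstruction error trains both on unlabeled data. The sketch below mirrors that recipe in PyTorch; the state and difference dimensions and the MLP layers are assumptions.

```python
import torch
from torch import nn

state_dim, diff_dim = 32, 16  # illustrative sizes, not from the patent

# Temporal difference model: encodes the change between two state representations.
temporal_difference_model = nn.Sequential(
    nn.Linear(2 * state_dim, 64), nn.ReLU(), nn.Linear(64, diff_dim),
)
# Prediction model: reconstructs the second state from the first state and the encoding.
prediction_model = nn.Sequential(
    nn.Linear(state_dim + diff_dim, 64), nn.ReLU(), nn.Linear(64, state_dim),
)

optimizer = torch.optim.Adam(
    list(temporal_difference_model.parameters()) + list(prediction_model.parameters()),
    lr=1e-3,
)

# One unsupervised training step on an unlabeled pair of states.
first_state = torch.randn(8, state_dim)
second_state = torch.randn(8, state_dim)
difference = temporal_difference_model(torch.cat([first_state, second_state], dim=1))
predicted_second = prediction_model(torch.cat([first_state, difference], dim=1))
loss = nn.functional.mse_loss(predicted_second, second_state)  # reconstruction loss
loss.backward()
optimizer.step()
```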
  • Publication number: 20180232604
    Abstract: A temporal difference model can be trained to receive at least a first state representation and a second state representation that respectively describe a state of an object at two different times and, in response, output a temporal difference representation that encodes changes in the object between the two different times. To train the model, the temporal difference model can be combined with a prediction model that, given the temporal difference representation and the first state representation, seeks to predict or otherwise reconstruct the second state representation. The temporal difference model can be trained on a loss value that represents a difference between the second state representation and the prediction of the second state representation. In such fashion, unlabeled data can be used to train the temporal difference model to provide a temporal difference representation. The present disclosure further provides example uses for such temporal difference models once trained.
    Type: Application
    Filed: February 14, 2017
    Publication date: August 16, 2018
    Inventor: Bryan Andrew Seybold