Patents by Inventor Christopher John Sweeney

Christopher John Sweeney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11669980
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for motion detection based on optical flow. One of the methods includes obtaining a first image of a scene in an environment taken by an agent at a first time point and a second image of the scene at a second, later time point. A point cloud characterizing the scene in the environment is obtained. A predicted optical flow between the first image and the second image is determined. For each point in the point cloud, a respective initial flow prediction that represents the motion of the point between the two time points is determined, a respective ego motion flow estimate that represents the motion of the point induced by the ego motion of the agent is determined, and a respective motion prediction that indicates whether the point was static or in motion between the two time points is determined. (A minimal illustrative sketch of this approach appears after this listing.)
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: June 6, 2023
    Assignee: Waymo LLC
    Inventors: Daniel Rudolf Maurer, Alper Ayvaci, Nichola Abdo, Christopher John Sweeney, Robert William Anderson
  • Publication number: 20230035454
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an optical flow label from a lidar point cloud. One of the methods includes obtaining data specifying a training example that includes a first image of a scene in an environment captured at a first time point and a second image of the scene in the environment captured at a second time point. For each of a plurality of lidar points, a respective second corresponding pixel in the second image and a respective velocity estimate for the lidar point at the second time point are obtained, and a respective first corresponding pixel in the first image is determined using the velocity estimate for the lidar point. A proxy optical flow ground truth for the training example is generated based on the estimated optical flow of each such pixel between the first and second images. (A minimal illustrative sketch of this approach appears after this listing.)
    Type: Application
    Filed: July 23, 2021
    Publication date: February 2, 2023
    Inventors: Daniel Rudolf Maurer, Alper Ayvaci, Robert William Anderson, Rico Jonschkowski, Austin Charles Stone, Anelia Angelova, Nichola Abdo, Christopher John Sweeney
  • Publication number: 20230033989
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for motion detection based on optical flow. One of the methods includes obtaining a first image of a scene in an environment taken by an agent at a first time point and a second image of the scene at a second, later time point. A point cloud characterizing the scene in the environment is obtained. A predicted optical flow between the first image and the second image is determined. For each point in the point cloud, a respective initial flow prediction that represents the motion of the point between the two time points is determined, a respective ego motion flow estimate that represents the motion of the point induced by the ego motion of the agent is determined, and a respective motion prediction that indicates whether the point was static or in motion between the two time points is determined. (See the first illustrative sketch after this listing.)
    Type: Application
    Filed: July 23, 2021
    Publication date: February 2, 2023
    Inventors: Daniel Rudolf Maurer, Alper Ayvaci, Nichola Abdo, Christopher John Sweeney, Robert William Anderson
  • Publication number: 20220319054
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting scene flow. One of the methods includes obtaining a current point cloud, captured by a sensor, that represents an observed scene at a current time point; obtaining object label data that identifies a first three-dimensional region in the observed scene; determining, for each current three-dimensional point that is within the first three-dimensional region and using the object label data, a respective preceding position of the current three-dimensional point at a preceding time point, expressed in a reference frame of the sensor at the current time point; and generating, using the preceding positions, a scene flow label for the current point cloud that comprises a respective ground truth motion vector for each of a plurality of the current three-dimensional points. (A minimal illustrative sketch of this approach appears after this listing.)
    Type: Application
    Filed: March 1, 2022
    Publication date: October 6, 2022
    Inventors: Nichola Abdo, Jonathon Shlens, Zhifeng Chen, Christopher John Sweeney, Philipp Florian Jund
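
The sketches that follow are illustrative reading aids only and are not part of any filing. This first one relates to granted patent 11669980 and publication 20230033989: for each point in the point cloud, the predicted optical flow at the point's pixel in the first image is compared with the flow that the agent's ego motion alone would induce, and the residual decides whether the point is labelled static or moving. It is a minimal NumPy sketch under assumed conventions (a pinhole camera with 3x3 intrinsics K, known 4x4 camera poses at both time points, world-frame points, and a dense H x W x 2 predicted flow field); the function and parameter names are hypothetical and do not come from the patent.

```python
import numpy as np

def classify_point_motion(points_world, flow_pred, K, cam_T_world_t1, cam_T_world_t2,
                          threshold_px=1.5):
    """Label each 3D point as moving or static by comparing the predicted optical
    flow at the point's pixel with the flow that ego motion alone would induce."""

    def transform(T, pts):
        # Apply a 4x4 rigid transform to (N, 3) points.
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        return (T @ homo.T).T[:, :3]

    def project(pts_cam):
        # Pinhole projection with 3x3 intrinsics K; pts_cam is (N, 3) in the camera frame.
        uv = (K @ pts_cam.T).T
        return uv[:, :2] / uv[:, 2:3]

    # Pixel location of every point in the first image.
    uv_t1 = project(transform(cam_T_world_t1, points_world))

    # Initial flow prediction per point: look up the dense predicted optical flow
    # (H x W x 2) at each point's pixel in the first image (nearest-neighbour lookup).
    h, w = flow_pred.shape[:2]
    cols = np.clip(np.round(uv_t1[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(uv_t1[:, 1]).astype(int), 0, h - 1)
    flow_at_point = flow_pred[rows, cols]            # (N, 2)

    # Ego motion flow estimate: where each point would appear in the second image
    # if it were static, i.e. if only the agent (camera) had moved.
    uv_t2_static = project(transform(cam_T_world_t2, points_world))
    ego_flow = uv_t2_static - uv_t1                  # (N, 2)

    # Motion prediction: flow not explained by ego motion marks a moving point.
    residual = np.linalg.norm(flow_at_point - ego_flow, axis=1)
    return residual > threshold_px                   # True = in motion, False = static
```

In practice the lookup and the decision rule would be more careful (sub-pixel flow sampling, occlusion handling, possibly a learned classifier instead of a fixed threshold), but comparing the predicted flow against an ego-motion flow estimate is the comparison the abstract describes.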
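
Publication 20230035454 turns lidar points with velocity estimates into sparse proxy optical-flow labels. The sketch below uses the same assumed conventions as above (hypothetical names, pinhole intrinsics K, known camera poses): project each lidar point into the second image, roll it back through its velocity estimate to its position at the first time point, project that position into the first image, and take the pixel displacement as the proxy ground truth.

```python
import numpy as np

def proxy_flow_labels(lidar_points_t2, velocities_t2, dt, K,
                      cam_T_world_t1, cam_T_world_t2):
    """Build sparse proxy optical-flow labels from lidar points and their velocity
    estimates at the second time point.

    lidar_points_t2: (N, 3) world-frame point positions at the second time point.
    velocities_t2:   (N, 3) world-frame velocity estimates at the second time point.
    dt:              time elapsed between the first and the second image, in seconds.
    Returns (uv_t1, flow): the first corresponding pixels and, for each of them,
    the displacement to the second corresponding pixel.
    """

    def project(cam_T_world, pts_world):
        # World points -> camera frame -> pinhole projection with intrinsics K.
        homo = np.concatenate([pts_world, np.ones((len(pts_world), 1))], axis=1)
        pts_cam = (cam_T_world @ homo.T).T[:, :3]
        uv = (K @ pts_cam.T).T
        return uv[:, :2] / uv[:, 2:3]

    # Second corresponding pixel: project each lidar point into the second image.
    uv_t2 = project(cam_T_world_t2, lidar_points_t2)

    # First corresponding pixel: roll each point back to its estimated position at
    # the first time point using its velocity estimate, then project into the first image.
    lidar_points_t1 = lidar_points_t2 - velocities_t2 * dt
    uv_t1 = project(cam_T_world_t1, lidar_points_t1)

    # Proxy optical-flow ground truth for every labelled pixel.
    return uv_t1, uv_t2 - uv_t1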
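
Publication 20220319054 builds scene flow labels from labelled 3D regions rather than from images. The sketch below assumes each labelled box has a known pose at both the current and the preceding time point and that points inside the box move rigidly with it; the transform names and the zero-flow default for out-of-box points are assumptions for illustration, not taken from the filing.

```python
import numpy as np

def scene_flow_labels(points_current, in_box_mask, box_T_sensor_current,
                      sensor_T_box_preceding):
    """Ground-truth motion vectors for points inside one labelled 3D box.

    points_current:         (N, 3) point cloud in the current sensor frame.
    in_box_mask:            (N,) bool mask, True for points inside the labelled region.
    box_T_sensor_current:   4x4 transform from the current sensor frame to the box
                            frame at the object's current pose.
    sensor_T_box_preceding: 4x4 transform from the box frame at the object's
                            preceding pose back to the current sensor frame.
    Returns (N, 3) motion vectors; points outside the box are left at zero here.
    """

    def apply(T, pts):
        # Apply a 4x4 rigid transform to (N, 3) points.
        homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        return (T @ homo.T).T[:, :3]

    flow = np.zeros_like(points_current, dtype=float)
    pts = points_current[in_box_mask]

    # Treat in-box points as rigidly attached to the object: express them in the box
    # frame, then place that frame at its preceding pose to recover where each point
    # was at the preceding time point, in the reference frame of the current sensor.
    pts_preceding = apply(sensor_T_box_preceding, apply(box_T_sensor_current, pts))

    # Ground-truth motion vector = current position minus preceding position.
    flow[in_box_mask] = pts - pts_preceding
    return flow
```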