Patents by Inventor Dillon Cower

Dillon Cower has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230230330
    Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
    Type: Application
    Filed: January 24, 2023
    Publication date: July 20, 2023
    Inventors: Dillon Cower, Nirmal Patel
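The pipeline this abstract describes (virtual camera model → set of matrices → 3D content projected as an overlay) reduces, at its core, to a pinhole projection of 3D points into the camera image. A minimal sketch, where the intrinsic values and identity camera pose are hypothetical and not taken from the patent:

```python
import numpy as np

def project_points(points_3d, intrinsics, extrinsics):
    """Project 3D world points into pixel coordinates with a pinhole camera model."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous coords
    cam = (extrinsics @ pts_h.T).T        # world frame -> camera frame (N, 3)
    pix = (intrinsics @ cam.T).T          # camera frame -> image plane
    return pix[:, :2] / pix[:, 2:3]       # perspective divide -> (u, v) pixels

# Hypothetical virtual camera: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])  # identity pose (3x4 extrinsics)

# A 3D content point 10 m straight ahead lands on the principal point,
# where it could be drawn as a graphical overlay on the camera image.
uv = project_points(np.array([[0.0, 0.0, 10.0]]), K, Rt)
```

Drawing the projected 2D points on top of the captured image yields the "real-world image with graphical overlays" the abstract describes.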
  • Publication number: 20230088308
    Abstract: A method includes receiving a first facial framework and a first captured image of a face. The first facial framework corresponds to the face at a first frame and includes a first facial mesh of facial information. The method also includes projecting the first captured image onto the first facial framework and determining a facial texture corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework at a second frame that includes a second facial mesh of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar. The three-dimensional avatar corresponds to a virtual representation of the face.
    Type: Application
    Filed: November 23, 2022
    Publication date: March 23, 2023
    Applicant: Google LLC
    Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
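The project-then-update loop in this abstract can be illustrated with a toy sketch: sample per-vertex colors from the captured image at the mesh's projected positions, then blend the running facial texture toward samples from each new frame. The grayscale image, two-vertex mesh, and blend factor below are illustrative assumptions, not the patent's method:

```python
import numpy as np

def project_texture(image, mesh_uv):
    """Sample per-vertex colors from a captured image at projected mesh positions."""
    h, w = image.shape[:2]
    # Clamp projected (x, y) vertex positions to image bounds and sample.
    xs = np.clip(mesh_uv[:, 0].astype(int), 0, w - 1)
    ys = np.clip(mesh_uv[:, 1].astype(int), 0, h - 1)
    return image[ys, xs]

def update_texture(texture, new_samples, blend=0.5):
    """Blend the running facial texture toward samples from the latest frame."""
    return (1 - blend) * texture + blend * new_samples

# Toy 4x4 grayscale "captured image" and two projected mesh vertex positions.
image = np.arange(16, dtype=float).reshape(4, 4)
mesh = np.array([[0, 0], [3, 3]])
tex0 = project_texture(image, mesh)                             # texture from frame 1
tex1 = update_texture(tex0, project_texture(image * 0, mesh))   # frame 2 is black
```

The updated per-vertex texture would then be rendered on the facial mesh as the three-dimensional avatar.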
  • Patent number: 11593996
    Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: February 28, 2023
    Assignee: Waymo LLC
    Inventors: Dillon Cower, Nirmal Patel
  • Publication number: 20230051565
    Abstract: A method for determining hard example sensor data inputs for training a task neural network is described. The task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task. The method includes: receiving one or more sensor data inputs depicting a same scene of an environment, wherein the one or more sensor data inputs are taken during a predetermined time period; generating a plurality of predictions about a characteristic of an object of the scene; determining a level of inconsistency between the plurality of predictions; determining that the level of inconsistency exceeds a threshold level; and in response to the determining that the level of inconsistency exceeds a threshold level, determining that the one or more sensor data inputs comprise a hard example sensor data input.
    Type: Application
    Filed: August 10, 2021
    Publication date: February 16, 2023
    Inventors: Dillon Cower, Timothy Yang, Kunlong Gu, Marshall Friend Tappen
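The level-of-inconsistency test this abstract describes can be sketched as a spread measure over repeated predictions about one object characteristic (e.g. its heading or extent) across the inputs from the time period. Measuring inconsistency as variance against a scalar threshold is an assumption for illustration:

```python
def is_hard_example(predictions, threshold):
    """Flag an input as a hard example when predictions about one object disagree."""
    mean = sum(predictions) / len(predictions)
    # Inconsistency measured as the variance of the predictions.
    inconsistency = sum((p - mean) ** 2 for p in predictions) / len(predictions)
    return inconsistency > threshold

# Consistent predictions are not hard examples; wildly disagreeing ones are.
easy = is_hard_example([1.0, 1.0, 1.1], 0.5)
hard = is_hard_example([0.0, 10.0], 0.5)
```

Inputs flagged this way would be routed into the training set for the task neural network.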
  • Patent number: 11538211
    Abstract: A method (300) includes receiving a first facial framework (144a) and a first captured image (130a) of a face (20). The first facial framework corresponds to the face at a first frame and includes a first facial mesh (142a) of facial information (140). The method also includes projecting the first captured image onto the first facial framework and determining a facial texture (212) corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework (144b) at a second frame that includes a second facial mesh (142b) of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar (160). The three-dimensional avatar corresponds to a virtual representation of the face.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: December 27, 2022
    Assignee: Google LLC
    Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
  • Publication number: 20220279191
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to encode a video. A method includes capturing a video frame that includes a face of a person. The method further includes detecting a face in the video frame. The method further includes segmenting the video frame into a plurality of rectangles, the plurality of rectangles including a face rectangle with pixels corresponding to the face in the video frame. The method further includes packing the video frame based on the plurality of rectangles, wherein a greater number of pixels are allocated to the face rectangle as compared to other rectangles in the plurality of rectangles. The method further includes encoding the video frame with metadata describing the packing.
    Type: Application
    Filed: October 31, 2019
    Publication date: September 1, 2022
    Applicant: Google LLC
    Inventor: Dillon Cower
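The packing step this abstract describes can be sketched as planning a layout in which the face rectangle is stored at higher pixel density than the rest of the frame, with metadata recording how to reassemble it. The scale factor, rectangle format, and metadata fields below are hypothetical:

```python
def pack_frame(frame_w, frame_h, face_rect, face_scale=2.0):
    """Plan a packed layout that allocates more pixels to the face rectangle.

    face_rect is (x, y, w, h) in the source frame. The face region is stored at
    face_scale times its source resolution, the rest at native resolution.
    Returns the packing metadata a decoder would use to reassemble the frame.
    """
    x, y, w, h = face_rect
    packed_face = (int(w * face_scale), int(h * face_scale))
    return {
        "source": (frame_w, frame_h),
        "face_rect": face_rect,
        "packed_face_size": packed_face,
        # Stored pixels per source pixel in each region.
        "face_density": face_scale ** 2,
        "background_density": 1.0,
    }

meta = pack_frame(1280, 720, (500, 200, 200, 200))
```

Encoding the metadata alongside the packed frame lets the receiver undo the packing before display.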
  • Publication number: 20220254108
    Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
    Type: Application
    Filed: February 9, 2021
    Publication date: August 11, 2022
    Inventors: Dillon Cower, Nirmal Patel
  • Publication number: 20220237859
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
    Type: Application
    Filed: April 18, 2022
    Publication date: July 28, 2022
    Applicant: Google LLC
    Inventors: Guangyu Zhou, Dillon Cower
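The relighting pipeline in this abstract (depth segmentation → fixed background depth → smoothing → surface normals → virtual light) can be sketched end to end. A 3x3 box blur stands in for the Gaussian filter, and the Lambertian light term and its weight are illustrative assumptions:

```python
import numpy as np

def relight(depth, color, fg_threshold, bg_depth, light_dir):
    """Relight a grayscale frame: segment by depth, flatten background, light via normals."""
    # Classify pixels by depth; background pixels get a fixed depth value.
    d = np.where(depth < fg_threshold, depth, bg_depth).astype(float)
    # 3x3 box blur stands in for the Gaussian smoothing of the depth map.
    pad = np.pad(d, 1, mode="edge")
    d = sum(pad[i:i + d.shape[0], j:j + d.shape[1]]
            for i in range(3) for j in range(3)) / 9.0
    # Surface normals from the depth gradients.
    dzdy, dzdx = np.gradient(d)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(d)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Add a Lambertian virtual light on top of the original color.
    intensity = np.clip(normals @ np.asarray(light_dir, dtype=float), 0.0, 1.0)
    return np.clip(color + 0.5 * intensity, 0.0, 1.0)

depth = np.ones((5, 5))      # flat foreground depth
color = np.zeros((5, 5))     # black grayscale frame
out = relight(depth, color, fg_threshold=2.0, bg_depth=5.0, light_dir=(0.0, 0.0, 1.0))
```

A flat surface facing a head-on light picks up uniform added illumination, as the example shows.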
  • Publication number: 20220222968
    Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Applicant: Google LLC
    Inventor: Dillon Cower
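The interpolation step this abstract derives can be sketched as a linear blend between two face resampling keyframes by the derived amount. Representing a keyframe as a list of 2D landmark positions is an illustrative assumption:

```python
def interpolate_face(keyframe_a, keyframe_b, amount):
    """Linearly interpolate landmark positions between two face resampling keyframes."""
    return [
        (ax + (bx - ax) * amount, ay + (by - ay) * amount)
        for (ax, ay), (bx, by) in zip(keyframe_a, keyframe_b)
    ]

# Halfway between two single-landmark keyframes.
mid = interpolate_face([(0.0, 0.0)], [(10.0, 20.0)], 0.5)
```

The interpolated faces would then be rendered and composited into the final displayed frame.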
  • Publication number: 20220180549
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting three-dimensional object locations from images. One of the methods includes obtaining a sequence of images that comprises, at each of a plurality of time steps, a respective image that was captured by a camera at the time step; generating, for each image in the sequence, respective pseudo-lidar features of a respective pseudo-lidar representation of a region in the image that has been determined to depict a first object; generating, for a particular image at a particular time step in the sequence, image patch features of the region in the particular image that has been determined to depict the first object; and generating, from the respective pseudo-lidar features and the image patch features, a prediction that characterizes a location of the first object in a three-dimensional coordinate system at the particular time step in the sequence.
    Type: Application
    Filed: December 8, 2021
    Publication date: June 9, 2022
    Inventors: Longlong Jing, Ruichi Yu, Jiyang Gao, Henrik Kretzschmar, Kang Li, Ruizhongtai Qi, Hang Zhao, Alper Ayvaci, Xu Chen, Dillon Cower, Congcong Li
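The pseudo-lidar representation this abstract uses comes from back-projecting per-pixel depth for the object's image region into 3D points, whose features are then fused with image patch features to predict the 3D location. The pinhole back-projection below is standard; the mean-pooled fusion and linear prediction head are hypothetical stand-ins for the learned networks:

```python
import numpy as np

def depth_to_pseudo_lidar(depth_patch, fx, fy, cx, cy):
    """Back-project a depth patch into a pseudo-lidar point cloud (pinhole model)."""
    h, w = depth_patch.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_patch
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.dstack([x, y, z]).reshape(-1, 3)

def predict_location(pseudo_lidar_points, image_patch_feats, weights):
    """Fuse pooled pseudo-lidar and image patch features into a 3D location estimate."""
    fused = np.concatenate([pseudo_lidar_points.mean(axis=0), image_patch_feats])
    return weights @ fused  # hypothetical linear prediction head

# A single-pixel depth patch 2 m away, with toy intrinsics and weights.
pts = depth_to_pseudo_lidar(np.array([[2.0]]), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
loc = predict_location(pts, np.array([1.0]), np.eye(3, 4))
```

In the patent, both feature extractors and the prediction head would be learned jointly rather than fixed as here.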
  • Patent number: 11335057
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: May 17, 2022
    Assignee: Google LLC
    Inventors: Guangyu Zhou, Dillon Cower
  • Patent number: 11321555
    Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: May 3, 2022
    Assignee: Google LLC
    Inventor: Dillon Cower
  • Publication number: 20210182540
    Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
    Type: Application
    Filed: November 11, 2019
    Publication date: June 17, 2021
    Applicant: Google LLC
    Inventor: Dillon Cower
  • Publication number: 20210118223
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
    Type: Application
    Filed: December 24, 2020
    Publication date: April 22, 2021
    Applicant: Google LLC
    Inventors: Guangyu Zhou, Dillon Cower
  • Publication number: 20210056747
    Abstract: A method (300) includes receiving a first facial framework (144a) and a first captured image (130a) of a face (20). The first facial framework corresponds to the face at a first frame and includes a first facial mesh (142a) of facial information (140). The method also includes projecting the first captured image onto the first facial framework and determining a facial texture (212) corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework (144b) at a second frame that includes a second facial mesh (142b) of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar (160). The three-dimensional avatar corresponds to a virtual representation of the face.
    Type: Application
    Filed: May 1, 2019
    Publication date: February 25, 2021
    Applicant: Google LLC
    Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
  • Patent number: 10916051
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: February 9, 2021
    Assignee: Google LLC
    Inventors: Guangyu Zhou, Dillon Cower
  • Publication number: 20210012560
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
    Type: Application
    Filed: July 8, 2019
    Publication date: January 14, 2021
    Applicant: Google LLC
    Inventors: Guangyu Zhou, Dillon Cower