Patents by Inventor Dillon Cower
Dillon Cower has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230230330
Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
Type: Application
Filed: January 24, 2023
Publication date: July 20, 2023
Inventors: Dillon Cower, Nirmal Patel
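The matrix pipeline this abstract describes (virtual camera model, matrices applied to 3D content, perspective projection into the camera image) can be sketched roughly as follows. This is a minimal illustration only; the function names, the pinhole-intrinsics form of the camera model, and the 4x4 extrinsic matrix are assumptions, not details from the patent.

```python
import numpy as np

def make_intrinsics(fx, fy, cx, cy):
    # Hypothetical pinhole intrinsic matrix standing in for the
    # "virtual camera model" of the vehicle's camera.
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project_points(points_3d, intrinsics, extrinsics):
    # Apply the "set of matrices" to 3D content: transform world-frame
    # points into the camera frame, then project to pixel coordinates.
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    cam = (extrinsics @ homo.T).T[:, :3]   # world -> camera frame
    pix = (intrinsics @ cam.T).T           # camera -> image plane
    return pix[:, :2] / pix[:, 2:3]        # perspective divide

# The resulting 2D pixel positions could then be drawn over the camera
# image to produce the graphical overlays the abstract describes.
```

A point on the camera's optical axis projects to the principal point (cx, cy), which is a quick sanity check for such a projection.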
-
Publication number: 20230088308
Abstract: A method includes receiving a first facial framework and a first captured image of a face. The first facial framework corresponds to the face at a first frame and includes a first facial mesh of facial information. The method also includes projecting the first captured image onto the first facial framework and determining a facial texture corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework at a second frame that includes a second facial mesh of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar. The three-dimensional avatar corresponds to a virtual representation of the face.
Type: Application
Filed: November 23, 2022
Publication date: March 23, 2023
Applicant: Google LLC
Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
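The two core steps in the abstract above, sampling a facial texture from a captured image at projected mesh positions and then updating that texture as new frames arrive, can be sketched as below. The nearest-neighbour sampling and the exponential blend are illustrative assumptions; the patent does not specify either.

```python
import numpy as np

def sample_texture(image, uv):
    # Sample per-vertex colours at projected (u, v) positions in [0, 1],
    # a stand-in for projecting the captured image onto the facial mesh.
    h, w = image.shape[:2]
    xs = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    ys = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return image[ys, xs]

def update_texture(prev_texture, new_texture, alpha=0.5):
    # Blend the texture derived from the new frame into the running
    # texture, so the avatar's appearance tracks the latest framework.
    return (1.0 - alpha) * prev_texture + alpha * new_texture
```

In a real pipeline the (u, v) positions would come from projecting mesh vertices through the camera, and the blended texture would be rendered on the 3D avatar each frame.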
-
Patent number: 11593996
Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
Type: Grant
Filed: February 9, 2021
Date of Patent: February 28, 2023
Assignee: Waymo LLC
Inventors: Dillon Cower, Nirmal Patel
-
Publication number: 20230051565
Abstract: A method for determining hard example sensor data inputs for training a task neural network is described. The task neural network is configured to receive a sensor data input and to generate a respective output for the sensor data input to perform a machine learning task. The method includes: receiving one or more sensor data inputs depicting a same scene of an environment, wherein the one or more sensor data inputs are taken during a predetermined time period; generating a plurality of predictions about a characteristic of an object of the scene; determining a level of inconsistency between the plurality of predictions; determining that the level of inconsistency exceeds a threshold level; and in response to the determining that the level of inconsistency exceeds a threshold level, determining that the one or more sensor data inputs comprise a hard example sensor data input.
Type: Application
Filed: August 10, 2021
Publication date: February 16, 2023
Inventors: Dillon Cower, Timothy Yang, Kunlong Gu, Marshall Friend Tappen
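The inconsistency test described above has a very compact shape: collect the model's predictions for the same object across inputs of one scene, measure their spread, and flag the inputs as a hard example when the spread exceeds a threshold. A minimal sketch, with the choice of standard deviation as the inconsistency measure being an assumption (the patent only says "level of inconsistency"):

```python
import numpy as np

def is_hard_example(predictions, threshold):
    # Predictions are repeated estimates of one object characteristic
    # (e.g. its heading) from sensor inputs of the same scene taken
    # during a predetermined time period. Large disagreement between
    # them suggests the inputs are hard for the task network.
    inconsistency = float(np.std(predictions))
    return inconsistency > threshold
```

Inputs flagged this way would then be prioritized when assembling training data for the task neural network.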
-
Patent number: 11538211
Abstract: A method (300) includes receiving a first facial framework (144a) and a first captured image (130a) of a face (20). The first facial framework corresponds to the face at a first frame and includes a first facial mesh (142a) of facial information (140). The method also includes projecting the first captured image onto the first facial framework and determining a facial texture (212) corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework (144b) at a second frame that includes a second facial mesh (142b) of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar (160). The three-dimensional avatar corresponds to a virtual representation of the face.
Type: Grant
Filed: May 1, 2019
Date of Patent: December 27, 2022
Assignee: Google LLC
Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
-
Publication number: 20220279191
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to encode a video. A method includes capturing a video frame that includes a face of a person. The method further includes detecting a face in the video frame. The method further includes segmenting the video frame into a plurality of rectangles, the plurality of rectangles including a face rectangle with pixels corresponding to the face in the video frame. The method further includes packing the video frame based on the plurality of rectangles, wherein a greater number of pixels are allocated to the face rectangle as compared to other rectangles in the plurality of rectangles. The method further includes encoding the video frame with metadata describing the packing.
Type: Application
Filed: October 31, 2019
Publication date: September 1, 2022
Applicant: Google LLC
Inventor: Dillon COWER
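The segmentation step in the abstract above, splitting the frame into a face rectangle plus surrounding rectangles, can be sketched as a simple guillotine split around the detected face box. The particular split (top, bottom, left, right bands) is an assumption for illustration; the patent only requires a plurality of rectangles that includes a face rectangle.

```python
def segment_rectangles(frame_w, frame_h, face_box):
    # face_box = (x, y, w, h) from a face detector. The face rectangle
    # would keep full resolution while the surrounding rectangles are
    # downscaled before packing, giving the face a larger share of the
    # encoded pixel budget.
    x, y, w, h = face_box
    rects = {
        "top":    (0, 0, frame_w, y),
        "bottom": (0, y + h, frame_w, frame_h - y - h),
        "left":   (0, y, x, h),
        "right":  (x + w, y, frame_w - x - w, h),
        "face":   face_box,
    }
    # Drop empty rectangles when the face touches a frame edge.
    return {k: r for k, r in rects.items() if r[2] > 0 and r[3] > 0}
```

The packing metadata (which rectangle went where, at what scale) would be encoded alongside the frame so a decoder can reassemble it.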
-
Publication number: 20220254108
Abstract: Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
Type: Application
Filed: February 9, 2021
Publication date: August 11, 2022
Inventors: Dillon Cower, Nirmal Patel
-
Publication number: 20220237859
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Application
Filed: April 18, 2022
Publication date: July 28, 2022
Applicant: Google LLC
Inventors: Guangyu Zhou, Dillon Cower
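The relighting pipeline in the abstract above follows four steps per frame: fix background depth, smooth the depth map, derive surface normals from depth gradients, and shade with a virtual light. A rough sketch is below; the box blur (standing in for the Gaussian filter), the Lambertian shading model, and the 0.7/0.3 blend weights are all illustrative assumptions.

```python
import numpy as np

def relight(depth, color, fg_mask, light_dir, bg_depth=10.0):
    # 1. Segment: background pixels get a fixed depth value.
    d = np.where(fg_mask, depth, bg_depth).astype(float)
    # 2. Smooth depth with a 3x3 box blur (a stand-in for the
    #    Gaussian filter named in the abstract).
    pad = np.pad(d, 1, mode="edge")
    d = sum(pad[i:i + d.shape[0], j:j + d.shape[1]]
            for i in range(3) for j in range(3)) / 9.0
    # 3. Surface normals from depth gradients.
    dy, dx = np.gradient(d)
    n = np.dstack([-dx, -dy, np.ones_like(d)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # 4. Add a virtual light: Lambertian shading blended into the color.
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    shade = np.clip(n @ l, 0.0, 1.0)
    return np.clip(color * (0.7 + 0.3 * shade[..., None]), 0.0, 1.0)
```

With a flat depth map and a head-on light, the shading term is uniform and the frame's colors pass through unchanged, which is a useful sanity check.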
-
Publication number: 20220222968
Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
Type: Application
Filed: March 29, 2022
Publication date: July 14, 2022
Applicant: Google LLC
Inventor: Dillon COWER
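The interpolation step at the heart of the abstract above reduces to blending corresponding values from two face resampling keyframes by an interpolation amount. A minimal sketch, assuming the keyframes are represented as flat lists of tracked landmark coordinates (the patent does not specify the representation):

```python
def interpolate_face(keyframe_a, keyframe_b, t):
    # Linearly blend corresponding landmark coordinates from two face
    # resampling keyframes; t is the derived interpolation amount,
    # with t=0 giving keyframe_a and t=1 giving keyframe_b.
    return [(1.0 - t) * a + t * b for a, b in zip(keyframe_a, keyframe_b)]
```

The interpolated face would then be rendered and composited into the final displayed frame.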
-
Publication number: 20220180549
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for predicting three-dimensional object locations from images. One of the methods includes obtaining a sequence of images that comprises, at each of a plurality of time steps, a respective image that was captured by a camera at the time step; generating, for each image in the sequence, respective pseudo-lidar features of a respective pseudo-lidar representation of a region in the image that has been determined to depict a first object; generating, for a particular image at a particular time step in the sequence, image patch features of the region in the particular image that has been determined to depict the first object; and generating, from the respective pseudo-lidar features and the image patch features, a prediction that characterizes a location of the first object in a three-dimensional coordinate system at the particular time step in the sequence.
Type: Application
Filed: December 8, 2021
Publication date: June 9, 2022
Inventors: Longlong Jing, Ruichi Yu, Jiyang Gao, Henrik Kretzschmar, Kang Li, Ruizhongtai Qi, Hang Zhao, Alper Ayvaci, Xu Chen, Dillon Cower, Congcong Li
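The final step of the abstract above fuses two feature vectors, pseudo-lidar features and image patch features, into one 3D location prediction. A toy sketch of that fusion is below; concatenation followed by a linear head is an assumed stand-in for whatever learned prediction head the patent covers.

```python
import numpy as np

def predict_location(pseudo_lidar_feat, patch_feat, weights, bias):
    # Fuse the pseudo-lidar features and image patch features by
    # concatenation, then apply a linear head that regresses an
    # (x, y, z) location in the 3D coordinate system.
    fused = np.concatenate([pseudo_lidar_feat, patch_feat])
    return weights @ fused + bias
```

In practice both feature vectors would come from learned encoders, and the head would be trained end to end rather than hand-set.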
-
Patent number: 11335057
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Grant
Filed: December 24, 2020
Date of Patent: May 17, 2022
Assignee: Google LLC
Inventors: Guangyu Zhou, Dillon Cower
-
Patent number: 11321555
Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
Type: Grant
Filed: November 11, 2019
Date of Patent: May 3, 2022
Assignee: Google LLC
Inventor: Dillon Cower
-
Publication number: 20210182540
Abstract: A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
Type: Application
Filed: November 11, 2019
Publication date: June 17, 2021
Applicant: Google LLC
Inventor: Dillon COWER
-
Publication number: 20210118223
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Application
Filed: December 24, 2020
Publication date: April 22, 2021
Applicant: Google LLC
Inventors: Guangyu Zhou, Dillon Cower
-
Publication number: 20210056747
Abstract: A method (300) includes receiving a first facial framework (144a) and a first captured image (130a) of a face (20). The first facial framework corresponds to the face at a first frame and includes a first facial mesh (142a) of facial information (140). The method also includes projecting the first captured image onto the first facial framework and determining a facial texture (212) corresponding to the face based on the projected first captured image. The method also includes receiving a second facial framework (144b) at a second frame that includes a second facial mesh (142b) of facial information and updating the facial texture based on the received second facial framework. The method also includes displaying the updated facial texture as a three-dimensional avatar (160). The three-dimensional avatar corresponds to a virtual representation of the face.
Type: Application
Filed: May 1, 2019
Publication date: February 25, 2021
Applicant: Google LLC
Inventors: Tarek Hefny, Nicholas Reiter, Brandon Young, Arun Kandoor, Dillon Cower
-
Patent number: 10916051
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Grant
Filed: July 8, 2019
Date of Patent: February 9, 2021
Assignee: Google LLC
Inventors: Guangyu Zhou, Dillon Cower
-
Publication number: 20210012560
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to relight a video. In some implementations, a computer-implemented method includes receiving a plurality of frames of a video. Each video frame includes depth data and color data for a plurality of pixels. The method further includes segmenting each frame based on the depth data to classify each pixel as a foreground pixel or a background pixel. The method further includes setting depth value of each background pixel to a fixed depth value and applying a Gaussian filter to smooth depth value for the plurality of pixels. The method further includes calculating surface normals based on the depth values of the plurality of pixels. The method further includes rendering a relighted frame by adding a virtual light based on the surface normals and the color data.
Type: Application
Filed: July 8, 2019
Publication date: January 14, 2021
Applicant: Google LLC
Inventors: Guangyu Zhou, Dillon Cower