Patents by Inventor Forrester H. Cole

Forrester H. Cole has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240013497
    Abstract: A computing system and method can be used to render a 3D shape from one or more images. In particular, the present disclosure provides a general pipeline for learning articulated shape reconstruction from images (LASR). The pipeline can reconstruct rigid or non-rigid 3D shapes, automatically decomposing non-rigidly deforming shapes into the rigid motions of near-rigid bones. The pipeline incorporates an analysis-by-synthesis strategy and forward-renders silhouette, optical flow, and color images, which can be compared against the video observations to adjust the internal parameters of the model. By inverting the rendering pipeline and incorporating optical flow, the pipeline can recover a mesh of a 3D model from the one or more images input by a user.
    Type: Application
    Filed: December 21, 2020
    Publication date: January 11, 2024
    Inventors: Deqing Sun, Varun Jampani, Gengshan Yang, Daniel Vlasic, Huiwen Chang, Forrester H. Cole, Ce Liu, William Tafel Freeman
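The LASR-style pipeline in the entry above is an analysis-by-synthesis loop: forward-render silhouette, optical flow, and color from the current model, compare them against the video, and adjust the model to reduce the discrepancy. The sketch below is a minimal, hypothetical illustration of that loop's loss structure; the `forward_render` placeholder, array shapes, and loss weights are illustrative assumptions, not the claimed implementation (which uses a differentiable renderer and gradient-based updates).

```python
import numpy as np

def forward_render(model_params, frame_index):
    """Placeholder for a differentiable forward renderer. In a LASR-style
    pipeline this would produce a silhouette, optical flow, and color image
    from the current articulated mesh, bones, and camera; here it returns
    dummy arrays so the loss structure can be shown end to end."""
    rng = np.random.default_rng(frame_index)
    h, w = 64, 64
    return {"silhouette": rng.random((h, w)),
            "flow": rng.random((h, w, 2)),
            "color": rng.random((h, w, 3))}

def reprojection_loss(rendered, observed, weights=(1.0, 1.0, 1.0)):
    """Weighted L2 differences between the renders and the video observations."""
    w_sil, w_flow, w_rgb = weights
    return (w_sil * np.mean((rendered["silhouette"] - observed["silhouette"]) ** 2)
            + w_flow * np.mean((rendered["flow"] - observed["flow"]) ** 2)
            + w_rgb * np.mean((rendered["color"] - observed["color"]) ** 2))

# Analysis-by-synthesis: render each frame, compare against the observed video,
# and use the total loss to drive updates of the internal model parameters
# (the gradient/update step is omitted from this sketch).
video = [forward_render(None, t + 100) for t in range(3)]   # stand-in "observations"
total = sum(reprojection_loss(forward_render(None, t), obs)
            for t, obs in enumerate(video))
print(f"total reprojection loss: {total:.4f}")
```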
  • Publication number: 20230298269
    Abstract: Systems and methods of the present disclosure are directed to a method that can include obtaining a 3D mesh comprising polygons and texture/shading data. The method can include rasterizing the 3D mesh to obtain a 2D raster comprising pixels and coordinates respectively associated with a subset of the pixels. The method can include determining an initial color value for each pixel of the subset based on the coordinates of the pixel and the associated texture/shading data. The method can include constructing a splat at the coordinates of each respective pixel. The method can include determining an updated color value for a respective pixel based on a weighting of the splats constructed at and around its coordinates, thereby generating a 2D rendering of the 3D mesh.
    Type: Application
    Filed: August 31, 2020
    Publication date: September 21, 2023
    Inventors: Kyle Adam Genova, Daniel Vlasic, Forrester H. Cole
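The entry above describes a rasterize-then-splat scheme: the mesh is first rasterized to per-pixel coordinates and shaded colors, then each shaded pixel is turned into a small splat and the output image is a weight-normalized sum of the splats. The following is a minimal sketch of that compositing step under assumed conventions (Gaussian splats over a 3x3 neighborhood, made-up sample data); it is not the patented implementation.

```python
import numpy as np

def splat_composite(coords, colors, height, width, sigma=0.5):
    """Re-composite an image from per-pixel splats.

    coords: (N, 2) array of (x, y) sample positions produced by rasterization,
    colors: (N, 3) shaded colors for those samples. Each sample contributes a
    small Gaussian splat to its 3x3 pixel neighborhood, and each output pixel
    is the weight-normalized sum of the splats that touch it.
    """
    accum = np.zeros((height, width, 3))
    weight = np.zeros((height, width, 1))
    for (x, y), color in zip(coords, colors):
        cx, cy = int(round(x)), int(round(y))
        for py in range(max(cy - 1, 0), min(cy + 2, height)):
            for px in range(max(cx - 1, 0), min(cx + 2, width)):
                w = np.exp(-((px - x) ** 2 + (py - y) ** 2) / (2.0 * sigma ** 2))
                accum[py, px] += w * color
                weight[py, px] += w
    return accum / np.maximum(weight, 1e-8)

# Two shaded samples from a (hypothetical) rasterization step.
coords = np.array([[2.3, 2.7], [5.1, 4.9]])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
image = splat_composite(coords, colors, height=8, width=8)
```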
  • Publication number: 20230206955
    Abstract: A computer-implemented method for decomposing videos into multiple layers (212, 213) that can be re-combined with modified relative timings includes obtaining video data including a plurality of image frames (201) depicting one or more objects. For each of the plurality of frames, the computer-implemented method includes generating one or more object maps descriptive of a respective location of at least one object of the one or more objects within the image frame. For each of the plurality of frames, the computer-implemented method includes inputting the image frame and the one or more object maps into a machine-learned layer renderer model (220). For each of the plurality of frames, the computer-implemented method includes receiving, as output from the machine-learned layer renderer model, a background layer illustrative of a background of the video data and one or more object layers respectively associated with one of the one or more object maps.
    Type: Application
    Filed: May 22, 2020
    Publication date: June 29, 2023
    Inventors: Forrester H. Cole, Erika Lu, Tali Dekel, William T. Freeman, David Henry Salesin, Michael Rubinstein
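Per the abstract above, each frame and its object maps are fed to a machine-learned layer renderer that outputs a background layer plus per-object layers, which can then be recombined, possibly with shifted relative timings. The sketch below shows only the recombination step under an assumed RGBA layer convention; the layer renderer network and object-map generation are not shown, and the data is synthetic.

```python
import numpy as np

def composite(background, object_layers):
    """Back-to-front alpha compositing of predicted RGBA object layers over the
    predicted background layer. Re-timing an object amounts to pairing its layer
    from frame t with the background (or other layers) from a different frame."""
    out = background.copy()
    for layer in object_layers:                 # each layer: (H, W, 4) RGBA
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

h, w = 4, 6
background = np.zeros((h, w, 3))                # e.g., the renderer's background output
red_object = np.zeros((h, w, 4))
red_object[1:3, 2:4] = [1.0, 0.0, 0.0, 1.0]     # a small opaque red object layer
frame = composite(background, [red_object])
```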
  • Publication number: 20220270402
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Application
    Filed: May 16, 2022
    Publication date: August 25, 2022
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger
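The system described in this family of filings inserts a small translation model between a face-recognition network and a morphable-model-based face modeler: the recognition embedding goes in, and morphable-model parameters come out. Below is a minimal, hypothetical sketch of such a translation model as a tiny two-layer MLP; the embedding size, hidden width, output parameter count, and random weights are illustrative assumptions, not the patented architecture or its trained weights.

```python
import numpy as np

def translation_model(embedding, weights):
    """Map a face-recognition embedding to facial modeling parameters
    (e.g., morphable-model shape/expression coefficients) with a small MLP."""
    w1, b1, w2, b2 = weights
    hidden = np.maximum(embedding @ w1 + b1, 0.0)    # ReLU hidden layer
    return hidden @ w2 + b2                          # parameters for the face modeler

rng = np.random.default_rng(0)
embedding = rng.standard_normal(128)                 # stand-in recognition embedding
weights = (0.01 * rng.standard_normal((128, 256)), np.zeros(256),
           0.01 * rng.standard_normal((256, 80)), np.zeros(80))
face_params = translation_model(embedding, weights)  # would be passed to the face modeler
```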
  • Patent number: 11335120
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: May 17, 2022
    Assignee: Google LLC
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger
  • Publication number: 20200257891
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Application
    Filed: April 24, 2020
    Publication date: August 13, 2020
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger
  • Patent number: 10650227
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: May 12, 2020
    Assignee: Google LLC
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger
  • Patent number: 10510180
    Abstract: Methods, systems, and apparatus for obtaining first image features derived from an image of an object, providing the first image features to a three-dimensional estimator neural network, and obtaining, from the three-dimensional estimator neural network, data specifying an estimated three-dimensional shape and texture based on the first image features. The estimated three-dimensional shape and texture are provided to a three-dimensional rendering engine, and a plurality of three-dimensional views of the object are generated by the three-dimensional rendering engine based on the estimated three-dimensional shape and texture. The plurality of three-dimensional views are provided to the object recognition engine, and second image features derived from the plurality of three-dimensional views are obtained from the object recognition engine. A loss is computed based at least on the first and second image features, and the three-dimensional estimator neural network is trained based at least on the computed loss.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: December 17, 2019
    Assignee: Google LLC
    Inventors: Forrester H. Cole, Kyle Genova
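The training scheme in this family of filings closes the loop through a renderer and a recognition engine: features of the input image are compared against features of multiple rendered views of the estimated shape and texture, and that loss trains the three-dimensional estimator. The snippet below sketches only the feature-comparison loss using a mean-squared distance and synthetic feature vectors; the distance measure, feature size, and number of views are assumptions, and the estimator, renderer, and recognition engine themselves are not shown.

```python
import numpy as np

def feature_consistency_loss(input_features, rendered_view_features):
    """Mean squared distance between the recognition features of the input image
    and the features of each rendered 3D view; minimizing it pushes the estimated
    shape and texture toward renders the recognition engine 'sees' the same way."""
    return float(np.mean([np.mean((input_features - f) ** 2)
                          for f in rendered_view_features]))

rng = np.random.default_rng(1)
first_features = rng.standard_normal(256)                   # from the input image
second_features = [first_features + 0.05 * rng.standard_normal(256)
                   for _ in range(3)]                        # from three rendered views
loss = feature_consistency_loss(first_features, second_features)
```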
  • Publication number: 20190340808
    Abstract: Methods, systems, and apparatus for obtaining first image features derived from an image of an object, providing the first image features to a three-dimensional estimator neural network, and obtaining, from the three-dimensional estimator neural network, data specifying an estimated three-dimensional shape and texture based on the first image features. The estimated three-dimensional shape and texture are provided to a three-dimensional rendering engine, and a plurality of three-dimensional views of the object are generated by the three-dimensional rendering engine based on the estimated three-dimensional shape and texture. The plurality of three-dimensional views are provided to the object recognition engine, and second image features derived from the plurality of three-dimensional views are obtained from the object recognition engine. A loss is computed based at least on the first and second image features, and the three-dimensional estimator neural network is trained based at least on the computed loss.
    Type: Application
    Filed: July 17, 2019
    Publication date: November 7, 2019
    Inventors: Forrester H. Cole, Kyle Genova
  • Patent number: 10403031
    Abstract: Methods, systems, and apparatus for obtaining first image features derived from an image of an object, providing the first image features to a three-dimensional estimator neural network, and obtaining, from the three-dimensional estimator neural network, data specifying an estimated three-dimensional shape and texture based on the first image features. The estimated three-dimensional shape and texture are provided to a three-dimensional rendering engine, and a plurality of three-dimensional views of the object are generated by the three-dimensional rendering engine based on the estimated three-dimensional shape and texture. The plurality of three-dimensional views are provided to the object recognition engine, and second image features derived from the plurality of three-dimensional views are obtained from the object recognition engine. A loss is computed based at least on the first and second image features, and the three-dimensional estimator neural network is trained based at least on the computed loss.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: September 3, 2019
    Assignee: Google LLC
    Inventors: Forrester H. Cole, Kyle Genova
  • Publication number: 20190147642
    Abstract: Methods, systems, and apparatus for obtaining first image features derived from an image of an object, providing the first image features to a three-dimensional estimator neural network, and obtaining, from the three-dimensional estimator neural network, data specifying an estimated three-dimensional shape and texture based on the first image features. The estimated three-dimensional shape and texture are provided to a three-dimensional rendering engine, and a plurality of three-dimensional views of the object are generated by the three-dimensional rendering engine based on the estimated three-dimensional shape and texture. The plurality of three-dimensional views are provided to the object recognition engine, and second image features derived from the plurality of three-dimensional views are obtained from the object recognition engine. A loss is computed based at least on the first and second image features, and the three-dimensional estimator neural network is trained based at least on the computed loss.
    Type: Application
    Filed: November 15, 2017
    Publication date: May 16, 2019
    Inventors: Forrester H. Cole, Kyle Genova
  • Publication number: 20190095698
    Abstract: The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Inventors: Forrester H. Cole, Dilip Krishnan, William T. Freeman, David Benjamin Belanger