Patents by Inventor Forrester Cole

Forrester Cole has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240242366
    Abstract: A method includes determining, based on a first image, a first depth of a first pixel and, based on a second image, a second depth of a second pixel that corresponds to the first pixel. The method also includes determining a first 3D point based on the first depth and a second 3D point based on the second depth, and determining a scene flow between the first and second images. The method additionally includes determining an induced pixel position based on a post-flow 3D point representing the first 3D point displaced according to the scene flow, determining a flow loss value based on the induced pixel position and a position of the second pixel as well as a depth loss value based on the post-flow 3D point and the second 3D point, and adjusting the depth model or the scene flow model based on the flow and depth loss values.
    Type: Application
    Filed: July 2, 2021
    Publication date: July 18, 2024
    Inventors: Forrester Cole, Zhoutong Zhang, Tali Dekel, William T. Freeman
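
A minimal NumPy sketch of the consistency losses this abstract describes. The camera model (a shared intrinsics matrix K, camera-space points with extrinsics ignored) and the per-pixel function signature are simplifying assumptions, not details from the filing; the depth loss here compares z components, where the patent may compare full 3D points.

```python
import numpy as np

def backproject(pixel, depth, K_inv):
    """Lift a pixel (u, v) with depth z to a 3D point in camera space."""
    u, v = pixel
    return depth * (K_inv @ np.array([u, v, 1.0]))

def project(point, K):
    """Project a 3D camera-space point back to pixel coordinates."""
    p = K @ point
    return p[:2] / p[2]

def flow_and_depth_losses(pixel1, depth1, pixel2, depth2, scene_flow, K):
    K_inv = np.linalg.inv(K)
    point1 = backproject(pixel1, depth1, K_inv)   # first 3D point
    point2 = backproject(pixel2, depth2, K_inv)   # second 3D point
    post_flow = point1 + scene_flow               # first point displaced by the scene flow
    induced = project(post_flow, K)               # induced pixel position
    flow_loss = np.linalg.norm(induced - np.asarray(pixel2))
    depth_loss = abs(post_flow[2] - point2[2])    # post-flow point vs. second 3D point
    return flow_loss, depth_loss

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
print(flow_and_depth_losses((100.0, 120.0), 2.0, (104.0, 118.0), 2.1,
                            np.array([0.01, -0.02, 0.1]), K))
```

In training, both loss values would be accumulated over all pixel correspondences and backpropagated to adjust the depth and scene flow models.
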
  • Patent number: 11978225
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: April 17, 2023
    Date of Patent: May 7, 2024
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
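
A minimal sketch of the training signal this family of filings describes: the parallax-derived static depth supervises the model only where the object mask marks the scene static. The mask convention (1.0 = static, 0.0 = moving) and the squared-error form are assumptions for illustration.

```python
import numpy as np

def model_input(target_image, object_mask, static_depth):
    """Stack the three inputs named in the abstract into one tensor for the model."""
    return np.concatenate([target_image,
                           object_mask[..., None],
                           static_depth[..., None]], axis=-1)

def masked_depth_loss(pred_depth, static_depth, object_mask):
    """Penalize the predicted (dynamic) depth only at static pixels,
    where the motion-parallax depth is trustworthy."""
    diff = (pred_depth - static_depth) * object_mask
    return float(np.sum(diff ** 2) / max(np.sum(object_mask), 1.0))
```

The trained model then fills in plausible depth for the masked-out moving features, yielding the dynamic depth image.
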
  • Publication number: 20230260145
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: April 17, 2023
    Publication date: August 17, 2023
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11663733
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: May 30, 2023
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Publication number: 20220215568
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: March 23, 2022
    Publication date: July 7, 2022
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11315274
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: April 26, 2022
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Publication number: 20210090279
    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 10853987
    Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: December 1, 2020
    Assignee: Google LLC
    Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
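
A small sketch of the pipeline the abstract outlines: one classifier per template attribute, a label-to-asset lookup, and a final render. The dictionary-based API, attribute names, and asset file names are illustrative assumptions.

```python
from typing import Callable, Dict, List

def choose_assets(image,
                  template_attributes: List[str],
                  classifiers: Dict[str, Callable],
                  assets: Dict[str, Dict[str, str]]) -> List[str]:
    """Classify each attribute the template needs and map its label to a cartoon asset."""
    chosen = []
    for attr in template_attributes:
        label = classifiers[attr](image)     # e.g. "brown" for "hair_color"
        chosen.append(assets[attr][label])   # asset keyed by attribute and label
    return chosen                            # a renderer would composite these into the avatar

# Illustrative stand-ins for trained classifiers and an asset library.
classifiers = {"hair_color": lambda img: "brown", "glasses": lambda img: "none"}
assets = {"hair_color": {"brown": "hair_brown.svg"},
          "glasses": {"none": "no_glasses.svg"}}
print(choose_assets(None, ["hair_color", "glasses"], classifiers, assets))
```
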
  • Publication number: 20200175740
    Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
    Type: Application
    Filed: December 3, 2019
    Publication date: June 4, 2020
    Applicant: Google LLC
    Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
  • Patent number: 10529115
    Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: January 7, 2020
    Assignee: Google LLC
    Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
  • Publication number: 20180268595
    Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 20, 2018
    Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
  • Patent number: 9665955
    Abstract: Techniques relate to fitting a shape of an object when placed in a desired pose. For example, a plurality of training poses can be received, wherein each training pose is associated with a training shape. The training poses can be clustered in pose space, and a bid point can be determined for each cluster. A cluster-fitted shape can then be determined for a pose at the bid point using the training shapes in the cluster. A weight for each cluster-fitted shape can then be determined. The cluster-fitted shapes can then be combined using the determined weights to determine a shape of the object in the desired pose.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: May 30, 2017
    Assignee: Pixar
    Inventors: Mark Meyer, Forrester Cole
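
A compact sketch of the blending step at the end of the abstract. Inverse-distance weighting in pose space is an assumed choice; the patent says only that a weight is determined for each cluster-fitted shape.

```python
import numpy as np

def blended_shape(query_pose, bid_points, cluster_shapes, eps=1e-8):
    """Blend per-cluster fitted shapes, weighting each by how close its
    cluster's bid point lies to the desired pose.

    bid_points: (k, d) pose-space bid point per cluster
    cluster_shapes: (k, n, 3) shape fitted at each bid point
    """
    dists = np.linalg.norm(bid_points - query_pose, axis=1)
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()                      # normalized blend weights
    return np.tensordot(weights, cluster_shapes, axes=1)
```
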
  • Patent number: 9519988
    Abstract: A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: December 13, 2016
    Assignees: Pixar, ETH Zürich, Disney Enterprises, Inc.
    Inventors: Robert Sumner, Fabian Hahn, Bernhard Thomaszewski, Stelian Coros, Forrester Cole, Mark Meyer, Anthony DeRose, Markus Gross
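
A minimal linear-algebra sketch of the reduced solve in the last sentence of the abstract: the full system is projected into the pose-adapted basis, solved there, and lifted back to full coordinates. The symmetric Galerkin projection shown is a standard choice and an assumption here.

```python
import numpy as np

def subspace_solve(U, K_full, f_full):
    """Solve K x = f in the span of the adapted basis U (columns = basis vectors).

    Reduces an (n x n) system to (k x k), where k = U.shape[1] << n.
    """
    K_red = U.T @ K_full @ U          # reduced system matrix
    f_red = U.T @ f_full              # reduced right-hand side
    q = np.linalg.solve(K_red, f_red) # subspace coordinates
    return U @ q                      # lift back to full-space coordinates
```

Because only the small reduced system changes as the basis is re-selected per pose, each simulation step stays far cheaper than a full-space solve.
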
  • Publication number: 20160093084
    Abstract: A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis.
    Type: Application
    Filed: September 30, 2014
    Publication date: March 31, 2016
    Applicant: Pixar
    Inventors: Robert Sumner, Fabian Hahn, Bernhard Thomaszewski, Stelian Coros, Forrester Cole, Mark Meyer, Anthony DeRose, Markus Gross
  • Patent number: 9076258
    Abstract: The disclosure provides an approach for stylizing animations to synthesize example textures. In one embodiment, a synthesis application down-samples input and style buffers. To obtain a sequence of offset fields, each of which takes pixels in the output stylized frame to corresponding pixels in the stylized example image, the synthesis application may optimize each frame of the animation at level l−1, then advect the results of a previous frame to a next frame using velocity fields. After having processed the entire animation sequence forward through time, a similar sweep may be performed backwards. Then, the resulting offset fields may be up-sampled to level l and used as the starting point for optimization at that finer level of detail. This process may be repeated until returning to the original sampling, which yields the final output.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: July 7, 2015
    Assignee: Pixar
    Inventors: Michael Kass, Pierre Benard, Forrester Cole
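
A skeleton of the coarse-to-fine sweep structure the abstract describes. The operators below are placeholders (the real method uses patch-based texture optimization against the style exemplar), and frame dimensions are assumed divisible by 2 ** (levels - 1).

```python
import numpy as np

def optimize(frame, offsets, level):
    """Placeholder: refine the offset field against the stylized exemplar."""
    return offsets

def advect(offsets, velocity):
    """Carry offsets to the next frame along a (naively downsampled) velocity field."""
    sy = velocity.shape[0] // offsets.shape[0]
    sx = velocity.shape[1] // offsets.shape[1]
    return offsets + velocity[::sy, ::sx]

def upsample(offsets):
    """Double the offset field's resolution and rescale it to the finer pixel grid."""
    return 2.0 * np.kron(offsets, np.ones((2, 2, 1)))

def stylize_sequence(frames, velocities, levels=3):
    h, w = frames[0].shape[:2]
    s = 2 ** (levels - 1)
    offsets = [np.zeros((h // s, w // s, 2)) for _ in frames]
    for level in reversed(range(levels)):            # coarsest level first
        for t in range(len(frames)):                 # forward sweep through time
            offsets[t] = optimize(frames[t], offsets[t], level)
            if t + 1 < len(frames):
                offsets[t + 1] = advect(offsets[t], velocities[t])
        for t in reversed(range(len(frames))):       # backward sweep
            offsets[t] = optimize(frames[t], offsets[t], level)
        if level > 0:                                # move to the next finer level
            offsets = [upsample(o) for o in offsets]
    return offsets
```
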
  • Publication number: 20140267350
    Abstract: The disclosure provides an approach for stylizing animations to synthesize example textures. In one embodiment, a synthesis application down-samples input and style buffers. To obtain a sequence of offset fields, each of which takes pixels in the output stylized frame to corresponding pixels in the stylized example image, the synthesis application may optimize each frame of the animation at level l−1, then advect the results of a previous frame to a next frame using velocity fields. After having processed the entire animation sequence forward through time, a similar sweep may be performed backwards. Then, the resulting offset fields may be up-sampled to level l and used as the starting point for optimization at that finer level of detail. This process may be repeated until returning to the original sampling, which yields the final output.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 18, 2014
    Applicant: Pixar
    Inventors: Michael Kass, Pierre Benard, Forrester Cole