Patents by Inventor Forrester Cole
Forrester Cole has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240242366
Abstract: A method includes determining, based on a first image, a first depth of a first pixel and, based on a second image, a second depth of a second pixel that corresponds to the first pixel. The method also includes determining a first 3D point based on the first depth and a second 3D point based on the second depth, and determining a scene flow between the first and second images. The method additionally includes determining an induced pixel position based on a post-flow 3D point representing the first 3D point displaced according to the scene flow, determining a flow loss value based on the induced pixel position and a position of the second pixel and a depth loss value based on the post-flow 3D point and the second 3D point, and adjusting the depth model or the scene flow model based on the flow and depth loss values.
Type: Application
Filed: July 2, 2021
Publication date: July 18, 2024
Inventors: Forrester Cole, Zhoutong Zhang, Tali Dekel, William T. Freeman
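The geometric consistency check this abstract describes can be sketched numerically: back-project each pixel with its predicted depth, displace the first 3D point by the scene flow, and compare both the re-projected ("induced") pixel and the displaced point against the second image's observations. The pinhole camera, focal length, and L2 loss forms below are illustrative assumptions, not the patented formulation.

```python
# Hedged sketch: the camera model and loss definitions are assumptions.

def backproject(u, v, depth, f=100.0, cx=0.0, cy=0.0):
    """Lift a pixel (u, v) with a depth value to a 3D camera-space point."""
    return ((u - cx) * depth / f, (v - cy) * depth / f, depth)

def project(p, f=100.0, cx=0.0, cy=0.0):
    """Project a 3D camera-space point back to pixel coordinates."""
    x, y, z = p
    return (f * x / z + cx, f * y / z + cy)

def flow_and_depth_losses(pix1, depth1, pix2, depth2, scene_flow):
    # First 3D point, from the first image's depth prediction.
    p1 = backproject(pix1[0], pix1[1], depth1)
    # Displace it by the predicted scene flow to get the post-flow point.
    p1_flowed = tuple(a + b for a, b in zip(p1, scene_flow))
    # Induced pixel position: where the displaced point lands in image 2.
    induced = project(p1_flowed)
    # Flow loss: distance between induced and observed pixel positions.
    flow_loss = ((induced[0] - pix2[0]) ** 2
                 + (induced[1] - pix2[1]) ** 2) ** 0.5
    # Depth loss: compare the post-flow point against the second 3D point.
    p2 = backproject(pix2[0], pix2[1], depth2)
    depth_loss = sum((a - b) ** 2 for a, b in zip(p1_flowed, p2)) ** 0.5
    return flow_loss, depth_loss

# With perfectly consistent depths and flow, both losses vanish.
fl, dl = flow_and_depth_losses((10.0, 5.0), 2.0, (15.0, 5.0), 2.0,
                               scene_flow=(0.1, 0.0, 0.0))
# → fl ≈ 0.0, dl ≈ 0.0
```

In training, gradients of these two loss values would adjust the depth model and the scene flow model jointly.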
-
Patent number: 11978225
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: April 17, 2023
Date of Patent: May 7, 2024
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
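The role of the object mask in this family of filings can be sketched as a per-pixel merge: parallax-based depth is trusted where the scene is static, and the learned model's prediction fills in the moving regions the mask removed. The mask convention (1 = static, 0 = moving) and the simple fill rule are assumptions for illustration, not the trained model itself.

```python
# Hedged sketch: mask convention and merge rule are illustrative assumptions.

def merge_depths(static_depth, dynamic_depth, object_mask):
    """Per-pixel merge: keep the static (motion-parallax) depth where the
    mask preserves static features, fall back to the model-predicted
    dynamic depth for moving features that the mask removed."""
    merged = []
    for row_s, row_d, row_m in zip(static_depth, dynamic_depth, object_mask):
        merged.append([s if m == 1 else d
                       for s, d, m in zip(row_s, row_d, row_m)])
    return merged

static = [[1.0, 2.0], [3.0, 4.0]]   # parallax depth (valid on static pixels)
dynamic = [[9.0, 9.0], [9.0, 9.0]]  # model prediction (covers moving pixels)
mask = [[1, 0], [1, 1]]             # pixel (0, 1) belongs to a moving object
result = merge_depths(static, dynamic, mask)
# → [[1.0, 9.0], [3.0, 4.0]]
```

The patented training scheme goes further: the model learns to predict depth for the masked-out moving features from the target image, the mask, and the static depth image together.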
-
Publication number: 20230260145
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: April 17, 2023
Publication date: August 17, 2023
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 11663733
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: March 23, 2022
Date of Patent: May 30, 2023
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Publication number: 20220215568
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 11315274
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: September 20, 2019
Date of Patent: April 26, 2022
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Publication number: 20210090279
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 10853987
Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
Type: Grant
Filed: December 3, 2019
Date of Patent: December 1, 2020
Assignee: Google LLC
Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
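The attribute-to-asset pipeline this abstract describes can be sketched schematically: classify an attribute from the photo, map the classifier's label to a cartoon asset, and assemble the avatar from the selected assets. The attribute names, labels, and asset table below are invented for illustration, and the real system uses trained image classifiers rather than the dictionary lookup standing in for one here.

```python
# Hedged sketch: attributes, labels, and assets are invented placeholders.

ASSET_TABLE = {
    "hair_color": {"dark": "hair_dark.svg", "light": "hair_light.svg"},
    "glasses": {"yes": "glasses.svg", "no": None},
}

def classify(image, attribute):
    """Stand-in for a trained classifier: here just a lookup on a dict
    'image' keyed by attribute name."""
    return image[attribute]

def render_avatar(image, template_attributes):
    assets = []
    for attribute in template_attributes:          # attributes the template needs
        label = classify(image, attribute)         # label from the classifier
        asset = ASSET_TABLE[attribute].get(label)  # label -> cartoon asset
        if asset is not None:
            assets.append(asset)
    return assets  # a real renderer would composite these layers

photo = {"hair_color": "dark", "glasses": "yes"}
avatar = render_avatar(photo, ["hair_color", "glasses"])
# → ['hair_dark.svg', 'glasses.svg']
```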
-
Publication number: 20200175740
Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
Type: Application
Filed: December 3, 2019
Publication date: June 4, 2020
Applicant: Google LLC
Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
-
Patent number: 10529115
Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
Type: Grant
Filed: March 14, 2018
Date of Patent: January 7, 2020
Assignee: Google LLC
Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
-
Publication number: 20180268595
Abstract: A system and method for generating cartoon images from photos are described. The method includes receiving an image of a user, determining a template for a cartoon avatar, determining an attribute needed for the template, processing the image with a classifier trained for classifying the attribute included in the image, determining a label generated by the classifier for the attribute, determining a cartoon asset for the attribute based on the label, and rendering the cartoon avatar personifying the user using the cartoon asset.
Type: Application
Filed: March 14, 2018
Publication date: September 20, 2018
Inventors: Aaron Sarna, Dilip Krishnan, Forrester Cole, Inbar Mosseri
-
Patent number: 9665955
Abstract: Techniques relate to fitting a shape of an object when placed in a desired pose. For example, a plurality of training poses can be received, wherein each training pose is associated with a training shape. The training poses can be clustered in pose space, and a bid point can be determined for each cluster. A cluster-fitted shape can then be determined for a pose at the bid point using the training shapes in the cluster. A weight for each cluster-fitted shape can then be determined. The cluster-fitted shapes can then be combined using the determined weights to determine a shape of the object in the desired pose.
Type: Grant
Filed: September 30, 2014
Date of Patent: May 30, 2017
Assignee: Pixar
Inventors: Mark Meyer, Forrester Cole
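The cluster-and-blend scheme in this abstract can be sketched in one dimension: each cluster of training poses yields a bid point and a cluster-fitted shape, and the desired pose's shape is a weighted blend of those cluster-fitted shapes. Using the cluster centroid as the bid point, the mean training shape as the cluster fit, and inverse-distance weights are all simplifying assumptions made for illustration.

```python
# Hedged sketch: bid points, cluster fits, and weights are assumed forms.

def fit_shape(desired_pose, clusters, eps=1e-6):
    """clusters: list of (poses, shapes) pairs; poses and shapes are
    scalars here, standing in for high-dimensional rig and mesh data."""
    bid_points, cluster_shapes = [], []
    for poses, shapes in clusters:
        bid_points.append(sum(poses) / len(poses))        # bid point per cluster
        cluster_shapes.append(sum(shapes) / len(shapes))  # cluster-fitted shape
    # Weight each cluster by proximity of the desired pose to its bid point.
    raw = [1.0 / (abs(desired_pose - b) + eps) for b in bid_points]
    total = sum(raw)
    weights = [w / total for w in raw]
    # Combine the cluster-fitted shapes using the determined weights.
    return sum(w * s for w, s in zip(weights, cluster_shapes))

clusters = [([0.0, 0.2], [1.0, 1.2]),   # cluster of training poses near 0.1
            ([1.0, 1.2], [3.0, 3.2])]   # cluster of training poses near 1.1
shape = fit_shape(0.1, clusters)
# → ≈ 1.1 (the desired pose sits on the first cluster's bid point)
```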
-
Patent number: 9519988
Abstract: A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis.
Type: Grant
Filed: September 30, 2014
Date of Patent: December 13, 2016
Assignees: Pixar, ETH Zürich, Disney Enterprises, Inc.
Inventors: Robert Sumner, Fabian Hahn, Bernhard Thomaszewski, Stelian Coros, Forrester Cole, Mark Meyer, Anthony Derose, Markus Gross
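The "reduced system matrices" step in this abstract is standard subspace linear algebra: a full-space system A x = b is projected onto an adapted basis U, the small system (Uᵀ A U) q = Uᵀ b is solved, and the surface state is reconstructed as x ≈ U q. The 4-vertex system and 2-vector basis below are invented to show the mechanics, not the patent's adaptive basis selection.

```python
# Hedged sketch: the system, basis, and sizes are illustrative only.

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve2(M, r):
    """Cramer's rule for the 2x2 reduced system M q = r."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - r[0] * M[1][0]) / det]

def reduced_solve(A, b, U):
    Ut = transpose(U)
    A_red = matmul(Ut, matmul(A, U))   # reduced system matrix: Uᵀ A U
    b_red = matvec(Ut, b)              # reduced right-hand side: Uᵀ b
    q = solve2(A_red, b_red)           # small solve in the subspace
    return matvec(U, q)                # reconstruct full state: x ≈ U q

A = [[2.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 2.0, 0], [0, 0, 0, 2.0]]
b = [2.0, 4.0, 6.0, 8.0]
U = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]  # adapted 2-vector basis
x = reduced_solve(A, b, U)
# → [1.0, 2.0, 0.0, 0.0]: the solve happens in 2 dimensions, not 4
```

The patented method's contribution is choosing which basis vectors enter U at each frame, adapting to the character's pose and the clothing's state.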
-
Publication number: 20160093084
Abstract: A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Applicant: Pixar
Inventors: Robert Sumner, Fabian Hahn, Bernhard Thomaszewski, Stelian Coros, Forrester Cole, Mark Meyer, Anthony Derose, Markus Gross
-
Patent number: 9076258
Abstract: The disclosure provides an approach for stylizing animations to synthesize example textures. In one embodiment, a synthesis application down-samples input and style buffers. To obtain a sequence of offset fields, each of which takes pixels in the output stylized frame to corresponding pixels in the stylized example image, the synthesis application may optimize each frame of the animation at level l−1, then advect the results of a previous frame to a next frame using velocity fields. After having processed the entire animation sequence forward through time, a similar sweep may be performed backwards. Then, the resulting offset fields may be up-sampled to level l and used as the starting point for optimization at that finer level of detail. This process may be repeated until returning to the original sampling, which yields the final output.
Type: Grant
Filed: March 14, 2013
Date of Patent: July 7, 2015
Assignee: Pixar
Inventors: Michael Kass, Pierre Benard, Forrester Cole
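Two mechanics from this abstract can be sketched in one dimension: advecting the previous frame's offset field along a velocity field so the next frame's optimization starts from temporally coherent offsets, and up-sampling a coarse offset field from level l−1 to level l. Nearest-neighbor advection and 2x replication with offset doubling are simplifying assumptions, not the patent's synthesis procedure.

```python
# Hedged sketch: 1-D fields, nearest-neighbor advection, naive upsampling.

def advect(offsets, velocity):
    """Carry each pixel's offset along the velocity field, giving the
    next frame a temporally coherent starting point."""
    n = len(offsets)
    result = [0] * n
    for i in range(n):
        src = min(max(i - velocity[i], 0), n - 1)  # where pixel i came from
        result[i] = offsets[src]
    return result

def upsample(offsets):
    """Move an offset field from level l-1 to level l: double the
    resolution and scale offsets to the finer pixel grid."""
    fine = []
    for o in offsets:
        fine.extend([2 * o, 2 * o])
    return fine

prev = [3, 1, 4, 1]
vel = [1, 1, 0, 0]              # the left two pixels moved right by one
advected = advect(prev, vel)
# → [3, 3, 4, 1]
coarse = [2, 5]
fine = upsample(coarse)
# → [4, 4, 10, 10]
```

In the patented pipeline these steps alternate with per-frame optimization, forward then backward through the sequence, at successively finer levels until the original sampling is reached.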
-
Publication number: 20140267350
Abstract: The disclosure provides an approach for stylizing animations to synthesize example textures. In one embodiment, a synthesis application down-samples input and style buffers. To obtain a sequence of offset fields, each of which takes pixels in the output stylized frame to corresponding pixels in the stylized example image, the synthesis application may optimize each frame of the animation at level l−1, then advect the results of a previous frame to a next frame using velocity fields. After having processed the entire animation sequence forward through time, a similar sweep may be performed backwards. Then, the resulting offset fields may be up-sampled to level l and used as the starting point for optimization at that finer level of detail. This process may be repeated until returning to the original sampling, which yields the final output.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: Pixar
Inventors: Michael Kass, Pierre Benard, Forrester Cole