Patents by Inventor Noah Snavely
Noah Snavely has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11978225
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: April 17, 2023
Date of Patent: May 7, 2024
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
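The pipeline this abstract describes can be sketched as follows. This is a toy stand-in, not the patented method: the parallax-to-depth formula is a simplified stereo-style model, and median inpainting substitutes for the trained machine learning model that predicts depth for moving features.

```python
import numpy as np

def static_depth_from_parallax(disparity, focal=500.0, baseline=0.1):
    # Toy motion-parallax depth: z = f * b / d (assumed stereo-style model).
    eps = 1e-6
    return focal * baseline / np.maximum(disparity, eps)

def fill_dynamic_depth(static_depth, object_mask):
    # Stand-in for the learned model: give masked (moving) pixels the
    # median depth of the surrounding static pixels.
    dynamic = static_depth.copy()
    dynamic[object_mask] = np.median(static_depth[~object_mask])
    return dynamic

# Toy 4x4 frame: the centre 2x2 block is a moving object.
disparity = np.full((4, 4), 5.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

static = static_depth_from_parallax(disparity)
static[mask] = 0.0          # parallax yields no valid depth for movers
dynamic = fill_dynamic_depth(static, mask)
print(np.all(dynamic > 0))  # every pixel now carries a depth estimate
```

The object mask thus splits the problem: geometry (parallax) handles the static scene, and a learned prior fills in the movers.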
-
Publication number: 20230260145
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: April 17, 2023
Publication date: August 17, 2023
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 11663733
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: March 23, 2022
Date of Patent: May 30, 2023
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Publication number: 20220215568
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: March 23, 2022
Publication date: July 7, 2022
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 11315274
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Grant
Filed: September 20, 2019
Date of Patent: April 26, 2022
Assignee: Google LLC
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Patent number: 11288857
Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
Type: Grant
Filed: April 1, 2020
Date of Patent: March 29, 2022
Assignee: Google LLC
Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
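The data flow in this claim (rendered buffer in, appearance code in, rerendered image out) can be sketched with a trivial stand-in. The affine per-channel transform below is only an illustration of conditioning on an appearance code; the patented network is a learned image-to-image model, and the code values here are hypothetical.

```python
import numpy as np

def rerender(buffer, appearance_code):
    # Stand-in for the neural rerendering network: a per-channel affine
    # transform whose gains and biases come from the appearance code.
    gain = 0.5 + appearance_code[:3]     # RGB gains from the code
    bias = 0.1 * appearance_code[3:]     # RGB biases from the code
    return np.clip(buffer * gain + bias, 0.0, 1.0)

# A toy "image data buffer" rendered from the 3D model at some viewpoint.
buffer = np.full((8, 8, 3), 0.4)

sunset = np.array([0.9, 0.5, 0.2, 1.0, 0.3, 0.0])  # hypothetical codes
noon = np.array([0.5, 0.5, 0.5, 0.0, 0.0, 0.0])
warm = rerender(buffer, sunset)
cool = rerender(buffer, noon)
print(warm[0, 0], cool[0, 0])  # same viewpoint, two appearance conditions
```

The point of the design is that viewpoint (baked into the buffer) and appearance (carried by the code) are controlled independently.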
-
Publication number: 20210090279
Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
-
Publication number: 20200320777
Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
Type: Application
Filed: April 1, 2020
Publication date: October 8, 2020
Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
-
Patent number: 10681325
Abstract: A system creates an output image of a scene using two-dimensional (2D) images of the scene. For a pixel in the output image, the system identifies, in the output image, 2D fragments that correspond to the pixel. The system converts the 2D fragments into three dimensional (3D) fragments, creates volume spans for the pixel based on the 3D fragments, determines a color of a volume span based on color contribution of respective one or more of the 3D fragments for the volume span, and determines a color of the pixel for the output image from determined colors of the volume spans.
Type: Grant
Filed: May 16, 2016
Date of Patent: June 9, 2020
Assignee: Google LLC
Inventors: Janne Kontkanen, Noah Snavely
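The last step of the claim, combining per-pixel depth-ordered fragments into a final pixel color, can be sketched as front-to-back alpha compositing. This is a generic compositing sketch under assumed fragment data (depth, RGB, alpha), not the patent's specific volume-span construction.

```python
import numpy as np

def composite_pixel(fragments):
    # fragments: list of (depth, rgb, alpha) contributions for one pixel.
    # Sort front to back and accumulate color weighted by the light that
    # still passes through the fragments in front (the transmittance).
    color = np.zeros(3)
    transmittance = 1.0
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0]):
        color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha
    return color

frags = [
    (2.0, (0.0, 0.0, 1.0), 0.5),  # semi-transparent blue span, behind
    (1.0, (1.0, 0.0, 0.0), 0.5),  # semi-transparent red span, in front
]
pixel = composite_pixel(frags)  # red dominates, attenuated blue shows through
```

With both alphas at 0.5, the front red span contributes 0.5 and the rear blue span only 0.25 of its color, which is the depth-ordering behavior the volume spans are built to preserve.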
-
Publication number: 20170332063
Abstract: A system creates an output image of a scene using two-dimensional (2D) images of the scene. For a pixel in the output image, the system identifies, in the output image, 2D fragments that correspond to the pixel. The system converts the 2D fragments into three dimensional (3D) fragments, creates volume spans for the pixel based on the 3D fragments, determines a color of a volume span based on color contribution of respective one or more of the 3D fragments for the volume span, and determines a color of the pixel for the output image from determined colors of the volume spans.
Type: Application
Filed: May 16, 2016
Publication date: November 16, 2017
Inventors: Janne Kontkanen, Noah Snavely
-
Patent number: 9531952
Abstract: Aspects of this disclosure relate to generating a composite image from an image and another image that has a wider field of view. After an image is selected, the visual features in the image may be identified. Several images, such as panoramas, which have wider fields of view than an image captured by a camera, may be selected according to a comparison of the visual features in the image and the visual features of the larger images. The image may be aligned with each of the larger images, and at least one of these smaller-larger image pairs may be generated as a composite image.
Type: Grant
Filed: March 27, 2015
Date of Patent: December 27, 2016
Assignee: Google Inc.
Inventors: Keith Noah Snavely, Pierre Georgel, Steven Maxwell Seitz
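The selection step, ranking wider-field-of-view candidates by how many visual features they share with the photo, can be sketched with mutual-nearest-neighbour descriptor matching. The random descriptors below are synthetic placeholders for real local features such as SIFT; this is an illustration of the comparison, not the patented matcher.

```python
import numpy as np

def match_score(desc_a, desc_b):
    # Count mutual nearest neighbours between two descriptor sets
    # (a toy stand-in for local-feature matching).
    d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=2)
    nn_ab = d.argmin(axis=1)      # best panorama match per photo feature
    nn_ba = d.argmin(axis=0)      # best photo match per panorama feature
    return int(np.sum(nn_ba[nn_ab] == np.arange(len(desc_a))))

rng = np.random.default_rng(1)
photo = rng.random((20, 8))                 # descriptors from the photo
pano_good = np.vstack([photo[:18],          # panorama sharing 18 features
                       rng.random((30, 8))])
pano_bad = rng.random((45, 8)) + 5.0        # unrelated panorama, far away
best = max([pano_good, pano_bad], key=lambda p: match_score(photo, p))
# best is the panorama that shares features with the photo
```

The winning panorama would then be aligned with the photo (e.g. by fitting a warp to the matched feature positions) to produce the composite.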
-
Publication number: 20160286122
Abstract: Aspects of this disclosure relate to generating a composite image from an image and another image that has a wider field of view. After an image is selected, the visual features in the image may be identified. Several images, such as panoramas, which have wider fields of view than an image captured by a camera, may be selected according to a comparison of the visual features in the image and the visual features of the larger images. The image may be aligned with each of the larger images, and at least one of these smaller-larger image pairs may be generated as a composite image.
Type: Application
Filed: March 27, 2015
Publication date: September 29, 2016
Inventors: Keith Noah Snavely, Pierre Georgel, Steven Maxwell Seitz
-
Patent number: 9418482
Abstract: Aspects of the disclosure relate to identifying visited travel destinations from a set of digital images associated with users of a social networking system. For example, one or more computing devices provide access to an individual user's account, including the individual user and other users affiliated with the individual user via the social networking system. One or more digital images are received from a computing device associated with the individual user and from one or more second computing devices associated with the other users of the social networking system. A geo-location is determined for each digital image. The one or more computing devices display each geo-located image on a map at a position corresponding to the determined geo-location for the geo-located image.
Type: Grant
Filed: May 28, 2014
Date of Patent: August 16, 2016
Assignee: Google Inc.
Inventors: Tsung-Lin Yang, Bryce Evans, Keith Noah Snavely, Yihui Xie, Andrew C. Gallagher
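One common way a geo-location is determined for a digital image is from EXIF GPS metadata, which stores latitude and longitude as degrees/minutes/seconds plus a hemisphere reference. A minimal conversion sketch (the photo records and coordinates below are made-up examples, and EXIF is only one possible source the patent covers):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    # Convert EXIF-style degrees/minutes/seconds plus hemisphere reference
    # ("N"/"S"/"E"/"W") to signed decimal degrees for map placement.
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical GPS fields read from two users' photos.
photos = [
    {"lat": (40, 42, 46.0, "N"), "lon": (74, 0, 21.0, "W")},   # New York
    {"lat": (48, 51, 24.0, "N"), "lon": (2, 21, 3.0, "E")},    # Paris
]
pins = [(dms_to_decimal(*p["lat"]), dms_to_decimal(*p["lon"])) for p in photos]
# pins[0] is approximately (40.713, -74.006); each pin marks a map position
```

Each decimal pair would then be plotted on the shared map, aggregating destinations across the user and their affiliated users.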
-
Patent number: 9324151
Abstract: Systems and methods for determining where a digital photograph was taken by estimating the camera pose with respect to a global scale three-dimensional database. Accurate location and orientation of the digital photograph is established through feature correspondence and geometry estimated from photograph collections.
Type: Grant
Filed: December 8, 2012
Date of Patent: April 26, 2016
Assignee: Cornell University
Inventors: Noah Snavely, Daniel Huttenlocher, Yungpeng Li
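Once feature correspondences pair 2D image points with 3D database points, camera pose can be recovered by resection. A minimal sketch using the classical Direct Linear Transform on synthetic, noise-free correspondences; real systems add robust estimation (e.g. RANSAC) and calibration handling, which are omitted here.

```python
import numpy as np

def dlt_camera(points_3d, points_2d):
    # Direct Linear Transform: recover the 3x4 projection matrix P from
    # >= 6 correspondences x ~ P X (homogeneous, defined up to scale).
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        P4 = [X, Y, Z, 1.0]
        rows.append([0.0] * 4 + [-c for c in P4] + [v * c for c in P4])
        rows.append(P4 + [0.0] * 4 + [-u * c for c in P4])
    # The null vector of the stacked system (last right-singular vector)
    # holds the 12 entries of P.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

# Synthetic ground-truth camera and 3D points in front of it.
P_true = np.hstack([np.eye(3), [[0.5], [-0.2], [2.0]]])
X = np.random.default_rng(2).random((8, 3)) + [0.0, 0.0, 4.0]
xh = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
uv = xh[:, :2] / xh[:, 2:]          # observed 2D projections

P_est = dlt_camera(X, uv)
xh2 = (P_est @ np.hstack([X, np.ones((8, 1))]).T).T
reproj = xh2[:, :2] / xh2[:, 2:]    # reprojection with the recovered camera
```

The recovered matrix reprojects the database points onto their observed image locations, which fixes both the location and the orientation of the photograph.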
-
Publication number: 20140314322
Abstract: Systems and methods for determining where a digital photograph was taken by estimating the camera pose with respect to a global scale three-dimensional database. Accurate location and orientation of the digital photograph is established through feature correspondence and geometry estimated from photograph collections.
Type: Application
Filed: December 8, 2012
Publication date: October 23, 2014
Applicant: Cornell University
Inventors: Noah Snavely, Daniel Huttenlocher, Yungpeng Li
-
Patent number: 8744214
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Grant
Filed: May 21, 2013
Date of Patent: June 3, 2014
Assignees: Microsoft Corporation, University of Washington
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
-
Publication number: 20130254666
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Application
Filed: May 21, 2013
Publication date: September 26, 2013
Applicants: University of Washington, Microsoft Corporation
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
-
Patent number: 8515159
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Grant
Filed: March 14, 2012
Date of Patent: August 20, 2013
Assignees: Microsoft Corporation, University of Washington
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
-
Patent number: 8463071
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Grant
Filed: March 13, 2012
Date of Patent: June 11, 2013
Assignees: Microsoft Corporation, University of Washington
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
-
Publication number: 20120169734
Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
Type: Application
Filed: March 14, 2012
Publication date: July 5, 2012
Applicants: University of Washington, Microsoft Corporation
Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski