Patents by Inventor Noah Snavely

Noah Snavely has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230260145
    Abstract: A method includes obtaining a reference image and a target image, each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: April 17, 2023
    Publication date: August 17, 2023
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
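
The depth-from-parallax method recited in this family of filings is concrete enough to sketch. Below is a minimal, hypothetical PyTorch training step, assuming a toy convolutional network: the model takes the target image, the object mask over moving features, and the parallax-derived static depth, and is supervised only on pixels the mask marks as static. The class name, channel counts, and loss are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class DynamicDepthNet(nn.Module):
    """Toy stand-in for the dynamic-depth model described in the abstract."""
    def __init__(self):
        super().__init__()
        # Inputs: 3 channels (RGB target) + 1 (object mask) + 1 (static depth).
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # depth must be positive
        )

    def forward(self, image, mask, static_depth):
        return self.net(torch.cat([image, mask, static_depth], dim=1))

model = DynamicDepthNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Toy batch: the mask is 1 on static pixels and 0 on moving features, so the
# parallax-derived depth supervises only the static regions.
image = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.3).float()
static_depth = torch.rand(2, 1, 64, 64) * mask

opt.zero_grad()
pred = model(image, mask, static_depth)
loss = ((pred - static_depth).abs() * mask).sum() / mask.sum()
loss.backward()
opt.step()
```

At inference time, such a network predicts depth for the masked (moving) regions as well, which is the "dynamic depth image" the claims describe.
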
  • Patent number: 11663733
    Abstract: A method includes obtaining a reference image and a target image, each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: May 30, 2023
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Publication number: 20220215568
    Abstract: A method includes obtaining a reference image and a target image, each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: March 23, 2022
    Publication date: July 7, 2022
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11315274
    Abstract: A method includes obtaining a reference image and a target image, each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: April 26, 2022
    Assignee: Google LLC
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Patent number: 11288857
    Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Google LLC
    Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
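
The neural rerendering interface described above reduces to a network conditioned on two inputs: a rendered image buffer and an appearance code. The hypothetical PyTorch module below broadcasts the code across the spatial grid and concatenates it with the buffer; the channel counts and architecture are assumptions for illustration, not the claimed network.

```python
import torch
import torch.nn as nn

class NeuralRerenderer(nn.Module):
    """Minimal sketch: image buffer + appearance code -> rerendered image."""
    def __init__(self, buffer_channels=4, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(buffer_channels + code_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB output in [0, 1]
        )

    def forward(self, image_buffer, appearance_code):
        b, _, h, w = image_buffer.shape
        # Broadcast the appearance code over the spatial grid and concatenate.
        code_map = appearance_code[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([image_buffer, code_map], dim=1))

buffer = torch.rand(1, 4, 64, 64)  # e.g., RGB + depth rendered from the 3D model
code = torch.randn(1, 8)           # appearance condition (time of day, weather, ...)
rerendered = NeuralRerenderer()(buffer, code)
print(rerendered.shape)            # torch.Size([1, 3, 64, 64])
```

Varying the code while holding the buffer fixed changes only the appearance condition, which matches the behavior the abstract describes.
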
  • Publication number: 20210090279
    Abstract: A method includes obtaining a reference image and a target image, each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Tali Dekel, Forrester Cole, Ce Liu, William Freeman, Richard Tucker, Noah Snavely, Zhengqi Li
  • Publication number: 20200320777
    Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
    Type: Application
    Filed: April 1, 2020
    Publication date: October 8, 2020
    Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
  • Patent number: 10681325
    Abstract: A system creates an output image of a scene using two-dimensional (2D) images of the scene. For a pixel in the output image, the system identifies, in the output image, 2D fragments that correspond to the pixel. The system converts the 2D fragments into three dimensional (3D) fragments, creates volume spans for the pixel based on the 3D fragments, determines a color of a volume span based on color contribution of respective one or more of the 3D fragments for the volume span, and determines a color of the pixel for the output image from determined colors of the volume spans.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: Janne Kontkanen, Noah Snavely
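
The per-pixel pipeline in this pair of filings (2D fragments lifted to 3D, grouped into depth "volume spans", each span colored from its fragments, spans composited into a pixel color) can be illustrated in plain Python. The span-grouping threshold and the averaging/blending rules below are simplified assumptions, not the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    depth: float
    color: tuple   # (r, g, b)
    alpha: float

def spans_from_fragments(fragments, thickness=0.1):
    """Group 3D fragments whose depths fall within the same span interval."""
    spans = []
    for f in sorted(fragments, key=lambda f: f.depth):
        if spans and f.depth - spans[-1]["near"] < thickness:
            spans[-1]["frags"].append(f)
        else:
            spans.append({"near": f.depth, "frags": [f]})
    return spans

def span_color(span):
    """Color a span from the contributions of the fragments inside it."""
    n = len(span["frags"])
    r = sum(f.color[0] for f in span["frags"]) / n
    g = sum(f.color[1] for f in span["frags"]) / n
    b = sum(f.color[2] for f in span["frags"]) / n
    a = max(f.alpha for f in span["frags"])
    return (r, g, b), a

def pixel_color(fragments):
    """Composite span colors front to back (the "over" operator)."""
    out, transmittance = [0.0, 0.0, 0.0], 1.0
    for span in spans_from_fragments(fragments):
        (r, g, b), a = span_color(span)
        out[0] += transmittance * a * r
        out[1] += transmittance * a * g
        out[2] += transmittance * a * b
        transmittance *= 1.0 - a
    return tuple(out)

frags = [Fragment(1.0, (1, 0, 0), 0.5), Fragment(1.05, (0, 1, 0), 0.5),
         Fragment(2.0, (0, 0, 1), 1.0)]
print(pixel_color(frags))  # two nearby fragments share a span; the far one is occluded by it
```
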
  • Publication number: 20170332063
    Abstract: A system creates an output image of a scene using two-dimensional (2D) images of the scene. For a pixel in the output image, the system identifies, in the output image, 2D fragments that correspond to the pixel. The system converts the 2D fragments into three dimensional (3D) fragments, creates volume spans for the pixel based on the 3D fragments, determines a color of a volume span based on color contribution of respective one or more of the 3D fragments for the volume span, and determines a color of the pixel for the output image from determined colors of the volume spans.
    Type: Application
    Filed: May 16, 2016
    Publication date: November 16, 2017
    Inventors: Janne Kontkanen, Noah Snavely
  • Patent number: 9531952
    Abstract: Aspects of this disclosure relate to generating a composite image from an image and another image that has a wider field of view. After an image is selected, the visual features in the image may be identified. Several images, such as panoramas, which have wider fields of view than an image captured by a camera, may be selected according to a comparison of the visual features in the image and the visual features of the larger images. The image may be aligned with each of the larger images, and at least one of these smaller-larger image pairs may be generated as a composite image.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: December 27, 2016
    Assignee: Google Inc.
    Inventors: Keith Noah Snavely, Pierre Georgel, Steven Maxwell Seitz
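
A hedged sketch of the matching-and-alignment step this patent describes: compare local visual features of the query photo against each wider-field-of-view candidate (such as a panorama), reject candidates with too little visual overlap, estimate an alignment, and composite the pair. Standard OpenCV primitives stand in for the patented pipeline; the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def align_into_panorama(photo, panorama, min_matches=12):
    """photo, panorama: uint8 images. Returns a composite or None on failure."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(panorama, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < min_matches:
        return None  # candidate rejected: too little feature overlap
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # Warp the narrower photo into the panorama's frame and overlay it.
    h, w = panorama.shape[:2]
    warped = cv2.warpPerspective(photo, H, (w, h))
    return np.where(warped > 0, warped, panorama)
```
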
  • Publication number: 20160286122
    Abstract: Aspects of this disclosure relate to generating a composite image from an image and another image that has a wider field of view. After an image is selected, the visual features in the image may be identified. Several images, such as panoramas, which have wider fields of view than an image captured by a camera, may be selected according to a comparison of the visual features in the image and the visual features of the larger images. The image may be aligned with each of the larger images, and at least one of these smaller-larger image pairs may be generated as a composite image.
    Type: Application
    Filed: March 27, 2015
    Publication date: September 29, 2016
    Inventors: Keith Noah Snavely, Pierre Georgel, Steven Maxwell Seitz
  • Patent number: 9418482
    Abstract: Aspects of the disclosure relate to identifying visited travel destinations from a set of digital images associated with users of a social networking system. For example, one or more computing devices provide access to an individual user's account, including the individual user and other users affiliated with the individual user via the social networking system. One or more digital images are received from a computing device associated with the individual user and from one or more second computing devices associated with the other users of the social networking system. A geo-location is determined for each digital image. The one or more computing devices display each geo-located image on a map at a position corresponding to the determined geo-location for the geo-located image.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: August 16, 2016
    Assignee: Google Inc.
    Inventors: Tsung-Lin Yang, Bryce Evans, Keith Noah Snavely, Yihui Xie, Andrew C. Gallagher
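
One plausible reading of the geo-location step is extracting GPS EXIF tags from each shared photo and converting them to latitude/longitude for map placement. The sketch below does that with Pillow; it is an assumption about one way the geo-location could be determined, and it omits any fallback for photos without GPS metadata.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) plus N/S/E/W ref to a signed float."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def photo_geolocation(path):
    """Return (lat, lon) from a photo's GPS EXIF tags, or None if absent."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # GPSInfo IFD
    gps = {GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None
    lat = to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    return lat, lon
```

Each resulting (lat, lon) pair is then a map position at which the geo-located image can be displayed, as the claims recite.
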
  • Patent number: 9324151
    Abstract: System and methods for determining where a digital photograph was taken by estimating the camera pose with respect to a global scale three-dimensional database. Accurate location and orientation of the digital photograph is established through feature correspondence and geometry estimated from photograph collections.
    Type: Grant
    Filed: December 8, 2012
    Date of Patent: April 26, 2016
    Assignee: Cornell University
    Inventors: Noah Snavely, Daniel Huttenlocher, Yunpeng Li
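
The pose-estimation task this Cornell patent describes, recovering where a photograph was taken against a global-scale 3D point database, is the classic perspective-n-point problem once 2D-to-3D feature correspondences are in hand. The sketch below assumes those correspondences are given and uses OpenCV's RANSAC PnP solver as a stand-in for the patented matching-and-geometry pipeline.

```python
import cv2
import numpy as np

def camera_pose(points3d, points2d, K):
    """points3d: Nx3 world coordinates; points2d: Nx2 pixel coordinates;
    K: 3x3 float intrinsics. Returns (camera_center, R) or None."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points3d.astype(np.float32), points2d.astype(np.float32), K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix from axis-angle
    camera_center = (-R.T @ tvec).ravel()  # where the photo was taken, in world coords
    return camera_center, R                # position and orientation of the camera
```

The camera center gives the location the photograph was taken from, and R gives its orientation, the two quantities the abstract says are established through feature correspondence and geometry.
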
  • Publication number: 20140314322
    Abstract: System and methods for determining where a digital photograph was taken by estimating the camera pose with respect to a global scale three-dimensional database. Accurate location and orientation of the digital photograph is established through feature correspondence and geometry estimated from photograph collections.
    Type: Application
    Filed: December 8, 2012
    Publication date: October 23, 2014
    Applicant: Cornell University
    Inventors: Noah Snavely, Daniel Huttenlocher, Yunpeng Li
  • Patent number: 8744214
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Grant
    Filed: May 21, 2013
    Date of Patent: June 3, 2014
    Assignees: Microsoft Corporation, University of Washington
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
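
One browsing tool of the kind this family of patents evaluates (retrieve the photos that were pointed at a chosen landmark, given camera positions and viewing directions recovered by structure from motion) can be sketched in a few lines of NumPy. The angular-threshold scoring rule below is an illustrative assumption, not the patented tool set.

```python
import numpy as np

def photos_looking_at(target, centers, directions, max_angle_deg=20.0):
    """centers: Nx3 camera positions; directions: Nx3 unit viewing vectors.
    Returns indices of photos whose view direction points toward target."""
    to_target = target[None, :] - centers
    to_target /= np.linalg.norm(to_target, axis=1, keepdims=True)
    cos_angle = np.sum(directions * to_target, axis=1)
    threshold = np.cos(np.radians(max_angle_deg))
    return np.nonzero(cos_angle >= threshold)[0]

# Three cameras recovered by structure from motion; find those aimed at the landmark.
centers = np.array([[0, 0, 0], [10, 0, 0], [0, 5, 0]], dtype=float)
directions = np.array([[1, 0, 0], [-1, 0, 0], [0, -1, 0]], dtype=float)
landmark = np.array([5.0, 0.0, 0.0])
print(photos_looking_at(landmark, centers, directions))  # [0 1]
```
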
  • Publication number: 20130254666
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Application
    Filed: May 21, 2013
    Publication date: September 26, 2013
    Applicants: University of Washington, Microsoft Corporation
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
  • Patent number: 8515159
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Grant
    Filed: March 14, 2012
    Date of Patent: August 20, 2013
    Assignees: Microsoft Corporation, University of Washington
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
  • Patent number: 8463071
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: June 11, 2013
    Assignees: Microsoft Corporation, University of Washington
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
  • Publication number: 20120169734
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Application
    Filed: March 14, 2012
    Publication date: July 5, 2012
    Applicants: University of Washington, Microsoft Corporation
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski
  • Publication number: 20120169770
    Abstract: Over the past few years there has been a dramatic proliferation of digital cameras, and it has become increasingly easy to share large numbers of photographs with many other people. These trends have contributed to the availability of large databases of photographs. Effectively organizing, browsing, and visualizing such "seas" of images, as well as finding a particular image, can be difficult tasks. In this paper, we demonstrate that knowledge of where images were taken and where they were pointed makes it possible to visualize large sets of photographs in powerful, intuitive new ways. We present and evaluate a set of novel tools that use location and orientation information, derived semi-automatically using structure from motion, to enhance the experience of exploring such large collections of images.
    Type: Application
    Filed: March 13, 2012
    Publication date: July 5, 2012
    Applicants: University of Washington, Microsoft Corporation
    Inventors: Keith Noah Snavely, Steven Maxwell Seitz, Richard Szeliski