Patents by Inventor Steven Maxwell Seitz

Steven Maxwell Seitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240005590
    Abstract: Techniques of image synthesis using a neural radiance field (NeRF) include generating a deformation model of movement experienced by a subject in a non-rigidly deforming scene. For example, when an image synthesis system uses NeRFs, the system takes as input multiple poses of subjects for training data. In contrast to conventional NeRFs, the technical solution first expresses the positions of the subjects from various perspectives in an observation frame. The technical solution then involves deriving a deformation model, i.e., a mapping between the observation frame and a canonical frame in which the subject's movements are taken into account. This mapping is accomplished using latent deformation codes for each pose that are determined using a multilayer perceptron (MLP). A NeRF is then derived from positions and cast ray directions in the canonical frame using another MLP. New poses for the subject may then be derived using the NeRF.
    Type: Application
    Filed: January 14, 2021
    Publication date: January 4, 2024
    Inventors: Ricardo Martin Brualla, Keunhong Park, Utkarsh Sinha, Sofien Bouaziz, Daniel Goldman, Jonathan Tilton Barron, Steven Maxwell Seitz
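A minimal sketch of the two-stage mapping the abstract above describes: a deformation MLP conditioned on a per-pose latent code maps observation-frame points into a canonical frame, and a second MLP (the NeRF) maps canonical points plus ray directions to density and color. The layer sizes, latent-code dimension, and random weights are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def mlp(x, widths, rng):
    """Tiny random-weight MLP used only to illustrate the data flow."""
    for w in widths:
        x = np.tanh(x @ rng.standard_normal((x.shape[-1], w)) * 0.1)
    return x

rng = np.random.default_rng(0)
latent_codes = rng.standard_normal((10, 8))      # one learned code per training pose

def deform_to_canonical(points_obs, pose_idx):
    """Deformation model: observation frame -> canonical frame."""
    code = np.broadcast_to(latent_codes[pose_idx], (len(points_obs), 8))
    offsets = mlp(np.concatenate([points_obs, code], axis=-1), [64, 64, 3], rng)
    return points_obs + offsets

def nerf(points_canonical, ray_dirs):
    """Canonical-frame NeRF: returns (density, RGB) per sample."""
    feats = mlp(np.concatenate([points_canonical, ray_dirs], axis=-1), [64, 4], rng)
    density = np.exp(feats[:, :1])
    rgb = 1.0 / (1.0 + np.exp(-feats[:, 1:]))
    return density, rgb

pts = rng.standard_normal((5, 3))                # samples along a cast ray
dirs = np.tile([[0.0, 0.0, 1.0]], (5, 1))
density, rgb = nerf(deform_to_canonical(pts, pose_idx=3), dirs)
print(density.shape, rgb.shape)                  # (5, 1) (5, 3)
```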
  • Patent number: 10681336
    Abstract: Aspects of the disclosure relate generally to generating depth data from a video. As an example, one or more computing devices may receive an initialization request for a still image capture mode. After receiving the request to initialize the still image capture mode, the one or more computing devices may automatically begin to capture a video including a plurality of image frames. The one or more computing devices track features between a first image frame of the video and each of the other image frames of the video. Points corresponding to the tracked features may be generated by the one or more computing devices using a set of assumptions. The assumptions may include a first assumption that there is no rotation and a second assumption that there is no translation. The one or more computing devices then generate a depth map based at least in part on the points.
    Type: Grant
    Filed: May 15, 2014
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventors: David Gallup, Fu Yu, Steven Maxwell Seitz
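A rough sketch of the kind of pipeline the abstract above outlines, not the patented method: features are tracked from the first frame into a later frame, and, assuming pure camera translation (no rotation) with a known baseline, each feature's parallax is converted to a depth estimate (Z ≈ f · baseline / disparity). The OpenCV tracker, focal length, and baseline below are stand-in assumptions.

```python
import cv2
import numpy as np

def sparse_depth(frame0_gray, frame1_gray, focal_px=1000.0, baseline_m=0.02):
    """frame*_gray: uint8 grayscale frames. Returns tracked points and depths."""
    pts0 = cv2.goodFeaturesToTrack(frame0_gray, 200, 0.01, 7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(frame0_gray, frame1_gray, pts0, None)
    ok = status.ravel() == 1
    p0 = pts0[ok].reshape(-1, 2)
    p1 = pts1[ok].reshape(-1, 2)
    disparity = np.maximum(np.linalg.norm(p1 - p0, axis=1), 1e-3)  # pixels of parallax
    depth = focal_px * baseline_m / disparity                      # metres, up to scale
    return p0, depth

# A dense depth map could then be interpolated from these sparse estimates.
```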
  • Patent number: 10397472
    Abstract: Aspects of the disclosure relate to capturing panoramic images using a computing device. For example, the computing device may record a set of video frames, and tracking features, each including one or more features that appear in two or more video frames within the set of video frames, may be determined. A set of frame-based features based on the displacement of the tracking features between two or more video frames of the set may be determined by the computing device. A set of historical feature values based on the set of frame-based features may also be determined by the computing device. The computing device may then determine whether a user is attempting to capture a panoramic image based on the set of historical feature values. In response, the computing device may capture a panoramic image.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: August 27, 2019
    Assignee: Google LLC
    Inventors: Alexandros Andre Chaaraoui, Carlos Hernandez Esteban, Li Zhang, Steven Maxwell Seitz
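A toy sketch of the idea in the abstract above, not the patented classifier: per-frame displacement statistics of tracked features are accumulated as historical feature values, and a simple hand-tuned rule decides whether the motion looks like a deliberate horizontal sweep, i.e., a panorama attempt. The thresholds and window size are illustrative assumptions.

```python
import numpy as np

class PanoramaIntentDetector:
    def __init__(self, window=30):
        self.history = []          # historical frame-based feature values
        self.window = window

    def update(self, displacements):
        """displacements: (N, 2) array of per-feature (dx, dy) between two frames."""
        dx, dy = displacements.mean(axis=0)
        self.history.append((dx, dy))
        self.history = self.history[-self.window:]
        return self.is_panorama_attempt()

    def is_panorama_attempt(self):
        if len(self.history) < self.window:
            return False
        h = np.asarray(self.history)
        steady_sweep = abs(h[:, 0].mean()) > 2.0          # consistent horizontal motion
        little_drift = np.abs(h[:, 1]).mean() < 1.0       # little vertical motion
        same_direction = np.all(np.sign(h[:, 0]) == np.sign(h[0, 0]))
        return bool(steady_sweep and little_drift and same_direction)
```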
  • Patent number: 10334165
    Abstract: Systems and methods for capturing omnistereo content for a mobile device may include receiving an indication to capture a plurality of images of a scene; capturing the plurality of images using a camera associated with the mobile device; displaying on a screen of the mobile device, during capture, a representation of the plurality of images; and presenting a composite image that includes a target capture path and an indicator that provides alignment information corresponding to a source capture path associated with the mobile device during capture of the plurality of images. The system may detect that a portion of the source capture path does not match the target capture path. The system can provide an updated indicator on the screen that may include a prompt to a user of the mobile device to adjust the mobile device to align the source capture path with the target capture path.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: June 25, 2019
    Assignee: Google LLC
    Inventors: Robert Anderson, Steven Maxwell Seitz, Carlos Hernandez Esteban
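A minimal sketch of the path-alignment feedback described in the abstract above, assuming a circular target capture path: the device's current position on its tracked source path is compared with the target circle, and a prompt is returned when the deviation exceeds a tolerance. The radius, tolerance, and prompt strings are assumptions for illustration.

```python
import numpy as np

def alignment_prompt(device_xy, target_radius=0.35, tolerance=0.05):
    """device_xy: (x, y) of the camera in metres, target circle centred at the origin."""
    distance = float(np.hypot(*device_xy))
    error = distance - target_radius
    if abs(error) <= tolerance:
        return "on path"
    return "move camera inward" if error > 0 else "move camera outward"

print(alignment_prompt((0.42, 0.05)))   # -> "move camera inward"
```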
  • Patent number: 10244226
    Abstract: Systems and methods relate to a camera rig and to generating stereoscopic panoramas from captured images for display in a virtual reality (VR) environment.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: March 26, 2019
    Assignee: Google LLC
    Inventors: Joshua Weaver, Robert Anderson, Changchang Wu, Michael Krainin, David Gallup, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Thomas Valente, Christopher Edward Hoover, Erik Hubert Dolly Goossens
  • Patent number: 10038887
    Abstract: Systems and methods are described for defining a set of images based on captured images, receiving a viewing direction associated with a user of a virtual reality (VR) head mounted display, and receiving an indication of a change in the viewing direction. The methods further include configuring a re-projection of a portion of the set of images, the re-projection based at least in part on the changed viewing direction and a field of view associated with the captured images; converting the portion from a spherical perspective projection into a planar perspective projection; rendering, by the computing device and for display in the VR head mounted display, an updated view based on the re-projection, the updated view configured to correct distortion and provide stereo parallax in the portion; and providing, to the head mounted display, the updated view including a stereo panoramic scene corresponding to the changed viewing direction.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: July 31, 2018
    Assignee: Google LLC
    Inventors: David Gallup, Robert Anderson, Carlos Hernandez Esteban, Steven Maxwell Seitz, Riley Adams
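A compact sketch of the spherical-to-planar re-projection step described in the abstract above, not the patented renderer: for a given viewing direction and field of view, each pixel of the planar view is turned into a ray, converted to longitude/latitude, and sampled (nearest neighbour) from an equirectangular panorama. The output size and field of view are example values.

```python
import numpy as np

def planar_view(equirect, yaw, pitch, fov_deg=90.0, out_hw=(480, 640)):
    H, W = out_hw
    f = 0.5 * W / np.tan(np.radians(fov_deg) / 2.0)
    x = np.arange(W) - W / 2.0
    y = np.arange(H) - H / 2.0
    xv, yv = np.meshgrid(x, y)
    rays = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # rotate rays by yaw (around y axis) then pitch (around x axis)
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    d = rays @ (Ry @ Rx).T

    lon = np.arctan2(d[..., 0], d[..., 2])         # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))     # [-pi/2, pi/2]
    eh, ew = equirect.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * (ew - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (eh - 1)).astype(int)
    return equirect[v, u]

pano = np.random.randint(0, 255, (512, 1024, 3), dtype=np.uint8)  # stand-in panorama
print(planar_view(pano, yaw=0.3, pitch=0.0).shape)                # (480, 640, 3)
```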
  • Publication number: 20180183997
    Abstract: Aspects of the disclosure relate to capturing panoramic images using a computing device. For example, the computing device may record a set of video frames, and tracking features, each including one or more features that appear in two or more video frames within the set of video frames, may be determined. A set of frame-based features based on the displacement of the tracking features between two or more video frames of the set may be determined by the computing device. A set of historical feature values based on the set of frame-based features may also be determined by the computing device. The computing device may then determine whether a user is attempting to capture a panoramic image based on the set of historical feature values. In response, the computing device may capture a panoramic image.
    Type: Application
    Filed: February 21, 2018
    Publication date: June 28, 2018
    Inventors: Alexandros Andre Chaaraoui, Carlos Hernandez Esteban, Li Zhang, Steven Maxwell Seitz
  • Patent number: 9936128
    Abstract: Aspects of the disclosure relate to capturing panoramic images using a computing device. For example, the computing device may record a set of video frames, and tracking features, each including one or more features that appear in two or more video frames within the set of video frames, may be determined. A set of frame-based features based on the displacement of the tracking features between two or more video frames of the set may be determined by the computing device. A set of historical feature values based on the set of frame-based features may also be determined by the computing device. The computing device may then determine whether a user is attempting to capture a panoramic image based on the set of historical feature values. In response, the computing device may capture a panoramic image.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: April 3, 2018
    Assignee: Google LLC
    Inventors: Alexandros Andre Chaaraoui, Carlos Hernandez Esteban, Li Zhang, Steven Maxwell Seitz
  • Publication number: 20180048816
    Abstract: Systems and methods for capturing omnistereo content for a mobile device may include receiving an indication to capture a plurality of images of a scene; capturing the plurality of images using a camera associated with the mobile device; displaying on a screen of the mobile device, during capture, a representation of the plurality of images; and presenting a composite image that includes a target capture path and an indicator that provides alignment information corresponding to a source capture path associated with the mobile device during capture of the plurality of images. The system may detect that a portion of the source capture path does not match the target capture path. The system can provide an updated indicator on the screen that may include a prompt to a user of the mobile device to adjust the mobile device to align the source capture path with the target capture path.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 15, 2018
    Inventors: Robert Anderson, Steven Maxwell Seitz, Carlos Hernandez Esteban
  • Patent number: 9888215
    Abstract: An indoor scene capture system is provided that, with a handheld device with a camera, collects videos of rooms, spatially indexes the frames of the videos, marks doorways between rooms, and collects videos of transitions from room to room via doorways. The indoor scene capture system may assign a direction to at least some of the frames based on the angle of rotation as determined by an inertial sensor (e.g., gyroscope) of the handheld device. The indoor scene capture system marks doorways within the frames of the videos. For each doorway between rooms, the indoor scene capture system collects a video of transitioning through the doorway as the camera moves from a point within a room, through the doorway, to a point within the adjoining room.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: February 6, 2018
    Assignee: University of Washington
    Inventors: Aditya Sankar, Steven Maxwell Seitz
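A small sketch of one detail in the abstract above: assigning a heading direction to each video frame by integrating the gyroscope's yaw-rate samples over the frame timestamps. The sample data, frame rate, and sign conventions are assumptions, not the patented method.

```python
import numpy as np

def frame_headings(yaw_rate_rad_s, timestamps_s, initial_heading=0.0):
    """Cumulative rotation angle (heading) per frame from gyro yaw-rate samples."""
    dt = np.diff(timestamps_s, prepend=timestamps_s[0])
    return initial_heading + np.cumsum(yaw_rate_rad_s * dt)

ts = np.arange(0, 2.0, 1 / 30)                      # 30 fps for two seconds
rates = np.full_like(ts, np.radians(45.0))          # steady 45 deg/s pan
print(np.degrees(frame_headings(rates, ts))[-1])    # ~88.5 deg after two seconds
```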
  • Patent number: 9813621
    Abstract: Systems and methods for capturing omnistereo content for a mobile device may include receiving an indication to capture a plurality of images of a scene; capturing the plurality of images using a camera associated with the mobile device; displaying on a screen of the mobile device, during capture, a representation of the plurality of images; and presenting a composite image that includes a target capture path and an indicator that provides alignment information corresponding to a source capture path associated with the mobile device during capture of the plurality of images. The system may detect that a portion of the source capture path does not match the target capture path. The system can provide an updated indicator on the screen that may include a prompt to a user of the mobile device to adjust the mobile device to align the source capture path with the target capture path.
    Type: Grant
    Filed: May 26, 2015
    Date of Patent: November 7, 2017
    Assignee: Google LLC
    Inventors: Robert Anderson, Steven Maxwell Seitz, Carlos Hernandez Esteban
  • Patent number: 9743019
    Abstract: Methods, systems, and articles of manufacture for generating a panoramic image of a long scene are disclosed. These include fitting a plurality of planes to 3D points associated with input images of portions of the long scene, where one or more respective planes are fitted to each of a ground surface, a dominant surface, and at least one of one or more foreground objects and one or more background objects in the long scene, and where distances from the 3D points to the fitted planes are substantially minimized.
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: August 22, 2017
    Assignee: Google Inc.
    Inventors: David Robert Gallup, Steven Maxwell Seitz, Maneesh Agrawala, Robert Evan Carroll
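A minimal sketch of the plane-fitting step the abstract above describes: fit a single plane to a cluster of 3D points by least squares (SVD of the centred points), which minimises the sum of squared point-to-plane distances. Assigning points to the ground, dominant, foreground, or background clusters is assumed to have happened elsewhere.

```python
import numpy as np

def fit_plane(points):
    """Return (normal, d) with normal . x + d = 0 fitted to an Nx3 point array."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal @ centroid

pts = np.random.default_rng(1).normal(size=(100, 3)) * [5.0, 5.0, 0.01]  # near z = 0
normal, d = fit_plane(pts)
print(np.round(np.abs(normal), 2))        # ~[0, 0, 1]
```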
  • Patent number: 9721347
    Abstract: An exemplary method includes prompting a user to capture video data at a location. The location is associated with navigation directions for the user. Information representing visual orientation and positioning information associated with the captured video data is received by one or more computing devices, and a stored data model representing a 3D geometry depicting objects associated with the location is accessed. Between corresponding images from the captured video data and projections of the 3D geometry, one or more candidate change regions are detected. Each candidate change region indicates an area of visual difference between the captured video data and the projections. When it is detected that a count of the one or more candidate change regions is below a threshold, the stored data model is updated with at least part of the captured video data based on the visual orientation and positioning information associated with the captured video data.
    Type: Grant
    Filed: October 27, 2015
    Date of Patent: August 1, 2017
    Assignee: Google Inc.
    Inventors: Andrew Lookingbill, Steven Maxwell Seitz
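A schematic sketch of the decision logic in the abstract above; the tile-based differencing, thresholds, and model representation are illustrative assumptions. Captured frames are compared with renderings of the stored 3D model, candidate change regions are counted, and the new imagery is only folded into the model when the change count stays small.

```python
import numpy as np

def candidate_change_regions(frame, projection, tile=32, diff_threshold=25.0):
    """Count image tiles whose mean absolute difference exceeds a threshold."""
    h, w = frame.shape[:2]
    count = 0
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            a = frame[y:y + tile, x:x + tile].astype(float)
            b = projection[y:y + tile, x:x + tile].astype(float)
            if np.abs(a - b).mean() > diff_threshold:
                count += 1
    return count

def maybe_update_model(frames, projections, model, max_changes=10):
    changes = sum(candidate_change_regions(f, p) for f, p in zip(frames, projections))
    if changes < max_changes:
        model.append(frames)              # stand-in for the real model update
    return changes
```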
  • Patent number: 9671938
    Abstract: Systems and methods for navigating an imagery graph are provided. In some aspects, a first image is provided for display, where the first image corresponds to a first image node within an imagery graph, where the imagery graph comprises image nodes corresponding to images from a plurality of different imagery types, and where each image node in the imagery graph is associated with geospatial data. An indication of a selection of a predetermined region within the first image is received, where the predetermined region is associated with a position in the first image that corresponds to geospatial data associated with a second image node within the imagery graph. A second image corresponding to the second image node is provided for display in response to the indication of the selection of the predetermined region.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: June 6, 2017
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Andrew Ofstad
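A toy data-structure sketch suggested by the abstract above, not the patented system: an imagery graph whose nodes carry an imagery type and geospatial data, plus a lookup that maps a selected region of the current image to the node it links to. The field names and coordinates are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ImageNode:
    image_id: str
    imagery_type: str                     # e.g. "aerial", "street-level", "user photo"
    lat: float
    lng: float
    # region (x0, y0, x1, y1) in this image -> id of the node it links to
    regions: dict = field(default_factory=dict)

def navigate(node, graph, click_xy):
    x, y = click_xy
    for (x0, y0, x1, y1), target_id in node.regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return graph[target_id]
    return node                           # no linked region under the selection

graph = {
    "aerial_1": ImageNode("aerial_1", "aerial", 47.62, -122.35,
                          {(100, 100, 200, 200): "street_7"}),
    "street_7": ImageNode("street_7", "street-level", 47.6201, -122.3502),
}
print(navigate(graph["aerial_1"], graph, (150, 150)).image_id)   # street_7
```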
  • Patent number: 9654761
    Abstract: Systems and methods for the generation of depth data for a scene using images captured by a camera-enabled mobile device are provided. According to a particular implementation of the present disclosure, a reference image can be captured of a scene with an image capture device, such as an image capture device integrated with a camera-enabled mobile device. A short video or sequence of images can then be captured from multiple different poses relative to the reference scene. The captured image and video can then be processed using computer vision techniques to produce an image with associated depth data, such as an RGBZ image.
    Type: Grant
    Filed: July 31, 2013
    Date of Patent: May 16, 2017
    Assignee: Google Inc.
    Inventors: Carlos Hernandez Esteban, Steven Maxwell Seitz, Sameer Agarwal, Simon Fuhrmann
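A minimal sketch of the output format mentioned in the abstract above: once multi-view processing has produced a per-pixel depth estimate for the reference image, RGB and depth can be packed into a single RGBZ array. The depth values here are placeholders; the multi-view stereo step itself is not shown.

```python
import numpy as np

def to_rgbz(rgb_uint8, depth_metres):
    """Stack an HxWx3 RGB image with an HxW depth map into an HxWx4 float array."""
    rgb = rgb_uint8.astype(np.float32) / 255.0
    z = depth_metres.astype(np.float32)[..., None]
    return np.concatenate([rgb, z], axis=-1)

rgb = np.zeros((240, 320, 3), dtype=np.uint8)
depth = np.full((240, 320), 2.5, dtype=np.float32)   # pretend everything is 2.5 m away
print(to_rgbz(rgb, depth).shape)                     # (240, 320, 4)
```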
  • Patent number: 9607243
    Abstract: Aspects of the disclosure relate to providing users with sequences of images of physical locations over time, or time-lapses. In order to do so, a set of images of a physical location may be identified. From the set of images, a representative image may be selected. The set may then be filtered by comparing the other images in the set to the representative image. The images in the filtered set may then be aligned to the representative image. From this set, a time-lapsed sequence of images may be generated, and the amount of change in the time-lapsed sequence of images may be determined. At the request of a user device for a time-lapsed image representation of a specified physical location, the generated time-lapsed sequence of images may be provided.
    Type: Grant
    Filed: January 7, 2015
    Date of Patent: March 28, 2017
    Assignee: Google Inc.
    Inventors: Ricardo Martin Brualla, David Robert Gallup, Steven Maxwell Seitz
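A simplified sketch of the selection and filtering steps in the abstract above; the geometric alignment and change measurement are only hinted at in comments, and the similarity measure and threshold are assumptions. The image closest to the per-pixel median is taken as the representative, images that differ from it too much are dropped, and the survivors are ordered by capture time.

```python
import numpy as np

def build_timelapse(images, timestamps, max_mean_diff=40.0):
    """images: same-size arrays; timestamps: capture times, one per image."""
    stack = np.stack([im.astype(np.float32) for im in images])
    median = np.median(stack, axis=0)
    # representative image: the one closest to the median of the set
    rep_idx = int(np.argmin([np.abs(im - median).mean() for im in stack]))
    rep = stack[rep_idx]
    keep = [i for i, im in enumerate(stack)
            if np.abs(im - rep).mean() <= max_mean_diff]    # filter outliers
    order = sorted(keep, key=lambda i: timestamps[i])
    # each kept image would then be geometrically aligned to the representative
    return [images[i] for i in order]
```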
  • Patent number: 9600932
    Abstract: An exemplary method for navigating among photos includes determining, using one or more computing devices, visual characteristics of a person depicted in a first image associated with a first location. These visual characteristics of the person are detected in a second image associated with a second location. Using the one or more computing devices, a series of intermediate images are identified based on the first location and the second location. Each intermediate image is associated with a location. The series of intermediate images and the second image are provided. Images of an intermediate destination from the series of intermediate images are selected based on a density of images at the intermediate destination. A 3D reconstruction of the intermediate destination is then generated based on the selected images. Thereafter, a visual presentation of images traversing through the 3D reconstruction of the intermediate destination to the second image is prepared for display.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: March 21, 2017
    Assignee: Google Inc.
    Inventor: Steven Maxwell Seitz
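A small sketch of one selection rule described in the abstract above; the person matching and 3D reconstruction steps are not shown, and the radius and minimum count are illustrative assumptions. An intermediate destination is kept for the visual transition only if enough photos cluster near it.

```python
import numpy as np

def images_at_destination(dest_latlng, photos, radius_deg=0.001, min_count=20):
    """photos: list of (lat, lng, image_id); returns ids if the spot is densely photographed."""
    d = np.array([(lat, lng) for lat, lng, _ in photos]) - np.asarray(dest_latlng)
    near = np.hypot(d[:, 0], d[:, 1]) <= radius_deg
    ids = [photos[i][2] for i in np.flatnonzero(near)]
    return ids if len(ids) >= min_count else []   # too sparse for a 3D reconstruction
```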
  • Patent number: 9531952
    Abstract: Aspects of this disclosure relate to generating a composite image from an image and another image that has a wider field of view. After an image is selected, the visual features in the image may be identified. Several images, such as panoramas, which have wider fields of view than an image captured by a camera, may be selected according to a comparison of the visual features in the image and the visual features of the larger images. The image may be aligned with each of the larger images, and at least one of these smaller-larger image pairs may be generated as a composite image.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: December 27, 2016
    Assignee: Google Inc.
    Inventors: Keith Noah Snavely, Pierre Georgel, Steven Maxwell Seitz
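A standard feature-matching sketch of the alignment step the abstract above outlines, not the patented method: ORB features are matched between the photo and a wider-field-of-view image, a homography is estimated with RANSAC, and the photo is warped into the larger image's frame to form the composite. The naive paste-over blend is an assumption for illustration.

```python
import cv2
import numpy as np

def composite(photo_gray, wide_gray):
    """photo_gray, wide_gray: uint8 grayscale images; returns a composited image."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(photo_gray, None)
    kp2, des2 = orb.detectAndCompute(wide_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(photo_gray, H, wide_gray.shape[1::-1])
    # naive blend: paste the warped photo over the wider image where it has content
    out = wide_gray.copy()
    out[warped > 0] = warped[warped > 0]
    return out
```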
  • Publication number: 20160352982
    Abstract: Systems and methods relate to a camera rig and to generating stereoscopic panoramas from captured images for display in a virtual reality (VR) environment.
    Type: Application
    Filed: May 27, 2016
    Publication date: December 1, 2016
    Inventors: Joshua Weaver, Robert Anderson, Changchang Wu, Michael Krainin, David Gallup, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Thomas Valente, Christopher Edward Hoover, Erik Hubert Dolly Goossens
  • Publication number: 20160353089
    Abstract: Systems and methods are described for defining a set of images based on captured images, receiving a viewing direction associated with a user of a virtual reality (VR) head mounted display, and receiving an indication of a change in the viewing direction. The methods further include configuring a re-projection of a portion of the set of images, the re-projection based at least in part on the changed viewing direction and a field of view associated with the captured images; converting the portion from a spherical perspective projection into a planar perspective projection; rendering, by the computing device and for display in the VR head mounted display, an updated view based on the re-projection, the updated view configured to correct distortion and provide stereo parallax in the portion; and providing, to the head mounted display, the updated view including a stereo panoramic scene corresponding to the changed viewing direction.
    Type: Application
    Filed: May 27, 2015
    Publication date: December 1, 2016
    Inventors: David Gallup, Robert Anderson, Carlos Hernandez Esteban, Steven Maxwell Seitz, Riley Adams