Patents by Inventor Steven Maxwell Seitz

Steven Maxwell Seitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150253859
    Abstract: A system and method is provided for detecting user manipulation of an inanimate object and interpreting that manipulation as input. In one aspect, the manipulation may be detected by an image capturing component of a computing device and interpreted as an instruction to execute a command, such as opening a drawing application in response to a user picking up a pen. The manipulation may also be detected with the aid of an audio capturing device, e.g., a microphone on the computing device.
    Type: Application
    Filed: March 5, 2014
    Publication date: September 10, 2015
    Applicant: GOOGLE INC.
    Inventors: Jonah Jones, Steven Maxwell Seitz
  • Patent number: 9118905
    Abstract: Methods, systems, and articles of manufacture for generating a panoramic image of a long scene are disclosed. These include fitting planes to 3D points associated with input images of portions of the long scene, where respective planes are fitted to a ground surface, a dominant surface, and at least one of foreground objects and background objects, and where distances from the 3D points to the fitted planes are minimized. These also include selecting, for respective pixels in the panoramic image, an input image and a fitted plane such that the distance from the selected fitted plane to a surface corresponding to the pixels is minimized and occlusion of the pixels is reduced in the selected input image, and stitching by projecting the selected input image, using the selected fitted plane, into a virtual camera.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: David Gallup, Steven Maxwell Seitz, Maneesh Agrawala, Robert Evan Carroll
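The plane-fitting step described in patent 9118905 above, which minimizes distances from 3D points to fitted planes, can be illustrated with a minimal sketch (an assumed least-squares formulation for illustration, not the claimed method): the best-fit plane through a point cloud passes through the centroid, with its normal along the direction of least variance.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: minimizes the sum of squared
    point-to-plane distances for an (N, 3) array of 3D points.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return centroid, normal

# Synthetic test: points scattered tightly around the plane z = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] *= 0.01  # flatten toward z = 0
centroid, normal = fit_plane(pts)
```

With the points nearly coplanar in z = 0, the recovered normal points (up to sign) along the z axis.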
  • Patent number: 9087405
    Abstract: In one aspect, one or more computing devices receive a set of image frames. Each image frame includes pixels. The computing devices align image frames in order to identify flows of the pixels in the set of image frames. Regions of bokeh effect are identified in each image frame by measuring the sizes of areas of expansion across image frames using a set of assumptions and the identified flows. The computing devices adjust the alignment of the set of image frames based at least in part on the identified regions of bokeh effect. For each image frame, the computing devices generate an index map of focus values for each of the pixels of that image frame using the improved alignment. A depth map is generated by the computing devices based at least in part on the index maps.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: July 21, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Supasorn Suwajanakorn
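The index map of focus values in patent 9087405 above can be sketched with a common depth-from-focus heuristic (an assumption for illustration; the patent's alignment and bokeh analysis are not reproduced): score each pixel's sharpness in every frame with a Laplacian response, then record, per pixel, the frame index where sharpness peaks.

```python
import numpy as np

def focus_measure(img):
    """Per-pixel focus value: squared response of a discrete
    Laplacian (sharp, in-focus regions respond strongly)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def index_map(stack):
    """For a focal stack of shape (frames, H, W), return, per pixel,
    the index of the frame in which that pixel is sharpest."""
    measures = np.stack([focus_measure(f) for f in stack])
    return measures.argmax(axis=0)

# Tiny synthetic stack: frame 0 has a sharp step edge, frame 1 is flat.
sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0
blurry = np.full((8, 8), 0.5)
idx = index_map(np.stack([sharp, blurry]))
```

Along the edge columns the Laplacian response of frame 0 dominates, so the index map records frame 0 there; a depth map would then be read off the index map via the known focus distance of each frame.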
  • Publication number: 20150178926
    Abstract: A system and method is provided for determining whether images of a geographic location identify features with characteristics consistent with shadows cast by people, and using such determination to annotate map information. If such features are identified at the location, the map may be annotated to indicate that the location is frequented by pedestrians.
    Type: Application
    Filed: December 19, 2013
    Publication date: June 25, 2015
    Applicant: Google Inc.
    Inventors: Jonah Jones, Steven Maxwell Seitz
  • Publication number: 20150170400
    Abstract: In one aspect, one or more computing devices receive a set of image frames. Each image frame includes pixels. The computing devices align image frames in order to identify flows of the pixels in the set of image frames. Regions of bokeh effect are identified in each image frame by measuring the sizes of areas of expansion across image frames using a set of assumptions and the identified flows. The computing devices adjust the alignment of the set of image frames based at least in part on the identified regions of bokeh effect. For each image frame, the computing devices generate an index map of focus values for each of the pixels of that image frame using the improved alignment. A depth map is generated by the computing devices based at least in part on the index maps.
    Type: Application
    Filed: December 16, 2013
    Publication date: June 18, 2015
    Applicant: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Supasorn Suwajanakorn
  • Publication number: 20150154793
    Abstract: Systems and methods for browsing images of points of interest (POIs) are provided. An indication of a selection of a POI from among multiple POIs is received. An image graph associated with the POI is identified; the image graph includes multiple images of the POI. A tour path for the POI is defined within the image graph. A specific image from the tour path is provided for display. A previous image and a next image relative to the specific image along the tour path are determined. A first set of additional images from the image graph is determined based on the specific image, corresponding to a set of images in the image graph proximate to the specific image. A link to the previous image or the next image, and a link to each member of the first set of additional images, are provided for display with the specific image.
    Type: Application
    Filed: July 16, 2013
    Publication date: June 4, 2015
    Inventors: Andrew Ofstad, Steven Maxwell Seitz
  • Publication number: 20150156415
    Abstract: Methods, systems, and articles of manufacture for generating a panoramic image of a long scene are disclosed. These include fitting planes to 3D points associated with input images of portions of the long scene, where respective planes are fitted to a ground surface, a dominant surface, and at least one of foreground objects and background objects, and where distances from the 3D points to the fitted planes are minimized. These also include selecting, for respective pixels in the panoramic image, an input image and a fitted plane such that the distance from the selected fitted plane to a surface corresponding to the pixels is minimized and occlusion of the pixels is reduced in the selected input image, and stitching by projecting the selected input image, using the selected fitted plane, into a virtual camera.
    Type: Application
    Filed: July 6, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: David GALLUP, Steven Maxwell SEITZ, Maneesh AGRAWALA, Robert Evan CARROLL
  • Publication number: 20150154798
    Abstract: A computer-implemented method, system, and computer-readable medium for transitioning between images in a three-dimensional space are provided. A three-dimensional (3D) model of a location is provided that includes multiple two-dimensional images textured onto the 3D model, where each image is associated with a set of credentials. A virtual path is generated within the 3D model, the virtual path including at least some of the images in sequential order. For each image in the virtual path, a type of transition within the image and between adjacent images in the sequential order is determined. The type of transition is based on the set of credentials associated with each image. The virtual path is traversed using a virtual camera, which activates the transitions associated with each image; the transitions ensure that movement within the 3D model appears constant to the user.
    Type: Application
    Filed: April 27, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: Matthew Robert SIMPSON, Jonah Jones, Yasutaka Furukawa, Steven Maxwell Seitz, Andrew Ofstad
  • Patent number: 9047692
    Abstract: Systems, methods, and computer storage mediums are provided for creating a scene scan from a group of photographic images. An exemplary method includes determining a set of common features for at least one pair of photographic images. The features include a portion of an object captured in each of a first and a second photographic image included in the pair. The first and second photographic images may be captured from different optical centers. A similarity transform for the at least one pair of photographic images is then determined. The similarity transform is provided in order to render the scene scan from each pair of photographic images. At least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position each pair of photographic images such that the set of common features between the pair, at least in part, align.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: June 2, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Rahul Garg
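The similarity transform in patent 9047692 above (rotation, scaling, and translation factors aligning common features between a pair of images) is commonly estimated in closed form from matched points; the sketch below uses the standard Umeyama least-squares solution as an assumed stand-in, not the patented procedure.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 2D similarity (scale s, rotation R, translation t)
    mapping src points onto dst: dst ≈ s * R @ src + t.
    src, dst: arrays of shape (N, 2). Umeyama-style closed form."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    u, d, vt = np.linalg.svd(cov)
    sgn = np.sign(np.linalg.det(u @ vt))      # guard against reflection
    S = np.diag([1.0, sgn])
    R = u @ S @ vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(d) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known transform: rotate 30 degrees, scale 2, translate (1, -3).
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
dst = 2.0 * src @ R_true.T + np.array([1., -3.])
s, R, t = similarity_transform(src, dst)
```

With noiseless correspondences the factors are recovered exactly; with noisy feature matches the same closed form gives the least-squares alignment used to position each image pair.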
  • Patent number: 9046996
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. A drag vector for user input relative to the reference image is determined. For each particular image of the set of target images, an associated cost is determined based at least in part on a cost function and the drag vector. A target image is selected based on the determined associated costs.
    Type: Grant
    Filed: October 17, 2013
    Date of Patent: June 2, 2015
    Assignee: Google Inc.
    Inventors: David Gallup, Liyong Chen, Shuchang Zhou, Steven Maxwell Seitz
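Patent 9046996 above leaves the cost function unspecified; one plausible sketch (hypothetical, for illustration only) scores each target image by how well the camera motion it represents matches the user's drag vector, then selects the minimum-cost target.

```python
import math

def select_target(drag, targets):
    """Pick the target image whose associated motion direction best
    matches the user's drag vector. `targets` maps an image id to the
    (dx, dy) motion that transitioning to the image represents.
    Cost: 1 - cosine similarity between drag and motion (a
    hypothetical cost function for illustration)."""
    def cost(motion):
        dot = drag[0] * motion[0] + drag[1] * motion[1]
        norm = math.hypot(*drag) * math.hypot(*motion)
        return 1.0 - dot / norm if norm else float("inf")
    return min(targets, key=lambda k: cost(targets[k]))

# Dragging right should select the image reached by moving right.
choice = select_target((1.0, 0.0),
                       {"left.jpg": (-1.0, 0.0),
                        "right.jpg": (1.0, 0.1),
                        "up.jpg": (0.0, 1.0)})
```

A production cost function would likely also weigh drag magnitude, image overlap, and viewing angle; the argmin-over-costs structure stays the same.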
  • Patent number: 9036921
    Abstract: Systems, methods and articles of manufacture for generating sequences of face and expression aligned images are presented. An embodiment includes determining a plurality of candidate images, computing a similarity distance between an input image and each of the candidate images based on facial features in the input image and the candidate images, comparing the computed similarity distances, selecting a candidate image based on the comparing, and adding the selected candidate image to an image sequence for real-time display. Embodiments select images from the image sequence as they are being added to the image sequence and scale, rotate and translate each image so that a face appearing in a selected image is aligned with a face appearing in a subsequently selected image from the image sequence. In this way, embodiments are able to render arbitrarily large image collections efficiently and in real time to display a face and expression aligned movie.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Rahul Garg, Irena Kemelmaher
  • Patent number: 9019279
    Abstract: System and method for rendering a sequence of orthographic approximation images corresponding to camera poses to generate an animation moving between an initial view and a final view of a target area are provided. An initial image corresponding to an initial camera pose directed at the target area is identified. A final image and an associated depthmap corresponding to a final camera pose directed at the target area are further identified. A plurality of intermediate images corresponding to a plurality of camera poses directed at the target area is produced by performing interpolation on the initial image, the final image, and the associated depthmap. Each intermediate image is associated with a point along a navigational path between the initial camera pose and the final camera pose. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Jeffrey Thomas Prouty, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Robert Simpson
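The interpolation between initial and final camera poses in patent 9019279 above can be sketched in its simplest form (positions only, an assumed simplification; a full renderer would also interpolate orientation and warp the images using the depthmap):

```python
import numpy as np

def interpolate_poses(initial, final, steps):
    """Linearly interpolate camera positions between an initial and a
    final pose, yielding the points along a navigational path that the
    intermediate images are associated with. Poses here are plain 3D
    positions, a simplification for illustration."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * np.asarray(initial) + t * np.asarray(final)
            for t in ts]

# Five poses from a high, distant view down to a close final view.
path = interpolate_poses([0.0, 0.0, 10.0], [4.0, 0.0, 2.0], steps=5)
```

Rendering one intermediate image per path point and playing them in order produces the animated transition the abstract describes.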
  • Publication number: 20150113474
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. A drag vector for user input relative to the reference image is determined. For each particular image of the set of target images, an associated cost is determined based at least in part on a cost function and the drag vector. A target image is selected based on the determined associated costs.
    Type: Application
    Filed: October 17, 2013
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Liyong Chen, Shuchang Zhou, Steven Maxwell Seitz
  • Publication number: 20150109416
    Abstract: Aspects of the disclosure relate generally to generating depth data from a video. As an example, one or more computing devices may receive an initialization request for a still image capture mode. After receiving the request to initialize the still image capture mode, the one or more computing devices may automatically begin to capture a video including a plurality of image frames. The one or more computing devices track features between a first image frame of the video and each of the other image frames of the video. Points corresponding to the tracked features may be generated by the one or more computing devices using a set of assumptions. The assumptions may include a first assumption that there is no rotation and a second assumption that there is no translation. The one or more computing devices then generate a depth map based at least in part on the points.
    Type: Application
    Filed: May 15, 2014
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Fu Yu, Steven Maxwell Seitz
  • Publication number: 20150109328
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. An area of the reference image is identified. For each particular image of the set of potential target images, an associated cost for the identified area is determined based at least in part on a cost function for transitioning between the reference image and the particular target image. A target image is selected for association with the identified area based on the determined associated costs.
    Type: Application
    Filed: October 17, 2013
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Steven Maxwell Seitz
  • Publication number: 20150095831
    Abstract: Systems and methods for navigating an imagery graph are provided. In some aspects, a first image is provided for display, where the first image corresponds to a first image node within an imagery graph, where the imagery graph comprises image nodes corresponding to images from a plurality of different imagery types, and where each image node in the imagery graph is associated with geospatial data. An indication of a selection of a predetermined region within the first image is received, where the predetermined region is associated with a position in the first image that corresponds to geospatial data associated with a second image node within the imagery graph. A second image corresponding to the second image node is provided for display in response to the indication of the selection of the predetermined region.
    Type: Application
    Filed: December 9, 2014
    Publication date: April 2, 2015
    Inventors: Steven Maxwell Seitz, Andrew Ofstad
  • Patent number: 8994725
    Abstract: System and methods for generating a model of an environment are provided. In some aspects, a system includes a layer module configured to identify one or more layers of the environment based on a plurality of three-dimensional (3D) points mapping the environment. The system also includes a layout module configured to generate a layout for each layer. Each layout includes a two-dimensional (2D) model of the environment. The system also includes a construction module configured to generate a 3D model of the environment based on the 2D model of each layout.
    Type: Grant
    Filed: March 7, 2012
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Yasutaka Furukawa, Steven Maxwell Seitz, Jianxiong Xiao, Carlos Hernandez Esteban, David Robert Gallup
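One simple way to realize the layer-identification step of patent 8994725 above (an assumed heuristic for illustration, not the claimed method) is to histogram the heights of the 3D points and treat well-populated height bands as layers, each projecting to a 2D layout:

```python
import numpy as np

def identify_layers(points, bin_size=0.5, min_count=50):
    """Group 3D points of shape (N, 3) into horizontal layers by
    histogramming their z coordinates; bins with enough points become
    layers, each returned as the (x, y) layout of its points."""
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    counts, _ = np.histogram(z, bins=edges)
    layers = []
    for i, c in enumerate(counts):
        if c >= min_count:
            mask = (z >= edges[i]) & (z < edges[i + 1])
            layers.append(points[mask, :2])  # 2D layout of this layer
    return layers

# Two synthetic floors of a building, at heights z = 0 and z = 3.
rng = np.random.default_rng(1)
floor1 = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 5, 300),
                          rng.normal(0.0, 0.02, 300)])
floor2 = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 5, 300),
                          rng.normal(3.0, 0.02, 300)])
layers = identify_layers(np.vstack([floor1, floor2]))
```

Each recovered layer's 2D points would then feed the layout module, and extruding the 2D layouts back to their heights reconstructs the 3D model.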
  • Patent number: 8994738
    Abstract: System and method for rendering a sequence of images corresponding to a sequence of camera poses of a target area to generate an animation representative of a progression of camera poses are provided. An initial image and an associated initial depthmap of a target area captured from an initial camera pose, and a final image and an associated final depthmap of the target area captured from a final camera pose, are identified. A plurality of intermediate images representing a plurality of intermediate camera poses directed at the target area is produced by performing interpolation on the initial image, the initial depthmap, the final image, and the final depthmap. Each intermediate image is associated with a point along the navigational path between the initial and the final camera poses. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: March 31, 2015
    Assignee: Google Inc.
    Inventors: Carlos Hernandez Esteban, Steven Maxwell Seitz, Matthew Robert Simpson
  • Patent number: 8928666
    Abstract: Systems and methods for navigating an imagery graph are provided. In some aspects, a first image is provided for display, where the first image corresponds to a first image node within an imagery graph, where the imagery graph comprises image nodes corresponding to images from a plurality of different imagery types, and where each image node in the imagery graph is associated with geospatial data. An indication of a selection of a predetermined region within the first image is received, where the predetermined region is associated with a position in the first image that corresponds to geospatial data associated with a second image node within the imagery graph. A second image corresponding to the second image node is provided for display in response to the indication of the selection of the predetermined region.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: January 6, 2015
    Assignee: Google Inc.
    Inventors: Andrew Ofstad, Steven Maxwell Seitz
  • Patent number: 8880535
    Abstract: A system and machine-implemented method for providing one or more photos associated with a point of interest on a map, the method including receiving an indication of a request from a user to view photos associated with a point of interest on a map, identifying a set of photos associated with the point of interest, wherein the photos comprise at least one of photos taken from the point of interest or photos that depict at least part of the point of interest, ranking the photos within the set of photos according to ranking criteria, wherein the ranking criteria comprises one or more of map context, photo quality, photo type or user request information and providing one or more photos of the set of photos to the user according to the ranking.
    Type: Grant
    Filed: November 29, 2011
    Date of Patent: November 4, 2014
    Assignee: Google Inc.
    Inventors: Sameer Agarwal, Steven Maxwell Seitz, David Robert Gallup
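The ranking criteria in patent 8880535 above (map context, photo quality, photo type, and user request information) suggest a weighted score; the weights and field names below are hypothetical, since the patent does not specify a formula.

```python
def rank_photos(photos, weights=None):
    """Rank photos for a point of interest by a weighted sum of the
    criteria named in the abstract. Each criterion is assumed to be
    pre-scored in [0, 1]; the weights are illustrative defaults."""
    weights = weights or {"map_context": 0.3, "quality": 0.4,
                          "type_match": 0.2, "user_request": 0.1}
    def score(photo):
        return sum(weights[k] * photo[k] for k in weights)
    return sorted(photos, key=score, reverse=True)

photos = [
    {"id": "a.jpg", "map_context": 0.9, "quality": 0.5,
     "type_match": 0.2, "user_request": 0.0},
    {"id": "b.jpg", "map_context": 0.4, "quality": 0.9,
     "type_match": 0.8, "user_request": 1.0},
]
ranked = rank_photos(photos)
```

Here "b.jpg" outranks "a.jpg" (0.74 vs. 0.51 under the illustrative weights); the top-ranked photos would then be provided to the user.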