Patents by Inventor Steven Maxwell

Steven Maxwell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9224206
    Abstract: An exemplary method includes prompting a user to capture video data at a location. The location is associated with navigation directions for the user. Information representing visual orientation and positioning information associated with the captured video data is received by one or more computing devices, and a stored data model representing a 3D geometry depicting objects associated with the location is accessed. One or more candidate change regions are detected between corresponding images from the captured video data and projections of the 3D geometry; each candidate change region indicates an area of visual difference between the captured video data and the projections. When the count of candidate change regions is detected to be below a threshold, the stored model data is updated with at least part of the captured video data, based on the visual orientation and positioning information associated with the captured video data.
    Type: Grant
    Filed: July 24, 2014
    Date of Patent: December 29, 2015
    Assignee: Google Inc.
    Inventors: Andrew Lookingbill, Steven Maxwell Seitz
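The change-detection step in this abstract can be sketched roughly as follows: compare corresponding image regions, count those whose visual difference exceeds a threshold, and accept the captured data into the model only when that count stays low. All names and thresholds here are illustrative, not taken from the patent.

```python
import numpy as np

def candidate_change_regions(captured, projected, block=4, diff_thresh=0.2):
    """Split corresponding images into blocks and flag blocks whose mean
    absolute difference exceeds diff_thresh as candidate change regions."""
    h, w = captured.shape
    regions = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = captured[y:y + block, x:x + block]
            b = projected[y:y + block, x:x + block]
            if np.abs(a - b).mean() > diff_thresh:
                regions.append((y, x))
    return regions

def maybe_update_model(model, captured, projected, count_thresh=3):
    """Update the stored model with captured data only when the number of
    candidate change regions is below the threshold."""
    regions = candidate_change_regions(captured, projected)
    if len(regions) < count_thresh:
        return captured.copy(), True   # accept the new data
    return model, False                # too much change; keep the old model

# A capture that differs from the model's projection in one small area.
projected = np.zeros((8, 8))
captured = np.zeros((8, 8))
captured[:4, :4] = 1.0  # one changed block region
model, updated = maybe_update_model(projected, captured, projected)
```

A real system would detect connected regions in registered imagery rather than fixed blocks; the block grid keeps the sketch self-contained.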
  • Patent number: 9202307
    Abstract: Systems and methods for browsing images of points of interest (POIs) are provided. An indication of a selection of a POI from among multiple POIs is received. An image graph associated with the POI is identified; the image graph includes multiple images of the POI. A tour path for the POI is defined within the image graph. A specific image from the tour path is provided for display. The previous image and the next image relative to the specific image along the tour path are determined. A first set of additional images from the image graph is determined based on the specific image; this set corresponds to images in the image graph proximate to the specific image. A link to the previous or next image, and a link to each member of the first set of additional images, are provided for display with the specific image.
    Type: Grant
    Filed: July 16, 2013
    Date of Patent: December 1, 2015
    Assignee: Google Inc.
    Inventors: Andrew Ofstad, Steven Maxwell Seitz
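The navigation links described in this abstract (previous/next along a tour path, plus proximate images from the image graph) can be sketched as below. The data structures and names are illustrative assumptions, not the patent's own.

```python
def tour_links(tour_path, graph, current):
    """Return (previous, next, nearby) links for `current` on the tour path.

    tour_path: ordered list of image ids defining the tour.
    graph: dict mapping image id -> set of proximate image ids.
    """
    i = tour_path.index(current)
    prev_img = tour_path[i - 1] if i > 0 else None
    next_img = tour_path[i + 1] if i < len(tour_path) - 1 else None
    # Additional images: graph neighbours not already linked as prev/next.
    nearby = sorted(graph.get(current, set()) - {prev_img, next_img, current})
    return prev_img, next_img, nearby

graph = {"b": {"a", "c", "x", "y"}}
prev_img, next_img, nearby = tour_links(["a", "b", "c"], graph, "b")
```

The set difference keeps the "additional images" list from duplicating the tour links, matching the abstract's distinction between tour-path links and proximate images.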
  • Publication number: 20150332494
    Abstract: Systems and methods for generating an image tour are provided. The method includes receiving a sequence of images, where each image has an associated depth map and is characterized by a plurality of parameters. For each parameter, the method interpolates a flow path containing points corresponding to each image; each flow path relates a parameterization to a parameter. The method identifies rendering artifacts based on the sequence of images, the associated depth maps, and the interpolated flow paths, and identifies slow segments and fast segments in each interpolated flow path based on the identified artifacts. The method determines start and stop times for each slow segment, then interpolates a time curve based on those start and stop times and a fixed duration of time for each fast segment. The time curve relates time to the parameterization.
    Type: Application
    Filed: August 23, 2012
    Publication date: November 19, 2015
    Applicant: GOOGLE INC.
    Inventors: Yasutaka Furukawa, Carlos Hernandez Esteban, Steven Maxwell Seitz
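The time curve in this abstract maps time to the tour's parameterization so slow segments last longer than fast ones. A minimal piecewise-linear sketch, with assumed durations and an assumed segment representation:

```python
import numpy as np

def build_time_curve(segments, slow_dur=2.0, fast_dur=0.5):
    """Piecewise-linear time curve: each slow segment spans slow_dur
    seconds, each fast segment a fixed fast_dur, so the tour lingers
    where it should move slowly. segments: (param_start, param_stop, is_slow)."""
    times, params = [0.0], [segments[0][0]]
    for _start, stop, slow in segments:
        times.append(times[-1] + (slow_dur if slow else fast_dur))
        params.append(stop)
    return times, params

def param_at(t, times, params):
    """Evaluate the time curve: parameterization at time t."""
    return float(np.interp(t, times, params))

# One fast segment (parameter 0 -> 0.5) then one slow segment (0.5 -> 1.0).
times, params = build_time_curve([(0.0, 0.5, False), (0.5, 1.0, True)])
```

The patent interpolates the curve from determined start/stop times; this sketch simply assigns fixed durations to show the time-to-parameter mapping.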
  • Patent number: 9142021
    Abstract: Systems and methods for aligning ground based images of a geographic area taken from a perspective at or near ground level and a set of aerial images taken from, for instance, an oblique perspective, are provided. More specifically, candidate aerial imagery can be identified for alignment with the ground based image. Geometric data associated with the ground based image can be obtained and used to warp the ground based image to a perspective associated with the candidate aerial imagery. One or more feature matches between the warped image and the candidate aerial imagery can then be identified using a feature matching technique. The matched features can be used to align the ground based image with the candidate aerial imagery.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: September 22, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Qi Shan
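The pipeline in this abstract warps the ground-based image toward the aerial perspective, then matches features between the warped image and the aerial candidate. A toy numpy sketch of both steps, with an assumed planar homography standing in for the geometric warp and nearest-neighbour matching standing in for the feature matching technique:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography (derived from geometric data) to 2D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

def match_features(desc_a, desc_b):
    """Nearest-neighbour descriptor matching: list of index pairs (i, j)."""
    matches = []
    for i, d in enumerate(desc_a):
        j = int(np.argmin(np.linalg.norm(desc_b - d, axis=1)))
        matches.append((i, j))
    return matches

# Scale-by-2 plus translation as a stand-in geometric warp.
H = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, -1.0], [0.0, 0.0, 1.0]])
pts = np.array([[1.0, 1.0], [2.0, 3.0]])
warped = warp_points(H, pts)

# Tiny descriptors for the warped ground image and the aerial candidate.
desc_ground = np.array([[0.0, 1.0], [1.0, 0.0]])
desc_aerial = np.array([[1.0, 0.1], [0.1, 1.0]])
matches = match_features(desc_ground, desc_aerial)
```

The matched pairs would then feed an alignment solve; production systems use robust descriptors and outlier rejection rather than raw nearest neighbours.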
  • Publication number: 20150254042
    Abstract: An exemplary method for navigating among photos includes determining, using one or more computing devices, visual characteristics of a person depicted in a first image associated with a first location. These visual characteristics of the person are detected in a second image associated with a second location. Using the one or more computing devices, a series of intermediate images are identified based on the first location and the second location. Each intermediate image is associated with a location. The series of intermediate images and the second image are provided. Images of an intermediate destination from the series of intermediate images are selected based on a density of images at the intermediate destination. A 3D reconstruction of the intermediate destination is then generated based on the selected images. Thereafter, a visual presentation of images traversing through the 3D reconstruction of the intermediate destination to the second image is prepared for display.
    Type: Application
    Filed: March 10, 2014
    Publication date: September 10, 2015
    Applicant: GOOGLE INC.
    Inventor: Steven Maxwell Seitz
  • Publication number: 20150253859
    Abstract: A system and method is provided of detecting user manipulation of an inanimate object and interpreting that manipulation as input. In one aspect, the manipulation may be detected by an image capturing component of a computing device, and the manipulation is interpreted as an instruction to execute a command, such as opening up a drawing application in response to a user picking up a pen. The manipulation may also be detected with the aid of an audio capturing device, e.g., a microphone on the computing device.
    Type: Application
    Filed: March 5, 2014
    Publication date: September 10, 2015
    Applicant: GOOGLE INC.
    Inventors: Jonah Jones, Steven Maxwell Seitz
  • Patent number: 9118905
    Abstract: Methods, systems, and articles of manufacture for generating a panoramic image of a long scene are disclosed. These include fitting planes to 3D points associated with input images of portions of the long scene, where respective planes are fitted to a ground surface, a dominant surface, and at least one of foreground objects and background objects, and where distances from the 3D points to the fitted planes are minimized. These also include selecting, for respective pixels in the panoramic image, an input image and a fitted plane such that the distance from the selected fitted plane to a surface corresponding to the pixels is minimized and occlusion of the pixels is reduced in the selected input image, and stitching by projecting the selected input image using the selected fitted plane into a virtual camera.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: David Gallup, Steven Maxwell Seitz, Maneesh Agrawala, Robert Evan Carroll
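The plane-fitting step above minimizes point-to-plane distances. A standard least-squares plane fit via SVD of the centred points is one way to do this; the sketch below is a generic implementation, not the patent's specific method:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (normal, d) with n.x + d = 0,
    minimising point-to-plane distances via SVD of the centred points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # direction of least variance
    d = -normal @ centroid
    return normal, d

# 3D points lying on the plane z = 2 (a "ground surface" stand-in).
pts = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2], [0.5, 0.5, 2]])
normal, d = fit_plane(pts)
```

The last right-singular vector of the centred point matrix is the plane normal because it spans the direction of least variance; the same fit would be run separately for the ground, dominant, and object surfaces named in the abstract.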
  • Patent number: 9087405
    Abstract: In one aspect, one or more computing devices receive a set of image frames. Each image frame includes pixels. The computing devices align image frames in order to identify flows of the pixels in the set of image frames. Regions of bokeh effect are identified in each image frame by measuring the sizes of areas of expansion across image frames using a set of assumptions and the identified flows. The computing devices adjust the alignment of the set of image frames based at least in part on the identified regions of bokeh effect. For each image frame, the computing devices generate an index map of focus values for each of the pixels in that image frame using the adjusted alignment. A depth map is generated by the computing devices based at least in part on the index maps.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: July 21, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Supasorn Suwajanakorn
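The index map of focus values in this abstract assigns each pixel the frame in which it is sharpest, which then serves as a depth proxy. A toy depth-from-focus sketch using an absolute-Laplacian focus measure (an illustrative choice, not the patent's):

```python
import numpy as np

def sharpness(img):
    """Per-pixel focus measure: absolute discrete Laplacian (interior only)."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = np.abs(
        4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
        - img[1:-1, :-2] - img[1:-1, 2:]
    )
    return lap

def index_map(stack):
    """For an aligned focal stack, pick the frame index of maximum focus
    per pixel; the resulting index map is a proxy for depth."""
    return np.argmax([sharpness(f) for f in stack], axis=0)

# Frame 1 contains a sharp spike; frame 0 is flat (defocused).
blurry = np.full((5, 5), 0.5)
sharp = np.full((5, 5), 0.5)
sharp[2, 2] = 5.0
idx = index_map([blurry, sharp])
```

The patent's contribution is the bokeh-aware alignment feeding this step; the index-to-depth mapping itself depends on the focus settings of each captured frame.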
  • Publication number: 20150178926
    Abstract: A system and method is provided for determining whether images of a geographic location identify features with characteristics consistent with shadows cast by people, and using such determination to annotate map information. If such features are identified at the location, the map may be annotated to indicate that the location is frequented by pedestrians.
    Type: Application
    Filed: December 19, 2013
    Publication date: June 25, 2015
    Applicant: Google Inc.
    Inventors: Jonah Jones, Steven Maxwell Seitz
  • Publication number: 20150170400
    Abstract: In one aspect, one or more computing devices receive a set of image frames. Each image frame includes pixels. The computing devices align image frames in order to identify flows of the pixels in the set of image frames. Regions of bokeh effect are identified in each image frame by measuring the sizes of areas of expansion across image frames using a set of assumptions and the identified flows. The computing devices adjust the alignment of the set of image frames based at least in part on the identified regions of bokeh effect. For each image frame, the computing devices generate an index map of focus values for each of the pixels in that image frame using the adjusted alignment. A depth map is generated by the computing devices based at least in part on the index maps.
    Type: Application
    Filed: December 16, 2013
    Publication date: June 18, 2015
    Applicant: Google Inc.
    Inventors: Steven Maxwell Seitz, Carlos Hernandez Esteban, Supasorn Suwajanakorn
  • Publication number: 20150156415
    Abstract: Methods, systems, and articles of manufacture for generating a panoramic image of a long scene are disclosed. These include fitting planes to 3D points associated with input images of portions of the long scene, where respective planes are fitted to a ground surface, a dominant surface, and at least one of foreground objects and background objects, and where distances from the 3D points to the fitted planes are minimized. These also include selecting, for respective pixels in the panoramic image, an input image and a fitted plane such that the distance from the selected fitted plane to a surface corresponding to the pixels is minimized and occlusion of the pixels is reduced in the selected input image, and stitching by projecting the selected input image using the selected fitted plane into a virtual camera.
    Type: Application
    Filed: July 6, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: David Gallup, Steven Maxwell Seitz, Maneesh Agrawala, Robert Evan Carroll
  • Publication number: 20150154793
    Abstract: Systems and methods for browsing images of points of interest (POIs) are provided. An indication of a selection of a POI from among multiple POIs is received. An image graph associated with the POI is identified; the image graph includes multiple images of the POI. A tour path for the POI is defined within the image graph. A specific image from the tour path is provided for display. The previous image and the next image relative to the specific image along the tour path are determined. A first set of additional images from the image graph is determined based on the specific image; this set corresponds to images in the image graph proximate to the specific image. A link to the previous or next image, and a link to each member of the first set of additional images, are provided for display with the specific image.
    Type: Application
    Filed: July 16, 2013
    Publication date: June 4, 2015
    Inventors: Andrew Ofstad, Steven Maxwell Seitz
  • Publication number: 20150154798
    Abstract: A computer-implemented method, system, and computer-readable medium for transitioning between images in a three-dimensional space are provided. A three-dimensional (3D) model of a location is provided that includes multiple two-dimensional images textured onto the 3D model, each image associated with a set of credentials. A virtual path is generated within the 3D model, the virtual path including at least some of the plurality of images in sequential order. For each image in the virtual path, a type of transition within the image and between adjacent images in the sequential order is determined; the type of transition is based on the set of credentials associated with each image. The virtual path is then traversed using a virtual camera that activates the transitions associated with each image, and the transitions ensure that movement within the 3D model appears constant to the user.
    Type: Application
    Filed: April 27, 2012
    Publication date: June 4, 2015
    Applicant: Google Inc.
    Inventors: Matthew Robert Simpson, Jonah Jones, Yasutaka Furukawa, Steven Maxwell Seitz, Andrew Ofstad
  • Patent number: 9046996
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. A drag vector for user input relative to the reference image is determined. For each particular image of the set of target images, an associated cost is determined based at least in part on a cost function and the drag vector. A target image is selected based on the determined associated costs.
    Type: Grant
    Filed: October 17, 2013
    Date of Patent: June 2, 2015
    Assignee: Google Inc.
    Inventors: David Gallup, Liyong Chen, Shuchang Zhou, Steven Maxwell Seitz
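The selection step in this abstract scores each candidate target image against the user's drag vector and picks the lowest-cost one. A minimal sketch with an assumed cost function (negative cosine similarity between the drag and each target's screen-space direction; the patent's actual cost function is not specified here):

```python
import numpy as np

def select_target(drag, targets):
    """Pick the target image whose direction best matches the drag vector.

    targets: dict mapping image name -> screen-space direction vector.
    """
    drag = np.asarray(drag, dtype=float)
    best, best_cost = None, np.inf
    for name, direction in targets.items():
        d = np.asarray(direction, dtype=float)
        # Illustrative cost: misalignment between drag and target direction.
        cost = -np.dot(drag, d) / (np.linalg.norm(drag) * np.linalg.norm(d))
        if cost < best_cost:
            best, best_cost = name, cost
    return best

targets = {"left": (-1, 0), "right": (1, 0), "up": (0, -1)}
choice = select_target((0.9, 0.1), targets)
```

A production cost would also weigh factors like image quality and scene overlap; the argmin-over-candidates structure is the part the abstract describes.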
  • Patent number: 9047692
    Abstract: Systems, methods, and computer storage mediums are provided for creating a scene scan from a group of photographic images. An exemplary method includes determining a set of common features for at least one pair of photographic images. The features include a portion of an object captured in each of a first and a second photographic image included in the pair. The first and second photographic images may be captured from different optical centers. A similarity transform for the at least one pair of photographic images is then determined. The similarity transform is provided in order to render the scene scan from each pair of photographic images. At least one of the rotation factor, the scaling factor, or the translation factor associated with the similarity transform is used to position each pair of photographic images such that the set of common features between a pair at least partially aligns.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: June 2, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Rahul Garg
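The similarity transform in this abstract has exactly the rotation, scale, and translation factors that a standard least-squares (Umeyama-style) estimate recovers from matched feature points. A 2D sketch of that generic estimator, offered as one way to compute such a transform:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~= s*R@src + t
    (least squares, Umeyama-style) from 2D point correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dc.T @ sc)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (sc ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Correspondences generated by a known 90-degree rotation, scale 2, shift (3, 4).
src = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
R_true = np.array([[0.0, -1], [1, 0]])
dst = 2.0 * src @ R_true.T + np.array([3.0, 4.0])
s, R, t = similarity_transform(src, dst)
```

Applying the recovered transform to one image's common features positions them over the other image's, which is the alignment the abstract describes.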
  • Patent number: 9036921
    Abstract: Systems, methods and articles of manufacture for generating sequences of face and expression aligned images are presented. An embodiment includes determining a plurality of candidate images, computing a similarity distance between an input image and each of the candidate images based on facial features in the input image and the candidate images, comparing the computed similarity distances, selecting a candidate image based on the comparing, and adding the selected candidate image to an image sequence for real-time display. Embodiments select images from the image sequence as they are being added to the image sequence and scale, rotate and translate each image so that a face appearing in a selected image is aligned with a face appearing in a subsequently selected image from the image sequence. In this way, embodiments are able to render arbitrarily large image collections efficiently and in real time to display a face and expression aligned movie.
    Type: Grant
    Filed: February 25, 2014
    Date of Patent: May 19, 2015
    Assignee: Google Inc.
    Inventors: Steven Maxwell Seitz, Rahul Garg, Irena Kemelmaher
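The sequencing step in this abstract repeatedly appends the candidate with the smallest similarity distance to the image just added. A greedy-chain sketch over stand-in feature vectors (real systems would use facial landmark or appearance descriptors, and would also align each selected image):

```python
import numpy as np

def greedy_sequence(start, candidates):
    """Build an image sequence by repeatedly appending the candidate whose
    feature vector is closest to the last image added (greedy chaining)."""
    remaining = {k: np.asarray(v, float) for k, v in candidates.items()}
    seq = []
    last = np.asarray(start, float)
    while remaining:
        name = min(remaining, key=lambda k: np.linalg.norm(remaining[k] - last))
        seq.append(name)
        last = remaining.pop(name)
    return seq

# 1-D features standing in for facial feature descriptors.
candidates = {"far": (10.0,), "near": (1.0,), "mid": (5.0,)}
order = greedy_sequence((0.0,), candidates)
```

Chaining by nearest neighbour keeps consecutive frames visually similar, which is what makes the resulting face-and-expression movie appear smooth.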
  • Patent number: 9019279
    Abstract: A system and method are provided for rendering a sequence of orthographic approximation images corresponding to camera poses to generate an animation moving between an initial view and a final view of a target area. An initial image corresponding to an initial camera pose directed at the target area is identified. A final image and an associated depthmap corresponding to a final camera pose directed at the target area are further identified. A plurality of intermediate images corresponding to a plurality of camera poses directed at the target area is produced by performing interpolation on the initial image, the final image, and the associated depthmap. Each intermediate image is associated with a point along a navigational path between the initial camera pose and the final camera pose. An animation of the plurality of intermediate images produces a transition of views between the initial camera pose and the final camera pose.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: April 28, 2015
    Assignee: Google Inc.
    Inventors: Jeffrey Thomas Prouty, Steven Maxwell Seitz, Carlos Hernandez Esteban, Matthew Robert Simpson
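The interpolation in this abstract places intermediate camera poses along the navigational path between the initial and final poses. A position-only linear sketch (the patent also uses the depthmap to render each intermediate image, and real systems blend orientation as well):

```python
import numpy as np

def intermediate_poses(initial, final, n):
    """Linearly interpolate n camera positions strictly between the
    initial and final poses along the navigational path."""
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior points only
    return [tuple((1 - t) * np.asarray(initial) + t * np.asarray(final))
            for t in ts]

# Three intermediate positions between two camera positions.
poses = intermediate_poses((0.0, 0.0, 10.0), (4.0, 0.0, 2.0), 3)
```

Rendering one frame per interpolated pose and playing them back yields the transition of views the abstract describes.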
  • Publication number: 20150113474
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. A drag vector for user input relative to the reference image is determined. For each particular image of the set of target images, an associated cost is determined based at least in part on a cost function and the drag vector. A target image is selected based on the determined associated costs.
    Type: Application
    Filed: October 17, 2013
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Liyong Chen, Shuchang Zhou, Steven Maxwell Seitz
  • Publication number: 20150109416
    Abstract: Aspects of the disclosure relate generally to generating depth data from a video. As an example, one or more computing devices may receive an initialization request for a still image capture mode. After receiving the request to initialize the still image capture mode, the one or more computing devices may automatically begin to capture a video including a plurality of image frames. The one or more computing devices track features between a first image frame of the video and each of the other image frames of the video. Points corresponding to the tracked features may be generated by the one or more computing devices using a set of assumptions. The assumptions may include a first assumption that there is no rotation and a second assumption that there is no translation. The one or more computing devices then generate a depth map based at least in part on the points.
    Type: Application
    Filed: May 15, 2014
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Fu Yu, Steven Maxwell Seitz
  • Publication number: 20150109328
    Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. An area of the reference image is identified. For each particular image of the set of potential target images, an associated cost for the identified area is determined based at least in part on a cost function for transitioning between the reference image and the particular target image. A target image is selected for association with the identified area based on the determined associated costs.
    Type: Application
    Filed: October 17, 2013
    Publication date: April 23, 2015
    Applicant: GOOGLE INC.
    Inventors: David Gallup, Steven Maxwell Seitz