Patents by Inventor Scott Edward Dillard

Scott Edward Dillard has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10217283
Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. For instance, a first image of a multidimensional space is provided with an overlay line indicating a direction in which the space extends into the first image, such that a second image is connected to the first image along a direction of the overlay line. User input indicating a swipe across a portion of the display is received. When the swipe occurs at least partially within an interaction zone defining an area around the overlay line at which the user can interact with the space, the swipe indicates a request to display an image different from the first image. The second image is selected and provided for display based on the swipe and a connection graph connecting the first image and the second image along the direction of the overlay line.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Scott Edward Dillard, Humberto Castaneda, Su Chuin Leong, Michael Cameron Jones, Christopher Gray, Evan Hardesty Parker
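The abstract above describes testing whether a swipe falls within an interaction zone around the overlay line and, if so, selecting the connected image from a connection graph. A minimal sketch of that check, where the function names, the circular zone geometry, and the dictionary graph representation are illustrative assumptions, not the patented implementation:

```python
import math

def point_to_segment_distance(p, a, b):
    """Distance from point p to line segment a-b (all 2-tuples of screen coords)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def swipe_selects_next_image(swipe_points, overlay_a, overlay_b,
                             zone_radius, connection_graph, current_image):
    """Return the connected image if the swipe touches the interaction zone.

    The interaction zone is modeled as all points within zone_radius of the
    overlay line segment; the connection graph maps an image id to the image
    connected along the overlay line's direction.
    """
    in_zone = any(
        point_to_segment_distance(p, overlay_a, overlay_b) <= zone_radius
        for p in swipe_points
    )
    if not in_zone:
        return None  # Swipe was outside the zone: not a navigation request.
    return connection_graph.get(current_image)
```

Here a swipe counts as "at least partially within" the zone if any of its sampled touch points lies within `zone_radius` of the overlay segment.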
  • Patent number: 9898857
Abstract: In one aspect, one or more computing devices may determine a plurality of fragments for a three-dimensional (3D) model of a geographical location. Each fragment of the plurality of fragments may correspond to a pixel of a blended image, and each fragment has a fragment color from the 3D model. The one or more computing devices may determine geospatial location data for each fragment based at least in part on latitude information, longitude information, and altitude information associated with the 3D model. For each fragment of the plurality of fragments, the one or more computing devices may identify a pixel color and an image based at least in part on the geospatial location data, determine a blending ratio based on at least one of a position and an orientation of a virtual camera, and generate the blended image based on at least the blending ratio, the pixel color, and the fragment color.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: February 20, 2018
    Assignee: Google LLC
    Inventors: Scott Edward Dillard, Evan Hardesty Parker, Michael Cameron Jones
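The abstract above describes computing a blending ratio from the virtual camera's pose and mixing an image pixel color with a 3D-model fragment color. A minimal sketch under the simplifying assumption that the ratio depends only on camera distance (the patented method may also weight camera orientation); the function name and the near/far parameters are illustrative, not from the patent:

```python
def blend_fragment(fragment_color, pixel_color, camera_distance,
                   near=10.0, far=100.0):
    """Blend an image pixel color with a 3D-model fragment color.

    The blending ratio here is a simple function of virtual-camera distance:
    close to the scene, the photographic pixel dominates; far away, the
    3D-model color dominates. Colors are (r, g, b) tuples in [0, 255].
    """
    # Clamp the distance into [near, far] and map it to t in [0, 1].
    t = (min(max(camera_distance, near), far) - near) / (far - near)
    ratio = 1.0 - t  # 1.0 -> all pixel color, 0.0 -> all fragment color
    return tuple(
        round(ratio * p + (1.0 - ratio) * f)
        for p, f in zip(pixel_color, fragment_color)
    )
```

Per the abstract, this per-fragment blend would be evaluated for every fragment that maps to a pixel of the blended output image.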
  • Publication number: 20170178404
Abstract: Aspects of the disclosure relate generally to providing a user with an image navigation experience. For instance, a first image of a multidimensional space is provided with an overlay line indicating a direction in which the space extends into the first image, such that a second image is connected to the first image along a direction of the overlay line. User input indicating a swipe across a portion of the display is received. When the swipe occurs at least partially within an interaction zone defining an area around the overlay line at which the user can interact with the space, the swipe indicates a request to display an image different from the first image. The second image is selected and provided for display based on the swipe and a connection graph connecting the first image and the second image along the direction of the overlay line.
    Type: Application
    Filed: December 17, 2015
    Publication date: June 22, 2017
    Inventors: Scott Edward Dillard, Humberto Castaneda, Su Chuin Leong, Michael Cameron Jones, Christopher Gray, Evan Hardesty Parker
  • Publication number: 20160321837
Abstract: In one aspect, one or more computing devices may determine a plurality of fragments for a three-dimensional (3D) model of a geographical location. Each fragment of the plurality of fragments may correspond to a pixel of a blended image, and each fragment has a fragment color from the 3D model. The one or more computing devices may determine geospatial location data for each fragment based at least in part on latitude information, longitude information, and altitude information associated with the 3D model. For each fragment of the plurality of fragments, the one or more computing devices may identify a pixel color and an image based at least in part on the geospatial location data, determine a blending ratio based on at least one of a position and an orientation of a virtual camera, and generate the blended image based on at least the blending ratio, the pixel color, and the fragment color.
    Type: Application
    Filed: July 7, 2016
    Publication date: November 3, 2016
    Inventors: Scott Edward Dillard, Evan Hardesty Parker, Michael Cameron Jones
  • Patent number: 9418472
Abstract: In one aspect, one or more computing devices may determine a plurality of fragments for a three-dimensional (3D) model of a geographical location. Each fragment of the plurality of fragments may correspond to a pixel of a blended image, and each fragment has a fragment color from the 3D model. The one or more computing devices may determine geospatial location data for each fragment based at least in part on latitude information, longitude information, and altitude information associated with the 3D model. For each fragment of the plurality of fragments, the one or more computing devices may identify a pixel color and an image based at least in part on the geospatial location data, determine a blending ratio based on at least one of a position and an orientation of a virtual camera, and generate the blended image based on at least the blending ratio, the pixel color, and the fragment color.
    Type: Grant
    Filed: July 17, 2014
    Date of Patent: August 16, 2016
    Assignee: Google Inc.
    Inventors: Scott Edward Dillard, Evan Hardesty Parker, Michael Cameron Jones
  • Publication number: 20160019713
Abstract: In one aspect, one or more computing devices may determine a plurality of fragments for a three-dimensional (3D) model of a geographical location. Each fragment of the plurality of fragments may correspond to a pixel of a blended image, and each fragment has a fragment color from the 3D model. The one or more computing devices may determine geospatial location data for each fragment based at least in part on latitude information, longitude information, and altitude information associated with the 3D model. For each fragment of the plurality of fragments, the one or more computing devices may identify a pixel color and an image based at least in part on the geospatial location data, determine a blending ratio based on at least one of a position and an orientation of a virtual camera, and generate the blended image based on at least the blending ratio, the pixel color, and the fragment color.
    Type: Application
    Filed: July 17, 2014
    Publication date: January 21, 2016
    Inventors: Scott Edward Dillard, Evan Hardesty Parker, Michael Cameron Jones