Patents by Inventor Daniel Joseph Filip

Daniel Joseph Filip has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10163263
    Abstract: The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars to a point of interest within the first image. A viewport is determined for a first avatar in accordance with the orientation thereof relative to the point of interest, which is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined and the second image is oriented to align the second avatar's viewpoint with the point of interest to provide navigation between the first and second images.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: December 25, 2018
    Assignee: Google LLC
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
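    A minimal Python sketch of the orientation step this abstract describes: computing the bearing from an avatar to the point of interest and the yaw needed to re-center the viewport on it. The flat x/y coordinates and function names are illustrative assumptions, not taken from the patent.
    ```python
    import math

    def heading_to_poi(avatar_xy, poi_xy):
        """Bearing (degrees, 0 = north, clockwise) from an avatar to a point of interest."""
        dx, dy = poi_xy[0] - avatar_xy[0], poi_xy[1] - avatar_xy[1]
        return math.degrees(math.atan2(dx, dy)) % 360.0

    def viewport_rotation(current_heading, avatar_xy, poi_xy):
        """Shortest yaw turn, in [-180, 180), that centers the viewport on the point of interest."""
        target = heading_to_poi(avatar_xy, poi_xy)
        return (target - current_heading + 180.0) % 360.0 - 180.0

    # Example: the second image is rotated so its avatar's viewport faces the same point of interest.
    print(viewport_rotation(90.0, (0.0, 0.0), (10.0, 10.0)))  # -45.0
    ```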
  • Publication number: 20180367730
    Abstract: This technology relates to optimizing location and orientation information of an image using known locations of places captured within the image. For example, an image and associated pose data including the image's orientation and location may be received. One or more places captured within the image may be determined, with each place having a respective known location. The image may be annotated with the one or more places. A difference between each annotation and its respective known location may be minimized to obtain updated pose data for the image, and the associated pose data may be updated to the updated pose data.
    Type: Application
    Filed: June 14, 2017
    Publication date: December 20, 2018
    Applicant: Google Inc.
    Inventors: Tianqiang Liu, Meng Yi, Xin Mao, Jacqueline Anne Lai, Daniel Joseph Filip, Stephen Charles Hsu
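    A minimal sketch of the pose-refinement idea in the application above, reduced to correcting only the image heading: the single rotation that minimizes the angular residuals between annotated places and their known locations is the circular mean of those residuals. The coordinate convention and names are assumptions for illustration, not the application's formulation.
    ```python
    import math

    def refine_heading(pose_xy, reported_heading, annotations):
        """Correct an image's reported heading so its annotations line up with known places.

        annotations: list of (bearing_in_image_deg, known_xy) pairs, where the bearing is
        measured relative to the reported heading and known_xy is the place's known location.
        """
        residuals = []
        for bearing_in_image, known_xy in annotations:
            observed = (reported_heading + bearing_in_image) % 360.0
            expected = math.degrees(math.atan2(known_xy[0] - pose_xy[0],
                                               known_xy[1] - pose_xy[1])) % 360.0
            residuals.append(math.radians(expected - observed))
        # The circular mean of the residuals is the single heading correction that
        # minimizes the summed angular differences.
        correction = math.degrees(math.atan2(sum(math.sin(r) for r in residuals),
                                             sum(math.cos(r) for r in residuals)))
        return (reported_heading + correction) % 360.0
    ```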
  • Publication number: 20180172839
    Abstract: A position fix identifying a geographic location of a receiver is received. The position fix was generated using signals received at the receiver from respective high-altitude signal sources (such as satellites). Imagery of a geographic area that includes the geographic location is also received. The imagery is automatically processed to determine whether one or more of the high-altitude signal sources were occluded from the geographic location when the position fix was generated. In response to determining that one or more of the high-altitude signal sources were occluded from the geographic location when the position fix was generated, the position fix is identified as being potentially erroneous.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 21, 2018
    Inventors: Fred P. Pighin, Daniel Joseph Filip, Scott Ettinger, Bryan M. Klingner, David R. Martin
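    A minimal sketch of the occlusion test described above, assuming the imagery has already been reduced to a per-azimuth skyline of obstruction elevations around the fix location; the data structures are illustrative, not the application's.
    ```python
    def flag_suspect_fix(satellites, skyline):
        """Mark a position fix as potentially erroneous if any signal source used for it
        was occluded at the fix location.

        satellites: list of (azimuth_deg, elevation_deg) for the high-altitude sources used.
        skyline: dict mapping integer azimuth (0-359) to the elevation of the highest
                 obstruction in that direction, e.g. derived from street-level imagery.
        """
        occluded = [s for s in satellites
                    if s[1] < skyline.get(int(s[0]) % 360, 0.0)]
        return len(occluded) > 0, occluded

    suspect, blocked = flag_suspect_fix([(45, 12.0), (210, 55.0)], {45: 30.0})
    # suspect is True: the satellite at azimuth 45 degrees sits below a 30-degree obstruction.
    ```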
  • Publication number: 20180073885
    Abstract: Aspects of the present disclosure relate to generating turn-by-turn direction previews. In one aspect, one or more computing devices may receive a request for a turn-by-turn direction preview. The one or more computing devices may generate a set of turn-by-turn directions based on a series of road segments connecting a first geographic location and a second geographic location. Each direction in the set of turn-by-turn directions may be associated with a corresponding waypoint. The one or more computing devices then identify a set of images corresponding to the series of road segments between two adjacent waypoints of the set of turn-by-turn directions, and determine a subset of the set of images to include in the turn-by-turn direction preview. Subsequently, the one or more computing devices may generate the turn-by-turn direction preview based at least in part on the determined subset of the set of images.
    Type: Application
    Filed: November 7, 2017
    Publication date: March 15, 2018
    Inventors: Alan Sheridan, Daniel Joseph Filip, Jeremy Bryant Pack
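    A minimal sketch of selecting the preview subset described in this family of filings (see also the related grant and publication below). The even-sampling rule and the data layout are illustrative assumptions, not the claimed selection logic.
    ```python
    def preview_images(waypoints, images_by_leg, per_leg=3):
        """Pick a small subset of imagery for each leg of a turn-by-turn preview.

        waypoints: ordered waypoints of the directions; images_by_leg maps a
        (waypoint_a, waypoint_b) pair to the ordered images captured along the
        road segments joining them.
        """
        preview = []
        for a, b in zip(waypoints, waypoints[1:]):
            images = images_by_leg.get((a, b), [])
            if not images:
                continue
            # Evenly sample at most `per_leg` images along the leg.
            step = max(1, len(images) // per_leg)
            preview.extend(images[::step][:per_leg])
        return preview
    ```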
  • Patent number: 9841291
    Abstract: Aspects of the present disclosure relate to generating turn-by-turn direction previews. In one aspect, one or more computing devices may receive a request for a turn-by-turn direction preview. The one or more computing devices may generate a set of turn-by-turn directions based on a series of road segments connecting a first geographic location and a second geographic location. Each direction in the set of turn-by-turn directions may be associated with a corresponding waypoint. The one or more computing devices then identify a set of images corresponding to the series of road segments between two adjacent waypoints of the set of turn-by-turn directions, and determine a subset of the set of images to include in the turn-by-turn direction preview. Subsequently, the one or more computing devices may generate the turn-by-turn direction preview based at least in part on the determined subset of the set of images.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: December 12, 2017
    Assignee: Google LLC
    Inventors: Alan Sheridan, Daniel Joseph Filip, Jeremy Bryant Pack
  • Patent number: 9836826
    Abstract: Near real-time imagery of a given location may be provided to a user upon request. The most popularly viewed geographic locations are determined, and a 360-degree image capture device is positioned at one or more of the determined locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. The image capture device continually captures multiple views of the given location, and the requesting user can select which perspective to view.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: December 5, 2017
    Assignee: Google LLC
    Inventor: Daniel Joseph Filip
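    A minimal sketch of serving a user-selected perspective from the continuously captured 360-degree imagery, assuming equirectangular panoramas; the column arithmetic and names are illustrative, not from the patent.
    ```python
    def perspective_columns(pano_width, heading_deg, fov_deg=90.0):
        """Column range of an equirectangular panorama covering the requested heading.

        Returns (start_col, end_col); a renderer would crop (wrapping if needed) these
        columns to serve the perspective the requesting user selected.
        """
        center = (heading_deg % 360.0) / 360.0 * pano_width
        half = fov_deg / 360.0 * pano_width / 2.0
        return int(center - half) % pano_width, int(center + half) % pano_width
    ```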
  • Patent number: 9830745
    Abstract: In one aspect, a request to generate an automated tour based on a set of panoramic images is received. Each particular panoramic image is associated with geographic location information and linking information linking the particular panoramic image with one or more other panoramic images in the set. A starting panoramic image and a second panoramic image are determined based at least in part on the starting panoramic image and the linking information associated with the starting and second panoramic images. A first transition between the starting panoramic image and the second panoramic image is also determined based at least in part on the linking information for these panoramic images. Additional panoramic images, as well as a second transition between the additional panoramic images, are also determined. The determined panoramic images and transitions are added to the tour according to an order of the tour.
    Type: Grant
    Filed: April 21, 2016
    Date of Patent: November 28, 2017
    Assignee: Google LLC
    Inventors: Alan Sheridan, Aaron Michael Donsbach, Daniel Joseph Filip
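    A minimal sketch of assembling a tour from linking information, using a simple greedy walk over the link graph; the graph layout and the greedy choice are illustrative assumptions, not the patent's selection logic.
    ```python
    def build_tour(start_id, links, max_stops=10):
        """Order panoramas into a simple tour by walking the linking information.

        links: dict mapping a panorama id to the ids it links to. Each step in the
        returned list is (panorama_id, transition), where the transition here is just
        the pair of ids to animate between.
        """
        tour, visited, current = [(start_id, None)], {start_id}, start_id
        while len(tour) < max_stops:
            next_ids = [p for p in links.get(current, []) if p not in visited]
            if not next_ids:
                break
            nxt = next_ids[0]                   # greedy: first unvisited neighbor
            tour.append((nxt, (current, nxt)))  # record the transition into it
            visited.add(nxt)
            current = nxt
        return tour
    ```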
  • Patent number: 9792021
    Abstract: Methods, systems, and computer program products for transitioning an interface to a related image are provided. A method for transitioning an interface to a related image may include receiving information describing a homography between a first image and a second image, and adjusting the interface to present the second image at one or more transition intervals in a transition period until the second image is fully displayed and the first image is no longer visible. The interface may be adjusted by determining, based on the homography, a region of the second image to overlay onto a corresponding area of the first image, blending the determined region with the corresponding area to reduce visible seams occurring between the first image and the second image, and updating the interface by gradually decreasing visual intensity of the first image while gradually and proportionally increasing visual intensity of the second image.
    Type: Grant
    Filed: June 13, 2014
    Date of Patent: October 17, 2017
    Assignee: Google Inc.
    Inventors: Daniel Joseph Filip, Daniel Cotting
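    A minimal OpenCV sketch of the described transition, assuming the homography maps the second image into the first image's frame; blending the whole frame with addWeighted stands in for the patent's region-by-region blending.
    ```python
    import cv2

    def transition_frames(first, second, homography, steps=10):
        """Cross-fade from `first` to `second` after aligning `second` to `first`
        with the homography relating the two images."""
        h, w = first.shape[:2]
        aligned = cv2.warpPerspective(second, homography, (w, h))
        frames = []
        for i in range(1, steps + 1):
            alpha = i / steps
            # Gradually decrease the first image's intensity while proportionally
            # increasing the (aligned) second image's intensity.
            frames.append(cv2.addWeighted(first, 1.0 - alpha, aligned, alpha, 0.0))
        return frames
    ```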
  • Patent number: 9756260
    Abstract: A method and system is disclosed for simulating different types of camera lenses on a device by guiding a user through a set of images to be captured in connection with one or more desired lens effects. In one aspect, a wide-angle lens may be simulated by taking a plurality of images at a particular location over a set of camera orientations that are determined based on the selection of the wide-angle lens. The mobile device may provide prompts to the user indicating the camera orientations for which images should be captured in order to generate the simulated camera lens effect.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: September 5, 2017
    Assignee: Google Inc.
    Inventors: Scott Ettinger, David Lee, Evan Rapoport, Jacob Mintz, Bryan Feldman, Mikkel Crone Köser, Daniel Joseph Filip
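    A minimal sketch of computing the capture orientations a user might be prompted with to simulate a wider lens; the overlap heuristic and the default values are illustrative assumptions, not the patent's prompting logic.
    ```python
    import math

    def capture_orientations(target_fov_deg=120.0, shot_fov_deg=60.0, overlap=0.25):
        """Yaw angles (degrees, relative to the center of the sweep) the user is
        prompted to capture so the stitched shots cover a wider, simulated field of view."""
        usable = shot_fov_deg * (1.0 - overlap)    # new coverage added per shot
        shots = max(1, math.ceil((target_fov_deg - shot_fov_deg) / usable) + 1)
        span = target_fov_deg - shot_fov_deg       # the shot centers must span this range
        if shots == 1:
            return [0.0]
        return [round(-span / 2.0 + i * span / (shots - 1), 1) for i in range(shots)]

    print(capture_orientations())  # [-30.0, 0.0, 30.0]
    ```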
  • Publication number: 20170186229
    Abstract: The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars to a point of interest within the first image. A viewport is determined for a first avatar in accordance with the orientation thereof relative to the point of interest, which is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined and the second image is oriented to align the second avatar's viewpoint with the point of interest to provide navigation between the first and second images.
    Type: Application
    Filed: March 16, 2017
    Publication date: June 29, 2017
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
  • Patent number: 9679406
    Abstract: Systems and methods for providing a visualization of satellite sightline obstructions are provided. An example method includes identifying an approximate position of a receiver antenna. The method further includes providing a rendering of a physical environment surrounding the receiver antenna for display within a user interface. The user interface can be provided on a display. Satellite positional data associated with the position of a satellite is accessed and a sightline between the approximate position of the receiver antenna and the position of the satellite is determined. The method further includes presenting the sightline within the user interface in association with the rendering. An example system includes a data capture system and a computing device to provide a visualization of satellite sightline obstructions.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: June 13, 2017
    Assignee: Google Inc.
    Inventors: Craig Lewin Robinson, James Brian Roseborough, Daniel Joseph Filip
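    A minimal sketch of testing a receiver-to-satellite sightline against the rendered environment, assuming a height-field lookup around the antenna; the sampling scheme and the `height_at` callback are illustrative assumptions.
    ```python
    import math

    def sightline_blocked(receiver_alt_m, azimuth_deg, elevation_deg,
                          height_at, max_range_m=200.0, step_m=5.0):
        """Walk a satellite sightline outward from the receiver antenna and report the
        first obstruction, using height_at(east_m, north_m) as the rendered
        environment's height field around the antenna.
        """
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        d = step_m
        while d <= max_range_m:
            east, north = d * math.sin(az), d * math.cos(az)
            ray_height = receiver_alt_m + d * math.tan(el)
            if height_at(east, north) > ray_height:
                return True, (east, north)   # obstruction to highlight in the UI
            d += step_m
        return False, None
    ```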
  • Patent number: 9632659
    Abstract: The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: April 25, 2017
    Assignee: Google Inc.
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
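    A minimal sketch of step (1) of the claimed method, the intersection of a ray from the camera viewport with a facade plane; the point-plus-normal plane representation is an illustrative assumption.
    ```python
    import numpy as np

    def ray_facade_intersection(origin, direction, plane_point, plane_normal):
        """Intersection of a ray from the camera viewport with a facade plane of the
        virtual model; returns None when the ray is parallel to or points away from it."""
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        plane_point = np.asarray(plane_point, dtype=float)
        plane_normal = np.asarray(plane_normal, dtype=float)
        denom = direction.dot(plane_normal)
        if abs(denom) < 1e-9:
            return None
        t = (plane_point - origin).dot(plane_normal) / denom
        return origin + t * direction if t >= 0.0 else None
    ```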
  • Publication number: 20170069121
    Abstract: The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
    Type: Application
    Filed: October 11, 2016
    Publication date: March 9, 2017
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
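    A minimal sketch of turning a 2-D selection into a location-anchored annotation, assuming helper functions for the camera ray and the model intersection; `camera.ray` and `model_intersect` are hypothetical names, not the application's API.
    ```python
    def create_annotation(click_xy, camera, model_intersect, content):
        """Attach user-entered content to the 3-D location under a 2-D selection.

        camera.ray(click_xy) is assumed to return an (origin, direction) pair, and
        model_intersect(origin, direction) the nearest hit on the 3-D model, or None.
        """
        origin, direction = camera.ray(click_xy)
        location = model_intersect(origin, direction)
        if location is None:
            return None
        # The stored annotation can later be retrieved and re-projected into any
        # other image that sees this location.
        return {"location": tuple(location), "content": content}
    ```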
  • Publication number: 20170031560
    Abstract: A system and method is provided that displays cursors for street level images, where the cursor changes appearance based on the objects in the image, such as the geographic distance between the objects and the camera position and the surface of the objects. For example, the cursor may appear to lie flat against the objects in the image and change size based on the distance between the camera and the object's surface.
    Type: Application
    Filed: October 13, 2016
    Publication date: February 2, 2017
    Inventors: Daniel Joseph Filip, Andrew Timothy Szybalski
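    A minimal sketch of the distance-dependent cursor sizing, assuming a per-pixel depth is available for the street-level image; the scaling rule and constants are illustrative assumptions.
    ```python
    def cursor_scale(depth_m, reference_depth_m=5.0, min_scale=0.2, max_scale=3.0):
        """Scale factor for a street-level cursor so it appears to lie on the surface
        under it: nearby surfaces get a larger cursor, distant ones a smaller one."""
        if depth_m <= 0:
            return max_scale
        return max(min_scale, min(max_scale, reference_depth_m / depth_m))
    ```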
  • Patent number: 9554060
    Abstract: In one aspect, one or more computing devices may capture a panoramic image. Panoramic images may refer to images having a field of view which is greater than that of the human eye, e.g., 180 degrees or greater. Some panoramic images may provide a 360-degree view of a location. In response to capturing the panoramic image, the one or more computing devices may provide for display a request for a non-panoramic image, for example, a zoomed-in image. A zoomed-in image may be captured. An area of the panoramic image that corresponds to the zoomed-in image is determined by the one or more computing devices. The zoomed-in image is associated with the area by the one or more computing devices. In this regard, the panoramic image and the zoomed-in image may be taken close in time such that the images have the same or similar lighting conditions, scenes, etc.
    Type: Grant
    Filed: January 30, 2014
    Date of Patent: January 24, 2017
    Assignee: Google Inc.
    Inventor: Daniel Joseph Filip
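    A minimal sketch of associating a zoomed-in image with an area of the panorama, assuming an equirectangular panorama and the heading/pitch at which the zoomed-in image was captured; a square zoom field of view (and ignoring wrap-around at the edges) is an illustrative simplification.
    ```python
    def panorama_region(pano_w, pano_h, heading_deg, pitch_deg, zoom_fov_deg):
        """Pixel rectangle of an equirectangular panorama that a zoomed-in photo covers."""
        cx = (heading_deg % 360.0) / 360.0 * pano_w      # 360 degrees of yaw across the width
        cy = (90.0 - pitch_deg) / 180.0 * pano_h         # +90 (up) at row 0, -90 at the bottom
        half_w = zoom_fov_deg / 360.0 * pano_w / 2.0
        half_h = zoom_fov_deg / 180.0 * pano_h / 2.0
        return (int(cx - half_w), int(cy - half_h), int(cx + half_w), int(cy + half_h))
    ```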
  • Patent number: 9473745
    Abstract: Near real-time imagery of a given location may be provided to a user upon request. The most popularly viewed geographic locations are determined, and a 360-degree image capture device is positioned at one or more of the determined locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. The image capture device continually captures multiple views of the given location, and the requesting user can select which perspective to view.
    Type: Grant
    Filed: January 30, 2014
    Date of Patent: October 18, 2016
    Assignee: Google Inc.
    Inventor: Daniel Joseph Filip
  • Patent number: 9471597
    Abstract: The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: October 18, 2016
    Assignee: Google Inc.
    Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
  • Patent number: 9471834
    Abstract: A system and method are provided for updating imagery associated with map data. A request for map data is received, and a first image of a geographical location corresponding to the map data is provided in response to the request. Information relating to a status of an object in the first image is received, and it is determined whether the first image is to be updated based at least on the received information. If it is determined that the first image is to be updated, an updated image is received and used to update the first image.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: October 18, 2016
    Assignee: Google Inc.
    Inventor: Daniel Joseph Filip
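    A minimal sketch of the update decision, assuming the "status of an object" arrives as change reports tied to an image id; the threshold rule is an illustrative assumption, not the patent's criterion.
    ```python
    def should_update(image_meta, status_reports, report_threshold=3):
        """Decide whether a served image needs refreshing, based on reports that an
        object in it has changed (e.g. a storefront has closed or been rebuilt)."""
        changed = [r for r in status_reports
                   if r["image_id"] == image_meta["id"] and r["status"] == "changed"]
        return len(changed) >= report_threshold
    ```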
  • Patent number: 9467620
    Abstract: A method and system is disclosed for simulating different types of camera lenses on a device by guiding a user through a set of images to be captured in connection with one or more desired lens effects. In one aspect, a wide-angle lens may be simulated by taking a plurality of images at a particular location over a set of camera orientations that are determined based on the selection of the wide-angle lens. The mobile device may provide prompts to the user indicating the camera orientations for which images should be captured in order to generate the simulated camera lens effect.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: October 11, 2016
    Assignee: Google Inc.
    Inventors: Scott Ettinger, David Lee, Evan Rapoport, Jake Mintz, Bryan Feldman, Mikkel Crone Köser, Daniel Joseph Filip
  • Publication number: 20160265931
    Abstract: Aspects of the present disclosure relate to generating turn-by-turn direction previews. In one aspect, one or more computing devices may receive a request for a turn-by-turn direction preview. The one or more computing devices may generate a set of turn-by-turn directions based on a series of road segments connecting a first geographic location and a second geographic location. Each direction in the set of turn-by-turn directions may be associated with a corresponding waypoint. The one or more computing devices then identify a set of images corresponding to the series of road segments between two adjacent waypoints of the set of turn-by-turn directions, and determine a subset of the set of images to include in the turn-by-turn direction preview. Subsequently, the one or more computing devices may generate the turn-by-turn direction preview based at least in part on the determined subset of the set of images.
    Type: Application
    Filed: May 23, 2016
    Publication date: September 15, 2016
    Inventors: Alan Sheridan, Daniel Joseph Filip, Jeremy Bryant Pack