Patents by Inventor Simon Lynen

Simon Lynen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230360242
    Abstract: An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
    Type: Application
    Filed: July 20, 2023
    Publication date: November 9, 2023
    Inventors: Esha Nerurkar, Simon Lynen, Sheng Zhao
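The loop-closure step this abstract describes can be sketched in a few lines: observed feature descriptors are matched to stored ones by nearest-neighbor distance, and the drift correction is the least-squares translation that aligns the matched points. This is an illustrative toy (2D points, translation-only alignment, invented function names), not the patented method itself:

```python
import math

def match_descriptors(stored, observed, max_dist=0.5):
    """Pair each observed descriptor with its nearest stored descriptor."""
    matches = []
    for i, obs in enumerate(observed):
        best_j, best_d = None, max_dist
        for j, ref in enumerate(stored):
            d = math.dist(ref, obs)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((best_j, i))
    return matches

def drift_correction(map_points, tracked_points, matches):
    """Translation that best aligns tracked points to the stored map:
    the least-squares minimizer of the matched-point discrepancies."""
    n = len(matches)
    dx = sum(map_points[j][0] - tracked_points[i][0] for j, i in matches) / n
    dy = sum(map_points[j][1] - tracked_points[i][1] for j, i in matches) / n
    return (dx, dy)
```

A real system would instead run a full 6-DoF pose optimization over many keyframes; the translation-only version only shows the shape of the computation.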
  • Patent number: 11734846
    Abstract: An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: August 22, 2023
    Assignee: GOOGLE LLC
    Inventors: Esha Nerurkar, Simon Lynen, Sheng Zhao
  • Patent number: 10937214
    Abstract: An electronic device merges a plurality of maps, or area description files (ADFs), by representing relationships among ADFs in an undirected graph, with vertices representing maps and edges representing transformations between maps. As the electronic device generates new ADFs, the electronic device merges each new ADF to a stored collection of ADFs by adding each new ADF as a vertex and transformations between the new ADF and the collection of ADFs as edges in the undirected graph. In this way, the map merger can use the undirected graph to more accurately represent the relations between any two maps, allowing more efficient merger of new maps to a previously stored collection of maps, and allowing for the development of more flexible and efficient algorithms for manipulating the merged maps.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: March 2, 2021
    Assignee: GOOGLE LLC
    Inventors: Esha Nerurkar, Simon Lynen, Dongfang Zheng
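The undirected-graph representation in this abstract can be sketched as follows: vertices are map identifiers, each edge stores the transform between two map frames, and the relation between any two maps is recovered by composing edge transforms along a path. A hypothetical, translation-only sketch (class and method names are invented for illustration):

```python
from collections import deque

class MapGraph:
    """Undirected graph: vertices are maps (ADFs); edges hold the
    translation from one map's frame to the other's (2D, for brevity)."""
    def __init__(self):
        self.edges = {}  # map_id -> list of (neighbor_id, (dx, dy))

    def add_map(self, map_id):
        self.edges.setdefault(map_id, [])

    def merge(self, new_id, anchor_id, transform):
        """Merge a new ADF: add it as a vertex plus an edge carrying the
        transform from new_id's frame into anchor_id's frame."""
        self.add_map(new_id)
        dx, dy = transform
        self.edges[new_id].append((anchor_id, (dx, dy)))
        self.edges[anchor_id].append((new_id, (-dx, -dy)))

    def transform_between(self, src, dst):
        """Compose edge transforms along a BFS path from src to dst."""
        queue = deque([(src, (0.0, 0.0))])
        seen = {src}
        while queue:
            node, (ax, ay) = queue.popleft()
            if node == dst:
                return (ax, ay)
            for nbr, (dx, dy) in self.edges[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, (ax + dx, ay + dy)))
        return None
```

Because the graph is undirected, a transform between any two connected maps exists even when they were never merged against each other directly.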
  • Patent number: 10937249
    Abstract: The present disclosure provides systems and methods that allow a first user to anchor a virtual object to a physical location and a second user to view the virtual object at the physical location. In particular, aspects of the present disclosure are directed to workflows, application programming interfaces (“APIs”), and computing system infrastructure that enable a first computing device to generate an ad hoc mapping of its physical surroundings (e.g., based on sensor data collected by the first device) and to position and anchor a virtual object at a physical location within such mapping. Data descriptive of the mapping and virtual object can be stored in a cloud database and can then be used by or for a second device to render the virtual object at the location of the anchor.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: March 2, 2021
    Assignee: Google LLC
    Inventors: Simon Lynen, Eric Thomas Andresen
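The host/resolve workflow can be sketched with an in-memory stand-in for the cloud database. Everything here is hypothetical: the class, its methods, and the use of 2D positions in place of full anchor poses:

```python
class CloudAnchorService:
    """In-memory stand-in for the cloud database that stores a host
    device's mapping data and anchored virtual objects (invented API)."""
    def __init__(self):
        self._anchors = {}

    def host_anchor(self, anchor_id, mapping_data, anchor_pose):
        """First device uploads its ad hoc mapping and the anchor pose."""
        self._anchors[anchor_id] = {"map": mapping_data, "pose": anchor_pose}

    def resolve_anchor(self, anchor_id, resolver_to_host):
        """Return the anchor pose in the resolving device's frame, given
        the translation from the resolver's frame to the host's frame."""
        hx, hy = self._anchors[anchor_id]["pose"]
        tx, ty = resolver_to_host
        return (hx - tx, hy - ty)
```

In practice the resolver-to-host transform would itself come from localizing the second device against the stored mapping data, not be passed in directly.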
  • Patent number: 10802147
    Abstract: An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: October 13, 2020
    Assignee: GOOGLE LLC
    Inventors: Esha Nerurkar, Simon Lynen, Sheng Zhao
  • Publication number: 20200278449
    Abstract: An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
    Type: Application
    Filed: May 15, 2020
    Publication date: September 3, 2020
    Inventors: Esha Nerurkar, Simon Lynen, Sheng Zhao
  • Publication number: 20190340836
    Abstract: The present disclosure provides systems and methods that allow a first user to anchor a virtual object to a physical location and a second user to view the virtual object at the physical location. In particular, aspects of the present disclosure are directed to workflows, application programming interfaces (“APIs”), and computing system infrastructure that enable a first computing device to generate an ad hoc mapping of its physical surroundings (e.g., based on sensor data collected by the first device) and to position and anchor a virtual object at a physical location within such mapping. Data descriptive of the mapping and virtual object can be stored in a cloud database and can then be used by or for a second device to render the virtual object at the location of the anchor.
    Type: Application
    Filed: May 7, 2019
    Publication date: November 7, 2019
    Inventors: Simon Lynen, Eric Thomas Andresen
  • Patent number: 10339708
    Abstract: An electronic device generates a summary map of a scene based on data representative of objects having a high utility for identifying the scene when estimating a current pose of the electronic device and localizes the estimated current pose with respect to the summary map. The electronic device identifies scenes based on groups of objects appearing together in consistent configurations over time, and identifies utility weights for objects appearing in scenes, wherein the utility weights indicate a predicted likelihood that the corresponding object will be persistently identifiable by the electronic device in the environment over time and are based at least in part on verification by one or more mobile devices. The electronic device generates a summary map of each scene based on data representative of objects having utility weights above a threshold.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: July 2, 2019
    Inventors: Simon Lynen, Bernhard Zeisl
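The utility-weight thresholding this abstract describes can be sketched with a toy weight, taken here to be the fraction of sessions in which mobile devices re-verified an object (an invented stand-in for the predicted-persistence weight):

```python
def utility_weight(observations, verifications):
    """Toy persistence score: fraction of sessions in which mobile
    devices re-verified the object (illustrative formula only)."""
    return verifications / observations if observations else 0.0

def summary_map(scene_objects, threshold=0.6):
    """Keep only objects whose utility weight clears the threshold;
    scene_objects is a list of (name, observations, verifications)."""
    return [name for name, obs, ver in scene_objects
            if utility_weight(obs, ver) > threshold]
```

Stable structure (corners, doorframes) thus survives into the summary map while movable objects (chairs) are dropped.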
  • Publication number: 20180276863
    Abstract: An electronic device merges a plurality of maps, or area description files (ADFs), by representing relationships among ADFs in an undirected graph, with vertices representing maps and edges representing transformations between maps. As the electronic device generates new ADFs, the electronic device merges each new ADF to a stored collection of ADFs by adding each new ADF as a vertex and transformations between the new ADF and the collection of ADFs as edges in the undirected graph. In this way, the map merger can use the undirected graph to more accurately represent the relations between any two maps, allowing more efficient merger of new maps to a previously stored collection of maps, and allowing for the development of more flexible and efficient algorithms for manipulating the merged maps.
    Type: Application
    Filed: March 22, 2017
    Publication date: September 27, 2018
    Inventors: Esha Nerurkar, Simon Lynen, Dongfang Zheng
  • Publication number: 20180122136
    Abstract: An electronic device generates a summary map of a scene based on data representative of objects having a high utility for identifying the scene when estimating a current pose of the electronic device and localizes the estimated current pose with respect to the summary map. The electronic device identifies scenes based on groups of objects appearing together in consistent configurations over time, and identifies utility weights for objects appearing in scenes, wherein the utility weights indicate a predicted likelihood that the corresponding object will be persistently identifiable by the electronic device in the environment over time and are based at least in part on verification by one or more mobile devices. The electronic device generates a summary map of each scene based on data representative of objects having utility weights above a threshold.
    Type: Application
    Filed: July 25, 2017
    Publication date: May 3, 2018
    Inventors: Simon Lynen, Bernhard Zeisl
  • Patent number: 9940542
    Abstract: An electronic device reduces localization data based on feature characteristics identified from the data. Based on the feature characteristics, a quality value can be assigned to each identified feature, indicating the likelihood that the data associated with the feature will be useful in mapping a local environment of the electronic device. The localization data is reduced by removing data associated with features that have a low quality value, and the reduced localization data is used to map the local environment of the device by locating features identified from the reduced localization data in a frame of reference for the electronic device.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: April 10, 2018
    Assignee: Google LLC
    Inventors: Esha Nerurkar, Joel Hesch, Simon Lynen
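The quality-based reduction can be sketched with an invented heuristic: features seen often and with low reprojection error get a high quality value, and low-quality features are dropped before mapping. Both the formula and the field names are illustrative assumptions:

```python
def quality_value(feature):
    """Toy quality heuristic: frequently observed features with low
    reprojection error are more likely to be useful for mapping."""
    obs = feature["observations"]
    err = feature["reproj_error"]
    return min(obs / 10.0, 1.0) * (1.0 / (1.0 + err))

def reduce_localization_data(features, min_quality=0.4):
    """Remove features whose quality value is low; the reduced set is
    then used to map the device's local environment."""
    return [f for f in features if quality_value(f) >= min_quality]
```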
  • Publication number: 20170336511
    Abstract: An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used to correct drift in the tracked motion. A motion tracking module estimates poses of the electronic device based on feature descriptors corresponding to the visual appearance of spatial features of objects in the environment. A mapping module builds a three-dimensional visual representation of the environment based on a stored plurality of maps, and feature descriptors and estimated device poses received from the motion tracking module. The mapping module provides the three-dimensional visual representation of the environment to a localization module, which identifies correspondences between stored and observed feature descriptors. The localization module performs a loop closure by minimizing the discrepancies between matching feature descriptors to compute a localized pose.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 23, 2017
    Inventors: Esha Nerurkar, Simon Lynen, Sheng Zhao
  • Patent number: 9811734
    Abstract: A computing system includes a network interface, a first datastore, a second datastore, and a merge module. The merge module is to receive a set of one or more area description files from a set of one or more first mobile devices. Each area description file represents a point cloud of spatial features detected by a corresponding first mobile device at an area. The computing system further includes a localization generation module and a query module. The localization generation module is to generate a localization area description file for the area from the set of one or more area description files and to store the localization area description file in the second datastore. The localization area description file represents a point cloud of spatial features for the area. The query module is to provide the localization area description file to a second mobile device via the network interface.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: November 7, 2017
    Assignee: Google Inc.
    Inventors: Brian Patrick Williams, Ryan Michael Hickman, Laurent Tu, Esha Nerurkar, Simon Lynen
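The merge step can be sketched as combining per-device ADF point clouds into one localization ADF, deduplicating spatial features that coincide. Real systems would fuse nearby features probabilistically; exact-match deduplication is a deliberate simplification:

```python
def merge_adfs(adfs):
    """Merge per-device ADF point clouds into a single localization ADF.
    Each ADF is a list of (point, descriptor) pairs; features observed by
    several devices at the same point are kept once (simplified dedup)."""
    merged = {}
    for adf in adfs:
        for point, descriptor in adf:
            merged.setdefault(point, descriptor)
    return sorted(merged.items())
```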
  • Publication number: 20170046594
    Abstract: An electronic device reduces localization data based on feature characteristics identified from the data. Based on the feature characteristics, a quality value can be assigned to each identified feature, indicating the likelihood that the data associated with the feature will be useful in mapping a local environment of the electronic device. The localization data is reduced by removing data associated with features have a low quality value, and the reduced localization data is used to map the local environment of the device by locating features identified from the reduced localization data in a frame of reference for the electronic device.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Esha Nerurkar, Joel Hesch, Simon Lynen
  • Publication number: 20160335497
    Abstract: A computing system includes a network interface, a first datastore, a second datastore, and a merge module. The merge module is to receive a set of one or more area description files from a set of one or more first mobile devices. Each area description file represents a point cloud of spatial features detected by a corresponding first mobile device at an area. The computing system further includes a localization generation module and a query module. The localization generation module is to generate a localization area description file for the area from the set of one or more area description files and to store the localization area description file in the second datastore. The localization area description file represents a point cloud of spatial features for the area. The query module is to provide the localization area description file to a second mobile device via the network interface.
    Type: Application
    Filed: May 11, 2015
    Publication date: November 17, 2016
    Inventors: Brian Patrick Williams, Ryan Michael Hickman, Laurent Tu, Esha Nerurkar, Simon Lynen
  • Publication number: 20160335275
    Abstract: A computing system includes a datastore, a network interface, and a query module. The datastore stores a plurality of localization area description files. The network interface is to receive a request for a localization area description file from a mobile device, the request comprising a set of spatial features and at least one non-image location indicator. The query module includes a query interface to identify one or more candidate localization area description files based on one of the set of spatial features of the request and the at least one location indicator of the request, and includes a selection module to select a localization area description file from the candidate localization area description files based on the other of the set of spatial features of the request and the at least one location indicator. The query module is to provide the selected localization area description file to the mobile device.
    Type: Application
    Filed: May 11, 2015
    Publication date: November 17, 2016
    Inventors: Brian Patrick Williams, Laurent Tu, Esha Nerurkar, Simon Lynen
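The two-stage lookup in this abstract, filtering candidates by one criterion and selecting among them by the other, can be sketched as follows. The datastore layout, the 1D location indicator, and the function name are all invented for the example:

```python
def find_localization_adf(datastore, spatial_features, location_hint,
                          max_radius=50.0):
    """Two-stage lookup: filter candidate ADFs by the non-image location
    indicator, then select the best by spatial-feature overlap."""
    candidates = [adf for adf in datastore
                  if abs(adf["location"] - location_hint) <= max_radius]
    if not candidates:
        return None
    return max(candidates,
               key=lambda adf: len(adf["features"] & spatial_features))
```

The abstract allows either criterion in either role; this sketch fixes one ordering for concreteness.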
  • Patent number: 9303999
    Abstract: Methods and systems for determining estimation of motion of a device are provided. An example method includes receiving data from an inertial measurement unit (IMU) of a device and receiving images from a camera of the device for a sliding time window. The method also includes determining an IMU estimation of motion of the device based on the data from the IMU, and a camera estimation of motion of the device based on feature tracking in the images. The method includes, based on the IMU estimation and the camera estimation having a difference more than a threshold amount, determining one or more of a position or a velocity of the device for the sliding time window, and determining an overall estimation of motion of the device as supported by the data from the IMU and the position or velocity of the device.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: April 5, 2016
    Assignee: Google Technology Holdings LLC
    Inventors: Joel Hesch, Johnny Lee, Simon Lynen
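The threshold test in this abstract, comparing the IMU and camera motion estimates and falling back to the IMU-supported estimate when they diverge, can be sketched in one function (scalar motion values stand in for full trajectories; the averaging fallback is an invented simplification):

```python
def fuse_motion(imu_motion, camera_motion, threshold=0.5):
    """If the IMU and camera estimates agree, blend them; if they differ
    by more than the threshold, keep the estimate supported by the IMU
    data (camera feature tracking has likely failed)."""
    if abs(imu_motion - camera_motion) > threshold:
        return imu_motion
    return (imu_motion + camera_motion) / 2.0
```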
  • Publication number: 20150193971
    Abstract: Methods and systems for map generation are described. A computing device may receive outputs from a plurality of sensors at a position of the device in an environment, which may include data corresponding to visual features of the environment at the first position. Based on correspondence in the outputs from the plurality of sensors, the computing device may generate a map of the environment comprising sparse mapping data, and the sparse mapping data comprises the data corresponding to the visual features. The device may receive additional outputs at other positions of the device in the environment and may modify the map based on the additional outputs. In addition, the device may modify the map based on receiving dense mapping information from sensors, which may include data corresponding to objects in the environment in a manner that represents the structure of the objects in the environment.
    Type: Application
    Filed: January 3, 2014
    Publication date: July 9, 2015
    Applicant: Motorola Mobility LLC
    Inventors: Ivan Dryanovski, Simon Lynen, Joel Hesch
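The incremental sparse-map construction can be sketched as a store of visual features that grows with each device position and refines re-observed features. Averaging re-observations is an invented stand-in for a proper estimator, and 2D positions stand in for 3D:

```python
class SparseMap:
    """Sparse map built from visual features observed at device
    positions; observations at new positions extend or refine it."""
    def __init__(self):
        self.features = {}  # feature_id -> estimated world position

    def integrate(self, position, observed):
        """Add features seen at this device position; each observation is
        (feature_id, offset from the device). Re-observations are averaged
        with the stored estimate (simple refinement)."""
        for fid, offset in observed:
            world = (position[0] + offset[0], position[1] + offset[1])
            if fid in self.features:
                old = self.features[fid]
                self.features[fid] = ((old[0] + world[0]) / 2,
                                      (old[1] + world[1]) / 2)
            else:
                self.features[fid] = world
```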
  • Publication number: 20150185018
    Abstract: Methods and systems for determining estimation of motion of a device are provided. An example method includes receiving data from an inertial measurement unit (IMU) of a device and receiving images from a camera of the device for a sliding time window. The method also includes determining an IMU estimation of motion of the device based on the data from the IMU, and a camera estimation of motion of the device based on feature tracking in the images. The method includes, based on the IMU estimation and the camera estimation having a difference more than a threshold amount, determining one or more of a position or a velocity of the device for the sliding time window, and determining an overall estimation of motion of the device as supported by the data from the IMU and the position or velocity of the device.
    Type: Application
    Filed: December 30, 2013
    Publication date: July 2, 2015
    Applicant: Motorola Mobility LLC
    Inventors: Joel Hesch, Johnny Lee, Simon Lynen