Patents by Inventor Joel Hesch

Joel Hesch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11195049
    Abstract: An electronic device includes one or more imaging cameras. After a reset of the device or other specified event, the electronic device identifies an estimate of the device's pose based on location data such as Global Positioning System (GPS) data, cellular tower triangulation data, wireless network address location data, and the like. The one or more imaging cameras may be used to capture imagery of the local environment of the electronic device, and this imagery is used to refine the estimated pose to identify a refined pose of the electronic device. The refined pose may be used to identify additional imagery information, such as environmental features, that can be used to enhance the location-based functionality of the electronic device.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: December 7, 2021
    Assignee: Google LLC
    Inventors: Joel Hesch, Esha Nerurkar, Patrick Mihelich
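The coarse-to-fine approach in this abstract lends itself to a short illustration. The sketch below assumes a 2-D position fix and a set of imagery-matched features with known map positions; the `FeatureMatch` layout, the averaging step, and the blend weight are hypothetical and not taken from the patent.

```python
"""Minimal sketch of coarse-to-fine pose estimation (illustrative only).

Assumes a coarse 2-D position from GPS-like data and a set of matched
environmental features whose map positions are known and whose offsets
from the device were measured in captured imagery.
"""
from dataclasses import dataclass


@dataclass
class FeatureMatch:
    map_x: float        # feature position in the map frame (metres)
    map_y: float
    offset_x: float     # feature position relative to the device, from imagery
    offset_y: float


def refine_pose(coarse_x, coarse_y, matches, visual_weight=0.8):
    """Blend the coarse location fix with the position implied by imagery."""
    if not matches:
        return coarse_x, coarse_y
    # Each match implies a device position: map position minus observed offset.
    implied = [(m.map_x - m.offset_x, m.map_y - m.offset_y) for m in matches]
    vis_x = sum(p[0] for p in implied) / len(implied)
    vis_y = sum(p[1] for p in implied) / len(implied)
    # Weighted blend: trust the imagery-derived position more than the coarse fix.
    x = visual_weight * vis_x + (1.0 - visual_weight) * coarse_x
    y = visual_weight * vis_y + (1.0 - visual_weight) * coarse_y
    return x, y


if __name__ == "__main__":
    matches = [FeatureMatch(10.0, 5.0, 2.1, 1.0), FeatureMatch(8.0, 7.0, 0.2, 3.1)]
    print(refine_pose(7.5, 4.2, matches))
```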
  • Patent number: 10890600
    Abstract: A fault detection system for real-time visual-inertial odometry motion tracking allows immediate detection of an error when the motion of a device cannot be accurately determined. The system includes subdetectors that operate independently of, and in parallel to, a main system on a device to determine whether a condition exists that results in a main system error. Each subdetector covers a phase of a six-degrees-of-freedom (6DOF) estimation. If any of the subdetectors detects an error, a fault is output to the main system to indicate a motion tracking failure.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: January 12, 2021
    Assignee: Google LLC
    Inventors: Mingyang Li, Joel Hesch, Zachary Moratto
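To illustrate the parallel-subdetector idea, here is a minimal Python sketch. The three checks (feature count, IMU residual, update latency) and their thresholds are invented examples; the patent only requires that each subdetector independently covers one phase of the 6DOF estimation.

```python
"""Minimal sketch of independent fault subdetectors for a 6DOF tracker."""
from concurrent.futures import ThreadPoolExecutor


def check_feature_count(state):
    return state["tracked_features"] >= 10   # too few tracked features -> fault


def check_imu_consistency(state):
    return abs(state["imu_residual"]) < 0.5  # large inertial residual -> fault


def check_update_latency(state):
    return state["ms_since_update"] < 100    # stale estimate -> fault


SUBDETECTORS = [check_feature_count, check_imu_consistency, check_update_latency]


def detect_fault(state):
    """Run all subdetectors in parallel; any failed check reports a fault."""
    with ThreadPoolExecutor(max_workers=len(SUBDETECTORS)) as pool:
        results = list(pool.map(lambda check: check(state), SUBDETECTORS))
    return not all(results)   # True means a motion-tracking fault is signalled


if __name__ == "__main__":
    state = {"tracked_features": 4, "imu_residual": 0.1, "ms_since_update": 33}
    print("fault:", detect_fault(state))   # fault: True (too few features)
```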
  • Patent number: 10852847
    Abstract: A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: December 1, 2020
    Assignee: Google LLC
    Inventors: Joel Hesch, Shiqi Chen, Johnny Lee, Rahul Garg
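A minimal sketch of the fusion step described in this abstract: depth data locates a spatial feature on the controller relative to the device, which yields the controller's position, while the controller supplies its own 3DoF orientation. The quaternion helper and frame conventions below are assumptions for illustration.

```python
"""Minimal sketch of fusing 3DoF controller orientation with a
depth-derived position to report a 6DoF controller pose."""
from dataclasses import dataclass


def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # v' = v + 2*r x (r x v + w*v), with r = (x, y, z)
    cx = y * vz - z * vy + w * vx
    cy = z * vx - x * vz + w * vy
    cz = x * vy - y * vx + w * vz
    return (vx + 2 * (y * cz - z * cy),
            vy + 2 * (z * cx - x * cz),
            vz + 2 * (x * cy - y * cx))


@dataclass
class Pose6DoF:
    position: tuple       # controller position in the world frame (metres)
    orientation: tuple    # controller orientation quaternion (w, x, y, z)


def track_controller(device_position, device_orientation,
                     feature_offset_in_device, controller_orientation):
    """Combine the depth-derived position with the controller's 3DoF rotation."""
    # Depth data locates a spatial feature on the controller relative to the
    # device; transform that offset into the world frame and add it.
    offset_world = quat_rotate(device_orientation, feature_offset_in_device)
    position = tuple(p + o for p, o in zip(device_position, offset_world))
    # Rotational data (3DoF) is reported by the controller itself.
    return Pose6DoF(position, controller_orientation)


if __name__ == "__main__":
    pose = track_controller((1.0, 1.5, 0.0), (1.0, 0.0, 0.0, 0.0),
                            (0.2, -0.1, 0.5), (0.707, 0.0, 0.707, 0.0))
    print(pose)
```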
  • Patent number: 10529135
    Abstract: A head mounted display (HMD) adjusts feature tracking parameters based on a power mode of the HMD. Examples of feature tracking parameters that can be adjusted include the number of features identified from captured images, the scale of features identified from captured images, the number of images employed for feature tracking, and the like. By adjusting its feature tracking parameters based on its power mode, the HMD can initiate the feature tracking process in low-power modes and thereby shorten the time to reach high-fidelity feature tracking when a user initiates a VR or AR experience at the HMD.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: January 7, 2020
    Assignee: Google LLC
    Inventors: Joel Hesch, Ashish Shah, James Fung
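The power-mode idea can be sketched as a simple lookup from mode to tracking parameters. The profile names and values below are hypothetical; the patent lists only the kinds of parameters that may be adjusted.

```python
"""Minimal sketch of power-mode-dependent feature-tracking parameters."""

TRACKING_PROFILES = {
    # mode:        (max features, image pyramid scales, images per update)
    "low_power":   (50,  2, 1),
    "balanced":    (150, 3, 2),
    "full_power":  (400, 4, 3),
}


def tracking_params(power_mode):
    """Return feature-tracking parameters for the HMD's current power mode."""
    max_features, scales, images = TRACKING_PROFILES[power_mode]
    return {"max_features": max_features,
            "pyramid_scales": scales,
            "images_per_update": images}


if __name__ == "__main__":
    # Feature tracking can start in a low-power mode and be upgraded when a
    # VR/AR session begins, shortening the time to high-fidelity tracking.
    print(tracking_params("low_power"))
    print(tracking_params("full_power"))
```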
  • Patent number: 10275945
    Abstract: An electronic device includes at least one sensor, a display, and a processor. The processor is configured to determine a dimension of a physical object along an axis based on a change in position of the electronic device when the electronic device is moved from a first end of the physical object along the axis to a second end of the physical object along the axis. A method includes capturing and displaying imagery of a physical object at an electronic device, and receiving user input identifying at least two points of the physical object in the displayed imagery. The method further includes determining, at the electronic device, at least one dimensional aspect of the physical object based on the at least two points of the physical object using a three-dimensional mapping of the physical object.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 30, 2019
    Assignee: Google LLC
    Inventors: Johnny Chung Lee, Joel Hesch, Ryan Hickman, Patrick Mihelich, James Fung
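Both measurement paths in this abstract reduce to a distance computation once the endpoints are expressed in a common 3-D frame, as in the minimal sketch below (the data layout is assumed for illustration only).

```python
"""Minimal sketch of measuring an object dimension from two user-selected
points in a three-dimensional mapping, and from device displacement."""
import math


def dimension_from_points(p1, p2):
    """Euclidean distance between two mapped 3-D points (metres)."""
    return math.dist(p1, p2)


def dimension_from_motion(start_position, end_position):
    """Dimension along an axis from the device's change in position as it is
    moved from one end of the object to the other."""
    return math.dist(start_position, end_position)


if __name__ == "__main__":
    # Two points the user tapped on a table edge in the displayed imagery,
    # back-projected into the 3-D mapping of the scene.
    print(dimension_from_points((0.0, 0.0, 1.2), (0.9, 0.0, 1.2)))   # 0.9 m
    print(dimension_from_motion((0.0, 0.0, 0.0), (0.0, 0.0, 0.75)))  # 0.75 m
```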
  • Patent number: 10203209
    Abstract: A method includes: receiving, with a computing platform, respective trajectory data and map data independently generated by each of a plurality of vision-aided inertial navigation devices (VINS devices) traversing an environment, wherein the trajectory data specifies poses along a path through the environment for the respective VINS device and the map data specifies positions of observed features within the environment as determined by an estimator executed by the respective VINS device; determining, with the computing platform and based on the respective trajectory data and map data from each of the VINS devices, estimates for relative poses within the environment by determining transformations that geometrically relate the trajectory data and the map data between one or more pairs of the VINS devices; and generating, with the computing platform and based on the transformations, a composite map specifying positions within the environment for the features observed by the VINS devices.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: February 12, 2019
    Assignee: Regents of the University of Minnesota
    Inventors: Stergios I. Roumeliotis, Esha D. Nerurkar, Joel Hesch, Chao Guo, Ryan C. DuToit, Kourosh Sartipi, Georgios Georgiou
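A toy illustration of the map-merging step described above: once a transformation relating two devices' map frames has been estimated from their trajectory and map data, one device's feature positions can be expressed in the other's frame and concatenated into a composite map. The planar rigid transform below is a simplification for brevity, not the patented estimator.

```python
"""Minimal sketch of merging per-device VINS maps into a composite map."""
import math


def transform_features(features, theta, tx, ty):
    """Apply a rigid transform (rotation theta, translation tx, ty) that maps
    one device's frame into the reference device's frame."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in features]


def composite_map(reference_features, other_features, theta, tx, ty):
    """Composite map: reference features plus the other device's features
    expressed in the reference frame."""
    return list(reference_features) + transform_features(other_features, theta, tx, ty)


if __name__ == "__main__":
    map_a = [(0.0, 0.0), (1.0, 2.0)]
    map_b = [(0.5, 0.5)]                      # observed in device B's frame
    merged = composite_map(map_a, map_b, theta=math.pi / 2, tx=3.0, ty=0.0)
    print(merged)
```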
  • Publication number: 20190033988
    Abstract: A method for controller tracking with multiple degrees of freedom includes generating depth data at an electronic device based on a local environment proximate the electronic device. A set of positional data is generated for at least one spatial feature associated with a controller based on a pose of the electronic device, as determined using the depth data, relative to the at least one spatial feature associated with the controller. A set of rotational data is received that represents three degrees-of-freedom (3DoF) orientation of the controller within the local environment, and a six degrees-of-freedom (6DoF) position of the controller within the local environment is tracked based on the set of positional data and the set of rotational data.
    Type: Application
    Filed: July 26, 2017
    Publication date: January 31, 2019
    Inventors: Joel Hesch, Shiqi Chen, Johnny Lee, Rahul Garg
  • Patent number: 10154190
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 11, 2018
    Assignee: Google LLC
    Inventors: Joel Hesch, James Fung
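One way to picture the balancing act, under the simplifying assumption that image brightness scales with the product of exposure time and gain, is the sketch below; the motion-based exposure cap is a hypothetical rule, not the patented method.

```python
"""Minimal sketch of trading exposure time against sensor gain."""

def balance_gain_exposure(target_product, device_speed,
                          max_exposure_ms=33.0, max_gain=16.0):
    """Pick an exposure and gain whose product meets the brightness target.

    Faster device motion caps the exposure time (to limit motion blur);
    the remaining brightness is made up with analog gain.
    """
    # Cap exposure more aggressively as motion increases (hypothetical rule).
    exposure_ms = min(max_exposure_ms, max_exposure_ms / (1.0 + device_speed))
    gain = min(max_gain, target_product / exposure_ms)
    return exposure_ms, gain


if __name__ == "__main__":
    print(balance_gain_exposure(target_product=66.0, device_speed=0.0))  # static
    print(balance_gain_exposure(target_product=66.0, device_speed=3.0))  # moving fast
```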
  • Patent number: 10073531
    Abstract: An electronic device includes one or more imaging sensors (e.g., imaging cameras) and one or more non-image sensors, such as an inertial measurement unit (IMU), that can provide information indicative of the pose of the electronic device. The electronic device estimates its pose based on two independent sources of pose information: pose information generated at a relatively high rate based on non-visual information generated by the non-image sensors, and pose information generated at a relatively low rate based on imagery captured by the one or more imaging sensors. To achieve both a high pose-estimation rate and a high degree of pose-estimation accuracy, the electronic device adjusts a pose estimate based on the non-visual pose information at a high rate and, at a lower rate, spatially smooths the pose estimate based on the visual pose information.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: September 11, 2018
    Assignee: Google LLC
    Inventors: Joel Hesch, Esha Nerurkar
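The two-rate structure can be sketched as a propagate/correct loop: fast updates from non-image sensors, with occasional smoothing toward the visual estimate. The constant-velocity model, 1-D state, and blend factor below are illustrative assumptions, not the patented filter.

```python
"""Minimal sketch of combining high-rate non-visual pose updates with
low-rate visual corrections."""

class TwoRatePoseEstimator:
    def __init__(self, position=0.0, blend=0.3):
        self.position = position   # 1-D position, for brevity
        self.blend = blend         # weight given to each visual correction

    def propagate(self, velocity, dt):
        """High-rate update from non-image sensors (e.g. IMU-derived velocity)."""
        self.position += velocity * dt
        return self.position

    def correct(self, visual_position):
        """Low-rate update: smooth the estimate toward the visual pose."""
        self.position += self.blend * (visual_position - self.position)
        return self.position


if __name__ == "__main__":
    est = TwoRatePoseEstimator()
    for _ in range(10):                 # ten fast IMU-rate steps
        est.propagate(velocity=1.0, dt=0.01)
    est.correct(visual_position=0.12)   # one slower visual correction
    print(round(est.position, 4))
```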
  • Publication number: 20180211137
    Abstract: An electronic device includes one or more imaging cameras. After a reset of the device or other specified event, the electronic device identifies an estimate of the device's pose based on location data such as Global Positioning System (GPS) data, cellular tower triangulation data, wireless network address location data, and the like. The one or more imaging cameras may be used to capture imagery of the local environment of the electronic device, and this imagery is used to refine the estimated pose to identify a refined pose of the electronic device. The refined pose may be used to identify additional imagery information, such as environmental features, that can be used to enhance the location-based functionality of the electronic device.
    Type: Application
    Filed: March 16, 2018
    Publication date: July 26, 2018
    Inventors: Joel Hesch, Esha Nerurkar, Patrick Mihelich
  • Patent number: 9990547
    Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: June 5, 2018
    Assignee: Google LLC
    Inventors: Johnny Lee, Joel Hesch
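As a concrete (and hypothetical) example of motion-based selection, the sketch below keeps only features whose predicted next position, given the image-space shift implied by the device's motion, remains inside the frame; the patent states only that selection is based at least in part on the motion data.

```python
"""Minimal sketch of motion-aware feature selection for feature-following."""

def select_features(features, predicted_shift, width=640, height=480):
    """Keep features whose predicted next position stays inside the image.

    features:        list of (x, y) pixel positions in the current image
    predicted_shift: (dx, dy) image-space shift implied by the device motion
    """
    dx, dy = predicted_shift
    kept = []
    for x, y in features:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            kept.append((x, y))
    return kept


if __name__ == "__main__":
    features = [(10, 240), (320, 240), (630, 240)]
    # Device panning right shifts image content left by ~40 px (hypothetical).
    print(select_features(features, predicted_shift=(-40, 0)))
```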
  • Patent number: 9953243
    Abstract: An electronic device includes one or more imaging cameras. After a reset of the device or other specified event, the electronic device identifies an estimate of the device's pose based on location data such as Global Positioning System (GPS) data, cellular tower triangulation data, wireless network address location data, and the like. The one or more imaging cameras may be used to capture imagery of the local environment of the electronic device, and this imagery is used to refine the estimated pose to identify a refined pose of the electronic device. The refined pose may be used to identify additional imagery information, such as environmental features, that can be used to enhance the location-based functionality of the electronic device.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Joel Hesch, Esha Nerurkar, Patrick Mihelich
  • Patent number: 9940542
    Abstract: An electronic device reduces localization data based on feature characteristics identified from the data. Based on the feature characteristics, a quality value can be assigned to each identified feature, indicating the likelihood that the data associated with the feature will be useful in mapping a local environment of the electronic device. The localization data is reduced by removing data associated with features that have a low quality value, and the reduced localization data is used to map the local environment of the device by locating features identified from the reduced localization data in a frame of reference for the electronic device.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: April 10, 2018
    Assignee: Google LLC
    Inventors: Esha Nerurkar, Joel Hesch, Simon Lynen
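A minimal sketch of the data-reduction step described above: score each feature from its characteristics and drop the low-quality ones before mapping. The particular quality formula below (observation count and reprojection error) is an invented example of the "feature characteristics" the abstract refers to.

```python
"""Minimal sketch of reducing localization data by feature quality."""
from dataclasses import dataclass


@dataclass
class Feature:
    feature_id: int
    observations: int          # how many frames the feature was seen in
    reprojection_error: float  # average reprojection error in pixels


def quality(feature):
    """Higher is better: well-observed, low-error features score highest."""
    return feature.observations / (1.0 + feature.reprojection_error)


def reduce_localization_data(features, min_quality=2.0):
    """Drop data for features whose quality value is too low to help mapping."""
    return [f for f in features if quality(f) >= min_quality]


if __name__ == "__main__":
    features = [Feature(1, 12, 0.8), Feature(2, 2, 3.5), Feature(3, 6, 1.1)]
    print([f.feature_id for f in reduce_localization_data(features)])  # [1, 3]
```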
  • Publication number: 20180033201
    Abstract: A head mounted display (HMD) adjusts feature tracking parameters based on a power mode of the HMD. Examples of feature tracking parameters that can be adjusted include the number of features identified from captured images, the scale of features identified from captured images, the number of images employed for feature tracking, and the like. By adjusting its feature tracking parameters based on its power mode, the HMD can initiate the feature tracking process in low-power modes and thereby shorten the time to reach high-fidelity feature tracking when a user initiates a VR or AR experience at the HMD.
    Type: Application
    Filed: July 27, 2016
    Publication date: February 1, 2018
    Inventors: Joel Hesch, Ashish Shah, James Fung
  • Publication number: 20180035043
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Application
    Filed: October 11, 2017
    Publication date: February 1, 2018
    Inventors: Joel Hesch, James Fung
  • Publication number: 20170358142
    Abstract: An electronic device includes at least one sensor, a display, and a processor. The processor is configured to determine a dimension of a physical object along an axis based on a change in position of the electronic device when the electronic device is moved from a first end of the physical object along the axis to a second end of the physical object along the axis. A method includes capturing and displaying imagery of a physical object at an electronic device, and receiving user input identifying at least two points of the physical object in the displayed imagery. The method further includes determining, at the electronic device, at least one dimensional aspect of the physical object based on the at least two points of the physical object using a three-dimensional mapping of the physical object.
    Type: Application
    Filed: July 31, 2017
    Publication date: December 14, 2017
    Inventors: Johnny Chung Lee, Joel Hesch, Ryan Hickman, Patrick Mihelich, James Fung
  • Publication number: 20170343356
    Abstract: A method includes: receiving, with a computing platform, respective trajectory data and map data independently generated by each of a plurality of vision-aided inertial navigation devices (VINS devices) traversing an environment, wherein the trajectory data specifies poses along a path through the environment for the respective VINS device and the map data specifies positions of observed features within the environment as determined by an estimator executed by the respective VINS device; determining, with the computing platform and based on the respective trajectory data and map data from each of the VINS devices, estimates for relative poses within the environment by determining transformations that geometrically relate the trajectory data and the map data between one or more pairs of the VINS devices; and generating, with the computing platform and based on the transformations, a composite map specifying positions within the environment for the features observed by the VINS devices.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 30, 2017
    Inventors: Stergios I. Roumeliotis, Esha D. Nerurkar, Joel Hesch, Chao Guo, Ryan C. DuToit, Kourosh Sartipi, Georgios Georgiou
  • Publication number: 20170336439
    Abstract: A fault detection system for real-time visual-inertial odometry motion tracking allows immediate detection of an error when the motion of a device cannot be accurately determined. The system includes subdetectors that operate independently of, and in parallel to, a main system on a device to determine whether a condition exists that results in a main system error. Each subdetector covers a phase of a six-degrees-of-freedom (6DOF) estimation. If any of the subdetectors detects an error, a fault is output to the main system to indicate a motion tracking failure.
    Type: Application
    Filed: May 16, 2017
    Publication date: November 23, 2017
    Inventors: Mingyang Li, Joel Hesch, Zachary Moratto
  • Patent number: 9819855
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: November 14, 2017
    Assignee: Google Inc.
    Inventors: Joel Hesch, James Fung
  • Patent number: 9752892
    Abstract: Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: September 5, 2017
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
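The request/response flow in the last entry (patent 9752892) can be sketched as follows; the sensor names, mode names, and dictionary-based request format are assumptions for illustration, not the patented interface.

```python
"""Minimal sketch of a multi-mode sensor acquisition request handled by a
co-processor on behalf of an application processor."""

# Hypothetical sensors and the acquisition modes each supports.
SENSOR_MODES = {
    "camera": {"low_res_60hz", "high_res_15hz"},
    "imu":    {"batched", "streaming"},
}


def acquire(request):
    """Co-processor side: validate the request, acquire data from each sensor
    in its requested mode, and return the data to the application processor."""
    results = {}
    for sensor, mode in request.items():
        if mode not in SENSOR_MODES.get(sensor, set()):
            raise ValueError(f"{sensor} does not support mode {mode}")
        # Placeholder for a real driver read in the requested mode.
        results[sensor] = f"<{sensor} data acquired in {mode} mode>"
    return results


if __name__ == "__main__":
    # Application-processor request naming two sensors and their modes.
    print(acquire({"camera": "low_res_60hz", "imu": "batched"}))
```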