Patents by Inventor Joel Hesch
Joel Hesch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20170118400
Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
Type: Application
Filed: October 21, 2015
Publication date: April 27, 2017
Inventors: Joel Hesch, James Fung
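The trade-off this abstract describes can be sketched as a simple policy: shorten exposure when the device moves fast (to limit motion blur) and make up the lost sensitivity with gain. The following is a minimal illustrative sketch, not the patented method; the `balance_gain_and_exposure` name, the constants, and the sensitivity model are all assumptions made for the example.

```python
def balance_gain_and_exposure(motion_deg_per_s, illumination_lux,
                              max_exposure_ms=33.0, max_gain=8.0):
    """Pick an exposure/gain pair whose product matches a target
    sensitivity, shortening exposure when the device moves fast.

    The target-sensitivity constant and the blur limit below are
    illustrative, not values from the patent.
    """
    # Target total sensitivity (exposure * gain), higher in dim scenes.
    target = 400.0 / max(illumination_lux, 1.0)
    # Cap exposure so angular motion blur stays roughly bounded.
    blur_limit_ms = 500.0 / max(motion_deg_per_s, 1.0)
    exposure_ms = min(max_exposure_ms, blur_limit_ms, target)
    # Recover the remaining sensitivity with sensor gain.
    gain = min(max_gain, target / exposure_ms)
    return exposure_ms, gain
```

Under this policy a fast-moving device gets a shorter exposure and a higher gain than a slow-moving one under the same illumination.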
-
Publication number: 20170102772
Abstract: An electronic device includes one or more imaging sensors (e.g., imaging cameras) and one or more non-image sensors, such as an inertial measurement unit (IMU), that can provide information indicative of the pose of the electronic device. The electronic device estimates its pose based on two independent sources of pose information: pose information generated at a relatively high rate based on non-visual information generated by the non-image sensors, and pose information generated at a relatively low rate based on imagery captured by the one or more imaging sensors. To achieve both a high pose-estimation rate and a high degree of pose-estimation accuracy, the electronic device adjusts a pose estimate based on the non-visual pose information at a high rate, and at a lower rate spatially smooths the pose estimate based on the visual pose information.
Type: Application
Filed: October 7, 2015
Publication date: April 13, 2017
Inventors: Joel Hesch, Esha Nerurkar
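The high-rate/low-rate fusion described above resembles a complementary filter: propagate pose from inertial data at every IMU sample, then blend toward the vision-derived pose whenever a (slower) visual estimate arrives. A toy one-dimensional sketch, assuming a constant blend weight (the real system's estimator and smoothing scheme are not specified here):

```python
class PoseFuser:
    """Toy 1-D pose fuser: integrate IMU data at a high rate, then
    smooth toward visual pose estimates at a lower rate."""

    def __init__(self, blend=0.2):
        self.position = 0.0
        self.velocity = 0.0
        self.blend = blend  # weight given to each visual correction

    def imu_update(self, accel, dt):
        # High-rate propagation from non-visual (inertial) data.
        self.velocity += accel * dt
        self.position += self.velocity * dt

    def visual_update(self, visual_position):
        # Low-rate smoothing toward the vision-derived pose,
        # pulling the drifting inertial estimate back on track.
        self.position += self.blend * (visual_position - self.position)
```

The IMU path keeps the estimate fresh between camera frames; the visual path bounds its drift.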
-
Publication number: 20170046594
Abstract: An electronic device reduces localization data based on feature characteristics identified from the data. Based on the feature characteristics, a quality value can be assigned to each identified feature, indicating the likelihood that the data associated with the feature will be useful in mapping a local environment of the electronic device. The localization data is reduced by removing data associated with features having a low quality value, and the reduced localization data is used to map the local environment of the device by locating features identified from the reduced localization data in a frame of reference for the electronic device.
Type: Application
Filed: August 11, 2015
Publication date: February 16, 2017
Inventors: Esha Nerurkar, Joel Hesch, Simon Lynen
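The data-reduction step amounts to scoring each feature and dropping the low scorers. A minimal sketch; the quality score here (a weighted mix of sharpness and re-observation count) and the threshold are invented for illustration, since the abstract does not specify which feature characteristics feed the quality value:

```python
def reduce_localization_data(features, quality_threshold=0.5):
    """Drop features whose quality score suggests they will not be
    useful landmarks; keep the rest for mapping/localization."""

    def quality(f):
        # Illustrative score: sharp, frequently re-observed features
        # are more likely to be useful for mapping the environment.
        return 0.6 * f["sharpness"] + 0.4 * min(f["observations"] / 10.0, 1.0)

    return [f for f in features if quality(f) >= quality_threshold]
```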
-
Publication number: 20170018092
Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
Type: Application
Filed: August 15, 2016
Publication date: January 19, 2017
Inventors: Johnny Lee, Joel Hesch
-
Patent number: 9485366
Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
Type: Grant
Filed: March 10, 2016
Date of Patent: November 1, 2016
Assignee: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
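The core idea, carrying IMU bytes inside the pixel buffer of the frame so that one transfer delivers both, can be sketched with plain byte packing. This is an illustrative scheme only; the patent's actual pixel layout is not described here, and both function names are invented:

```python
def embed_sensor_data(image_rows, sensor_bytes):
    """Append raw sensor bytes as extra pixel rows of a grayscale
    frame, so one transfer carries both image and IMU data."""
    width = len(image_rows[0])
    # Zero-pad the sensor payload to a whole number of pixel rows.
    padded = sensor_bytes + bytes(-len(sensor_bytes) % width)
    extra = [list(padded[i:i + width]) for i in range(0, len(padded), width)]
    return image_rows + extra

def extract_sensor_data(frame, image_height, n_bytes):
    """Recover the sensor payload from the rows below the image."""
    flat = bytes(b for row in frame[image_height:] for b in row)
    return flat[:n_bytes]
```

The receiver only needs the original image height and payload length to split the frame back apart.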
-
Patent number: 9445015
Abstract: Example methods and systems for adjusting a sensor viewpoint to a virtual viewpoint are provided. An example method may involve receiving data from a first camera; receiving data from a second camera; transforming, from the first camera's viewpoint to a virtual viewpoint within the device, frames in a first plurality of frames based on an offset from the first camera to the virtual viewpoint; determining, in a second plurality of frames, one or more features and a movement, relative to the second camera's viewpoint, of the one or more features; transforming, from the second viewpoint to the virtual viewpoint, the movement of the one or more features based on an offset from the second camera to the virtual viewpoint; adjusting the transformed frames of the virtual viewpoint by an amount that is proportional to the transformed movement; and providing for display the adjusted and transformed frames of the first plurality of frames.
Type: Grant
Filed: February 20, 2014
Date of Patent: September 13, 2016
Assignee: Google Inc.
Inventors: Joel Hesch, Ryan Hickman, Johnny Lee
-
Patent number: 9437000
Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
Type: Grant
Filed: February 20, 2014
Date of Patent: September 6, 2016
Assignee: Google Inc.
Inventors: Johnny Lee, Joel Hesch
-
Patent number: 9424619
Abstract: Methods and systems for detecting frame tears are described. As one example, a mobile device may include at least one camera, a sensor, a co-processor, and an application processor. The co-processor is configured to generate a digital image including image data from the at least one camera and sensor data from the sensor. The co-processor is further configured to embed a frame identifier corresponding to the digital image in at least two corner pixels of the digital image. The application processor is configured to receive the digital image from the co-processor, determine a first value embedded in a first corner pixel of the digital image, and determine a second value embedded in a second corner pixel of the digital image. The application processor is also configured to provide an output indicative of a validity of the digital image based on a comparison between the first value and the second value.
Type: Grant
Filed: February 20, 2014
Date of Patent: August 23, 2016
Assignee: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
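The tear check itself is simple: if a frame buffer is overwritten mid-read, pixels from two different frames end up in one image, so an identifier stamped in opposite corners will no longer match. A minimal sketch under the assumption of a list-of-rows frame; the function names are invented for illustration:

```python
def embed_frame_id(frame, frame_id):
    """Writer side: stamp the same frame identifier into two
    corner pixels of the frame."""
    frame[0][0] = frame_id      # top-left corner
    frame[-1][-1] = frame_id    # bottom-right corner

def frame_is_valid(frame):
    """Reader side: a mismatch between the corner identifiers
    indicates a torn (partially overwritten) frame."""
    return frame[0][0] == frame[-1][-1]
```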
-
Publication number: 20160191722
Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
Type: Application
Filed: March 10, 2016
Publication date: June 30, 2016
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Patent number: 9313343
Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
Type: Grant
Filed: February 20, 2014
Date of Patent: April 12, 2016
Assignee: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Patent number: 9303999
Abstract: Methods and systems for determining an estimation of motion of a device are provided. An example method includes receiving data from an inertial measurement unit (IMU) of a device and receiving images from a camera of the device for a sliding time window. The method also includes determining an IMU estimation of motion of the device based on the data from the IMU, and a camera estimation of motion of the device based on feature tracking in the images. The method includes, based on the IMU estimation and the camera estimation having a difference of more than a threshold amount, determining one or more of a position or a velocity of the device for the sliding time window, and determining an overall estimation of motion of the device as supported by the data from the IMU and the position or velocity of the device.
Type: Grant
Filed: December 30, 2013
Date of Patent: April 5, 2016
Assignee: Google Technology Holdings LLC
Inventors: Joel Hesch, Johnny Lee, Simon Lynen
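The divergence test at the heart of this abstract, comparing an IMU-derived motion estimate against a camera-derived one and handling disagreement specially, can be sketched in one dimension. The weights and threshold below are invented for illustration; the patent's actual re-estimation over the sliding window is more involved than this fallback:

```python
def fuse_motion(imu_estimate, camera_estimate, threshold=0.5):
    """Combine two motion estimates for the same time window.

    If they disagree by more than the threshold, down-weight the
    (drift-prone) IMU estimate; otherwise take a plain average.
    """
    if abs(imu_estimate - camera_estimate) > threshold:
        # Disagreement: trust the feature-tracking estimate more.
        return 0.2 * imu_estimate + 0.8 * camera_estimate
    return 0.5 * (imu_estimate + camera_estimate)
```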
-
Patent number: 9277361
Abstract: Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
Type: Grant
Filed: February 20, 2014
Date of Patent: March 1, 2016
Assignee: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Patent number: 9243916
Abstract: This disclosure describes techniques for reducing or eliminating estimator inconsistency in vision-aided inertial navigation systems (VINS). For example, an observability-constrained VINS (OC-VINS) is described, which enforces the unobservable directions of the system to prevent spurious information gain and reduce inconsistency.
Type: Grant
Filed: February 21, 2014
Date of Patent: January 26, 2016
Assignee: Regents of the University of Minnesota
Inventors: Stergios I. Roumeliotis, Dimitrios G. Kottas, Chao Guo, Joel Hesch
-
Publication number: 20150362579
Abstract: Methods and systems for sensor calibration are described. An example method involves receiving image data from a first sensor and sensor data associated with the image data from a second sensor. The image data includes data representative of a target object. The method further involves determining an object identification for the target object based on the captured image data. Additionally, the method includes retrieving object data based on the object identification, where the object data includes data related to a three-dimensional representation of the target object. Additionally, the method includes determining a predicted sensor value based on the object data and the image data. Further, the method includes determining a sensor calibration value based on a difference between the received sensor data and the predicted sensor value. Moreover, the method includes adjusting the second sensor based on the sensor calibration value.
Type: Application
Filed: June 12, 2014
Publication date: December 17, 2015
Inventor: Joel Hesch
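Stripped to its arithmetic, the calibration step above is an offset computation: the calibration value is the gap between what the sensor reported and what the known 3-D model of the recognized object predicts, and subsequent readings are corrected by that gap. A minimal sketch with invented function names, assuming a simple additive bias model (the abstract does not say the error is additive):

```python
def sensor_calibration_value(received, predicted):
    """Calibration offset: gap between the reported sensor value and
    the value predicted from the recognized object's 3-D model."""
    return received - predicted

def adjust_sensor(reading, calibration_value):
    """Correct a later reading using the stored offset."""
    return reading - calibration_value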
-
Publication number: 20150310310
Abstract: An electronic device includes one or more imaging cameras. After a reset of the device or another specified event, the electronic device identifies an estimate of the device's pose based on location data such as Global Positioning System (GPS) data, cellular tower triangulation data, wireless network address location data, and the like. The one or more imaging cameras may be used to capture imagery of the local environment of the electronic device, and this imagery is used to refine the estimated pose to identify a refined pose of the electronic device. The refined pose may be used to identify additional imagery information, such as environmental features, that can be used to enhance the location-based functionality of the electronic device.
Type: Application
Filed: April 24, 2015
Publication date: October 29, 2015
Inventors: Joel Hesch, Esha Nerurkar, Patrick Mihelich
-
Publication number: 20150235335
Abstract: Methods and systems for detecting frame tears are described. As one example, a mobile device may include at least one camera, a sensor, a co-processor, and an application processor. The co-processor is configured to generate a digital image including image data from the at least one camera and sensor data from the sensor. The co-processor is further configured to embed a frame identifier corresponding to the digital image in at least two corner pixels of the digital image. The application processor is configured to receive the digital image from the co-processor, determine a first value embedded in a first corner pixel of the digital image, and determine a second value embedded in a second corner pixel of the digital image. The application processor is also configured to provide an output indicative of a validity of the digital image based on a comparison between the first value and the second value.
Type: Application
Filed: February 20, 2014
Publication date: August 20, 2015
Applicant: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Publication number: 20150237479
Abstract: Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
Type: Application
Filed: February 20, 2014
Publication date: August 20, 2015
Applicant: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Publication number: 20150233743
Abstract: Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
Type: Application
Filed: February 20, 2014
Publication date: August 20, 2015
Applicant: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
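The request/dispatch flow above, where the application processor names the sensors and their acquisition modes and the co-processor services them all, can be sketched as a tiny data structure plus a dispatch loop. The `SensorRequest` shape and the `read_fn` callback are invented for illustration; the patent describes a hardware protocol, not a Python API:

```python
from dataclasses import dataclass

@dataclass
class SensorRequest:
    sensor_id: str
    mode: str  # one of the sensor's supported acquisition modes

def acquire(requests, read_fn):
    """Co-processor side: read each requested sensor in its requested
    acquisition mode and return all results for the application
    processor in one batch."""
    return {r.sensor_id: read_fn(r.sensor_id, r.mode) for r in requests}
```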
-
Publication number: 20150237223
Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
Type: Application
Filed: February 20, 2014
Publication date: August 20, 2015
Applicant: Google Inc.
Inventors: James Fung, Joel Hesch, Johnny Lee
-
Publication number: 20150235099
Abstract: Methods and systems for determining features of interest for following within various frames of data received from multiple sensors of a device are disclosed. An example method may include receiving data from a plurality of sensors of a device. The method may also include determining, based on the data, motion data that is indicative of a movement of the device in an environment. The method may also include, as the device moves in the environment, receiving image data from a camera of the device. The method may additionally include selecting, based at least in part on the motion data, features in the image data for feature-following. The method may further include estimating one or more of a position of the device or a velocity of the device in the environment as supported by the data from the plurality of sensors and feature-following of the selected features in the images.
Type: Application
Filed: February 20, 2014
Publication date: August 20, 2015
Applicant: Google Inc.
Inventors: Johnny Lee, Joel Hesch