Patents by Inventor James Fung

James Fung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240078251
    Abstract: The systems and methods described here can reduce the storage space required (memory and/or disk) to store certain types of data, provide efficient (fast) creation, modification and retrieval of such data, and support such data within the framework of a multi-version database. In some embodiments, the systems and methods can store each field of a set of records as a vector of values, e.g., a data vector. A set of records can be represented using a vector id vector, or “vid” vector, wherein each element of the vid vector contains a reference to the memory location of a data vector. A header table can store associations between labels and “vid” vectors that pertain to those labels. Identical data vectors can be re-used between different record sets or vid vectors needing that vector, thus saving space.
    Type: Application
    Filed: November 13, 2023
    Publication date: March 7, 2024
    Inventors: Robert N. WALKER, James R. CROZMAN, Jansen Donald KRAY, Mosa To Fung YEUNG, James Gordon DAGG
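
The entry above describes a column-oriented layout: each record field is stored as a data vector, a "vid" vector holds references to those data vectors, and a header table maps labels to vid vectors, with identical data vectors shared across record sets. A minimal Python sketch of that idea follows; the class and field names are illustrative assumptions, not taken from the patent application.

```python
# Illustrative sketch of the vector-of-values layout described above.
# Names (DataVector, VectorStore, RecordSet) are assumptions for clarity.

class DataVector:
    """Stores all values of one field for a set of records."""
    def __init__(self, values):
        self.values = tuple(values)   # immutable so it can be shared safely

class VectorStore:
    """Interns data vectors so identical vectors are stored only once."""
    def __init__(self):
        self._pool = {}
    def intern(self, values):
        key = tuple(values)
        if key not in self._pool:
            self._pool[key] = DataVector(key)
        return self._pool[key]        # shared reference, saving space

class RecordSet:
    """A 'vid' vector: one reference per field to a (possibly shared) data vector."""
    def __init__(self, store, columns):
        # columns: {field_name: [value, value, ...]}
        self.vid = {name: store.intern(vals) for name, vals in columns.items()}

# Header table associating labels (here, version labels) with vid vectors.
store = VectorStore()
header = {
    "v1": RecordSet(store, {"id": [1, 2, 3], "name": ["a", "b", "c"]}),
    "v2": RecordSet(store, {"id": [1, 2, 3], "name": ["a", "b", "x"]}),
}
# The identical "id" vector is stored once and referenced by both versions.
assert header["v1"].vid["id"] is header["v2"].vid["id"]
```
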
  • Patent number: 10529135
    Abstract: A head mounted display (HMD) adjusts feature tracking parameters based on a power mode of the HMD. Examples of feature tracking parameters that can be adjusted include the number of features identified from captured images, the scale of features identified from captured images, the number of images employed for feature tracking, and the like. By adjusting its feature tracking parameters based on its power mode, the HMD can initiate the feature tracking process in low-power modes and thereby shorten the time for high-fidelity feature tracking when a user initiates a VR or AR experience at the HMD.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: January 7, 2020
    Assignee: GOOGLE LLC
    Inventors: Joel Hesch, Ashish Shah, James Fung
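
The entry above (and its published application further down) describes scaling the feature-tracking workload with the HMD's power mode. Below is a hedged sketch of what such a parameter table might look like; the mode names and values are assumptions for illustration only.

```python
# Hypothetical mapping from power mode to feature-tracking parameters,
# illustrating the kind of adjustment the abstract describes.
TRACKING_PARAMS = {
    # mode:      (max features per image, pyramid scales, images per update)
    "low_power": (50,  1, 1),
    "balanced":  (200, 2, 2),
    "full":      (800, 4, 4),
}

def configure_tracker(power_mode):
    max_features, scales, images = TRACKING_PARAMS[power_mode]
    return {
        "max_features": max_features,
        "pyramid_levels": scales,
        "images_per_update": images,
    }

# Start tracking cheaply while the device is in a low-power mode, then scale
# up when a VR/AR session begins, so high-fidelity tracking is ready sooner.
print(configure_tracker("low_power"))
print(configure_tracker("full"))
```
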
  • Patent number: 10275945
    Abstract: An electronic device includes at least one sensor, a display, and a processor. The processor is configured to determine a dimension of a physical object along an axis based on a change in position of the electronic device when the electronic device is moved from a first end of the physical object along the axis to a second end of the physical object along the axis. A method includes capturing and displaying imagery of a physical object at an electronic device, and receiving user input identifying at least two points of the physical object in the displayed imagery. The method further includes determining, at the electronic device, at least one dimensional aspect of the physical object based on the at least two points of the physical object using a three-dimensional mapping of the physical object.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Johnny Chung Lee, Joel Hesch, Ryan Hickman, Patrick Mihelich, James Fung
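
Both measurement approaches in the entry above reduce to a distance computation: either between two tracked device positions, or between two user-selected points mapped into 3D. A minimal sketch follows, assuming the points are already expressed in a common metric coordinate frame; the values are illustrative.

```python
import math

def distance_3d(p1, p2):
    """Euclidean distance between two 3D points (metres, same frame)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Device moved from one end of the object to the other: use the change in
# tracked device position as the dimension along that axis.
start_pose, end_pose = (0.0, 0.0, 0.0), (0.0, 0.0, 1.2)
print(distance_3d(start_pose, end_pose))   # ~1.2 m

# Two user-selected image points back-projected into the 3D mapping.
point_a, point_b = (0.1, 0.0, 2.0), (0.1, 0.8, 2.0)
print(distance_3d(point_a, point_b))       # ~0.8 m
```
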
  • Patent number: 10154190
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 11, 2018
    Assignee: GOOGLE LLC
    Inventors: Joel Hesch, James Fung
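
The gain/exposure trade-off in the entry above can be illustrated with a simple rule: cap exposure time when the device is moving quickly (to limit motion blur) and make up the lost brightness with sensor gain. The thresholds and formula below are assumptions for illustration, not values from the patent.

```python
def balance_gain_exposure(target_exposure_ms, angular_velocity_rad_s,
                          max_blur_exposure_ms=33.0):
    """Trade exposure time against gain based on device motion.

    Faster motion -> shorter exposure (less blur) -> higher gain.
    All thresholds here are illustrative assumptions.
    """
    # Allow less exposure time the faster the device rotates.
    motion_cap_ms = max_blur_exposure_ms / (1.0 + angular_velocity_rad_s)
    exposure_ms = min(target_exposure_ms, motion_cap_ms)
    # Compensate the lost brightness with gain, clamped to the sensor's range.
    gain = min(target_exposure_ms / exposure_ms, 16.0)
    return exposure_ms, gain

print(balance_gain_exposure(30.0, 0.1))  # nearly full exposure, low gain
print(balance_gain_exposure(30.0, 3.0))  # short exposure, higher gain
```
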
  • Publication number: 20180033201
    Abstract: A head mounted display (HMD) adjusts feature tracking parameters based on a power mode of the HMD. Examples of feature tracking parameters that can be adjusted include the number of features identified from captured images, the scale of features identified from captured images, the number of images employed for feature tracking, and the like. By adjusting its feature tracking parameters based on its power mode, the HMD can initiate the feature tracking process in low-power modes and thereby shorten the time for high-fidelity feature tracking when a user initiates a VR or AR experience at the HMD.
    Type: Application
    Filed: July 27, 2016
    Publication date: February 1, 2018
    Inventors: Joel Hesch, Ashish Shah, James Fung
  • Publication number: 20180035043
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Application
    Filed: October 11, 2017
    Publication date: February 1, 2018
    Inventors: Joel HESCH, James FUNG
  • Publication number: 20170358142
    Abstract: An electronic device includes at least one sensor, a display, and a processor. The processor is configured to determine a dimension of a physical object along an axis based on a change in position of the electronic device when the electronic device is moved from a first end of the physical object along the axis to a second end of the physical object along the axis. A method includes capturing and displaying imagery of a physical object at an electronic device, and receiving user input identifying at least two points of the physical object in the displayed imagery. The method further includes determining, at the electronic device, at least one dimensional aspect of the physical object based on the at least two points of the physical object using a three-dimensional mapping of the physical object.
    Type: Application
    Filed: July 31, 2017
    Publication date: December 14, 2017
    Inventors: Johnny Chung Lee, Joel Hesch, Ryan Hickman, Patrick Mihelich, James Fung
  • Patent number: 9819855
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: November 14, 2017
    Assignee: Google Inc.
    Inventors: Joel Hesch, James Fung
  • Patent number: 9752892
    Abstract: Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: September 5, 2017
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
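
The request described in the entry above names two or more sensors and, for each, the acquisition mode to use. Below is a hedged sketch of such a request and the co-processor's dispatch of it; the sensor names and mode strings are illustrative assumptions.

```python
# Illustrative request from an application processor to a co-processor:
# each entry names a sensor and the acquisition mode selected for it.
request = [
    {"sensor": "imu",    "mode": "continuous_200hz"},
    {"sensor": "camera", "mode": "on_demand_single_frame"},
]

def handle_request(request, drivers):
    """Co-processor side: run each sensor in its requested mode and
    return the collected data to the application processor."""
    results = {}
    for entry in request:
        driver = drivers[entry["sensor"]]
        results[entry["sensor"]] = driver(entry["mode"])
    return results

# Stub drivers standing in for real sensor hardware.
drivers = {
    "imu":    lambda mode: {"mode": mode, "samples": [(0.0, 0.0, 9.8)]},
    "camera": lambda mode: {"mode": mode, "frame": b"\x00" * 16},
}
print(handle_request(request, drivers))
```
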
  • Publication number: 20170118400
    Abstract: An electronic device balances gain and exposure at an imaging sensor of the device based on detected image capture conditions, such as motion of the electronic device, distance of a scene from the electronic device, and predicted illumination conditions for the electronic device. By balancing the gain and exposure, the quality of images captured by the imaging sensor is enhanced, which in turn provides for improved support of location-based functionality.
    Type: Application
    Filed: October 21, 2015
    Publication date: April 27, 2017
    Inventors: Joel Hesch, James Fung
  • Patent number: 9596443
    Abstract: Methods and systems for providing depth data and image data to an application processor on a mobile device are described. An example method involves receiving image data from at least one camera of the mobile device and receiving depth data from a depth processor of the mobile device. The method further involves generating a digital image that includes at least the image data and the depth data. The depth data may be embedded in pixels of the digital image, for instance. Further, the method then involves providing the digital image to an application processor of the mobile device using a camera bus interface. Thus, the depth data and the image data may be provided to the application processor in a single data structure.
    Type: Grant
    Filed: February 25, 2016
    Date of Patent: March 14, 2017
    Assignee: Google Inc.
    Inventors: James Fung, Johnny Lee
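
The entry above describes packing depth data alongside image data into one digital image so both travel to the application processor over the camera bus as a single structure. A minimal NumPy sketch follows, with an assumed layout (image rows first, depth rows appended as bytes); the actual pixel layout is not specified by the abstract.

```python
import numpy as np

def pack_image_and_depth(image, depth):
    """Append 16-bit depth rows below an 8-bit grayscale image so both can
    be delivered to the application processor as one frame (layout assumed)."""
    depth_bytes = depth.astype(np.uint16).view(np.uint8)    # split to bytes
    depth_rows = depth_bytes.reshape(depth.shape[0], -1)    # 2 bytes per pixel
    return np.vstack([image, depth_rows])

def unpack_image_and_depth(frame, image_rows, depth_shape):
    image = frame[:image_rows]
    depth = frame[image_rows:].reshape(-1).view(np.uint16).reshape(depth_shape)
    return image, depth

image = np.zeros((480, 640), dtype=np.uint8)
depth = np.full((240, 320), 1500, dtype=np.uint16)   # depth in millimetres
frame = pack_image_and_depth(image, depth)
img2, depth2 = unpack_image_and_depth(frame, 480, depth.shape)
assert np.array_equal(depth, depth2)
```
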
  • Patent number: 9485366
    Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
    Type: Grant
    Filed: March 10, 2016
    Date of Patent: November 1, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
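
The distinctive step in the entry above is buffering IMU samples from the first timeframe so the digital image built for the second timeframe can carry the sensor data from both. The sketch below illustrates only that buffering, leaving out the pixel-embedding detail; the class and field names are assumptions.

```python
from collections import deque

class SensorFrameComposer:
    """Buffers IMU samples between camera frames so each digital image can
    carry sensor data from both the previous and the current timeframe.
    A simplified sketch; in the abstract the data is embedded into pixels."""
    def __init__(self):
        self._buffer = deque()      # IMU samples since the last frame
        self._previous = []         # samples from the prior timeframe

    def add_imu_sample(self, sample):
        self._buffer.append(sample)

    def compose_frame(self, image_data):
        current = list(self._buffer)
        self._buffer.clear()
        frame = {
            "image": image_data,
            "imu_previous_timeframe": self._previous,
            "imu_current_timeframe": current,
        }
        self._previous = current
        return frame               # handed off to the application processor

composer = SensorFrameComposer()
composer.add_imu_sample((0.01, -0.02, 9.81))
print(composer.compose_frame(b"frame-1"))
composer.add_imu_sample((0.00, -0.01, 9.80))
print(composer.compose_frame(b"frame-2"))
```
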
  • Patent number: 9424619
    Abstract: Methods and systems for detecting frame tears are described. As one example, a mobile device may include at least one camera, a sensor, a co-processor, and an application processor. The co-processor is configured to generate a digital image including image data from the at least one camera and sensor data from the sensor. The co-processor is further configured to embed a frame identifier corresponding to the digital image in at least two corner pixels of the digital image. The application processor is configured to receive the digital image from the co-processor, determine a first value embedded in a first corner pixel of the digital image, and determine a second value embedded in a second corner pixel of the digital image. The application processor is also configured to provide an output indicative of a validity of the digital image based on a comparison between the first value and the second value.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: August 23, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
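
The frame-tear check in the entry above comes down to writing the same frame identifier into at least two corner pixels and verifying that they still match after transfer; a torn frame mixes rows from two different captures, so its corners disagree. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def embed_frame_id(image, frame_id):
    """Write the frame identifier into two corner pixels (8-bit sketch)."""
    image[0, 0] = frame_id % 256          # top-left corner
    image[-1, -1] = frame_id % 256        # bottom-right corner
    return image

def is_frame_valid(image):
    """A valid frame carries the same identifier in both corners."""
    return image[0, 0] == image[-1, -1]

frame_a = embed_frame_id(np.zeros((480, 640), dtype=np.uint8), 41)
frame_b = embed_frame_id(np.zeros((480, 640), dtype=np.uint8), 42)
torn = frame_a.copy()
torn[240:] = frame_b[240:]                # bottom half from the next frame
print(is_frame_valid(frame_a))            # True
print(is_frame_valid(torn))               # False: frame tear detected
```
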
  • Publication number: 20160191722
    Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
    Type: Application
    Filed: March 10, 2016
    Publication date: June 30, 2016
    Inventors: James Fung, Joel Hesch, Johnny Lee
  • Publication number: 20160173848
    Abstract: Methods and systems for providing depth data and image data to an application processor on a mobile device are described. An example method involves receiving image data from at least one camera of the mobile device and receiving depth data from a depth processor of the mobile device. The method further involves generating a digital image that includes at least the image data and the depth data. The depth data may be embedded in pixels of the digital image, for instance. Further, the method then involves providing the digital image to an application processor of the mobile device using a camera bus interface. Thus, the depth data and the image data may be provided to the application processor in a single data structure.
    Type: Application
    Filed: February 25, 2016
    Publication date: June 16, 2016
    Inventors: James Fung, Johnny Lee
  • Patent number: 9313343
    Abstract: Methods and systems for communicating sensor data on a mobile device are described. An example method involves receiving, by a processor and from an inertial measurement unit (IMU), sensor data corresponding to a first timeframe, and storing the sensor data using a data buffer. The processor may also receive image data and sensor data corresponding to a second timeframe. The processor may then generate a digital image that includes at least the image data corresponding to the second timeframe and the sensor data corresponding to the first timeframe and the second timeframe. The processor may embed the stored sensor data corresponding to the first timeframe and the second timeframe in pixels of the digital image. And the processor may provide the digital image to an application processor of the mobile device.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: April 12, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
  • Patent number: 9300880
    Abstract: Methods and systems for providing sensor data and image data to an application processor on a mobile device are described. An example method involves receiving image data from at least one camera of the mobile device and receiving sensor data from an inertial measurement unit (IMU) of the mobile device. The method further involves generating a digital image that includes at least the image data and the sensor data. The sensor data may be embedded in pixels of the digital image, for instance. Further, the method then involves providing the digital image to an application processor of the mobile device using a camera bus interface. Thus, the sensor data and the image data may be provided to the application processor in a single data structure.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: March 29, 2016
    Assignee: Google Technology Holdings LLC
    Inventors: James Fung, Johnny Lee
  • Patent number: 9277361
    Abstract: Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: March 1, 2016
    Assignee: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
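
The validity check in the entry above compares a camera-derived motion estimate against an IMU-derived one and flags the timing information when the two disagree by more than a threshold variance. A hedged sketch follows; the threshold value and the per-axis displacement representation are assumptions.

```python
def timing_is_valid(image_motion, imu_motion, threshold=0.05):
    """Compare two motion estimates (displacement per axis, in metres).

    If the camera-derived and IMU-derived estimates disagree by more than
    the threshold, the timestamps of one data stream are likely wrong.
    The threshold value here is an illustrative assumption.
    """
    variance = max(abs(a - b) for a, b in zip(image_motion, imu_motion))
    return variance <= threshold

# Estimates agree: the timing information is plausibly valid.
print(timing_is_valid((0.10, 0.00, 0.02), (0.11, 0.00, 0.03)))   # True
# Estimates diverge: report the timing information as suspect.
print(timing_is_valid((0.10, 0.00, 0.02), (0.30, 0.05, 0.02)))   # False
```
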
  • Publication number: 20150237479
    Abstract: Methods and systems for cross-validating sensor data are described. An example method involves receiving image data and first timing information associated with the image data, and receiving sensor data and second timing information associated with the sensor data. The method further involves determining a first estimation of motion of the mobile device based on the image data and the first timing information, and determining a second estimation of the motion of the mobile device based on the sensor data and the second timing information. Additionally, the method involves determining whether the first estimation is within a threshold variance of the second estimation. The method then involves providing an output indicative of a validity of the first timing information and the second timing information based on whether the first estimation is within the threshold variance of the second estimation.
    Type: Application
    Filed: February 20, 2014
    Publication date: August 20, 2015
    Applicant: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee
  • Publication number: 20150233743
    Abstract: Methods and systems for acquiring sensor data using multiple acquisition modes are described. An example method involves receiving, by a co-processor and from an application processor, a request for sensor data. The request identifies at least two sensors of a plurality of sensors for which data is requested. The at least two sensors are configured to acquire sensor data in a plurality of acquisition modes, and the request further identifies for the at least two sensors respective acquisition modes for acquiring data that are selected from among the plurality of acquisition modes. In response to receiving the request, the co-processor causes the at least two sensors to acquire data in the respective acquisition modes. The co-processor receives first sensor data from a first sensor and second sensor data from a second sensor, and the co-processor provides the first sensor data and the second sensor data to the application processor.
    Type: Application
    Filed: February 20, 2014
    Publication date: August 20, 2015
    Applicant: Google Inc.
    Inventors: James Fung, Joel Hesch, Johnny Lee