Patents by Inventor Jonathan Daniel Ventura

Jonathan Daniel Ventura has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9495761
    Abstract: Various embodiments each include at least one of systems, methods, devices, and software for environment mapping with automatic motion model selection. One embodiment in the form of a method includes receiving a video frame captured by a camera device into memory and estimating the type of motion from a previously received video frame held in memory to the received video frame. When the type of motion is the same as the motion type of a current keyframe group held in memory, the method includes adding the received video frame to the current keyframe group. Conversely, when the type of motion is not the same as the motion type of the current keyframe group held in memory, the method includes creating a new keyframe group in memory and adding the received video frame to the new keyframe group. (An illustrative sketch of this grouping step appears after this listing.)
    Type: Grant
    Filed: November 3, 2014
    Date of Patent: November 15, 2016
    Assignee: The Regents of the University of California
    Inventors: Steffen Gauglitz, Christopher Michael Sweeney, Jonathan Daniel Ventura, Matthew Alan Turk, Tobias Höllerer
  • Publication number: 20150262412
    Abstract: Methods for determination of AR lighting with dynamic geometry are disclosed. A camera pose for a first image comprising a plurality of pixels may be determined, where each pixel in the first image comprises a depth value and a color value. The first image may correspond to a portion of a 3D model. A second image may be obtained by projecting the portion of the 3D model into a camera field of view based on the camera pose. A composite image comprising a plurality of composite pixels may be obtained based, in part, on the first image and the second image, where each composite pixel in a subset of the plurality of composite pixels is obtained based, in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image. (An illustrative sketch of this compositing step appears after this listing.)
    Type: Application
    Filed: January 9, 2015
    Publication date: September 17, 2015
    Inventors: Lukas Gruber, Dieter Schmalstieg, Jonathan Daniel Ventura
  • Publication number: 20150125045
    Abstract: Various embodiments each include at least one of systems, methods, devices, and software for environment mapping with automatic motion model selection. One embodiment in the form of a method includes receiving a video frame captured by a camera device into memory and estimating the type of motion from a previously received video frame held in memory to the received video frame. When the type of motion is the same as the motion type of a current keyframe group held in memory, the method includes adding the received video frame to the current keyframe group. Conversely, when the type of motion is not the same as the motion type of the current keyframe group held in memory, the method includes creating a new keyframe group in memory and adding the received video frame to the new keyframe group.
    Type: Application
    Filed: November 3, 2014
    Publication date: May 7, 2015
    Inventors: Steffen Gauglitz, Christopher Michael Sweeney, Jonathan Daniel Ventura, Matthew Alan Turk, Tobias Höllerer
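
The keyframe grouping described in patent 9495761 (and publication 20150125045) can be pictured with a short sketch. The code below is not taken from the patent: the names KeyframeGroup, classify_motion, and add_frame, the "rotation"/"general" labels, and the displacement-spread heuristic are illustrative assumptions standing in for the patent's motion-model selection step; only the group-or-new-group logic follows the abstract.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class KeyframeGroup:
    motion_type: str                              # e.g. "rotation" or "general"
    frames: List[object] = field(default_factory=list)

def classify_motion(displacements: List[Tuple[float, float]]) -> str:
    """Toy stand-in for motion-model selection: label the inter-frame motion as
    rotation-only or general (parallax-inducing). A real system would fit
    competing geometric models; here we merely threshold the spread of matched
    feature displacements as a placeholder heuristic."""
    if not displacements:
        return "rotation"
    dx = [d[0] for d in displacements]
    dy = [d[1] for d in displacements]
    spread = (max(dx) - min(dx)) + (max(dy) - min(dy))
    return "general" if spread > 5.0 else "rotation"

def add_frame(groups: List[KeyframeGroup], frame: object,
              displacements: List[Tuple[float, float]]) -> List[KeyframeGroup]:
    """Append the frame to the current keyframe group when its estimated motion
    type matches; otherwise create a new keyframe group, as in the abstract."""
    motion = classify_motion(displacements)
    if groups and groups[-1].motion_type == motion:
        groups[-1].frames.append(frame)
    else:
        groups.append(KeyframeGroup(motion_type=motion, frames=[frame]))
    return groups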
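
The per-pixel compositing in publication 20150262412 can be sketched in a similar spirit. This is a minimal illustration, not the claimed method: the function name composite_by_depth, the threshold tau, and the rule of keeping the rendered pixel where the depths agree and the captured pixel where they differ are all assumptions; the abstract only states that composite pixels depend on the absolute depth difference between corresponding pixels of the two images.

import numpy as np

def composite_by_depth(captured_rgb: np.ndarray, captured_depth: np.ndarray,
                       rendered_rgb: np.ndarray, rendered_depth: np.ndarray,
                       tau: float = 0.05) -> np.ndarray:
    """Combine a captured RGB-D image with an image rendered from the 3D model
    (both already aligned via the estimated camera pose). Each composite pixel
    is chosen from the per-pixel absolute depth difference: where the model
    agrees with the scene we keep the rendered pixel, otherwise the captured
    one (assumed rule for this sketch)."""
    diff = np.abs(captured_depth - rendered_depth)   # |d_captured - d_rendered| per pixel
    agree = (diff < tau)[..., None]                  # broadcast the mask over RGB channels
    return np.where(agree, rendered_rgb, captured_rgb)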