Patents by Inventor Jan-Michael Frahm

Jan-Michael Frahm has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160035139
    Abstract: Methods, systems, and computer readable media for low latency stabilization for head-worn displays are disclosed. According to one aspect, the subject matter described herein includes a system for low latency stabilization of a head-worn display. The system includes a low latency pose tracker having one or more rolling-shutter cameras that capture a 2D image by exposing each row of a frame at a later point in time than the previous row and that output image data row by row, and a tracking module for receiving image data row by row and using that data to generate a local appearance manifold. The generated manifold is used to track camera movements, which are used to produce a pose estimate.
    Type: Application
    Filed: March 13, 2014
    Publication date: February 4, 2016
    Inventors: Henry Fuchs, Anselmo A. Lastra, Jan-Michael Frahm, Nate Michael Dierk, David Paul Perra
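The row-by-row processing idea above can be sketched in a few lines: instead of waiting for a full frame, an update is produced as each camera row arrives. This is an illustrative simplification, not the patented method — the correlation-based shift estimate stands in for the appearance-manifold tracking, and all function names are assumptions:

```python
import numpy as np

def row_shift_estimate(ref_row, new_row):
    """Estimate the integer horizontal shift between two image rows by
    maximizing zero-mean cross-correlation (a crude motion cue)."""
    ref = ref_row - ref_row.mean()
    new = new_row - new_row.mean()
    corr = np.correlate(new, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def track_rows(ref_frame, incoming_rows):
    """Process image data row by row, emitting a motion estimate as each
    rolling-shutter row arrives rather than once per full frame."""
    estimates = []
    for r, row in enumerate(incoming_rows):
        estimates.append(row_shift_estimate(ref_frame[r], row))
    return estimates
```

A real system would fold each per-row estimate into a full 6-DoF pose filter; here the per-row shift merely stands in for that low-latency update.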
  • Patent number: 9208612
    Abstract: Methods are disclosed for generating a three-dimensional representation of an object in a reference plane from a depth map containing distances from a reference point to pixels in an image of the object taken from that point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
    Type: Grant
    Filed: February 11, 2011
    Date of Patent: December 8, 2015
    Assignees: The University of North Carolina at Chapel Hill, Eidgenossische Technische Hochschule Zurich
    Inventors: Jan-Michael Frahm, Marc Andre Leon Pollefeys, David Robert Gallup
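The voxel-weighting step this abstract describes can be illustrated with a deliberately simplified top-down case (vertical rays only, binary votes). This is a sketch under assumed conventions — depth measured downward from the sensor, +1/-1 occupancy votes — not the patented weighting scheme, and the function name is invented:

```python
import numpy as np

def height_map_from_depth(depth, grid_height, sensor_height):
    """Assign a weight to each voxel along the vertical ray through each
    depth-map pixel, then read a height map out of the weighted grid.
    Voxels at or below the measured surface vote occupied (+1); voxels
    above it vote empty (-1)."""
    h, w = depth.shape
    weights = np.zeros((grid_height, h, w))
    for i in range(h):
        for j in range(w):
            surface = sensor_height - depth[i, j]  # height of the hit point
            for z in range(grid_height):
                weights[z, i, j] = 1.0 if z <= surface else -1.0
    # height value = highest voxel in each column with positive weight
    height = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            occupied = np.nonzero(weights[:, i, j] > 0)[0]
            height[i, j] = occupied.max() if occupied.size else 0
    return height
```

With multiple depth maps the per-voxel weights would accumulate across views before the height is read out, which is where the probabilistic occupancy grid of the n-layer variant comes in.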
  • Patent number: 9196084
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is carried by a human user as he or she passes around some or all of the object. During the acquisition of a series of digital images of an object of interest, various types of user feedback may be provided to a human user operator of the mobile device, and particular images may be selected for further analysis in various manners. Furthermore, the calculation of object volume and/or other determined object information may include generating and manipulating a computer model or other representation of the object from selected images.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 24, 2015
    Assignee: URC Ventures Inc.
    Inventors: David Boardman, Charles Erignac, Srinivas Kapaganty, Jan-Michael Frahm, Ben Semerjian
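Once a height-field model of the stockpile has been reconstructed from the selected images, the volume calculation itself reduces to integrating cell heights over the grid. The sketch below shows only that final step, under the assumption of a regular grid and a flat base plane; the function name and parameters are illustrative, not from the patent:

```python
import numpy as np

def stockpile_volume(height_grid, cell_size, base_height=0.0):
    """Integrate a reconstructed height field to estimate object volume:
    each grid cell contributes (height - base) * cell_area, with cells
    below the base plane clipped to zero."""
    above = np.clip(height_grid - base_height, 0.0, None)
    return float(above.sum() * cell_size * cell_size)
```

For example, a uniform 2 m pile over a 10x10 grid of 0.5 m cells yields 2 x 100 x 0.25 = 50 cubic meters.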
  • Publication number: 20150138069
    Abstract: Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display are disclosed. According to one aspect, a system for unified scene acquisition and pose tracking in a wearable display includes a wearable frame configured to be worn by a user. Mounted on the frame are: at least one sensor for acquiring scene information for a real scene proximate to the user, the scene information including images and depth information; a pose tracker for estimating the user's head pose based on the acquired scene information; a rendering unit for generating a virtual reality (VR) image based on the acquired scene information and estimated head pose; and at least one display for displaying to the user a combination of the generated VR image and the scene proximate to the user.
    Type: Application
    Filed: May 17, 2013
    Publication date: May 21, 2015
    Applicant: The University of North Carolina at Chapel Hill
    Inventors: Henry Fuchs, Mingsong Dou, Gregory Welch, Jan-Michael Frahm
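The rendering step described above — generating a view from acquired scene geometry and the estimated head pose — comes down to a rigid transform of scene points into the head frame. A minimal sketch, assuming a world-to-head rotation matrix and a head position in world coordinates (names and conventions are assumptions, not from the patent):

```python
import numpy as np

def view_points(scene_points, head_rotation, head_position):
    """Transform acquired scene points into the user's view frame using
    the estimated head pose: subtract the head position, then rotate
    into head coordinates. This is the core step a rendering unit
    performs before projecting and drawing the VR image."""
    R = np.asarray(head_rotation, dtype=float)   # 3x3, world -> head
    t = np.asarray(head_position, dtype=float)   # head position in world
    return (np.asarray(scene_points, dtype=float) - t) @ R.T
```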
  • Patent number: 8913055
    Abstract: A system and method are disclosed for online mapping of large-scale environments using a hybrid representation of a metric Euclidean environment map and a topological map. The system includes a scene flow module, a location recognition module, a local adjustment module and a global adjustment module. The scene flow module is for detecting and tracking video features of the frames of an input video sequence. The scene flow module is also configured to identify multiple keyframes of the input video sequence and add the identified keyframes into an initial environment map of the input video sequence. The location recognition module is for detecting loop closures in the environment map. The local adjustment module enforces local metric properties of the keyframes in the environment map, and the global adjustment module is for optimizing the entire environment map subject to global metric properties of the keyframes in the keyframe pose graph.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: December 16, 2014
    Assignees: Honda Motor Co., Ltd., The University of North Carolina at Chapel Hill, ETH Zurich
    Inventors: Jongwoo Lim, Jan-Michael Frahm, Marc Pollefeys
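The keyframe pose graph and loop-closure detection in this abstract can be caricatured with a small structure: sequential edges link consecutive keyframes, and a loop edge is added when a new keyframe's appearance descriptor matches an older, non-adjacent one. This is a hedged sketch — the descriptor distance test stands in for the patent's location recognition module, and the class and threshold are invented:

```python
import numpy as np

class KeyframeGraph:
    """Minimal keyframe pose graph: nodes are keyframes; consecutive
    keyframes get sequential edges; a loop-closure edge is added when a
    new keyframe's descriptor lies close to an older, non-adjacent one."""

    def __init__(self, match_thresh=0.1):
        self.descriptors = []     # one appearance vector per keyframe
        self.edges = []           # (i, j, kind) tuples
        self.match_thresh = match_thresh

    def add_keyframe(self, descriptor):
        idx = len(self.descriptors)
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        if idx > 0:
            self.edges.append((idx - 1, idx, "sequential"))
        # location recognition: compare against older, non-adjacent keyframes
        for j in range(idx - 1):
            d = np.linalg.norm(self.descriptors[idx] - self.descriptors[j])
            if d < self.match_thresh:
                self.edges.append((j, idx, "loop"))
        return idx
```

In the patented system the loop edges would then drive local and global metric adjustment of the pose graph; here they are simply recorded.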
  • Patent number: 8896655
    Abstract: A method is provided in one example and includes capturing panoramic image data through a first camera in a camera cluster, and capturing close-up image data through a second camera included as part of a spaced array of cameras. The presence of a user in a field of view of the second camera can be detected. The close-up image data and the panoramic image data can be combined to form a combined image. In more specific embodiments, the detecting includes evaluating a distance between the user and the second camera. The combined image can reflect a removal of a portion of panoramic image data associated with the user in a video conferencing environment.
    Type: Grant
    Filed: August 31, 2010
    Date of Patent: November 25, 2014
    Assignees: Cisco Technology, Inc., University of North Carolina at Chapel Hill
    Inventors: J. William Mauchly, Madhav V. Marathe, Henry Fuchs, Jan-Michael Frahm
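The combining step this abstract describes — taking close-up pixels where a nearby user is detected and panoramic pixels elsewhere — can be sketched as a masked composite. A minimal illustration, assuming a precomputed user mask and distance estimate (the function, threshold, and mask convention are assumptions, not the patented pipeline):

```python
import numpy as np

def composite(panoramic, close_up, user_mask, user_distance, near_thresh=2.0):
    """Combine panoramic and close-up image data: where a user is
    detected in the close-up camera's field of view and is within the
    distance threshold, take close-up pixels; elsewhere keep the
    panoramic image (removing the user's region from the panorama)."""
    if user_distance >= near_thresh:
        return panoramic.copy()
    out = panoramic.copy()
    out[user_mask] = close_up[user_mask]
    return out
```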
  • Publication number: 20130060540
    Abstract: Methods are disclosed for generating a three-dimensional representation of an object in a reference plane from a depth map containing distances from a reference point to pixels in an image of the object taken from that point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
    Type: Application
    Filed: February 11, 2011
    Publication date: March 7, 2013
    Applicants: Eidgenossische Technische Hochschule Zurich, The University of North Carolina at Chapel Hill
    Inventors: Jan-Michael Frahm, Marc Andre Leon Pollefeys, David Robert Gallup
  • Publication number: 20120306847
    Abstract: A system and method are disclosed for online mapping of large-scale environments using a hybrid representation of a metric Euclidean environment map and a topological map. The system includes a scene flow module, a location recognition module, a local adjustment module and a global adjustment module. The scene flow module is for detecting and tracking video features of the frames of an input video sequence. The scene flow module is also configured to identify multiple keyframes of the input video sequence and add the identified keyframes into an initial environment map of the input video sequence. The location recognition module is for detecting loop closures in the environment map. The local adjustment module enforces local metric properties of the keyframes in the environment map, and the global adjustment module is for optimizing the entire environment map subject to global metric properties of the keyframes in the keyframe pose graph.
    Type: Application
    Filed: May 30, 2012
    Publication date: December 6, 2012
    Applicant: Honda Motor Co., Ltd.
    Inventors: Jongwoo Lim, Jan-Michael Frahm, Marc Pollefeys
  • Publication number: 20120050458
    Abstract: A method is provided in one example and includes capturing panoramic image data through a first camera in a camera cluster, and capturing close-up image data through a second camera included as part of a spaced array of cameras. The presence of a user in a field of view of the second camera can be detected. The close-up image data and the panoramic image data can be combined to form a combined image. In more specific embodiments, the detecting includes evaluating a distance between the user and the second camera. The combined image can reflect a removal of a portion of panoramic image data associated with the user in a video conferencing environment.
    Type: Application
    Filed: August 31, 2010
    Publication date: March 1, 2012
    Inventors: J. William Mauchly, Madhav V. Marathe, Henry Fuchs, Jan-Michael Frahm
  • Patent number: 7592997
    Abstract: The invention relates to a system for determining the position of a user and/or a moving device by means of tracking methods, in particular for augmented reality applications, with an interface (9) to integrate at least one sensor type and/or data generator (1, 2, 3, 4) of a tracking method, a configuration unit (20) to describe communication between the tracking methods and/or tracking algorithms, and at least one processing unit (5, 6, 7, 8, 10, 11, 12, 13, 16) to calculate the position of the user and/or the moving device based on the data supplied by the tracking methods and/or tracking algorithms.
    Type: Grant
    Filed: June 1, 2005
    Date of Patent: September 22, 2009
    Assignee: Siemens Aktiengesellschaft
    Inventors: Jan-Friso Evers-Senne, Jan-Michael Frahm, Mehdi Hamadou, Dirk Jahn, Peter Georg Meier, Juri Platonov, Didier Stricker, Jens Weidenhausen
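The processing-unit step — calculating one position from data supplied by several tracking methods — can be illustrated with confidence-weighted averaging. This is only a stand-in for the configurable fusion the patent describes, and the function name and weighting scheme are assumptions:

```python
import numpy as np

def fuse_positions(estimates):
    """Fuse position estimates from several tracking methods (e.g.
    marker-based, feature-based, inertial) by confidence-weighted
    averaging. `estimates` is a list of (position, confidence) pairs."""
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([w for _, w in estimates], dtype=float)
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()
```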
  • Publication number: 20050275722
    Abstract: The invention relates to a system for determining the position of a user and/or a moving device by means of tracking methods, in particular for augmented reality applications, with an interface (9) to integrate at least one sensor type and/or data generator (1, 2, 3, 4) of a tracking method, a configuration unit (20) to describe communication between the tracking methods and/or tracking algorithms, and at least one processing unit (5, 6, 7, 8, 10, 11, 12, 13, 16) to calculate the position of the user and/or the moving device based on the data supplied by the tracking methods and/or tracking algorithms.
    Type: Application
    Filed: June 1, 2005
    Publication date: December 15, 2005
    Inventors: Jan-Friso Evers-Senne, Jan-Michael Frahm, Mehdi Hamadou, Dirk Jahn, Peter Meier, Juri Platonov, Didier Stricker, Jens Weidenhausen