Patents by Inventor Jan-Michael Frahm

Jan-Michael Frahm has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062472
    Abstract: In one embodiment, a method includes scanning a real-world environment with a first device associated with a first user; generating a three-dimensional model of the real-world environment; transmitting the three-dimensional model to a head-mounted device associated with the first user; determining a pose of the head-mounted device by localizing the head-mounted device within the three-dimensional model based on images captured by a second camera of the head-mounted device; displaying, on the head-mounted device, a virtual space corresponding to the scanned real-world environment generated based on the three-dimensional model as viewed from the pose; and transmitting, to a remote head-mounted device of a second user, data corresponding to the three-dimensional model and the pose of the head-mounted device, the data being configured for rendering, by the remote head-mounted device, the virtual space with a first avatar corresponding to the first user having the pose.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Jan Herling, Nils Plath, Jan-Michael Frahm, Ahmad Al-Dahle
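
The method in this entry ends with a pose-streaming step between headsets: the first user's device sends its pose, relative to the shared 3D model, to a remote headset that places an avatar at that pose. The sketch below illustrates only that step, assuming a JSON wire format and the hypothetical names make_pose_update and apply_pose_update; the patent does not specify a serialization.

```python
# Hypothetical sketch of the pose-sharing step, not the patented implementation.
import json
import numpy as np

def make_pose_update(user_id: str, model_id: str, pose: np.ndarray) -> str:
    """Serialize a 4x4 camera-to-world pose for transmission to a remote HMD."""
    assert pose.shape == (4, 4)
    return json.dumps({
        "user": user_id,        # identifies which avatar to move
        "model": model_id,      # shared 3D model both headsets localized against
        "pose": pose.tolist(),  # rigid transform of the sender's head
    })

def apply_pose_update(message: str, avatars: dict) -> None:
    """Remote side: place the sender's avatar at the received pose."""
    update = json.loads(message)
    avatars[update["user"]] = np.array(update["pose"])

# Example: place user-1's avatar at the model origin (identity pose).
avatars = {}
apply_pose_update(make_pose_update("user-1", "living-room-scan", np.eye(4)), avatars)
```
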
  • Patent number: 10733745
    Abstract: Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video are disclosed. According to one method for deriving a 3D textured surface from endoscopic video, the method comprises: performing video frame preprocessing to identify a plurality of video frames of an endoscopic video, wherein the video frame preprocessing includes informative frame selection, specularity removal, and key-frame selection; generating, using a neural network or a shape-from-motion-and-shading (SfMS) approach, a 3D textured surface from the plurality of video frames; and optionally registering the 3D textured surface to at least one CT image.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 4, 2020
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Stephen Murray Pizer, Jan-Michael Frahm, Julian Gary Rosenman, Qingyu Zhao, Rui Wang, Ruibin Ma, James True Price, Miao Fan, Sarah Kelly McGill
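
The abstract above names two preprocessing stages, informative frame selection and specularity removal, without detailing them. The sketch below uses a gradient-energy sharpness score and an intensity threshold as illustrative stand-ins for those stages; the thresholds and measures are assumptions, not the patented method.

```python
# Minimal preprocessing sketch: drop blurry frames, mask specular highlights.
import numpy as np

def is_informative(gray: np.ndarray, sharpness_thresh: float = 5.0) -> bool:
    """Keep frames with enough gradient energy (blurry frames are dropped)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.mean(gx**2 + gy**2)) > sharpness_thresh

def specularity_mask(gray: np.ndarray, intensity_thresh: float = 240.0) -> np.ndarray:
    """Mark saturated specular highlights so they can be inpainted or ignored."""
    return gray >= intensity_thresh

def preprocess(frames):
    """Yield (frame, highlight mask) pairs for informative frames only."""
    for gray in frames:
        if is_informative(gray):
            yield gray, specularity_mask(gray)
```
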
  • Publication number: 20200219272
    Abstract: Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video are disclosed. According to one method for deriving a 3D textured surface from endoscopic video, the method comprises: performing video frame preprocessing to identify a plurality of video frames of an endoscopic video, wherein the video frame preprocessing includes informative frame selection, specularity removal, and key-frame selection; generating, using a neural network or a shape-from-motion-and-shading (SfMS) approach, a 3D textured surface from the plurality of video frames; and optionally registering the 3D textured surface to at least one CT image.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 9, 2020
    Inventors: Stephen Murray Pizer, Jan-Michael Frahm, Julian Gary Rosenman, Qingyu Zhao, Rui Wang, Ruibin Ma, James True Price, Miao Fan, Sarah Kelly McGill
  • Patent number: 10682108
    Abstract: Methods, systems, and computer readable media for deriving a three-dimensional (3D) surface from colonoscopic video are disclosed. According to one method for deriving a 3D surface from colonoscopic video, the method comprises: performing video frame preprocessing to identify a plurality of keyframes of a colonoscopic video, wherein the video frame preprocessing includes informative frame selection and keyframe selection; generating, using a recurrent neural network and direct sparse odometry, camera poses and depth maps for the keyframes; and fusing, using SurfelMeshing and the camera poses, the depth maps into a three-dimensional (3D) surface of a colon portion, wherein the 3D surface indicates at least one region of the colon portion that was not visualized.
    Type: Grant
    Filed: July 16, 2019
    Date of Patent: June 16, 2020
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Ruibin Ma, Rui Wang, Stephen Murray Pizer, Jan-Michael Frahm, Julian Gary Rosenman, Sarah Kelly McGill
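
The fusion step in this entry combines per-keyframe depth maps and camera poses into one surface. The sketch below shows only the geometric core of such a step: back-projecting each depth map through the intrinsics and a camera-to-world pose into a shared point cloud. SurfelMeshing itself additionally builds and updates a surfel mesh, which is beyond this snippet.

```python
# Geometric core of depth-map fusion: depth maps + poses -> world point cloud.
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray, cam_to_world: np.ndarray) -> np.ndarray:
    """Lift an HxW depth map to Nx3 world-space points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    valid = z > 0
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts_cam = rays[:, valid] * z[valid]                      # 3xN camera-space points
    pts_h = np.vstack([pts_cam, np.ones(pts_cam.shape[1])])  # homogeneous coordinates
    return (cam_to_world @ pts_h)[:3].T

def fuse(depth_maps, poses, K):
    """Accumulate all keyframes into one world-space point cloud."""
    return np.vstack([backproject(d, K, p) for d, p in zip(depth_maps, poses)])
```
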
  • Patent number: 10614580
    Abstract: Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video are disclosed. According to one method for deriving a 3D textured surface from endoscopic video, the method comprises: performing video frame preprocessing to identify a plurality of video frames of an endoscopic video, wherein the video frame preprocessing includes informative frame selection, specularity removal, and key-frame selection; generating, using a neural network or a shape-from-motion-and-shading (SfMS) approach, a 3D textured surface from the plurality of video frames; and optionally registering the 3D textured surface to at least one CT image.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: April 7, 2020
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Stephen Murray Pizer, Jan-Michael Frahm, Julian Gary Rosenman, Qingyu Zhao, Rui Wang, Ruibin Ma, James True Price, Miao Fan, Sarah Kelly McGill
  • Patent number: 10504000
    Abstract: Methods, systems, and computer readable media for image overlap detection. An example method includes identifying, by one or more computers, a collection of images; streaming, by the one or more computers, each image from the collection of images so that, in one or a limited number of passes through the collection of images, each image is loaded only once from an input source and each image is discarded after a processing time for the image is exceeded; and during the streaming, for each image in at least a first subset of the images in the collection, determining whether the image overlaps with at least one other image in the at least a first subset of the images.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: December 10, 2019
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Jared Scott Heinly, Johannes Lutz Schoenberger, Enrique Dunn, Jan-Michael Frahm
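
The streaming pattern in the abstract above is the distinctive part: each image is loaded once, reduced to something compact, compared against earlier images, and then discarded. The sketch below follows that single-pass shape; the tiny-thumbnail descriptor and cosine similarity are illustrative stand-ins for the real matching machinery.

```python
# Single-pass overlap detection: retain only compact descriptors, never images.
import numpy as np

def descriptor(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Coarse thumbnail descriptor: mean-pool to size x size and normalize."""
    h, w = gray.shape
    thumb = gray[:h - h % size, :w - w % size].astype(np.float64)
    thumb = thumb.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    d = thumb.ravel() - thumb.mean()
    return d / (np.linalg.norm(d) + 1e-9)

def stream_overlaps(image_stream, sim_thresh: float = 0.9):
    """One pass: yield (i, j) index pairs whose descriptors are similar."""
    seen = []  # only descriptors are retained; each image is discarded after use
    for i, gray in enumerate(image_stream):
        d = descriptor(gray)
        for j, prev in enumerate(seen):
            if float(d @ prev) > sim_thresh:
                yield j, i
        seen.append(d)
```
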
  • Patent number: 10410372
    Abstract: Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration are disclosed. According to one aspect, the method includes receiving, from each of a plurality of camera devices, an input pixel row of a radially distorted image and conducting a row comparison between each of the input pixel rows and a respectively associated synthesized pixel row. The method further includes approximating, for each row comparison, a span of a curve in an image space with a plurality of segments and computing, for each of the plurality of segments, a constraint. The method also includes utilizing the constraints to estimate a pose configuration.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: September 10, 2019
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Akash Abhijit Bapat, James True Price, Jan-Michael Frahm
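
The key geometric idea in the abstract above is approximating, for each compared pixel row, the curve that the row traces in image space with a handful of line segments. The sketch below shows that piecewise-linear approximation for a one-term radial distortion model; the constraint assembly and the pose solver are omitted, and the distortion model is an illustrative assumption.

```python
# Piecewise-linear approximation of the curve traced by one distorted pixel row.
import numpy as np

def undistort_row(row_y: float, xs: np.ndarray, k1: float) -> np.ndarray:
    """Map sample points of a distorted pixel row through a one-term radial model."""
    pts = np.stack([xs, np.full_like(xs, row_y)], axis=1)
    r2 = (pts ** 2).sum(axis=1)            # squared radius from the image center
    return pts * (1.0 + k1 * r2)[:, None]  # radially displaced points in image space

def segment_curve(row_y: float, k1: float, width: float, n_segments: int = 4):
    """Approximate the row's curve with n_segments chords (endpoint pairs)."""
    xs = np.linspace(-width / 2, width / 2, n_segments + 1)
    pts = undistort_row(row_y, xs, k1)
    return list(zip(pts[:-1], pts[1:]))    # each consecutive pair is one segment
```
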
  • Patent number: 10403037
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images and/or to perform automated verification of such attribute measurements. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is moved around some or all of the object. The calculation of object volume and/or other determined object information may include generating and manipulating one or more computer models of the object from selected images. In addition, further automated verification activities may be performed for such computer model(s) and resulting object attribute measurements, such as based on analyzing one or more types of information that reflect accuracy and/or completeness of the computer model(s).
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: September 3, 2019
    Assignee: URC Ventures, Inc.
    Inventors: David Boardman, Brian Sanderson Clipp, Charles Erignac, Jan-Michael Frahm, Jared Scott Heinly, Srinivas Kapaganty
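
The abstract above does not say how volume is computed from the reconstructed model. One common approach, shown here purely as an illustration, is to resample the model's surface as a height field over the ground plane and integrate cell by cell.

```python
# Illustrative volume integration over a gridded height field (not the patented method).
import numpy as np

def volume_from_heightfield(heights: np.ndarray, cell_area: float) -> float:
    """Sum height x cell area over the grid (meters and square meters -> cubic meters)."""
    return float(np.clip(heights, 0.0, None).sum() * cell_area)

# Example: a 4 m x 4 m pile sampled on a 0.5 m grid (cell area 0.25 m^2).
grid = np.full((8, 8), 1.5)                 # uniform pile, 1.5 m tall everywhere
print(volume_from_heightfield(grid, 0.25))  # 64 cells * 1.5 m * 0.25 m^2 = 24.0
```
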
  • Patent number: 10365711
    Abstract: Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display are disclosed. According to one aspect, a system for unified scene acquisition and pose tracking in a wearable display includes a wearable frame configured to be worn by a user. Mounted on the frame are: at least one sensor for acquiring scene information for a real scene proximate to the user, the scene information including images and depth information; a pose tracker for estimating the user's head pose based on the acquired scene information; a rendering unit for generating a virtual reality (VR) image based on the acquired scene information and estimated head pose; and at least one display for displaying to the user a combination of the generated VR image and the scene proximate to the user.
    Type: Grant
    Filed: May 17, 2013
    Date of Patent: July 30, 2019
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Henry Fuchs, Mingsong Dou, Gregory Welch, Jan-Michael Frahm
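
The system in this entry is architectural: sensor, pose tracker, rendering unit, and display mounted on one frame. The loop below sketches how those components could interact each frame; every interface name is hypothetical, since the patent describes the architecture rather than an API.

```python
# Schematic per-frame loop for the unified acquisition-and-tracking pipeline.
def frame_loop(sensor, pose_tracker, renderer, display):
    while display.active():
        image, depth = sensor.acquire()           # scene info: images + depth
        pose = pose_tracker.update(image, depth)  # head pose estimated from scene data
        vr_image = renderer.render(image, depth, pose)
        display.show(vr_image)                    # combined with the real scene
```
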
  • Patent number: 10218987
    Abstract: Methods, systems, and computer readable media for performing image compression are disclosed. According to one exemplary method, the method includes identifying a canonical image set from a plurality of images uploaded to or existing on a cloud computing and/or a storage environment. The method also includes computing an image representation for each image in the canonical image set. The method further includes receiving a first image. The method also includes identifying, using the image representations for the canonical image set, one or more reference images that are visually similar to the first image. The method further includes compressing the first image using the one or more reference images.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 26, 2019
    Assignee: THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
    Inventors: Jan-Michael Frahm, David Paul Perra
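
The core of the abstract above is reference-based compression: pick the visually most similar canonical image and encode the new image against it. The sketch below shows reference selection and residual formation under the assumptions that all images share one shape and that a simple global descriptor stands in for the patent's image representation; a real codec would also entropy-code the residual.

```python
# Reference selection plus residual formation, as an illustrative sketch.
import numpy as np

def global_descriptor(img: np.ndarray) -> np.ndarray:
    """Normalized flattened intensities as a stand-in image representation."""
    d = img.astype(np.float64).ravel()
    d -= d.mean()
    return d / (np.linalg.norm(d) + 1e-9)

def compress_against_references(img: np.ndarray, references: list):
    """Return (index of best reference, residual to encode). Assumes equal shapes."""
    d = global_descriptor(img)
    sims = [float(d @ global_descriptor(r)) for r in references]
    best = int(np.argmax(sims))
    residual = img.astype(np.int16) - references[best].astype(np.int16)
    return best, residual  # decoder reconstructs: references[best] + residual
```
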
  • Patent number: 10186049
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images, as well as determine changes over time in objects and their measurements based on images acquired at different times. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that moves around some or all of the object. The calculation of object volume and/or other determined object information may include generating and manipulating one or more computer models of the object from selected images, and determining changes may include comparing different models for different times. In addition, further automated activities may include displaying, presenting or otherwise providing information about some or all of the determined information.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: January 22, 2019
    Assignee: URC Ventures, Inc.
    Inventors: David Boardman, Brian Sanderson Clipp, Charles Erignac, Jan-Michael Frahm, Jared Scott Heinly, Anthony James Jacobson, Srinivas Kapaganty
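
What distinguishes this entry from the earlier volume-measurement patents is determining changes over time by comparing models built at different dates. As a minimal illustration, two height fields registered to the same ground grid can be differenced cell by cell; the representation is an assumption, not the patented comparison.

```python
# Illustrative change-over-time computation on registered height fields.
import numpy as np

def volume_change(heights_then: np.ndarray, heights_now: np.ndarray,
                  cell_area: float) -> float:
    """Positive result: material added since the earlier survey; negative: removed."""
    return float((heights_now - heights_then).sum() * cell_area)
```
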
  • Patent number: 10165176
    Abstract: A system for leveraging user gaze in a user monitoring subregion selection system includes a first camera configured to capture an image of a scene. A pattern generator is configured to generate and project a pattern onto a surface of at least one of a user's eyes. A sensor is configured to obtain an image of the pattern reflected from at least one of the user's eyes. In an alternate implementation, the pattern generator may be omitted, and the sensor may be a stereo user facing camera. A gaze estimation and scene mapping module is configured to estimate a gaze direction of the user using the image captured by the user facing sensor and to map the estimated gaze direction to the image of the scene based on a location of an object of interest within the scene. A subregion selection module is configured to select a subregion of the image of the scene based on the mapped user gaze direction.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: December 25, 2018
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Jan-Michael Frahm, David Paul Perra
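
The last two stages of the system above, mapping the estimated gaze direction into the scene camera's image and selecting a subregion around it, are sketched below. The pinhole projection, the forward-gaze assumption (z > 0), and the fixed window size are all illustrative assumptions.

```python
# Map a gaze direction to a pixel, then crop a subregion around it.
import numpy as np

def gaze_to_pixel(gaze_dir: np.ndarray, K: np.ndarray) -> tuple:
    """Project a gaze direction (scene-camera frame, z > 0) to pixel coordinates."""
    p = K @ (gaze_dir / gaze_dir[2])  # normalize to the z = 1 plane, apply intrinsics
    return int(p[0]), int(p[1])

def select_subregion(scene: np.ndarray, gaze_dir: np.ndarray, K: np.ndarray,
                     half: int = 64) -> np.ndarray:
    """Return a (2*half)-pixel window centered on the gaze point, clipped to bounds."""
    u, v = gaze_to_pixel(gaze_dir, K)
    h, w = scene.shape[:2]
    return scene[max(0, v - half):min(h, v + half),
                 max(0, u - half):min(w, u + half)]
```
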
  • Publication number: 20180082147
    Abstract: Methods, systems, and computer readable media for image overlap detection. An example method includes identifying, by one or more computers, a collection of images; streaming, by the one or more computers, each image from the collection of images so that, in one or a limited number of passes through the collection of images, each image is loaded only once from an input source and each image is discarded after a processing time for the image is exceeded; and during the streaming, for each image in at least a first subset of the images in the collection, determining whether the image overlaps with at least one other image in the at least a first subset of the images.
    Type: Application
    Filed: March 24, 2016
    Publication date: March 22, 2018
    Inventors: Jared Scott Heinly, Johannes Lutz Schoenberger, Enrique Dunn, Jan-Michael Frahm
  • Patent number: 9906884
    Abstract: Methods, systems, and computer readable media for utilizing adaptive rectangular decomposition (ARD) to perform head-related transfer function (HRTF) simulations are disclosed herein. According to one method, the method includes obtaining a mesh model representative of head and ear geometry of a listener entity and segmenting a simulation domain of the mesh model into a plurality of partitions. The method further includes conducting an ARD simulation on the plurality of partitions to generate simulated sound pressure signals within each of the plurality of partitions and processing the simulated sound pressure signals to generate at least one HRTF that is customized for the listener entity.
    Type: Grant
    Filed: August 1, 2016
    Date of Patent: February 27, 2018
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Alok Namdeo Meshram, Dinesh Manocha, Ravish Mehra, Enrique Dunn, Jan-Michael Frahm, Hongsheng Yang
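
After the ARD simulation produces sound-pressure signals, the abstract's final step processes them into an HRTF. A standard formulation, shown here only as an illustration of that step, divides the spectrum of the pressure recorded at the ear by the spectrum of the same source in the free field.

```python
# Frequency-domain HRTF from simulated pressure signals (illustrative formulation).
import numpy as np

def hrtf_from_pressure(p_ear: np.ndarray, p_free: np.ndarray) -> np.ndarray:
    """Transfer function ear / free-field, lightly regularized against zeros."""
    P_ear = np.fft.rfft(p_ear)    # pressure at the ear of the mesh model
    P_free = np.fft.rfft(p_free)  # same source with the listener absent
    return P_ear / (P_free + 1e-12)  # complex HRTF, one value per frequency bin
```
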
  • Patent number: 9898866
    Abstract: Methods, systems, and computer readable media for low latency stabilization for head-worn displays are disclosed. According to one aspect, the subject matter described herein includes a system for low latency stabilization of a head-worn display. The system includes a low latency pose tracker having one or more rolling-shutter cameras that capture a 2D image by exposing each row of a frame at a later point in time than the previous row and that output image data row by row, and a tracking module for receiving image data row by row and using that data to generate a local appearance manifold. The generated manifold is used to track camera movements, which are used to produce a pose estimate.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: February 20, 2018
    Assignee: The University of North Carolina at Chapel Hill
    Inventors: Henry Fuchs, Anselmo A. Lastra, Jan-Michael Frahm, Nate Michael Dierk, David Paul Perra
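
The defining property of the tracker above is that it consumes rolling-shutter image data row by row, updating its estimate before a frame has finished reading out. The toy below is far simpler than the patent's local appearance manifold: it merely cross-correlates each arriving row against the corresponding row of a reference frame to refine a horizontal-shift estimate per row.

```python
# Toy per-row tracking: refine a shift estimate as each rolling-shutter row arrives.
import numpy as np

def row_shift(row: np.ndarray, ref_row: np.ndarray) -> int:
    """Best circular horizontal shift of `row` relative to `ref_row` via FFT correlation."""
    corr = np.fft.irfft(np.fft.rfft(row) * np.conj(np.fft.rfft(ref_row)), n=len(row))
    s = int(np.argmax(corr))
    return s if s <= len(row) // 2 else s - len(row)

def track_rows(row_stream, reference: np.ndarray):
    """Yield a running shift estimate after every row, not after every frame."""
    estimate = 0.0
    for i, row in enumerate(row_stream):
        shift = row_shift(row.astype(np.float64),
                          reference[i % reference.shape[0]].astype(np.float64))
        estimate = 0.9 * estimate + 0.1 * shift  # smooth the noisy per-row measurement
        yield estimate
```
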
  • Publication number: 20170195678
    Abstract: Methods, systems, and computer readable media for performing image compression are disclosed. According to one exemplary method, the method includes identifying a canonical image set from a plurality of images uploaded to or existing on a cloud computing and/or a storage environment. The method also includes computing an image representation for each image in the canonical image set. The method further includes receiving a first image. The method also includes identifying, using the image representations for the canonical image set, one or more reference images that are visually similar to the first image. The method further includes compressing the first image using the one or more reference images.
    Type: Application
    Filed: June 30, 2015
    Publication date: July 6, 2017
    Inventors: Jan-Michael Frahm, David Paul Perra
  • Patent number: 9495764
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images and/or to perform automated verification of such attribute measurements. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is moved around some or all of the object. The calculation of object volume and/or other determined object information may include generating and manipulating one or more computer models of the object from selected images. In addition, further automated verification activities may be performed for such computer model(s) and resulting object attribute measurements, such as based on analyzing one or more types of information that reflect accuracy and/or completeness of the computer model(s).
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: November 15, 2016
    Assignee: URC Ventures, Inc.
    Inventors: David Boardman, Brian Sanderson Clipp, Charles Erignac, Jan-Michael Frahm, Jared Scott Heinly, Srinivas Kapaganty
  • Publication number: 20160309081
    Abstract: The subject matter described herein relates to methods, systems, and computer readable media for leveraging user gaze in a user monitoring subregion selection system. One system includes a first camera configured to capture an image of a scene. In one implementation, the system includes a pattern generator configured to generate and project a pattern onto a surface of at least one of a user's eyes. The system further includes a sensor configured to obtain an image of the pattern reflected from at least one of the user's eyes. In an alternate implementation, the pattern generator may be omitted, and the sensor may be a stereo user facing camera. The system further includes a gaze estimation and scene mapping module configured to estimate a gaze direction of the user using the image captured by the user facing sensor and to map the estimated gaze direction to the image of the scene based on a location of an object of interest within the scene.
    Type: Application
    Filed: October 31, 2014
    Publication date: October 20, 2016
    Inventors: Jan-Michael Frahm, David Paul Perra
  • Patent number: 9367921
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is carried by a human user as he or she passes around some or all of the object. During the acquisition of a series of digital images of an object of interest, various types of user feedback may be provided to a human user operator of the mobile device, and particular images may be selected for further analysis in various manners. Furthermore, the calculation of object volume and/or other determined object information may include generating and manipulating a computer model or other representation of the object from selected images.
    Type: Grant
    Filed: October 20, 2015
    Date of Patent: June 14, 2016
    Assignee: URC Ventures, Inc.
    Inventors: David Boardman, Charles Erignac, Srinivas Kapaganty, Jan-Michael Frahm, Ben Semerjian
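
The abstract above mentions selecting particular images for further analysis while the user walks around the object. One simple selection rule, offered here only as an illustrative stand-in, is to keep a new image only when the device has moved a minimum baseline since the last kept image, so frames captured while standing still are skipped.

```python
# Illustrative image selection by minimum camera baseline.
import numpy as np

def select_by_baseline(positions, min_baseline: float = 0.5):
    """Yield indices of images whose camera positions are at least min_baseline apart."""
    last = None
    for i, pos in enumerate(positions):
        p = np.asarray(pos, dtype=np.float64)
        if last is None or np.linalg.norm(p - last) >= min_baseline:
            last = p
            yield i

# Example: three positions 0.6 m apart along x, plus one near-duplicate.
print(list(select_by_baseline([(0, 0, 0), (0.1, 0, 0), (0.6, 0, 0), (1.2, 0, 0)])))
```
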
  • Publication number: 20160042521
    Abstract: Techniques are described for analyzing images acquired via mobile devices in various ways, including to estimate measurements for one or more attributes of one or more objects in the images. For example, the described techniques may be used to measure the volume of a stockpile of material or other large object, based on images acquired via a mobile device that is carried by a human user as he or she passes around some or all of the object. During the acquisition of a series of digital images of an object of interest, various types of user feedback may be provided to a human user operator of the mobile device, and particular images may be selected for further analysis in various manners. Furthermore, the calculation of object volume and/or other determined object information may include generating and manipulating a computer model or other representation of the object from selected images.
    Type: Application
    Filed: October 20, 2015
    Publication date: February 11, 2016
    Inventors: David Boardman, Charles Erignac, Srinivas Kapaganty, Jan-Michael Frahm, Ben Semerjian