Patents by Inventor Jakob Zillner

Jakob Zillner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961251
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for continuous surface and depth estimation. A continuous surface and depth estimation system determines the depth and surface normal of physical objects by using stereo vision limited within a predetermined window.
    Type: Grant
    Filed: May 18, 2022
    Date of Patent: April 16, 2024
    Assignee: Snap Inc.
    Inventors: Olha Borys, Ilteris Kaan Canberk, Daniel Wagner, Jakob Zillner
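The abstract above mentions recovering both depth and a surface normal from stereo within a small window. The patent does not disclose its exact math; a minimal sketch under common pinhole-stereo assumptions (depth = f·B/d, normal from a least-squares plane fit over the window's 3D points) could look like:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth = focal * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

def surface_normal(points):
    """Fit a plane to a small window of 3D points by SVD; the
    singular vector with the smallest singular value is the normal."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)
```

For example, a 10 px disparity with a 500 px focal length and 10 cm baseline gives a 5 m depth, and points sampled from a flat patch yield a normal along the patch's z-axis (up to sign).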
  • Publication number: 20240004197
    Abstract: Visual-inertial tracking of an eyewear device using sensors. The eyewear device monitors the sensors of a visual inertial odometry system (VIOS) that provide input for determining a position of the device within its environment. The eyewear device determines the status of the VIOS based on information from the sensors and adjusts the plurality of sensors (e.g., by turning sensors on/off, changing the sampling rate, or a combination thereof) based on the determined status. The eyewear device then determines the position of the eyewear device within the environment using the adjusted plurality of sensors.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 4, 2024
    Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
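The abstract describes adjusting a sensor suite (on/off state, sampling rate) based on tracking status. The patent does not publish its policy; a hypothetical sketch, with the confidence threshold and rates chosen purely for illustration, might be:

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    camera_on: bool
    imu_hz: int

def adjust_sensors(tracking_confidence: float, cfg: SensorConfig) -> SensorConfig:
    """Hypothetical policy: boost sensing when tracking degrades,
    throttle the IMU rate when tracking is stable to save power."""
    if tracking_confidence < 0.5:
        return SensorConfig(camera_on=True, imu_hz=400)
    return SensorConfig(camera_on=cfg.camera_on, imu_hz=100)
```

The point of such a policy is the power/accuracy trade-off on battery-constrained eyewear: sensors run hot only while the tracker actually needs the extra input.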
  • Publication number: 20230401796
    Abstract: A method for aligning coordinate systems from separate augmented reality (AR) devices is described. In one aspect, the method includes generating predicted depths of a first point cloud by applying a pre-trained model to a first single image generated by a first monocular camera of a first augmented reality (AR) device and first sparse 3D points generated by a first SLAM system at the first AR device, generating predicted depths of a second point cloud by applying the pre-trained model to a second single image generated by a second monocular camera of a second AR device and second sparse 3D points generated by a second SLAM system at the second AR device, and determining a relative pose between the first AR device and the second AR device by registering the first point cloud with the second point cloud.
    Type: Application
    Filed: August 23, 2022
    Publication date: December 14, 2023
    Inventors: Georgios Evangelidis, Branislav Micusik, Jakob Zillner, Nathan Jacob Litke
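The final step above, recovering a relative pose by registering two point clouds, has a well-known closed form when correspondences are available (the Kabsch/Procrustes solution); real systems typically use ICP or a robust variant, but a minimal correspondence-based sketch is:

```python
import numpy as np

def register_point_clouds(src, dst):
    """Closed-form rigid alignment (Kabsch), assuming known point
    correspondences. Returns R, t such that dst ≈ R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

The returned (R, t) is exactly the relative pose between the two devices' coordinate systems once each cloud is expressed in its own device frame.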
  • Patent number: 11789266
    Abstract: Visual-inertial tracking of an eyewear device using sensors. The eyewear device monitors the sensors of a visual inertial odometry system (VIOS) that provide input for determining a position of the device within its environment. The eyewear device determines the status of the VIOS based on information from the sensors and adjusts the plurality of sensors (e.g., by turning sensors on/off, changing the sampling rate, or a combination thereof) based on the determined status. The eyewear device then determines the position of the eyewear device within the environment using the adjusted plurality of sensors.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: October 17, 2023
    Assignee: Snap Inc.
    Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
  • Publication number: 20230205311
    Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
    Type: Application
    Filed: March 2, 2023
    Publication date: June 29, 2023
    Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
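The scheme above amounts to caching a calibration parameter computed opportunistically (while no app is tracking) and using it to seed a second estimate when tracking is requested. The patent does not specify the parameter or the update rule; a toy sketch using a scalar stand-in (e.g., a bias estimate as a sample mean, refined by averaging with the cached value) could be:

```python
class CalibrationCache:
    """Hypothetical two-stage calibration: a background pass stores a
    first parameter; a tracking request derives a second from it."""

    def __init__(self):
        self._stored = None

    def background_calibrate(self, sensor_samples):
        # First calibration parameter value, computed while idle.
        self._stored = sum(sensor_samples) / len(sensor_samples)

    def on_tracking_request(self, fresh_samples):
        # Second parameter value, derived from the stored first one.
        fresh = sum(fresh_samples) / len(fresh_samples)
        if self._stored is None:
            return fresh
        return 0.5 * (self._stored + fresh)
```

The benefit is latency: when the virtual object display application finally asks for tracking, calibration starts from a warm estimate instead of from scratch.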
  • Publication number: 20230177708
    Abstract: A depth estimation system to perform operations that include: receiving image data generated by a client device, the image data comprising a depiction of an environment; identifying a set of image features based on the image data; determining a pose of the client device based on the set of features; generating a depth estimation based on the image data and the pose of the client device; and generating a mesh model of the environment based on the depth estimation.
    Type: Application
    Filed: December 5, 2022
    Publication date: June 8, 2023
    Inventors: Erick Mendez Mendez, Isac Andreas Müller Sandvik, Qi Pan, Edward James Rosten, Andrew Tristan Spek, Daniel Wagner, Jakob Zillner
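The last stage of the pipeline above, turning a depth estimate into a mesh model, is commonly done by back-projecting the depth map through the camera intrinsics and triangulating adjacent pixels; the patent does not disclose its meshing method, so the following is a generic sketch of that standard construction:

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project an (h, w) depth map into a vertex grid and build
    two triangles per pixel quad. Returns (vertices, faces)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    verts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], axis=1),
                            np.stack([b, d, c], axis=1)])
    return verts, faces
```

A production system would additionally drop triangles that straddle large depth discontinuities; that filtering is omitted here for brevity.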
  • Patent number: 11662805
    Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: May 30, 2023
    Assignee: Snap Inc.
    Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
  • Publication number: 20220377306
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for misaligned vantage point mitigation for computer stereo vision. A misaligned vantage point mitigation system determines whether vantage points of the optical sensors are misaligned from an expected vantage point and, if so, determines an adjustment variable to mitigate the misalignment based on the location of matching features identified in images captured by both optical sensors.
    Type: Application
    Filed: May 18, 2022
    Publication date: November 24, 2022
    Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
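One way to read the abstract above: in a properly rectified stereo pair, matched features lie on the same image row, so a consistent row offset between matches signals a drifted vantage point. The patent's actual adjustment variable is not disclosed; a minimal sketch estimating a vertical offset from matched feature coordinates might be:

```python
import numpy as np

def vertical_misalignment(left_pts, right_pts):
    """Given matched feature points as (x, y) arrays from each
    optical sensor, estimate the per-pair row offset. The median is
    used for robustness to bad matches. A nonzero result suggests
    the vantage points have drifted from the expected rectified
    geometry."""
    dy = left_pts[:, 1] - right_pts[:, 1]
    return float(np.median(dy))
```

The resulting offset can then serve as a correction applied to one image (or to the rectification transform) before disparity is computed.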
  • Publication number: 20220375112
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for continuous surface and depth estimation. A continuous surface and depth estimation system determines the depth and surface normal of physical objects by using stereo vision limited within a predetermined window.
    Type: Application
    Filed: May 18, 2022
    Publication date: November 24, 2022
    Inventors: Olha Borys, Ilteris Kaan Canberk, Daniel Wagner, Jakob Zillner
  • Publication number: 20220206565
    Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
    Type: Application
    Filed: April 9, 2021
    Publication date: June 30, 2022
    Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
  • Patent number: 11082607
    Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: August 3, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
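The abstract describes re-projecting depth frames to a target capture position using inertial data, then fusing them. Full re-projection needs a warp through the camera model; as a deliberately simplified stand-in, the sketch below assumes pure motion along the optical axis (so re-projection reduces to a depth shift) and fuses by median:

```python
import numpy as np

def reproject_depth(depth, dz_m):
    """Toy re-projection: if the sensor moved dz_m metres forward
    along its optical axis between capture and target pose, scene
    depth at the target pose is reduced by dz_m. (A real system
    warps pixels through the full 6-DoF relative pose.)"""
    return depth - dz_m

def composite_depth(depth_frames, dzs):
    """Re-project every frame to the target capture position, then
    fuse per-pixel by median to suppress single-frame noise."""
    stack = np.stack([reproject_depth(d, dz)
                      for d, dz in zip(depth_frames, dzs)])
    return np.median(stack, axis=0)
```

The inertial signals in the claim play the role of the `dzs` here: they supply the per-frame motion that makes the frames comparable before fusion.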
  • Publication number: 20200322542
    Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
    Type: Application
    Filed: June 11, 2020
    Publication date: October 8, 2020
    Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
  • Patent number: 10686980
    Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 16, 2020
    Assignee: DAQRI, LLC
    Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
  • Patent number: 10586395
    Abstract: A system and method for offloading object detection are described. A server receives first sensor data from a first sensor of an augmented reality (AR) display device. The first sensor data indicates a pose of the AR display device relative to a first reference coordinate system. The server detects a physical object using second sensor data received from a second sensor of the AR display device. The server determines, based on the second sensor data, a pose of the physical object relative to the AR display device. The server then determines the pose of the physical object relative to the first reference coordinate system based on the pose of the physical object relative to the AR display device and the pose of the AR display device relative to the first reference coordinate system.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: March 10, 2020
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Daniel Wolf, Jakob Zillner, Branislav Micusik, William Hoff
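The final determination in the abstract is a straightforward composition of rigid transforms: the object's pose in the reference frame is the device's pose in that frame composed with the object's pose relative to the device. With 4x4 homogeneous matrices:

```python
import numpy as np

def compose(T_ab, T_bc):
    """Compose 4x4 homogeneous transforms: maps frame-c coordinates
    into frame a via frame b."""
    return T_ab @ T_bc

def object_pose_in_reference(T_device_in_ref, T_object_in_device):
    # Pose of the physical object relative to the reference
    # coordinate system, per the chain in the abstract.
    return compose(T_device_in_ref, T_object_in_device)
```

For example, a device translated 1 m along x in the reference frame, observing an object 2 m along its own y-axis, places the object at (1, 2, 0) in the reference frame.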
  • Publication number: 20180261012
    Abstract: A system and method for offloading object detection are described. A server receives first sensor data from a first sensor of an augmented reality (AR) display device. The first sensor data indicates a pose of the AR display device relative to a first reference coordinate system. The server detects a physical object using second sensor data received from a second sensor of the AR display device. The server determines, based on the second sensor data, a pose of the physical object relative to the AR display device. The server then determines the pose of the physical object relative to the first reference coordinate system based on the pose of the physical object relative to the AR display device and the pose of the AR display device relative to the first reference coordinate system.
    Type: Application
    Filed: May 9, 2018
    Publication date: September 13, 2018
    Inventors: Brian Mullins, Daniel Wolf, Jakob Zillner, Branislav Micusik, William Hoff