Patents by Inventor Jakob Zillner
Jakob Zillner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961251
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for continuous surface and depth estimation. A continuous surface and depth estimation system determines the depth and surface normal of physical objects by using stereo vision limited within a predetermined window.
Type: Grant
Filed: May 18, 2022
Date of Patent: April 16, 2024
Assignee: Snap Inc.
Inventors: Olha Borys, Ilteris Kaan Canberk, Daniel Wagner, Jakob Zillner
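The abstract above combines two standard stereo-vision steps: recovering depth from disparity inside a small window, and fitting a local surface normal to the resulting 3D points. A minimal illustrative sketch (not the patented method; focal length, baseline, and window values are assumptions):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Classic stereo relation: depth = f * B / d, per pixel of a patch."""
    return focal_px * baseline_m / disparity

def surface_normal(points):
    """Least-squares plane normal of a small 3D point patch (N x 3).

    The right-singular vector with the smallest singular value of the
    mean-centered points spans the plane's normal direction.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)

# Disparities (pixels) inside a small predetermined window.
disp = np.array([[20.0, 20.0], [20.0, 20.0]])
depth = depth_from_disparity(disp, focal_px=500.0, baseline_m=0.06)
# 500 * 0.06 / 20 = 1.5 m at every pixel of the window
```

Restricting the computation to a predetermined window keeps both the disparity matching and the plane fit cheap enough for per-frame use.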
-
Publication number: 20240004197
Abstract: Visual-inertial tracking of an eyewear device using sensors. The eyewear device monitors the sensors of a visual inertial odometry system (VIOS) that provide input for determining a position of the device within its environment. The eyewear device determines the status of the VIOS based on information from the sensors and adjusts the plurality of sensors (e.g., by turning sensors on/off, changing the sampling rate, or a combination thereof) based on the determined status. The eyewear device then determines the position of the eyewear device within the environment using the adjusted plurality of sensors.
Type: Application
Filed: September 14, 2023
Publication date: January 4, 2024
Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
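The adjustment loop described above (determine tracker status, then reconfigure sensors by toggling them or changing sampling rates) can be sketched as a simple policy function. This is an illustrative sketch only; the status names, rates, and policy are assumptions, not the claimed method:

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    camera_hz: int
    imu_hz: int
    camera_on: bool = True

def adjust_sensors(status: str, cfg: SensorConfig) -> SensorConfig:
    """Reconfigure the sensor set based on the tracker's determined status.

    Hypothetical policy: lower the camera rate to save power while tracking
    is stable, and raise both rates when tracking degrades.
    """
    if status == "stable":
        return SensorConfig(camera_hz=15, imu_hz=cfg.imu_hz)
    if status == "degraded":
        return SensorConfig(camera_hz=60, imu_hz=800)
    return cfg  # unknown status: leave the configuration unchanged

new_cfg = adjust_sensors("stable", SensorConfig(camera_hz=30, imu_hz=400))
```

On a power-constrained eyewear device, tying sensor rates to tracking health is what makes continuous VIO feasible.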
-
Publication number: 20230401796
Abstract: A method for aligning coordinate systems from separate augmented reality (AR) devices is described. In one aspect, the method includes generating predicted depths of a first point cloud by applying a pre-trained model to a first single image generated by a first monocular camera of a first augmented reality (AR) device and first sparse 3D points generated by a first SLAM system at the first AR device; generating predicted depths of a second point cloud by applying the pre-trained model to a second single image generated by a second monocular camera of a second AR device and second sparse 3D points generated by a second SLAM system at the second AR device; and determining a relative pose between the first AR device and the second AR device by registering the first point cloud with the second point cloud.
Type: Application
Filed: August 23, 2022
Publication date: December 14, 2023
Inventors: Georgios Evangelidis, Branislav Micusik, Jakob Zillner, Nathan Jacob Litke
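The final step above, recovering the relative pose between two devices by registering their point clouds, reduces to estimating a rigid transform between two 3D point sets. A minimal sketch using the Kabsch algorithm, assuming known point correspondences (a real system would first establish them, e.g. via ICP or feature matching); this is not the patented pipeline:

```python
import numpy as np

def register_point_clouds(p, q):
    """Rigid transform (R, t) with q_i ~= R p_i + t (Kabsch algorithm).

    p, q: corresponding (N, 3) point clouds from the two devices.
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)          # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - r @ cp
    return r, t
```

The recovered (R, t) is exactly the relative pose that maps the first device's coordinate system onto the second's, which is what shared AR content needs.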
-
Patent number: 11789266
Abstract: Visual-inertial tracking of an eyewear device using sensors. The eyewear device monitors the sensors of a visual inertial odometry system (VIOS) that provide input for determining a position of the device within its environment. The eyewear device determines the status of the VIOS based on information from the sensors and adjusts the plurality of sensors (e.g., by turning sensors on/off, changing the sampling rate, or a combination thereof) based on the determined status. The eyewear device then determines the position of the eyewear device within the environment using the adjusted plurality of sensors.
Type: Grant
Filed: December 15, 2020
Date of Patent: October 17, 2023
Assignee: Snap Inc.
Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
-
Publication number: 20230205311
Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
Type: Application
Filed: March 2, 2023
Publication date: June 29, 2023
Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
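The calibration flow above amounts to warm-starting: estimate and store a first parameter value while the tracker runs idle, then derive a refined second value when a tracking request arrives. An illustrative sketch (class and method names are assumptions; the stored quantity here, a gyro-bias-style average, is only an example):

```python
class CalibrationCache:
    """Warm-start calibration for a visual-inertial tracker.

    A first calibration parameter value, estimated during idle operation,
    seeds the second value computed when a tracking request arrives.
    """

    def __init__(self):
        self._stored = None

    def background_calibrate(self, sensor_samples):
        # First calibration parameter value, e.g. an averaged sensor bias.
        self._stored = sum(sensor_samples) / len(sensor_samples)

    def on_tracking_request(self, refinement=0.0):
        # Second value is derived from the stored first value.
        if self._stored is None:
            raise RuntimeError("no background calibration available")
        return self._stored + refinement
```

The benefit is latency: the expensive estimation happens before any application asks for tracking, so the first tracked frames are already well calibrated.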
-
Publication number: 20230177708
Abstract: A depth estimation system to perform operations that include: receiving image data generated by a client device, the image data comprising a depiction of an environment; identifying a set of image features based on the image data; determining a pose of the client device based on the set of image features; generating a depth estimation based on the image data and the pose of the client device; and generating a mesh model of the environment based on the depth estimation.
Type: Application
Filed: December 5, 2022
Publication date: June 8, 2023
Inventors: Erick Mendez Mendez, Isac Andreas Müller Sandvik, Qi Pan, Edward James Rosten, Andrew Tristan Spek, Daniel Wagner, Jakob Zillner
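The last step above, turning a depth estimate into a mesh model, is commonly done by back-projecting each depth pixel into 3D and triangulating the pixel grid. A minimal sketch under a pinhole camera model (intrinsics are assumptions; not the claimed system):

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a dense depth map into 3D vertices and triangulate
    the pixel grid into faces (two triangles per 2x2 pixel cell)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    verts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    faces = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u
            faces.append((i, i + 1, i + w))          # upper-left triangle
            faces.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(faces)
```

A production system would also drop triangles spanning large depth discontinuities so that foreground and background do not get stitched together.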
-
Patent number: 11662805
Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
Type: Grant
Filed: April 9, 2021
Date of Patent: May 30, 2023
Assignee: Snap Inc.
Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
-
Publication number: 20220377306
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for misaligned vantage point mitigation for computer stereo vision. A misaligned vantage point mitigation system determines whether vantage points of the optical sensors are misaligned from an expected vantage point and, if so, determines an adjustment variable to mitigate the misalignment based on the location of matching features identified in images captured by both optical sensors.
Type: Application
Filed: May 18, 2022
Publication date: November 24, 2022
Inventors: Olha Borys, Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
-
Publication number: 20220375112
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for continuous surface and depth estimation. A continuous surface and depth estimation system determines the depth and surface normal of physical objects by using stereo vision limited within a predetermined window.
Type: Application
Filed: May 18, 2022
Publication date: November 24, 2022
Inventors: Olha Borys, Ilteris Kaan Canberk, Daniel Wagner, Jakob Zillner
-
Publication number: 20220206565
Abstract: A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
Type: Application
Filed: April 9, 2021
Publication date: June 30, 2022
Inventors: Georg Halmetschlager-Funek, Matthias Kalkgruber, Daniel Wolf, Jakob Zillner
-
Patent number: 11082607
Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
Type: Grant
Filed: June 11, 2020
Date of Patent: August 3, 2021
Assignee: Facebook Technologies, LLC
Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
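The pipeline above reprojects each capture into a selected target position, using the relative motion integrated from the inertial signals, before fusing the captures into one composite. A minimal sketch working on 3D point sets rather than full depth images; the pose representation and fusion-by-concatenation are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def reproject(points, r, t):
    """Express (N, 3) points captured at one sensor pose in the target
    pose's frame, given the relative rotation r and translation t
    (as would be integrated from the inertial signals)."""
    return points @ r.T + t

def composite(point_sets, rel_poses):
    """Fuse several captures into one composite cloud after reprojecting
    each into the selected target capture position."""
    moved = [reproject(p, r, t) for p, (r, t) in zip(point_sets, rel_poses)]
    return np.vstack(moved)
```

Reprojecting before fusing is what lets multiple noisy, motion-separated depth captures reinforce each other instead of smearing.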
-
Publication number: 20200322542
Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
Type: Application
Filed: June 11, 2020
Publication date: October 8, 2020
Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
-
Patent number: 10686980
Abstract: Systems and methods for generating composite depth images are disclosed. Exemplary implementations may: capture, by a depth sensor, a set of depth images over a capture period of time; generate, by an inertial sensor, inertial signals that convey values of one or more inertial parameters characterizing motion of the depth sensor during the capture period of time; select a target capture position based on one or more of the capture positions of the set of depth images; generate, using the values of the one or more inertial parameters during the capture period of time, re-projected depth images; and generate a composite depth image by combining multiple depth images, such multiple depth images including a first re-projected depth image and a second re-projected depth image.
Type: Grant
Filed: January 22, 2019
Date of Patent: June 16, 2020
Assignee: DAQRI, LLC
Inventors: Luca Ricci, Daniel Wagner, Jeroen Diederik Hol, Jakob Zillner
-
Patent number: 10586395
Abstract: A system and method for offloading object detection are described. A server receives first sensor data from a first sensor of an augmented reality (AR) display device. The first sensor data indicates a pose of the AR display device relative to a first reference coordinate system. The server detects a physical object using second sensor data received from a second sensor of the AR display device. The server determines, based on the second sensor data, a pose of the physical object relative to the AR display device. The server then determines the pose of the physical object relative to the first reference coordinate system based on the pose of the physical object relative to the AR display device and the pose of the AR display device relative to the first reference coordinate system.
Type: Grant
Filed: May 9, 2018
Date of Patent: March 10, 2020
Assignee: DAQRI, LLC
Inventors: Brian Mullins, Daniel Wolf, Jakob Zillner, Branislav Micusik, William Hoff
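The final determination in the abstract above is a pose composition: the object's pose in the reference frame is the device's pose in that frame chained with the object's pose relative to the device. Sketched with homogeneous 4x4 transforms (the numeric poses are illustrative placeholders, not data from the patent):

```python
import numpy as np

def pose(r, t):
    """Build a 4x4 homogeneous transform from rotation r (3x3) and
    translation t (length 3)."""
    m = np.eye(4)
    m[:3, :3] = r
    m[:3, 3] = t
    return m

# Pose of the AR display device in the reference frame, and of the
# detected object relative to the device (illustrative values).
ref_T_device = pose(np.eye(3), [1.0, 0.0, 0.0])
device_T_object = pose(np.eye(3), [0.0, 2.0, 0.0])

# Chaining the two transforms yields the object's pose in the
# reference coordinate system.
ref_T_object = ref_T_device @ device_T_object
```

Because the composition is a single matrix product, the server can do the heavy detection work and still hand the device a result expressed directly in the shared reference frame.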
-
Publication number: 20180261012
Abstract: A system and method for offloading object detection are described. A server receives first sensor data from a first sensor of an augmented reality (AR) display device. The first sensor data indicates a pose of the AR display device relative to a first reference coordinate system. The server detects a physical object using second sensor data received from a second sensor of the AR display device. The server determines, based on the second sensor data, a pose of the physical object relative to the AR display device. The server then determines the pose of the physical object relative to the first reference coordinate system based on the pose of the physical object relative to the AR display device and the pose of the AR display device relative to the first reference coordinate system.
Type: Application
Filed: May 9, 2018
Publication date: September 13, 2018
Inventors: Brian Mullins, Daniel Wolf, Jakob Zillner, Branislav Micusik, William Hoff