Patents by Inventor Marc André Léon Pollefeys
Marc André Léon Pollefeys has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230386086
Abstract: A decoder receives a compressed red-green-blue-depth (RGBD) frame of a video. The decoder accesses a reference RGBD frame and reprojects it, using the depth channel of the reference frame and a camera pose, to compute a reprojected version of the reference frame. The decoder then uses the reprojected version of the reference frame to decode the compressed RGBD frame.
Type: Application
Filed: May 31, 2022
Publication date: November 30, 2023
Inventors: Marc Andre Leon POLLEFEYS, Dag Birger FROMMHOLD
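The reprojection step this abstract describes can be sketched directly: unproject each reference pixel with its depth, transform by the relative camera pose, and project into the current view. This is a minimal nearest-neighbour sketch assuming a pinhole camera with intrinsics `K`, not the patented implementation.

```python
import numpy as np

def reproject_rgbd(rgb, depth, K, T_ref_to_cur):
    """Forward-reproject a reference RGBD frame into the current camera's
    view using the reference depth channel and a relative camera pose."""
    h, w = depth.shape
    # Pixel grid of the reference frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Unproject reference pixels to 3D camera coordinates (homogeneous).
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])        # 4 x N
    # Transform into the current camera's frame and project.
    pts_cur = T_ref_to_cur @ pts
    zc = pts_cur[2]
    uc = np.round(K[0, 0] * pts_cur[0] / zc + K[0, 2]).astype(int)
    vc = np.round(K[1, 1] * pts_cur[1] / zc + K[1, 2]).astype(int)
    keep = valid & (zc > 0) & (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    out = np.zeros_like(rgb)
    out[vc[keep], uc[keep]] = rgb.reshape(-1, 3)[keep]
    return out
```

The reprojected frame then serves as the prediction reference for decoding, so only the residual against it needs to be transmitted.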
-
Publication number: 20230375365
Abstract: A method for collecting telemetry data for updating a 3D map of an environment comprises carrying out a first relocalization event in which data from an observation at time t1 by a pose tracker is used to compute a first 3D map pose of the pose tracker, and a first local pose of the pose tracker at time t1 is received. A second relocalization event occurs in which a second 3D map pose of the pose tracker is computed from data from an observation at time t2, and a second local pose of the pose tracker at time t2 is received. A first relative pose between the first and second 3D map poses is computed. A second relative pose between the first and second local poses is computed. A residual, being the difference between the first relative pose and the second relative pose, is stored as input to a process for updating the 3D map.
Type: Application
Filed: May 23, 2022
Publication date: November 23, 2023
Inventors: Johannes Lutz SCHONBERGER, Marc Andre Leon POLLEFEYS
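The residual computation above has a compact form with 4x4 rigid-body transforms: compare the relative motion measured against the map with the relative motion measured by local odometry. A consistent map yields a residual near the identity. This is a sketch of the arithmetic only, with poses as homogeneous matrices.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 rigid-body transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relocalization_residual(T_map_1, T_map_2, T_local_1, T_local_2):
    """Residual between the relative pose measured in the 3D-map frame
    and the relative pose measured by the tracker's local odometry.
    If the map agrees with local tracking, the result is near identity;
    otherwise it quantifies map drift for the map-update process."""
    rel_map = np.linalg.inv(T_map_1) @ T_map_2        # first relative pose
    rel_local = np.linalg.inv(T_local_1) @ T_local_2  # second relative pose
    return np.linalg.inv(rel_local) @ rel_map         # their difference
```

Because only relative poses enter the residual, any constant offset between the map frame and the local frame cancels out.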
-
Publication number: 20230366696
Abstract: A 3D map comprising sensor data items depicting the environment is updated, each sensor data item having one or more associated variables such as a pose of a capture device or a position of a landmark. A graph is calculated from the sensor data items. The graph comprises nodes and edges, a node representing at least one variable in the received sensor data items and an edge representing relationships between variables. The graph is partitioned into a plurality of subgraphs so as to reduce the number of variables that are shared between subgraphs. Each of the plurality of subgraphs is allocated to a respective worker node. At each worker node, updated values of the variables are computed. The process updates values of variables which are shared between subgraphs to a common value using a consensus process. The 3D map of the environment is updated according to the updated values of the variables.
Type: Application
Filed: May 12, 2022
Publication date: November 16, 2023
Inventors: Christoph VOGEL, Jan-Willem BUURLAGE, Johannes Lutz SCHONBERGER, Juan Ignacio Nieto COUADEAU, Marc Andre Leon POLLEFEYS, Timon Esli KNIGGE, Marcel Nicolas GEPPERT
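The consensus process for shared variables can be illustrated with a toy scalar version of the local-solve / consensus loop: each worker pulls the shared variable toward its own local optimum plus a penalty toward the consensus value, then the consensus value is re-averaged. This is a simplified sketch in the spirit of ADMM-style schemes, not the patented algorithm; a real system runs this pattern over full map-optimization subproblems.

```python
import numpy as np

def distributed_solve(local_targets, rho=1.0, iters=60):
    """Toy consensus optimization for one shared variable. Worker i
    wants the variable near its local target t_i (quadratic cost),
    plus a penalty rho pulling its copy toward the consensus value z."""
    targets = np.asarray(local_targets, dtype=float)
    x = targets.copy()   # per-worker copies of the shared variable
    z = 0.0              # consensus value
    for _ in range(iters):
        # Local step at each worker: argmin_x (x - t_i)^2 + rho * (x - z)^2
        x = (targets + rho * z) / (1.0 + rho)
        # Consensus step: shared copies are reconciled to a common value.
        z = x.mean()
    return z, x
```

The consensus value converges to the average of the workers' local optima, so subgraphs agree on shared poses and landmarks without exchanging their full internal state.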
-
Publication number: 20230281834
Abstract: A method of tracking 3D position and orientation of an entity in a moving platform is described. The method comprises receiving data sensed by an inertial measurement unit mounted on the entity. Visual tracking data is also received, computed from images depicting the moving platform or the entity in the moving platform. The method computes the 3D position and orientation of the entity by estimating a plurality of states using the visual tracking data and the data sensed by the inertial measurement unit, where the states comprise both states of the moving platform and states of the entity.
Type: Application
Filed: May 15, 2023
Publication date: September 7, 2023
Inventors: Joshua Aidan ELSDON, David John MCKINNON, Salim SIRTKAYA, Marc Andre Leon POLLEFEYS, Douglas Duane BERRETT, Yashar BAHMAN, Patrick Markus MISTELI
-
Patent number: 11688080
Abstract: A method of tracking 3D position and orientation of an entity in a moving platform is described. The method comprises receiving data sensed by an inertial measurement unit mounted on the entity. Visual tracking data is also received, computed from images depicting the moving platform or the entity in the moving platform. The method computes the 3D position and orientation of the entity by estimating a plurality of states using the visual tracking data and the data sensed by the inertial measurement unit, where the states comprise both states of the moving platform and states of the entity.
Type: Grant
Filed: April 30, 2021
Date of Patent: June 27, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Joshua Aidan Elsdon, David John McKinnon, Salim Sirtkaya, Marc Andre Leon Pollefeys, Douglas Duane Berrett, Yashar Bahman, Patrick Markus Misteli
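The key modelling choice in this family of filings is that the estimated state contains both platform states and entity states, so the entity's world pose is the composition of the moving platform's pose with the entity's pose relative to the platform. A minimal sketch of that composition, with poses as 4x4 homogeneous transforms (illustrative only):

```python
import numpy as np

def entity_world_pose(T_world_platform, T_platform_entity):
    """Compose the (moving) platform's world pose with the entity's
    pose relative to the platform. The IMU on the entity senses the
    combined motion of both; estimating platform and entity states
    jointly is what disentangles the two contributions."""
    return T_world_platform @ T_platform_entity
```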
-
Publication number: 20230179596
Abstract: A method for authorizing access to one or more secured computer resources includes obfuscating a reference biometric vector into an obfuscated reference biometric vector using a similarity-preserving obfuscation. An authentication biometric vector is obfuscated into an obfuscated authentication biometric vector using the similarity-preserving obfuscation. A similarity of the obfuscated authentication biometric vector and the obfuscated reference biometric vector is tested. Based on the similarity being within an authentication threshold, access to the one or more secured computer resources is authorized.
Type: Application
Filed: April 21, 2021
Publication date: June 8, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Johannes Lutz SCHONBERGER, Marc Andre Leon POLLEFEYS
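One simple family of similarity-preserving obfuscations is a secret random orthogonal transform: it preserves inner products and Euclidean distances exactly, so matching can run on the obfuscated vectors without revealing the originals. The sketch below illustrates that idea only; the patented scheme is not necessarily this transform, and `authenticate` is a hypothetical helper name.

```python
import numpy as np

def make_obfuscator(dim, seed):
    """Return a similarity-preserving obfuscation: multiplication by a
    secret random orthogonal matrix (distances and angles are preserved)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # orthogonal Q
    return lambda v: q @ v

def authenticate(obf, reference, probe, threshold=0.1):
    """Grant access when the obfuscated probe lies within the
    authentication threshold of the obfuscated reference vector."""
    return np.linalg.norm(obf(probe) - obf(reference)) <= threshold
```

Because the server only ever sees obfuscated vectors, a breach of the stored reference does not directly expose the underlying biometric.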
-
Patent number: 11546550
Abstract: A method for video calling comprises, at a server computing system, receiving a plurality of segmented participant video streams from a plurality of client computing devices, each segmented participant video stream depicting a different human participant participating in a video call. One or more priority parameters for each of the plurality of human participants are recognized. One or more human participants are ranked based on a cumulative participant priority for each of the plurality of human participants. The plurality of segmented participant video streams are composited into a virtual conference view that displays each of the ranked one or more human participants at a virtual position based on their cumulative participant priority, such that human participants having higher cumulative participant priorities are displayed more prominently than human participants having lower cumulative participant priorities. The virtual conference view is sent to the plurality of client computing devices.
Type: Grant
Filed: July 12, 2021
Date of Patent: January 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventor: Marc Andre Leon Pollefeys
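The ranking step reduces to summing each participant's priority parameters and ordering by the total. A minimal sketch, where the parameter names (`speaking`, `organizer`) are hypothetical examples, not taken from the patent:

```python
def rank_participants(priorities):
    """Rank video-call participants by cumulative priority: sum each
    participant's priority parameters and order most-prominent first."""
    totals = {name: sum(params.values()) for name, params in priorities.items()}
    return sorted(totals, key=lambda n: totals[n], reverse=True)
```

The compositor can then assign the most prominent virtual positions in ranked order.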
-
Publication number: 20220375105
Abstract: A method of tracking 3D position and orientation of an entity in a moving platform is described. The method comprises receiving data sensed by an inertial measurement unit mounted on the entity. Visual tracking data is also received, computed from images depicting the moving platform or the entity in the moving platform. The method computes the 3D position and orientation of the entity by estimating a plurality of states using the visual tracking data and the data sensed by the inertial measurement unit, where the states comprise both states of the moving platform and states of the entity.
Type: Application
Filed: April 30, 2021
Publication date: November 24, 2022
Inventors: Joshua Aidan ELSDON, David John MCKINNON, Salim SIRTKAYA, Marc Andre Leon POLLEFEYS, Douglas Duane BERRETT, Yashar BAHMAN, Patrick Markus MISTELI
-
Patent number: 11328182
Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
Type: Grant
Filed: June 9, 2020
Date of Patent: May 10, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lukas Gruber, Christoph Vogel, Ondrej Miksik, Marc Andre Leon Pollefeys
-
Publication number: 20220103784
Abstract: A method for video calling comprises, at a server computing system, receiving a plurality of segmented participant video streams from a plurality of client computing devices, each segmented participant video stream depicting a different human participant participating in a video call. One or more priority parameters for each of the plurality of human participants are recognized. One or more human participants are ranked based on a cumulative participant priority for each of the plurality of human participants. The plurality of segmented participant video streams are composited into a virtual conference view that displays each of the ranked one or more human participants at a virtual position based on their cumulative participant priority, such that human participants having higher cumulative participant priorities are displayed more prominently than human participants having lower cumulative participant priorities. The virtual conference view is sent to the plurality of client computing devices.
Type: Application
Filed: July 12, 2021
Publication date: March 31, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventor: Marc Andre Leon POLLEFEYS
-
Publication number: 20210383172
Abstract: A three-dimensional (3D) map inconsistency detection machine includes an input transformation layer connected to a neural network. The input transformation layer is configured to 1) receive a test 3D map including 3D map data modeling a physical entity, 2) transform the 3D map data into a set of 2D images collectively corresponding to volumes of view frustums of a plurality of virtual camera views of the physical entity modeled by the test 3D map, and 3) output the set of 2D images to the neural network. The neural network is configured to output an inconsistency value indicating a degree to which the test 3D map includes inconsistencies based on analysis of the set of 2D images collectively corresponding to the volumes of the view frustums of the plurality of virtual camera views.
Type: Application
Filed: June 9, 2020
Publication date: December 9, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Lukas GRUBER, Christoph VOGEL, Ondrej MIKSIK, Marc Andre Leon POLLEFEYS
-
Patent number: 11190904
Abstract: To obtain a relative localization between a plurality of mobile devices, a first mobile device observes a second mobile device within a field of view of the first mobile device's camera at time t1, determines a first position of the first mobile device at t1, and receives from the second mobile device a second position of the second mobile device at t1. The first mobile device determines information about the first mobile device's orientation with respect to the second mobile device at t1 based at least in part on the first position and the observation of the second mobile device. The first mobile device identifies two constraints that relate the mobile devices' coordinate systems based at least in part on the second position and the orientation information. The first mobile device's pose relative to the second mobile device may be calculated once at least six constraints are accumulated.
Type: Grant
Filed: January 27, 2020
Date of Patent: November 30, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Marc Andre Leon Pollefeys
-
Patent number: 11145083
Abstract: A method for image-based localization includes, at a camera device, capturing a plurality of images of a real-world environment. A first set of image features are detected in a first image of the plurality of images. Before additional sets of image features are detected in other images of the plurality, the first set of image features is transmitted to a remote device configured to estimate a pose of the camera device based on image features detected in the plurality of images. As the additional sets of image features are detected in the other images of the plurality, the additional sets of image features are transmitted to the remote device. An estimated pose of the camera device is received from the remote device.
Type: Grant
Filed: May 21, 2019
Date of Patent: October 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Johannes Lutz Schonberger, Marc Andre Leon Pollefeys
-
Patent number: 11082661
Abstract: A method for video calling comprises, at a server computing system, receiving a plurality of segmented participant video streams from a plurality of client computing devices, each segmented participant video stream depicting a different human participant participating in a video call. One or more priority parameters for each of the plurality of human participants are recognized. One or more human participants are ranked based on a cumulative participant priority for each of the plurality of human participants. The plurality of segmented participant video streams are composited into a virtual conference view that displays each of the ranked one or more human participants at a virtual position based on their cumulative participant priority, such that human participants having higher cumulative participant priorities are displayed more prominently than human participants having lower cumulative participant priorities. The virtual conference view is sent to the plurality of client computing devices.
Type: Grant
Filed: September 25, 2020
Date of Patent: August 3, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Marc Andre Leon Pollefeys
-
Patent number: 11004230
Abstract: A data processing system is provided that includes a processor having associated memory, the processor being configured to execute instructions using portions of the memory to cause the processor to, at classification time, receive an input image frame from an image source. The input image frame includes an articulated object and a target object. The processor is further caused to process the input image frame using a trained neural network configured to, for each input cell of a plurality of input cells in the input image frame, predict a three-dimensional articulated object pose of the articulated object and a three-dimensional target object pose of the target object relative to the input cell. The processor is further caused to output the three-dimensional articulated object pose and the three-dimensional target object pose from the neural network.
Type: Grant
Filed: March 22, 2019
Date of Patent: May 11, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Marc Andre Leon Pollefeys, Bugra Tekin, Federica Bogo
-
Patent number: 10964053
Abstract: Computing devices and methods for estimating a pose of a user computing device are provided. In one example a 3D map comprising a plurality of 3D points representing a physical environment is obtained. Each 3D point is transformed into a 3D line that passes through the point to generate a 3D line cloud. A query image of the environment captured by a user computing device is received, the query image comprising query features that correspond to the environment. Using the 3D line cloud and the query features, a pose of the user computing device with respect to the environment is estimated.
Type: Grant
Filed: July 2, 2018
Date of Patent: March 30, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan Sinha, Pablo Alejandro Speciale, Sing Bing Kang, Marc Andre Leon Pollefeys
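The point-to-line lifting can be sketched concisely: replace each map point with a line through it in a random direction, and slide the stored anchor a random amount along that line so the original point cannot be read off directly, while point-on-line constraints still support pose estimation. A minimal sketch of the lifting step only, not the full pose solver:

```python
import numpy as np

def lift_to_line_cloud(points, seed=0):
    """Replace each 3D map point (N x 3) with a 3D line through it in a
    random unit direction. Returns (anchors, unit_directions), where
    each line is {anchor + s * direction}. The original point lies
    somewhere on its line but is not recoverable from the line alone."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal(points.shape)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    # Slide each anchor a random distance along its line so the stored
    # anchor no longer coincides with the original 3D point.
    shift = rng.standard_normal((points.shape[0], 1))
    return points + shift * d, d
```

Pose estimation then proceeds from point-to-line rather than point-to-point correspondences, which is the privacy-preserving trade the abstract describes.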
-
Patent number: 10917568
Abstract: Systems are provided for performing surface reconstruction with reduced power consumption. A surface mesh of an environment is generated, where the surface mesh is generated from multiple depth maps that are obtained of the environment. After the surface mesh is generated, a change detection image of that environment is captured while refraining from obtaining a new depth map of the environment. The change detection image is compared to the surface mesh. If a difference between the change detection image and the surface mesh is detected and if that difference satisfies a pre-determined difference threshold, then a new depth map of the environment is obtained. The surface mesh is then updated using the new depth map.
Type: Grant
Filed: December 28, 2018
Date of Patent: February 9, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Bleyer, Marc Andre Leon Pollefeys, Yuri Pekelny, Raymond Kirk Price
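The power-saving gate reduces to a thresholded comparison: capture a cheap change-detection image, compare it against what the current surface mesh predicts, and trigger an expensive new depth-map capture only when enough pixels differ. In this simplified sketch the mesh is represented by a rendered image; the per-pixel tolerance and area fraction are illustrative parameters, not values from the patent.

```python
import numpy as np

def needs_new_depth_map(change_image, mesh_render, pixel_tol=10.0, area_frac=0.05):
    """Return True when the change-detection image differs from the
    mesh-predicted image by more than pixel_tol at more than area_frac
    of the pixels -- the pre-determined difference threshold that
    triggers capturing a new (power-hungry) depth map."""
    diff = np.abs(change_image.astype(float) - mesh_render.astype(float))
    return (diff > pixel_tol).mean() > area_frac
```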
-
Patent number: 10878590
Abstract: Stereo image reconstruction can be achieved by fusing a plurality of proposal cost volumes computed from a pair of stereo images, using a predictive model operating on pixelwise feature vectors that include disparity and cost values sparsely sampled from the proposal cost volumes to compute disparity estimates for the pixels within the image.
Type: Grant
Filed: May 25, 2018
Date of Patent: December 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan Sinha, Marc André Léon Pollefeys, Johannes Lutz Schönberger
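To make the fusion idea concrete, here is a deliberately simplified stand-in for the learned predictive model: per pixel, select the disparity from whichever proposal has the lowest matching cost. The patented approach uses a trained model over sparsely sampled disparity/cost features rather than this winner-take-all rule.

```python
import numpy as np

def fuse_proposals(disparities, costs):
    """Fuse multiple proposal disparity maps (lists of H x W arrays) by
    selecting, per pixel, the disparity whose proposal has the lowest
    matching cost (winner-take-all baseline for cost-volume fusion)."""
    disparities = np.stack(disparities)   # P x H x W
    costs = np.stack(costs)               # P x H x W
    best = np.argmin(costs, axis=0)       # H x W index of cheapest proposal
    return np.take_along_axis(disparities, best[None], axis=0)[0]
```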
-
Publication number: 20200372672
Abstract: A method for image-based localization includes, at a camera device, capturing a plurality of images of a real-world environment. A first set of image features are detected in a first image of the plurality of images. Before additional sets of image features are detected in other images of the plurality, the first set of image features is transmitted to a remote device configured to estimate a pose of the camera device based on image features detected in the plurality of images. As the additional sets of image features are detected in the other images of the plurality, the additional sets of image features are transmitted to the remote device. An estimated pose of the camera device is received from the remote device.
Type: Application
Filed: May 21, 2019
Publication date: November 26, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Johannes Lutz SCHONBERGER, Marc Andre Leon POLLEFEYS
-
Patent number: 10839556
Abstract: A method for estimating a camera pose includes recognizing a three-dimensional (3D) map representing a physical environment, the 3D map including 3D map features defined as 3D points. An obfuscated image representation is received, the representation derived from an original unobfuscated image of the physical environment captured by a camera. The representation includes a plurality of obfuscated features, each including (i) a two-dimensional (2D) line that passes through a 2D point in the original unobfuscated image at which an image feature was detected, and (ii) a feature descriptor that describes the image feature associated with the 2D point that the 2D line of the obfuscated feature passes through. Correspondences are determined between the obfuscated features and the 3D map features of the 3D map of the physical environment. Based on the determined correspondences, a six degree of freedom pose of the camera in the physical environment is estimated.
Type: Grant
Filed: October 23, 2018
Date of Patent: November 17, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan Sinha, Marc Andre Leon Pollefeys, Sing Bing Kang, Pablo Alejandro Speciale
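A detail worth noting in this abstract is that the obfuscation replaces each 2D point with a 2D line but leaves the feature descriptor intact, so the correspondence step can still be ordinary descriptor matching. A minimal nearest-neighbour sketch of that step (the pose solver itself, which uses line-to-point rather than point-to-point constraints, is not shown):

```python
import numpy as np

def match_descriptors(query_descs, map_descs):
    """For each obfuscated image feature's descriptor (Q x D), return
    the index of the nearest 3D map feature descriptor (M x D). The
    resulting 2D-line-to-3D-point correspondences feed the 6-DoF
    pose estimation described in the abstract."""
    dists = np.linalg.norm(query_descs[:, None, :] - map_descs[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```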