Patents by Inventor Szymon P. Stachniak
Szymon P. Stachniak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11176374
Abstract: The described implementations relate to images and depth information and generating useful information from the images and depth information. One example can identify planes in a semantically-labeled 3D voxel representation of a scene. The example can infer missing information by extending planes associated with structural elements of the scene. The example can also generate a watertight manifold representation of the scene at least in part from the inferred missing information.
Type: Grant
Filed: May 1, 2019
Date of Patent: November 16, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michelle Brook, William Guyman, Szymon P. Stachniak, Hendrik M. Langerak, Silvano Galliani, Marc Pollefeys
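As a rough illustration of the plane-extension idea in this abstract, the sketch below detects an axis-aligned plane shared by the voxels of one structural label and fills unobserved cells on that plane. This is not the patented implementation; the voxel representation (a dict mapping cell coordinates to labels) and all function names are invented for illustration.

```python
import itertools

def detect_axis_plane(voxels, label):
    """Find an axis-aligned plane (axis, coordinate) shared by all voxels
    carrying the given structural label, or None if they are not coplanar."""
    cells = [c for c, l in voxels.items() if l == label]
    if not cells:
        return None
    for axis in range(3):
        coords = {c[axis] for c in cells}
        if len(coords) == 1:
            return axis, coords.pop()
    return None

def extend_plane(voxels, label, bounds):
    """Fill every unobserved cell lying on the detected plane of `label`
    within `bounds` ((min, max) per axis), inferring occluded structure."""
    plane = detect_axis_plane(voxels, label)
    if plane is None:
        return voxels
    axis, value = plane
    filled = dict(voxels)
    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    ranges[axis] = [value]               # lock the plane's axis to its coordinate
    for cell in itertools.product(*ranges):
        filled.setdefault(cell, label)   # only fill cells with no observation
    return filled
```

For example, a floor observed at only three cells of a 3x3 footprint would be completed across the whole footprint, which is the kind of hole-filling a watertight manifold requires.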
-
Patent number: 11017595
Abstract: Improved techniques for performing object segmentation are disclosed. Surface reconstruction (SR) data corresponding to an environment is accessed. This SR data is used to generate a detailed three-dimensional (3D) representation of the environment. The SR data is also used to infer a high-level 3D structural representation of the environment. The high-level 3D structural representation is inferred using machine learning that is performed on the surface reconstruction data to identify a structure of the environment. The high-level 3D structural representation is then cut from the detailed 3D representation. This cutting process generates a clutter mesh comprising objects that remain after the cut and that are distinct from the structure. Object segmentation is then performed on the remaining objects to identify those objects.
Type: Grant
Filed: October 29, 2019
Date of Patent: May 25, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuri Pekelny, Rahul Sawhney, Muhammad Jabir Kapasi, Szymon P. Stachniak, Michelle Lynn Brook
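The cut-and-segment step this abstract describes can be sketched on a simplified cell-based representation: subtract the structural cells from the detailed reconstruction, then split the remaining "clutter" into connected components, one per candidate object. This is a hedged illustration only; the patent operates on meshes and uses machine learning for the structural inference, and the function name here is invented.

```python
from collections import deque

def segment_clutter(detailed, structure):
    """Cut structural cells out of the detailed reconstruction, then group
    the remaining clutter into 6-connected components (one per object)."""
    clutter = set(detailed) - set(structure)   # "cut" structure away
    objects, seen = [], set()
    for start in clutter:
        if start in seen:
            continue
        # flood-fill one connected component of the clutter
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            x, y, z = queue.popleft()
            comp.add((x, y, z))
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if n in clutter and n not in seen:
                    seen.add(n)
                    queue.append(n)
        objects.append(comp)
    return objects
```

Two disjoint blobs left over after removing a floor would come back as two separate object candidates, ready for per-object identification.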
-
Publication number: 20210125407
Abstract: Improved techniques for performing object segmentation are disclosed. Surface reconstruction (SR) data corresponding to an environment is accessed. This SR data is used to generate a detailed three-dimensional (3D) representation of the environment. The SR data is also used to infer a high-level 3D structural representation of the environment. The high-level 3D structural representation is inferred using machine learning that is performed on the surface reconstruction data to identify a structure of the environment. The high-level 3D structural representation is then cut from the detailed 3D representation. This cutting process generates a clutter mesh comprising objects that remain after the cut and that are distinct from the structure. Object segmentation is then performed on the remaining objects to identify those objects.
Type: Application
Filed: October 29, 2019
Publication date: April 29, 2021
Inventors: Yuri Pekelny, Rahul Sawhney, Muhammad Jabir Kapasi, Szymon P. Stachniak, Michelle Lynn Brook
-
Publication number: 20200349351
Abstract: The described implementations relate to images and depth information and generating useful information from the images and depth information. One example can identify planes in a semantically-labeled 3D voxel representation of a scene. The example can infer missing information by extending planes associated with structural elements of the scene. The example can also generate a watertight manifold representation of the scene at least in part from the inferred missing information.
Type: Application
Filed: May 1, 2019
Publication date: November 5, 2020
Inventors: Michelle Brook, William Guyman, Szymon P. Stachniak, Hendrik M. Langerak, Silvano Galliani, Marc Pollefeys
-
Patent number: 10630965
Abstract: Examples are disclosed herein that relate to calibrating a user's eye for a stereoscopic display. One example provides, on a head-mounted display device including a see-through display, a method of calibrating a stereoscopic display for a user's eyes, the method including for a first eye, receiving an indication of alignment of a user-controlled object with a first eye reference object viewable via the head-mounted display device from a perspective of the first eye, determining a first ray intersecting the user-controlled object and the first eye reference object from the perspective of the first eye, and determining a position of the first eye based on the first ray. The method further includes repeating such steps for a second eye, determining a position of the second eye based on a second ray, and calibrating the stereoscopic display based on the position of the first eye and the position of the second eye.
Type: Grant
Filed: October 2, 2015
Date of Patent: April 21, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robert Thomas Held, Anatolie Gavriliuc, Riccardo Giraldi, Szymon P. Stachniak, Andrew Frederick Muehlhausen, Maxime Ouellet
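The geometric core of the ray-based step in this abstract can be illustrated with standard line-line closest-approach math: if two alignment rays have been gathered for the same eye, the eye position can be estimated as the midpoint of the shortest segment between them. This is illustrative geometry only, not the patented method's exact math, and the function names are invented.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def estimate_eye_position(p1, d1, p2, d2):
    """Return the midpoint of the closest approach of rays p1+t*d1 and p2+s*d2,
    or None if the rays are parallel."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel rays give no unique answer
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * di for p, di in zip(p1, d1))   # closest point on ray 1
    q2 = tuple(p + s * di for p, di in zip(p2, d2))   # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

When the two rays genuinely pass through the eye, the closest points coincide and the midpoint is the eye position itself; with noisy alignments the midpoint is a reasonable compromise.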
-
Publication number: 20170099481
Abstract: Examples are disclosed herein that relate to calibrating a user's eye for a stereoscopic display. One example provides, on a head-mounted display device including a see-through display, a method of calibrating a stereoscopic display for a user's eyes, the method including for a first eye, receiving an indication of alignment of a user-controlled object with a first eye reference object viewable via the head-mounted display device from a perspective of the first eye, determining a first ray intersecting the user-controlled object and the first eye reference object from the perspective of the first eye, and determining a position of the first eye based on the first ray. The method further includes repeating such steps for a second eye, determining a position of the second eye based on a second ray, and calibrating the stereoscopic display based on the position of the first eye and the position of the second eye.
Type: Application
Filed: October 2, 2015
Publication date: April 6, 2017
Inventors: Robert Thomas Held, Anatolie Gavriliuc, Riccardo Giraldi, Szymon P. Stachniak, Andrew Frederick Muehlhausen, Maxime Ouellet
-
Publication number: 20150138078
Abstract: Detection and classification of human poses and gestures using a discriminative ferns ensemble classifier is provided. Sample image data in one or more channels includes a human image. A processing device operates on the sample image data using the discriminative ferns ensemble classifier. The classifier has a set of classification tables and matching bit features (ferns) which are developed using a first set of training data and optimized by a weighting of the tables using an SVM linear classifier configured based on the first or a second set of pose training data. The tables allow computation of a score per pose class for the image in the sample data, and the processor outputs a determination of the pose in the sample depth image data. The determination enables the manipulation of a natural user interface.
Type: Application
Filed: November 18, 2014
Publication date: May 21, 2015
Inventors: Eyal Krupka, Alon Vinnikov, Benjamin Eliot Klein, Szymon P. Stachniak
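The fern-ensemble scoring this abstract describes can be sketched schematically: each fern evaluates a few binary pixel-comparison bits, concatenates them into a table index, and looks up per-class scores; the ensemble sums the (SVM-weighted) scores and picks the highest-scoring pose class. The tables, features, and names below are toy values invented for illustration, not the trained classifier from the publication.

```python
def fern_index(image, pairs, threshold=0):
    """Concatenate one bit per pixel pair: is image[a] - image[b] > threshold?"""
    idx = 0
    for a, b in pairs:
        idx = (idx << 1) | (1 if image[a] - image[b] > threshold else 0)
    return idx

def classify_pose(image, ferns, weights, num_classes):
    """ferns: list of (pairs, table) where table[index][cls] is a per-class score.
    Sum the weighted table rows over all ferns and return the arg-max class."""
    scores = [0.0] * num_classes
    for (pairs, table), w in zip(ferns, weights):
        row = table[fern_index(image, pairs)]
        for cls in range(num_classes):
            scores[cls] += w * row[cls]
    return max(range(num_classes), key=scores.__getitem__)
```

The appeal of the fern structure is that classification is just a handful of comparisons plus table lookups per fern, which keeps inference cheap enough for real-time natural-user-interface input.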
-
Patent number: 8866889
Abstract: A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.
Type: Grant
Filed: November 3, 2010
Date of Patent: October 21, 2014
Assignee: Microsoft Corporation
Inventors: Prafulla J. Masalkar, Szymon P. Stachniak, Tommer Leyvand, Zhengyou Zhang, Leonardo Del Castillo, Zsolt Mathe
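A minimal sketch of the recalibration idea follows. The error model is assumed linear here purely for illustration (the patent leaves the form of the error function open): fit measured = a * true + b by least squares over the comparison points, then invert the model to correct subsequent readings. Function names are invented.

```python
def fit_linear_error(true_depths, measured_depths):
    """Least-squares fit of measured = a * true + b over paired samples."""
    n = len(true_depths)
    sx = sum(true_depths)
    sy = sum(measured_depths)
    sxx = sum(x * x for x in true_depths)
    sxy = sum(x * y for x, y in zip(true_depths, measured_depths))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct_depth(measured, a, b):
    """Invert the fitted error model to recover an estimate of true depth."""
    return (measured - b) / a
```

With the model fitted once against the objective ground-truth measurements, every later camera reading can be corrected in constant time.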
-
Patent number: 8775916
Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.
Type: Grant
Filed: May 17, 2013
Date of Patent: July 8, 2014
Assignee: Microsoft Corporation
Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Forbes, Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper, Michael Johnson
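The report-generation step this abstract describes can be sketched as a comparison of the pipeline's tracking output against the repository's ground truth. The data shapes below (lists of per-frame joint-to-position maps) and the function name are invented for illustration; a real report would cover many more error statistics.

```python
import math

def error_report(tracked, ground_truth):
    """tracked / ground_truth: lists of frames, each mapping joint -> (x, y, z).
    Returns the mean Euclidean error per joint over frames where the pipeline
    produced a position for that joint."""
    totals, counts = {}, {}
    for t_frame, g_frame in zip(tracked, ground_truth):
        for joint, g_pos in g_frame.items():
            t_pos = t_frame.get(joint)
            if t_pos is None:
                continue          # joint missed by the pipeline in this frame
            err = math.dist(t_pos, g_pos)
            totals[joint] = totals.get(joint, 0.0) + err
            counts[joint] = counts.get(joint, 0) + 1
    return {j: totals[j] / counts[j] for j in totals}
```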
-
Publication number: 20130251204
Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.
Type: Application
Filed: May 17, 2013
Publication date: September 26, 2013
Applicant: Microsoft Corporation
Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper, Michael Johnson
-
Patent number: 8448056
Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.
Type: Grant
Filed: December 17, 2010
Date of Patent: May 21, 2013
Assignee: Microsoft Corporation
Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper
-
Publication number: 20120159290
Abstract: Technology for testing a target recognition, analysis, and tracking system is provided. A searchable repository of recorded and synthesized depth clips and associated ground truth tracking data is provided. Data in the repository is used by one or more processing devices each including at least one instance of a target recognition, analysis, and tracking pipeline to analyze performance of the tracking pipeline. An analysis engine provides at least a subset of the searchable set responsive to a request to test the pipeline and receives tracking data output from the pipeline on the at least subset of the searchable set. A report generator outputs an analysis of the tracking data relative to the ground truth in the at least subset to provide an output of the error relative to the ground truth.
Type: Application
Filed: December 17, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Jon D. Pulsipher, Parham Mohadjer, Nazeeh Amin ElDirghami, Shao Liu, Patrick Orville Cook, James Chadon Foster, Ronald Omega Forbes, Jr., Szymon P. Stachniak, Tommer Leyvand, Joseph Bertolami, Michael Taylor Janney, Kien Toan Huynh, Charles Claudius Marais, Spencer Dean Perreault, Robert John Fitzgerald, Wayne Richard Bisson, Craig Carroll Peeper
-
Publication number: 20120105585
Abstract: A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.
Type: Application
Filed: November 3, 2010
Publication date: May 3, 2012
Applicant: Microsoft Corporation
Inventors: Prafulla J. Masalkar, Szymon P. Stachniak, Tommer Leyvand, Zhengyou Zhang, Leonardo Del Castillo, Zsolt Mathe