Patents by Inventor Szymon Piotr Stachniak
Szymon Piotr Stachniak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11875027
Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
Type: Grant
Filed: March 22, 2019
Date of Patent: January 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
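The adaptation idea in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: all names (`ControlMode`, `adapt_ui`) and the specific UI characteristics are assumptions chosen for the example.

```python
# Hypothetical sketch: the active input device defines the context, and UI
# characteristics are adjusted so the chosen device suits the task better.
# Mode names and UI parameters are illustrative, not from the patent.
from enum import Enum, auto

class ControlMode(Enum):
    GESTURE = auto()
    VOICE = auto()
    COMPANION_DEVICE = auto()
    GAME_CONTROLLER = auto()

def adapt_ui(mode: ControlMode) -> dict:
    """Return UI characteristics tuned for the selected control mode."""
    if mode is ControlMode.GESTURE:
        # Gestures are imprecise: enlarge targets, reduce clutter.
        return {"target_size_px": 96, "show_cursor": True, "items_per_row": 4}
    if mode is ControlMode.VOICE:
        # Audio control needs speakable labels rather than big targets.
        return {"target_size_px": 48, "show_labels": True, "items_per_row": 6}
    # Companion devices and dedicated controllers allow finer selection,
    # so denser layouts with a visible focus indicator work well.
    return {"target_size_px": 48, "show_focus_ring": True, "items_per_row": 8}
```

A caller would re-run `adapt_ui` whenever the detected input device changes, so the same screen renders with larger, sparser targets for gesturing than for a game controller.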
-
Patent number: 11010961
Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
Type: Grant
Filed: March 13, 2019
Date of Patent: May 18, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
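The "virtual object fit to geometric characteristics" step can be illustrated with the simplest possible proxy: an axis-aligned bounding box per labeled object. This is a hedged sketch under assumed inputs (a point cloud with per-point labels, as an ML model might produce), not the patented pipeline.

```python
# Illustrative sketch: after an ML model labels points in the scene, fit a
# virtual proxy object to each identified physical object's geometry -- here,
# an axis-aligned bounding box per label. Names and data layout are assumptions.

def fit_virtual_objects(points, labels):
    """points: list of (x, y, z) scene points; labels: parallel list of object
    labels. Returns {label: (min_corner, max_corner)} bounding-box proxies."""
    boxes = {}
    for (x, y, z), label in zip(points, labels):
        lo, hi = boxes.get(label, ((x, y, z), (x, y, z)))
        boxes[label] = (
            (min(lo[0], x), min(lo[1], y), min(lo[2], z)),
            (max(hi[0], x), max(hi[1], y), max(hi[2], z)),
        )
    return boxes
```

Tracking across scene updates would then amount to re-fitting these proxies each frame and associating them with the previous frame's boxes.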
-
Patent number: 11010965
Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
Type: Grant
Filed: July 10, 2020
Date of Patent: May 18, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
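The invalidation-mask idea can be sketched on a discretized plane: each cell is a candidate placement, and a cell is valid only if the object's whole footprint stays on the plane and avoids occupied cells. The grid discretization and validity rule here are illustrative assumptions, not the claimed method.

```python
# Minimal sketch of an invalidation mask over a 2D placement plane.
# Cells, footprint shape, and the validity rule are assumptions for
# illustration; the patent's evaluation criteria may differ.

def invalidation_mask(plane_w, plane_h, occupied, obj_w, obj_h):
    """occupied: set of (x, y) cells already taken on the plane.
    Returns mask[y][x] == True where placing an obj_w x obj_h footprint
    anchored at (x, y) is valid, False where it is invalid."""
    mask = [[False] * plane_w for _ in range(plane_h)]
    for y in range(plane_h):
        for x in range(plane_w):
            if x + obj_w > plane_w or y + obj_h > plane_h:
                continue  # footprint would hang off the plane: invalid
            footprint = {(x + dx, y + dy)
                         for dx in range(obj_w) for dy in range(obj_h)}
            mask[y][x] = footprint.isdisjoint(occupied)
    return mask
```

Precomputing such a mask lets the device answer "can the object go here?" with a single lookup while the user drags the virtual object across the surface.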
-
Patent number: 10825217
Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
Type: Grant
Filed: January 2, 2019
Date of Patent: November 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
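The projection step in this abstract can be sketched with a pinhole camera model: project each corner of the 3D bounding volume onto the imaging surface and take the 2D box enclosing the projections. The pinhole model with focal length `f` and principal point `(cx, cy)` is an assumption for illustration; the patent does not specify a camera model.

```python
# Sketch of projecting a 3D bounding volume onto a sensor's imaging surface
# to obtain a 2D bounding shape, assuming a simple pinhole camera model.

def project_bounding_box(corners, f, cx, cy):
    """corners: (x, y, z) corner points of the 3D bounding volume in camera
    space, with z > 0. Returns the 2D axis-aligned box
    (u_min, v_min, u_max, v_max) enclosing all projected corners."""
    us, vs = [], []
    for x, y, z in corners:
        us.append(f * x / z + cx)  # perspective divide, then shift to pixels
        vs.append(f * y / z + cy)
    return min(us), min(vs), max(us), max(vs)
```

The resulting 2D box is what would be drawn over the camera image for the user to inspect and adjust.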
-
Publication number: 20200342660
Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
Type: Application
Filed: July 10, 2020
Publication date: October 29, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
-
Patent number: 10740960
Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
Type: Grant
Filed: March 6, 2019
Date of Patent: August 11, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
-
Publication number: 20200226823
Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
Type: Application
Filed: March 6, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
-
Publication number: 20200226820
Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
Type: Application
Filed: March 13, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
-
Publication number: 20200211243
Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
Type: Application
Filed: January 2, 2019
Publication date: July 2, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
-
Publication number: 20190220181
Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
Type: Application
Filed: March 22, 2019
Publication date: July 18, 2019
Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
-
Patent number: 10248301
Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
Type: Grant
Filed: September 16, 2015
Date of Patent: April 2, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
-
Patent number: 10048747
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
Type: Grant
Filed: April 21, 2017
Date of Patent: August 14, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
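The pipeline this abstract describes (downsample a depth image into voxels, remove background, locate an extremity) can be sketched as follows. The block averaging, the fixed depth threshold, and the top-down scan are assumptions chosen for a minimal example, not the claimed method.

```python
# Illustrative sketch of the described pipeline. Cell size and the background
# depth threshold are assumed values for the example.

def depth_to_voxels(depth, cell=2):
    """depth: 2D list of depth values (e.g. millimeters). Averages each
    cell x cell block into one coarser-grid voxel (downsampling)."""
    h, w = len(depth), len(depth[0])
    grid = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [depth[y + dy][x + dx]
                     for dy in range(cell) for dx in range(cell)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def topmost_extremity(grid, background_mm=3000):
    """Remove background (voxels at or beyond background_mm) and return the
    (row, col) of the topmost remaining foreground voxel, or None."""
    for r, row in enumerate(grid):      # scan top-down
        for c, v in enumerate(row):
            if v < background_mm:       # foreground voxel survives removal
                return r, c
    return None
```

In a full system the same foreground voxel grid would be searched for several extremities (head, hands, feet) rather than just the topmost one.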
-
Patent number: 9886094
Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
Type: Grant
Filed: April 28, 2014
Date of Patent: February 6, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
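The aggregation step, computing instance-level features from frame-level features over a short subset of frames, can be illustrated with simple pooling. Mean/max pooling is a common stand-in for this kind of aggregation and is an assumption here; the patent's learned primitives are more sophisticated.

```python
# Sketch of aggregating frame-level features (e.g. body-part positions) from
# a short frame subset -- far fewer frames than a full gesture -- into one
# fixed-size instance-level vector. The mean/max pooling is an assumption.

def instance_features(frames):
    """frames: non-empty list of per-frame feature vectors of equal length.
    Pools per dimension with mean and max to capture coarse temporal
    structure; returns the concatenated instance-level vector."""
    dims = len(frames[0])
    mean = [sum(f[d] for f in frames) / len(frames) for d in range(dims)]
    peak = [max(f[d] for f in frames) for d in range(dims)]
    return mean + peak
```

Because the vector has a fixed size regardless of how many frames were pooled, a classifier can score it as soon as a few frames arrive, which is what makes the detection low-latency.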
-
Publication number: 20170287139
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
Type: Application
Filed: April 21, 2017
Publication date: October 5, 2017
Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
-
Patent number: 9659377
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
Type: Grant
Filed: December 15, 2014
Date of Patent: May 23, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
-
Patent number: 9582717
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
Type: Grant
Filed: October 27, 2014
Date of Patent: February 28, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig C. Peeper
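Adjusting a model from both estimated extremity positions and a default pose can be sketched as a per-joint blend: low-confidence joints fall back toward the default pose (such as a T-pose). The confidence-weighted linear blend is purely an assumption for illustration; the patent does not specify this rule.

```python
# Hedged sketch: blend estimated joint positions with a default pose.
# The per-joint confidence weighting is an illustrative assumption.

def adjust_model(estimated, default_pose, confidence):
    """estimated, default_pose: {joint: (x, y, z)}; confidence: {joint: 0..1}.
    Joints with low confidence are pulled toward the default pose."""
    adjusted = {}
    for joint, est in estimated.items():
        w = confidence.get(joint, 0.0)          # 0 = trust default entirely
        d = default_pose[joint]
        adjusted[joint] = tuple(w * e + (1 - w) * dv
                                for e, dv in zip(est, d))
    return adjusted
```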
-
Publication number: 20160004301
Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
Type: Application
Filed: September 16, 2015
Publication date: January 7, 2016
Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
-
Publication number: 20150309579
Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
Type: Application
Filed: April 28, 2014
Publication date: October 29, 2015
Applicant: Microsoft Corporation
Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
-
Patent number: 9170667
Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
Type: Grant
Filed: December 21, 2012
Date of Patent: October 27, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
-
Publication number: 20150146923
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
Type: Application
Filed: October 27, 2014
Publication date: May 28, 2015
Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig C. Peeper