Patents by Inventor Szymon Piotr Stachniak

Szymon Piotr Stachniak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875027
    Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
    Type: Grant
    Filed: March 22, 2019
    Date of Patent: January 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
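
The abstract above describes adapting interface characteristics to the active control mode. The Python sketch below is illustrative only and is not drawn from the patent claims; the control modes, profile fields, and values are assumptions chosen to show the general idea of a mode-to-profile mapping.

```python
# Illustrative sketch (not the patented method): choose interface
# characteristics from the active control mode, as the abstract describes.
from dataclasses import dataclass
from enum import Enum, auto


class ControlMode(Enum):
    GESTURE = auto()      # camera-based gesturing
    VOICE = auto()        # audio control
    COMPANION = auto()    # phone or tablet companion device
    CONTROLLER = auto()   # dedicated game controller / remote


@dataclass
class UiProfile:
    tile_size_px: int       # larger targets for coarse input
    show_voice_hints: bool  # surface spoken-command labels
    cursor_snapping: bool   # snap a free cursor to nearby targets


# Hypothetical mapping: coarse inputs get bigger targets and snapping,
# voice gets visible command hints, precise inputs get a denser layout.
_PROFILES = {
    ControlMode.GESTURE: UiProfile(tile_size_px=220, show_voice_hints=False, cursor_snapping=True),
    ControlMode.VOICE: UiProfile(tile_size_px=160, show_voice_hints=True, cursor_snapping=False),
    ControlMode.COMPANION: UiProfile(tile_size_px=120, show_voice_hints=False, cursor_snapping=False),
    ControlMode.CONTROLLER: UiProfile(tile_size_px=140, show_voice_hints=False, cursor_snapping=True),
}


def profile_for(mode: ControlMode) -> UiProfile:
    """Return the interface profile tuned for the detected control mode."""
    return _PROFILES[mode]


if __name__ == "__main__":
    # A user switches to gesture input mid-session: the UI moves to large,
    # snap-friendly targets so the task stays completable with that device.
    print(profile_for(ControlMode.GESTURE))
```
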
  • Patent number: 11010961
    Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
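
The abstract above outlines a pipeline that identifies physical objects, fits a virtual object to each one's geometric characteristics, and tracks both across scene updates. The sketch below is a loose illustration and not Microsoft's implementation: the detector is stubbed out, object ids are assumed to be stable across frames, and the "virtual object" is reduced to an axis-aligned bounding box.

```python
# Illustrative sketch only: fit a simple virtual proxy (an axis-aligned box)
# to points labeled by a detector, and carry it across scene updates.
import numpy as np


def fit_virtual_box(points: np.ndarray) -> dict:
    """Fit an axis-aligned bounding box to a labeled object's 3-D points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return {"center": (lo + hi) / 2.0, "extents": hi - lo}


def update_tracked_objects(tracked: dict, detections: dict) -> dict:
    """Refit each detected object's virtual proxy on a new scene update.

    `detections` maps a stable object id (e.g. "chair_0") to its points;
    ids that persist across updates keep their entry in `tracked`.
    """
    for obj_id, points in detections.items():
        tracked[obj_id] = fit_virtual_box(points)
    return tracked


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame1 = {"chair_0": rng.uniform(0.0, 1.0, size=(200, 3))}
    frame2 = {"chair_0": rng.uniform(0.1, 1.1, size=(200, 3))}  # object moved
    tracked = update_tracked_objects({}, frame1)
    tracked = update_tracked_objects(tracked, frame2)
    print(tracked["chair_0"]["center"])
```
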
  • Patent number: 11010965
    Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: May 18, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
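
The abstract above describes evaluating candidate placement locations on a plane fitted to a real-world surface and recording the result as an invalidation mask. The following is a minimal sketch under assumptions of my own (a uniform cell grid, a square footprint, and a simple occupancy test); it is not the claimed algorithm.

```python
# Minimal sketch: mark each candidate cell on the fitted plane as valid or
# invalid for placing an object with a square footprint.
import numpy as np


def invalidation_mask(occupied: np.ndarray, footprint_cells: int) -> np.ndarray:
    """Return a boolean grid over the fitted plane; True marks an invalid cell.

    `occupied` is a boolean grid of cells already covered by existing content.
    A candidate cell is invalid if a square footprint of half-width
    `footprint_cells` centred on it would leave the plane or overlap content.
    """
    rows, cols = occupied.shape
    invalid = np.zeros_like(occupied)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = r - footprint_cells, r + footprint_cells + 1
            c0, c1 = c - footprint_cells, c + footprint_cells + 1
            if r0 < 0 or c0 < 0 or r1 > rows or c1 > cols:
                invalid[r, c] = True   # footprint extends past the plane edge
            elif occupied[r0:r1, c0:c1].any():
                invalid[r, c] = True   # footprint collides with existing content
    return invalid


if __name__ == "__main__":
    occupied = np.zeros((20, 20), dtype=bool)
    occupied[8:12, 8:12] = True        # something already sits on the surface
    mask = invalidation_mask(occupied, footprint_cells=2)
    print(mask.sum(), "of", mask.size, "candidate cells are invalid")
```
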
  • Patent number: 10825217
    Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
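
The abstract above describes projecting a three-dimensional bounding volume onto an optical sensor's imaging surface to obtain a two-dimensional bounding shape. The sketch below assumes a plain pinhole camera model with made-up intrinsics and illustrates only that projection step, not the claimed method.

```python
# Illustrative sketch under a simple pinhole-camera assumption: project the
# corners of a 3-D bounding box onto the image plane and take their 2-D
# extent as the bounding shape shown to the user.
import numpy as np


def project_box(corners_cam: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> tuple:
    """Project 8 box corners (camera space, metres) to a 2-D bounding rect.

    Returns (u_min, v_min, u_max, v_max) in pixel coordinates.
    """
    x, y, z = corners_cam[:, 0], corners_cam[:, 1], corners_cam[:, 2]
    u = fx * x / z + cx          # standard pinhole projection
    v = fy * y / z + cy
    return u.min(), v.min(), u.max(), v.max()


if __name__ == "__main__":
    # A 0.5 m cube roughly two metres in front of the camera.
    corners = np.array([[dx, dy, dz] for dx in (-0.25, 0.25)
                        for dy in (-0.25, 0.25) for dz in (1.75, 2.25)])
    print(project_box(corners, fx=500, fy=500, cx=320, cy=240))
```
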
  • Publication number: 20200342660
    Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
    Type: Application
    Filed: July 10, 2020
    Publication date: October 29, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
  • Patent number: 10740960
    Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
  • Publication number: 20200226823
    Abstract: An augmented reality device includes a logic machine and a storage machine holding instructions executable by the logic machine to, for one or more real-world surfaces represented in a three-dimensional representation of a real-world environment of the augmented reality device, fit a virtual two-dimensional plane to the real-world surface. A request to place a virtual three-dimensional object on the real-world surface is received. For each of a plurality of candidate placement locations on the virtual two-dimensional plane, the candidate placement location is evaluated as a valid placement location or an invalid placement location for the virtual three-dimensional object. An invalidation mask is generated that defines the valid and invalid placement locations on the virtual two-dimensional plane.
    Type: Application
    Filed: March 6, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Hendrik Mark Langerak, Michelle Brook
  • Publication number: 20200226820
    Abstract: A computer system is provided that includes a camera device and a processor configured to receive scene data captured by the camera device for a three-dimensional environment that includes one or more physical objects, generate a geometric representation of the scene data, process the scene data using an artificial intelligence machine learning model that outputs object boundary data and object labels, augment the geometric representation with the object boundary data and the object labels, and identify the one or more physical objects based on the augmented geometric representation of the three-dimensional environment. For each identified physical object, the processor is configured to generate an associated virtual object that is fit to one or more geometric characteristics of that identified physical object. The processor is further configured to track each identified physical object and associated virtual object across successive updates to the scene data.
    Type: Application
    Filed: March 13, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Ali Osman Ulusoy, Hendrik Mark Langerak, Michelle Brook
  • Publication number: 20200211243
    Abstract: A computing system is provided, including one or more optical sensors, a display, one or more user input devices, and a processor. The processor may receive optical data of a physical environment. Based on the optical data, the processor may generate a three-dimensional representation of the physical environment. For at least one target region of the physical environment, the processor may generate a three-dimensional bounding volume surrounding the target region based on a depth profile measured by the one or more optical sensors and/or estimated by the processor. The processor may generate a two-dimensional bounding shape at least in part by projecting the three-dimensional bounding volume onto an imaging surface of an optical sensor. The processor may output an image of the physical environment and the two-dimensional bounding shape for display. The processor may receive a user input and modify the two-dimensional bounding shape based on the user input.
    Type: Application
    Filed: January 2, 2019
    Publication date: July 2, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ali Osman Ulusoy, Yuri Pekelny, Szymon Piotr Stachniak
  • Publication number: 20190220181
    Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
    Type: Application
    Filed: March 22, 2019
    Publication date: July 18, 2019
    Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
  • Patent number: 10248301
    Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: April 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
  • Patent number: 10048747
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: August 14, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
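
The abstract above describes downsampling a depth image into a grid of voxels, removing the background, and locating extremities of the isolated human target. The sketch below is a rough stand-in under stated assumptions (a 2-D cell grid rather than true voxels, a fixed depth threshold for background removal, and "topmost foreground cell" as a crude extremity); it is not the patented pipeline.

```python
# Illustrative sketch: downsample a depth image into a coarse cell grid,
# strip far-away background cells, and report the topmost remaining cell.
import numpy as np


def downsample_to_grid(depth: np.ndarray, cell: int = 8) -> np.ndarray:
    """Average depth over cell x cell pixel blocks (a 2-D stand-in for voxels)."""
    h, w = depth.shape
    h, w = h - h % cell, w - w % cell
    blocks = depth[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return blocks.mean(axis=(1, 3))


def topmost_foreground(grid: np.ndarray, max_depth_m: float = 2.5):
    """Drop background cells and return (row, col) of the highest remaining one."""
    rows, cols = np.nonzero(grid < max_depth_m)
    if rows.size == 0:
        return None
    i = rows.argmin()                    # smallest row index = top of the frame
    return int(rows[i]), int(cols[i])


if __name__ == "__main__":
    depth = np.full((480, 640), 4.0)     # background at 4 m
    depth[100:400, 250:390] = 1.8        # a person-shaped blob at 1.8 m
    grid = downsample_to_grid(depth)
    print(topmost_foreground(grid))
```
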
  • Patent number: 9886094
    Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
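
The abstract above describes learning gesture primitives from instance-level features built by aggregating frame-level features over short subsets of frames. The sketch below illustrates only that aggregation step, with an assumed window size and feature dimensionality; the learned primitives and classifier from the patent are not reproduced.

```python
# Illustrative sketch: build "instance-level" features for short windows of
# frames by aggregating per-frame features, the kind of input a low-latency
# gesture classifier could score before the full gesture has finished.
import numpy as np


def window_features(frame_feats: np.ndarray, window: int = 5) -> np.ndarray:
    """Aggregate frame-level features over sliding windows of `window` frames.

    Each window is summarized by the mean feature vector concatenated with
    the first-to-last delta, giving one instance-level vector per window.
    """
    n, _ = frame_feats.shape
    out = []
    for start in range(0, n - window + 1):
        chunk = frame_feats[start:start + window]
        out.append(np.concatenate([chunk.mean(axis=0), chunk[-1] - chunk[0]]))
    return np.stack(out)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 30 frames of a hypothetical 12-dimensional body-pose feature.
    feats = rng.normal(size=(30, 12))
    instances = window_features(feats)
    print(instances.shape)   # one aggregated vector per 5-frame window
```
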
  • Publication number: 20170287139
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
    Type: Application
    Filed: April 21, 2017
    Publication date: October 5, 2017
    Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
  • Patent number: 9659377
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A background included in the grid of voxels may also be removed to isolate one or more voxels associated with a foreground object such as a human target. A location or position of one or more extremities of the isolated human target may then be determined.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: May 23, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig Peeper, Shao Liu
  • Patent number: 9582717
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: February 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig C. Peeper
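
The abstract above describes adjusting a model using estimated extremity locations and, where needed, a default pose such as a T-pose. The sketch below is a hedged illustration of one way such a blend could look, with hypothetical joint names and confidence weights; it is not the claimed adjustment method.

```python
# Illustrative sketch: pull a skeletal model's joints toward estimated
# extremity positions, falling back toward a default pose where no reliable
# estimate is available.
import numpy as np


def adjust_model(default_pose: dict, estimates: dict, confidence: dict,
                 min_conf: float = 0.5) -> dict:
    """Blend estimated joint positions with the default pose per confidence."""
    adjusted = {}
    for joint, default_xyz in default_pose.items():
        est = estimates.get(joint)
        conf = confidence.get(joint, 0.0)
        if est is None or conf < min_conf:
            adjusted[joint] = np.asarray(default_xyz, dtype=float)
        else:
            adjusted[joint] = conf * np.asarray(est) + (1 - conf) * np.asarray(default_xyz)
    return adjusted


if __name__ == "__main__":
    t_pose = {"head": (0.0, 1.7, 0.0), "left_hand": (-0.8, 1.4, 0.0)}
    estimates = {"head": (0.05, 1.68, 0.1)}
    confidence = {"head": 0.9}
    print(adjust_model(t_pose, estimates, confidence))
```
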
  • Publication number: 20160004301
    Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
    Type: Application
    Filed: September 16, 2015
    Publication date: January 7, 2016
    Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
  • Publication number: 20150309579
    Abstract: Low-latency gesture detection is described, for example, to compute a gesture class from a live stream of image frames of a user making a gesture, for example, as part of a natural user interface controlling a game system or other system. In examples, machine learning components are trained to learn gesture primitives and at test time, are able to detect gestures using the learned primitives, in a fast, accurate manner. For example, a gesture primitive is a latent (unobserved) variable describing features of a subset of frames from a sequence of frames depicting a gesture. For example, the subset of frames has many fewer frames than a sequence of frames depicting a complete gesture. In various examples gesture primitives are learnt from instance level features computed by aggregating frame level features to capture temporal structure. In examples frame level features comprise body position and body part articulation state features.
    Type: Application
    Filed: April 28, 2014
    Publication date: October 29, 2015
    Applicant: Microsoft Corporation
    Inventors: Baoyuan Wang, Szymon Piotr Stachniak, Zhuowen Tu, Baining Guo, Ke Deng
  • Patent number: 9170667
    Abstract: Embodiments of the present invention analyze a context in which a user interacts with a computer interface and automatically optimize the interface for the context. The controller or control mode the user selects for interaction may define the context, in part. Examples of control modes include gesturing, audio control, use of companion devices, and use of dedicated control devices, such as game controllers and remote controls. The different input devices are designed for different tasks. Nevertheless, a user will frequently attempt to perform a task using a control input that is not adapted for the task. Embodiments of the present invention change the characteristics of the user interface to make it easier for the user to complete an intended task using the input device of the user's choice.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: October 27, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Szymon Piotr Stachniak, Kenneth Alan Lobb, Mario Esposito, Clinton Chi-Wen Woon
  • Publication number: 20150146923
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. A grid of voxels may then be generated based on the depth image such that the depth image may be downsampled. A model may be adjusted based on a location or position of one or more extremities estimated or determined for a human target in the grid of voxels. The model may also be adjusted based on a default location or position of the model in a default pose such as a T-pose, a DaVinci pose, and/or a natural pose.
    Type: Application
    Filed: October 27, 2014
    Publication date: May 28, 2015
    Inventors: Johnny Chung Lee, Tommer Leyvand, Szymon Piotr Stachniak, Craig C. Peeper