Patents by Inventor Michael John Ebstyne
Michael John Ebstyne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11615137
Abstract: Various embodiments, methods and systems for implementing a distributed computing system crowdsourcing engine are provided. Initially, a source asset is received from a distributed synthetic data as a service (SDaaS) crowdsource interface. A crowdsource tag is received for the source asset via the distributed SDaaS crowdsource interface. Based in part on the crowdsource tag, the source asset is ingested. Ingesting the source asset comprises automatically computing values for asset-variation parameters of the source asset. The asset-variation parameters are programmable for machine-learning. A crowdsourced synthetic data asset comprising the values for asset-variation parameters is generated.
Type: Grant
Filed: May 31, 2018
Date of Patent: March 28, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti
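The patent does not publish an implementation, but the ingestion step it describes — receiving a tagged source asset and automatically computing asset-variation parameter values — could be sketched roughly as follows. All names, parameter choices, and values here are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of SDaaS crowdsource ingestion: a tagged source asset
# is ingested by computing machine-learning-programmable variation parameters.
from dataclasses import dataclass, field

@dataclass
class SyntheticDataAsset:
    source_name: str
    crowdsource_tag: str
    variation_params: dict = field(default_factory=dict)

def ingest(source_name: str, crowdsource_tag: str) -> SyntheticDataAsset:
    """Ingest a source asset: derive asset-variation parameters.
    The parameter names and ranges below are invented placeholders."""
    params = {
        "scale": [0.5, 1.0, 2.0],      # size variations to render
        "rotation_deg": [0, 90, 180],  # orientation variations
        "texture": ["default", "worn"],
    }
    return SyntheticDataAsset(source_name, crowdsource_tag, params)

asset = ingest("chair.obj", "furniture")
```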
-
Patent number: 11550841
Abstract: Various embodiments, methods and systems for implementing a distributed computing system scene assembly engine are provided. Initially, a selection of a first synthetic data asset and a selection of a second synthetic data asset are received from a distributed synthetic data as a service (SDaaS) integrated development environment (IDE). A synthetic data asset is associated with asset-variation parameters and scene-variation parameters, the asset-variation parameters and scene-variation parameters are programmable for machine-learning. Values for generating a synthetic data scene are received. The values correspond to asset-variation parameters or scene-variation parameters. Based on the values, the synthetic data scene is generated using the first synthetic data asset and the second synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: January 10, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
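As a rough illustration of the scene assembly step above — combining two selected synthetic data assets with supplied variation values — a minimal sketch might look like this. The function name, scene schema, and parameter names are assumptions for illustration only.

```python
# Illustrative scene assembly: two synthetic data assets plus variation
# values (as might be supplied through the SDaaS IDE) yield a scene.
def assemble_scene(asset_a, asset_b, values):
    """Build a scene description from two assets and a dict of
    asset- or scene-variation parameter values."""
    scene = {"assets": [asset_a, asset_b]}
    scene.update(values)  # variation values become scene properties
    return scene

scene = assemble_scene("table", "lamp", {"lighting": "dusk", "camera_deg": 45})
```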
-
Patent number: 11281996
Abstract: Various embodiments, methods and systems for implementing a distributed computing system feedback loop engine are provided. Initially, a training dataset report is accessed. The training dataset report identifies a synthetic data asset having values for asset-variation parameters. The synthetic data asset is associated with a frameset. Based on the training dataset report, the synthetic data asset is updated with a synthetic data asset variation. The frameset is updated using the updated synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: March 22, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
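One iteration of the feedback loop described above — read a training dataset report, update the identified asset with a variation, then rebuild its frameset — could be sketched as below. The report fields and data shapes are invented for illustration; the patent specifies no schema.

```python
# Hedged sketch of the feedback loop engine: a training report flags an
# asset, the asset is updated with the suggested variation, and the
# associated frameset is regenerated from the updated asset.
def feedback_update(report, assets, framesets):
    """Apply one feedback-loop iteration and return the updated asset
    and its rebuilt frameset."""
    asset_id = report["asset_id"]
    assets[asset_id].update(report["suggested_variation"])
    framesets[asset_id] = [
        dict(assets[asset_id], frame=i) for i in range(report["frames"])
    ]
    return assets[asset_id], framesets[asset_id]

assets = {"a1": {"scale": 1.0}}
report = {"asset_id": "a1", "suggested_variation": {"scale": 2.0}, "frames": 2}
asset, frames = feedback_update(report, assets, {})
```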
-
Patent number: 11023517
Abstract: Various embodiments, methods and systems for implementing a distributed computing frameset assembly engine are provided. Initially, a synthetic data scene is accessed. A first set of values for scene-variation parameters is determined. The first set of values is automatically determined for generating a synthetic data scene frameset. The synthetic data scene frameset is generated based on the first set of values. The synthetic data scene frameset comprises at least a first frame in the frameset comprising the synthetic data scene updated based on a value for a scene-variation parameter. The synthetic data scene frameset is stored.
Type: Grant
Filed: May 31, 2018
Date of Patent: June 1, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
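The frameset idea above — a sequence of frames, each the base scene updated with one value of a scene-variation parameter — can be sketched in a few lines. The scene schema and parameter name are hypothetical.

```python
# Illustrative frameset assembly: one frame per scene-variation value,
# each frame being a copy of the base scene with that value applied.
def generate_frameset(scene, param, values):
    """Return a frameset: the scene copied once per variation value."""
    return [dict(scene, **{param: v}) for v in values]

frames = generate_frameset({"asset": "car", "light_deg": 0}, "light_deg", [0, 45, 90])
```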
-
Patent number: 10909423
Abstract: Data representing a scene is received. The scene includes labeled elements such as walls, a floor, a ceiling, and objects placed at various locations in the scene. The original received scene may be modified in different ways to create new scenes that are based on the original scene. These modifications include adding clutter to the scene, moving one or more elements of the scene, swapping one or more elements of the scene with different labeled elements, changing the size, color, or materials associated with one or more of the elements of the scene, and changing the lighting used in the scene. Each new scene may be used to generate labeled training data for a classifier by placing a virtual sensor (e.g., a camera) in the new scene, and generating sensor output data for the virtual sensor based on its placement in the new scene.
Type: Grant
Filed: June 7, 2018
Date of Patent: February 2, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Trebor Lee Connell, Emanuel Shalev, Michael John Ebstyne, Don Dongwoo Kim
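A minimal sketch of the scene-mutation idea above: each modification derives a new labeled scene from the original without altering it. The mutation set shown (clutter, move, relight) is a subset of the modifications listed, and the scene schema is an assumption.

```python
# Hypothetical scene mutation for synthetic training data: derive a new
# scene from the original by one random modification.
import copy
import random

def mutate_scene(scene, rng):
    """Return a new scene: add clutter, move an element, or relight."""
    new_scene = copy.deepcopy(scene)  # never modify the original scene
    op = rng.choice(["clutter", "move", "relight"])
    if op == "clutter":
        new_scene["elements"].append({"label": "clutter", "pos": (rng.random(), rng.random())})
    elif op == "move":
        element = rng.choice(new_scene["elements"])
        element["pos"] = (rng.random(), rng.random())
    else:
        new_scene["lighting"] = rng.choice(["day", "night", "indoor"])
    return new_scene

rng = random.Random(0)
base = {"elements": [{"label": "wall", "pos": (0, 0)}], "lighting": "day"}
variants = [mutate_scene(base, rng) for _ in range(5)]
```

Each variant would then be rendered through a virtual sensor to produce labeled training images.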
-
Patent number: 10877927
Abstract: Various embodiments, methods and systems for implementing a distributed computing system asset assembly engine are provided. Initially, a first source asset is received from a first distributed Synthetic Data as a Service (SDaaS) upload interface. A second source asset is received from a second distributed SDaaS upload interface. The first source asset and the second source asset are ingested. Ingesting a source asset comprises automatically computing values for asset-variation parameters of the source asset, where the asset-variation parameters are programmable for machine-learning. A first synthetic data asset comprising a first set of values for the asset-variation parameters is generated. A second synthetic data asset comprising a second set of values for the asset-variation parameters is generated. The first synthetic data asset and the second synthetic data asset are stored in a synthetic data asset store.
Type: Grant
Filed: May 31, 2018
Date of Patent: December 29, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
-
Publication number: 20200372121
Abstract: Various embodiments, methods and systems for implementing a distributed computing system crowdsourcing engine are provided. Initially, a source asset is received from a distributed synthetic data as a service (SDaaS) crowdsource interface. A crowdsource tag is received for the source asset via the distributed SDaaS crowdsource interface. Based in part on the crowdsource tag, the source asset is ingested. Ingesting the source asset comprises automatically computing values for asset-variation parameters of the source asset. The asset-variation parameters are programmable for machine-learning. A crowdsourced synthetic data asset comprising the values for asset-variation parameters is generated.
Type: Application
Filed: May 31, 2018
Publication date: November 26, 2020
Inventors: Kamran ZARGAHI, Michael John EBSTYNE, Pedro Urbina ESCOS, Stephen MICHELOTTI
-
Publication number: 20200371989
Abstract: Various embodiments, methods and systems for implementing a distributed computing system feedback loop engine are provided. Initially, a training dataset report is accessed. The training dataset report identifies a synthetic data asset having values for asset-variation parameters. The synthetic data asset is associated with a frameset. Based on the training dataset report, the synthetic data asset is updated with a synthetic data asset variation. The frameset is updated using the updated synthetic data asset.
Type: Application
Filed: May 31, 2018
Publication date: November 26, 2020
Inventors: Kamran ZARGAHI, Michael John EBSTYNE, Pedro Urbina ESCOS, Stephen MICHELOTTI, Emanuel SHALEV
-
Publication number: 20200372118
Abstract: Various embodiments, methods and systems for implementing a distributed computing system asset assembly engine are provided. Initially, a first source asset is received from a first distributed Synthetic Data as a Service (SDaaS) upload interface. A second source asset is received from a second distributed SDaaS upload interface. The first source asset and the second source asset are ingested. Ingesting a source asset comprises automatically computing values for asset-variation parameters of the source asset, where the asset-variation parameters are programmable for machine-learning. A first synthetic data asset comprising a first set of values for the asset-variation parameters is generated. A second synthetic data asset comprising a second set of values for the asset-variation parameters is generated. The first synthetic data asset and the second synthetic data asset are stored in a synthetic data asset store.
Type: Application
Filed: May 31, 2018
Publication date: November 26, 2020
Inventors: Kamran ZARGAHI, Michael John EBSTYNE, Pedro Urbina ESCOS, Stephen MICHELOTTI, Emanuel SHALEV
-
Publication number: 20200372119
Abstract: Various embodiments, methods and systems for implementing a distributed computing system scene assembly engine are provided. Initially, a selection of a first synthetic data asset and a selection of a second synthetic data asset are received from a distributed synthetic data as a service (SDaaS) integrated development environment (IDE). A synthetic data asset is associated with asset-variation parameters and scene-variation parameters, the asset-variation parameters and scene-variation parameters are programmable for machine-learning. Values for generating a synthetic data scene are received. The values correspond to asset-variation parameters or scene-variation parameters. Based on the values, the synthetic data scene is generated using the first synthetic data asset and the second synthetic data asset.
Type: Application
Filed: May 31, 2018
Publication date: November 26, 2020
Inventors: Kamran ZARGAHI, Michael John EBSTYNE, Pedro Urbina ESCOS, Stephen MICHELOTTI, Emanuel SHALEV
-
Publication number: 20200372120
Abstract: Various embodiments, methods and systems for implementing a distributed computing frameset assembly engine are provided. Initially, a synthetic data scene is accessed. A first set of values for scene-variation parameters is determined. The first set of values is automatically determined for generating a synthetic data scene frameset. The synthetic data scene frameset is generated based on the first set of values. The synthetic data scene frameset comprises at least a first frame in the frameset comprising the synthetic data scene updated based on a value for a scene-variation parameter. The synthetic data scene frameset is stored.
Type: Application
Filed: May 31, 2018
Publication date: November 26, 2020
Inventors: Kamran ZARGAHI, Michael John EBSTYNE, Pedro Urbina ESCOS, Stephen MICHELOTTI, Emanuel SHALEV
-
Publication number: 20190377980
Abstract: Data representing a scene is received. The scene includes labeled elements such as walls, a floor, a ceiling, and objects placed at various locations in the scene. The original received scene may be modified in different ways to create new scenes that are based on the original scene. These modifications include adding clutter to the scene, moving one or more elements of the scene, swapping one or more elements of the scene with different labeled elements, changing the size, color, or materials associated with one or more of the elements of the scene, and changing the lighting used in the scene. Each new scene may be used to generate labeled training data for a classifier by placing a virtual sensor (e.g., a camera) in the new scene, and generating sensor output data for the virtual sensor based on its placement in the new scene.
Type: Application
Filed: June 7, 2018
Publication date: December 12, 2019
Inventors: Trebor Lee CONNELL, Emanuel SHALEV, Michael John EBSTYNE, Don Dongwoo KIM
-
Patent number: 9767609
Abstract: Embodiments are disclosed that relate to determining a pose of a device. One disclosed embodiment provides a method comprising receiving sensor information from one or more sensors of the device, and selecting a motion-family model from a plurality of different motion-family models based on the sensor information. The method further comprises providing the sensor information to the selected motion-family model and outputting an estimated pose of the device according to the selected motion-family model.
Type: Grant
Filed: February 12, 2014
Date of Patent: September 19, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ethan Eade, Michael John Ebstyne, Frederick Schaffalitzky, Drew Steedly
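The selection step described above — choosing a motion-family model from sensor statistics before estimating pose — could look roughly like this. The family names, thresholds, and sensor fields are invented; the patent does not specify them.

```python
# Hedged sketch of motion-family model selection: simple sensor statistics
# pick the model family that best explains the device's motion.
def select_motion_family(accel_variance, speed_mps):
    """Pick a motion-family model from sensor statistics (illustrative
    thresholds; real systems would use a richer classifier)."""
    if speed_mps > 5.0:
        return "vehicle"
    if accel_variance > 0.2:
        return "walking"
    return "stationary"

def estimate_pose(sensor_info):
    """Route sensor information to the selected family's estimator.
    Here each family just tags the last known pose."""
    family = select_motion_family(sensor_info["accel_var"], sensor_info["speed"])
    return {"family": family, "pose": sensor_info["last_pose"]}
```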
-
Patent number: 9759918
Abstract: Embodiments related to mapping an environment of a machine-vision system are disclosed. For example, one disclosed method includes acquiring image data resolving one or more reference features of an environment and computing a parameter value based on the image data, wherein the parameter value is responsive to physical deformation of the machine-vision system.
Type: Grant
Filed: May 1, 2014
Date of Patent: September 12, 2017
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Drew Steedly, Georg Klein, Ethan Eade, Michael Grabner
-
Patent number: 9721362
Abstract: Auto-completion of an input partial line pattern. Upon detecting that the user has input the partial line pattern, the scope of the input partial line pattern is matched against corresponding line patterns from a collection of line pattern representations to form a scoped match set of line pattern representations. For one or more of the line pattern representations in the scoped match set, a visualization of completion options is then provided. For example, the corresponding line pattern representation might be displayed in a distinct portion of the display as compared to the input partial line pattern, or perhaps in the same portion, in which case the remaining portion of the line pattern representation might extend off of the input partial line pattern representation.
Type: Grant
Filed: April 24, 2013
Date of Patent: August 1, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Smolinski, Michael John Ebstyne
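The scoped matching above — comparing the input prefix against the same-length prefix of each stored pattern to form the match set — can be sketched as follows. The distance measure (mean absolute difference) and tolerance are assumptions.

```python
# Illustrative partial-line-pattern auto-completion: stored patterns whose
# prefix is close to the input partial pattern become completion options.
def completion_options(partial, collection, tol=1.0):
    """Return stored patterns whose prefix is within tol (mean absolute
    difference) of the input partial line pattern."""
    n = len(partial)
    matches = []
    for pattern in collection:
        if len(pattern) < n:
            continue  # too short to complete the input
        dist = sum(abs(a - b) for a, b in zip(partial, pattern[:n])) / n
        if dist <= tol:
            matches.append(pattern)
    return matches

stored = [[1, 2, 3, 4, 5], [1, 2, 2, 1, 0], [9, 9, 9, 9, 9]]
options = completion_options([1, 2], stored)
```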
-
Patent number: 9495801
Abstract: An augmented reality device including a plurality of sensors configured to output pose information indicating a pose of the augmented reality device. The augmented reality device further includes a band-agnostic filter and a band-specific filter. The band-specific filter includes an error correction algorithm configured to receive pose information as filtered by the band-agnostic filter and reduce a tracking error of the pose information in a selected frequency band. The augmented reality device further includes a display engine configured to position a virtual object on a see-through display as a function of the pose information as filtered by the band-agnostic filter and the band-specific filter.
Type: Grant
Filed: May 1, 2014
Date of Patent: November 15, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Drew Steedly, Calvin Chan, Ethan Eade, Alex Kipman, Georg Klein
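The two-stage structure above — a band-agnostic filter followed by a band-specific correction — can be illustrated with crude stand-in filters on a 1-D pose coordinate. These are not the patented algorithms; a moving average and an exponential pull-toward-predecessor merely show the pipeline shape.

```python
# Stand-in sketch of band-agnostic + band-specific pose filtering.
def band_agnostic(poses):
    """Band-agnostic stage: 2-sample moving average over a pose stream."""
    return [poses[0]] + [(a + b) / 2 for a, b in zip(poses, poses[1:])]

def band_specific(poses, gain=0.5):
    """Band-specific stage (crude low-pass stand-in): damp residual
    high-frequency error by pulling each sample toward its predecessor."""
    out = [poses[0]]
    for p in poses[1:]:
        out.append(out[-1] + gain * (p - out[-1]))
    return out

raw = [0.0, 1.0, 0.0, 1.0, 0.0]           # jittery pose coordinate
filtered = band_specific(band_agnostic(raw))  # smoother stream for display
```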
-
Patent number: 9430038
Abstract: Embodiments that relate to communicating to a user of a head-mounted display device an estimated quality level of a world-lock display mode are disclosed. For example, in one disclosed embodiment sensor data is received from one or more sensors of the device. Using the sensor data, an estimated pose of the device is determined. Using the estimated pose, one or more virtual objects are displayed via the device in either the world-lock display mode or in a body-lock display mode. One or more of input uncertainty values of the sensor data and pose uncertainty values of the estimated pose are determined. The input uncertainty values and/or pose uncertainty values are mapped to the estimated quality level of the world-lock display mode. Feedback of the estimated quality level is communicated to the user via the device.
Type: Grant
Filed: May 1, 2014
Date of Patent: August 30, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Drew Steedly, Ethan Eade, Martin Shetter, Michael Grabner
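The mapping step above — turning input and pose uncertainty values into a quality level for the world-lock mode — might be sketched as below. The thresholds, level names, and combining rule are assumptions; the patent describes the mapping only in general terms.

```python
# Illustrative mapping from tracking uncertainty to an estimated quality
# level for the world-lock display mode.
def estimate_quality(input_uncertainty, pose_uncertainty):
    """Combine sensor-input and pose uncertainties (taking the worst)
    into a discrete quality level for user feedback."""
    worst = max(input_uncertainty, pose_uncertainty)
    if worst < 0.1:
        return "good"
    if worst < 0.5:
        return "degraded"
    return "unavailable"  # device could fall back to body-lock mode
```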
-
Patent number: 9361732
Abstract: Various embodiments relating to controlling a see-through display are disclosed. In one embodiment, virtual objects may be displayed on the see-through display. The virtual objects transition between having a position that is body-locked and a position that is world-locked based on various transition events.
Type: Grant
Filed: May 1, 2014
Date of Patent: June 7, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Michael John Ebstyne, Frederik Schaffalitzky, Stephen Latta, Paul Albert Lalonde, Drew Steedly, Alex Kipman, Ethan Eade
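The transition behavior above can be sketched as a small state machine: a virtual object switches between body-locked and world-locked positioning when a transition event fires. The event names here are examples only; the patent covers "various" transition events without enumerating them.

```python
# Hypothetical sketch of body-lock/world-lock transitions on a see-through
# display, driven by named transition events.
class VirtualObject:
    def __init__(self):
        self.mode = "body-locked"  # default: follows the user's body

    def on_event(self, event):
        """Transition between display modes on example events."""
        if event in ("tracking_acquired", "user_pinned_object"):
            self.mode = "world-locked"  # fixed to a real-world position
        elif event in ("tracking_lost", "user_walked_away"):
            self.mode = "body-locked"

obj = VirtualObject()
obj.on_event("tracking_acquired")
```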
-
Patent number: 9317125
Abstract: The gesture-based searching of a line pattern representation amongst a collection of line pattern representations. Upon detecting an input gesture, a computing system matches the input gesture against each of multiple line pattern representations. Each line pattern representation represents a line pattern having a changing value in a first dimension as a function of a value in a second dimension. At least some of the matched set may then be visualized to the user. The input gesture may be a literal line pattern to match against, or might be a gesture that has semantic meaning that describes search parameters of a line pattern to search for. The matched set may be presented so that a display parameter conveys a closeness of the match.
Type: Grant
Filed: April 24, 2013
Date of Patent: April 19, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Smolinski, Michael John Ebstyne
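For the literal-pattern case above, matching and ranking by closeness could be sketched as follows; the UI would then map each score to a display parameter such as opacity. The distance metric (mean absolute difference) is an assumption.

```python
# Illustrative gesture-based line pattern search: rank stored patterns by
# closeness to the drawn gesture.
def search(gesture, collection):
    """Return (distance, name) pairs sorted by closeness to the gesture."""
    scored = []
    for name, pattern in collection.items():
        n = min(len(gesture), len(pattern))
        dist = sum(abs(a - b) for a, b in zip(gesture, pattern)) / n
        scored.append((dist, name))
    return sorted(scored)  # best match first

collection = {"flat": [0, 0, 0, 0], "rising": [0, 1, 2, 3], "spike": [0, 5, 0, 0]}
results = search([0, 1, 2, 3], collection)
```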
-
Patent number: 9275480
Abstract: The encoding of a line pattern representation. The line pattern representation has a changing value in a first dimension as a function of a value in a second dimension. The line pattern representation is segmented into multiple segments along the second dimension. The line pattern representation is then encoded by assigning a quantized value to each of the segments based on the changing value of the line pattern in the first dimension as present within the corresponding segment. If the line pattern generally falls within a given range within a segment, the segment will be assigned a quantized value corresponding to that range. The encoding may be used to assign the line pattern representation into a category.
Type: Grant
Filed: April 24, 2013
Date of Patent: March 1, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Smolinski, Michael John Ebstyne