Patents by Inventor Emanuel Shalev
Emanuel Shalev has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11928856
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: May 5, 2022
Date of Patent: March 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
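The experiment-orchestration idea in this abstract can be pictured as a parameter sweep: try each candidate sensor configuration against each simulated environment and algorithm, and rank the results. The sketch below is purely illustrative (all names, fields, and the scoring stub are assumptions, not the patented implementation):

```python
# Hypothetical sketch of sweeping candidate hardware configurations and
# algorithms across simulated environments, then ranking the outcomes.
from itertools import product

def run_experiment(sensor_config, environment, algorithm):
    """Simulate one trial and return a quality score (stubbed here)."""
    # A real simulator would render sensor data from the environment;
    # this stub just combines properties deterministically for illustration.
    return sensor_config["resolution"] * environment["complexity"] * algorithm["accuracy"]

def sweep(sensor_configs, environments, algorithms):
    """Try every combination and return results sorted best-first."""
    results = []
    for sensor, env, algo in product(sensor_configs, environments, algorithms):
        score = run_experiment(sensor, env, algo)
        results.append({"sensor": sensor["name"], "env": env["name"],
                        "algo": algo["name"], "score": score})
    return sorted(results, key=lambda r: r["score"], reverse=True)

best = sweep(
    sensor_configs=[{"name": "cam-720p", "resolution": 720},
                    {"name": "cam-1080p", "resolution": 1080}],
    environments=[{"name": "office", "complexity": 1.0},
                  {"name": "warehouse", "complexity": 1.5}],
    algorithms=[{"name": "slam-v1", "accuracy": 0.8}],
)[0]
```

Running the whole design space in simulation, as the abstract notes, avoids building each candidate hardware configuration physically.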
-
Publication number: 20240062528
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 11842529
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: July 8, 2021
Date of Patent: December 12, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Publication number: 20230394743
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Application
Filed: August 18, 2023
Publication date: December 7, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
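The key point of this abstract is that per-ray results are determined and stored individually rather than collapsed into a single averaged pixel color. A minimal sketch of that idea (not the patented implementation; the ray fields, the intersection stub, and the dict-based "image file" stand-in are all assumptions):

```python
# Illustrative sketch: generate several jittered rays per pixel and retain
# each ray's sub-pixel intersection data instead of averaging it away.
import random

def trace(ray):
    """Stub intersection test: pretend rays with positive x-direction hit geometry."""
    hit = ray["dx"] > 0
    return {"hit": hit, "depth": 1.0 if hit else float("inf")}

def render_pixel(px, py, rng, rays_per_pixel=4):
    samples = []
    for _ in range(rays_per_pixel):
        # Jitter each ray within the pixel footprint (sub-pixel offsets).
        ray = {"x": px + rng.random(), "y": py + rng.random(),
               "dx": rng.uniform(-1, 1), "dy": rng.uniform(-1, 1)}
        samples.append(trace(ray))
    return samples  # every ray's result is kept, per sub-pixel sample

# Stand-in for an image file: pixel coordinates mapped to their ray samples.
rng = random.Random(0)
image = {(x, y): render_pixel(x, y, rng) for x in range(2) for y in range(2)}
```

Keeping the full per-ray record makes the rendered output far richer for downstream uses such as generating labeled synthetic training data.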
-
Patent number: 11748937
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Grant
Filed: November 8, 2021
Date of Patent: September 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
-
Patent number: 11550841
Abstract: Various embodiments, methods and systems for implementing a distributed computing system scene assembly engine are provided. Initially, a selection of a first synthetic data asset and a selection of a second synthetic data asset are received from a distributed synthetic data as a service (SDaaS) integrated development environment (IDE). A synthetic data asset is associated with asset-variation parameters and scene-variation parameters, the asset-variation parameters and scene-variation parameters are programmable for machine-learning. Values for generating a synthetic data scene are received. The values correspond to asset-variation parameters or scene-variation parameters. Based on the values, the synthetic data scene is generated using the first synthetic data asset and the second synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: January 10, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
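The scene-assembly flow described here — two selected assets plus values for asset-variation and scene-variation parameters — can be sketched as a simple function. All field names below (rotation, lighting, camera_height) are hypothetical placeholders, not terms from the patent:

```python
# Hedged sketch of assembling a synthetic data scene from two assets under
# supplied asset-variation and scene-variation parameter values.
def generate_scene(asset_a, asset_b, asset_params, scene_params):
    """Combine two synthetic data assets into one scene description."""
    return {
        "assets": [
            {"id": asset_a, "rotation": asset_params.get("rotation", 0)},
            {"id": asset_b, "rotation": asset_params.get("rotation", 0)},
        ],
        # Scene-level variations apply to the scene as a whole.
        "lighting": scene_params.get("lighting", "daylight"),
        "camera_height": scene_params.get("camera_height", 1.5),
    }

scene = generate_scene("chair_01", "table_02",
                       asset_params={"rotation": 45},
                       scene_params={"lighting": "dusk"})
```

Because the variation parameters are programmable, a training pipeline can enumerate many parameter values to mass-produce labeled scene variants.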
-
Publication number: 20220261516
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: May 5, 2022
Publication date: August 18, 2022
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Patent number: 11354459
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: September 21, 2018
Date of Patent: June 7, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Patent number: 11315300
Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
Type: Grant
Filed: June 30, 2020
Date of Patent: April 26, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
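The trick this abstract describes — reusing a renderer's three color channels to carry other spectral bands, then combining passes — can be shown with a toy two-pass render. The renderer stub and band values below are assumptions for illustration only:

```python
# Illustrative sketch: render the same mesh twice with different 3-band
# material data sets (RGB channels reused as spectral bands), then
# concatenate per-pixel results into a 6-band multi-spectral image.
def render(mesh, band_material):
    """Stub renderer: one 'pixel' whose 3 channels carry the band values."""
    return [list(band_material["bands"])]  # shape: 1 pixel x 3 bands

def multispectral_render(mesh, material_set_1, material_set_2):
    pass1 = render(mesh, material_set_1)  # e.g. visible-light bands
    pass2 = render(mesh, material_set_2)  # e.g. three infrared bands
    # Combine passes: each pixel now carries all six spectral bands.
    return [a + b for a, b in zip(pass1, pass2)]

pixels = multispectral_render(
    mesh="cube",
    material_set_1={"bands": [0.8, 0.6, 0.4]},  # visible reflectances
    material_set_2={"bands": [0.9, 0.7, 0.5]},  # infrared reflectances
)
```

Abstracting materials behind a translator, as the abstract notes, lets the band data sets be swapped without touching each asset's mesh.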
-
Patent number: 11281996
Abstract: Various embodiments, methods and systems for implementing a distributed computing system feedback loop engine are provided. Initially, a training dataset report is accessed. The training dataset report identifies a synthetic data asset having values for asset-variation parameters. The synthetic data asset is associated with a frameset. Based on the training dataset report, the synthetic data asset with a synthetic data asset variation is updated. The frameset is updated using the updated synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: March 22, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
-
Publication number: 20220058857
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Application
Filed: November 8, 2021
Publication date: February 24, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
-
Patent number: 11250321
Abstract: An immersive feedback loop is disclosed for improving artificial intelligence (AI) applications used for virtual reality (VR) environments. Users may iteratively generate synthetic scene training data, train a neural network on the synthetic scene training data, generate synthetic scene evaluation data for an immersive VR experience, indicate additional training data needed to correct neural network errors indicated in the VR experience, and then generate and retrain on the additional training data, until the neural network reaches an acceptable performance level.
Type: Grant
Filed: May 8, 2018
Date of Patent: February 15, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Emanuel Shalev
-
Patent number: 11170559
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Grant
Filed: August 2, 2019
Date of Patent: November 9, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
-
Publication number: 20210334601
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: July 8, 2021
Publication date: October 28, 2021
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 11106943
Abstract: A computer implemented method includes obtaining a first deep neural network (DNN) model trained on labeled real image data for a downstream vision task, obtaining a second DNN model trained on synthetic images created with random image parameter values for the downstream vision task, obtaining a third DNN model trained on the labeled real image data and the synthetic images for the downstream vision task, performing a forward pass execution of each model to generate a loss, backpropagating the loss to modify parameter values, and iterating the forward pass execution and backpropagating with images generated by the modified parameters to jointly train the models and optimize the parameters.
Type: Grant
Filed: July 23, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuri Pekelny, Pedro Urbina Escos, Emanuel Shalev, Di Wang, Dimitrios Lymperopoulos
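The unusual step in this method is backpropagating the loss into the *image-generation parameters* themselves, so the synthetic data improves alongside the models. A toy numeric sketch of that idea, with a single scalar parameter and hand-written gradients standing in for real DNNs (everything here is an illustrative assumption):

```python
# Toy sketch: backpropagate a loss into the synthetic-image parameter and
# regenerate images with the updated parameter each iteration.
def generate_synthetic(param):
    """Stand-in for rendering synthetic images from a parameter value."""
    return param

def forward_loss(param, target=3.0):
    # Loss: squared distance between the synthetic data's statistics and
    # the statistics the downstream vision task needs (target is invented).
    return (generate_synthetic(param) - target) ** 2

def backprop_step(param, lr=0.25, target=3.0):
    grad = 2 * (generate_synthetic(param) - target)  # dL/dparam by hand
    return param - lr * grad

param = 0.0
for _ in range(10):  # iterate forward pass + backpropagation, per the abstract
    param = backprop_step(param)
loss = forward_loss(param)
```

In the patented method the loss comes from comparing the three jointly trained DNN models, not from a fixed target; the sketch only shows the optimize-the-generator control flow.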
-
Patent number: 11087176
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: May 8, 2018
Date of Patent: August 10, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 11030458
Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
Type: Grant
Filed: September 14, 2018
Date of Patent: June 8, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
-
Patent number: 11023517
Abstract: Various embodiments, methods and systems for implementing a distributed computing frameset assembly engine are provided. Initially, a synthetic data scene is accessed. A first set of values for scene-variation parameters is determined. The first set of values is automatically determined for generating a synthetic data scene frameset. The synthetic data scene frameset is generated based on the first set of values. The synthetic data scene frameset comprises at least a first frame in the frameset comprising the synthetic data scene updated based on a value for a scene-variation parameter. The synthetic data scene frameset is stored.
Type: Grant
Filed: May 31, 2018
Date of Patent: June 1, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
-
Patent number: 11003812
Abstract: A feedback loop, for experience driven development of mixed reality (MR) devices, simulates application performance using various synthetic MR device configurations. Examples display, using an application, a virtual object on a first MR device, during a recording session; record, during the recording session, sensor data from the first MR device; simulate sensor data, based at least on the recorded sensor data, for the virtual object on simulated MR devices having various configurations of simulated sensors, during simulation sessions; and generate displays, using the application, of the virtual object on the simulated MR devices, during playback sessions.
Type: Grant
Filed: November 20, 2018
Date of Patent: May 11, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Emanuel Shalev, Yuri Pekelny, Pedro Urbina Escos
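The record-then-simulate step here amounts to deriving, from sensor data captured on one real device, the stream a differently configured simulated device would have produced. A minimal sketch, with the sample-rate and noise parameters as purely illustrative assumptions:

```python
# Sketch: derive a simulated sensor stream (e.g. lower sample rate, added
# bias) from sensor data recorded on a real MR device.
def simulate_sensor(recorded, rate_divisor=2, bias=0.0):
    """Downsample the recorded stream and apply a configurable bias."""
    return [sample + bias for sample in recorded[::rate_divisor]]

recorded = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]              # e.g. samples at 60 Hz
simulated = simulate_sensor(recorded, rate_divisor=2)  # a 30 Hz device variant
```

Replaying the application against many such derived streams lets the feedback loop compare candidate MR device configurations without building each one.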
-
Publication number: 20210035350
Abstract: A computer device is provided that includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Application
Filed: August 2, 2019
Publication date: February 4, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev