Patents by Inventor Pedro Urbina ESCOS
Pedro Urbina ESCOS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11928856
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: May 5, 2022
Date of Patent: March 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
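The experiment generator and experiment runner described in this abstract can be pictured as a parameter sweep over candidate sensor configurations, motion profiles, and environments, run entirely in simulation. The sketch below is hypothetical: the sensor list, the scoring rule, and every name in it are invented for illustration, not taken from the patent.

```python
from itertools import product

# Illustrative sketch of an experiment generator/runner: sweep candidate
# sensor hardware configs against simulated motions and environments.
SENSORS = [{"name": "rgb", "fov_deg": 90}, {"name": "depth", "fov_deg": 70}]
MOTIONS = ["orbit", "walkthrough"]
ENVIRONMENTS = ["office", "warehouse"]

def run_experiment(sensor, motion, environment):
    """Stand-in for one simulated run; returns a mock accuracy score.

    A real runner would render frames in the virtual environment and
    evaluate a computer vision or speech algorithm on them.
    """
    return (sensor["fov_deg"] / 100.0) * (0.9 if motion == "orbit" else 0.8)

def sweep():
    """Generate and run every experiment combination, best score first."""
    results = [
        ((s["name"], m, e), run_experiment(s, m, e))
        for s, m, e in product(SENSORS, MOTIONS, ENVIRONMENTS)
    ]
    return sorted(results, key=lambda r: r[1], reverse=True)

best_config, best_score = sweep()[0]
```

Because the whole sweep is virtual, a losing hardware configuration costs only compute time, which is the cost/speed advantage the abstract claims.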
-
Publication number: 20240078682
Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
Type: Application
Filed: November 13, 2023
Publication date: March 7, 2024
Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
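The preprocessing step this abstract describes — locate the tag data for an object named in an anomaly alert, then modify that tag data before training — can be sketched as a small transform over simulated ground truth. All field names (`occluded`, `weight`, `object_id`) are invented here; the patent does not specify the tag schema.

```python
# Hypothetical sketch: when an anomaly alert (e.g. an occlusion alert)
# names an object in a training image, that object's tag data in the
# simulated ground truth is modified before the tracker is trained.

def preprocess(simulated_data, alerts):
    """Return preprocessed training data with alerted tags modified."""
    # Deep-ish copy so the original simulated data stays untouched.
    out = {obj_id: dict(tag) for obj_id, tag in simulated_data.items()}
    for alert in alerts:
        obj_id = alert["object_id"]
        if obj_id in out:  # locate the tag data for the alerted object
            out[obj_id]["occluded"] = alert["kind"] == "occlusion"
            out[obj_id]["weight"] = 0.0  # down-weight the unreliable label
    return out

original = {
    "car_1": {"bbox": [10, 20, 50, 80], "occluded": False, "weight": 1.0},
    "car_2": {"bbox": [60, 22, 95, 81], "occluded": False, "weight": 1.0},
}
alerts = [{"object_id": "car_2", "kind": "occlusion"}]
clean = preprocess(original, alerts)
```

The point of the design is that labels known to be unreliable (occluded, crowded, or fast-moving objects) are corrected or down-weighted before they can mislead the multi-object tracking model.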
-
Publication number: 20240062528
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: Michael EBSTYNE, Pedro Urbina ESCOS, Emanuel SHALEV, Alex KIPMAN, Yuri PEKELNY, Jonathan Chi Hang CHAN
-
Patent number: 11854211
Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
Type: Grant
Filed: January 26, 2022
Date of Patent: December 26, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
-
Patent number: 11842529
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: July 8, 2021
Date of Patent: December 12, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Publication number: 20230394743
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Application
Filed: August 18, 2023
Publication date: December 7, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pedro URBINA ESCOS, Dimitrios LYMBEROPOULOS, Di WANG, Emanuel SHALEV
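The core idea in this abstract — cast several rays per pixel, record per-ray intersection data ("sub-pixel data"), and store those raw records in the output file rather than only the averaged pixel value — can be sketched in a few lines. Everything below (the toy scene, the jitter rule, the JSON file layout) is an invented illustration, not the patent's actual method.

```python
import json

# Hypothetical sketch: multiple jittered rays per pixel, with the per-ray
# sub-pixel records kept alongside the averaged pixel value in the "file".
SCENE_DEPTH = {(0, 0): 2.0, (1, 0): 2.5}  # toy per-pixel scene depth

def trace_pixel(px, py, rays_per_pixel=4):
    """Cast several jittered rays for one pixel; return sub-pixel records."""
    records = []
    for i in range(rays_per_pixel):
        jitter = i / rays_per_pixel  # deterministic sub-pixel offset
        depth = SCENE_DEPTH.get((px, py), float("inf")) + 0.01 * jitter
        records.append({"ray": i, "offset": jitter, "depth": depth})
    return records

def render(width, height):
    """Render a tiny view, keeping per-ray data for every pixel."""
    image = {}
    for py in range(height):
        for px in range(width):
            subs = trace_pixel(px, py)
            image[f"{px},{py}"] = {
                "sub_pixel": subs,  # raw per-ray data survives in the file
                "depth": sum(r["depth"] for r in subs) / len(subs),
            }
    return json.dumps(image)  # stand-in for writing an image file

file_bytes = render(2, 1)
```

Keeping the per-ray records is what makes such an image file useful as synthetic training data: downstream tools can recompute depth, coverage, or anti-aliasing decisions without re-rendering.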
-
Patent number: 11748937
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Grant
Filed: November 8, 2021
Date of Patent: September 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
-
Patent number: 11615137
Abstract: Various embodiments, methods and systems for implementing a distributed computing system crowdsourcing engine are provided. Initially, a source asset is received from a distributed synthetic data as a service (SDaaS) crowdsource interface. A crowdsource tag is received for the source asset via the distributed SDaaS crowdsource interface. Based in part on the crowdsource tag, the source asset is ingested. Ingesting the source asset comprises automatically computing values for asset-variation parameters of the source asset. The asset-variation parameters are programmable for machine-learning. A crowdsourced synthetic data asset comprising the values for asset-variation parameters is generated.
Type: Grant
Filed: May 31, 2018
Date of Patent: March 28, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti
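The ingestion step this abstract describes — take a crowdsourced source asset plus its crowdsource tag, and automatically compute values for machine-learning-programmable asset-variation parameters — might look like the sketch below. The parameter names and the tag-driven rules are entirely invented; the patent does not enumerate them.

```python
# Hypothetical sketch of SDaaS crowdsource ingestion: compute values for
# asset-variation parameters from a source asset and its crowdsource tag.
ASSET_VARIATION_PARAMS = ["scale", "rotation_deg", "material"]

def ingest(source_asset, crowdsource_tag):
    """Compute asset-variation parameter values for a tagged source asset."""
    values = {
        "scale": [0.5, 1.0, 2.0],
        "rotation_deg": [0, 90, 180, 270],
        # Invented tag-driven rule: outdoor assets get weathered materials.
        "material": ["weathered", "matte"] if crowdsource_tag == "outdoor"
                    else ["matte", "glossy"],
    }
    return {"asset": source_asset, "tag": crowdsource_tag, "params": values}

bench = ingest("bench.fbx", "outdoor")
```

Each computed parameter then becomes a knob a training pipeline can turn to stamp out many labeled variations of one contributed asset.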
-
Patent number: 11550841
Abstract: Various embodiments, methods and systems for implementing a distributed computing system scene assembly engine are provided. Initially, a selection of a first synthetic data asset and a selection of a second synthetic data asset are received from a distributed synthetic data as a service (SDaaS) integrated development environment (IDE). A synthetic data asset is associated with asset-variation parameters and scene-variation parameters; the asset-variation parameters and scene-variation parameters are programmable for machine-learning. Values for generating a synthetic data scene are received. The values correspond to asset-variation parameters or scene-variation parameters. Based on the values, the synthetic data scene is generated using the first synthetic data asset and the second synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: January 10, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
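The assembly step here — two selected synthetic data assets plus values for asset- and scene-variation parameters, combined into one synthetic data scene — reduces to a small composition function. The scene schema and parameter names below are illustrative assumptions, not the patent's data model.

```python
# Hypothetical sketch of a scene assembly engine: combine two synthetic
# data assets with asset- and scene-variation values into a scene spec.

def assemble_scene(first_asset, second_asset, values):
    """Build a scene description from two assets and variation values."""
    scene = {
        # Scene-variation parameters (invented examples).
        "lighting": values.get("lighting", "noon"),
        "camera_height_m": values.get("camera_height_m", 1.6),
        "assets": [],
    }
    for asset in (first_asset, second_asset):
        scene["assets"].append({
            "name": asset,
            # Asset-variation parameter (invented example).
            "scale": values.get("scale", 1.0),
        })
    return scene

scene = assemble_scene("chair.obj", "table.obj",
                       {"lighting": "dusk", "scale": 0.8})
```

Because every parameter is a plain value, a training pipeline can sweep them programmatically to mass-produce labeled scene variations.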
-
Publication number: 20220261516
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: May 5, 2022
Publication date: August 18, 2022
Inventors: Michael EBSTYNE, Pedro Urbina ESCOS, Yuri PEKELNY, Jonathan Chi Hang CHAN, Emanuel SHALEV, Alex KIPMAN, Mark FLICK
-
Patent number: 11354459
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: September 21, 2018
Date of Patent: June 7, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Patent number: 11335008
Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
Type: Grant
Filed: September 18, 2020
Date of Patent: May 17, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
-
Publication number: 20220148197
Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
Type: Application
Filed: January 26, 2022
Publication date: May 12, 2022
Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
-
Publication number: 20220092792
Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
Type: Application
Filed: September 18, 2020
Publication date: March 24, 2022
Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
-
Patent number: 11281996
Abstract: Various embodiments, methods and systems for implementing a distributed computing system feedback loop engine are provided. Initially, a training dataset report is accessed. The training dataset report identifies a synthetic data asset having values for asset-variation parameters. The synthetic data asset is associated with a frameset. Based on the training dataset report, the synthetic data asset is updated with a synthetic data asset variation. The frameset is updated using the updated synthetic data asset.
Type: Grant
Filed: May 31, 2018
Date of Patent: March 22, 2022
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kamran Zargahi, Michael John Ebstyne, Pedro Urbina Escos, Stephen Michelotti, Emanuel Shalev
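The feedback loop in this abstract — a training dataset report names a synthetic data asset, the asset gains a new variation, and the frameset built from it is refreshed — can be sketched as a single update pass. The report shape, the `stale` flag, and all identifiers are invented for illustration.

```python
# Hypothetical sketch of a feedback loop engine: apply a training dataset
# report by adding an asset variation and marking dependent framesets.

def apply_feedback(report, assets, framesets):
    """Update the reported asset and flag its framesets for regeneration."""
    asset_id = report["asset_id"]
    assets[asset_id]["variations"].append(report["suggested_variation"])
    for frameset in framesets:
        if asset_id in frameset["asset_ids"]:
            frameset["stale"] = True  # frameset must be re-rendered
    return assets, framesets

assets = {"cup": {"variations": ["default"]}}
framesets = [{"asset_ids": ["cup"], "stale": False}]
report = {"asset_id": "cup", "suggested_variation": "metallic"}
apply_feedback(report, assets, framesets)
```

Closing the loop this way lets training results drive which synthetic variations get generated next, instead of sweeping every parameter blindly.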
-
Publication number: 20220058857
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Application
Filed: November 8, 2021
Publication date: February 24, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Pedro URBINA ESCOS, Dimitrios LYMBEROPOULOS, Di WANG, Emanuel SHALEV
-
Patent number: 11250321
Abstract: An immersive feedback loop is disclosed for improving artificial intelligence (AI) applications used for virtual reality (VR) environments. Users may iteratively generate synthetic scene training data, train a neural network on the synthetic scene training data, generate synthetic scene evaluation data for an immersive VR experience, indicate additional training data needed to correct neural network errors indicated in the VR experience, and then generate and retrain on the additional training data, until the neural network reaches an acceptable performance level.
Type: Grant
Filed: May 8, 2018
Date of Patent: February 15, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Emanuel Shalev
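The iterate-until-acceptable loop this abstract describes — generate synthetic data, train, review errors in VR, request targeted extra data, retrain — has a simple control-flow skeleton. The mock accuracy curve and sample counts below are placeholders; a real loop would train an actual neural network and take the error feedback from a human VR review.

```python
# Hypothetical sketch of the immersive feedback loop's control flow.

def train_until_acceptable(target_accuracy=0.9, max_rounds=10):
    """Loop: train on data, evaluate, add targeted data, repeat."""
    training_samples, history = 100, []
    for round_num in range(1, max_rounds + 1):
        # Stand-in for training + evaluation; improves with each round.
        accuracy = min(0.99, 0.5 + 0.1 * round_num)
        history.append((round_num, training_samples, accuracy))
        if accuracy >= target_accuracy:
            return history  # acceptable performance level reached
        # VR review flags error cases; generate targeted extra data.
        training_samples += 50
    return history

history = train_until_acceptable()
```

The termination condition is the key design point: the loop stops generating synthetic data as soon as the model clears the acceptance bar, so data generation effort tracks actual error feedback.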
-
Patent number: 11170559
Abstract: A computer device includes a processor configured to simulate a virtual environment based on a set of virtual environment parameters, and perform ray tracing to render a view of the simulated virtual environment. The ray tracing includes generating a plurality of rays for one or more pixels of the rendered view of the simulated virtual environment. The processor is further configured to determine sub-pixel data for each of the plurality of rays based on intersections between the plurality of rays and the simulated virtual environment, and store the determined sub-pixel data for each of the plurality of rays in an image file.
Type: Grant
Filed: August 2, 2019
Date of Patent: November 9, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pedro Urbina Escos, Dimitrios Lymberopoulos, Di Wang, Emanuel Shalev
-
Publication number: 20210334601
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: July 8, 2021
Publication date: October 28, 2021
Inventors: Michael EBSTYNE, Pedro Urbina ESCOS, Emanuel SHALEV, Alex KIPMAN, Yuri PEKELNY, Jonathan Chi Hang CHAN
-
Patent number: 11132845
Abstract: A method for object recognition includes, at a computing device, receiving an image of a real-world object. An identity of the real-world object is recognized using an object recognition model trained on a plurality of computer-generated training images. A digital augmentation model corresponding to the real-world object is retrieved, the digital augmentation model including a set of augmentation-specific instructions. A pose of the digital augmentation model is aligned with a pose of the real-world object. An augmentation is provided, the augmentation associated with the real-world object and specified by the augmentation-specific instructions.
Type: Grant
Filed: May 22, 2019
Date of Patent: September 28, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harpreet Singh Sawhney, Andrey Konin, Bilha-Catherine W. Githinji, Amol Ashok Ambardekar, William Douglas Guyman, Muhammad Zeeshan Zia, Ning Xu, Sheng Kai Tang, Pedro Urbina Escos
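The pipeline in this abstract — recognize the object, retrieve its digital augmentation model, align the model's pose to the observed pose, then run the augmentation-specific instructions — can be sketched end to end. The registry, the string-matching "recognizer", the pose fields, and the instruction names are all invented stand-ins; the patent's recognizer is a model trained on computer-generated images.

```python
# Hypothetical sketch of the recognize -> retrieve -> align -> augment flow.
AUGMENTATION_MODELS = {
    "coffee_maker": {
        "pose": {"yaw_deg": 0},
        "instructions": ["highlight_water_tank", "show_brew_steps"],
    },
}

def recognize(image):
    """Stand-in for an object recognition model; returns an identity."""
    return "coffee_maker" if "coffee" in image else None

def augment(image, observed_pose):
    """Recognize an object and apply its augmentation-specific instructions."""
    identity = recognize(image)
    if identity is None:
        return None
    model = AUGMENTATION_MODELS[identity]  # retrieve augmentation model
    model["pose"]["yaw_deg"] = observed_pose["yaw_deg"]  # align poses
    return {"object": identity, "steps": model["instructions"]}

result = augment("coffee_maker_photo.png", {"yaw_deg": 30})
```

Aligning the stored model's pose to the observed pose is what lets the augmentation (e.g. a highlighted part) render anchored to the physical object rather than floating in screen space.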