Patents by Inventor Pedro U. ESCOS

Pedro U. ESCOS has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of the core techniques described in these filings appear after the listing.

  • Patent number: 11315300
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three-dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually. (An illustrative sketch of this multi-spectral rendering approach appears after the listing.)
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: April 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
  • Patent number: 11030458
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset. (An illustrative sketch of this synthetic-training approach appears after the listing.)
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 8, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
  • Publication number: 20200342654
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three-dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Application
    Filed: June 30, 2020
    Publication date: October 29, 2020
    Inventors: Michael J. EBSTYNE, Jonathan C. HANZELKA, Emanuel SHALEV, Trebor L. CONNELL, Pedro U. ESCOS
  • Patent number: 10740949
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three-dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
  • Publication number: 20200090398
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three-dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Michael J. EBSTYNE, Jonathan C. HANZELKA, Emanuel SHALEV, Trebor L. CONNELL, Pedro U. ESCOS
  • Publication number: 20200089954
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Muhammad Zeeshan ZIA, Emanuel SHALEV, Jonathan C. HANZELKA, Harpreet S. SAWHNEY, Pedro U. ESCOS, Michael J. EBSTYNE
  • Publication number: 20190340317
    Abstract: Systems and methods are disclosed for using a synthetic world interface to model environments, sensors, and platforms, such as for computer vision sensor platform design. Digital models may be passed through a simulation service to generate synthetic experiment data. Systematic sweeps of parameters for various components of the sensor or platform design under test, under multiple environmental conditions, can facilitate time- and cost-efficient engineering efforts by revealing parameter sensitivities and environmental effects for multiple proposed configurations. Searches through the generated synthetic experiment data can permit rapid identification of desirable design configuration candidates. (An illustrative sketch of this parameter-sweep approach appears after the listing.)
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Jonathan Chi Hang CHAN, Michael EBSTYNE, Alex A. KIPMAN, Pedro U. ESCOS, Andrew C. GORIS
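
The following sketches are editorial illustrations only; they are not taken from the filings, and every function, class, band label, and parameter in them is an assumption made for the example.

The multi-spectral rendering entries above describe reusing a renderer's three color channels to carry additional spectral bands, with a material translator returning band-specific material data so material properties can be managed apart from individual assets. A minimal sketch of that idea, assuming a generic stand-in renderer and hypothetical band names, might look like this:

```python
"""Illustrative sketch only: reusing a renderer's three color channels to
carry extra spectral bands, then stacking passes into a multi-spectral cube.
All names (render_rgb, MaterialTranslator, band labels) are hypothetical."""

from dataclasses import dataclass
from typing import Dict, List, Sequence

import numpy as np


@dataclass
class BandedMaterial:
    """Per-band reflectance values for one material, keyed by band label."""
    reflectance: Dict[str, float]


class MaterialTranslator:
    """Resolves an asset's material reference to banded material data,
    so material information is managed apart from each individual asset."""

    def __init__(self, library: Dict[str, BandedMaterial]):
        self.library = library

    def banded(self, material_ref: str, bands: Sequence[str]) -> List[float]:
        mat = self.library[material_ref]
        return [mat.reflectance[b] for b in bands]


def render_rgb(mesh: np.ndarray, channel_values: List[float]) -> np.ndarray:
    """Stand-in for a conventional RGB renderer: each 'channel' here is just
    the mesh's coverage mask scaled by one per-channel material value."""
    return np.stack([mesh * v for v in channel_values], axis=-1)


def render_multispectral(mesh, material_ref, translator, bands, pass_size=3):
    """Render the same mesh several times, packing up to three spectral
    bands per pass into the R/G/B channels, then concatenate the passes."""
    passes = []
    for i in range(0, len(bands), pass_size):
        chunk = bands[i:i + pass_size]
        values = translator.banded(material_ref, chunk)
        passes.append(render_rgb(mesh, values))
    return np.concatenate(passes, axis=-1)  # H x W x num_bands cube


if __name__ == "__main__":
    mesh = np.ones((4, 4))  # trivial stand-in for a rasterized 3D mesh
    translator = MaterialTranslator({
        "aluminum": BandedMaterial({"red": 0.9, "green": 0.9, "blue": 0.9,
                                    "nir": 0.8, "swir": 0.7, "lwir": 0.2}),
    })
    cube = render_multispectral(mesh, "aluminum", translator,
                                ["red", "green", "blue", "nir", "swir", "lwir"])
    print(cube.shape)  # (4, 4, 6): six spectral bands from two RGB passes
```

The point of the sketch is the reuse pattern: each rendering pass only ever produces three channels, but by swapping in a different banded material data set per pass and concatenating the results, an ordinary RGB pipeline yields a multi-band output.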
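The synthetic-training entries describe building a virtual scene around a digitized 3D model, generating many labeled variations of that scene, and training a recognizer on the result, so that labels come for free with the data. A rough sketch under those assumptions, with made-up variation parameters and a placeholder training call, could look like the following:

```python
"""Illustrative sketch only: generating labeled virtual-scene variations
around a 3D model and feeding them to a training step. The variation
parameters and the train_step stub are assumptions, not the patented system."""

import random
from dataclasses import dataclass
from typing import Iterator


@dataclass
class SceneVariation:
    """One scene variation plus the label identifying the 3D model in it."""
    model_id: str
    lighting: float        # light intensity
    camera_yaw_deg: float  # camera orientation around the model
    background: str
    label: str


def generate_variations(model_id: str, count: int) -> Iterator[SceneVariation]:
    """Build a virtual scene around the model and vary its characteristics;
    every variation carries a label, so no manual annotation is needed."""
    backgrounds = ["warehouse", "office", "outdoor"]
    for _ in range(count):
        yield SceneVariation(
            model_id=model_id,
            lighting=random.uniform(0.2, 1.0),
            camera_yaw_deg=random.uniform(0.0, 360.0),
            background=random.choice(backgrounds),
            label=model_id,  # label generated together with the scene
        )


def train_step(variation: SceneVariation) -> None:
    """Placeholder for rendering the variation and updating a model on it."""
    print(f"training on {variation.label} "
          f"(yaw={variation.camera_yaw_deg:.0f}, bg={variation.background})")


if __name__ == "__main__":
    for v in generate_variations("forklift_v2", count=5):
        train_step(v)
```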
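The synthetic world interface entry describes sweeping sensor and platform design parameters through a simulation service under multiple environmental conditions, then searching the synthetic results for promising configurations. A small sketch of such a sweep, using an invented scoring function in place of the simulation service, is shown below:

```python
"""Illustrative sketch only: sweeping sensor-design parameters across
environmental conditions via a stand-in simulation, then searching the
synthetic results for promising configurations. The simulate() scoring
is invented for the example."""

from itertools import product


def simulate(focal_length_mm: float, pixel_pitch_um: float,
             fog_density: float) -> float:
    """Stand-in for a simulation service: returns a synthetic quality score
    for one sensor configuration under one environmental condition."""
    sharpness = focal_length_mm / pixel_pitch_um
    return sharpness * (1.0 - fog_density)


def sweep():
    """Systematic sweep over design parameters and environmental conditions."""
    focal_lengths = [8.0, 12.0, 16.0]   # mm
    pixel_pitches = [2.0, 3.0, 4.5]     # micrometers
    fog_levels = [0.0, 0.3, 0.6]        # environmental condition
    results = []
    for f, p, fog in product(focal_lengths, pixel_pitches, fog_levels):
        results.append({"focal_mm": f, "pitch_um": p, "fog": fog,
                        "score": simulate(f, p, fog)})
    return results


if __name__ == "__main__":
    results = sweep()
    # Search the synthetic experiment data for desirable design candidates.
    best = sorted(results, key=lambda r: r["score"], reverse=True)[:3]
    for r in best:
        print(r)
```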