Patents by Inventor Jonathan C. HANZELKA

Jonathan C. HANZELKA has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Several of the filings below belong to the same patent family and therefore share an abstract; short illustrative code sketches of the recurring techniques follow the listing.

  • Publication number: 20240078682
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: November 13, 2023
    Publication date: March 7, 2024
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Patent number: 11854211
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: December 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
  • Patent number: 11335008
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: May 17, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
  • Publication number: 20220148197
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 12, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Patent number: 11315300
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: April 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
  • Publication number: 20220092792
    Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: Ishani CHAKRABORTY, Jonathan C. HANZELKA, Lu YUAN, Pedro Urbina ESCOS, Thomas M. SOEMO
  • Patent number: 11030458
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 8, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
  • Publication number: 20200342654
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Application
    Filed: June 30, 2020
    Publication date: October 29, 2020
    Inventors: Michael J. EBSTYNE, Jonathan C. HANZELKA, Emanuel SHALEV, Trebor L. CONNELL, Pedro U. ESCOS
  • Patent number: 10740949
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
  • Publication number: 20200090398
    Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 19, 2020
    Inventors: Michael J. EBSTYNE, Jonathan C. HANZELKA, Emanuel SHALEV, Trebor L. CONNELL, Pedro U. ESCOS
  • Publication number: 20200089954
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Muhammad Zeeshan ZIA, Emanuel SHALEV, Jonathan C. HANZELKA, Harpreet S. SAWHNEY, Pedro U. ESCOS, Michael J. EBSTYNE
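
Illustrative code sketches

The filings above cover three recurring techniques. The sketches below are minimal illustrations under stated assumptions, not the patented implementations; every type, function, and field name in them (TagData, AnomalyAlert, preprocess_tags, and so on) is hypothetical.

The multi-object tracking filings (patents 11854211 and 11335008 and their related applications) preprocess simulated tag data before training: tag data for an object flagged by an anomaly alert (e.g., an occlusion alert) is located and modified, turning the original simulated data into preprocessed training data. A minimal sketch of that preprocessing step:

```python
# Illustrative sketch only; all names are hypothetical, not from the patent claims.
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class TagData:
    object_id: int
    bbox: tuple          # (x, y, w, h) in image coordinates
    visible: bool = True

@dataclass(frozen=True)
class AnomalyAlert:
    object_id: int
    kind: str            # e.g. "occlusion", "proximity", "motion"

def preprocess_tags(tags: List[TagData],
                    alerts: List[AnomalyAlert]) -> List[TagData]:
    """Locate tag data flagged by anomaly alerts and modify it, turning
    the original simulated data into preprocessed training data."""
    flagged = {a.object_id for a in alerts if a.kind == "occlusion"}
    preprocessed = []
    for tag in tags:
        if tag.object_id in flagged:
            # One plausible modification: mark a heavily occluded object
            # as not visible so the tracker is not trained on a bad box.
            tag = replace(tag, visible=False)
        preprocessed.append(tag)
    return preprocessed
```

The preprocessed tags would then be paired with the generated training images to train the multi-object tracking model.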
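
The multi-spectral rendering filings (patents 11315300 and 10740949 and their related applications) reuse a conventional renderer's three color channels for additional spectral bands: the same mesh is rendered once per banded material data set, and the passes are combined into a single multi-spectral image. A minimal NumPy sketch, with a toy flat renderer and a plain dictionary lookup standing in for the real rendering engine and material translator:

```python
# Illustrative sketch only; the renderer and material data are toy stand-ins.
import numpy as np

def render_pass(reflectance_rgb, height=4, width=4):
    """Toy stand-in for an RGB rendering engine: returns an H x W x 3
    image whose three color channels carry the three spectral bands
    encoded in the banded material (here, flat per-band reflectances)."""
    img = np.empty((height, width, 3), dtype=np.float32)
    img[...] = reflectance_rgb   # broadcast one reflectance per channel
    return img

def render_multispectral(material_translator, band_sets):
    """Reuse the renderer's color channels for additional spectral
    bands: one render pass per band set, combined along the channel axis."""
    passes = [render_pass(material_translator(bs)) for bs in band_sets]
    return np.concatenate(passes, axis=-1)

# Hypothetical material translator: maps a band-set name to the three
# per-band reflectances of one material.
MATERIAL_BANDS = {
    "visible":  (0.8, 0.6, 0.2),   # R, G, B reflectances
    "infrared": (0.9, 0.7, 0.5),   # three infrared sub-bands
}
cube = render_multispectral(MATERIAL_BANDS.get, ["visible", "infrared"])
print(cube.shape)  # (4, 4, 6): a six-band multi-spectral rendering
```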
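
Patent 11030458 and its corresponding application (20200089954) train a recognition model on labeled virtual scene variations generated around a digitized 3D model of the real-world object. A minimal sketch of the variation-generation step, with hypothetical scene characteristics (lighting, camera yaw, background):

```python
# Illustrative sketch only; the varied characteristics are hypothetical.
import random

def generate_scene_variations(model_id: str, n: int):
    """Build labeled virtual scene variations around one 3D model by
    varying scene characteristics."""
    variations = []
    for _ in range(n):
        variations.append({
            "model": model_id,                        # the digitized 3D asset
            "light_intensity": random.uniform(0.2, 1.0),
            "camera_yaw_deg": random.uniform(0.0, 360.0),
            "background": random.choice(["office", "warehouse", "outdoor"]),
            "label": model_id,                        # every variation carries a label
        })
    return variations

# Each variation would be rendered to an image and paired with its label
# before being fed to the recognition model.
training_set = generate_scene_variations("coffee_mug_v2", n=1000)
print(len(training_set))  # 1000 labeled variations
```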