Patents by Inventor Jonathan C. Hanzelka
Jonathan C. Hanzelka has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240078682
  Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
  Type: Application
  Filed: November 13, 2023
  Publication date: March 7, 2024
  Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
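The anomaly-driven tag preprocessing this abstract describes can be pictured with a short sketch. The Python below is a minimal, hypothetical illustration: the dataclass containers, field names, and the drop-the-tag policy for alerted objects are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TagData:
    object_id: int
    bbox: tuple                                      # (x, y, w, h) in pixels
    alerts: List[str] = field(default_factory=list)  # e.g. ["occlusion"]

@dataclass
class SimulatedFrame:
    image_path: str
    tags: List[TagData]

def preprocess(frames: List[SimulatedFrame],
               suppressed=("occlusion", "proximity", "motion")) -> List[SimulatedFrame]:
    """Locate tag data flagged by an anomaly alert and modify it
    (here: drop the tag) to produce preprocessed training data."""
    cleaned = []
    for frame in frames:
        kept = [tag for tag in frame.tags
                if not any(alert in suppressed for alert in tag.alerts)]
        cleaned.append(SimulatedFrame(frame.image_path, kept))
    return cleaned
```

A trainer would then consume the cleaned frames, e.g. a hypothetical train_tracker(preprocess(simulated_frames)) call, so that unreliable tags such as heavily occluded objects no longer skew the learned tracker.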
- Patent number: 11854211
  Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
  Type: Grant
  Filed: January 26, 2022
  Date of Patent: December 26, 2023
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
- Patent number: 11335008
  Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
  Type: Grant
  Filed: September 18, 2020
  Date of Patent: May 17, 2022
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
- Publication number: 20220148197
  Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
  Type: Application
  Filed: January 26, 2022
  Publication date: May 12, 2022
  Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
- Patent number: 11315300
  Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
  Type: Grant
  Filed: June 30, 2020
  Date of Patent: April 26, 2022
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
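The channel-reuse idea in this family lends itself to a short sketch. The Python below assumes a caller-supplied render function that fills the three color channels with whatever per-channel material values it is given, plus a toy material translator; both are hypothetical stand-ins for the patent's rendering engine and material service, not taken from it.

```python
import numpy as np

def translate_material(material_ref: str, band_set: str) -> dict:
    """Toy material translator: returns per-channel values for one spectral
    band set, so material data is managed apart from the asset itself."""
    table = {
        ("steel", "visible"):  {"R": 0.60, "G": 0.60, "B": 0.60},
        ("steel", "infrared"): {"R": 0.20, "G": 0.30, "B": 0.10},
    }
    return table[(material_ref, band_set)]

def render_multispectral(mesh, material_ref, band_sets, render_fn):
    """Render the same mesh once per band set, reusing the RGB channels for
    additional spectral bands, then stack the passes into one data cube."""
    passes = [render_fn(mesh, translate_material(material_ref, band))
              for band in band_sets]
    return np.concatenate(passes, axis=-1)  # H x W x (3 * len(band_sets))
```

With band_sets=["visible", "infrared"], two three-channel passes combine into a six-band cube that a synthetics service could then consume.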
- Publication number: 20220092792
  Abstract: Training a multi-object tracking model includes: generating a plurality of training images based at least on scene generation information, each training image comprising a plurality of objects to be tracked; generating, for each training image, original simulated data based at least on the scene generation information, the original simulated data comprising tag data for a first object; locating, within the original simulated data, tag data for the first object, based on at least an anomaly alert (e.g., occlusion alert, proximity alert, motion alert) associated with the first object in the first training image; based at least on locating the tag data for the first object, modifying at least a portion of the tag data for the first object from the original simulated data, thereby generating preprocessed training data from the original simulated data; and training a multi-object tracking model with the preprocessed training data to produce a trained multi-object tracker.
  Type: Application
  Filed: September 18, 2020
  Publication date: March 24, 2022
  Inventors: Ishani Chakraborty, Jonathan C. Hanzelka, Lu Yuan, Pedro Urbina Escos, Thomas M. Soemo
- Patent number: 11030458
  Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
  Type: Grant
  Filed: September 14, 2018
  Date of Patent: June 8, 2021
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
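As a rough illustration of the scene-variation step, the sketch below varies a few assumed characteristics (lighting, camera pose, background) around one 3D model and attaches the identifying label to each variation; the parameter names and value ranges are illustrative, not drawn from the patent.

```python
import random

def generate_scene_variations(model_id: str, n: int = 100):
    """Yield n virtual scene descriptions built around the same 3D model,
    each varying scene characteristics and carrying a label that
    identifies the model for supervised training."""
    for _ in range(n):
        yield {
            "label": model_id,  # identifies the 3D model in this variation
            "light_intensity": random.uniform(0.2, 1.0),
            "camera_azimuth_deg": random.uniform(0.0, 360.0),
            "background": random.choice(["warehouse", "office", "outdoor"]),
        }
```

Rendering each description and pairing the image with its label yields training examples without hand annotation, which is where the time and cost savings the abstract mentions come from.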
- Publication number: 20200342654
  Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
  Type: Application
  Filed: June 30, 2020
  Publication date: October 29, 2020
  Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
- Patent number: 10740949
  Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
  Type: Grant
  Filed: September 18, 2018
  Date of Patent: August 11, 2020
  Assignee: Microsoft Technology Licensing, LLC
  Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
- Publication number: 20200090398
  Abstract: Systems and methods are disclosed for leveraging rendering engines to perform multi-spectral rendering by reusing the color channels for additional spectral bands. A digital asset represented by a three dimensional (3D) mesh and a material reference pointer may be rendered using a first material spectral band data set and additionally rendered using a second material spectral band data set, and the results combined to create a multi-spectral rendering. The multi-spectral rendering may then be used as part of a synthetics service or operation. By abstracting the material properties, a material translator is able to return a banded material data set from among a plurality of spectral band sets, and asset material information may advantageously be managed apart from managing each asset individually.
  Type: Application
  Filed: September 18, 2018
  Publication date: March 19, 2020
  Inventors: Michael J. Ebstyne, Jonathan C. Hanzelka, Emanuel Shalev, Trebor L. Connell, Pedro U. Escos
- Publication number: 20200089954
  Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
  Type: Application
  Filed: September 14, 2018
  Publication date: March 19, 2020
  Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne