Patents by Inventor Ziguo Zhong

Ziguo Zhong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches of the camera radar fusion and image classification techniques described in the abstracts are provided after the listing.

  • Patent number: 11885872
    Abstract: A method for camera radar fusion includes receiving, by a processor, radar object detection data for an object and modeling, by the processor, a three dimensional (3D) physical space kinematic model, including updating 3D coordinates of the object to generate updated 3D coordinates of the object, in response to receiving the radar object detection data for the object. The method also includes transforming, by the processor, the updated 3D coordinates of the object to updated two dimensional (2D) coordinates of the object based on a 2D-3D calibrated mapping table, and modeling, by the processor, a 2D image plane kinematic model while modeling the 3D physical space kinematic model, where modeling the 2D image plane kinematic model includes updating coordinates of the object based on the updated 2D coordinates of the object.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: January 30, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Ziguo Zhong, Liu Hui-che
  • Publication number: 20210041555
    Abstract: A method for camera radar fusion includes receiving, by a processor, radar object detection data for an object and modeling, by the processor, a three dimensional (3D) physical space kinematic model, including updating 3D coordinates of the object to generate updated 3D coordinates of the object, in response to receiving the radar object detection data for the object. The method also includes transforming, by the processor, the updated 3D coordinates of the object to updated two dimensional (2D) coordinates of the object based on a 2D-3D calibrated mapping table, and modeling, by the processor, a 2D image plane kinematic model while modeling the 3D physical space kinematic model, where modeling the 2D image plane kinematic model includes updating coordinates of the object based on the updated 2D coordinates of the object.
    Type: Application
    Filed: October 27, 2020
    Publication date: February 11, 2021
    Inventors: Ziguo Zhong, Liu Hui-che
  • Patent number: 10852419
    Abstract: A method for camera radar fusion includes receiving, by a processor, radar object detection data for an object and modeling, by the processor, a three dimensional (3D) physical space kinematic model, including updating 3D coordinates of the object to generate updated 3D coordinates of the object, in response to receiving the radar object detection data for the object. The method also includes transforming, by the processor, the updated 3D coordinates of the object to updated two dimensional (2D) coordinates of the object based on a 2D-3D calibrated mapping table, and modeling, by the processor, a 2D image plane kinematic model while modeling the 3D physical space kinematic model, where modeling the 2D image plane kinematic model includes updating coordinates of the object based on the updated 2D coordinates of the object.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: December 1, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Ziguo Zhong, Liu Hui-che
  • Patent number: 10452960
    Abstract: An image classification system includes a convolutional neural network, a confidence predictor, and a fusion classifier. The convolutional neural network is configured to assign a plurality of probability values to each pixel of a first image of a scene and a second image of the scene. Each of the probability values corresponds to a different feature that the convolutional neural network is trained to identify. The confidence predictor is configured to assign a confidence value to each pixel of the first image and to each pixel of the second image. The confidence values correspond to the greatest of the probability values generated by the convolutional neural network for each pixel. The fusion classifier is configured to assign, to each pixel of the first image, a feature that corresponds to the higher of the confidence values assigned to that pixel in the first image and the second image.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: October 22, 2019
    Assignee: Texas Instruments Incorporated
    Inventors: Yingmao Li, Vikram VijayanBabu Appia, Ziguo Zhong, Tarek Aziz Lahlou
  • Publication number: 20190120955
    Abstract: A method for camera radar fusion includes receiving, by a processor, radar object detection data for an object and modeling, by the processor, a three dimensional (3D) physical space kinematic model, including updating 3D coordinates of the object to generate updated 3D coordinates of the object, in response to receiving the radar object detection data for the object. The method also includes transforming, by the processor, the updated 3D coordinates of the object to updated two dimensional (2D) coordinates of the object based on a 2D-3D calibrated mapping table, and modeling, by the processor, a 2D image plane kinematic model while modeling the 3D physical space kinematic model, where modeling the 2D image plane kinematic model includes updating coordinates of the object based on the updated 2D coordinates of the object.
    Type: Application
    Filed: February 8, 2018
    Publication date: April 25, 2019
    Inventors: Ziguo Zhong, Liu Hui-che
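
The camera radar fusion abstracts above describe maintaining a 3D physical-space kinematic model that is updated from radar detections, mapping the updated 3D coordinates into the image plane through a calibrated 2D-3D mapping, and running a 2D image-plane kinematic model in parallel. The sketch below is a minimal, hypothetical illustration of that data flow, not the patented implementation: it assumes constant-velocity state models, substitutes a simple pinhole projection for the calibrated 2D-3D mapping table, and all names (Track3D, Track2D, project_to_image, on_radar_detection) and the camera intrinsics are invented for illustration.

```python
import numpy as np

# Hypothetical camera intrinsics standing in for the patent's calibrated
# 2D-3D mapping table (assumption: a simple pinhole camera model).
FX, FY, CX, CY = 800.0, 800.0, 640.0, 360.0


def project_to_image(xyz):
    """Map 3D physical-space coordinates to 2D image-plane coordinates."""
    x, y, z = xyz
    return np.array([FX * x / z + CX, FY * y / z + CY])


class Track3D:
    """Constant-velocity kinematic model in 3D physical space."""

    def __init__(self, xyz):
        self.xyz = np.asarray(xyz, dtype=float)
        self.vel = np.zeros(3)

    def predict(self, dt):
        self.xyz = self.xyz + self.vel * dt

    def update(self, radar_xyz, gain=0.5):
        # Blend the radar detection into the 3D state (simple alpha filter).
        self.xyz = self.xyz + gain * (np.asarray(radar_xyz, dtype=float) - self.xyz)


class Track2D:
    """Constant-velocity kinematic model on the 2D image plane."""

    def __init__(self, uv):
        self.uv = np.asarray(uv, dtype=float)
        self.vel = np.zeros(2)

    def predict(self, dt):
        self.uv = self.uv + self.vel * dt

    def update(self, uv, gain=0.5):
        self.uv = self.uv + gain * (np.asarray(uv, dtype=float) - self.uv)


def on_radar_detection(track3d, track2d, radar_xyz, dt=0.05):
    """Update the 3D model from radar, project, then update the 2D model."""
    track3d.predict(dt)
    track3d.update(radar_xyz)           # updated 3D coordinates of the object
    uv = project_to_image(track3d.xyz)  # 3D -> 2D via the (assumed) mapping
    track2d.predict(dt)
    track2d.update(uv)                  # 2D image-plane model update
    return track3d.xyz, track2d.uv


if __name__ == "__main__":
    t3 = Track3D([2.0, 0.0, 20.0])
    t2 = Track2D(project_to_image(t3.xyz))
    xyz, uv = on_radar_detection(t3, t2, radar_xyz=[2.1, 0.0, 19.5])
    print("3D:", xyz, "2D:", uv)
```

In this sketch the two kinematic models remain separate after the projection step, mirroring the abstracts' description of the 2D image-plane model being updated from the projected coordinates while the 3D physical-space model continues to run.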
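
The abstract for patent 10452960 describes a per-pixel fusion of two classified images: a convolutional neural network produces class probabilities for every pixel, a confidence predictor takes the largest probability at each pixel as that pixel's confidence, and the fusion classifier keeps, at each pixel, the feature from whichever image has the higher confidence. The snippet below is a small sketch of that selection step only, assuming the per-pixel probability maps are already available as arrays; the function name fuse_classifications and the random inputs are illustrative, not part of the patent.

```python
import numpy as np


def fuse_classifications(probs_a, probs_b):
    """Fuse two per-pixel class-probability maps of shape (H, W, C).

    For each pixel, the confidence is the largest class probability
    (the role of the confidence predictor in the abstract); the fused
    label comes from whichever image has the higher confidence.
    """
    labels_a = probs_a.argmax(axis=-1)  # most likely feature per pixel, image A
    labels_b = probs_b.argmax(axis=-1)  # most likely feature per pixel, image B
    conf_a = probs_a.max(axis=-1)       # per-pixel confidence, image A
    conf_b = probs_b.max(axis=-1)       # per-pixel confidence, image B
    return np.where(conf_a >= conf_b, labels_a, labels_b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative stand-ins for CNN outputs over a 4x4 image with 3 classes.
    probs_a = rng.dirichlet(np.ones(3), size=(4, 4))
    probs_b = rng.dirichlet(np.ones(3), size=(4, 4))
    print(fuse_classifications(probs_a, probs_b))
```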