Patents by Inventor Koyel Banerjee

Koyel Banerjee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
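Illustrative code sketches of the techniques described in the abstracts follow the listing.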

  • Patent number: 11733386
    Abstract: Object detection in a scene is based on lidar data and radar data of the scene. The lidar data and the radar data are transformed to a common coordinate system. Radar point clusters are extracted from the radar data, and lidar point clusters are extracted from the lidar data, with each lidar point cluster associated with a target object. A target object's velocity is estimated from the movement of its lidar point cluster between consecutive lidar images. The estimated velocity is then compared with the velocity information of the radar point clusters to identify which radar and lidar point clusters correspond to the same target object.
    Type: Grant
    Filed: November 22, 2018
    Date of Patent: August 22, 2023
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Koyel Banerjee, Sumanth Nirmal Gavarraju, Mingkang He
  • Patent number: 11131753
    Abstract: A method for a vehicle obtains camera sensor data from a camera of the vehicle and lidar sensor data from a lidar sensor of the vehicle. The method determines information related to the motion of the vehicle and, based on that motion information, determines a combined image of the camera sensor data and the lidar sensor data.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: September 28, 2021
    Assignee: Bayerische Motoren Werke Aktiengesellschaft
    Inventors: Koyel Banerjee, Dominik Notz, Johannes Windelen
  • Publication number: 20200301013
    Abstract: Object detection in a scene is based on lidar data and radar data of the scene. The lidar data and the radar data are transformed to a common coordinate system. Radar point clusters are extracted from the radar data, and lidar point clusters are extracted from the lidar data, with each lidar point cluster associated with a target object. A target object's velocity is estimated from the movement of its lidar point cluster between consecutive lidar images. The estimated velocity is then compared with the velocity information of the radar point clusters to identify which radar and lidar point clusters correspond to the same target object.
    Type: Application
    Filed: November 22, 2018
    Publication date: September 24, 2020
    Inventors: Koyel Banerjee, Sumanth Nirmal Gavarraju, Mingkang He
  • Publication number: 20200174130
    Abstract: A method for a vehicle obtains camera sensor data from a camera of the vehicle and lidar sensor data from a lidar sensor of the vehicle. The method determines information related to the motion of the vehicle and, based on that motion information, determines a combined image of the camera sensor data and the lidar sensor data.
    Type: Application
    Filed: February 3, 2020
    Publication date: June 4, 2020
    Inventors: Koyel Banerjee, Dominik Notz, Johannes Windelen
  • Patent number: 10068140
    Abstract: A vehicle movement parameter, such as ego-speed, is estimated using real-time images captured by a single camera. The captured images may be analyzed by a pre-trained convolutional neural network to estimate vehicle movement based on monocular video data. The convolutional neural network may be pre-trained using filters from a synchrony autoencoder that were trained using unlabeled video data captured by the vehicle's camera while the vehicle was in motion. A parameter corresponding to the estimated vehicle movement may be output to the driver or to a driver assistance system for use in controlling the vehicle.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: September 4, 2018
    Assignees: Bayerische Motoren Werke Aktiengesellschaft, NAUTO, Inc.
    Inventors: Ludmila Levkova, Koyel Banerjee
  • Publication number: 20180157918
    Abstract: A vehicle movement parameter, such as ego-speed, is estimated using real-time images captured by a single camera. The captured images may be analyzed by a pre-trained convolutional neural network to estimate vehicle movement based on monocular video data. The convolutional neural network may be pre-trained using filters from a synchrony autoencoder that were trained using unlabeled video data captured by the vehicle's camera while the vehicle was in motion. A parameter corresponding to the estimated vehicle movement may be output to the driver or to a driver assistance system for use in controlling the vehicle.
    Type: Application
    Filed: December 2, 2016
    Publication date: June 7, 2018
    Inventors: Ludmila Levkova, Koyel Banerjee
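
The first sketch illustrates the velocity-based association step described in the abstracts of patent 11733386 and publication 20200301013. It is a minimal, hypothetical Python example, not the patented implementation: the cluster representation, the sampling interval, and the matching thresholds are assumptions made purely for illustration.

```python
# Illustrative sketch (not the patented implementation): estimate each lidar
# cluster's velocity from its centroid motion between consecutive scans, then
# match it to the radar cluster whose velocity and position agree best.
import numpy as np

def cluster_centroid(points):
    """Centroid of an (N, 3) array of points in the common coordinate frame."""
    return points.mean(axis=0)

def lidar_cluster_velocity(cluster_prev, cluster_curr, dt):
    """Velocity estimate from centroid displacement between consecutive scans."""
    return (cluster_centroid(cluster_curr) - cluster_centroid(cluster_prev)) / dt

def associate_clusters(lidar_velocities, lidar_positions,
                       radar_velocities, radar_positions,
                       max_vel_diff=2.0, max_pos_diff=3.0):
    """Greedy association of lidar and radar clusters by velocity/position agreement.

    Returns a list of (lidar_index, radar_index) pairs.
    """
    pairs = []
    used_radar = set()
    for i, (lv, lp) in enumerate(zip(lidar_velocities, lidar_positions)):
        best_j, best_cost = None, np.inf
        for j, (rv, rp) in enumerate(zip(radar_velocities, radar_positions)):
            if j in used_radar:
                continue
            vel_diff = np.linalg.norm(lv - rv)
            pos_diff = np.linalg.norm(lp - rp)
            if vel_diff > max_vel_diff or pos_diff > max_pos_diff:
                continue                      # disagreement too large to match
            cost = vel_diff + 0.5 * pos_diff  # weighting is an arbitrary choice
            if cost < best_cost:
                best_j, best_cost = j, cost
        if best_j is not None:
            pairs.append((i, best_j))
            used_radar.add(best_j)
    return pairs

# Tiny example with one lidar target and one radar detection.
prev = np.array([[10.0, 2.0, 0.0], [10.5, 2.2, 0.0]])
curr = prev + np.array([1.0, 0.0, 0.0])              # moved 1 m along x
v_lidar = [lidar_cluster_velocity(prev, curr, dt=0.1)]
p_lidar = [cluster_centroid(curr)]
v_radar = [np.array([9.8, 0.1, 0.0])]
p_radar = [np.array([10.9, 2.0, 0.0])]
print(associate_clusters(v_lidar, p_lidar, v_radar, p_radar))  # [(0, 0)]
```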
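
The next sketch illustrates the motion-compensated camera/lidar combination described for patent 11131753 and publication 20200174130. Again, this is a hypothetical sketch rather than the patented method: the constant-velocity ego-motion model, the planar yaw correction, and the pinhole projection are assumptions, not details taken from the patent.

```python
# Illustrative sketch: correct each lidar point for the vehicle's own motion
# during the lidar sweep, then project the corrected points into the camera
# image so the combined camera/lidar image stays aligned.
import numpy as np

def ego_motion_compensate(points, timestamps, t_image, ego_velocity, yaw_rate):
    """Shift each lidar point to where it would appear at the image timestamp.

    points:       (N, 3) lidar points in the vehicle frame
    timestamps:   (N,) capture time of each point
    t_image:      camera exposure time
    ego_velocity: (3,) vehicle velocity in the vehicle frame
    yaw_rate:     vehicle yaw rate in rad/s (simple planar rotation model)
    """
    dt = (t_image - timestamps)[:, None]      # time gap per point
    shifted = points + ego_velocity * dt      # translation due to ego motion
    angles = (yaw_rate * dt).ravel()          # per-point yaw correction
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    rotated = shifted.copy()
    rotated[:, 0] = cos_a * shifted[:, 0] - sin_a * shifted[:, 1]
    rotated[:, 1] = sin_a * shifted[:, 0] + cos_a * shifted[:, 1]
    return rotated

def project_to_image(points_vehicle, T_cam_from_vehicle, K):
    """Project compensated 3-D points into the image with a pinhole model.

    T_cam_from_vehicle: (4, 4) homogeneous camera-from-vehicle transform
    K:                  (3, 3) camera intrinsic matrix
    """
    homo = np.hstack([points_vehicle, np.ones((len(points_vehicle), 1))])
    cam = (T_cam_from_vehicle @ homo.T).T[:, :3]
    in_front = cam[:, 2] > 0.1                # keep points in front of the camera
    cam = cam[in_front]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, cam[:, 2]                      # pixel coordinates and depths
```

In practice the compensation would use the vehicle's measured odometry and a calibrated camera-lidar extrinsic transform; the constant-velocity model above only stands in for that motion information.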
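
The last sketch illustrates the monocular ego-speed estimation described for patent 10068140 and publication 20180157918: a convolutional network regresses ego-speed from consecutive single-camera frames, with its first convolution initialized from filters learned by a synchrony autoencoder on unlabeled driving video. The architecture, layer sizes, and placeholder filters below are assumptions for illustration, and PyTorch is used here without any claim that the patent relies on it.

```python
# Illustrative sketch (not the patented network): a small CNN over a pair of
# consecutive grayscale frames, with conv1 initialized from autoencoder filters.
import torch
import torch.nn as nn

class EgoSpeedNet(nn.Module):
    def __init__(self, pretrained_filters=None):
        super().__init__()
        # Input: two consecutive grayscale frames stacked on the channel axis.
        self.conv1 = nn.Conv2d(2, 32, kernel_size=7, stride=2, padding=3)
        if pretrained_filters is not None:
            # Filters assumed to come from a synchrony autoencoder trained on
            # unlabeled video captured while the vehicle was in motion.
            with torch.no_grad():
                self.conv1.weight.copy_(pretrained_filters)
        self.features = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),            # scalar ego-speed estimate
        )

    def forward(self, frame_pair):
        x = self.conv1(frame_pair)
        x = self.features(x)
        return self.regressor(x)

# Example: placeholder filters and a dummy 64x64 frame pair.
filters = torch.randn(32, 2, 7, 7)          # stand-in for learned filters
model = EgoSpeedNet(pretrained_filters=filters)
speed = model(torch.randn(1, 2, 64, 64))    # estimated ego-speed, shape (1, 1)
```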