Patents by Inventor Prasanna SIVAKUMAR

Prasanna SIVAKUMAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995761
    Abstract: A method of generating virtual sensor data of a virtual single-photon avalanche diode (SPAD) lidar sensor includes generating a two-dimensional (2D) lidar array having a plurality of cells. The method further includes interpolating image data from a virtual camera with the 2D lidar array to define auxiliary image data, generating a virtual ambient image based on red-channel (R-channel) data of the auxiliary image data, identifying a plurality of virtual echoes of the virtual SPAD lidar sensor based on the R-channel data and a defined photon threshold, defining a virtual point cloud indicative of virtual photon measurements of the virtual SPAD lidar sensor based on the plurality of virtual echoes, and outputting data indicative of the virtual ambient image, the virtual photon measurements, the virtual point cloud, or a combination thereof, as the virtual sensor data of the virtual SPAD lidar sensor.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: May 28, 2024
    Assignees: DENSO CORPORATION, Carnegie Mellon University
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Xinshuo Weng, Shawn Hunt, Yunze Man
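
A minimal NumPy sketch of the virtual SPAD lidar pipeline described in the entry above. The array sizes, nearest-neighbour sampling in place of interpolation, and the toy photon model are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def virtual_spad_outputs(camera_rgb, camera_depth, lidar_rows=64, lidar_cols=512,
                         photon_threshold=0.2):
    """camera_rgb: (H, W, 3) floats in [0, 1]; camera_depth: (H, W) metres."""
    H, W, _ = camera_rgb.shape
    # 1. Build the 2D lidar array by sampling the virtual camera grid at each
    #    cell centre (nearest-neighbour sampling keeps the example short).
    rows = np.linspace(0, H - 1, lidar_rows).astype(int)
    cols = np.linspace(0, W - 1, lidar_cols).astype(int)
    aux_image = camera_rgb[np.ix_(rows, cols)]           # auxiliary image data
    aux_depth = camera_depth[np.ix_(rows, cols)]
    # 2. Virtual ambient image from the R channel.
    ambient = aux_image[..., 0]
    # 3. Virtual echoes: cells whose R-channel value exceeds the photon threshold.
    echo_mask = ambient > photon_threshold
    photon_counts = np.round(ambient * 255).astype(int)  # toy photon measurement
    # 4. Virtual point cloud from the echo cells (spherical-to-Cartesian conversion
    #    omitted; each point is (row, col, depth) here for brevity).
    rr, cc = np.nonzero(echo_mask)
    point_cloud = np.stack([rr, cc, aux_depth[rr, cc]], axis=1)
    return ambient, photon_counts, point_cloud

if __name__ == "__main__":
    rgb = np.random.rand(480, 640, 3)
    depth = np.random.rand(480, 640) * 50.0
    ambient, photons, cloud = virtual_spad_outputs(rgb, depth)
    print(ambient.shape, photons.shape, cloud.shape)
```
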
  • Patent number: 11760412
    Abstract: A method includes obtaining steering data of a vehicle from one or more steering sensors, determining whether the vehicle is operating in a turning state or a non-turning state based on the steering data, performing a localization routine based on a first echo set of 3D data points from among a plurality of 3D data points when the vehicle is operating in the turning state, and performing the localization routine based on a second echo set of 3D data points from among the plurality of 3D data points when the vehicle is operating in the non-turning state, where a number of the 3D data points of the first echo set is greater than a number of the 3D data points of the second echo set.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: September 19, 2023
    Assignee: DENSO CORPORATION
    Inventors: Minglei Huang, Prasanna Sivakumar
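
A short Python sketch of the echo-selection logic described in the entry above: when the steering data indicates a turn, localization uses a larger echo set than when driving straight. The 5-degree threshold and the "first three echoes vs. first echo only" split are assumptions for illustration.

```python
import numpy as np

def select_localization_points(points, echo_index, steering_angle_deg,
                               turn_threshold_deg=5.0):
    """points: (N, 3) lidar returns; echo_index: (N,) echo ordinal (0 = first echo)."""
    turning = abs(steering_angle_deg) > turn_threshold_deg
    if turning:
        # First echo set (turning state): more 3D data points, here the first three echoes.
        mask = echo_index <= 2
    else:
        # Second echo set (non-turning state): fewer points, here the first echo only.
        mask = echo_index == 0
    return points[mask]  # fed into the localization routine (e.g. scan matching)
```
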
  • Publication number: 20230130588
    Abstract: A method includes generating a radar-based intensity map and a lidar-based intensity map and performing one or more augmentation routines on the radar-based intensity map and the lidar-based intensity map to generate a radar input and a lidar input. The method includes generating a plurality of teacher-based bounding boxes with a teacher neural network and a plurality of student-based bounding boxes with a student neural network based on the radar input and the lidar input. The method includes determining a loss value of the plurality of student-based bounding boxes based on the plurality of teacher-based bounding boxes and a plurality of ground truth bounding boxes, updating one or more weights of the student neural network based on the loss value, and updating one or more weights of the teacher neural network based on a moving average associated with the one or more weights of the student neural network.
    Type: Application
    Filed: June 6, 2022
    Publication date: April 27, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Shawn HUNT
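
The training loop implied by this abstract pairs a student loss (student boxes against ground truth and against teacher boxes) with an exponential-moving-average update of the teacher weights. A hedged PyTorch sketch, assuming the box tensors are already aligned one-to-one; the actual loss terms and box matching in the application are not reproduced here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Teacher weights become a moving average of the student weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def student_loss(student_boxes, teacher_boxes, gt_boxes):
    # Supervised term against ground-truth boxes plus a consistency term
    # against the teacher's boxes (illustrative choice of loss functions).
    supervised = F.smooth_l1_loss(student_boxes, gt_boxes)
    consistency = F.mse_loss(student_boxes, teacher_boxes.detach())
    return supervised + consistency

# Typical step: loss = student_loss(...); loss.backward(); optimizer.step();
# ema_update(teacher, student)
```
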
  • Publication number: 20230114731
    Abstract: A method of generating virtual sensor data of a virtual single-photon avalanche diode (SPAD) lidar sensor includes generating a two-dimensional (2D) lidar array having a plurality of cells. The method further includes interpolating image data from a virtual camera with the 2D lidar array to define auxiliary image data, generating a virtual ambient image based on red-channel (R-channel) data of the auxiliary image data, identifying a plurality of virtual echoes of the virtual SPAD lidar sensor based on the R-channel data and a defined photon threshold, defining a virtual point cloud indicative of virtual photon measurements of the virtual SPAD lidar sensor based on the plurality of virtual echoes, and outputting data indicative of the virtual ambient image, the virtual photon measurements, the virtual point cloud, or a combination thereof, as the virtual sensor data of the virtual SPAD lidar sensor.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Kris KITANI, Matthew O'TOOLE, Xinshuo WENG, Shawn HUNT
  • Publication number: 20230112664
    Abstract: A method includes generating a plurality of lidar inputs based on lidar data, where each lidar input from among the plurality of lidar inputs comprises an image-based portion and a geometric-based portion, and where each lidar input from among the plurality of lidar inputs defines a position coordinate of one or more objects. The method includes performing, for each lidar input from among the plurality of lidar inputs, a convolutional neural network (CNN) routine based on the image-based portion to generate one or more image-based outputs and assigning the plurality of lidar inputs to a plurality of echo groups based on the geometric-based portion. The method includes concatenating the one or more image-based outputs and the plurality of echo groups to generate a plurality of fused outputs and identifying the one or more objects based on the plurality of fused outputs.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Kris KITANI, Matthew Patrick O'TOOLE, Xinshuo WENG, Shawn HUNT
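
A toy PyTorch sketch of the two-branch idea in the entry above: a CNN consumes the image-based portion of each lidar input, the geometric-based portion is reduced to an echo-group assignment, and the two are concatenated before classification. Channel counts, the number of echo groups, and the classification head are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EchoAwareDetector(nn.Module):
    def __init__(self, image_channels=3, num_echo_groups=3, num_classes=4):
        super().__init__()
        # Tiny CNN standing in for the image-based branch.
        self.cnn = nn.Sequential(
            nn.Conv2d(image_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16 + num_echo_groups, num_classes)
        self.num_echo_groups = num_echo_groups

    def forward(self, image_portion, echo_index):
        # image_portion: (B, C, H, W); echo_index: (B,) integer echo ordinal.
        image_feat = self.cnn(image_portion)                    # image-based output
        echo_group = F.one_hot(echo_index, self.num_echo_groups).float()  # echo group
        fused = torch.cat([image_feat, echo_group], dim=1)      # fused output
        return self.head(fused)                                 # object identification

model = EchoAwareDetector()
logits = model(torch.randn(8, 3, 64, 512), torch.randint(0, 3, (8,)))
```
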
  • Publication number: 20230116386
    Abstract: A method includes obtaining steering data of a vehicle from one or more steering sensors, determining whether the vehicle is operating in a turning state or a non-turning state based on the steering data, performing a localization routine based on a first echo set of 3D data points from among a plurality of 3D data points when the vehicle is operating in the turning state, and performing the localization routine based on a second echo set of 3D data points from among the plurality of 3D data points when the vehicle is operating in the non-turning state, where a number of the 3D data points of the first echo set is greater than a number of the 3D data points of the second echo set.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Minglei HUANG, Prasanna SIVAKUMAR
  • Publication number: 20230115660
    Abstract: A method of calibrating a camera sensor and a SPAD LiDAR includes extracting identified features in each of a selected camera image and an ambient-intensity (A-I) image, generating a set of keypoints based on the identified features extracted for each of the images to provide a set of 2D camera keypoint locations and a set of 2D A-I keypoint locations, determining matched keypoints based on the set of 2D A-I keypoint locations and the set of 2D camera keypoint locations to provide a set of 2D A-I matched pixel locations and a set of 2D camera matched pixel locations, interpolating 3D point cloud data with the set of 2D A-I matched pixel locations to obtain a set of 3D LiDAR matched pixel locations, and determining and storing extrinsic parameters to transform the set of 3D LiDAR matched pixel locations with the set of 2D camera matched pixel locations.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Shawn HUNT
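
An OpenCV-based sketch of the calibration flow in the entry above, assuming a per-pixel 3D lookup for the ambient-intensity image and nearest-pixel interpolation; ORB features and solvePnP stand in for whatever feature extractor and solver the application actually uses.

```python
import cv2
import numpy as np

def calibrate_camera_to_spad(camera_gray, ambient_gray, lidar_points_per_pixel,
                             camera_matrix):
    """camera_gray / ambient_gray: uint8 images; lidar_points_per_pixel: (H, W, 3)
    3D lidar point behind each ambient-image pixel; camera_matrix: 3x3 intrinsics.
    Returns the extrinsic rotation and translation (needs at least 4 matches)."""
    orb = cv2.ORB_create()
    kp_cam, des_cam = orb.detectAndCompute(camera_gray, None)
    kp_ai, des_ai = orb.detectAndCompute(ambient_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ai, des_cam)
    ai_px = np.array([kp_ai[m.queryIdx].pt for m in matches])    # 2D A-I matched pixels
    cam_px = np.array([kp_cam[m.trainIdx].pt for m in matches])  # 2D camera matched pixels
    # "Interpolate" the 3D point cloud at the matched A-I pixel locations
    # (nearest-pixel lookup here for brevity).
    cols = np.clip(np.round(ai_px[:, 0]).astype(int), 0, lidar_points_per_pixel.shape[1] - 1)
    rows = np.clip(np.round(ai_px[:, 1]).astype(int), 0, lidar_points_per_pixel.shape[0] - 1)
    lidar_3d = lidar_points_per_pixel[rows, cols].astype(np.float64)
    ok, rvec, tvec = cv2.solvePnP(lidar_3d, cam_px.astype(np.float64), camera_matrix, None)
    return rvec, tvec  # extrinsic parameters to store
```
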
  • Publication number: 20220268938
    Abstract: In one embodiment, a method includes receiving sensor data. The sensor data is based on information from a first set of echo points and a second set of echo points. At least one echo point from the first set of echo points and one echo point from the second set of echo points originate from a single beam. The method includes generating a first set of feature maps based on the first set of echo points and a second set of feature maps based on the second set of echo points. The method includes predicting a bounding box for an object based on the first set of feature maps and the second set of feature maps.
    Type: Application
    Filed: February 24, 2021
    Publication date: August 25, 2022
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Yunze Man, Xinshuo Weng
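
A compact PyTorch sketch of per-echo feature extraction followed by box prediction, as described in the entry above. The shared PointNet-style encoder and the single 7-parameter box output (x, y, z, l, w, h, yaw) are illustrative simplifications.

```python
import torch
import torch.nn as nn

class TwoEchoBoxPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared point encoder: per-point MLP followed by max pooling.
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.box_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 7))

    def forward(self, first_echo_points, second_echo_points):
        # Each input: (B, N, 3) xyz points from one echo of the same beams.
        f1 = self.encoder(first_echo_points).max(dim=1).values   # first-echo feature map
        f2 = self.encoder(second_echo_points).max(dim=1).values  # second-echo feature map
        return self.box_head(torch.cat([f1, f2], dim=1))         # predicted bounding box

model = TwoEchoBoxPredictor()
box = model(torch.randn(4, 100, 3), torch.randn(4, 100, 3))  # shape (4, 7)
```
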
  • Publication number: 20220270327
    Abstract: Systems, methods, and other embodiments described herein relate to generating bounding box proposals. In one embodiment, a method includes generating blended 2-dimensional (2D) data based on 2D data and 3-dimensional (3D) data, and generating blended 3D data based on the 2D data and the 3D data. The method includes generating 2D features based on the 2D data and the blended 2D data, generating 3D features based on the 3D data and the blended 3D data, and generating the bounding box proposals based on the 2D features and the 3D features.
    Type: Application
    Filed: February 24, 2021
    Publication date: August 25, 2022
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Yunze Man, Xinshuo Weng
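
A NumPy sketch of one plausible reading of "blended 2D data" and "blended 3D data" in the entry above: 3D points are projected into the image so that depth can be splatted onto the 2D grid and image features can be appended to each point. The pinhole projection, the feature choices, and the assumption that points lie in front of the camera are illustrative; the proposal network itself is omitted.

```python
import numpy as np

def blend_2d_3d(image_feats, points, intrinsics):
    """image_feats: (H, W, C) 2D data; points: (N, 3) 3D data in the camera frame
    (z > 0); intrinsics: 3x3 camera matrix. Returns blended 2D and blended 3D arrays."""
    # Project 3D points onto the image plane.
    uvw = points @ intrinsics.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    H, W, _ = image_feats.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    # Blended 2D data: a depth channel splatted onto the image grid.
    depth_channel = np.zeros((H, W, 1), dtype=image_feats.dtype)
    depth_channel[v, u, 0] = points[:, 2]
    blended_2d = np.concatenate([image_feats, depth_channel], axis=-1)
    # Blended 3D data: each point augmented with the image features it projects onto.
    blended_3d = np.concatenate([points, image_feats[v, u]], axis=-1)
    return blended_2d, blended_3d
```
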
  • Patent number: 10761817
    Abstract: In certain embodiments, an instance-specific user interface may be facilitated via entity-associated application metadata. In some embodiments, access information associated with an entity may be provided to one or more servers via a first executable instance of a same user application during a launch of the first executable instance. Based on the access information, application metadata associated with the entity may be obtained via the first executable instance from among a set of application metadata during the launch of the first executable instance, where the application metadata indicates data fields that correspond to data accessible to the entity. Based on the application metadata, the data fields may be loaded for a user interface of the first executable instance during the launch of the first executable instance. One or more of the data fields may be presented via the user interface of the first executable instance.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: September 1, 2020
    Assignee: PERSHING LLC
    Inventors: FathimaFazlina Rahmathali, Akilla Duraiswami, Laxmi Narsimham Vedula, Sridhar Lakshmipathy, Prasanna Sivakumar
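
A minimal Python sketch of the launch-time flow described in the entry above, assuming a hypothetical HTTPS metadata endpoint; the URL, query parameter, and JSON shape are illustrative, not the actual API of the patented system.

```python
import json
from urllib import request

def launch_instance(entity_id: str, access_token: str,
                    metadata_url: str = "https://example.invalid/app-metadata"):
    # 1. Provide the entity's access information to the server during launch.
    req = request.Request(
        f"{metadata_url}?entity={entity_id}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with request.urlopen(req) as resp:
        # 2. Obtain the entity-associated application metadata, which lists the
        #    data fields this entity is allowed to see.
        app_metadata = json.load(resp)
    # 3. Load only those fields into the user interface of this instance.
    ui_fields = [field["name"] for field in app_metadata.get("data_fields", [])]
    return ui_fields
```
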
  • Publication number: 20200034124
    Abstract: In certain embodiments, an instance-specific user interface may be facilitated via entity-associated application metadata. In some embodiments, access information associated with an entity may be provided to one or more servers via a first executable instance of a same user application during a launch of the first executable instance. Based on the access information, application metadata associated with the entity may be obtained via the first executable instance from among a set of application metadata during the launch of the first executable instance, where the application metadata indicates data fields that correspond to data accessible to the entity. Based on the application metadata, the data fields may be loaded for a user interface of the first executable instance during the launch of the first executable instance. One or more of the data fields may be presented via the user interface of the first executable instance.
    Type: Application
    Filed: October 15, 2018
    Publication date: January 30, 2020
    Inventors: FathimaFazlina RAHMATHALI, Akilla DURAISWAMI, Laxmi Narsimham VEDULA, Sridhar LAKSHMIPATHY, Prasanna SIVAKUMAR