Patents by Inventor Ruchi Bhargava

Ruchi Bhargava has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960026
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
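
The label-propagation rule in the abstract above lends itself to a short illustration. Below is a minimal NumPy sketch of one plausible reading: LIDAR-derived boxes are kept as RADAR ground truth only when they contain enough RADAR detections. The function name, array layouts, and the threshold of five detections are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def propagate_lidar_labels(radar_points, lidar_boxes, min_detections=5):
    """Keep only LIDAR-derived top-down boxes that contain at least
    `min_detections` RADAR points; the rest are omitted, as the abstract
    describes. Points are (x, y) in a shared top-down frame; boxes are
    (x_min, y_min, x_max, y_max). Both layouts are assumptions."""
    kept = []
    for box in lidar_boxes:
        x_min, y_min, x_max, y_max = box
        inside = (
            (radar_points[:, 0] >= x_min) & (radar_points[:, 0] <= x_max)
            & (radar_points[:, 1] >= y_min) & (radar_points[:, 1] <= y_max)
        )
        if inside.sum() >= min_detections:
            kept.append(box)  # enough RADAR support: keep as ground truth
    return np.array(kept)

# Example: two labeled objects, only the first has enough RADAR returns.
radar = np.array([[1.0, 1.2], [1.1, 0.9], [1.3, 1.1],
                  [1.2, 1.0], [0.9, 1.1], [8.0, 8.0]])
boxes = np.array([[0.5, 0.5, 1.5, 1.5], [7.5, 7.5, 9.0, 9.0]])
print(propagate_lidar_labels(radar, boxes))  # only the first box survives
```

The filtering step matters because sparse RADAR returns inside a LIDAR label would otherwise teach the network to detect objects it cannot actually see in the RADAR data.
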
  • Publication number: 20240096102
    Abstract: Systems and methods are disclosed that relate to freespace detection using machine learning models. First data that may include object labels may be obtained from a first sensor and freespace may be identified using the first data and the object labels. The first data may be annotated to include freespace labels that correspond to freespace within an operational environment. Freespace annotated data may be generated by combining the one or more freespace labels with second data obtained from a second sensor, with the freespace annotated data corresponding to a viewable area in the operational environment. The viewable area may be determined by tracing one or more rays from the second sensor within the field of view of the second sensor relative to the first data. The freespace annotated data may be input into a machine learning model to train the machine learning model to detect freespace using the second data.
    Type: Application
    Filed: August 7, 2023
    Publication date: March 21, 2024
    Inventors: Alexander Popov, David Nister, Nikolai Smolyanskiy, Patrik Gebhardt, Ke Chen, Ryan Oldja, Hee Seok Lee, Shane Murray, Ruchi Bhargava, Tilman Wekel, Sangmin Oh
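
As a rough illustration of the ray-tracing step described above, the sketch below marks grid cells as viewable by casting rays outward from a sensor position until they hit an occupied cell. The occupancy-grid representation, ray count, and range are assumptions; the publication does not specify a data structure.

```python
import numpy as np

def viewable_mask(occupancy, sensor_rc, num_rays=360, max_range=50):
    """Trace rays from a sensor cell across a 2D grid; cells visited
    before the first occupied cell along each ray are marked viewable.
    Grid resolution, ray count, and range are assumptions."""
    h, w = occupancy.shape
    visible = np.zeros_like(occupancy, dtype=bool)
    r0, c0 = sensor_rc
    for theta in np.linspace(0, 2 * np.pi, num_rays, endpoint=False):
        dr, dc = np.sin(theta), np.cos(theta)
        for step in range(max_range):
            r, c = int(round(r0 + dr * step)), int(round(c0 + dc * step))
            if not (0 <= r < h and 0 <= c < w):
                break
            visible[r, c] = True
            if occupancy[r, c]:  # ray stops at the first obstacle
                break
    return visible

grid = np.zeros((20, 20), dtype=bool)
grid[10, 5:15] = True                      # a wall occluding part of the grid
mask = viewable_mask(grid, sensor_rc=(5, 10))
print(mask.sum(), "cells viewable")
```

Restricting the freespace labels to this viewable area keeps the second sensor's training targets honest: cells it could never observe are not labeled at all.
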
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
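
The two-stage, multi-view structure described above can be sketched as a toy PyTorch module: a perspective-view segmentation stage feeds a top-down stage with a class-segmentation head and an instance-geometry regression head. Channel counts, the six regression channels, and the identity stand-in for the view reprojection are all assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class MultiViewPerceptionDNN(nn.Module):
    """Two chained stages as in the abstract: stage one segments classes
    in the perspective view; its output is reprojected to a top-down view,
    where stage two segments classes and regresses instance geometry."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.perspective_seg = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )
        self.topdown_trunk = nn.Sequential(
            nn.Conv2d(num_classes, 16, 3, padding=1), nn.ReLU(),
        )
        self.topdown_seg = nn.Conv2d(16, num_classes, 1)  # class scores
        self.instance_reg = nn.Conv2d(16, 6, 1)  # e.g. offsets, size, yaw

    def forward(self, perspective_image, reproject):
        seg1 = self.perspective_seg(perspective_image)
        topdown = reproject(seg1)       # view transform supplied by caller
        feats = self.topdown_trunk(topdown)
        return self.topdown_seg(feats), self.instance_reg(feats)

model = MultiViewPerceptionDNN()
image = torch.randn(1, 3, 64, 64)
seg, inst = model(image, reproject=lambda x: x)  # identity stands in for reprojection
print(seg.shape, inst.shape)
```
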
  • Publication number: 20240061075
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 24, 2023
    Publication date: February 22, 2024
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
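
The preprocessing pipeline in the abstract above (accumulate, ego-motion-compensate, orthographically project) can be illustrated with a small NumPy sketch that warps several RADAR sweeps into the latest ego frame and bins them into a top-down grid. The SE(2) pose representation, grid size, and cell resolution are assumptions.

```python
import numpy as np

def accumulate_radar_bev(sweeps, poses, grid_size=128, cell_m=0.5):
    """Ego-motion-compensate RADAR sweeps into the latest ego frame and
    orthographically project them into a top-down grid. Each sweep is
    (N, 2) points in its own ego frame; each pose is a 3x3 SE(2) matrix
    mapping that frame into the latest one."""
    bev = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell_m / 2.0
    for points, pose in zip(sweeps, poses):
        homog = np.hstack([points, np.ones((len(points), 1))])
        compensated = (pose @ homog.T).T[:, :2]    # into the latest frame
        cols = ((compensated[:, 0] + half) / cell_m).astype(int)
        rows = ((compensated[:, 1] + half) / cell_m).astype(int)
        ok = (rows >= 0) & (rows < grid_size) & (cols >= 0) & (cols < grid_size)
        np.add.at(bev, (rows[ok], cols[ok]), 1.0)  # accumulate detections
    return bev

# Two sweeps; the older one is shifted by the ego's 2 m forward motion.
identity = np.eye(3)
shift = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
bev = accumulate_radar_bev(
    [np.random.rand(50, 2) * 10, np.random.rand(50, 2) * 10],
    [identity, shift])
print(bev.sum())
```

A grid like this is what the common trunk consumes before the class-confidence and instance-regression heads produce their per-cell outputs.
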
  • Patent number: 11885907
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: January 30, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20240029447
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 6, 2023
    Publication date: January 25, 2024
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11874663
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosks. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Grant
    Filed: August 26, 2022
    Date of Patent: January 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin-Hsien Shih, Tony Tam, Ruchi Bhargava
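
The AI Dispatcher described above is characterized only at a high level. As a loose sketch, the snippet below greedily assigns ride requests to shuttles using a pluggable simulated-cost function, standing in for the patent's AI simulations over operator-specified parameters. All names and the toy cost model are assumptions.

```python
def dispatch(requests, shuttles, simulate_cost):
    """Assign each ride request to the shuttle whose simulated cost is
    lowest; a greedy stand-in for a dispatcher that scores candidate
    assignments by simulation."""
    assignments = {}
    for request in requests:
        best = min(shuttles, key=lambda s: simulate_cost(s, request))
        assignments[request] = best
    return assignments

# Toy cost: Manhattan pickup distance on a grid of stations.
def pickup_distance(shuttle, request):
    (sx, sy), (rx, ry) = shuttle[1], request[1]
    return abs(sx - rx) + abs(sy - ry)

requests = [("ride-1", (0, 3)), ("ride-2", (5, 5))]
shuttles = [("shuttle-A", (0, 0)), ("shuttle-B", (6, 6))]
print(dispatch(requests, shuttles, pickup_distance))
```
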
  • Publication number: 20230366698
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: July 14, 2023
    Publication date: November 16, 2023
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
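
The final fusion step in the abstract above is not spelled out. One plausible reading, sketched below, combines per-modality localization results with an inverse-variance weighted mean; the (pose, variance) result format and the weighting scheme are assumptions, not the claimed method.

```python
import numpy as np

def fuse_localization(results):
    """Fuse per-modality localization results into one pose estimate via
    an inverse-variance weighted mean. Each result is (pose_xy, variance);
    lower-variance modalities pull the fused pose harder."""
    poses = np.array([p for p, _ in results])
    weights = np.array([1.0 / v for _, v in results])
    return (poses * weights[:, None]).sum(axis=0) / weights.sum()

# Each modality localizes against its own layer of the fused HD map.
camera = (np.array([10.2, 5.1]), 0.25)
radar = (np.array([10.0, 5.3]), 0.50)
lidar = (np.array([10.1, 5.0]), 0.10)
print(fuse_localization([camera, radar, lidar]))
```

Running one localizer per sensor modality and fusing afterward keeps a single degraded modality (say, camera at night) from corrupting the final result.
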
  • Publication number: 20230357076
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: May 2, 2023
    Publication date: November 9, 2023
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
  • Patent number: 11788861
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: October 17, 2023
    Assignee: NVIDIA Corporation
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
  • Patent number: 11713978
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: August 1, 2023
    Assignee: NVIDIA Corporation
    Inventors: Amir Akbarzadeh, David Nister, Ruchi Bhargava, Birgit Henke, Ivana Stojanovic, Yu Sheng
  • Patent number: 11698272
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: July 11, 2023
    Assignee: NVIDIA Corporation
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
  • Publication number: 20230204383
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: February 28, 2023
    Publication date: June 29, 2023
    Inventors: Amir Akbarzadeh, David Nister, Ruchi Bhargava, Birgit Henke, Ivana Stojanovic, Yu Sheng
  • Publication number: 20230049567
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20220413497
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosks. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Application
    Filed: August 26, 2022
    Publication date: December 29, 2022
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin-Hsien Shih, Tony Tam, Ruchi Bhargava
  • Patent number: 11531088
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20220341750
    Abstract: In various examples, health of a high definition (HD) map may be monitored to determine whether inaccuracies exist in one or more layers of the HD map. For example, as one or more vehicles rely on the HD map to traverse portions of an environment, disagreements between perception of the one or more vehicles, map layers of the HD map, and/or other disagreement types may be identified and aggregated. Where errors are identified that indicate a drop in health of the HD map, updated data may be crowdsourced from one or more vehicles corresponding to a location of disagreement within the HD map, and the updated data may be used to update, verify, and validate the HD map.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 27, 2022
    Inventors: Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral
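
The aggregation step described above can be illustrated with a short sketch that counts per-tile disagreement reports between vehicle perception and map layers, flagging tiles that cross a threshold for crowdsourced re-collection. The tile keying, report format, and threshold are assumptions.

```python
from collections import defaultdict

def flag_unhealthy_tiles(disagreements, threshold=3):
    """Aggregate disagreement reports keyed by map tile; tiles whose
    report count reaches `threshold` are flagged so updated data can be
    crowdsourced from vehicles passing through them."""
    counts = defaultdict(int)
    for tile_id, _layer in disagreements:
        counts[tile_id] += 1
    return [tile for tile, n in counts.items() if n >= threshold]

reports = [("tile-17", "lane"), ("tile-17", "lane"), ("tile-17", "sign"),
           ("tile-42", "lane")]
print(flag_unhealthy_tiles(reports))  # ['tile-17'] needs updated data
```
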
  • Patent number: 11474519
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosks. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: October 18, 2022
    Assignee: NVIDIA Corporation
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin-Hsien Shih, Tony Tam, Ruchi Bhargava