Patents by Inventor David Nistér

David Nistér has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135173
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range. (A code sketch of the safety-bounds step follows this entry.)
    Type: Application
    Filed: June 27, 2023
    Publication date: April 25, 2024
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
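
A minimal sketch of the post-processing safety-bounds operation described above, assuming the DNN output is a per-pixel distance map in meters and that the lower/upper bounds are supplied externally; the abstract does not specify how the bounds are derived, so scalar bounds are used here purely for illustration:

```python
import numpy as np

def apply_safety_bounds(pred_dist, lower, upper):
    """Clamp per-pixel DNN distance predictions (meters) into a
    safety-permissible range so downstream consumers never see an
    implausibly near or far estimate."""
    return np.clip(pred_dist, lower, upper)

# Hypothetical usage with scalar bounds:
pred = np.array([[4.8, 120.0],
                 [0.2, 35.5]])
safe = apply_safety_bounds(pred, lower=1.0, upper=100.0)
# -> [[4.8, 100.0], [1.0, 35.5]]
```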
  • Patent number: 11966228
    Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory and at least one object-occupied trajectory may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute a first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements a second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure. (A code sketch of the trajectory-intersection test follows this entry.)
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: David Nister, Hon-Leung Lee, Julia Ng, Yizhou Wang
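
A toy sketch of the intersection test between occupied trajectories, assuming each trajectory is sampled as time-aligned discs (a center plus a radius bounding the claimed set); the claimed-set geometry in the patent is more general than a disc:

```python
import math
from dataclasses import dataclass

@dataclass
class ClaimedSample:
    t: float  # timestamp of the sample
    x: float  # center of the claimed volume (2D here for brevity)
    y: float
    r: float  # radius bounding the claimed volume

def trajectories_intersect(ego, obj):
    """Return True if the vehicle-occupied and object-occupied
    trajectories claim overlapping space at any shared timestep
    (the two sample lists are assumed time-aligned)."""
    return any(math.hypot(a.x - b.x, a.y - b.y) < a.r + b.r
               for a, b in zip(ego, obj))
```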
  • Publication number: 20240127454
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like. (A code sketch of the signed-distance-function decoding follows this entry.)
    Type: Application
    Filed: December 20, 2023
    Publication date: April 18, 2024
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
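
A minimal sketch of decoding a signed-distance-function output into an instance segmentation mask, assuming the convention that negative values lie inside a contention area; the real post-processing likely adds filtering and classification on top of this:

```python
import numpy as np
from scipy import ndimage

def sdf_to_instances(sdf):
    """Decode a predicted signed distance function map (negative inside
    a contention area, positive outside) into an instance mask:
    connected interior regions get ids 1..num, background is 0."""
    interior = sdf < 0
    instance_mask, num_instances = ndimage.label(interior)
    return instance_mask, num_instances
```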
  • Patent number: 11960026
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data. (A code sketch of the label-filtering step follows this entry.)
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
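
A simplified sketch of the label filter: LIDAR-derived labels (reduced here to axis-aligned bird's-eye-view boxes, an assumption for brevity) are kept only when enough RADAR detections fall inside them:

```python
import numpy as np

def filter_lidar_labels(lidar_boxes, radar_points, min_detections=2):
    """Keep only LIDAR labels supported by at least min_detections
    RADAR returns. lidar_boxes: iterable of (x0, y0, x1, y1) in a
    shared BEV frame; radar_points: (N, 2) detections in that frame."""
    kept = []
    for (x0, y0, x1, y1) in lidar_boxes:
        inside = ((radar_points[:, 0] >= x0) & (radar_points[:, 0] <= x1) &
                  (radar_points[:, 1] >= y0) & (radar_points[:, 1] <= y1))
        if inside.sum() >= min_detections:
            kept.append((x0, y0, x1, y1))
    return kept
```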
  • Publication number: 20240116538
    Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile. (A code sketch of the candidate evaluation follows this entry.)
    Type: Application
    Filed: December 19, 2023
    Publication date: April 11, 2024
    Inventors: Zhenyi Zhang, Yizhou Wang, David Nister, Neda Cvijetic
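
A toy sketch of the candidate evaluation, assuming each candidate (a target gap paired with a longitudinal speed profile) has already been scored into per-criterion costs; the criterion names and weights are illustrative, not taken from the filing:

```python
def select_candidate(candidates, w_safety=2.0, w_comfort=1.0, w_progress=0.5):
    """Return the (gap, speed-profile) candidate with the lowest
    weighted total cost across the evaluation criteria."""
    def total_cost(c):
        return (w_safety * c["safety_cost"]
                + w_comfort * c["comfort_cost"]
                + w_progress * c["progress_cost"])
    return min(candidates, key=total_cost)

# Hypothetical usage with two candidates:
best = select_candidate([
    {"gap": "A", "safety_cost": 0.1, "comfort_cost": 0.7, "progress_cost": 0.2},
    {"gap": "B", "safety_cost": 0.4, "comfort_cost": 0.2, "progress_cost": 0.1},
])  # -> gap A (total 1.00 vs 1.05)
```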
  • Publication number: 20240111025
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle. (A code sketch of the label propagation follows this entry.)
    Type: Application
    Filed: December 6, 2023
    Publication date: April 4, 2024
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
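
A condensed sketch of the camera-to-LiDAR label propagation: LiDAR points are projected through assumed camera intrinsics K and extrinsics T_cam_lidar into a class segmentation mask produced in the image domain, and each point inherits the class at its projected pixel (occlusion handling is omitted here):

```python
import numpy as np

def propagate_image_labels(points, K, T_cam_lidar, seg_mask):
    """Transfer per-pixel class labels onto LiDAR points.
    points: (N, 3) xyz; K: 3x3 intrinsics; T_cam_lidar: 4x4 extrinsics;
    seg_mask: HxW integer class image. Returns (N,) labels, -1 = none."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ homo.T)[:3]                 # camera-frame points
    in_front = cam[2] > 0                            # only points ahead of camera
    uv = K @ cam[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)        # pixel coordinates
    h, w = seg_mask.shape
    in_img = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    labels = np.full(len(points), -1)
    idx = np.flatnonzero(in_front)[in_img]
    labels[idx] = seg_mask[uv[1, in_img], uv[0, in_img]]
    return labels
```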
  • Publication number: 20240101118
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections. (A code sketch of the output decoding follows this entry.)
    Type: Application
    Filed: December 12, 2023
    Publication date: March 28, 2024
    Inventors: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
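
A heavily simplified decoder sketch, assuming the DNN emits a coverage map plus per-pixel bounding-box regressions and that at most one intersection is visible (a real decoder would first cluster covered pixels into instances); all names here are illustrative:

```python
import numpy as np

def decode_intersection_box(coverage, box_map, threshold=0.5):
    """Average the per-pixel box regressions of sufficiently covered
    pixels, weighted by coverage. coverage: HxW in [0, 1];
    box_map: HxWx4 holding (x0, y0, x1, y1) per pixel."""
    mask = coverage > threshold
    if not mask.any():
        return None                        # no intersection detected
    weights = coverage[mask]
    return (box_map[mask] * weights[:, None]).sum(axis=0) / weights.sum()
```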
  • Patent number: 11941819
    Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the neural network(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training. (A code sketch of the minimum-aggregate-distance matching follows this entry.)
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: March 26, 2024
    Assignee: NVIDIA Corporation
    Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
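
A small sketch of the minimum-aggregate-distance criterion, assuming both quadrilaterals list their corners in a consistent winding so only the four cyclic orderings of the ground truth need to be tried:

```python
import math

def min_aggregate_distance(pred_corners, gt_corners):
    """Minimum summed corner-to-corner distance between a predicted
    skewed polygon and the ground-truth parking-spot corners, over the
    four cyclic orderings of the ground truth (so the starting corner
    of the annotation does not matter)."""
    return min(
        sum(math.dist(p, gt_corners[(i + shift) % 4])
            for i, p in enumerate(pred_corners))
        for shift in range(4)
    )

# An anchor box could then count as a positive training sample when
# this distance falls below a chosen threshold (hypothetical rule):
is_positive = min_aggregate_distance(
    [(0, 0), (10, 1), (11, 8), (1, 7)],
    [(1, 0), (10, 0), (11, 7), (0, 8)],
) < 8.0
```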
  • Publication number: 20240096102
    Abstract: Systems and methods are disclosed that relate to freespace detection using machine learning models. First data that may include object labels may be obtained from a first sensor and freespace may be identified using the first data and the object labels. The first data may be annotated to include freespace labels that correspond to freespace within an operational environment. Freespace annotated data may be generated by combining the one or more freespace labels with second data obtained from a second sensor, with the freespace annotated data corresponding to a viewable area in the operational environment. The viewable area may be determined by tracing one or more rays from the second sensor within the field of view of the second sensor relative to the first data. The freespace annotated data may be input into a machine learning model to train the machine learning model to detect freespace using the second data. (A code sketch of the ray-tracing visibility step follows this entry.)
    Type: Application
    Filed: August 7, 2023
    Publication date: March 21, 2024
    Inventors: Alexander Popov, David Nister, Nikolai Smolyanskiy, Patrik Gebhardt, Ke Chen, Ryan Oldja, Hee Seok Lee, Shane Murray, Ruchi Bhargava, Tilman Wekel, Sangmin Oh
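
A coarse 2D sketch of determining the viewable area by ray tracing from the second sensor, assuming the first data has been rasterized into an occupancy grid (True = obstacle); the filing works with richer sensor geometry than this:

```python
import numpy as np

def visible_cells(occupancy, sensor_rc, num_rays=360, max_range=None):
    """March rays outward from the sensor cell and mark every cell a
    ray crosses before its first obstacle hit as viewable."""
    h, w = occupancy.shape
    max_range = max_range or max(h, w)
    visible = np.zeros_like(occupancy, dtype=bool)
    r0, c0 = sensor_rc
    for ang in np.linspace(0.0, 2 * np.pi, num_rays, endpoint=False):
        dr, dc = np.sin(ang), np.cos(ang)
        for step in range(1, max_range):
            r, c = int(r0 + dr * step), int(c0 + dc * step)
            if not (0 <= r < h and 0 <= c < w):
                break
            visible[r, c] = True
            if occupancy[r, c]:            # ray stops at first obstacle
                break
    return visible
```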
  • Patent number: 11927502
    Abstract: In various examples, sensor data recorded in the real-world may be leveraged to generate transformed, additional, sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real-world.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Patent number: 11926346
    Abstract: In various examples, a yield scenario may be identified for a first vehicle. A wait element is received that encodes a first path for the first vehicle to traverse a yield area and a second path for a second vehicle to traverse the yield area. The first path is employed to determine a first trajectory in the yield area for the first vehicle based at least on a first location of the first vehicle at a time and the second path is employed to determine a second trajectory in the yield area for the second vehicle based at least on a second location of the second vehicle at the time. To operate the first vehicle in accordance with a wait state, it may be determined whether there is a conflict between the first trajectory and the second trajectory, where the wait state defines a yielding behavior for the first vehicle. (A code sketch of the trajectory-conflict test follows this entry.)
    Type: Grant
    Filed: August 5, 2021
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Fangkai Yang, David Nister, Yizhou Wang, Rotem Aviv, Julia Ng, Birgit Henke, Hon Leung Lee, Yunfei Shi
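
A minimal sketch of the conflict test between the two trajectories, assuming each trajectory has been reduced to the time interval during which it occupies the shared yield area (the interval extraction itself is omitted):

```python
def has_conflict(ego_interval, other_interval, margin_s=0.5):
    """True if the ego and the other vehicle occupy the yield area
    during overlapping time windows (within a safety margin, seconds);
    a conflict means the wait state's yielding behavior applies."""
    e_in, e_out = ego_interval
    o_in, o_out = other_interval
    return e_in < o_out + margin_s and o_in < e_out + margin_s

# Hypothetical usage: ego crosses 2.0-4.0 s, other crosses 3.5-6.0 s
assert has_conflict((2.0, 4.0), (3.5, 6.0))
```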
  • Patent number: 11928822
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
  • Patent number: 11921502
    Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications. (A code sketch of decoding the line predictions follows this entry.)
    Type: Grant
    Filed: January 6, 2023
    Date of Patent: March 5, 2024
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
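
A sketch of one way to decode the per-pixel predictions, assuming each pixel predicts the distance (in pixels) and angle (radians, image convention) to its nearest line pixel; accumulating votes at the pointed-to locations concentrates evidence on the lines while keeping full input resolution:

```python
import numpy as np

def vote_line_pixels(dist, angle):
    """Cast one vote per pixel at the predicted offset toward the
    nearest line pixel. dist, angle: HxW float arrays. Returns an
    HxW vote-count map that peaks along the lines."""
    h, w = dist.shape
    rows, cols = np.mgrid[0:h, 0:w]
    vr = (rows + dist * np.sin(angle)).round().astype(int).clip(0, h - 1)
    vc = (cols + dist * np.cos(angle)).round().astype(int).clip(0, w - 1)
    votes = np.zeros((h, w), dtype=np.int32)
    np.add.at(votes, (vr, vc), 1)          # scatter-add the votes
    return votes
```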
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20240062657
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle. (A code sketch of the time-to-collision computation follows this entry.)
    Type: Application
    Filed: October 20, 2023
    Publication date: February 22, 2024
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
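
A minimal sketch of the time-to-collision quantity under a constant-closing-speed assumption; the DNN in the filing predicts TTC directly, so this only illustrates what is being predicted:

```python
def time_to_collision(distance_m, closing_speed_mps, eps=1e-6):
    """Constant-velocity TTC: range to the object divided by the
    closing speed along the bearing. Returns None when the gap is
    opening or static (no finite TTC)."""
    if closing_speed_mps <= eps:
        return None
    return distance_m / closing_speed_mps

# 30 m ahead, closing at 10 m/s -> 3.0 s
assert time_to_collision(30.0, 10.0) == 3.0
```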
  • Publication number: 20240061075
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle. (A code sketch of the RADAR pre-processing follows this entry.)
    Type: Application
    Filed: October 24, 2023
    Publication date: February 22, 2024
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
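
A compact sketch of the input pre-processing chain, assuming accumulated RADAR detections are re-expressed in the current ego frame via a known 4x4 transform and then orthographically binned into a top-down grid; the grid size and cell resolution are illustrative:

```python
import numpy as np

def radar_to_bev(detections, T_now_from_then, grid=(256, 256), cell_m=0.5):
    """Ego-motion-compensate RADAR detections and project them into a
    bird's-eye-view count grid centered on the ego vehicle.
    detections: (N, 3) xyz captured earlier; T_now_from_then: 4x4."""
    homo = np.hstack([detections, np.ones((len(detections), 1))])
    xyz = (T_now_from_then @ homo.T)[:3].T           # current ego frame
    h, w = grid
    rows = (xyz[:, 0] / cell_m + h / 2).astype(int)  # forward -> rows
    cols = (xyz[:, 1] / cell_m + w / 2).astype(int)  # left -> cols
    bev = np.zeros(grid, dtype=np.float32)
    ok = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    np.add.at(bev, (rows[ok], cols[ok]), 1.0)        # detections per cell
    return bev
```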
  • Patent number: 11906660
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 20, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
  • Patent number: 11908203
    Abstract: LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. Improved techniques for processing the point cloud data that has been collected are provided. The improved techniques include mapping one or more point cloud data points into a depth map, the one or more point cloud data points being generated using one or more sensors; determining one or more mapped point cloud data points within a bounded area of the depth map; and detecting, using one or more processing units and for an environment surrounding a machine corresponding to the one or more sensors, a location of one or more entities based on the one or more mapped point cloud data points. (A code sketch of the depth-map projection follows this entry.)
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: February 20, 2024
    Assignee: NVIDIA Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
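
A short sketch of the first step, mapping point cloud points into a depth (range) map via a spherical projection; the vertical field-of-view limits are hypothetical sensor parameters:

```python
import numpy as np

def points_to_depth_map(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project (N, 3) xyz points into an h x w depth map where each
    cell keeps the range of the nearest point falling into it."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(rng, 1e-6))
    fu, fd = np.radians(fov_up_deg), np.radians(fov_down_deg)
    cols = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    rows = ((fu - pitch) / (fu - fd) * (h - 1)).clip(0, h - 1).astype(int)
    depth = np.full((h, w), np.inf, dtype=np.float32)
    np.minimum.at(depth, (rows, cols), rng)   # nearest return wins per cell
    return depth
```

Entities can then be detected by examining the mapped points that fall within a bounded area of this map, per the abstract.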
  • Publication number: 20240053749
    Abstract: To determine a path through a pose configuration space, trajectories of poses may be evaluated in parallel based at least on translating the trajectories along at least one axis of the pose configuration space (e.g., an orientation axis). A trajectory may include at least a portion of a turn having a fixed turn radius. Turns or turn portions that have the same turn radius and initial orientation can be translatively shifted along the orientation axis and processed in parallel, as they are translated copies of each other with different starting points. Trajectories may be evaluated based at least on processing variables used to evaluate reachability as bit vectors, with threads effectively performing large vector operations in synchronization. A parallel reduction pattern may be used to account for dependencies that may exist between sections of a trajectory for evaluating reachability, allowing for the sections to be processed in parallel. (A code sketch of the bit-vector parallelism follows this entry.)
    Type: Application
    Filed: October 25, 2023
    Publication date: February 15, 2024
    Inventors: David Nister, Yizhou Wang, Jaikrishna Soundararajan, Sachit Kadle
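
A toy 1D illustration of the bit-vector idea: cells along one axis become bits of a machine word, so a single shift-and-mask advances every translated copy of the same turn in lockstep. This is only a sketch of the parallelism trick, not the patent's full pose-configuration-space algorithm:

```python
def flood_reachable(start_bits, blocked_bits, num_cells):
    """Bit-parallel reachability along one axis: each set bit in
    start_bits is a trajectory start; each iteration advances all of
    them one cell (left shift) while masking out blocked cells."""
    reach = start_bits & ~blocked_bits
    for _ in range(num_cells):
        reach |= (reach << 1) & ~blocked_bits
    return reach & ((1 << num_cells) - 1)

# Starts at cells 0, 2, 5; cell 4 blocked; 8 cells total:
print(bin(flood_reachable(0b100101, 0b010000, 8)))
# -> 0b11101111 (cell 4 is never claimed; cells past it are reached
#    only via the start at cell 5)
```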
  • Patent number: 11897471
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
    Type: Grant
    Filed: January 31, 2023
    Date of Patent: February 13, 2024
    Assignee: NVIDIA Corporation
    Inventors: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham