Patents by Inventor David Nister

David Nister has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210233307
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Application
    Filed: April 12, 2021
    Publication date: July 29, 2021
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
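The column-reduction idea in the abstract above can be sketched as follows: once observations are rectified so that vertical landmarks align with image columns, a 2D detection-score mask collapses to a 1D lookup by taking a per-column maximum. This is an illustrative toy under assumed data layout, not the patented implementation; the function name and score format are invented for the sketch.

```python
# Hypothetical sketch: reduce a rectified 2D detection mask to a 1D lookup.
# After rectification, a vertical landmark occupies (roughly) one image
# column, so its presence can be summarized by a column-wise maximum score.

def mask_to_1d_lookup(mask):
    """mask: list of rows, each a list of per-pixel detection scores in [0, 1].
    Returns one score per column: the column-wise maximum."""
    cols = len(mask[0])
    return [max(row[c] for row in mask) for c in range(cols)]

# A 3x4 mask with a detected vertical landmark in column 2.
mask = [
    [0.0, 0.1, 0.9, 0.0],
    [0.0, 0.0, 0.8, 0.1],
    [0.1, 0.0, 0.7, 0.0],
]
lookup = mask_to_1d_lookup(mask)  # one score per column; column 2 peaks at 0.9
```

Matching two such 1D lookups across observations is then a far cheaper operation than matching full 2D images.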
  • Publication number: 20210197858
    Abstract: In various examples, sensor data may be collected using one or more sensors of an ego-vehicle to generate a representation of an environment surrounding the ego-vehicle. The representation may include lanes of the roadway and object locations within the lanes. The representation of the environment may be provided as input to a longitudinal speed profile identifier, which may project a plurality of longitudinal speed profile candidates onto a target lane. Each of the plurality of longitudinal speed profile candidates may be evaluated one or more times based on one or more sets of criteria. Using scores from the evaluation, a target gap and a particular longitudinal speed profile from the longitudinal speed profile candidates may be selected. Once the longitudinal speed profile for a target gap has been determined, the system may execute a lane change maneuver according to the longitudinal speed profile.
    Type: Application
    Filed: December 22, 2020
    Publication date: July 1, 2021
    Inventors: Zhenyi Zhang, Yizhou Wang, David Nister, Neda Cvijetic
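The candidate-evaluation step described above can be illustrated with a toy scorer: each speed-profile candidate receives a weighted cost over several criteria, and the lowest-cost (candidate, gap) pair wins. The criterion names and weights below are assumptions for illustration, not the patent's actual evaluation criteria.

```python
# Hypothetical sketch of candidate selection: each longitudinal speed-profile
# candidate is scored against weighted criteria; the lowest total cost wins.

def select_profile(candidates, criteria):
    """candidates: list of dicts with per-criterion raw costs (lower is better).
    criteria: dict of criterion name -> weight. Returns the best candidate."""
    def cost(c):
        return sum(w * c[name] for name, w in criteria.items())
    return min(candidates, key=cost)

candidates = [
    {"gap": "A", "comfort": 0.2, "safety_margin": 0.1, "time_to_merge": 0.5},
    {"gap": "B", "comfort": 0.4, "safety_margin": 0.6, "time_to_merge": 0.2},
]
weights = {"comfort": 1.0, "safety_margin": 2.0, "time_to_merge": 0.5}
best = select_profile(candidates, weights)  # gap "A": 0.2 + 0.2 + 0.25 = 0.65
```

Re-running the same scorer with a different set of criteria corresponds to the abstract's "evaluated one or more times based on one or more sets of criteria."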
  • Publication number: 20210201145
    Abstract: In various examples, a three-dimensional (3D) intersection structure may be predicted using a deep neural network (DNN) based on processing two-dimensional (2D) input data. To train the DNN to accurately predict 3D intersection structures from 2D inputs, the DNN may be trained using a first loss function that compares 3D outputs of the DNN—after conversion to 2D space—to 2D ground truth data and a second loss function that analyzes the 3D predictions of the DNN in view of one or more geometric constraints—e.g., geometric knowledge of intersections may be used to penalize predictions of the DNN that do not align with known intersection and/or road structure geometries. As such, live perception of an autonomous or semi-autonomous vehicle may be used by the DNN to detect 3D locations of intersection structures from 2D inputs.
    Type: Application
    Filed: December 9, 2020
    Publication date: July 1, 2021
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
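The two-part training objective above can be sketched numerically: one term compares the 3D prediction to 2D ground truth after projecting down to 2D, and a second term penalizes geometrically implausible 3D outputs. Both the "projection" (dropping the height coordinate) and the geometric prior (points near a ground plane, z ≈ 0) are deliberately simplified stand-ins, not the patent's actual losses.

```python
# Hypothetical sketch of the two-loss objective: a 2D reprojection term plus
# a geometric-constraint term, combined with an assumed weight.

def training_loss(pred_3d, gt_2d, geo_weight=0.1):
    """pred_3d: list of predicted (x, y, z); gt_2d: list of (x, y) labels."""
    # Loss 1: compare after dropping to 2D (a stand-in for camera projection).
    loss_2d = sum((x - gx) ** 2 + (y - gy) ** 2
                  for (x, y, _), (gx, gy) in zip(pred_3d, gt_2d))
    # Loss 2: geometric prior -- penalize height above an assumed ground plane.
    loss_geo = sum(z ** 2 for (_, _, z) in pred_3d)
    return loss_2d + geo_weight * loss_geo

loss = training_loss([(1.0, 2.0, 0.5), (3.0, 4.0, 0.0)],
                     [(1.0, 2.5), (3.0, 4.0)])
# loss_2d contributes 0.25 and the weighted geometric term 0.1 * 0.25
```

The point of the second term is exactly what the abstract states: 3D predictions that contradict known intersection geometry are penalized even though the ground truth itself is only 2D.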
  • Publication number: 20210165418
    Abstract: Systems and methods for performing visual odometry more rapidly. Pairs of representations from sensor data (such as images from one or more cameras) are selected, and features common to both representations of the pair are identified. Portions of bundle adjustment matrices that correspond to the pair are updated using the common features. These updates are maintained in register memory until all portions of the matrices that correspond to the pair are updated. By selecting only common features of one particular pair of representations, updated matrix values may be kept in registers. Accordingly, matrix updates for each common feature may be collectively saved with a single write of the registers to other memory. In this manner, fewer write operations are performed from register memory to other memory, thus reducing the time required to update bundle adjustment matrices and thus speeding the bundle adjustment process.
    Type: Application
    Filed: December 1, 2020
    Publication date: June 3, 2021
    Inventors: Michael Grabner, Jeremy Furtek, David Nister
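The memory-traffic argument above can be made concrete with a toy model: all per-feature updates for one image pair accumulate in a local variable (standing in for register memory) and are written back to the matrix storage (standing in for slower memory) exactly once. This is an illustrative analogy in Python, not the patented GPU implementation.

```python
# Hypothetical sketch of the accumulation idea: for one image pair, every
# update to that pair's block of the bundle-adjustment matrix stays in a
# local "register" and is written back to slow storage with a single write,
# instead of one write per common feature.

def update_pair_block(matrix, pair, feature_contribs):
    """matrix: dict mapping pair -> accumulated value (stands in for slow memory).
    feature_contribs: per-common-feature contributions for this pair."""
    acc = matrix.get(pair, 0.0)        # load once
    for contrib in feature_contribs:   # accumulation stays in fast storage
        acc += contrib
    matrix[pair] = acc                 # single write back
    return matrix

H = {}
update_pair_block(H, ("img0", "img1"), [0.5, 0.25, 0.25])
# three feature contributions, one write: H[("img0", "img1")] holds their sum
```

With N common features this replaces N read-modify-write round trips with one load and one store per pair, which is the speedup the abstract describes.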
  • Publication number: 20210156963
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
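The label-filtering step above (dropping LIDAR labels that contain too few RADAR detections) can be sketched directly: a label box survives only if at least a threshold number of RADAR points fall inside it, since a box with almost no RADAR returns makes a poor training label for a RADAR-input network. The 2D axis-aligned boxes and the threshold value are simplifications for illustration.

```python
# Hypothetical sketch of the ground-truth filter: keep a LIDAR-derived label
# box only if it contains at least `min_hits` RADAR detections.

def filter_labels(boxes, radar_points, min_hits=2):
    """boxes: list of (xmin, ymin, xmax, ymax); radar_points: list of (x, y)."""
    def hits(box):
        xmin, ymin, xmax, ymax = box
        return sum(1 for (x, y) in radar_points
                   if xmin <= x <= xmax and ymin <= y <= ymax)
    return [b for b in boxes if hits(b) >= min_hits]

boxes = [(0, 0, 2, 2), (5, 5, 6, 6)]
points = [(1, 1), (1.5, 0.5), (5.5, 5.5)]
kept = filter_labels(boxes, points)  # only the first box has enough RADAR hits
```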
  • Publication number: 20210156960
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
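The input-preparation pipeline named above (accumulate, ego-motion-compensate, orthographically project) can be sketched for a single accumulated detection: the detection is shifted by the ego vehicle's displacement since capture, then dropped into a top-down occupancy grid. Grid size, cell resolution, and the occupancy encoding are assumptions for illustration.

```python
# Hypothetical sketch: ego-motion-compensate accumulated RADAR detections and
# orthographically project them into a top-down (bird's-eye-view) grid.

def project_to_bev(detections, ego_dx, ego_dy, cell=1.0, size=8):
    """detections: list of (x, y) in the frame where they were captured;
    ego_dx/ego_dy: ego displacement since capture. Returns a size x size
    occupancy grid centered on the current ego position."""
    grid = [[0] * size for _ in range(size)]
    half = size // 2
    for x, y in detections:
        # Compensate ego motion, then drop height: orthographic top-down view.
        cx = int((x - ego_dx) / cell) + half
        cy = int((y - ego_dy) / cell) + half
        if 0 <= cx < size and 0 <= cy < size:
            grid[cy][cx] = 1
    return grid

bev = project_to_bev([(3.0, 1.0)], ego_dx=1.0, ego_dy=0.0)
# the single detection occupies exactly one cell of the grid
```

Accumulating several past scans this way densifies the otherwise sparse RADAR input before it is fed to the network trunk.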
  • Publication number: 20210150230
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: June 29, 2020
    Publication date: May 20, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 10997435
    Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: May 4, 2021
    Assignee: NVIDIA Corporation
    Inventors: Josh Abbott, Miguel Sainz Serra, Zhaoting Ye, David Nister
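The boundary-scoring idea above can be sketched with a small grid: the object fence is represented as a set of boundary cells, the lane mask labels each cell with a lane id, and the per-lane overlap ratio of the fence decides the lane assignment. The grid representation and data shapes are assumptions for illustration.

```python
# Hypothetical sketch of boundary scoring: count what fraction of an object
# fence's boundary cells fall into each lane of a lane mask.

def lane_overlap_ratios(fence_points, lane_mask):
    """fence_points: list of (row, col) boundary cells of the object fence.
    lane_mask: 2D grid where each cell holds a lane id (or None off-lane).
    Returns {lane_id: fraction of fence points inside that lane}."""
    counts = {}
    for r, c in fence_points:
        lane = lane_mask[r][c]
        if lane is not None:
            counts[lane] = counts.get(lane, 0) + 1
    total = len(fence_points)
    return {lane: n / total for lane, n in counts.items()}

lane_mask = [[1, 1, 2, 2],
             [1, 1, 2, 2]]
fence = [(0, 0), (0, 1), (0, 2), (1, 1)]
ratios = lane_overlap_ratios(fence, lane_mask)  # mostly in lane 1
assignment = max(ratios, key=ratios.get)
```

An object straddling a lane boundary thus gets a graded assignment (e.g., 75% lane 1, 25% lane 2) rather than a hard, possibly flickering one.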
  • Patent number: 10991155
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 27, 2021
    Assignee: NVIDIA Corporation
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
  • Publication number: 20210063200
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
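The final fusion step above (combining per-modality localization results into one) can be illustrated with the simplest possible scheme: a confidence-weighted average of position estimates. The actual patent does not disclose this particular formula here; the weighting scheme, modalities, and 2D positions are all assumptions for the sketch.

```python
# Hypothetical sketch: fuse per-sensor-modality localization results by a
# confidence-weighted average of their position estimates.

def fuse_localizations(results):
    """results: list of ((x, y) position estimate, confidence weight).
    Returns the fused (x, y) position."""
    total = sum(w for _, w in results)
    fx = sum(x * w for (x, _), w in results) / total
    fy = sum(y * w for (_, y), w in results) / total
    return (fx, fy)

fused = fuse_localizations([
    ((10.0, 4.0), 0.5),   # e.g. camera-based localization result
    ((12.0, 6.0), 0.25),  # e.g. RADAR-based localization result
    ((10.0, 4.0), 0.25),  # e.g. LiDAR-based localization result
])
```

Each individual result is itself produced by comparing live data from one modality against the map layer built from that same modality, as the abstract describes.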
  • Publication number: 20210063198
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
  • Publication number: 20210063578
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
  • Publication number: 20210063199
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: Amir Akbarzadeh, David Nister, Ruchi Bhargava, Birgit Henke, Ivana Stojanovic, Yu Sheng
  • Publication number: 20210042535
    Abstract: In various examples, object fences corresponding to objects detected by an ego-vehicle may be used to determine overlap of the object fences with lanes on a driving surface. A lane mask may be generated corresponding to the lanes on the driving surface, and the object fences may be compared to the lanes of the lane mask to determine the overlap. Where an object fence is located in more than one lane, a boundary scoring approach may be used to determine a ratio of overlap of the boundary fence, and thus the object, with each of the lanes. The overlap with one or more lanes for each object may be used to determine lane assignments for the objects, and the lane assignments may be used by the ego-vehicle to determine a path or trajectory along the driving surface.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 11, 2021
    Inventors: Josh Abbott, Miguel Sainz Serra, Zhaoting Ye, David Nister
  • Publication number: 20210026355
    Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: January 28, 2021
    Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
  • Publication number: 20200410254
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
    Type: Application
    Filed: June 24, 2020
    Publication date: December 31, 2020
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
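The decode step above can be sketched in its simplest form: a signed distance function (SDF) assigns each cell its signed distance to the region boundary, so thresholding it at zero recovers a binary segmentation mask of the region interior. The sign convention (positive inside) and the toy values are assumptions for illustration.

```python
# Hypothetical sketch of decoding a signed distance function into a mask:
# cells with non-negative signed distance lie inside the contention area.

def sdf_to_mask(sdf, threshold=0.0):
    """sdf: 2D grid of signed distances (positive inside the region).
    Returns a binary mask of the region interior."""
    return [[1 if v >= threshold else 0 for v in row] for row in sdf]

sdf = [[-2.0, -0.5, 0.5],
       [-1.0,  0.0, 1.5]]
mask = sdf_to_mask(sdf)  # interior cells are in the upper-right corner
```

Regressing a distance field instead of a raw mask gives the network a smooth target whose zero level set is the region boundary, which is one reason SDF outputs post-process cleanly into instance masks.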
  • Publication number: 20200357160
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map; fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map; and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
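The first step above (mapping 3D point cloud points into a 2D depth map) is commonly done by binning each point by azimuth and elevation and keeping the closest range per cell; the sketch below illustrates that scheme. The grid resolution, field of view, and min-range policy are assumptions, not the patent's parameters.

```python
# Hypothetical sketch: project 3D points into a 2D depth map by binning on
# azimuth (column) and elevation (row), keeping the minimum range per cell.
import math

def points_to_depth_map(points, rows=4, cols=8):
    """points: list of (x, y, z). Returns a rows x cols grid of minimum range
    (math.inf where empty), covering azimuth [-pi, pi) and elevation
    [-pi/4, pi/4)."""
    depth = [[math.inf] * cols for _ in range(rows)]
    for x, y, z in points:
        rng = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)                    # horizontal angle
        el = math.atan2(z, math.hypot(x, y))     # elevation angle
        c = min(int((az + math.pi) / (2 * math.pi) * cols), cols - 1)
        r = min(int((el + math.pi / 4) / (math.pi / 2) * rows), rows - 1)
        if 0 <= r < rows:
            depth[r][c] = min(depth[r][c], rng)  # closest return wins
    return depth

dm = points_to_depth_map([(10.0, 0.0, 0.0), (5.0, 0.0, 0.0)])
# both points fall in the same cell; the closer range is kept
```

Once the cloud is in this 2D form, the "bounded window" fetches in the abstract become cheap neighborhood reads on a dense grid rather than searches over an unordered point set.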
  • Publication number: 20200339109
    Abstract: In various examples, sensor data recorded in the real-world may be leveraged to generate transformed, additional, sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real-world.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 29, 2020
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Publication number: 20200341466
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 29, 2020
    Inventors: Trung Pham, Hang Dou, Berta Rodriguez Hervas, Minwoo Park, Neda Cvijetic, David Nister
  • Publication number: 20200334900
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden