Patents by Inventor David Nister

David Nister has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230049567
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
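    Note: the label-propagation step described in the abstract reduces to a simple filtering rule: keep a LIDAR-derived label only if enough RADAR detections fall inside it. The following is a minimal Python sketch of that idea under assumed representations (axis-aligned boxes, a point-in-box test, and a min_detections threshold); it is illustrative only and not the patented method.

        import numpy as np

        def propagate_labels(lidar_boxes, radar_points, min_detections=3):
            """Keep LIDAR-derived ground-truth boxes only when they contain at least
            `min_detections` RADAR detections (axis-aligned boxes for simplicity)."""
            # lidar_boxes:  (N, 6) array of [xmin, ymin, zmin, xmax, ymax, zmax]
            # radar_points: (M, 3) array of [x, y, z] RADAR detections
            kept = []
            for box in lidar_boxes:
                lo, hi = box[:3], box[3:]
                inside = np.all((radar_points >= lo) & (radar_points <= hi), axis=1)
                if inside.sum() >= min_detections:      # omit sparse labels
                    kept.append(box)
            return np.asarray(kept).reshape(-1, 6)

        # Example: only the first label contains at least 3 RADAR detections.
        boxes = np.array([[0, 0, 0, 2, 2, 2], [5, 5, 0, 6, 6, 2]], dtype=float)
        points = np.array([[1, 1, 1], [0.5, 1.5, 0.2], [1.8, 0.1, 1.9], [5.5, 5.5, 1.0]])
        print(propagate_labels(boxes, points))          # -> only the first box survives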
  • Patent number: 11579629
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: February 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
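    Note: among the quantities listed in the abstract, time-to-collision has a simple closed form once an object's range and closing speed toward the ego-vehicle are known. The sketch below shows only that relationship under a constant-velocity assumption of mine; it is not the sequential-DNN approach itself.

        def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
            """TTC under a constant-velocity assumption: range divided by closing speed.
            Returns infinity when the object is not closing on the ego-vehicle."""
            if closing_speed_mps <= 0.0:
                return float("inf")
            return range_m / closing_speed_mps

        # Example: a vehicle 30 m ahead, closing at 6 m/s -> TTC of 5 s.
        print(time_to_collision(30.0, 6.0))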
  • Publication number: 20230037767
    Abstract: In various examples, a yield scenario may be identified for a first vehicle. A wait element is received that encodes a first path for the first vehicle to traverse a yield area and a second path for a second vehicle to traverse the yield area. The first path is employed to determine a first trajectory in the yield area for the first vehicle based at least on a first location of the first vehicle at a time and the second path is employed to determine a second trajectory in the yield area for the second vehicle based at least on a second location of the second vehicle at the time. To operate the first vehicle in accordance with a wait state, it may be determined whether there is a conflict between the first trajectory and the second trajectory, where the wait state defines a yielding behavior for the first vehicle.
    Type: Application
    Filed: August 5, 2021
    Publication date: February 9, 2023
    Inventors: Fangkai Yang, David Nister, Yizhou Wang, Rotem Aviv, Julia Ng, Birgit Henke, Hon Leung Lee, Yunfei Shi
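    Note: the conflict test described in the abstract amounts to checking whether two time-parameterized trajectories come too close at the same time inside the yield area. A minimal sketch follows; the (t, x, y) sampling on a shared time grid and the distance threshold are illustrative assumptions, not the claimed method.

        import math

        def trajectories_conflict(traj_a, traj_b, min_gap_m=2.0):
            """traj_a, traj_b: lists of (t, x, y) samples on a common time grid.
            Returns True if the two vehicles are ever closer than min_gap_m
            at the same time step."""
            for (ta, xa, ya), (tb, xb, yb) in zip(traj_a, traj_b):
                assert abs(ta - tb) < 1e-6, "trajectories must share a time grid"
                if math.hypot(xa - xb, ya - yb) < min_gap_m:
                    return True
            return False

        # Example: both vehicles reach the point (4, 0) of the yield area at t = 2 s.
        a = [(t, 2.0 * t, 0.0) for t in range(5)]        # eastbound through the yield area
        b = [(t, 4.0, 4.0 - 2.0 * t) for t in range(5)]  # southbound, crossing at (4, 0)
        print(trajectories_conflict(a, b))               # -> True, so the first vehicle yields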
  • Publication number: 20230012645
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
    Type: Application
    Filed: September 26, 2022
    Publication date: January 19, 2023
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic
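    Note: the filtering behavior described in the abstract (dropping sensor-data instances the DNN flags as compromised) can be expressed as a gate on a per-frame usability score. The sketch below shows only that gating logic with invented field names (usability, blind_regions); it does not reproduce the region- and context-based DNN.

        def filter_frames(frames, usability_threshold=0.5):
            """Keep only frames whose predicted usability is high enough for the
            downstream operation; reject the rest."""
            usable, rejected = [], []
            for frame in frames:
                target = usable if frame["usability"] >= usability_threshold else rejected
                target.append(frame)
            return usable, rejected

        # Example: frame 1 is heavily compromised (e.g., sun glare) and is filtered out.
        frames = [
            {"id": 0, "usability": 0.91, "blind_regions": []},
            {"id": 1, "usability": 0.12, "blind_regions": [{"cause": "sun glare"}]},
        ]
        usable, rejected = filter_frames(frames)
        print([f["id"] for f in usable], [f["id"] for f in rejected])  # [0] [1]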
  • Publication number: 20230004164
    Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
    Type: Application
    Filed: September 8, 2022
    Publication date: January 5, 2023
    Inventors: Davide Marco Onofrio, Hae-Jong Seo, David Nister, Minwoo Park, Neda Cvijetic
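    Note: the "metricizing" step the abstract describes (measuring how much independent path-perception signals agree) can be illustrated with a simple spread statistic over several lane-center estimates sampled at shared longitudinal distances. The sketch below is one possible agreement measure under assumptions of mine (lateral-offset polylines on a common sampling grid); it is not the ensemble described in the patent.

        import numpy as np

        def path_agreement(paths):
            """paths: (K, N) array of K independent lateral-offset estimates (meters)
            at N shared longitudinal sample points. Returns the mean per-sample spread
            (max - min) between estimates; smaller values mean stronger agreement."""
            paths = np.asarray(paths, dtype=float)
            spread = paths.max(axis=0) - paths.min(axis=0)   # per-sample disagreement
            return float(spread.mean())

        # Three path sources that mostly agree, with one sample where they diverge.
        estimates = [
            [0.00, 0.05, 0.10, 0.60],   # e.g. map/lane-graph based
            [0.02, 0.04, 0.12, 0.10],   # e.g. camera lane detection
            [0.01, 0.06, 0.09, 0.15],   # e.g. free-space based
        ]
        print(path_agreement(estimates))   # larger values flag the diverging region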
  • Publication number: 20220413509
    Abstract: Systems and methods for performing visual odometry more rapidly. Pairs of representations from sensor data (such as images from one or more cameras) are selected, and features common to both representations of the pair are identified. Portions of bundle adjustment matrices that correspond to the pair are updated using the common features. These updates are maintained in register memory until all portions of the matrices that correspond to the pair are updated. By selecting only common features of one particular pair of representations, updated matrix values may be kept in registers. Accordingly, matrix updates for each common feature may be collectively saved with a single write of the registers to other memory. In this manner, fewer write operations are performed from register memory to other memory, thus reducing the time required to update bundle adjustment matrices and thus speeding the bundle adjustment process.
    Type: Application
    Filed: August 31, 2022
    Publication date: December 29, 2022
    Inventors: Michael Grabner, Jeremy Furtek, David Nister
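    Note: the memory-traffic idea in the abstract (accumulating all updates for one image pair locally and writing them out once) can be mimicked on a CPU by buffering the pair's contributions before touching the global bundle-adjustment matrices. The Python sketch below only illustrates that batching pattern; the patent concerns GPU register usage, and the camera-block layout and accumulation of J^T J contributions here are simplifications of mine.

        import numpy as np

        def accumulate_pair(H, cam_i, cam_j, jacobian_blocks, d=6):
            """Accumulate the contributions of all features common to one image pair
            into a local buffer, then write the global Hessian H once per pair
            instead of once per feature.

            jacobian_blocks: list of (J_i, J_j) pairs, each block of shape (2, d),
            one pair per feature observed in both images."""
            local = np.zeros((2 * d, 2 * d))
            for J_i, J_j in jacobian_blocks:                 # stays in the local buffer
                J = np.hstack([J_i, J_j])                    # (2, 2d) stacked Jacobian
                local += J.T @ J
            rows = list(range(cam_i * d, (cam_i + 1) * d)) + list(range(cam_j * d, (cam_j + 1) * d))
            H[np.ix_(rows, rows)] += local                   # single write-back per pair
            return H

        # Example: 3 cameras (6 parameters each), 2 features shared by cameras 0 and 2.
        H = np.zeros((18, 18))
        rng = np.random.default_rng(0)
        blocks = [(rng.standard_normal((2, 6)), rng.standard_normal((2, 6))) for _ in range(2)]
        accumulate_pair(H, cam_i=0, cam_j=2, jacobian_blocks=blocks)
        print(np.count_nonzero(H))   # only the blocks touching cameras 0 and 2 are filled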
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
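    Note: the multi-view chaining described in the abstract (a first stage operating on a perspective view and a second stage operating on a top-down view of the same scene) can be outlined as a two-step pipeline. The sketch below uses placeholder stages and a trivial view transform purely to show the data flow between them; the constituent DNNs, projections, and outputs of the patent are not reproduced.

        import numpy as np

        def perspective_stage(image):
            """Placeholder stage 1: per-pixel class labels in the perspective view."""
            h, w = image.shape[:2]
            return np.zeros((h, w), dtype=np.int64)

        def project_to_top_down(class_map, grid_shape=(64, 64)):
            """Placeholder view transform from image space to a top-down grid."""
            return np.zeros(grid_shape, dtype=np.int64)

        def top_down_stage(bev_features):
            """Placeholder stage 2: top-down class map plus simple instance boxes."""
            boxes = [(10, 10, 20, 18, "car")]           # (x0, y0, x1, y1, label)
            return bev_features, boxes

        def multi_view_pipeline(image):
            persp = perspective_stage(image)            # stage 1: perspective view
            bev = project_to_top_down(persp)            # re-project between stages
            bev_classes, boxes = top_down_stage(bev)    # stage 2: top-down view
            return bev_classes, boxes

        classes, boxes = multi_view_pipeline(np.zeros((480, 640, 3)))
        print(classes.shape, boxes)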
  • Publication number: 20220413497
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosk. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real-time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Application
    Filed: August 26, 2022
    Publication date: December 29, 2022
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin-Hsien Shih, Tony Tam, Ruchi Bhargava
  • Publication number: 20220404829
    Abstract: To determine a path through a pose configuration space, trajectories of poses may be evaluated in parallel based at least on translating the trajectories along at least one axis of the pose configuration space (e.g., an orientation axis). A trajectory may include at least a portion of a turn having a fixed turn radius. Turns or turn portions that have the same turn radius and initial orientation can be translatively shifted along the orientation axis and processed in parallel, as they are translated copies of each other, but with different starting points. Trajectories may be evaluated based at least on processing variables used to evaluate reachability as bit vectors, with threads effectively performing large vector operations in synchronization. A parallel reduction pattern may be used to account for dependencies that may exist between sections of a trajectory for evaluating reachability, allowing for the sections to be processed in parallel.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 22, 2022
    Inventors: David Nister, Yizhou Wang, Jaikrishna Soundararajan, Sachit Kadle
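    Note: the bit-vector idea in the abstract (evaluating many translated copies of the same turn at once because they differ only by a shift along the orientation axis) can be imitated with Python integers used as bit sets, where bit i stands for the copy that starts at orientation index i. The grid, turn, and free-space test below are illustrative assumptions, not the patented algorithm.

        def evaluate_turn_copies(free_cells, canonical_turn, num_orientations):
            """free_cells: set of collision-free (x, y, theta_index) cells.
            canonical_turn: (x, y, theta_index) cells visited by the turn starting at
            orientation index 0; the copy starting at index i visits the same (x, y)
            cells with theta shifted by i.
            Returns an int bit vector: bit i is set iff copy i stays in free space."""
            reachable = (1 << num_orientations) - 1          # every copy feasible at first
            for (x, y, theta) in canonical_turn:
                step_mask = 0
                for i in range(num_orientations):            # all copies tested in one pass
                    if (x, y, (theta + i) % num_orientations) in free_cells:
                        step_mask |= 1 << i
                reachable &= step_mask                        # a blocked cell kills that copy
            return reachable

        # Tiny example: 4 orientations, one cell blocked only for the copy starting at index 1.
        num_o = 4
        free = {(x, y, t) for x in range(3) for y in range(3) for t in range(num_o)}
        free.discard((1, 1, 2))
        turn = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
        print(bin(evaluate_turn_copies(free, turn, num_o)))   # -> 0b1101 (copy 1 blocked)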
  • Patent number: 11531088
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11520345
    Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path there through. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is less statistically likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
    Type: Grant
    Filed: February 4, 2020
    Date of Patent: December 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Davide Marco Onofrio, Hae-Jong Seo, David Nister, Minwoo Park, Neda Cvijetic
  • Publication number: 20220379917
    Abstract: A trajectory for an autonomous machine may be evaluated for safety based at least on determining whether the autonomous machine would be capable of occupying points of the trajectory in space-time while still being able to avoid a potential future collision with one or more objects in the environment through use of one or more safety procedures. To do so, a point of the trajectory may be evaluated for conflict based at least on a comparison between points in space-time that correspond to the autonomous machine executing the safety procedure(s) from the point and arrival times of the one or more objects to corresponding position(s) in the environment. A trajectory may be sampled and evaluated for conflicts at various points throughout the trajectory. Based on results of one or more evaluations, the trajectory may be scored, eliminated from consideration, or otherwise considered for control of the autonomous machine.
    Type: Application
    Filed: May 24, 2021
    Publication date: December 1, 2022
    Inventors: Birgit Henke, David Nister, Julia Ng
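    Note: the per-point check in the abstract compares when the ego-machine would still occupy positions while executing a safety procedure (for example, braking to a stop) against the earliest times other objects could arrive at those positions. The 1-D sketch below illustrates that comparison along a single path coordinate; the braking model, time step, and clearance margin are simplifying assumptions of mine, not the claimed procedure.

        def safety_procedure_conflict(s0, v0, t0, object_arrivals, decel=4.0, dt=0.1, margin=1.0):
            """Simulate braking from path position s0 (m) at speed v0 (m/s), starting at
            time t0 (s). object_arrivals maps a path position (m) to the earliest time (s)
            an object can arrive there. Returns True if some object can reach a position
            within `margin` meters of the ego before the ego has stopped clear of it."""
            t, s, v = t0, s0, v0
            while True:
                for pos, arrival in object_arrivals.items():
                    if abs(pos - s) <= margin and arrival <= t:
                        return True                      # the object can be there first
                if v <= 0.0:
                    return False                         # stopped without conflict
                v = max(0.0, v - decel * dt)             # simple constant-deceleration model
                s += v * dt
                t += dt

        # Ego at s = 0 doing 10 m/s; an object can reach s = 8 m as early as t = 0.5 s.
        print(safety_procedure_conflict(0.0, 10.0, 0.0, {8.0: 0.5}))   # -> True (conflict)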
  • Patent number: 11508049
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: November 22, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic
  • Publication number: 20220349725
    Abstract: In various examples, a high definition (HD) map is provided that includes a segmented data structure that allows for selective access to desired road segments and corresponding layers of map data. For example, the HD map may be segmented into a series of tiles that may correspond to a geographic region, and each of the tiles may include any number of road segments corresponding to portions of the geographic region. Each road segment may include a corresponding set of layers—which may include driving layers for use by the ego-machine and/or training layers for generating ground truth data—from the HD map that are associated with the road segment alone. As such, when traversing the environment, an ego-machine may determine one or more road segments within a tile corresponding to a current location, and may selectively download one or more layers for each of the one or more road segments.
    Type: Application
    Filed: April 21, 2022
    Publication date: November 3, 2022
    Inventors: Russell Chreptyk, Vaibhav Thukral, David Nister
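    Note: the segmented layout described in the abstract (tiles containing road segments, each segment carrying its own set of layers) maps naturally onto a nested lookup structure from which a client fetches only the layers it needs. The sketch below uses invented tile, segment, and layer names to illustrate selective access; it is not the HD-map format of the patent.

        from typing import Dict

        # tile id -> road segment id -> layer name -> layer payload (names are illustrative)
        HDMap = Dict[str, Dict[str, Dict[str, object]]]

        hd_map: HDMap = {
            "tile_12_34": {
                "seg_001": {"lane_graph": "...", "speed_limits": "...", "gt_labels": "..."},
                "seg_002": {"lane_graph": "...", "speed_limits": "..."},
            },
        }

        def fetch_layers(hd_map: HDMap, tile_id: str, segment_ids, wanted_layers):
            """Selectively fetch only the requested layers for the road segments the
            ego-machine currently occupies, leaving the rest of the map untouched."""
            out = {}
            for seg_id in segment_ids:
                layers = hd_map.get(tile_id, {}).get(seg_id, {})
                out[seg_id] = {name: layers[name] for name in wanted_layers if name in layers}
            return out

        # Ego is on seg_001 of tile_12_34 and needs only driving layers, not training layers.
        print(fetch_layers(hd_map, "tile_12_34", ["seg_001"], ["lane_graph", "speed_limits"]))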
  • Publication number: 20220351524
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
    Type: Application
    Filed: July 13, 2022
    Publication date: November 3, 2022
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic
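    Note: two steps from the abstract (thresholding a signed distance function into a region mask and mapping the resulting image-space pixels to world-space ground coordinates) can be illustrated compactly. The sketch below assumes a flat ground plane and a known image-to-ground homography H_img_to_ground; both are simplifications of mine rather than the patented pipeline.

        import numpy as np

        def sdf_to_mask(sdf, iso=0.0):
            """Assuming negative signed distance means 'inside' a contention area,
            thresholding the SDF yields a binary region mask."""
            return sdf < iso

        def mask_pixels_to_ground(mask, H_img_to_ground):
            """Map the (u, v) pixels of a mask to ground-plane (x, y) via a homography."""
            vs, us = np.nonzero(mask)
            pix = np.stack([us, vs, np.ones_like(us)], axis=0).astype(float)  # 3 x N
            ground = H_img_to_ground @ pix
            ground /= ground[2:3]                       # normalize homogeneous coordinates
            return ground[:2].T                         # N x 2 world-space points

        sdf = np.array([[ 1.0,  0.5, -0.2],
                        [ 0.8, -0.4, -0.9]])
        H = np.eye(3)                                   # identity homography for the demo
        mask = sdf_to_mask(sdf)
        print(mask)
        print(mask_pixels_to_ground(mask, H))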
  • Publication number: 20220340149
    Abstract: In various examples, an end-to-end perception evaluation system for autonomous and semi-autonomous machine applications may be implemented to evaluate how the accuracy or precision of outputs of machine learning models—such as deep neural networks (DNNs)—impact downstream performance of the machine when relied upon. For example, decisions computed by the system using ground truth output types may be compared to decisions computed by the system using the perception outputs. As a result, discrepancies in downstream decision making of the system between the ground truth information and the perception information may be evaluated to either aid in updating or retraining of the machine learning model or aid in generating more accurate or precise ground truth information.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 27, 2022
    Inventors: David Nister, Cheng-Chieh Yang, Yue Wu
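    Note: the evaluation idea in the abstract (running the same downstream decision logic once on ground-truth inputs and once on perception outputs, then counting where the decisions diverge) can be expressed as a comparison loop. The toy brake/keep decision rule and data format below are illustrative assumptions standing in for the downstream consumer, not the patented system.

        def decision(objects, brake_distance_m=15.0):
            """Toy downstream decision: brake if any object is closer than a threshold."""
            return "brake" if any(o["distance_m"] < brake_distance_m for o in objects) else "keep"

        def decision_discrepancy_rate(frames):
            """frames: list of dicts with 'ground_truth' and 'perception' object lists.
            Returns the fraction of frames whose two inputs lead to different decisions."""
            mismatches = sum(
                decision(f["ground_truth"]) != decision(f["perception"]) for f in frames
            )
            return mismatches / len(frames)

        frames = [
            {"ground_truth": [{"distance_m": 12.0}], "perception": [{"distance_m": 13.0}]},  # same decision
            {"ground_truth": [{"distance_m": 14.0}], "perception": [{"distance_m": 25.0}]},  # missed brake
        ]
        print(decision_discrepancy_rate(frames))   # -> 0.5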
  • Patent number: 11474519
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosk. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real-time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: October 18, 2022
    Assignee: NVIDIA Corporation
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin Shih, Tony Tam, Ruchi Bhargava
  • Publication number: 20220301186
    Abstract: In various examples, an ego-machine may analyze sensor data to identify and track features in the sensor data. Geometry of the tracked features may be used to analyze motion flow to determine whether the motion flow violates one or more geometrical constraints. As such, tracked features may be identified as dynamic features when the motion flow corresponding to the tracked features violates the one or more static constraints for static features. Tracked features that are determined to be dynamic features may be clustered together according to their location and feature track. Once features have been clustered together, the system may calculate a detection bounding shape for the clustered features. The bounding shape information may then be used by the ego-machine for path planning, control decisions, obstacle avoidance, and/or other operations.
    Type: Application
    Filed: February 23, 2022
    Publication date: September 22, 2022
    Inventors: David Nister, Soohwan Kim, Yue Wu, Minwoo Park, Cheng-Chieh Yang
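    Note: one common geometrical constraint that static features satisfy under pure ego-motion is the epipolar constraint; tracked points whose correspondences violate it can be flagged as dynamic and then grouped by proximity. The sketch below uses that specific constraint and a naive distance-based grouping as illustrative stand-ins; the constraints, clustering, and bounding-shape computation of the patent are not reproduced.

        import numpy as np

        def flag_dynamic(points_prev, points_curr, F, threshold=1.0):
            """Mark tracked features as dynamic when the epipolar residual |x2^T F x1|
            (homogeneous pixel coordinates, fundamental matrix F) exceeds a threshold."""
            ones = np.ones((len(points_prev), 1))
            x1 = np.hstack([points_prev, ones])             # N x 3
            x2 = np.hstack([points_curr, ones])
            residual = np.abs(np.sum(x2 * (x1 @ F.T), axis=1))
            return residual > threshold

        def bounding_boxes(points, max_gap=30.0):
            """Naive clustering: greedily group points closer than max_gap to an existing
            cluster member, then return one (xmin, ymin, xmax, ymax) box per cluster."""
            clusters = []
            for p in points:
                for c in clusters:
                    if min(np.hypot(*(p - q)) for q in c) < max_gap:
                        c.append(p)
                        break
                else:
                    clusters.append([p])
            return [(min(q[0] for q in c), min(q[1] for q in c),
                     max(q[0] for q in c), max(q[1] for q in c)) for c in clusters]

        # Demo F for a purely sideways ego-translation; the first track follows the
        # epipolar flow (static), the second moves against it (dynamic).
        F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
        prev = np.array([[100., 200.], [300., 150.]])
        curr = np.array([[110., 200.], [300., 170.]])
        dynamic = flag_dynamic(prev, curr, F)
        print(dynamic)                                      # -> [False  True]
        print(bounding_boxes(curr[dynamic]))                # box around the dynamic cluster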
  • Patent number: 11436837
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersection contention areas in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute outputs—such as signed distance functions—that may correspond to locations of boundaries delineating intersection contention areas. The signed distance functions may be decoded and/or post-processed to determine instance segmentation masks representing locations and classifications of intersection areas or regions. The locations of the intersection areas or regions may be generated in image-space and converted to world-space coordinates to aid an autonomous or semi-autonomous vehicle in navigating intersections according to rules of the road, traffic priority considerations, and/or the like.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Trung Pham, Berta Rodriguez Hervas, Minwoo Park, David Nister, Neda Cvijetic