Patents by Inventor David Nister

David Nister has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11435756
    Abstract: Systems and methods for performing visual odometry more rapidly. Pairs of representations from sensor data (such as images from one or more cameras) are selected, and features common to both representations of the pair are identified. Portions of bundle adjustment matrices that correspond to the pair are updated using the common features. These updates are maintained in register memory until all portions of the matrices that correspond to the pair are updated. By selecting only common features of one particular pair of representations, updated matrix values may be kept in registers. Accordingly, matrix updates for each common feature may be collectively saved with a single write of the registers to other memory. In this manner, fewer write operations are performed from register memory to other memory, reducing the time required to update the bundle adjustment matrices and thereby speeding the bundle adjustment process.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: September 6, 2022
    Assignee: NVIDIA Corporation
    Inventors: Michael Grabner, Jeremy Furtek, David Nister
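
The key idea in this entry is memory-access scheduling: all matrix contributions for one pair of views are summed locally before a single write-back. Below is a minimal NumPy sketch of that accumulate-then-write pattern; the local buffer stands in for the GPU register memory the patent describes, and the 6x6 pose-block shape, function names, and data layout are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def update_pair_blocks(H_global, pair_idx, common_features, jacobians):
    """Accumulate bundle-adjustment updates for one pair of views.

    All updates for the selected pair are summed into a local buffer
    (standing in for register memory) and flushed to the global matrix
    with a single write, instead of one write per feature.
    """
    local_block = np.zeros((6, 6))        # stand-in for register storage
    for feat in common_features:
        J = jacobians[(pair_idx, feat)]   # 2x6 reprojection Jacobian
        local_block += J.T @ J            # Gauss-Newton normal-equation term
    H_global[pair_idx] += local_block     # the single write to "other memory"

# Illustrative usage with random Jacobians for 8 shared features.
H = {0: np.zeros((6, 6))}
jacs = {(0, f): np.random.randn(2, 6) for f in range(8)}
update_pair_blocks(H, 0, range(8), jacs)
```
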
  • Publication number: 20220277193
    Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Tilman Wekel, Joachim Pehserl, Jacob Meyer, Jake Guza, Anton Mitrokhin, Richard Whitcomb, Marco Scoffier, David Nister, Grant Monroe
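
As a rough illustration of the synchronization step in the abstract above, here is one plausible way to align frames from multiple sensor streams into annotation scenes by nearest timestamp. The Frame type, the reference-stream convention, and the 50 ms tolerance are assumptions made for the sketch, not details from the filing.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Frame:
    sensor: str
    timestamp: float  # seconds
    data: Any

def align_frames(streams, tolerance=0.05):
    """Group frames from several sensor streams into annotation scenes.

    For each frame of the first (reference) stream, take the
    nearest-in-time frame from every other stream; keep the scene only
    if all sensors fall within the tolerance of the reference frame.
    """
    ref, *others = streams
    scenes = []
    for f in ref:
        scene = {f.sensor: f}
        matched = True
        for stream in others:
            nearest = min(stream, key=lambda g: abs(g.timestamp - f.timestamp))
            if abs(nearest.timestamp - f.timestamp) > tolerance:
                matched = False
                break
            scene[nearest.sensor] = nearest
        if matched:
            scenes.append(scene)
    return scenes
```
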
  • Publication number: 20220253706
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: April 18, 2022
    Publication date: August 11, 2022
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
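
The post-processing safety-bounds operation mentioned in the abstract above can be illustrated as a simple clamp. The sketch below assumes the bounds arrive as scalar or per-pixel limits computed elsewhere (for example, from camera geometry); apply_safety_bounds and its arguments are hypothetical names.

```python
import numpy as np

def apply_safety_bounds(predicted_depth, lower, upper):
    """Clamp DNN depth predictions into a safety-permissible range.

    lower/upper may be scalars or per-pixel arrays broadcastable to
    predicted_depth; anything outside the range is clipped so that
    downstream planning never consumes an out-of-range distance.
    """
    return np.clip(predicted_depth, lower, upper)
```
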
  • Publication number: 20220237925
    Abstract: LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping one or more point cloud data points into a depth map, the one or more point cloud data points being generated using one or more sensors; determining one or more mapped point cloud data points within a bounded area of the depth map; and detecting, using one or more processing units and for an environment surrounding a machine corresponding to the one or more sensors, a location of one or more entities based on the one or more mapped point cloud data points.
    Type: Application
    Filed: April 12, 2022
    Publication date: July 28, 2022
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
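
A common way to realize the point-cloud-to-depth-map step described above is a spherical (range-image) projection; the NumPy sketch below shows that, plus the bounded-window fetch. The image resolution and vertical field of view are illustrative assumptions, not values from the filing.

```python
import numpy as np

def points_to_depth_map(points, h=64, w=1024):
    """Project (N, 3) LiDAR points into a 2D depth map (range image).

    Azimuth maps to columns, elevation to rows, and each cell keeps
    the nearest range among the points that land in it.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                 # [-pi, pi]
    el = np.arcsin(z / np.maximum(r, 1e-9))               # elevation angle
    col = ((az + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    fov_up, fov_down = np.deg2rad(15.0), np.deg2rad(-25.0)  # assumed sensor FOV
    row = ((fov_up - el) / (fov_up - fov_down) * (h - 1)).astype(int)
    depth = np.full((h, w), np.inf)
    valid = (row >= 0) & (row < h)
    np.minimum.at(depth, (row[valid], col[valid]), r[valid])
    return depth

def window_points(depth, r0, c0, size):
    """Fetch the depth values inside a bounded window of the depth map."""
    return depth[r0:r0 + size, c0:c0 + size]
```
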
  • Publication number: 20220135075
    Abstract: In various examples, a safety decomposition architecture for autonomous machine applications is presented that uses two or more individual safety assessments to satisfy a higher safety integrity level (e.g., ASIL D). For example, a behavior planner may be used as a primary planning component, and a collision avoidance feature may be used as a diverse safety monitoring component—such that both may redundantly and independently prevent violation of safety goals. In addition, robustness of the system may be improved as single-point and systematic failures may be avoided due to the requirement that two independent failures—e.g., of the behavior planner component and the collision avoidance component—occur simultaneously to cause a violation of the safety goals.
    Type: Application
    Filed: October 8, 2021
    Publication date: May 5, 2022
    Inventors: Julia Ng, Sachin Pullaikudi Veedu, David Nister, Hanne Buur, Hans Jonas Nilsson, Hon Leung Lee, Yunfei Shi, Charles Jerome Vorbach, Jr.
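
The decomposition pattern itself is simple to illustrate: the planner's command is accepted only if an independently developed monitor verifies it, so a safety-goal violation requires both components to fail at once. A minimal sketch, with all names hypothetical:

```python
def safe_control(planner_command, monitor_approves, fallback_command):
    """Accept the behavior planner's command only when the independent
    collision-avoidance monitor approves it; otherwise execute a
    known-safe fallback. Violating a safety goal would require both
    the planner and the monitor to fail simultaneously.
    """
    if monitor_approves(planner_command):
        return planner_command
    return fallback_command
```
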
  • Publication number: 20220138568
    Abstract: In various examples, reinforcement learning is used to train at least one machine learning model (MLM) to control a vehicle by leveraging a deep neural network (DNN) trained on real-world data by using imitation learning to predict movements of one or more actors to define a world model. The DNN may be trained from real-world data to predict attributes of actors, such as locations and/or movements, from input attributes. The predictions may define states of the environment in a simulator, and one or more attributes of one or more actors input into the DNN may be modified or controlled by the simulator to simulate conditions that may otherwise be unfeasible. The MLM(s) may leverage predictions made by the DNN to predict one or more actions for the vehicle.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Lirui Wang, David Nister, Ollin Boer Bohan, Ishwar Kulkarni, Fangkai Yang, Julia Ng, Alperen Degirmenci, Ruchi Bhargava, Rotem Aviv
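
A skeleton of the training loop this abstract implies might look as follows. The world_model, policy, and simulator interfaces are invented for illustration; they stand in for the imitation-learned DNN, the reinforcement-learned MLM, and the simulator that perturbs actor attributes.

```python
def train_policy(world_model, policy, simulator, episodes=100):
    """Skeleton of world-model-in-the-loop reinforcement learning.

    world_model: DNN trained on real data (via imitation learning) to
        predict actor attributes such as locations and movements.
    simulator:   drives each episode and may modify the predicted
        attributes to create conditions hard to collect in the world.
    policy:      the machine learning model being trained with RL.
    """
    for _ in range(episodes):
        state = simulator.reset()
        done = False
        while not done:
            actors = world_model.predict(state)   # predicted actor attributes
            actors = simulator.perturb(actors)    # inject rare conditions
            action = policy.act(state, actors)
            state, reward, done = simulator.step(action)
            policy.update(state, action, reward)  # RL update
```
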
  • Patent number: 11308338
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 19, 2022
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Patent number: 11301697
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map; fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map; and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: April 12, 2022
    Assignee: NVIDIA Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Publication number: 20220108465
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions, each corresponding to a particular portion of the environment for which depth is predicted, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: November 9, 2021
    Publication date: April 7, 2022
    Inventors: Yilin Yang, Bala Siva Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
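
The final sampling step described in the abstract above, taking a depth map at the DNN's output resolution back to the input resolution, can be done with plain bilinear interpolation, as in this NumPy sketch (the patent's sampling algorithm may differ in detail):

```python
import numpy as np

def resample_depth(depth_out, in_h, in_w):
    """Bilinearly resample a depth map from the DNN's output
    resolution (depth_out.shape) to the input resolution (in_h, in_w).
    """
    out_h, out_w = depth_out.shape
    ys = np.linspace(0, out_h - 1, in_h)      # target row coordinates
    xs = np.linspace(0, out_w - 1, in_w)      # target column coordinates
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, out_h - 1)
    x1 = np.minimum(x0 + 1, out_w - 1)
    wy = (ys - y0)[:, None]                   # vertical interpolation weights
    wx = (xs - x0)[None, :]                   # horizontal interpolation weights
    top = depth_out[np.ix_(y0, x0)] * (1 - wx) + depth_out[np.ix_(y0, x1)] * wx
    bot = depth_out[np.ix_(y1, x0)] * (1 - wx) + depth_out[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```
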
  • Publication number: 20220092855
    Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
    Type: Application
    Filed: December 6, 2021
    Publication date: March 24, 2022
    Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
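
Two pieces of this entry lend themselves to short sketches: decoding skewed-polygon corners from anchor corners plus regressed displacements, and the minimum-aggregate-distance rule for choosing positive training samples. The array shapes and the cyclic-ordering search below are illustrative assumptions, not the patented matching rule.

```python
import numpy as np

def decode_corners(anchor_corners, displacements):
    """Recover skewed-polygon corners from anchor-box corners plus the
    per-corner displacement values a network would regress.
    Both inputs are (4, 2) arrays of (x, y) points.
    """
    return anchor_corners + displacements

def aggregate_corner_distance(pred_corners, gt_corners):
    """Minimum aggregate corner distance over cyclic orderings of the
    ground-truth corners; a small value suggests the anchor should be
    treated as a positive training sample.
    """
    best = np.inf
    for shift in range(4):
        rolled = np.roll(gt_corners, shift, axis=0)
        best = min(best, np.linalg.norm(pred_corners - rolled, axis=1).sum())
    return best
```
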
  • Publication number: 20220019893
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions, each corresponding to a particular portion of the environment for which depth is predicted, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: September 29, 2021
    Publication date: January 20, 2022
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Patent number: 11195331
    Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the CNN(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: December 7, 2021
    Assignee: NVIDIA Corporation
    Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
  • Patent number: 11182916
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
  • Publication number: 20210354729
    Abstract: In various examples, systems and methods are disclosed for weighting one or more optional paths based on obstacle avoidance or other safety considerations. In some embodiments, the obstacle avoidance considerations may be computed using a comparison of trajectories representative of safety procedures at present and future projected time steps of an ego-vehicle and other actors to ensure that each actor is capable of implementing their respective safety procedure while avoiding collisions at any point along the trajectory. This comparison may include filtering out a path(s) of an actor at a time step(s)—e.g., using a one-dimensional lookup—based on spatial relationships between the actor and the ego-vehicle at the time step(s). Where a particular path—or point along the path—does not satisfy a collision-free standard, the path may be penalized more negatively with respect to the obstacle avoidance considerations, or may be removed from consideration as a potential path.
    Type: Application
    Filed: May 18, 2020
    Publication date: November 18, 2021
    Inventors: Julia Ng, David Nister, Zhenyi Zhang, Yizhou Wang
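
The weighting scheme in the abstract above can be illustrated by a simple scorer that penalizes (or removes) candidate paths whose safety procedures are predicted to conflict. Here collision_free stands in for the trajectory comparison and one-dimensional lookup the abstract describes; all names are hypothetical.

```python
def score_paths(candidate_paths, collision_free, base_cost, penalty=1e3):
    """Weight candidate paths by obstacle-avoidance considerations.

    Paths whose safety-procedure trajectories are predicted to
    intersect another actor's are penalized more negatively; paths
    may instead be dropped from consideration entirely.
    """
    scored = []
    for path in candidate_paths:
        cost = base_cost(path)
        if not collision_free(path):   # safety procedures would conflict
            cost += penalty            # or `continue` to remove the path
        scored.append((cost, path))
    return sorted(scored, key=lambda pair: pair[0])
```
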
  • Patent number: 11170299
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions, each corresponding to a particular portion of the environment for which depth is predicted, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: November 9, 2021
    Assignee: NVIDIA Corporation
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
  • Publication number: 20210342608
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
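
The chained multi-view structure might be sketched as below: a perspective-view stage, a re-projection into a top-down view, then a second stage that segments classes and regresses instance geometry. The callables are illustrative stand-ins for the constituent DNNs, not their actual interfaces.

```python
def multi_view_perception(lidar_points, stage1, stage2, project_top_down):
    """Two chained stages over different views of the same 3D scene."""
    perspective_seg = stage1(lidar_points)                  # class scores in perspective view
    bev = project_top_down(lidar_points, perspective_seg)   # scatter into a top-down grid
    class_seg, instance_boxes = stage2(bev)                 # top-down seg + geometry regression
    return class_seg, instance_boxes
```
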
  • Publication number: 20210342609
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210325892
    Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory (corresponding to a first safety procedure of the vehicle) and at least one object-occupied trajectory (corresponding to a second safety procedure of the object) may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute the first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements the second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 21, 2021
    Inventors: David Nister, Hon-Leung Lee, Julia Ng, Yizhou Wang
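
A much-simplified version of the trajectory comparison above: approximate the occupied volume at each time step by a disc and flag any time step where the discs of the vehicle and object trajectories overlap. The shapes and the disc approximation are assumptions of the sketch.

```python
import numpy as np

def trajectories_intersect(ego_traj, obj_traj, r_ego, r_obj):
    """True if the two occupied trajectories claim overlapping space.

    ego_traj and obj_traj are (T, 2) arrays of center positions at
    matching time steps; occupied volumes are approximated by discs
    of radii r_ego and r_obj.
    """
    gaps = np.linalg.norm(ego_traj - obj_traj, axis=1)
    return bool(np.any(gaps < r_ego + r_obj))
```
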
  • Publication number: 20210295171
    Abstract: In various examples, past location information corresponding to actors in an environment and map information may be applied to a deep neural network (DNN)—such as a recurrent neural network (RNN)—trained to compute information corresponding to future trajectories of the actors. The output of the DNN may include, for each future time slice the DNN is trained to predict, a confidence map representing a confidence for each pixel that an actor is present and a vector field representing locations of actors in confidence maps for prior time slices. The vector fields may thus be used to track an object through confidence maps for each future time slice to generate a predicted future trajectory for each actor. The predicted future trajectories, in addition to tracked past trajectories, may be used to generate full trajectories for the actors that may aid an ego-vehicle in navigating the environment.
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventors: Alexey Kamenev, Nikolai Smolyanskiy, Ishwar Kulkarni, Ollin Boer Bohan, Fangkai Yang, Alperen Degirmenci, Ruchi Bhargava, Urs Muller, David Nister, Rotem Aviv
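
One way to use the predicted vector fields for tracking, per the abstract above, is to chain detections across future time slices: at each slice, keep the confident cell whose vector points back closest to the actor's previous position. Grid units, the confidence threshold, and the matching rule below are illustrative assumptions.

```python
import numpy as np

def track_actor(conf_maps, vector_fields, start, threshold=0.5):
    """Chain per-slice predictions into one future trajectory.

    conf_maps is (T, H, W) with per-cell presence confidences;
    vector_fields is (T, H, W, 2), where each cell's vector gives the
    actor's location in the prior time slice relative to that cell.
    start is the actor's (row, col) position in the current frame.
    """
    traj = [np.asarray(start, dtype=float)]
    T, H, W = conf_maps.shape
    cells = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)
    for t in range(T):
        back = cells + vector_fields[t]            # where each cell says the actor was
        err = np.linalg.norm(back - traj[-1], axis=-1)
        err[conf_maps[t] < threshold] = np.inf     # ignore low-confidence cells
        if not np.isfinite(err.min()):
            break                                  # actor no longer tracked
        idx = np.unravel_index(err.argmin(), err.shape)
        traj.append(np.asarray(idx, dtype=float))
    return np.stack(traj)
```
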
  • Publication number: 20210272304
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: September 2, 2021
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister