Patents by Inventor David Nister

David Nister has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200341466
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to generate potential paths for the vehicle to navigate an intersection in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as heat maps corresponding to key points associated with the intersection, vector fields corresponding to directionality, heading, and offsets with respect to lanes, intensity maps corresponding to widths of lanes, and/or classifications corresponding to line segments of the intersection. The outputs may be decoded and/or otherwise post-processed to reconstruct an intersection—or key points corresponding thereto—and to determine proposed or potential paths for navigating the vehicle through the intersection.
    Type: Application
    Filed: April 14, 2020
    Publication date: October 29, 2020
    Inventors: Trung Pham, Hang Dou, Berta Rodriguez Hervas, Minwoo Park, Neda Cvijetic, David Nister
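A minimal sketch of the key-point decoding step described in the entry above, assuming the DNN emits one (H, W) heat map per key-point class and that key points can be read off as thresholded local maxima; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def decode_keypoints(heatmap, threshold=0.5, window=3):
    """Extract key-point candidates as local maxima of an (H, W) heat map."""
    half = window // 2
    h, w = heatmap.shape
    points = []
    for r in range(half, h - half):
        for c in range(half, w - half):
            patch = heatmap[r - half:r + half + 1, c - half:c + half + 1]
            score = heatmap[r, c]
            # Keep pixels that clear the threshold and dominate their neighborhood.
            if score >= threshold and score == patch.max():
                points.append((r, c, float(score)))
    return points
```

The decoded points would then be assembled into path proposals using the predicted vector fields and classifications; that assembly is omitted here.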
  • Publication number: 20200339109
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 29, 2020
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
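A rough sketch of the test flow this abstract describes, under the assumption that a test profile reduces to a transform applied per recorded frame; simulate_low_light, run_test, and all parameters are hypothetical names for illustration only:

```python
import numpy as np

def simulate_low_light(frame, gain=0.25):
    """Example transform: darken a recorded camera frame to approximate night driving."""
    return (frame.astype(np.float32) * gain).clip(0, 255).astype(np.uint8)

def run_test(frames, transform, vehicle_function, criteria):
    """Apply a test-profile transform to recorded frames, run the vehicle
    function under test, and evaluate every pass/fail criterion."""
    outputs = [vehicle_function(transform(f)) for f in frames]
    return all(criterion(outputs) for criterion in criteria)
```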
  • Publication number: 20200334900
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
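The reduction to 1D lookups might look like the following sketch, assuming rectification has already aligned vertical landmarks with image columns and that object detection yields a binary (H, W) mask; names are illustrative:

```python
import numpy as np

def to_1d_lookup(detection_mask):
    """Collapse a rectified (H, W) landmark mask into one value per column.

    After rectification, a vertical landmark occupies a (nearly) fixed column,
    so its presence can be recorded in a width-length 1D lookup.
    """
    return detection_mask.max(axis=0)
```

Matching landmark observations across many such 1D lookups from a drive is then a far cheaper search than full 2D matching, which appears to be the point of the reduction.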
  • Publication number: 20200294310
    Abstract: A neural network may be used to determine corner points of a skewed polygon (e.g., as displacement values to anchor box corner points) that accurately delineate a region in an image that defines a parking space. Further, the neural network may output confidence values predicting likelihoods that corner points of an anchor box correspond to an entrance to the parking spot. The confidence values may be used to select a subset of the corner points of the anchor box and/or skewed polygon in order to define the entrance to the parking spot. A minimum aggregate distance between corner points of a skewed polygon predicted using the neural network(s) and ground truth corner points of a parking spot may be used to simplify a determination as to whether an anchor box should be used as a positive sample for training.
    Type: Application
    Filed: March 16, 2020
    Publication date: September 17, 2020
    Inventors: Dongwoo Lee, Junghyun Kwon, Sangmin Oh, Wenchao Zheng, Hae-Jong Seo, David Nister, Berta Rodriguez Hervas
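The minimum aggregate distance mentioned for training could be computed as below, assuming both the predicted and ground-truth parking polygons are quadrilaterals whose corners may be cyclically rotated relative to one another; this is a hypothetical reconstruction, not the patent's formula:

```python
import numpy as np

def min_aggregate_distance(pred, gt):
    """Smallest summed corner-to-corner distance over cyclic corner orderings.

    pred, gt: (4, 2) arrays of (x, y) corners of a quadrilateral.
    """
    best = float("inf")
    for shift in range(4):
        # Try each cyclic rotation of the ground-truth corners.
        d = np.linalg.norm(pred - np.roll(gt, shift, axis=0), axis=1).sum()
        best = min(best, d)
    return best
```

A small minimum aggregate distance to a ground-truth spot would mark the anchor box as a positive training sample.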
  • Publication number: 20200293796
    Abstract: In various examples, live perception from sensors of a vehicle may be leveraged to detect and classify intersections in an environment of a vehicle in real-time or near real-time. For example, a deep neural network (DNN) may be trained to compute various outputs—such as bounding box coordinates for intersections, intersection coverage maps corresponding to the bounding boxes, intersection attributes, distances to intersections, and/or distance coverage maps associated with the intersections. The outputs may be decoded and/or post-processed to determine final locations of, distances to, and/or attributes of the detected intersections.
    Type: Application
    Filed: March 10, 2020
    Publication date: September 17, 2020
    Inventors: Sayed Mehdi Sajjadi Mohammadabadi, Berta Rodriguez Hervas, Hang Dou, Igor Tryndin, David Nister, Minwoo Park, Neda Cvijetic, Junghyun Kwon, Trung Pham
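One plausible decoding of a coverage map into a final bounding box, assuming the DNN predicts a per-pixel coverage score alongside per-pixel box coordinates; a sketch, not the patent's decoder:

```python
import numpy as np

def decode_box(coverage, boxes, threshold=0.5):
    """Coverage-weighted average of per-pixel box predictions.

    coverage: (H, W) scores; boxes: (H, W, 4) per-pixel (x1, y1, x2, y2).
    Returns a single averaged box, or None if nothing clears the threshold.
    """
    mask = coverage > threshold
    if not mask.any():
        return None
    w = coverage[mask]
    return (boxes[mask] * w[:, None]).sum(axis=0) / w.sum()
```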
  • Publication number: 20200293064
    Abstract: In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
    Type: Application
    Filed: July 17, 2019
    Publication date: September 17, 2020
    Inventors: Yue Wu, Pekka Janis, Xin Tong, Cheng-Chieh Yang, Minwoo Park, David Nister
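For reference, the conventional constant-velocity definition of time-to-collision, the quantity the DNN's predictions stand in for; a trivial sketch with illustrative names:

```python
def time_to_collision(distance_m, closing_speed_mps):
    """Constant-velocity TTC: time until the gap to the object closes."""
    if closing_speed_mps <= 0.0:
        return float("inf")  # the object is not approaching
    return distance_m / closing_speed_mps
```

The patent's approach differs in that the sequential DNN regresses such quantities from image sequences directly, after being trained against cross-sensor ground truth.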
  • Patent number: 10776983
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion detection, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 15, 2020
    Assignee: Nvidia Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
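A common way to realize the 2D depth-map representation described here is a spherical (range-image) projection of the point cloud; the following sketch assumes a LiDAR-style vertical field of view, and all names and defaults are illustrative:

```python
import numpy as np

def pointcloud_to_depth_map(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud into an (h, w) range image.

    Rows index elevation, columns index azimuth; each cell keeps the nearest
    return's range in meters (inf where no point projects).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.arctan2(y, x)                              # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(rng, 1e-9))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    cols = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    rows = ((fov_up - elevation) / (fov_up - fov_down) * h).astype(int)
    depth = np.full((h, w), np.inf)
    valid = (rows >= 0) & (rows < h)
    # Keep the nearest return per cell.
    np.minimum.at(depth, (rows[valid], cols[valid]), rng[valid])
    return depth
```

Once in this form, image-style operations (gradients, region lookups) substitute for expensive 3D searches, which is presumably what enables the motion, obstacle, freespace, and landmark detection listed above.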
  • Patent number: 10769840
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, landmark detection, and other geometric space parameters.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 8, 2020
    Assignee: Nvidia Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
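A sketch of how a polar binning structure can accelerate nearest-neighbor queries, in the spirit of this abstract; the binning scheme and all names are assumptions for illustration:

```python
import numpy as np
from collections import defaultdict

def polar_bin(p, az_bins=360, el_bins=64, el_range=(-0.45, 0.10)):
    """Map a 3D point to (azimuth bin, elevation bin) indices."""
    az = np.arctan2(p[1], p[0])
    el = np.arcsin(p[2] / max(np.linalg.norm(p), 1e-9))
    a = int((az + np.pi) / (2 * np.pi) * az_bins) % az_bins
    lo, hi = el_range
    e = int(np.clip((el - lo) / (hi - lo) * el_bins, 0, el_bins - 1))
    return a, e

def nearest_neighbor(points, query, az_bins=360, el_bins=64):
    """Approximate NN search restricted to the query's polar bin and its 8 neighbors."""
    grid = defaultdict(list)
    for p in points:
        grid[polar_bin(p, az_bins, el_bins)].append(np.asarray(p))
    qa, qe = polar_bin(query, az_bins, el_bins)
    best, best_d = None, float("inf")
    for da in (-1, 0, 1):
        for de in (-1, 0, 1):
            key = ((qa + da) % az_bins, int(np.clip(qe + de, 0, el_bins - 1)))
            for p in grid[key]:
                d = float(np.linalg.norm(p - np.asarray(query)))
                if d < best_d:
                    best, best_d = p, d
    return best, best_d
```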
  • Publication number: 20200249684
    Abstract: In various examples, a path perception ensemble is used to produce a more accurate and reliable understanding of a driving surface and/or a path therethrough. For example, an analysis of a plurality of path perception inputs provides testability and reliability for accurate and redundant lane mapping and/or path planning in real-time or near real-time. By incorporating a plurality of separate path perception computations, a means of metricizing path perception correctness, quality, and reliability is provided by analyzing whether and how much the individual path perception signals agree or disagree. By implementing this approach—where individual path perception inputs fail in almost independent ways—a system failure is statistically less likely. In addition, with diversity and redundancy in path perception, comfortable lane keeping on high curvature roads, under severe road conditions, and/or at complex intersections, as well as autonomous negotiation of turns at intersections, may be enabled.
    Type: Application
    Filed: February 4, 2020
    Publication date: August 6, 2020
    Inventors: Davide Marco Onofrio, Hae-Jong Seo, David Nister, Minwoo Park, Neda Cvijetic
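One simple way to metricize agreement among redundant path perception inputs, as the abstract proposes; a sketch assuming each input yields lateral offsets sampled at shared longitudinal stations, with illustrative names:

```python
import numpy as np

def path_agreement(paths):
    """Mean cross-hypothesis spread of K path hypotheses.

    paths: (K, N) array of lateral offsets (meters) at N shared stations.
    Lower values mean the independent perception inputs agree.
    """
    return float(np.std(paths, axis=0).mean())
```

A planner could gate on this score, for example falling back to conservative behavior whenever the spread exceeds a calibrated threshold.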
  • Publication number: 20200218979
    Abstract: In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN.
    Type: Application
    Filed: March 9, 2020
    Publication date: July 9, 2020
    Inventors: Junghyun Kwon, Yilin Yang, Bala Siva Sashank Jujjavarapu, Zhaoting Ye, Sangmin Oh, Minwoo Park, David Nister
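The sampling algorithm mentioned at the end might, in its simplest form, be a nearest-neighbor index mapping from the DNN's output resolution back up to its input resolution; a sketch under assumed conventions:

```python
import numpy as np

def sample_to_input_resolution(depth_out, in_h, in_w):
    """Nearest-neighbor sampling of an (h, w) output-resolution depth map
    at the (in_h, in_w) input resolution."""
    h, w = depth_out.shape
    rows = np.arange(in_h) * h // in_h  # map each input row to an output row
    cols = np.arange(in_w) * w // in_w
    return depth_out[rows[:, None], cols[None, :]]
```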
  • Publication number: 20200210726
    Abstract: In various examples, a deep neural network (DNN) is trained to accurately predict, in deployment, distances to objects and obstacles using image data alone. The DNN may be trained with ground truth data that is generated and encoded using sensor data from any number of depth predicting sensors, such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. Camera adaptation algorithms may be used in various embodiments to adapt the DNN for use with image data generated by cameras with varying parameters—such as varying fields of view. In some examples, a post-processing safety bounds operation may be executed on the predictions of the DNN to ensure that the predictions fall within a safety-permissible range.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 2, 2020
    Inventors: Yilin Yang, Bala Siva Sashank Jujjavarapu, Pekka Janis, Zhaoting Ye, Sangmin Oh, Minwoo Park, Daniel Herrera Castro, Tommi Koivisto, David Nister
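The post-processing safety bounds operation could be as simple as clamping, as sketched below; the bounds themselves are placeholders, since the abstract does not state them:

```python
import numpy as np

def apply_safety_bounds(pred_distances_m, d_min=0.5, d_max=200.0):
    """Clamp DNN distance predictions into a safety-permissible range (meters)."""
    return np.clip(pred_distances_m, d_min, d_max)
```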
  • Publication number: 20200090322
    Abstract: In various examples, a deep neural network (DNN) is trained for sensor blindness detection using a region and context-based approach. Using sensor data, the DNN may compute locations of blindness or compromised visibility regions as well as associated blindness classifications and/or blindness attributes associated therewith. In addition, the DNN may predict a usability of each instance of the sensor data for performing one or more operations—such as operations associated with semi-autonomous or autonomous driving. The combination of the outputs of the DNN may be used to filter out instances of the sensor data—or to filter out portions of instances of the sensor data determined to be compromised—that may lead to inaccurate or ineffective results for the one or more operations of the system.
    Type: Application
    Filed: September 13, 2019
    Publication date: March 19, 2020
    Inventors: Hae-Jong Seo, Abhishek Bajpayee, David Nister, Minwoo Park, Neda Cvijetic
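Filtering on the DNN's per-instance usability prediction, as described, reduces to a thresholded selection; a trivial sketch with hypothetical names:

```python
def usable_instances(instances, usability_scores, min_usability=0.8):
    """Keep only the sensor-data instances the blindness DNN deems usable."""
    return [x for x, s in zip(instances, usability_scores) if s >= min_usability]
```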
  • Patent number: 10592778
    Abstract: A method of object detection includes receiving a first image taken from a first perspective by a first camera and receiving a second image taken from a second perspective, different from the first perspective, by a second camera. Each pixel in the first image is offset relative to a corresponding pixel in the second image by a predetermined offset distance resulting in offset first and second images. A particular pixel of the offset first image depicts a same object locus as a corresponding pixel in the offset second image only if the object locus is at an expected object-detection distance from the first and second cameras. The method includes recognizing that a target object is imaged by the particular pixel of the offset first image and the corresponding pixel of the offset second image.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: March 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
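A sketch of the pre-offset stereo comparison this abstract describes, assuming rectified grayscale images and a fixed disparity chosen so that agreement implies the expected object-detection distance; the names and tolerance are illustrative:

```python
import numpy as np

def match_at_expected_distance(left, right, disparity_px, tol=10.0):
    """Flag pixels where the two pre-offset images depict the same object locus.

    left, right: rectified (H, W) grayscale images; disparity_px: the fixed
    offset corresponding to the expected object-detection distance.
    """
    shifted = np.roll(right, disparity_px, axis=1)
    agree = np.abs(left.astype(np.float32) - shifted.astype(np.float32)) < tol
    agree[:, :disparity_px] = False  # columns wrapped by the roll are invalid
    return agree
```

Because only one disparity is tested, this check is far cheaper than full stereo matching: it answers whether something sits at one specific distance rather than recovering depth everywhere.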
  • Publication number: 20200026960
    Abstract: In various examples, systems and methods are disclosed that preserve rich spatial information from an input resolution of a machine learning model to regress on lines in an input image. The machine learning model may be trained to predict, in deployment, distances for each pixel of the input image at an input resolution to a line pixel determined to correspond to a line in the input image. The machine learning model may further be trained to predict angles and label classes of the line. An embedding algorithm may be used to train the machine learning model to predict clusters of line pixels that each correspond to a respective line in the input image. In deployment, the predictions of the machine learning model may be used as an aid for understanding the surrounding environment—e.g., for updating a world model—in a variety of autonomous machine applications.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 23, 2020
    Inventors: Minwoo Park, Xiaolin Lin, Hae-Jong Seo, David Nister, Neda Cvijetic
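One way to picture the per-pixel line regression: every pixel predicts its distance (and angle) to the nearest line, and kept pixels vote for sub-pixel line points; a hypothetical sketch, not the patent's decoder:

```python
import numpy as np

def vote_line_points(dist_map, angle_map, max_dist=20.0):
    """Convert per-pixel (distance, angle) regressions into line-point votes.

    dist_map, angle_map: (H, W) arrays; each kept pixel casts a vote at the
    location offset by its predicted distance along its predicted angle.
    """
    h, w = dist_map.shape
    rows, cols = np.mgrid[0:h, 0:w]
    keep = dist_map <= max_dist
    r = rows[keep] + dist_map[keep] * np.sin(angle_map[keep])
    c = cols[keep] + dist_map[keep] * np.cos(angle_map[keep])
    return np.stack([r, c], axis=1)  # (K, 2) sub-pixel line-point estimates
```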
  • Publication number: 20190384304
    Abstract: In various examples, a deep learning solution for path detection is implemented to generate a more abstract definition of a drivable path without reliance on explicit lane-markings—by using a detection-based approach. Using approaches of the present disclosure, the identification of drivable paths may be possible in environments where conventional approaches are unreliable, or fail—such as where lane markings do not exist or are occluded. The deep learning solution may generate outputs that represent geometries for one or more drivable paths in an environment and confidence values corresponding to the path types or classes to which the geometries correspond. These outputs may be directly usable by an autonomous vehicle—such as by an autonomous driving software stack—with minimal post-processing.
    Type: Application
    Filed: June 6, 2019
    Publication date: December 19, 2019
    Inventors: Regan Blythe Towal, Maroof Mohammed Farooq, Vijay Chintalapudi, Carolina Parada, David Nister
  • Publication number: 20190265703
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including, without limitation, a mobile device, a desktop computer, or a kiosk. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Application
    Filed: February 26, 2019
    Publication date: August 29, 2019
    Inventors: Gary Hicok, Michael Cox, Miguel Sainz, Martin Hempel, Ratin Kumar, Timo Roman, Gordon Grigor, David Nister, Justin Ebert, Chin Shih, Tony Tam, Ruchi Bhargava
  • Publication number: 20190266779
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, landmark detection, and other geometric space parameters.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Publication number: 20190266736
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion detection, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Publication number: 20190258251
    Abstract: Autonomous driving is one of the world's most challenging computational problems. Very large amounts of data from cameras, RADARs, LIDARs, and HD-Maps must be processed to generate commands to control the car safely and comfortably in real time. This challenging task requires a dedicated supercomputer that is energy-efficient and low-power, complex high-performance software, and breakthroughs in deep learning AI algorithms. To meet this task, the present technology provides advanced systems and methods that facilitate autonomous driving functionality, including a platform for autonomous driving Levels 3, 4, and/or 5. In preferred embodiments, the technology provides an end-to-end platform with a flexible architecture, including an architecture for autonomous vehicles that leverages computer vision and known ADAS techniques, providing diversity and redundancy, and meeting functional safety standards.
    Type: Application
    Filed: November 9, 2018
    Publication date: August 22, 2019
    Inventors: Michael Alan Ditty, Gary Hicok, Jonathan Sweedler, Clement Farabet, Mohammed Abdulla Yousuf, Tai-Yuen Chan, Ram Ganapathi, Ashok Srinivasan, Michael Rod Truog, Karl Greb, John George Mathieson, David Nister, Kevin Flory, Daniel Perrin, Dan Hettena
  • Publication number: 20190250622
    Abstract: In various examples, sensor data representative of a field of view of at least one sensor of a vehicle in an environment is received from the at least one sensor. Based at least in part on the sensor data, parameters of an object located in the environment are determined. Trajectories of the object are modeled toward target positions based at least in part on the parameters of the object. From the trajectories, safe time intervals (and/or safe arrival times) are computed over which the vehicle could occupy the target positions without a collision with the object. Based at least in part on the safe time intervals (and/or safe arrival times) and a position of the vehicle in the environment, a trajectory for the vehicle may be generated and/or analyzed.
    Type: Application
    Filed: February 7, 2019
    Publication date: August 15, 2019
    Inventors: David Nister, Anton Vorontsov
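The safe-interval computation described here can be pictured as taking the complement of an object's predicted occupancy of a target position over a time horizon; a minimal sketch with assumed inputs and illustrative names:

```python
def safe_intervals(occupied, horizon):
    """Complement of occupancy intervals at one target position.

    occupied: sorted, non-overlapping (t_enter, t_exit) pairs during which the
    object occupies the position. Returns intervals in [0, horizon] during
    which the ego-vehicle could occupy it without a collision.
    """
    free, t = [], 0.0
    for t_in, t_out in occupied:
        if t_in > t:
            free.append((t, t_in))
        t = max(t, t_out)
    if t < horizon:
        free.append((t, horizon))
    return free

# Example: object occupies the position during 2-4 s and 7-8 s of a 10 s horizon.
print(safe_intervals([(2.0, 4.0), (7.0, 8.0)], 10.0))  # [(0.0, 2.0), (4.0, 7.0), (8.0, 10.0)]
```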