Patents by Inventor Nikolai Smolyanskiy

Nikolai Smolyanskiy is named as an inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960026
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data (a sketch of the label-propagation step follows this entry).
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
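The label-propagation filter described above lends itself to a short sketch. Below is a minimal, hypothetical NumPy version: it assumes axis-aligned bird's-eye-view boxes, 2-D RADAR detections, and an arbitrary default threshold; the patented method also deals with time alignment and oriented boxes.

```python
import numpy as np

def propagate_lidar_labels(lidar_boxes, radar_points, min_detections=4):
    """Keep only LIDAR-derived ground-truth boxes that contain at least
    `min_detections` RADAR points (threshold is a made-up default)."""
    kept = []
    for box in lidar_boxes:
        cx, cy = box["center"]   # box center in the bird's-eye-view plane
        w, l = box["size"]       # box width and length, in meters
        # Count RADAR detections that fall inside the axis-aligned box.
        inside = (
            (np.abs(radar_points[:, 0] - cx) <= w / 2)
            & (np.abs(radar_points[:, 1] - cy) <= l / 2)
        )
        if inside.sum() >= min_detections:
            kept.append(box)     # enough RADAR support: keep as ground truth
    return kept
```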
  • Publication number: 20240096102
    Abstract: Systems and methods are disclosed that relate to freespace detection using machine learning models. First data that may include object labels may be obtained from a first sensor, and freespace may be identified using the first data and the object labels. The first data may be annotated to include freespace labels that correspond to freespace within an operational environment. Freespace-annotated data may be generated by combining the freespace labels with second data obtained from a second sensor, with the freespace-annotated data corresponding to a viewable area in the operational environment. The viewable area may be determined by tracing one or more rays from the second sensor within the field of view of the second sensor relative to the first data (a toy visibility sketch follows this entry). The freespace-annotated data may be input into a machine learning model to train the machine learning model to detect freespace using the second data.
    Type: Application
    Filed: August 7, 2023
    Publication date: March 21, 2024
    Inventors: Alexander Popov, David Nister, Nikolai Smolyanskiy, Patrik Gebhardt, Ke Chen, Ryan Oldja, Hee Seok Lee, Shane Murray, Ruchi Bhargava, Tilman Wekel, Sangmin Oh
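The viewable-area step is, at heart, a visibility test along rays from the second sensor. The following toy 2-D version assumes a boolean occupancy grid and a sensor origin in grid coordinates; the function name, ray count, and range budget are placeholders rather than the claimed procedure.

```python
import numpy as np

def visible_freespace(occupancy, origin, num_rays=360, max_range=100):
    """Mark the grid cells visible from `origin` by casting rays and
    stopping each ray at the first occupied cell."""
    h, w = occupancy.shape
    visible = np.zeros_like(occupancy, dtype=bool)
    for theta in np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        for r in range(max_range):
            x = int(round(origin[0] + r * dx))
            y = int(round(origin[1] + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break                  # ray left the grid
            visible[y, x] = True       # cell is within the viewable area
            if occupancy[y, x]:
                break                  # ray blocked by an obstacle
    return visible
```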
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view), as sketched after this entry. The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
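To make the chained two-stage structure concrete, here is a toy PyTorch sketch. The layer sizes, class count, instance-geometry channels, and the caller-supplied `project_to_topdown` transform are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class MultiViewPerceptionNet(nn.Module):
    """Toy two-stage sketch: perspective-view segmentation chained with a
    top-down stage that has class and instance-geometry heads."""

    def __init__(self, num_classes=5):
        super().__init__()
        # Stage 1: per-pixel class segmentation in the perspective view.
        self.perspective_stage = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )
        # Stage 2: operates on features re-projected into a top-down view.
        self.topdown_trunk = nn.Sequential(
            nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(32, num_classes, 1)
        self.instance_head = nn.Conv2d(32, 6, 1)  # e.g. dx, dy, w, l, sin, cos

    def forward(self, image, project_to_topdown):
        seg = self.perspective_stage(image)
        bev = project_to_topdown(seg)   # caller supplies the view transform
        feats = self.topdown_trunk(bev)
        return self.class_head(feats), self.instance_head(feats)
```

Decoding the two head outputs into 2D/3D bounding boxes and class labels would follow as a post-processing step.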
  • Publication number: 20240061075
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s) (a sketch of this input pipeline follows this entry). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 24, 2023
    Publication date: February 22, 2024
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
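The input pipeline can be sketched in a few lines of NumPy under simplifying assumptions: 4x4 ego poses per sweep, a single detection-count channel, and made-up grid parameters. A production pipeline would carry more per-cell features.

```python
import numpy as np

def accumulate_radar(sweeps, poses, grid_size=256, cell_m=0.5):
    """Ego-motion-compensate RADAR sweeps into the latest ego frame and
    orthographically project them onto a bird's-eye-view grid."""
    latest_inv = np.linalg.inv(poses[-1])       # world -> latest ego frame
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    for points, pose in zip(sweeps, poses):
        # points: (N, 3) x, y, z in that sweep's ego frame; pose: ego -> world.
        xyz1 = np.c_[points, np.ones(len(points))]
        compensated = (latest_inv @ pose @ xyz1.T).T[:, :2]
        # Orthographic projection: drop z and bin x, y into grid cells.
        ij = np.floor(compensated / cell_m).astype(int) + grid_size // 2
        ok = (ij >= 0).all(axis=1) & (ij < grid_size).all(axis=1)
        np.add.at(grid, (ij[ok, 1], ij[ok, 0]), 1.0)  # detections per cell
    return grid  # one input channel for the network's feature extractor
```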
  • Publication number: 20240059285
    Abstract: In various examples, techniques for using future trajectory predictions for adaptive cruise control (ACC) are described. For instance, a vehicle may determine a future path(s) of the vehicle and a future path(s) of an object(s). The vehicle may then use a speed profile(s) and the future path(s) to determine a trajectory(ies) for the vehicle. The vehicle may then select a trajectory, such as based on the future path(s) of the object(s), as sketched after this entry. Based on the trajectory, ACC of the vehicle may cause the vehicle to navigate at a corresponding speed or velocity. In this way, the vehicle is able to continue using ACC even when the driver makes a maneuver(s) or the system decides to make a maneuver, such as switching lanes or choosing a lane when a road splits.
    Type: Application
    Filed: August 19, 2022
    Publication date: February 22, 2024
    Inventors: Julia Ng, Jian Wei Leong, Nikolai Smolyanskiy, Yizhou Wang, Fangkai Yang, Nianfeng Wan, Chang Liu
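As a rough illustration of pairing speed profiles with future paths, consider the sketch below. Everything in it is assumed: the data layout (per-step positions for ego and object paths, per-step speeds), the fixed gap threshold, and the first-safe selection rule stand in for whatever selection the actual system performs.

```python
import math

def select_acc_trajectory(ego_paths, speed_profiles, object_paths,
                          min_gap_m=10.0):
    """Pair candidate future paths with speed profiles and return the
    first combination that keeps a safe gap to every predicted object."""
    for path in ego_paths:                     # each path: [(x, y), ...]
        for speeds in speed_profiles:          # each profile: [v0, v1, ...]
            trajectory = list(zip(path, speeds))
            safe = all(
                math.hypot(x - obj_path[t][0], y - obj_path[t][1]) >= min_gap_m
                for t, ((x, y), _v) in enumerate(trajectory)
                for obj_path in object_paths   # object positions per time step
            )
            if safe:
                return trajectory              # ACC tracks this combination
    return None  # no safe combination; fall back to conservative behavior
```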
  • Patent number: 11885907
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: January 30, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20240029447
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 6, 2023
    Publication date: January 25, 2024
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20230281847
    Abstract: In various examples, methods and systems are provided for estimating depth values for images (e.g., from a monocular sequence). Disclosed approaches may define a search space of potential pixel matches between two images using one or more depth hypothesis planes based at least on a camera pose associated with one or more cameras used to generate the images. A machine learning model(s) may use this search space to predict likelihoods of correspondence between one or more pixels in the images. The predicted likelihoods may be used to compute depth values for one or more of the images (a toy plane-sweep sketch of this search space follows this entry). The predicted depth values may be transmitted and used by a machine to perform one or more operations.
    Type: Application
    Filed: February 3, 2022
    Publication date: September 7, 2023
    Inventors: Yiran Zhong, Charles Loop, Nikolai Smolyanskiy, Ke Chen, Stan Birchfield, Alexander Popov
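The depth-hypothesis search space can be illustrated with a classic plane-sweep cost volume. This NumPy sketch assumes grayscale images, fronto-parallel hypothesis planes, pinhole intrinsics `K`, and a relative pose `(R, t)`; the machine learning model(s) described above would replace the absolute-difference score with learned likelihoods of correspondence.

```python
import numpy as np

def plane_sweep_cost_volume(ref_img, src_img, K, R, t, depths):
    """For each depth-hypothesis plane, warp the source image into the
    reference view and score per-pixel similarity (absolute difference)."""
    h, w = ref_img.shape                        # grayscale images
    K_inv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    volume = np.zeros((len(depths), h, w), dtype=np.float32)
    for i, d in enumerate(depths):
        # Back-project reference pixels to depth d, then into the source view.
        pts = (K_inv @ pix) * d
        proj = K @ (R @ pts + t[:, None])
        uv = np.round(proj[:2] / proj[2]).astype(int)
        valid = (proj[2] > 0) & (uv[0] >= 0) & (uv[0] < w) \
                & (uv[1] >= 0) & (uv[1] < h)
        sampled = np.zeros(h * w, dtype=np.float32)
        sampled[valid] = src_img[uv[1, valid], uv[0, valid]]
        volume[i] = np.abs(ref_img - sampled.reshape(h, w))
    return volume  # per-pixel argmin over axis 0 gives a depth estimate
```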
  • Publication number: 20230260136
    Abstract: In various examples, systems and methods of the present disclosure detect and/or track objects in an environment using projection images generated from LiDAR. For example, a machine learning model, such as a deep neural network (DNN), may be used to compute a motion mask indicative of motion corresponding to points representing objects in an environment. Various input channels may be provided to the machine learning model to compute a motion mask. One or more comparison images may be generated by projecting depth values from a current range image into the coordinate space of a previous range image and comparing them to the depth values of that previous range image (a sketch of this comparison step follows this entry). The machine learning model may use the one or more projection images, the one or more comparison images, and/or the one or more range images to compute a motion mask and/or a motion vector output representation.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Jens Christian Bo Joergensen, Ollin Boer Bohan, Joachim Pehserl, Nikolai Smolyanskiy
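A minimal version of the comparison-image step might look like the following, assuming a caller-supplied `project_to_prev` that maps a 3-D point into the previous range image's (row, column, depth); in the disclosed approach the residuals feed a DNN rather than a hand-set threshold.

```python
import numpy as np

def range_comparison_image(curr_points, prev_range, project_to_prev):
    """Project the current sweep's points into the previous sweep's
    range-image coordinates and record per-pixel depth residuals.
    Large residuals suggest motion at that pixel."""
    h, w = prev_range.shape
    comparison = np.zeros((h, w), dtype=np.float32)
    for point in curr_points:
        row, col, depth = project_to_prev(point)
        if 0 <= row < h and 0 <= col < w and prev_range[row, col] > 0:
            # Signed depth residual between the two sweeps at this pixel.
            comparison[row, col] = depth - prev_range[row, col]
    return comparison  # one input channel alongside the range images
```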
  • Publication number: 20230169321
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context (one possible form of such a learned argmax is sketched after this entry). The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
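A common way to realize a machine-learned argmax is a small convolutional refinement followed by a softmax-weighted expectation over disparities; whether that matches the patented function is an assumption of this sketch, as are the layer sizes.

```python
import torch
import torch.nn as nn

class LearnedArgmax(nn.Module):
    """Sketch of a differentiable, context-aware argmax over a disparity
    cost volume: convolutional layers refine the scores, then a softmax-
    weighted expectation replaces the hard argmax."""

    def __init__(self, max_disparity=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(max_disparity, max_disparity, 3, padding=1),
            nn.ELU(),  # ELU activation, as the abstract describes
            nn.Conv2d(max_disparity, max_disparity, 3, padding=1),
        )
        disparities = torch.arange(max_disparity, dtype=torch.float32)
        self.register_buffer("disparities", disparities.view(1, -1, 1, 1))

    def forward(self, cost_volume):
        # cost_volume: (batch, max_disparity, height, width) match scores.
        weights = torch.softmax(self.refine(cost_volume), dim=1)
        return (weights * self.disparities).sum(dim=1)  # disparity per pixel
```

Unlike a hard argmax, the softmax-weighted expectation is differentiable, which is what lets the surrounding layers train end to end.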
  • Patent number: 11604967
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
  • Publication number: 20230049567
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20230013338
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes receiving image data at a deep neural network (DNN); determining, by the DNN and from the image data, both the orientation of a vehicle with respect to a path and its lateral position with respect to the path; and controlling the location of the vehicle using that orientation and lateral position (a minimal control sketch follows this entry).
    Type: Application
    Filed: June 30, 2022
    Publication date: January 19, 2023
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
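One simple way to turn the two DNN outputs into a steering command is to difference the class probabilities of each head. The three-class layout (left / center / right per head) and the gains below are assumptions of this toy sketch, not the claimed method.

```python
def steering_from_dnn(orientation_probs, lateral_probs, gain=1.0):
    """Convert the DNN's two softmax outputs into a steering command.
    Each head is assumed to have three classes: left, center, right."""
    # P(view rotated left) - P(view rotated right): orientation error.
    orientation_err = orientation_probs[0] - orientation_probs[2]
    # P(offset to the left) - P(offset to the right): lateral error.
    lateral_err = lateral_probs[0] - lateral_probs[2]
    # Blend the two errors into a single steering angle.
    return gain * (orientation_err + 0.5 * lateral_err)

# Example: view rotated slightly left, vehicle roughly centered on the path.
print(steering_from_dnn([0.6, 0.3, 0.1], [0.3, 0.4, 0.3]))  # prints 0.5
```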
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11531088
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20220269271
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes receiving image data at a deep neural network (DNN); determining, by the DNN and from the image data, both the orientation of a vehicle with respect to a path and its lateral position with respect to the path; and controlling the location of the vehicle using that orientation and lateral position.
    Type: Application
    Filed: March 11, 2022
    Publication date: August 25, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20220197284
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes receiving image data at a deep neural network (DNN); determining, by the DNN and from the image data, both the orientation of a vehicle with respect to a path and its lateral position with respect to the path; and controlling the location of the vehicle using that orientation and lateral position.
    Type: Application
    Filed: March 11, 2022
    Publication date: June 23, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20220138568
    Abstract: In various examples, reinforcement learning is used to train at least one machine learning model (MLM) to control a vehicle by leveraging a deep neural network (DNN), trained on real-world data via imitation learning, that predicts movements of one or more actors and thereby defines a world model. The DNN may be trained from real-world data to predict attributes of actors, such as locations and/or movements, from input attributes. The predictions may define states of the environment in a simulator, and one or more attributes of one or more actors input into the DNN may be modified or controlled by the simulator to simulate conditions that may otherwise be infeasible. The MLM(s) may leverage predictions made by the DNN to predict one or more actions for the vehicle (a toy training-loop sketch follows this entry).
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Lirui Wang, David Nister, Ollin Boer Bohan, Ishwar Kulkarni, Fangkai Yang, Julia Ng, Alperen Degirmenci, Ruchi Bhargava, Rotem Aviv
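The training loop below is a pseudocode-style Python sketch: every method on `policy` and `world_model` is a placeholder name invented here, intended only to show how an imitation-learned world model, simulator perturbations, and a reinforcement-learning update might interlock.

```python
def train_policy(policy, world_model, episodes=100, horizon=50):
    """Toy RL loop against a learned world model. `policy` is the MLM
    being trained; `world_model` wraps the imitation-trained DNN plus a
    simulator that can perturb actor attributes to create rare cases."""
    for _ in range(episodes):
        state = world_model.reset()
        for _ in range(horizon):
            action = policy.act(state)                  # MLM picks an action
            actors = world_model.predict_actors(state)  # DNN predicts actors
            actors = world_model.perturb(actors)        # simulate rare cases
            state, reward = world_model.step(state, action, actors)
            policy.update(state, action, reward)        # RL update step
```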
  • Patent number: 11281221
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes receiving image data at a deep neural network (DNN); determining, by the DNN and from the image data, both the orientation of a vehicle with respect to a path and its lateral position with respect to the path; and controlling the location of the vehicle using that orientation and lateral position.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: March 22, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield