Patents by Inventor Nikolai Smolyanskiy
Nikolai Smolyanskiy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230049567
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: October 28, 2022
Publication date: February 16, 2023
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
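The label-propagation rule this abstract describes (keep a LIDAR-derived label only if enough RADAR detections fall inside it) can be illustrated with a minimal sketch. This is not the patented implementation: boxes are simplified to axis-aligned 2D rectangles, and the function names, box encoding, and threshold are illustrative.

```python
# Minimal sketch of propagating LIDAR labels to RADAR data and omitting
# labels with too few RADAR detections. Axis-aligned boxes
# (x_min, y_min, x_max, y_max, class_id) are a simplifying assumption.
import numpy as np

def filter_labels(lidar_boxes, radar_points, min_detections=3):
    """Keep LIDAR labels containing at least `min_detections` RADAR points."""
    kept = []
    for x0, y0, x1, y1, cls in lidar_boxes:
        inside = ((radar_points[:, 0] >= x0) & (radar_points[:, 0] <= x1) &
                  (radar_points[:, 1] >= y0) & (radar_points[:, 1] <= y1))
        if inside.sum() >= min_detections:   # omit sparse labels
            kept.append((x0, y0, x1, y1, cls))
    return kept

radar = np.random.uniform(-50, 50, size=(200, 2))   # stand-in RADAR detections
labels = [(-5, -5, 5, 5, 0), (40, 40, 41, 41, 1)]   # second box is likely sparse
print(filter_labels(labels, radar))
```

The surviving boxes would then serve as ground truth for training the RADAR detection network.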
-
Publication number: 20230013338
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: June 30, 2022
Publication date: January 19, 2023
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
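A hedged sketch of the two-headed idea in this abstract: a single network predicts both the vehicle's orientation relative to the path and its lateral offset, and a controller combines the two into a steering command. The architecture, the three-way discretization of each head, and the steering rule below are illustrative assumptions, not the patent's claimed design.

```python
# Illustrative two-headed path-navigation DNN: one feature trunk, one head
# for orientation relative to the path, one for lateral position.
import torch
import torch.nn as nn

class PathNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.orientation = nn.Linear(32, 3)  # facing left / straight / right
        self.lateral = nn.Linear(32, 3)      # offset left / centered / right

    def forward(self, image):
        f = self.features(image)
        return self.orientation(f).softmax(-1), self.lateral(f).softmax(-1)

def steering(rot_probs, lat_probs, w_rot=0.5, w_lat=0.5):
    # Positive output = steer right; each head votes via (p_right - p_left).
    turn = lambda p: p[..., 2] - p[..., 0]
    return w_rot * turn(rot_probs) + w_lat * turn(lat_probs)

net = PathNet()
rot, lat = net(torch.randn(1, 3, 120, 160))  # dummy camera frame
print(steering(rot, lat))
```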
-
Publication number: 20220415059
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
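The two-stage, two-view pipeline can be sketched as follows: stage one classifies LIDAR points in a perspective (range-image) view, the per-point results are scattered into a top-down grid, and stage two segments and regresses geometry there. The names `perspective_net` and `topdown_net`, all layer sizes, and the scatter-based view transform are assumptions for illustration only.

```python
# Illustrative chained multi-view perception: perspective-view segmentation,
# projection to a bird's-eye-view (BEV) grid, then top-down processing.
import torch
import torch.nn as nn

perspective_net = nn.Sequential(  # stage 1: per-pixel class scores
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 1))          # 4 semantic classes, assumed
topdown_net = nn.Sequential(      # stage 2: BEV segmentation / regression
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 6, 1))          # e.g., class + box-geometry channels

range_image = torch.rand(1, 1, 64, 512)          # stand-in LIDAR range scan
class_scores = perspective_net(range_image)      # (1, 4, 64, 512)

# Scatter per-point class scores into a 128x128 top-down grid using the
# (hypothetical) x/y world coordinates of each range-image pixel.
xy = torch.randint(0, 128, (64 * 512, 2))        # stand-in coordinates
bev = torch.zeros(1, 4, 128, 128)
flat = class_scores.reshape(1, 4, -1)
bev[0, :, xy[:, 1], xy[:, 0]] = flat[0]          # last-write-wins splat

outputs = topdown_net(bev)                       # (1, 6, 128, 128)
print(outputs.shape)
```

The stage-two output channels would then be decoded into 2D/3D bounding boxes and class labels, as the abstract describes.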
-
Patent number: 11531088
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Grant
Filed: March 31, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA Corporation
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Patent number: 11532168
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20220269271
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: March 11, 2022
Publication date: August 25, 2022
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
-
Publication number: 20220197284
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: March 11, 2022
Publication date: June 23, 2022
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
-
Publication number: 20220138568
Abstract: In various examples, reinforcement learning is used to train at least one machine learning model (MLM) to control a vehicle by leveraging a deep neural network (DNN) trained on real-world data by using imitation learning to predict movements of one or more actors to define a world model. The DNN may be trained from real-world data to predict attributes of actors, such as locations and/or movements, from input attributes. The predictions may define states of the environment in a simulator, and one or more attributes of one or more actors input into the DNN may be modified or controlled by the simulator to simulate conditions that may otherwise be unfeasible. The MLM(s) may leverage predictions made by the DNN to predict one or more actions for the vehicle.
Type: Application
Filed: November 1, 2021
Publication date: May 5, 2022
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Lirui Wang, David Nister, Ollin Boer Bohan, Ishwar Kulkarni, Fangkai Yang, Julia Ng, Alperen Degirmenci, Ruchi Bhargava, Rotem Aviv
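A toy sketch of the train-inside-a-learned-world-model loop described above: a frozen, imitation-trained dynamics DNN stands in for the simulator's actor predictions, and a policy network is optimized by rolling the policy out through that model. The reward, dimensions, and the choice of backpropagating through differentiable rollouts are all assumptions for illustration; the patent does not specify this particular optimization scheme.

```python
# Toy policy training inside a learned world model: the dynamics network is
# frozen (pre-trained, e.g., by imitation), only the policy is updated.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
world_model = nn.Sequential(nn.Linear(state_dim + action_dim, 64),
                            nn.ReLU(), nn.Linear(64, state_dim))
world_model.requires_grad_(False)   # frozen stand-in for the trained DNN
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, action_dim), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(100):
    state = torch.randn(1, state_dim)   # simulator-chosen starting condition
    ret = 0.0
    for t in range(20):                 # differentiable rollout
        action = policy(state)
        state = world_model(torch.cat([state, action], dim=-1))
        ret = ret - state[:, :2].pow(2).sum()  # toy reward: stay near origin
    loss = -ret
    opt.zero_grad(); loss.backward(); opt.step()
```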
-
Patent number: 11281221
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Grant
Filed: July 6, 2020
Date of Patent: March 22, 2022
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
-
Publication number: 20210342609
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210342608
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210326678
Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real-time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
Type: Application
Filed: June 23, 2021
Publication date: October 21, 2021
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
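The disparity-readout contrast in this abstract can be sketched directly: a conventional soft argmax collapses a matching-cost volume to expected disparities, while a "machine learned argmax" replaces that fixed operation with trainable convolutions that see local context. Both snippets below are illustrative stand-ins under assumed shapes, not the patented layers.

```python
# Soft argmax over a stereo cost volume vs. a learned, convolutional readout.
import torch
import torch.nn as nn

D, H, W = 32, 48, 64
cost_volume = torch.randn(1, D, H, W)       # score per candidate disparity

def soft_argmax(cv):
    probs = cv.softmax(dim=1)               # distribution over disparities
    disp = torch.arange(cv.shape[1], dtype=cv.dtype).view(1, -1, 1, 1)
    return (probs * disp).sum(dim=1)        # expected disparity, (1, H, W)

learned_argmax = nn.Sequential(             # trainable, context-aware readout
    nn.Conv2d(D, 64, 3, padding=1), nn.ELU(),   # ELU, as in the abstract
    nn.Conv2d(64, 1, 3, padding=1))

print(soft_argmax(cost_volume).shape, learned_argmax(cost_volume).shape)
```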
-
Publication number: 20210295171
Abstract: In various examples, past location information corresponding to actors in an environment and map information may be applied to a deep neural network (DNN), such as a recurrent neural network (RNN), trained to compute information corresponding to future trajectories of the actors. The output of the DNN may include, for each future time slice the DNN is trained to predict, a confidence map representing a confidence for each pixel that an actor is present and a vector field representing locations of actors in confidence maps for prior time slices. The vector fields may thus be used to track an object through confidence maps for each future time slice to generate a predicted future trajectory for each actor. The predicted future trajectories, in addition to tracked past trajectories, may be used to generate full trajectories for the actors that may aid an ego-vehicle in navigating the environment.
Type: Application
Filed: March 19, 2020
Publication date: September 23, 2021
Inventors: Alexey Kamenev, Nikolai Smolyanskiy, Ishwar Kulkarni, Ollin Boer Bohan, Fangkai Yang, Alperen Degirmenci, Ruchi Bhargava, Urs Muller, David Nister, Rotem Aviv
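A minimal decoding sketch for the output format described above: at each future time slice the network emits a per-pixel confidence map plus a vector field pointing to the same actor's pixel in the prior slice, so a trajectory can be read out by peak-picking the last slice and walking the vectors backwards. The array shapes, the single-peak decoding, and the random data are illustrative assumptions.

```python
# Decode one predicted trajectory from confidence maps + vector fields.
import numpy as np

T, H, W = 4, 32, 32
conf = np.random.rand(T, H, W)            # confidence per future time slice
vec = np.random.randn(T, H, W, 2) * 2     # offset to location in prior slice

y, x = np.unravel_index(conf[-1].argmax(), conf[-1].shape)
trajectory = [(int(y), int(x))]
for t in range(T - 1, 0, -1):             # walk back slice by slice
    dy, dx = vec[t, y, x]
    y = int(np.clip(round(y + dy), 0, H - 1))
    x = int(np.clip(round(x + dx), 0, W - 1))
    trajectory.append((y, x))
trajectory.reverse()                      # earliest future slice first
print(trajectory)
```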
-
Publication number: 20210253128
Abstract: Embodiments of the present disclosure relate to behavior planning for autonomous vehicles. The technology described herein selects a preferred trajectory for an autonomous vehicle based on an evaluation of multiple hypothetical trajectories by different components within a planning system. The various components provide an optimization score for each trajectory according to the priorities of the component, and scores from multiple components may form a final optimization score. This scoring system allows the competing priorities (e.g., comfort, minimal travel time, fuel economy) of different components to be considered together. In examples, the trajectory with the best combined score may be selected for implementation. As such, an iterative approach that evaluates various factors may be used to identify an optimal or preferred trajectory for an autonomous vehicle when navigating an environment.
Type: Application
Filed: February 18, 2021
Publication date: August 19, 2021
Inventors: David Nister, Yizhou Wang, Julia Ng, Rotem Aviv, Seungho Lee, Joshua John Bialkowski, Hon Leung Lee, Hermes Lanker, Raul Correal Tezanos, Zhenyi Zhang, Nikolai Smolyanskiy, Alexey Kamenev, Ollin Boer Bohan, Anton Vorontsov, Miguel Sainz Serra, Birgit Henke
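The combined-scoring idea reduces to a small sketch: several planner components each score every candidate trajectory according to their own priority, weighted scores are summed into a final score, and the best candidate wins. The components, weights, and 1D trajectory encoding below are all illustrative, not the planning system's actual scoring functions.

```python
# Combine per-component optimization scores and pick the best trajectory.
import numpy as np

def comfort(traj):      # penalize harsh acceleration changes
    return -np.abs(np.diff(traj, n=2)).sum()

def travel_time(traj):  # reward progress along the route
    return traj[-1] - traj[0]

components = [(comfort, 0.6), (travel_time, 0.4)]            # assumed weights
candidates = [np.cumsum(np.random.rand(10)) for _ in range(5)]  # 1D positions

def total_score(traj):
    return sum(w * score(traj) for score, w in components)

best = max(candidates, key=total_score)
print(total_score(best))
```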
-
Patent number: 11080590
Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real-time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
Type: Grant
Filed: March 18, 2019
Date of Patent: August 3, 2021
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
-
Publication number: 20210156960
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
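An illustrative common-trunk / multi-head layout matching this abstract: accumulated, ego-motion-compensated RADAR detections are orthographically splatted into a top-down grid, a shared feature extractor runs once, and separate heads predict class confidence and per-pixel instance (box) parameters. The channel counts, grid size, point features, and box parameterization are assumptions, not the patent's specification.

```python
# RADAR obstacle detector sketch: shared trunk, two prediction heads.
import torch
import torch.nn as nn

class RadarDetector(nn.Module):
    def __init__(self, in_ch=3, n_classes=4, box_params=6):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared feature extractor
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.class_head = nn.Conv2d(32, n_classes, 1)      # confidence map
        self.instance_head = nn.Conv2d(32, box_params, 1)  # e.g., dx, dy, w, l, sin, cos

    def forward(self, bev):
        f = self.trunk(bev)
        return self.class_head(f).sigmoid(), self.instance_head(f)

# Orthographic projection of accumulated detections into a BEV grid:
grid = torch.zeros(1, 3, 128, 128)
pts = torch.rand(500, 5)                            # stand-in RADAR points
ix = (pts[:, 0] * 127).long(); iy = (pts[:, 1] * 127).long()
grid[0, :, iy, ix] = pts[:, 2:5].t()                # splat point features

conf, boxes = RadarDetector()(grid)
print(conf.shape, boxes.shape)
```

Decoding, filtering, and clustering the two head outputs into bounding shapes would follow as a post-processing step.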
-
Publication number: 20210156963
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Publication number: 20210150230
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210026355
Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
Type: Application
Filed: July 24, 2020
Publication date: January 28, 2021
Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
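A hedged single-pass, four-head sketch following this abstract: one trunk feeds a class-confidence head, an instance-regression head, an instance-clustering head, and a depth head, so semantic, instance, and range outputs all come from one forward pass. Every channel count and spatial size below is an illustrative assumption.

```python
# Single-pass panoptic segmentation skeleton: shared trunk, four heads.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
heads = nn.ModuleDict({
    "class_conf": nn.Conv2d(32, 10, 1),   # per-class confidence maps
    "instance_reg": nn.Conv2d(32, 4, 1),  # per-pixel instance offsets/extent
    "instance_clu": nn.Conv2d(32, 8, 1),  # instance-clustering confidence
    "depth": nn.Conv2d(32, 1, 1),         # range value per pixel
})

image = torch.rand(1, 3, 64, 96)          # stitched/stacked sensor input
features = trunk(image)                   # single shared pass
outputs = {name: head(features) for name, head in heads.items()}
print({k: tuple(v.shape) for k, v in outputs.items()})
```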
-
Publication number: 20200341469
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: July 6, 2020
Publication date: October 29, 2020
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield