Patents by Inventor Alexey Kamenev

Alexey Kamenev has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230169321
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
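
The disparity-estimation filings in this listing (publication 20230169321, patent 11604967, and the related entries below) describe replacing a hard argmax over a disparity cost volume with a small stack of trainable convolutional layers, with ELU activations used for speed. The following PyTorch sketch illustrates only that one idea; the layer counts, tensor shapes, and the class name LearnedArgmax are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class LearnedArgmax(nn.Module):
    """Collapse a disparity cost volume (B, D, H, W) into a disparity map (B, 1, H, W)."""
    def __init__(self, max_disparity: int):
        super().__init__()
        # Trainable 2D convolutions over the disparity axis add spatial context
        # before the final soft selection (the "machine-learned argmax").
        self.refine = nn.Sequential(
            nn.Conv2d(max_disparity, max_disparity, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv2d(max_disparity, max_disparity, kernel_size=3, padding=1),
        )
        disparities = torch.arange(max_disparity, dtype=torch.float32)
        self.register_buffer("disparities", disparities.view(1, max_disparity, 1, 1))

    def forward(self, cost_volume: torch.Tensor) -> torch.Tensor:
        logits = self.refine(cost_volume)
        weights = torch.softmax(logits, dim=1)  # soft, differentiable selection
        return (weights * self.disparities).sum(dim=1, keepdim=True)

# Toy usage: a random 64-disparity cost volume at reduced resolution.
disparity = LearnedArgmax(max_disparity=64)(torch.randn(2, 64, 60, 80))
print(disparity.shape)  # torch.Size([2, 1, 60, 80])
```
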
  • Patent number: 11604967
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: March 14, 2023
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
  • Publication number: 20230013338
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 19, 2023
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
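
Several entries in this family (publications 20230013338, 20220269271, 20220197284, and patents 11281221 and 10705525) share an abstract describing a single DNN that estimates both the vehicle's orientation relative to a path and its lateral position, which a controller then turns into a steering command. The sketch below is a hedged PyTorch illustration of that two-headed layout; the backbone, the three-way left/straight/right discretization, and the gain constants are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PathNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny stand-in backbone; the filings do not specify this architecture.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: orientation with respect to the path (facing left / straight / right).
        self.orientation_head = nn.Linear(32, 3)
        # Head 2: lateral position with respect to the path (offset left / centered / right).
        self.lateral_head = nn.Linear(32, 3)

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)
        return self.orientation_head(features), self.lateral_head(features)

def steering_command(orientation_logits, lateral_logits, k_rot=1.0, k_lat=0.5):
    """Map the two softmax outputs to a scalar steering value in [-1, 1]."""
    directions = torch.tensor([-1.0, 0.0, 1.0])  # left, straight, right
    rot = (torch.softmax(orientation_logits, dim=-1) * directions).sum(-1)
    lat = (torch.softmax(lateral_logits, dim=-1) * directions).sum(-1)
    return torch.clamp(k_rot * rot + k_lat * lat, -1.0, 1.0)

net = PathNet()
orientation, lateral = net(torch.randn(1, 3, 180, 320))
print(steering_command(orientation, lateral))
```
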
  • Publication number: 20220269271
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Application
    Filed: March 11, 2022
    Publication date: August 25, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20220197284
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Application
    Filed: March 11, 2022
    Publication date: June 23, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20220138568
    Abstract: In various examples, reinforcement learning is used to train at least one machine learning model (MLM) to control a vehicle by leveraging a deep neural network (DNN) trained on real-world data by using imitation learning to predict movements of one or more actors to define a world model. The DNN may be trained from real-world data to predict attributes of actors, such as locations and/or movements, from input attributes. The predictions may define states of the environment in a simulator, and one or more attributes of one or more actors input into the DNN may be modified or controlled by the simulator to simulate conditions that may otherwise be unfeasible. The MLM(s) may leverage predictions made by the DNN to predict one or more actions for the vehicle.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Lirui Wang, David Nister, Ollin Boer Bohan, Ishwar Kulkarni, Fangkai Yang, Julia Ng, Alperen Degirmenci, Ruchi Bhargava, Rotem Aviv
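
Publication 20220138568 describes training a control policy against a world model: a DNN trained by imitation on real-world data predicts how surrounding actors move, and a simulator built around those predictions supplies the states from which the policy learns. The sketch below is a deliberately simplified, differentiable stand-in for that loop (it backpropagates through a frozen recurrent world model rather than running a full reinforcement-learning algorithm); every module, dimension, and the placeholder reward are assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 32, 2  # assumed per-step feature and control sizes

# World model: stands in for a DNN trained beforehand by imitation on real
# driving logs; it is frozen here and only consulted for predictions.
world_model = nn.GRU(input_size=STATE_DIM + ACTION_DIM, hidden_size=STATE_DIM, batch_first=True)
for p in world_model.parameters():
    p.requires_grad_(False)

# Policy (the trainable MLM) maps the current state to an ego action.
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, ACTION_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def reward(states: torch.Tensor) -> torch.Tensor:
    # Placeholder objective: keep the first predicted feature (e.g. lateral error) small.
    return -(states[..., 0] ** 2).mean()

state, hidden = torch.zeros(1, 1, STATE_DIM), None
for step in range(100):
    action = policy(state.squeeze(1))                           # ego action for this step
    model_in = torch.cat([state, action.unsqueeze(1)], dim=-1)  # state plus action into the world model
    next_state, hidden = world_model(model_in, hidden)          # predicted environment state
    loss = -reward(next_state)                                  # gradient flows back through the frozen model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state, hidden = next_state.detach(), hidden.detach()
```
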
  • Patent number: 11281221
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: March 22, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20210326678
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 21, 2021
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
  • Publication number: 20210295171
    Abstract: In various examples, past location information corresponding to actors in an environment and map information may be applied to a deep neural network (DNN)—such as a recurrent neural network (RNN)—trained to compute information corresponding to future trajectories of the actors. The output of the DNN may include, for each future time slice the DNN is trained to predict, a confidence map representing a confidence for each pixel that an actor is present and a vector field representing locations of actors in confidence maps for prior time slices. The vector fields may thus be used to track an object through confidence maps for each future time slice to generate a predicted future trajectory for each actor. The predicted future trajectories, in addition to tracked past trajectories, may be used to generate full trajectories for the actors that may aid an ego-vehicle in navigating the environment.
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventors: Alexey Kamenev, Nikolai Smolyanskiy, Ishwar Kulkarni, Ollin Boer Bohan, Fangkai Yang, Alperen Degirmenci, Ruchi Bhargava, Urs Muller, David Nister, Rotem Aviv
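
Publication 20210295171 describes outputs consisting of, for each future time slice, a per-pixel confidence map and a vector field that points back to the actor's location in the prior slice, which together let a decoder chain detections into a predicted trajectory. The function below is a hedged, single-actor illustration of that decoding step; the array shapes, threshold, and nearest-match linking rule are assumptions.

```python
import numpy as np

def decode_trajectory(confidence: np.ndarray, vectors: np.ndarray, threshold: float = 0.5):
    """confidence: (T, H, W) values in [0, 1]; vectors: (T, 2, H, W) offsets (dy, dx) to the prior slice."""
    T, H, W = confidence.shape
    trajectory = []
    prev = None
    for t in range(T):
        candidates = np.argwhere(confidence[t] > threshold)  # (N, 2) pixel coordinates
        if candidates.size == 0:
            break
        if prev is None:
            # Start the track at the most confident pixel of the first slice.
            y, x = np.unravel_index(np.argmax(confidence[t]), (H, W))
        else:
            # Pick the candidate whose back-pointing vector lands closest to the
            # previously decoded position.
            back = candidates + vectors[t, :, candidates[:, 0], candidates[:, 1]].T
            y, x = candidates[np.argmin(np.linalg.norm(back - prev, axis=1))]
        prev = np.array([y, x], dtype=np.float64)
        trajectory.append((int(y), int(x)))
    return trajectory

# Toy usage with random maps; a real decoder would consume the DNN's outputs.
conf = np.random.rand(5, 64, 64)
vecs = np.random.randn(5, 2, 64, 64)
print(decode_trajectory(conf, vecs))
```
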
  • Publication number: 20210253128
    Abstract: Embodiments of the present disclosure relate to behavior planning for autonomous vehicles. The technology described herein selects a preferred trajectory for an autonomous vehicle based on an evaluation of multiple hypothetical trajectories by different components within a planning system. The various components provide an optimization score for each trajectory according to the priorities of the component and scores from multiple components may form a final optimization score. This scoring system allows the competing priorities (e.g., comfort, minimal travel time, fuel economy) of different components to be considered together. In examples, the trajectory with the best combined score may be selected for implementation. As such, an iterative approach that evaluates various factors may be used to identify an optimal or preferred trajectory for an autonomous vehicle when navigating an environment.
    Type: Application
    Filed: February 18, 2021
    Publication date: August 19, 2021
    Inventors: David Nister, Yizhou Wang, Julia Ng, Rotem Aviv, Seungho Lee, Joshua John Bialkowski, Hon Leung Lee, Hermes Lanker, Raul Correal Tezanos, Zhenyi Zhang, Nikolai Smolyanskiy, Alexey Kamenev, Ollin Boer Bohan, Anton Vorontsov, Miguel Sainz Serra, Birgit Henke
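
Publication 20210253128 describes several planner components each scoring candidate trajectories against their own priorities, with the per-component scores combined into a final optimization score that decides which trajectory to execute. Below is a minimal Python sketch of that scoring pattern; the component names, weights, trajectory attributes, and the weighted-sum combination are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

@dataclass
class Trajectory:
    travel_time_s: float
    max_lateral_accel: float
    fuel_l: float

ScoreFn = Callable[[Trajectory], float]

# Each component rates a trajectory according to its own priority.
components: Dict[str, ScoreFn] = {
    "comfort": lambda t: -t.max_lateral_accel,  # smoother is better
    "speed":   lambda t: -t.travel_time_s,      # faster is better
    "economy": lambda t: -t.fuel_l,             # thriftier is better
}
weights = {"comfort": 0.5, "speed": 0.3, "economy": 0.2}

def select_trajectory(candidates: Sequence[Trajectory]) -> Trajectory:
    def combined(t: Trajectory) -> float:
        # Final optimization score: weighted combination of component scores.
        return sum(weights[name] * fn(t) for name, fn in components.items())
    return max(candidates, key=combined)

candidates: List[Trajectory] = [
    Trajectory(travel_time_s=30.0, max_lateral_accel=1.2, fuel_l=0.05),
    Trajectory(travel_time_s=27.0, max_lateral_accel=2.5, fuel_l=0.06),
    Trajectory(travel_time_s=33.0, max_lateral_accel=0.8, fuel_l=0.04),
]
print(select_trajectory(candidates))
```
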
  • Patent number: 11080590
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: August 3, 2021
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
  • Publication number: 20210026355
    Abstract: One or more deep neural networks (DNNs) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: January 28, 2021
    Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
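
Publication 20210026355 describes a single DNN with a common trunk and separate heads for class confidence, instance regression, instance clustering, and depth, run in one pass. The PyTorch sketch below shows only that multi-head layout; the trunk, channel counts, and output-channel choices are assumed placeholders rather than the filing's actual network.

```python
import torch
import torch.nn as nn

class PanopticHeads(nn.Module):
    def __init__(self, num_classes: int = 10, trunk_channels: int = 64):
        super().__init__()
        # Shared trunk: a stand-in for the common feature extractor.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, trunk_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(trunk_channels, trunk_channels, 3, padding=1), nn.ReLU(),
        )
        self.class_confidence = nn.Conv2d(trunk_channels, num_classes, 1)  # per-pixel class scores
        self.instance_regression = nn.Conv2d(trunk_channels, 2, 1)         # per-pixel offset to an instance center
        self.instance_clustering = nn.Conv2d(trunk_channels, 1, 1)         # per-pixel instance confidence
        self.depth = nn.Conv2d(trunk_channels, 1, 1)                       # per-pixel range estimate

    def forward(self, x: torch.Tensor):
        features = self.trunk(x)
        return {
            "class_confidence": self.class_confidence(features),
            "instance_regression": self.instance_regression(features),
            "instance_clustering": torch.sigmoid(self.instance_clustering(features)),
            "depth": self.depth(features),
        }

outputs = PanopticHeads()(torch.randn(1, 3, 128, 256))
print({name: tuple(t.shape) for name, t in outputs.items()})
```
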
  • Publication number: 20200341469
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Application
    Filed: July 6, 2020
    Publication date: October 29, 2020
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Patent number: 10705525
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: July 7, 2020
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20190295282
    Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
    Type: Application
    Filed: March 18, 2019
    Publication date: September 26, 2019
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
  • Publication number: 20180292825
    Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
    Type: Application
    Filed: March 28, 2018
    Publication date: October 11, 2018
    Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
  • Publication number: 20160012318
    Abstract: A service that performs automatic selection and recommendation of featurization(s) for a provided dataset and machine learning application is described. The service can be a cloud service. Selection/recommendation can cover multiple featurizations that are available for most common raw data formats (e.g., images and text data). Provided a dataset and a task, the service can evaluate different possible featurizations and select one or more based on performance, on the similarity of the dataset and task to known datasets whose featurizations are known to achieve high predictive accuracy (low predictive error) on similar tasks, on training via learning algorithms that take multiple inputs, etc. The service may include a request-response aspect that provides access to the best featurization selected for the given dataset and task.
    Type: Application
    Filed: December 19, 2014
    Publication date: January 14, 2016
    Inventors: Mikhail Bilenko, Alexey Kamenev, Vijay Narayanan, Peter Taraba
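
Publication 20160012318 describes a service that, given a dataset and a task, evaluates candidate featurizations and recommends the best-performing one. The sketch below captures only the evaluate-and-select step, using scikit-learn on a toy text-classification task; the candidate set, scorer, and data are assumptions, and the filing's dataset-similarity signals and cloud request-response interface are not modeled.

```python
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy dataset and binary task standing in for the user-provided inputs.
texts = ["great product", "terrible service", "loved it", "awful experience",
         "would buy again", "never again", "excellent quality", "very poor"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Candidate featurizations available for this raw data format (text).
candidate_featurizations = {
    "counts": CountVectorizer(),
    "tfidf": TfidfVectorizer(),
    "hashing": HashingVectorizer(n_features=128),
}

def recommend_featurization(X, y):
    """Score each candidate with the downstream learner and return the best."""
    scores = {}
    for name, featurizer in candidate_featurizations.items():
        pipeline = make_pipeline(featurizer, LogisticRegression(max_iter=1000))
        scores[name] = cross_val_score(pipeline, X, y, cv=4).mean()
    best = max(scores, key=scores.get)
    return best, scores

print(recommend_featurization(texts, labels))
```
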
  • Patent number: 9146836
    Abstract: The present invention extends to methods, systems, and computer program products for linking diagnostic visualizations to regions of application code. Diagnostic visualizations emitted during execution of an application are displayed. The diagnostic visualizations partially represent the abstract objective of the application (e.g., as envisioned by a developer). Diagnostic data for at least one of a plurality of components is displayed. The diagnostic data indicates the performance of the at least one of the plurality of components during execution of the application. The displayed one or more diagnostic visualizations and the displayed diagnostic data are correlated to link the one or more diagnostic visualizations to the at least one of the plurality of components. Linking the one or more diagnostic visualizations to the at least one of the plurality of components can better indicate how the application's behavior reconciles with the abstract objective.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: September 29, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Rapp, Daniel Griffing, Alexander Dadiomov, Matthew Jacobs, Ben Nesson, Drake A. Campbell, Mayank Agarwal, Paulo Cesar Sales Janotti, Xinhua Ji, Eric Ledoux, Alexey Kamenev, Jared Robert Van Leeuwen
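
Patent 9146836 (and the corresponding publication below) describes correlating displayed diagnostic visualizations with per-component diagnostic data so that each visualization can be traced back to the code region that produced it. The sketch below is a hedged, toy version of that correlation step in Python; the event and record fields and the match-by-component-and-time rule are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VisualizationEvent:
    component: str     # code region / component that emitted the visualization
    timestamp_ms: int

@dataclass
class DiagnosticRecord:
    component: str
    timestamp_ms: int
    cpu_ms: float      # performance measurement for that component

def link(events: List[VisualizationEvent],
         records: List[DiagnosticRecord],
         window_ms: int = 50) -> List[Tuple[VisualizationEvent, DiagnosticRecord]]:
    """Return (event, record) pairs whose component matches and whose times are close."""
    return [
        (e, r)
        for e in events
        for r in records
        if e.component == r.component and abs(e.timestamp_ms - r.timestamp_ms) <= window_ms
    ]

events = [VisualizationEvent("Renderer", 1000), VisualizationEvent("Parser", 1400)]
records = [DiagnosticRecord("Renderer", 1010, cpu_ms=3.2), DiagnosticRecord("Parser", 1800, cpu_ms=9.7)]
print(link(events, records))
```
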
  • Publication number: 20130152052
    Abstract: The present invention extends to methods, systems, and computer program products for linking diagnostic visualizations to regions of application code. Diagnostic visualizations emitted during execution of an application are displayed. The diagnostic visualizations partially represent the abstract objective of the application (e.g., as envisioned by a developer). Diagnostic data for at least one of a plurality of components is displayed. The diagnostic data indicates the performance of the at least one of the plurality of components during execution of the application. The displayed one or more diagnostic visualizations and the displayed diagnostic data are correlated to link the one or more diagnostic visualizations to the at least one of the plurality of components. Linking the one or more diagnostic visualizations to the at least one of the plurality of components can better indicate how the application's behavior reconciles with the abstract objective.
    Type: Application
    Filed: December 13, 2011
    Publication date: June 13, 2013
    Applicant: Microsoft Corporation
    Inventors: James Rapp, Daniel Griffing, Alexander Dadiomov, Matthew Jacobs, Ben Nesson, Drake A. Campbell, Mayank Agarwal, Paulo Cesar Sales Janotti, Xinhua Ji, Eric Ledoux, Alexey Kamenev, Jared Robert Van Leeuwen