Patents by Inventor Nikolai Smolyanskiy
Nikolai Smolyanskiy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210156963
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
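The label-propagation and filtering step described above is concrete enough to sketch. The following is a minimal illustration, assuming axis-aligned 3D boxes and a simple point-in-box count; the function name, box format, and threshold value are assumptions, not details from the filing.

```python
import numpy as np

def propagate_labels(lidar_boxes, radar_points, min_detections=2):
    """Keep LIDAR-derived axis-aligned 3D boxes that enclose enough RADAR points.

    lidar_boxes:  (N, 6) array of [x_min, y_min, z_min, x_max, y_max, z_max]
    radar_points: (M, 3) array of RADAR detections in the same frame
    (Box format and threshold are illustrative assumptions.)
    """
    kept = []
    for box in lidar_boxes:
        lo, hi = box[:3], box[3:]
        inside = np.all((radar_points >= lo) & (radar_points <= hi), axis=1)
        if inside.sum() >= min_detections:  # omit labels with too few detections
            kept.append(box)
    return np.asarray(kept)

# Toy example: two labeled objects; only the first has enough RADAR returns.
boxes = np.array([[0, 0, 0, 2, 2, 2], [5, 5, 0, 6, 6, 2]], dtype=float)
points = np.array([[1.0, 1.0, 0.5], [1.5, 0.5, 1.0], [5.5, 5.5, 0.5]])
print(propagate_labels(boxes, points))  # -> only the first box survives
```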
-
Publication number: 20210150230
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
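As a rough illustration of the chained multi-view idea, here is a minimal PyTorch sketch with a perspective-view segmentation stage feeding a top-down stage. The layer sizes are arbitrary, and the real perspective-to-top-down reprojection (which would use sensor geometry) is replaced by a placeholder transpose.

```python
import torch
import torch.nn as nn

class MultiViewPerceptionNet(nn.Module):
    """Two chained stages over two views; all sizes are placeholders."""
    def __init__(self, in_ch=3, num_classes=4):
        super().__init__()
        # Stage 1: class segmentation in the perspective view.
        self.perspective_stage = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )
        # Stage 2: segmentation / instance-geometry regression in the top-down view.
        self.topdown_stage = nn.Sequential(
            nn.Conv2d(num_classes, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def project_to_topdown(self, x):
        # Stand-in for the real perspective-to-top-down reprojection, which
        # would use sensor geometry; here we only swap the spatial axes.
        return x.transpose(-1, -2)

    def forward(self, x):
        persp_logits = self.perspective_stage(x)
        return persp_logits, self.topdown_stage(self.project_to_topdown(persp_logits))

net = MultiViewPerceptionNet()
persp, topdown = net(torch.randn(1, 3, 64, 128))
print(persp.shape, topdown.shape)  # (1, 4, 64, 128) and (1, 4, 128, 64)
```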
-
Publication number: 20210026355
Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
Type: Application
Filed: July 24, 2020
Publication date: January 28, 2021
Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
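The common-trunk-plus-heads layout lends itself to a compact sketch. The following hypothetical PyTorch module mirrors the four heads named in the abstract; channel counts, and the choice of per-pixel center offsets for instance regression, are assumptions.

```python
import torch
import torch.nn as nn

class PanopticNet(nn.Module):
    """Single-pass panoptic sketch: one shared trunk, one head per output."""
    def __init__(self, in_ch=3, num_classes=5):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(32, num_classes, 1)  # per-class confidence map
        self.instance_head = nn.Conv2d(32, 2, 1)         # assumed: offsets to instance center
        self.cluster_head = nn.Conv2d(32, 1, 1)          # instance-membership confidence
        self.depth_head = nn.Conv2d(32, 1, 1)            # per-pixel range values

    def forward(self, x):
        f = self.trunk(x)                                # single shared pass
        return {
            "class_conf": self.class_head(f),
            "instance_offsets": self.instance_head(f),
            "instance_conf": self.cluster_head(f),
            "depth": self.depth_head(f),
        }

out = PanopticNet()(torch.randn(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```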
-
Publication number: 20200341469
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: July 6, 2020
Publication date: October 29, 2020
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
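The control step, given the DNN's two outputs, can be illustrated with a simple proportional steering law. This is only a hedged sketch: the gains, sign conventions, and the law itself are illustrative, not the patented method.

```python
def steering_command(orientation_rad, lateral_offset_m,
                     k_orient=1.0, k_lateral=0.5, max_steer=1.0):
    """Proportional steering from the DNN's two outputs (gains are illustrative).

    Assumed sign convention: positive offset = vehicle right of the path,
    positive orientation = vehicle facing right of the path direction.
    """
    steer = -(k_orient * orientation_rad + k_lateral * lateral_offset_m)
    return max(-max_steer, min(max_steer, steer))

# Vehicle slightly right of the path and angled right: steer left.
print(steering_command(orientation_rad=0.1, lateral_offset_m=0.3))  # -0.25
```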
-
Patent number: 10705525
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Grant
Filed: March 28, 2018
Date of Patent: July 7, 2020
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
-
Patent number: 10602098
Abstract: A vehicle has a camera system that captures video while the vehicle moves. The vehicle records the captured video and/or wirelessly transmits the captured video to a remote user device for playback. When the vehicle is moving, a coarse waypoint is identified and a trajectory is determined from the current location of the vehicle to the coarse waypoint that reduces (e.g., minimizes) sudden changes in direction of movement of the vehicle, reduces (e.g., minimizes) sudden changes in speed of the vehicle, and/or reduces (e.g., minimizes) sudden changes in acceleration of the vehicle by reducing (e.g., minimizing) jerk or snap of the vehicle trajectory. One or more fine waypoints along the trajectory are selected and the vehicle moves to the coarse waypoint along the trajectory by passing through those fine waypoints, resulting in smooth movement of the vehicle that reduces or eliminates motion sickness for users viewing the captured video.
Type: Grant
Filed: March 18, 2019
Date of Patent: March 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi
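One standard way to realize the jerk-minimizing smoothing the abstract describes is the classic rest-to-rest minimum-jerk profile, with fine waypoints sampled along it. The sketch below uses that textbook construction; it is not necessarily the patented planner.

```python
import numpy as np

def minimum_jerk_waypoints(start, goal, n_points=4):
    """Sample fine waypoints along a rest-to-rest minimum-jerk trajectory.

    The blend 10*t^3 - 15*t^4 + 6*t^5 has zero velocity and acceleration at
    both endpoints, which minimizes jerk for rest-to-rest motion.
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    tau = np.linspace(0.0, 1.0, n_points + 1)[1:]   # exclude the start point
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5      # smooth 0 -> 1 blend
    return start + s[:, None] * (goal - start)

# Fine waypoints from the current location to a coarse waypoint at (4, 2).
for wp in minimum_jerk_waypoints([0, 0], [4, 2]):
    print(wp)
```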
-
Publication number: 20190295282
Abstract: Various examples of the present disclosure include a stereoscopic deep neural network (DNN) that produces accurate and reliable results in real time. Both LIDAR data (supervised training) and photometric error (unsupervised training) may be used to train the DNN in a semi-supervised manner. The stereoscopic DNN may use an exponential linear unit (ELU) activation function to increase processing speeds, as well as a machine-learned argmax function that may include a plurality of convolutional layers having trainable parameters to account for context. The stereoscopic DNN may further include layers having an encoder/decoder architecture, where the encoder portion of the layers may include a combination of three-dimensional convolutional layers followed by two-dimensional convolutional layers.
Type: Application
Filed: March 18, 2019
Publication date: September 26, 2019
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Stan Birchfield
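The machine-learned argmax can be understood starting from the standard differentiable soft-argmax over a disparity cost volume; the patented version adds convolutional layers for context, which this core-operation sketch omits.

```python
import torch

def soft_argmax_disparity(cost_volume):
    """cost_volume: (B, D, H, W) matching costs; returns (B, H, W) disparities.

    A softmax-weighted expectation over disparity bins replaces the hard,
    non-differentiable argmax (low cost -> high weight).
    """
    probs = torch.softmax(-cost_volume, dim=1)
    disparities = torch.arange(cost_volume.shape[1], dtype=cost_volume.dtype)
    return (probs * disparities.view(1, -1, 1, 1)).sum(dim=1)

print(soft_argmax_disparity(torch.randn(1, 32, 8, 8)).shape)  # (1, 8, 8)
```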
-
Publication number: 20190215495
Abstract: A vehicle has a camera system that captures video while the vehicle moves. The vehicle records the captured video and/or wirelessly transmits the captured video to a remote user device for playback. When the vehicle is moving, a coarse waypoint is identified and a trajectory is determined from the current location of the vehicle to the coarse waypoint that reduces (e.g., minimizes) sudden changes in direction of movement of the vehicle, reduces (e.g., minimizes) sudden changes in speed of the vehicle, and/or reduces (e.g., minimizes) sudden changes in acceleration of the vehicle by reducing (e.g., minimizing) jerk or snap of the vehicle trajectory. One or more fine waypoints along the trajectory are selected and the vehicle moves to the coarse waypoint along the trajectory by passing through those fine waypoints, resulting in smooth movement of the vehicle that reduces or eliminates motion sickness for users viewing the captured video.
Type: Application
Filed: March 18, 2019
Publication date: July 11, 2019
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi
-
Patent number: 10274737
Abstract: A vehicle camera system captures and transmits video to a user device, which includes a viewing device for playback of the captured video, such as virtual reality or augmented reality glasses. A rendering map is generated that indicates which pixels of the video frame (as identified by particular coordinates of the video frame) correspond to which coordinates of a virtual sphere in which a portion of the video frame is rendered for display. When a video frame is received, the rendering map is used to determine the texture values (e.g., colors) for coordinates in the virtual sphere, which is used to generate the display for the user. This technique reduces the rendering time when a user turns his or her head (e.g., while in virtual reality) and so it reduces motion and/or virtual reality sickness induced by the rendering lag.
Type: Grant
Filed: February 29, 2016
Date of Patent: April 30, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Sean Eron Anderson, Michael Hall
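A minimal sketch of the precomputed rendering map, assuming an equirectangular frame layout (an assumption, since the abstract does not specify the projection): build the sphere-to-pixel lookup once, after which per-frame texturing is plain array indexing.

```python
import numpy as np

def build_rendering_map(sphere_h, sphere_w, frame_h, frame_w):
    """Precompute, for each sphere grid coordinate, the frame pixel that
    supplies its texture (equirectangular mapping is an assumption)."""
    rows = np.clip((np.linspace(0, 1, sphere_h) * (frame_h - 1)).astype(int),
                   0, frame_h - 1)                      # latitude -> frame row
    cols = np.clip((np.linspace(0, 1, sphere_w) * (frame_w - 1)).astype(int),
                   0, frame_w - 1)                      # longitude -> frame col
    return np.stack(np.meshgrid(rows, cols, indexing="ij"), axis=-1)  # (H, W, 2)

rmap = build_rendering_map(90, 180, 720, 1280)          # built once, reused
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
sphere_texture = frame[rmap[..., 0], rmap[..., 1]]      # per-frame: a lookup
print(sphere_texture.shape)                             # (90, 180, 3)
```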
-
Patent number: 10271021
Abstract: A vehicle has a camera system that captures video while the vehicle moves. The vehicle records the captured video and/or wirelessly transmits the captured video to a remote user device for playback. When the vehicle is moving, a coarse waypoint is identified and a trajectory is determined from the current location of the vehicle to the coarse waypoint that reduces (e.g., minimizes) sudden changes in direction of movement of the vehicle, reduces (e.g., minimizes) sudden changes in speed of the vehicle, and/or reduces (e.g., minimizes) sudden changes in acceleration of the vehicle by reducing (e.g., minimizing) jerk or snap of the vehicle trajectory. One or more fine waypoints along the trajectory are selected and the vehicle moves to the coarse waypoint along the trajectory by passing through those fine waypoints, resulting in smooth movement of the vehicle that reduces or eliminates motion sickness for users viewing the captured video.
Type: Grant
Filed: February 29, 2016
Date of Patent: April 23, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi
-
Patent number: 10244211
Abstract: In embodiments of immersive interactive telepresence, a system includes a vehicle that captures an experience of an environment in which the vehicle travels, and the experience includes audio and video of the environment. User interactive devices receive the audio and the video of the environment, and each of the user interactive devices represents the experience for one or more users who are remote from the environment. A trajectory planner is implemented to route the vehicle based on obstacle avoidance and user travel intent as the vehicle travels in the environment. The trajectory planner can route the vehicle to achieve a location objective in the environment without explicit direction input from a vehicle operator or from the users of the user interactive devices.
Type: Grant
Filed: February 29, 2016
Date of Patent: March 26, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi, Michael Hall
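The abstract does not say how obstacle avoidance and user travel intent are combined, so the following is only a speculative, potential-field-style sketch: attraction toward the inferred location objective plus repulsion from nearby obstacles.

```python
import numpy as np

def plan_step(position, objective, obstacles, repulse_radius=2.0, k_rep=1.0):
    """One speculative planning step: attract toward the location objective,
    repel from obstacles closer than repulse_radius. Returns a unit heading."""
    position, objective = np.asarray(position, float), np.asarray(objective, float)
    direction = objective - position                 # user-travel-intent term
    for obs in np.atleast_2d(np.asarray(obstacles, float)):
        offset = position - obs
        dist = np.linalg.norm(offset)
        if 1e-6 < dist < repulse_radius:             # repel only when close
            direction += k_rep * offset / dist**2
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 1e-6 else direction

# Heading toward (10, 0) bends away from an obstacle just right of the path.
print(plan_step([0, 0], [10, 0], [[1.0, 0.3]]))
```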
-
Patent number: 10200659
Abstract: In embodiments of collaborative camera viewpoint control for interactive telepresence, a system includes a vehicle that travels based on received travel instructions, and the vehicle includes a camera system of multiple cameras that each capture video of an environment in which the vehicle travels from different viewpoints. Viewing devices receive the video of the environment from the different viewpoints, where the video of the environment from a selected one of the viewpoints is displayable to users of the viewing devices. Controller devices that are associated with the viewing devices can each receive a user input as a proposed travel instruction for the vehicle based on the selected viewpoint of the video that is displayed on the viewing devices. A trajectory planner receives the proposed travel instructions initiated via the controller devices, and generates a consensus travel instruction for the vehicle based on the proposed travel instructions.
Type: Grant
Filed: February 29, 2016
Date of Patent: February 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi, Michael Hall
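Fusing proposed travel instructions into a single consensus command could be as simple as averaging the proposed direction vectors, as in this sketch; the patented planner may well use a richer voting or weighting rule.

```python
import numpy as np

def consensus_instruction(proposals):
    """proposals: (dx, dy) direction votes from the controller devices.
    Returns a unit-length consensus heading (simple averaging rule)."""
    mean = np.mean(np.asarray(proposals, float), axis=0)
    norm = np.linalg.norm(mean)
    return mean / norm if norm > 1e-6 else mean

votes = [(1.0, 0.0), (0.8, 0.2), (1.0, -0.1)]   # three users, similar intent
print(consensus_instruction(votes))
```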
-
Publication number: 20180292825
Abstract: A method, computer readable medium, and system are disclosed for performing autonomous path navigation using deep neural networks. The method includes the steps of receiving image data at a deep neural network (DNN), determining, by the DNN, both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data, and controlling a location of the vehicle, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path.
Type: Application
Filed: March 28, 2018
Publication date: October 11, 2018
Inventors: Nikolai Smolyanskiy, Alexey Kamenev, Jeffrey David Smith, Stanley Thomas Birchfield
-
Publication number: 20170251176
Abstract: A vehicle camera system captures and transmits video to a user device, which includes a viewing device for playback of the captured video, such as virtual reality or augmented reality glasses. A rendering map is generated that indicates which pixels of the video frame (as identified by particular coordinates of the video frame) correspond to which coordinates of a virtual sphere in which a portion of the video frame is rendered for display. When a video frame is received, the rendering map is used to determine the texture values (e.g., colors) for coordinates in the virtual sphere, which is used to generate the display for the user. This technique reduces the rendering time when a user turns his or her head (e.g., while in virtual reality) and so it reduces motion and/or virtual reality sickness induced by the rendering lag.
Type: Application
Filed: February 29, 2016
Publication date: August 31, 2017
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Sean Eron Anderson, Michael Hall
-
Publication number: 20170251181
Abstract: In embodiments of immersive interactive telepresence, a system includes a vehicle that captures an experience of an environment in which the vehicle travels, and the experience includes audio and video of the environment. User interactive devices receive the audio and the video of the environment, and each of the user interactive devices represents the experience for one or more users who are remote from the environment. A trajectory planner is implemented to route the vehicle based on obstacle avoidance and user travel intent as the vehicle travels in the environment. The trajectory planner can route the vehicle to achieve a location objective in the environment without explicit direction input from a vehicle operator or from the users of the user interactive devices.
Type: Application
Filed: February 29, 2016
Publication date: August 31, 2017
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi, Michael Hall
-
Publication number: 20170251179
Abstract: A vehicle has a camera system that captures video while the vehicle moves. The vehicle records the captured video and/or wirelessly transmits the captured video to a remote user device for playback. When the vehicle is moving, a coarse waypoint is identified and a trajectory is determined from the current location of the vehicle to the coarse waypoint that reduces (e.g., minimizes) sudden changes in direction of movement of the vehicle, reduces (e.g., minimizes) sudden changes in speed of the vehicle, and/or reduces (e.g., minimizes) sudden changes in acceleration of the vehicle by reducing (e.g., minimizing) jerk or snap of the vehicle trajectory. One or more fine waypoints along the trajectory are selected and the vehicle moves to the coarse waypoint along the trajectory by passing through those fine waypoints, resulting in smooth movement of the vehicle that reduces or eliminates motion sickness for users viewing the captured video.
Type: Application
Filed: February 29, 2016
Publication date: August 31, 2017
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi
-
Publication number: 20170251180
Abstract: In embodiments of collaborative camera viewpoint control for interactive telepresence, a system includes a vehicle that travels based on received travel instructions, and the vehicle includes a camera system of multiple cameras that each capture video of an environment in which the vehicle travels from different viewpoints. Viewing devices receive the video of the environment from the different viewpoints, where the video of the environment from a selected one of the viewpoints is displayable to users of the viewing devices. Controller devices that are associated with the viewing devices can each receive a user input as a proposed travel instruction for the vehicle based on the selected viewpoint of the video that is displayed on the viewing devices. A trajectory planner receives the proposed travel instructions initiated via the controller devices, and generates a consensus travel instruction for the vehicle based on the proposed travel instructions.
Type: Application
Filed: February 29, 2016
Publication date: August 31, 2017
Inventors: Nikolai Smolyanskiy, Zhengyou Zhang, Vikram R. Dendi, Michael Hall