Patents by Inventor Andrei Pokrovsky

Andrei Pokrovsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240085908
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
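The core idea of this abstract, skipping convolution work over empty or irrelevant regions of sparse imagery, can be illustrated with a short sketch. Everything below (the tile size, the threshold, the function name) is invented for illustration and does not come from the patent:

```python
import numpy as np

def extract_relevant_patches(image, patch=8, threshold=0.0):
    """Return (top-left coords, patches) for tiles that contain any signal.

    Tiles whose values are all <= threshold are treated as empty sparse
    regions and skipped entirely, so no convolution is spent on them.
    """
    h, w = image.shape
    coords, patches = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = image[y:y + patch, x:x + patch]
            if tile.max() > threshold:   # tile carries signal; keep it
                coords.append((y, x))
                patches.append(tile)
    return coords, patches

# Toy sparse image: a 64x64 intensity map with signal in one small region.
img = np.zeros((64, 64), dtype=np.float32)
img[10:14, 20:24] = 1.0

coords, patches = extract_relevant_patches(img)
# A CNN would now be applied only to `patches` rather than to the full
# 64x64 image: one occupied tile instead of a dense 8x8 grid of tiles.
```

A real system would feed the extracted patches (with their coordinates) to the machine-learned convolutional network described in the abstract; here the extraction step alone is sketched.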
  • Patent number: 11860629
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: January 2, 2024
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Publication number: 20230359202
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Raquel Urtasun, Yen-Chen Lin, Andrei Pokrovsky, Mengye Ren, Abbas Sadat, Ersin Yumer
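The two-stage structure in this abstract, a coarse behavioral stage and a fine trajectory stage sharing one cost function, can be sketched minimally. The cost terms, candidate behaviors, and refinement scheme below are illustrative stand-ins; the patent does not specify them:

```python
import numpy as np

def unified_cost(traj, obstacle=(5.0, 0.0), target_speed=1.0):
    """Single cost shared by BOTH stages: an obstacle-proximity term plus a
    deviation-from-target-speed term (hypothetical terms, for illustration)."""
    pts = np.asarray(traj, dtype=float)
    dists = np.linalg.norm(pts - np.asarray(obstacle), axis=1)
    proximity = float(np.sum(np.exp(-dists)))                # near obstacle: bad
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    comfort = float(np.sum((speeds - target_speed) ** 2))    # uneven speed: bad
    return proximity + comfort

def behavioral_stage(candidate_behaviors):
    """Coarse stage: pick the behavior whose seed trajectory minimizes the cost."""
    return min(candidate_behaviors, key=unified_cost)

def trajectory_stage(seed, iters=200, scale=0.05):
    """Fine stage: locally refine the chosen trajectory under the SAME cost
    (accept-if-better random perturbations; endpoints held fixed)."""
    rng = np.random.default_rng(0)
    traj = np.asarray(seed, dtype=float).copy()
    cost = unified_cost(traj)
    for _ in range(iters):
        cand = traj.copy()
        cand[1:-1] += rng.normal(0.0, scale, cand[1:-1].shape)
        c = unified_cost(cand)
        if c < cost:
            traj, cost = cand, c
    return traj

xs = np.linspace(0.0, 10.0, 11)
straight = np.stack([xs, np.zeros_like(xs)], axis=1)   # drives through (5, 0)
swerve = np.stack([xs, 2.0 * np.exp(-(xs - 5.0) ** 2 / 4.0)], axis=1)
best = behavioral_stage([straight, swerve])            # coarse choice: swerve
refined = trajectory_stage(best)                       # fine refinement
assert unified_cost(refined) <= unified_cost(best)
```

The point of the sketch is the shared `unified_cost`: the behavioral choice and the trajectory refinement optimize the same objective, so the two stages cannot disagree about what "good" means.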
  • Patent number: 11755014
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: September 12, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 11726208
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 15, 2023
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
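The pose-scoring step in this abstract, matching an online embedding against a map embedding at many candidate poses, can be sketched with a toy example. Here both "embeddings" are just raw intensity values and only translation is searched (the patent's embeddings are learned and its score map is 3D because candidates also vary in heading); all names and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the map embedding: a 40x40 grid of Lidar-like intensities.
map_embed = rng.random((40, 40)).astype(np.float32)
true_pose = (12, 7)                                    # unknown (y, x) offset
# Stand-in for the online sweep embedding: the map patch at the true pose.
online = map_embed[true_pose[0]:true_pose[0] + 16,
                   true_pose[1]:true_pose[1] + 16]

# Score every translational pose candidate with normalized correlation,
# producing a 2D score map over candidate offsets.
side = 40 - 16 + 1
scores = np.zeros((side, side))
o_norm = np.linalg.norm(online)
for y in range(side):
    for x in range(side):
        window = map_embed[y:y + 16, x:x + 16]
        scores[y, x] = float((window * online).sum()) / (np.linalg.norm(window) * o_norm)

# The pose estimate is the candidate with the highest match score; on this
# toy data it recovers the true offset exactly.
est = np.unravel_index(np.argmax(scores), scores.shape)
```

In practice the correlation over all candidates would be computed in one batched operation rather than a Python loop; the loop keeps the scoring explicit.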
  • Publication number: 20210325882
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: June 30, 2021
    Publication date: October 21, 2021
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Patent number: 11061402
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: July 13, 2021
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Publication number: 20210200212
    Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
    Type: Application
    Filed: March 20, 2020
    Publication date: July 1, 2021
    Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
  • Patent number: 10768628
    Abstract: Systems and methods are directed to object detection at various ranges for autonomous vehicles. In one example, a system includes a camera providing a first field of view; a machine-learned model that has been trained to generate object detection range estimates based at least in part on labeled training data representing image data having a second field of view different from the first field of view; and a computing system including one or more processors; and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining image data from the camera.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: September 8, 2020
    Assignee: UATC, LLC
    Inventors: R. Lance Martin, Zac Vawter, Andrei Pokrovsky
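This abstract concerns range estimation when the deployed camera's field of view differs from the field of view of the training imagery. The patent does not spell out the mapping, but one plausible geometric ingredient is the pinhole relation between focal length, pixel height, and range; the sketch below uses that relation with invented numbers:

```python
import math

def focal_px(image_width_px, horizontal_fov_deg):
    """Pinhole focal length in pixels from image width and horizontal FOV."""
    return image_width_px / (2 * math.tan(math.radians(horizontal_fov_deg) / 2))

def range_from_height(bbox_height_px, object_height_m, f_px):
    """Range to an object of known physical height from its pixel height."""
    return object_height_m * f_px / bbox_height_px

# Suppose the model was trained on a 90-degree-FOV camera in which a 1.5 m
# pedestrian appears 60 px tall.
f_train = focal_px(1920, 90.0)
r = range_from_height(60, 1.5, f_train)            # about 24 m

# The same 60 px seen through a narrower 30-degree lens implies a much
# greater range; rescaling by focal length adapts estimates across FOVs.
f_narrow = focal_px(1920, 30.0)
r_narrow = range_from_height(60, 1.5, f_narrow)
assert r_narrow > r
```

This is only one geometric piece of the problem; the patented system uses a machine-learned model trained on labeled data from the second field of view rather than a closed-form rule.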
  • Publication number: 20190383945
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 19, 2019
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
  • Publication number: 20190179327
    Abstract: Systems and methods are directed to object detection at various ranges for autonomous vehicles. In one example, a system includes a camera providing a first field of view; a machine-learned model that has been trained to generate object detection range estimates based at least in part on labeled training data representing image data having a second field of view different from the first field of view; and a computing system including one or more processors; and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining image data from the camera.
    Type: Application
    Filed: May 30, 2018
    Publication date: June 13, 2019
    Inventors: R. Lance Martin, Zac Vawter, Andrei Pokrovsky
  • Publication number: 20190146497
    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
    Type: Application
    Filed: February 7, 2018
    Publication date: May 16, 2019
    Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
  • Patent number: 9087398
    Abstract: Methods of compressing (and decompressing) bounding box data and a processor incorporating one or more of the methods. In one embodiment, a method of compressing such data includes: (1) generating dimension-specific multiplicands and a floating-point shared scale multiplier from floating-point numbers representing extents of the bounding box and (2) substituting portions of floating-point numbers representing a reference point of the bounding box with the dimension-specific multiplicands to yield floating-point packed bounding box descriptors, the floating-point shared scale multiplier and the floating-point packed bounding box descriptors together constituting compressed bounding box data.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: July 21, 2015
    Assignee: NVIDIA CORPORATION
    Inventor: Andrei Pokrovsky
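The compression scheme in this abstract, small per-dimension multiplicands of a shared scale substituted into the bit patterns of the reference-point floats, can be sketched concretely. The 8-bit multiplicand width and the choice to overwrite the low mantissa bits are assumptions for illustration; the patent does not fix them here:

```python
import struct

def compress_box(ref, extents):
    """Compress a 3D axis-aligned box: quantize each extent to an 8-bit
    dimension-specific multiplicand of a single shared scale, then substitute
    the low 8 mantissa bits of each reference coordinate's float32 bit
    pattern with that multiplicand (hypothetical packing layout)."""
    scale = max(extents) / 255.0
    packed = []
    for r, e in zip(ref, extents):
        mult = min(255, round(e / scale)) if scale else 0
        bits = struct.unpack("<I", struct.pack("<f", r))[0]
        packed.append((bits & ~0xFF) | mult)   # low mantissa bits replaced
    return scale, packed

def decompress_box(scale, packed):
    """Invert the packing: strip the multiplicand back out of each float."""
    ref, extents = [], []
    for bits in packed:
        mult = bits & 0xFF
        ref.append(struct.unpack("<f", struct.pack("<I", bits & ~0xFF))[0])
        extents.append(mult * scale)
    return ref, extents

scale, packed = compress_box([1.0, 2.5, -3.0], [10.0, 2.0, 5.0])
ref, ext = decompress_box(scale, packed)
# ref loses only its 8 low mantissa bits (tiny relative error); each extent
# is recovered to within roughly half of the shared scale.
```

A production variant would likely round extents up rather than to nearest, so every decompressed box conservatively contains the original, which matters when the boxes are used for culling or ray traversal.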
  • Publication number: 20140160151
    Abstract: Methods of compressing (and decompressing) bounding box data and a processor incorporating one or more of the methods. In one embodiment, a method of compressing such data includes: (1) generating dimension-specific multiplicands and a floating-point shared scale multiplier from floating-point numbers representing extents of the bounding box and (2) substituting portions of floating-point numbers representing a reference point of the bounding box with the dimension-specific multiplicands to yield floating-point packed bounding box descriptors, the floating-point shared scale multiplier and the floating-point packed bounding box descriptors together constituting compressed bounding box data.
    Type: Application
    Filed: December 6, 2012
    Publication date: June 12, 2014
    Applicant: NVIDIA CORPORATION
    Inventor: Andrei Pokrovsky