Patents by Inventor Andrei Pokrovsky
Andrei Pokrovsky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240085908
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
Type: Application
Filed: November 17, 2023
Publication date: March 14, 2024
Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
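The core idea in this abstract — skipping convolution over sparse or irrelevant regions by extracting only relevant portions of the imagery — can be illustrated with a minimal NumPy sketch. Everything here (the patch size, the dense loops, the function names) is hypothetical illustration, not the filing's machine-learned implementation:

```python
import numpy as np

def find_relevant_patches(image, patch=8):
    """Return top-left coordinates of patches containing any nonzero data.

    Sparse regions (all zeros here) are simply skipped, mirroring the
    abstract's idea of extracting only the relevant portions of imagery.
    """
    h, w = image.shape
    coords = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if np.any(image[y:y + patch, x:x + patch]):
                coords.append((y, x))
    return coords

def conv_relevant_only(image, kernel, patch=8):
    """Apply a small convolution only over the relevant patches.

    Output pixels in skipped (sparse) patches are left at zero, so no
    compute is spent on regions that cannot affect the prediction.
    """
    out = np.zeros_like(image, dtype=float)
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    for (y, x) in find_relevant_patches(image, patch):
        for yy in range(y, min(y + patch, image.shape[0])):
            for xx in range(x, min(x + patch, image.shape[1])):
                out[yy, xx] = np.sum(padded[yy:yy + k, xx:xx + k] * kernel)
    return out
```

On a mostly empty image, only patches containing data are ever visited; the rest of the output stays zero without any multiply-accumulate work.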
-
Patent number: 11860629
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
Type: Grant
Filed: June 30, 2021
Date of Patent: January 2, 2024
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
-
Publication number: 20230359202
Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
Type: Application
Filed: July 19, 2023
Publication date: November 9, 2023
Inventors: Raquel Urtasun, Yen-Chen Lin, Andrei Pokrovsky, Mengye Ren, Abbas Sadat, Ersin Yumer
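The two-stage structure this abstract describes — a behavioral stage choosing a maneuver and a trajectory stage refining it, both scored by the same unified cost function — can be sketched in a toy form. The cost terms, behavior candidates, and function names below are all hypothetical stand-ins for the filing's learned system:

```python
def unified_cost(trajectory, obstacle_y):
    """Shared cost used by BOTH planning stages (assumed form:
    penalize lateral deviation plus proximity to one obstacle)."""
    cost = 0.0
    for (x, y) in trajectory:
        cost += abs(y)                            # prefer lane center
        if abs(y - obstacle_y) < 0.5 and x > 5:
            cost += 100.0                         # heavy obstacle penalty
    return cost

def behavioral_stage(behaviors, obstacle_y):
    """Pick the maneuver whose coarse trajectory minimizes the unified cost."""
    return min(behaviors, key=lambda b: unified_cost(behaviors[b], obstacle_y))

def trajectory_stage(behavior, behaviors, obstacle_y, offsets=(-0.2, 0.0, 0.2)):
    """Refine the chosen maneuver with small lateral offsets,
    evaluated under the SAME unified cost function."""
    base = behaviors[behavior]
    candidates = [[(x, y + d) for (x, y) in base] for d in offsets]
    return min(candidates, key=lambda t: unified_cost(t, obstacle_y))
```

Because both stages consult one cost function, the coarse maneuver selection and the fine trajectory refinement optimize a consistent objective, which is the property the abstract emphasizes.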
-
Patent number: 11755014
Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
Type: Grant
Filed: March 20, 2020
Date of Patent: September 12, 2023
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
-
Patent number: 11726208
Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
Type: Grant
Filed: June 11, 2019
Date of Patent: August 15, 2023
Assignee: UATC, LLC
Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
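The matching scheme this abstract describes — embed the map and the online sweep, score each candidate pose offset, and pick the best match — can be illustrated with a small sketch. Simple zero-mean normalization stands in for the two learned embedding networks, and the pose candidates are reduced to integer (dx, dy) shifts; all names and parameters are hypothetical:

```python
import numpy as np

def embed(grid):
    """Stand-in for the learned embedding networks: zero-mean
    normalization (the patent uses two neural networks here)."""
    g = grid.astype(float)
    return g - g.mean()

def score_map(map_emb, sweep_emb, max_shift=2):
    """Compute a match score for each candidate (dx, dy) pose offset
    by correlating the sweep embedding against the map embedding."""
    h, w = sweep_emb.shape
    scores = {}
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            region = map_emb[max_shift + dy:max_shift + dy + h,
                             max_shift + dx:max_shift + dx + w]
            scores[(dx, dy)] = float(np.sum(region * sweep_emb))
    return scores

def localize(intensity_map, sweep, max_shift=2):
    """Return the candidate pose offset with the best match score."""
    scores = score_map(embed(intensity_map), embed(sweep), max_shift)
    return max(scores, key=scores.get)
```

In the real system the score volume is three-dimensional (two translations plus heading), and the embeddings are learned so that correlation peaks at the true pose; the sketch keeps only the 2D translational search.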
-
Publication number: 20210325882
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
Type: Application
Filed: June 30, 2021
Publication date: October 21, 2021
Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
-
Patent number: 11061402
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
Type: Grant
Filed: February 7, 2018
Date of Patent: July 13, 2021
Assignee: UATC, LLC
Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
-
Publication number: 20210200212
Abstract: Systems and methods for generating motion plans for autonomous vehicles are provided. An autonomous vehicle can include a machine-learned motion planning system including one or more machine-learned models configured to generate target trajectories for the autonomous vehicle. The model(s) include a behavioral planning stage configured to receive situational data based at least in part on the one or more outputs of the set of sensors and to generate behavioral planning data based at least in part on the situational data and a unified cost function. The model(s) include a trajectory planning stage configured to receive the behavioral planning data from the behavioral planning stage and to generate target trajectory data for the autonomous vehicle based at least in part on the behavioral planning data and the unified cost function.
Type: Application
Filed: March 20, 2020
Publication date: July 1, 2021
Inventors: Raquel Urtasun, Abbas Sadat, Mengye Ren, Andrei Pokrovsky, Yen-Chen Lin, Ersin Yumer
-
Patent number: 10768628
Abstract: Systems and methods are directed to object detection at various ranges for autonomous vehicles. In one example, a system includes a camera providing a first field of view; a machine-learned model that has been trained to generate object detection range estimates based at least in part on labeled training data representing image data having a second field of view different from the first field of view; and a computing system including one or more processors and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining image data from the camera.
Type: Grant
Filed: May 30, 2018
Date of Patent: September 8, 2020
Assignee: UATC, LLC
Inventors: R. Lance Martin, Zac Vawter, Andrei Pokrovsky
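The central difficulty this abstract raises is that a range-estimating model trained on imagery with one field of view is applied to a camera with a different field of view. One plausible way to reconcile the two (the abstract does not specify the mechanism, so this is purely an assumed pinhole-geometry reading, with hypothetical names throughout) is to rescale the estimate by the ratio of the implied focal lengths:

```python
import math

def focal_from_fov(fov_deg, image_width_px):
    """Pinhole focal length in pixels implied by a horizontal FOV."""
    return (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)

def adjust_range_estimate(model_range_m, train_fov_deg, camera_fov_deg,
                          width_px=1920):
    """Rescale a range estimate from a model trained at a different FOV.

    Under a pinhole model, an object of fixed real size that occupies the
    same number of pixels sits proportionally farther away as the focal
    length grows (d is proportional to f), so the estimate scales by the
    focal-length ratio. This is an illustrative assumption, not the
    patent's disclosed method.
    """
    f_train = focal_from_fov(train_fov_deg, width_px)
    f_cam = focal_from_fov(camera_fov_deg, width_px)
    return model_range_m * f_cam / f_train
```

A narrower camera FOV implies a longer focal length, so the same pixel footprint corresponds to a greater range than the training-FOV model would report.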
-
Publication number: 20190383945
Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
Type: Application
Filed: June 11, 2019
Publication date: December 19, 2019
Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
-
Publication number: 20190179327
Abstract: Systems and methods are directed to object detection at various ranges for autonomous vehicles. In one example, a system includes a camera providing a first field of view; a machine-learned model that has been trained to generate object detection range estimates based at least in part on labeled training data representing image data having a second field of view different from the first field of view; and a computing system including one or more processors and memory including instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations include obtaining image data from the camera.
Type: Application
Filed: May 30, 2018
Publication date: June 13, 2019
Inventors: R. Lance Martin, Zac Vawter, Andrei Pokrovsky
-
Publication number: 20190146497
Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
Type: Application
Filed: February 7, 2018
Publication date: May 16, 2019
Inventors: Raquel Urtasun, Mengye Ren, Andrei Pokrovsky, Bin Yang
-
Patent number: 9087398
Abstract: Methods of compressing (and decompressing) bounding box data and a processor incorporating one or more of the methods. In one embodiment, a method of compressing such data includes: (1) generating dimension-specific multiplicands and a floating-point shared scale multiplier from floating-point numbers representing extents of the bounding box and (2) substituting portions of floating-point numbers representing a reference point of the bounding box with the dimension-specific multiplicands to yield floating-point packed bounding box descriptors, the floating-point shared scale multiplier and the floating-point packed bounding box descriptors together constituting compressed bounding box data.
Type: Grant
Filed: December 6, 2012
Date of Patent: July 21, 2015
Assignee: NVIDIA CORPORATION
Inventor: Andrei Pokrovsky
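The scheme this abstract outlines — encode each extent as a small integer multiplicand of one shared scale, then substitute those multiplicands into bits of the reference-point floats — can be sketched with Python's `struct` module. The 8-bit field width, the choice of low mantissa bits, and the rounding are all assumptions made for illustration, not the patent's exact packing:

```python
import struct

def compress_box(ref, extents, bits=8):
    """Compress a bounding box (reference point + per-axis extents).

    Each extent becomes an integer multiplicand of a single shared scale
    (assumes max(extents) > 0), and that multiplicand overwrites the low
    mantissa bits of the matching reference-coordinate float32.
    """
    scale = max(extents) / ((1 << bits) - 1)      # shared scale multiplier
    packed = []
    for r, e in zip(ref, extents):
        mult = round(e / scale)                   # dimension-specific multiplicand
        word = struct.unpack('<I', struct.pack('<f', r))[0]
        word = (word & ~((1 << bits) - 1)) | mult  # pack into low mantissa bits
        packed.append(word)
    return scale, packed

def decompress_box(scale, packed, bits=8):
    """Recover approximate reference point and extents from packed words."""
    mask = (1 << bits) - 1
    ref, extents = [], []
    for word in packed:
        extents.append((word & mask) * scale)
        ref.append(struct.unpack('<f', struct.pack('<I', word & ~mask))[0])
    return ref, extents
```

The trade-off is the one the patent exploits: the reference point loses only its least significant mantissa bits (a tiny relative error), while the extents are stored at no extra cost inside the same words, plus one shared scale float.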
-
Publication number: 20140160151
Abstract: Methods of compressing (and decompressing) bounding box data and a processor incorporating one or more of the methods. In one embodiment, a method of compressing such data includes: (1) generating dimension-specific multiplicands and a floating-point shared scale multiplier from floating-point numbers representing extents of the bounding box and (2) substituting portions of floating-point numbers representing a reference point of the bounding box with the dimension-specific multiplicands to yield floating-point packed bounding box descriptors, the floating-point shared scale multiplier and the floating-point packed bounding box descriptors together constituting compressed bounding box data.
Type: Application
Filed: December 6, 2012
Publication date: June 12, 2014
Applicant: NVIDIA CORPORATION
Inventor: Andrei Pokrovsky