Patents by Inventor David Weikersdorfer
David Weikersdorfer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250139827
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets, such as 3D models or textures, may be used to mimic the deployment environment, along with parameters that define deployment scenarios and/or conditions under which the 3D pose estimator will operate in the environment. The autoencoder may be trained to predict, from an input image, a segmentation image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the predicted object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Application
Filed: December 30, 2024
Publication date: May 1, 2025
Inventors: Sravya Nimmagadda, David Weikersdorfer
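To make the architecture described above more concrete, here is a minimal PyTorch sketch of an encoder-decoder that maps an object image to an occlusion-invariant segmentation mask plus a 3D pose estimate. All layer sizes, names, and the quaternion pose parameterization are illustrative assumptions, not details taken from the patent.

# Minimal sketch: encoder-decoder predicting (a) an occlusion-invariant
# segmentation mask and (b) a 3D pose as a unit quaternion. Hypothetical
# layer sizes; not the patented implementation.
import torch
import torch.nn as nn

class PoseAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )
        # The decoder reconstructs a 1-channel mask of the *unoccluded*
        # object, pushing the latent code to be invariant to occlusions.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # mask logits
        )
        # Pose head: unit quaternion regressed from the latent code.
        self.pose_head = nn.Linear(latent_dim, 4)

    def forward(self, image: torch.Tensor):
        z = self.encoder(image)
        seg_logits = self.decoder(z)
        quat = nn.functional.normalize(self.pose_head(z), dim=1)
        return seg_logits, quat

model = PoseAutoencoder()
seg, quat = model(torch.randn(2, 3, 128, 128))  # shapes: (2,1,128,128), (2,4)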
-
Patent number: 12183037
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets, such as 3D models or textures, may be used to mimic the deployment environment, along with parameters that define deployment scenarios and/or conditions under which the 3D pose estimator will operate in the environment. The autoencoder may be trained to predict, from an input image, a segmentation image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the predicted object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Grant
Filed: October 16, 2023
Date of Patent: December 31, 2024
Assignee: NVIDIA Corporation
Inventors: Sravya Nimmagadda, David Weikersdorfer
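The GAN-based adaptation step in this abstract can be sketched as follows: a discriminator learns to tell whether the estimator's output came from simulated or real input, and the estimator is updated adversarially so its real-domain outputs become indistinguishable from the simulated domain it was supervised on. The sketch reuses the hypothetical PoseAutoencoder above; the loss choices and training schedule are assumptions, not the patented procedure.

# Illustrative GAN-style sim-to-real adaptation step (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputDiscriminator(nn.Module):
    """Classifies an estimator output map as real-origin (1) or sim-origin (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def adaptation_step(estimator, disc, opt_e, opt_d, sim_img, real_img):
    seg_sim, _ = estimator(sim_img)    # labeled simulation branch
    seg_real, _ = estimator(real_img)  # unlabeled real-world branch

    # 1) Train the discriminator to separate the two domains.
    d_loss = F.binary_cross_entropy_with_logits(
        disc(seg_sim.detach()), torch.zeros(sim_img.size(0), 1)
    ) + F.binary_cross_entropy_with_logits(
        disc(seg_real.detach()), torch.ones(real_img.size(0), 1)
    )
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the estimator so its real-domain outputs look "simulated",
    #    i.e. fool the discriminator into predicting sim-origin.
    g_loss = F.binary_cross_entropy_with_logits(
        disc(seg_real), torch.zeros(real_img.size(0), 1)
    )
    opt_e.zero_grad(); g_loss.backward(); opt_e.step()
    return d_loss.item(), g_loss.item()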
-
Publication number: 20240176018
Abstract: In various examples, techniques for sensor-fusion-based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective.
Type: Application
Filed: November 30, 2022
Publication date: May 30, 2024
Inventors: David Weikersdorfer, Qian Lin, Aman Jhunjhunwala, Emilie Lucie Eloïse Wirbel, Sangmin Oh, Minwoo Park, Gyeong Woo Cheon, Arthur Henry Rajala, Bor-Jeng Chen
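As a rough illustration of the kind of network this abstract describes, the sketch below consumes a rasterized bird's-eye-view grid of fused sensor evidence (e.g., ultrasonic detections accumulated into C input channels) and predicts per-cell occupancy and height maps. The input encoding, channel counts, and head design are assumptions for illustration, not the patented architecture.

# Sketch of a BEV map network with occupancy and height heads (assumed design).
import torch
import torch.nn as nn

class BEVMapNet(nn.Module):
    def __init__(self, in_channels: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.occupancy_head = nn.Conv2d(64, 1, 1)  # per-cell occupancy logit
        self.height_head = nn.Conv2d(64, 1, 1)     # per-cell height estimate

    def forward(self, bev_grid: torch.Tensor):
        feats = self.backbone(bev_grid)
        return self.occupancy_head(feats), self.height_head(feats)

net = BEVMapNet()
occ_logits, height = net(torch.randn(1, 8, 200, 200))  # 200x200-cell grid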
-
Publication number: 20240176017
Abstract: In various examples, techniques for sensor-fusion-based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
Type: Application
Filed: November 30, 2022
Publication date: May 30, 2024
Inventors: David Weikersdorfer, Qian Lin, Aman Jhunjhunwala, Emilie Lucie Eloïse Wirbel, Sangmin Oh, Minwoo Park, Gyeong Woo Cheon, Arthur Henry Rajala, Bor-Jeng Chen
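This sibling publication adds that the machine may act on the predicted maps. A toy example of such an operation, using the occupancy logits from the BEVMapNet sketch above: check that a corridor of grid cells ahead of the machine is free before allowing a maneuver. The grid geometry and threshold are assumptions.

# Toy downstream operation: gate motion on a free-corridor check (assumed values).
import torch

def corridor_is_free(occ_logits: torch.Tensor,
                     rows: slice = slice(80, 100),
                     cols: slice = slice(95, 105),
                     threshold: float = 0.5) -> bool:
    """True if every corridor cell has occupancy probability below threshold."""
    probs = torch.sigmoid(occ_logits[0, 0])  # (H, W) occupancy probabilities
    return bool((probs[rows, cols] < threshold).all())

# e.g., gate a low-speed maneuver on the check:
# if corridor_is_free(occ_logits): issue_motion_command()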
-
Publication number: 20240037788
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets, such as 3D models or textures, may be used to mimic the deployment environment, along with parameters that define deployment scenarios and/or conditions under which the 3D pose estimator will operate in the environment. The autoencoder may be trained to predict, from an input image, a segmentation image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the predicted object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Sravya Nimmagadda, David Weikersdorfer
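One plausible way to realize the appendage-exclusion idea in this abstract (an assumption on my part, not a detail the patent specifies) is to mask appendage pixels out of the segmentation loss, using an appendage mask rendered by the simulator alongside each training image:

# Assumed mechanism: BCE over object pixels, ignoring appendage pixels.
import torch
import torch.nn.functional as F

def masked_segmentation_loss(seg_logits: torch.Tensor,
                             target_mask: torch.Tensor,
                             appendage_mask: torch.Tensor) -> torch.Tensor:
    """seg_logits:     (N, 1, H, W) raw mask logits from the autoencoder
    target_mask:    (N, 1, H, W) unoccluded object mask from simulation
    appendage_mask: (N, 1, H, W) 1 where a pixel belongs to an appendage
    """
    per_pixel = F.binary_cross_entropy_with_logits(
        seg_logits, target_mask, reduction="none")
    keep = 1.0 - appendage_mask  # zero out appendage pixels
    return (per_pixel * keep).sum() / keep.sum().clamp(min=1.0)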
-
Patent number: 11823415
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets, such as 3D models or textures, may be used to mimic the deployment environment, along with parameters that define deployment scenarios and/or conditions under which the 3D pose estimator will operate in the environment. The autoencoder may be trained to predict, from an input image, a segmentation image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the predicted object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Grant
Filed: March 3, 2021
Date of Patent: November 21, 2023
Assignee: NVIDIA Corporation
Inventors: Sravya Nimmagadda, David Weikersdorfer
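The abstract's mention of assets and scenario parameters amounts to configuring a simulator to mimic the deployment environment. A hypothetical illustration of defining and randomly sampling such parameters (a form of domain randomization; all fields and ranges below are invented for the example):

# Hypothetical scenario-parameter sampler for rendering labeled training images.
import random
from dataclasses import dataclass

@dataclass
class ScenarioParams:
    mesh_path: str         # 3D asset mimicking a deployed object
    texture_path: str      # surface texture applied to the asset
    light_intensity: float
    camera_height_m: float
    occluder_count: int    # how many occluding objects to drop in

def sample_scenario(assets: list, textures: list) -> ScenarioParams:
    """Draw one randomized deployment scenario."""
    return ScenarioParams(
        mesh_path=random.choice(assets),
        texture_path=random.choice(textures),
        light_intensity=random.uniform(0.3, 1.5),
        camera_height_m=random.uniform(0.8, 2.0),
        occluder_count=random.randint(0, 4),
    )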
-
Publication number: 20220284624
Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets, such as 3D models or textures, may be used to mimic the deployment environment, along with parameters that define deployment scenarios and/or conditions under which the 3D pose estimator will operate in the environment. The autoencoder may be trained to predict, from an input image, a segmentation image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the predicted object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
Type: Application
Filed: March 3, 2021
Publication date: September 8, 2022
Inventors: Sravya Nimmagadda, David Weikersdorfer
-
Patent number: 10657391
Abstract: The present disclosure provides systems and methods for image-based free-space detection. In one example embodiment, a computer-implemented method includes obtaining image data representing an environment proximate to an autonomous vehicle, the image data including a representation of the environment from a perspective associated with the autonomous vehicle. The method includes reprojecting the image data to generate a reprojected image. The method includes inputting the reprojected image to a machine-learned detector model. The method includes obtaining, as an output of the machine-learned detector model, object data characterizing one or more objects in the environment. The method includes determining a free space in the environment based at least in part on the object data.
Type: Grant
Filed: February 1, 2018
Date of Patent: May 19, 2020
Assignee: UATC, LLC
Inventors: Kuan-Chieh Chen, David Weikersdorfer
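The reprojection step in this abstract can be illustrated with a common technique: warping a forward-facing camera image to a top-down view via a ground-plane homography, producing the kind of reprojected image a detector could then consume. The point correspondences below are placeholders; real values would come from camera calibration, and the patent does not commit to this particular method.

# OpenCV sketch of perspective-to-top-down reprojection (assumed geometry).
import cv2
import numpy as np

def reproject_to_top_down(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    # Pixels of a ground-plane trapezoid in the camera view (placeholder values).
    src = np.float32([[w * 0.40, h * 0.60], [w * 0.60, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # Where that trapezoid should land in the top-down image: a rectangle.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))

# The reprojected image would then be fed to a machine-learned detector, and
# regions not covered by detected objects would be treated as free space.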
-
Publication number: 20190213426
Abstract: The present disclosure provides systems and methods for image-based free-space detection. In one example embodiment, a computer-implemented method includes obtaining image data representing an environment proximate to an autonomous vehicle, the image data including a representation of the environment from a perspective associated with the autonomous vehicle. The method includes reprojecting the image data to generate a reprojected image. The method includes inputting the reprojected image to a machine-learned detector model. The method includes obtaining, as an output of the machine-learned detector model, object data characterizing one or more objects in the environment. The method includes determining a free space in the environment based at least in part on the object data.
Type: Application
Filed: February 1, 2018
Publication date: July 11, 2019
Inventors: Kuan-Chieh Chen, David Weikersdorfer
-
Patent number: D958860
Type: Grant
Filed: December 31, 2019
Date of Patent: July 26, 2022
Assignee: NVIDIA Corporation
Inventors: Claire Delaunay, Kenneth William MacLean, Gabriele Pasqualino, David Weikersdorfer, Gregor Markus Kopka