Patents by Inventor David Weikersdorfer

David Weikersdorfer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240176017
    Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective. The machine may use these outputs to perform one or more operations.
    Type: Application
    Filed: November 30, 2022
    Publication date: May 30, 2024
    Inventors: David Weikersdorfer, Qian Lin, Aman Jhunjhunwala, Emilie Lucie Eloïse Wirbel, Sangmin Oh, Minwoo Park, Gyeong Woo Cheon, Arthur Henry Rajala, Bor-Jeng Chen
  • Publication number: 20240176018
    Abstract: In various examples, techniques for sensor-fusion based object detection and/or free-space detection using ultrasonic sensors are described. Systems may receive sensor data generated using one or more types of sensors of a machine. In some examples, the systems may then process at least a portion of the sensor data to generate input data, where the input data represents one or more locations of one or more objects within an environment. The systems may then input at least a portion of the sensor data and/or at least a portion of the input data into one or more neural networks that are trained to output one or more maps or other output representations associated with the environment. In some examples, the map(s) may include a height map, an occupancy map, and/or a combined height/occupancy map generated, e.g., from a bird's-eye-view perspective.
    Type: Application
    Filed: November 30, 2022
    Publication date: May 30, 2024
    Inventors: David Weikersdorfer, Qian Lin, Aman Jhunjhunwala, Emilie Lucie Eloïse Wirbel, Sangmin Oh, Minwoo Park, Gyeong Woo Cheon, Arthur Henry Rajala, Bor-Jeng Chen
  • Publication number: 20240037788
    Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment, such as 3D models or textures, along with parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Sravya Nimmagadda, David Weikersdorfer
  • Patent number: 11823415
    Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment, such as 3D models or textures, along with parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: November 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Sravya Nimmagadda, David Weikersdorfer
  • Publication number: 20220284624
    Abstract: An autoencoder may be trained to predict 3D pose labels using simulation data extracted from a simulated environment, which may be configured to represent an environment in which the 3D pose estimator is to be deployed. Assets may be used to mimic the deployment environment, such as 3D models or textures, along with parameters used to define deployment scenarios and/or conditions that the 3D pose estimator will operate under in the environment. The autoencoder may be trained to predict a segmentation image from an input image that is invariant to occlusions. Further, the autoencoder may be trained to exclude, from the object, areas of the input image that correspond to one or more appendages of the object. The 3D pose estimator may be adapted to unlabeled real-world data using a GAN, which predicts whether output of the 3D pose estimator was generated from real-world data or simulated data.
    Type: Application
    Filed: March 3, 2021
    Publication date: September 8, 2022
    Inventors: Sravya Nimmagadda, David Weikersdorfer
  • Patent number: 10657391
    Abstract: The present disclosure provides systems and methods for image-based free space detection. In one example embodiment, a computer-implemented method includes obtaining image data representing an environment proximate to an autonomous vehicle, the image data including a representation of the environment from a perspective associated with the autonomous vehicle. The method includes reprojecting the image data to generate a reprojected image. The method includes inputting the reprojected image to a machine-learned detector model. The method includes obtaining, as an output of the machine-learned detector model, object data characterizing one or more objects in the environment. The method includes determining a free space in the environment based at least in part on the object data.
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: May 19, 2020
    Assignee: UATC, LLC
    Inventors: Kuan-Chieh Chen, David Weikersdorfer
  • Publication number: 20190213426
    Abstract: The present disclosure provides systems and methods for image-based free space detection. In one example embodiment, a computer-implemented method includes obtaining image data representing an environment proximate to an autonomous vehicle, the image data including a representation of the environment from a perspective associated with the autonomous vehicle. The method includes reprojecting the image data to generate a reprojected image. The method includes inputting the reprojected image to a machine-learned detector model. The method includes obtaining, as an output of the machine-learned detector model, object data characterizing one or more objects in the environment. The method includes determining a free space in the environment based at least in part on the object data.
    Type: Application
    Filed: February 1, 2018
    Publication date: July 11, 2019
    Inventors: Kuan-Chieh Chen, David Weikersdorfer
  • Patent number: D958860
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: July 26, 2022
    Assignee: NVIDIA Corporation
    Inventors: Claire Delaunay, Kenneth William MacLean, Gabriele Pasqualino, David Weikersdorfer, Gregor Markus Kopka
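
The ultrasonic sensor-fusion entries above (publication numbers 20240176017 and 20240176018) describe neural networks that output bird's-eye-view occupancy and height maps. As a rough, hypothetical illustration of the geometric step such a pipeline might start from, the sketch below projects raw ultrasonic range returns into a BEV occupancy grid; every name and parameter here is invented for illustration and does not come from the patents, which cover a learned, multi-sensor approach.

```python
import numpy as np

def ultrasonic_to_bev_occupancy(sensor_poses, ranges, grid_size=50, cell_m=0.1):
    """Project ultrasonic range returns into a bird's-eye-view occupancy grid.

    sensor_poses: (N, 3) array of [x, y, heading_rad] per sensor, in the
                  vehicle frame.
    ranges:       (N,) measured echo distances in meters (np.inf = no return).
    Returns a (grid_size, grid_size) uint8 grid centered on the vehicle.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size // 2
    for (x, y, heading), r in zip(sensor_poses, ranges):
        if not np.isfinite(r):
            continue  # no echo: nothing detected within the sensor's range
        # Endpoint of the echo along the sensor's boresight, vehicle frame.
        ex, ey = x + r * np.cos(heading), y + r * np.sin(heading)
        col, row = int(ex / cell_m) + half, int(ey / cell_m) + half
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row, col] = 1  # mark the return's cell as occupied
    return grid
```

In the patents, a network consumes fused sensor data and predicts such maps directly; this sketch only shows what a BEV occupancy representation is.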
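
The 3D pose estimation entries (publication numbers 20240037788 and 20220284624, patent number 11823415) describe adapting a simulation-trained estimator to unlabeled real-world data with a GAN discriminator that guesses whether the estimator's output came from real or simulated input. The minimal sketch below, assuming a linear discriminator over estimator output features, shows only that adversarial objective; it is not the patented architecture.

```python
import numpy as np

def discriminator_loss(features, labels, w):
    """Binary cross-entropy of a linear discriminator that predicts whether
    pose-estimator output features came from real (label 1) or simulated
    (label 0) data."""
    logits = features @ w
    p = 1.0 / (1.0 + np.exp(-logits))  # predicted probability of "real"
    eps = 1e-9                         # numerical safety for log()
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

# Adversarial adaptation in a nutshell: the discriminator is updated to
# minimize this loss, while the estimator is updated to maximize it (e.g.,
# via gradient reversal), pushing its outputs on real and simulated inputs
# to become indistinguishable.
```

With an untrained (all-zero) weight vector the discriminator is maximally uncertain, so the loss equals ln 2; training drives it below that while adaptation pushes it back up.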
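
The free-space entries (patent number 10657391, publication number 20190213426) determine free space from object data that a detector produces on a reprojected (top-down) image. As a hypothetical sketch of that last step only, the function below marches a ray outward from the vehicle along each column of a top-down object map and marks cells free until the first object; the map layout and names are assumptions, not the patented method.

```python
import numpy as np

def free_space_from_objects(object_map):
    """Derive free space from a top-down object map (1 = object cell), as a
    detector might output on a reprojected image. The vehicle is assumed to
    sit at the bottom row; each column is treated as one outgoing ray."""
    rows, cols = object_map.shape
    free = np.zeros_like(object_map)
    for c in range(cols):
        for r in range(rows - 1, -1, -1):  # march away from the vehicle
            if object_map[r, c]:
                break  # the first object occludes everything beyond it
            free[r, c] = 1
    return free
```

Cells beyond the first object in a column stay unmarked, reflecting that an occluded region cannot be asserted free.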