Patents by Inventor Bernhard Firner

Bernhard Firner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240183752
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios.
    Type: Application
    Filed: February 15, 2024
    Publication date: June 6, 2024
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Patent number: 11966838
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
  • Publication number: 20240127062
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Application
    Filed: December 8, 2023
    Publication date: April 18, 2024
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
  • Patent number: 11927502
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Patent number: 11841458
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more closely associated with policies remains fixed.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 12, 2023
    Assignee: NVIDIA Corporation
    Inventor: Bernhard Firner
  • Publication number: 20230298361
    Abstract: In various examples, image space coordinates of an image from a video may be labeled, projected to determine 3D vehicle space coordinates, then transformed to 3D world space coordinates using known 3D world space coordinates and relative positioning between the coordinate spaces. For example, 3D vehicle space coordinates may be temporally correlated with known 3D world space coordinates measured while capturing the video. The known 3D world space coordinates and known relative positioning between the coordinate spaces may be used to offset or otherwise define a transform for the 3D vehicle space coordinates to world space. Resultant 3D world space coordinates may be used for one or more labeled frames to generate ground truth data. For example, 3D world space coordinates for left and right lane lines from multiple frames may be used to define lane lines for any given frame.
    Type: Application
    Filed: March 16, 2022
    Publication date: September 21, 2023
    Inventors: Zongyi Yang, Mariusz Bojarski, Bernhard Firner
  • Publication number: 20230110713
    Abstract: In various examples, a plurality of poses corresponding to one or more configuration parameters within an environment—such as a location of a machine within an environment, an orientation of a machine within an environment, a sensor angle pose of a machine, or a sensor location of a machine—may be used to generate training data and corresponding ground truth data for training a machine learning model—such as a deep neural network (DNN). As a result, the machine learning model, once deployed, may more accurately compute one or more outputs—such as outputs representative of lane boundaries, trajectories for an autonomous machine, etc.—agnostic to machine and/or sensor poses of the machine within which the machine learning model is deployed.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 13, 2023
    Inventors: Alperen Degirmenci, Won Hong, Mariusz Bojarski, Jesper Eduard van Engelen, Bernhard Firner, Zongyi Yang, Urs Muller
  • Publication number: 20230017261
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more closely associated with policies remains fixed.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Inventor: Bernhard Firner
  • Patent number: 11449709
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more closely associated with policies remains fixed.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: September 20, 2022
    Assignee: NVIDIA Corporation
    Inventor: Bernhard Firner
  • Publication number: 20220092317
    Abstract: In various examples, sensor data used to train a machine learning model (MLM), and/or used by the MLM during deployment, may be captured by sensors having different perspectives (e.g., fields of view). The sensor data may be transformed—to generate transformed sensor data—such as by altering or removing lens distortions, shifting, and/or rotating images corresponding to the sensor data to a field of view of a different physical or virtual sensor. As such, the MLM may be trained and/or deployed using sensor data captured from a same or similar field of view. As a result, the MLM may be trained and/or deployed—across any number of different vehicles with cameras and/or other sensors having different perspectives—using sensor data that is of the same perspective as the reference or ideal sensor.
    Type: Application
    Filed: September 21, 2021
    Publication date: March 24, 2022
    Inventors: Zongyi Yang, Mariusz Bojarski, Bernhard Firner, Urs Muller
  • Publication number: 20210406679
    Abstract: In examples, image data representative of an image of a field of view of at least one sensor may be received. Source areas may be defined that correspond to a region of the image. Areas and/or dimensions of at least some of the source areas may decrease along at least one direction relative to a perspective of the at least one sensor. A downsampled version of the region (e.g., a downsampled image or feature map of a neural network) may be generated from the source areas based at least in part on mapping the source areas to cells of the downsampled version of the region. Resolutions of the region that are captured by the cells may correspond to the areas of the source areas, such that certain portions of the region (e.g., portions at a far distance from the sensor) retain higher resolution than others.
    Type: Application
    Filed: June 30, 2020
    Publication date: December 30, 2021
    Inventors: Haiguang Wen, Bernhard Firner, Mariusz Bojarski, Zongyi Yang, Urs Muller
  • Publication number: 20210042575
    Abstract: A neural network is trained to focus on a domain of interest. For example, in a pre-training phase, the neural network is trained using synthetic training data, which is configured to omit or limit content less relevant to the domain of interest, by updating parameters of the neural network to improve the accuracy of predictions. In a subsequent training phase, the pre-trained neural network is trained using real-world training data by updating only a first subset of the parameters associated with feature extraction, while a second subset of the parameters more closely associated with policies remains fixed.
    Type: Application
    Filed: May 14, 2020
    Publication date: February 11, 2021
    Inventor: Bernhard Firner
  • Publication number: 20200339109
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 29, 2020
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Publication number: 20200324795
    Abstract: In various examples, training sensor data generated by one or more sensors of autonomous machines may be localized to high definition (HD) map data to augment and/or generate ground truth data—e.g., automatically, in embodiments. The ground truth data may be associated with the training sensor data for training one or more deep neural networks (DNNs) to compute outputs corresponding to autonomous machine operations—such as object or feature detection, road feature detection and classification, wait condition identification and classification, etc. As a result, the HD map data may be leveraged during training such that the DNNs—in deployment—may aid autonomous machines in navigating environments safely without relying on HD map data to do so.
    Type: Application
    Filed: April 3, 2020
    Publication date: October 15, 2020
    Inventors: Mariusz Bojarski, Urs Muller, Bernhard Firner, Amir Akbarzadeh
  • Publication number: 20190384303
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Application
    Filed: May 10, 2019
    Publication date: December 19, 2019
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
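
The two-phase training scheme described in patents 11841458 and 11449709 (pre-train every parameter on synthetic data, then fine-tune on real data while the policy parameters stay fixed) can be sketched as follows. This is a toy illustration, not the patented implementation; the parameter names and update rule are invented for clarity.

```python
# Sketch of two-phase training with frozen policy parameters.
# Parameter values, gradients, and learning rates are illustrative.

def sgd_step(params, grads, lr, frozen=()):
    """Apply one SGD update, skipping any parameter group listed in `frozen`."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }

# Toy parameter groups: "features" extracts representations, "policy" maps
# features to actions (e.g., steering commands).
params = {"features": 1.0, "policy": 2.0}

# Phase 1: pre-training on synthetic data updates every parameter group.
grads_synthetic = {"features": 0.5, "policy": 0.5}
params = sgd_step(params, grads_synthetic, lr=0.1)

# Phase 2: fine-tuning on real data updates only the feature extractor;
# the policy parameters remain fixed.
grads_real = {"features": 0.2, "policy": 0.9}
params = sgd_step(params, grads_real, lr=0.1, frozen=("policy",))

print(params)  # "policy" keeps its phase-1 value
```

In a real framework the same effect is typically achieved by disabling gradients on the policy layers (e.g., PyTorch's `requires_grad = False`) rather than filtering updates by hand.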
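
Publication 20210406679 describes downsampling in which source-area sizes decrease along a direction of the image, so distant regions keep more resolution. A minimal sketch of that idea, assuming a simple linear growth of band heights toward the bottom (near field) of the image; the partitioning scheme is an invented example, not the one claimed:

```python
# Non-uniform row pooling: rows near the top of the image (far from the
# sensor) fall into smaller source bands and so retain higher resolution.

def band_boundaries(height, num_cells):
    """Row boundaries whose band heights grow linearly toward the bottom."""
    total = num_cells * (num_cells + 1) // 2
    bounds, acc = [0], 0
    for i in range(1, num_cells + 1):
        acc += i
        bounds.append(round(height * acc / total))
    return bounds

def pool_rows(rows, num_cells):
    """Average-pool a column of row values within each band."""
    bounds = band_boundaries(len(rows), num_cells)
    return [sum(rows[a:b]) / (b - a) for a, b in zip(bounds, bounds[1:])]

# A 10-row column pooled to 4 cells: band heights are 1, 2, 3, 4 rows,
# so the top (far) rows are averaged over the fewest source rows.
print(pool_rows(list(range(10)), 4))
```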
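
The labeling pipeline of publication 20230298361 transforms 3D vehicle-space coordinates to world space using the vehicle's known world pose at the time of capture. The core transform, reduced to 2D for brevity (a rotation by the vehicle's heading plus a translation by its position), might look like this; the function and pose format are illustrative:

```python
import math

def vehicle_to_world(point_xy, vehicle_pose):
    """Transform a 2D point from vehicle space to world space, given the
    vehicle's known world pose (x, y, heading in radians)."""
    px, py = point_xy
    vx, vy, yaw = vehicle_pose
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    # Rotate into the world frame, then translate by the vehicle position.
    return (vx + cos_y * px - sin_y * py,
            vy + sin_y * px + cos_y * py)

# A point 1 m ahead of a vehicle at (10, 5) heading due "north" (pi/2)
# lands at roughly (10, 6) in world space.
print(vehicle_to_world((1.0, 0.0), (10.0, 5.0, math.pi / 2)))
```

Applying this per labeled frame, with the pose temporally correlated to the frame as the abstract describes, yields world-space lane-line points usable as ground truth across frames.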
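
Patent 11927502 and its related filings describe transforming recorded sensor data to match state information in a simulation test profile, then checking a vehicle function against test criteria. A toy analogue, assuming a recorded distance-to-lead-vehicle trace rescaled to a different initial gap and a trivial braking-threshold check (all names and thresholds are invented, not from the patent):

```python
# Retarget a recorded distance trace to a test profile, then evaluate a
# toy AEB-style criterion against it.

def retarget_gap(distances, target_initial_gap):
    """Scale a recorded distance-to-lead-vehicle trace so it starts from a
    different initial gap, yielding a new test instance from real data."""
    scale = target_initial_gap / distances[0]
    return [d * scale for d in distances]

def aeb_triggers(distances, threshold=5.0):
    """Toy check: does any frame fall below the braking threshold?"""
    return any(d < threshold for d in distances)

recorded = [20.0, 15.0, 10.0, 8.0]   # meters, one value per frame
test_trace = retarget_gap(recorded, target_initial_gap=10.0)
print(test_trace, aeb_triggers(test_trace))
```

The recorded trace alone never crosses the threshold, but the retargeted variant does, illustrating how one real recording can generate test scenarios that were not driven.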
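
Publication 20220092317 normalizes sensor data from cameras with different perspectives to a reference field of view. For a pinhole camera with a shared optical axis, remapping a pixel column between two horizontal FOVs reduces to converting the column to a ray angle and back; this one-dimensional sketch assumes that simplified model and is not the claimed transform:

```python
import math

def remap_column(x, width_src, fov_src_deg, width_ref, fov_ref_deg):
    """Map a pixel column from a source camera to the column a reference
    camera with a different horizontal FOV would see for the same ray
    (pinhole model, shared optical axis)."""
    # Focal lengths in pixels from image width and horizontal FOV.
    f_src = (width_src / 2) / math.tan(math.radians(fov_src_deg) / 2)
    f_ref = (width_ref / 2) / math.tan(math.radians(fov_ref_deg) / 2)
    # Column -> ray angle in the source camera -> column in the reference.
    angle = math.atan((x - width_src / 2) / f_src)
    return width_ref / 2 + f_ref * math.tan(angle)

# The image center maps to the center regardless of FOV.
print(remap_column(320, 640, 90, 640, 60))
```

A full implementation would also handle lens distortion and vertical shift, as the abstract notes, but the column remap captures the core "same ray, different camera" idea.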
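
Patent 11966838 mentions automatically generating trajectories from recorded orientation, steering, and speed for use as ground-truth labels. One common way to realize that (not necessarily the patented one) is dead-reckoning with a unicycle model, integrating speed and yaw rate into 2D points:

```python
import math

def integrate_trajectory(speeds, yaw_rates, dt):
    """Generate 2D trajectory points from recorded speed (m/s) and yaw rate
    (rad/s) via a unicycle model, as automatic ground-truth labels."""
    x, y, yaw = 0.0, 0.0, 0.0
    points = []
    for v, w in zip(speeds, yaw_rates):
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += w * dt
        points.append((x, y))
    return points

# Two seconds of straight driving at 1 m/s yields points 1 m apart.
print(integrate_trajectory([1.0, 1.0], [0.0, 0.0], dt=1.0))
```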