Patents by Inventor Urs Muller

Urs Muller has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966838
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
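    Illustrative sketch: a minimal Python/NumPy sketch of one piece of the idea described above — integrating recorded speed and heading into world-space waypoints that could serve as automatically generated ground-truth trajectory points. Function and variable names are illustrative assumptions, not details from the filing.

        # Hypothetical sketch: turn recorded speed/heading logs into world-space
        # trajectory points that could serve as ground truth for a trajectory DNN.
        import numpy as np

        def trajectory_from_logs(timestamps, speeds, headings):
            """Integrate recorded speed (m/s) and heading (rad) into 2D waypoints."""
            dt = np.diff(timestamps)                         # time between samples
            vx = speeds[:-1] * np.cos(headings[:-1])         # per-step world-x velocity
            vy = speeds[:-1] * np.sin(headings[:-1])         # per-step world-y velocity
            x = np.concatenate([[0.0], np.cumsum(vx * dt)])  # dead-reckoned positions,
            y = np.concatenate([[0.0], np.cumsum(vy * dt)])  # starting at the ego origin
            return np.stack([x, y], axis=1)                  # (N, 2) trajectory points

        # Example: 1 s of logs at 10 Hz, constant 10 m/s with a gentle left turn.
        t = np.linspace(0.0, 1.0, 11)
        pts = trajectory_from_logs(t, np.full(11, 10.0), np.linspace(0.0, 0.2, 11))
        print(pts.shape)  # (11, 2) -- waypoints usable as DNN regression targets
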
  • Publication number: 20240127062
    Abstract: In various examples, a machine learning model—such as a deep neural network (DNN)—may be trained to use image data and/or other sensor data as inputs to generate two-dimensional or three-dimensional trajectory points in world space, a vehicle orientation, and/or a vehicle state. For example, sensor data that represents orientation, steering information, and/or speed of a vehicle may be collected and used to automatically generate a trajectory for use as ground truth data for training the DNN. Once deployed, the trajectory points, the vehicle orientation, and/or the vehicle state may be used by a control component (e.g., a vehicle controller) for controlling the vehicle through a physical environment. For example, the control component may use these outputs of the DNN to determine a control profile (e.g., steering, decelerating, and/or accelerating) specific to the vehicle for controlling the vehicle through the physical environment.
    Type: Application
    Filed: December 8, 2023
    Publication date: April 18, 2024
    Inventors: Urs Muller, Mariusz Bojarski, Chenyi Chen, Bernhard Firner
  • Patent number: 11927502
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional, sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
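    Illustrative sketch: a small Python/NumPy sketch of the general idea — rescaling a recorded range trace to match a state defined by a hypothetical test profile, then running a toy AEB check against it. The threshold, scaling rule, and names are assumptions for illustration only.

        # Hypothetical sketch: rescale a recorded range-to-lead-vehicle trace to match
        # a test profile's initial gap, then check a toy AEB trigger against it.
        import numpy as np

        def transform_range_trace(recorded_range_m, target_initial_gap_m):
            """Scale a recorded range trace so it starts at the gap the test profile asks for."""
            scale = target_initial_gap_m / recorded_range_m[0]
            return recorded_range_m * scale

        def aeb_triggers(range_m, ego_speed_mps, ttc_threshold_s=1.5):
            """Toy AEB check: does time-to-collision ever drop below the threshold?"""
            ttc = range_m / max(ego_speed_mps, 1e-6)
            return bool(np.any(ttc < ttc_threshold_s))

        recorded = np.linspace(40.0, 5.0, 50)               # recorded gap closing from 40 m to 5 m
        test_case = transform_range_trace(recorded, 20.0)   # test profile: start at only 20 m
        print(aeb_triggers(test_case, ego_speed_mps=15.0))  # expect True for the tighter gap
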
  • Publication number: 20240078794
    Abstract: A method is provided for validating annotations of objects. Spatial data points acquired by a sensor and annotation data are received. The annotation data is associated with the acquired spatial data points and includes an identification of each respective object. Via a processing unit, the annotations of the objects are validated by performing the steps of: determining a target range for at least one property of the objects, determining, from the acquired spatial data points and/or from the annotation data, a respective value of the at least one property for each respective object, and, for each object, identifying the object as an erroneous object if the respective value of the at least one property is outside the target range for the at least one property. The erroneous object is selected for review regarding erroneous annotation.
    Type: Application
    Filed: September 18, 2023
    Publication date: March 7, 2024
    Applicant: Aptiv Technologies Limited
    Inventors: Urs ZIMMERMANN, Dennis MÜLLER
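    Illustrative sketch: a short Python sketch of the validation rule the abstract describes — flagging annotated objects whose property value falls outside a target range so they can be selected for review. The property name and range used here are illustrative assumptions.

        # Hypothetical sketch: flag annotated objects whose property value (here, an
        # object length derived from the spatial points) falls outside a target range.
        def find_erroneous_objects(annotations, prop="length_m", target_range=(0.5, 25.0)):
            """Return IDs of annotations whose property lies outside the target range."""
            low, high = target_range
            return [a["id"] for a in annotations if not (low <= a[prop] <= high)]

        annotations = [
            {"id": "obj-1", "length_m": 4.3},    # plausible car
            {"id": "obj-2", "length_m": 61.0},   # implausibly long -> review
            {"id": "obj-3", "length_m": 0.1},    # implausibly short -> review
        ]
        print(find_erroneous_objects(annotations))  # ['obj-2', 'obj-3'] selected for review
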
  • Publication number: 20230359213
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Application
    Filed: July 19, 2023
    Publication date: November 9, 2023
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
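    Illustrative sketch: a minimal PyTorch sketch, with assumed layer sizes and names, of a model that consumes a camera image plus a maneuver-trigger value and emits a short horizon of control commands for a control component, in the spirit of this abstract (and of the related entries below that share it).

        # Hypothetical sketch (names and sizes are assumptions, not from the patent):
        # a model that fuses an image with a maneuver trigger and outputs a short
        # horizon of (steering, acceleration) commands for a vehicle controller.
        import torch
        import torch.nn as nn

        class ManeuverPolicy(nn.Module):
            def __init__(self, horizon=10):
                super().__init__()
                self.backbone = nn.Sequential(            # tiny stand-in image encoder
                    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.head = nn.Sequential(                # fuse image features + trigger
                    nn.Linear(32 + 1, 64), nn.ReLU(),
                    nn.Linear(64, horizon * 2),           # (steering, accel) per step
                )
                self.horizon = horizon

            def forward(self, image, trigger):
                feats = self.backbone(image)                        # (B, 32)
                x = torch.cat([feats, trigger.unsqueeze(1)], dim=1)
                return self.head(x).view(-1, self.horizon, 2)       # (B, horizon, 2)

        policy = ManeuverPolicy()
        image = torch.zeros(1, 3, 160, 320)      # placeholder camera frame
        trigger = torch.tensor([1.0])            # e.g. "lane change left" requested
        controls = policy(image, trigger)        # commands handed to the control component
        print(controls.shape)                    # torch.Size([1, 10, 2])
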
  • Patent number: 11755025
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: September 12, 2023
    Assignee: NVIDIA Corporation
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
  • Publication number: 20230168683
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Application
    Filed: January 11, 2023
    Publication date: June 1, 2023
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
  • Publication number: 20230110713
    Abstract: In various examples, a plurality of poses corresponding to one or more configuration parameters within an environment—such as a location of a machine within an environment, an orientation of a machine within an environment, a sensor angle pose of a machine, or a sensor location of a machine—may be used to generate training data and corresponding ground truth data for training a machine learning model—such as a deep neural network (DNN). As a result, the machine learning model, once deployed, may more accurately compute one or more outputs—such as outputs representative of lane boundaries, trajectories for an autonomous machine, etc.—agnostic to machine and/or sensor poses of the machine within which the machine learning model is deployed.
    Type: Application
    Filed: October 8, 2021
    Publication date: April 13, 2023
    Inventors: Alperen Degirmenci, Won Hong, Mariusz Bojarski, Jesper Eduard van Engelen, Bernhard Firner, Zongyi Yang, Urs Muller
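    Illustrative sketch: a small Python/NumPy sketch of one way to realize the idea — expressing the same world-space lane points in several assumed sensor poses so that pose-specific training targets can be generated. The pose values and names are illustrative assumptions.

        # Hypothetical sketch: express the same world-space lane points in several
        # assumed sensor poses so a model can be trained to be pose-agnostic.
        import numpy as np

        def world_to_sensor(points_xy, sensor_xy, sensor_yaw):
            """Rigid 2D transform of world points into a sensor frame at (x, y, yaw)."""
            c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
            rot = np.array([[c, s], [-s, c]])             # world -> sensor rotation
            return (points_xy - sensor_xy) @ rot.T

        lane_points = np.array([[5.0, 0.0], [10.0, 0.5], [15.0, 1.0]])   # world frame
        poses = [((0.0, 0.0), 0.0), ((0.2, -0.1), 0.05), ((-0.1, 0.3), -0.03)]
        targets = [world_to_sensor(lane_points, np.array(p), yaw) for p, yaw in poses]
        print(len(targets), targets[0].shape)   # 3 pose-specific label sets, each (3, 2)
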
  • Patent number: 11609572
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: March 21, 2023
    Assignee: NVIDIA Corporation
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
  • Publication number: 20220244727
    Abstract: In various examples, rapid resolution of deep neural network (DNN) failure modes may be achieved by deploying patch neural networks (PNNs) trained to operate effectively on the failure modes of the DNN. The PNNs may operate on the same or additional data as the DNN, and may generate new signals in addition to those generated using the DNN that address the failure modes of the DNN. A fusion mechanism may be employed to determine which output to rely on for a given instance of the DNN/PNN combination. As a result, failure modes of the DNN may be addressed in a timely manner that requires minimal deactivation or downtime for the DNN, a feature controlled using the DNN, and/or semi-autonomous or autonomous functionality as a whole.
    Type: Application
    Filed: February 1, 2021
    Publication date: August 4, 2022
    Inventors: Mariusz Bojarski, Urs Muller, Beat Flepp, Carmen Adriana Maxim, Marco Scoffier
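    Illustrative sketch: a toy Python sketch of a fusion rule in the spirit of the abstract — prefer the patch network's output when an input is flagged as belonging to the base DNN's failure mode, or when base confidence is low. The gating criteria shown are assumptions, not the patented mechanism.

        # Hypothetical sketch: a simple fusion rule that falls back to a patch network's
        # output for inputs flagged as a known failure mode of the base DNN.
        def fuse(base_output, patch_output, in_failure_mode, base_confidence, min_conf=0.6):
            """Prefer the patch network for flagged inputs or low-confidence base outputs."""
            if in_failure_mode or base_confidence < min_conf:
                return patch_output
            return base_output

        # Example: night-time frames were a known failure mode, so a PNN handles them.
        print(fuse(base_output="lane_left", patch_output="lane_center",
                   in_failure_mode=True, base_confidence=0.9))   # -> 'lane_center'
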
  • Publication number: 20220092317
    Abstract: In various examples, sensor data used to train an MLM and/or used by the MLM during deployment, may be captured by sensors having different perspectives (e.g., fields of view). The sensor data may be transformed—to generate transformed sensor data—such as by altering or removing lens distortions, shifting, and/or rotating images corresponding to the sensor data to a field of view of a different physical or virtual sensor. As such, the MLM may be trained and/or deployed using sensor data captured from a same or similar field of view. As a result, the MLM may be trained and/or deployed—across any number of different vehicles with cameras and/or other sensors having different perspectives—using sensor data that is of the same perspective as the reference or ideal sensor.
    Type: Application
    Filed: September 21, 2021
    Publication date: March 24, 2022
    Inventors: Zongyi Yang, Mariusz Bojarski, Bernhard Firner, Urs Muller
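    Illustrative sketch: a compact Python/NumPy sketch of one such transformation — warping an image into a reference camera's field of view under an assumed pure-rotation model (H = K_ref * R * inv(K_src)). The intrinsics, rotation, and nearest-neighbour resampling are illustrative simplifications, not the patented method.

        # Hypothetical sketch: warp a grayscale image into a reference camera's view,
        # assuming a pure rotation between the two cameras (H = K_ref @ R @ inv(K_src)).
        import numpy as np

        def warp_to_reference(img, K_src, K_ref, R):
            """Nearest-neighbour warp of img into the reference camera's perspective."""
            H = K_ref @ R @ np.linalg.inv(K_src)
            Hinv = np.linalg.inv(H)                 # maps reference pixels back to source
            h, w = img.shape
            ys, xs = np.mgrid[0:h, 0:w]
            ref = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
            src = Hinv @ ref
            sx = np.round(src[0] / src[2]).astype(int)
            sy = np.round(src[1] / src[2]).astype(int)
            ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
            out = np.zeros(h * w)
            out[ok] = img[sy[ok], sx[ok]]           # sample the source image
            return out.reshape(h, w)

        # Sanity check: identical intrinsics and identity rotation reproduce the input.
        K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
        img = np.arange(48 * 64, dtype=float).reshape(48, 64)
        print(np.allclose(warp_to_reference(img, K, K, np.eye(3)), img))   # True
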
  • Publication number: 20210406679
    Abstract: In examples, image data representative of an image of a field of view of at least one sensor may be received. Source areas may be defined that correspond to a region of the image. Areas and/or dimensions of at least some of the source areas may decrease along at least one direction relative to a perspective of the at least one sensor. A downsampled version of the region (e.g., a downsampled image or feature map of a neural network) may be generated from the source areas based at least in part on mapping the source areas to cells of the downsampled version of the region. Resolutions of the region that are captured by the cells may correspond to the areas of the source areas, such that certain portions of the region (e.g., portions at a far distance from the sensor) retain higher resolution than others.
    Type: Application
    Filed: June 30, 2020
    Publication date: December 30, 2021
    Inventors: Haiguang Wen, Bernhard Firner, Mariusz Bojarski, Zongyi Yang, Urs Muller
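    Illustrative sketch: a simple Python/NumPy sketch of the non-uniform mapping idea — average-pooling image rows into bands whose heights grow toward the bottom of the frame, so that far-away rows near the top keep more resolution. The band sizing is an arbitrary illustrative choice.

        # Hypothetical sketch: downsample image rows with source bands that grow toward
        # the bottom of the frame, so far-away rows (near the top) keep more resolution.
        import numpy as np

        def variable_row_downsample(img, out_rows):
            """Average-pool rows into out_rows bands whose heights increase with band index."""
            h = img.shape[0]
            weights = np.arange(1, out_rows + 1, dtype=float)  # band 0 smallest, last largest
            edges = np.round(np.cumsum(weights / weights.sum()) * h).astype(int)
            starts = np.concatenate([[0], edges[:-1]])
            return np.stack([img[s:e].mean(axis=0) for s, e in zip(starts, edges)])

        img = np.random.rand(120, 200)
        small = variable_row_downsample(img, out_rows=10)
        print(small.shape)   # (10, 200): top bands average ~2 rows, bottom bands 20+
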
  • Publication number: 20210295171
    Abstract: In various examples, past location information corresponding to actors in an environment and map information may be applied to a deep neural network (DNN)—such as a recurrent neural network (RNN)—trained to compute information corresponding to future trajectories of the actors. The output of the DNN may include, for each future time slice the DNN is trained to predict, a confidence map representing a confidence for each pixel that an actor is present and a vector field representing locations of actors in confidence maps for prior time slices. The vector fields may thus be used to track an object through confidence maps for each future time slice to generate a predicted future trajectory for each actor. The predicted future trajectories, in addition to tracked past trajectories, may be used to generate full trajectories for the actors that may aid an ego-vehicle in navigating the environment.
    Type: Application
    Filed: March 19, 2020
    Publication date: September 23, 2021
    Inventors: Alexey Kamenev, Nikolai Smolyanskiy, Ishwar Kulkarni, Ollin Boer Bohan, Fangkai Yang, Alperen Degirmenci, Ruchi Bhargava, Urs Muller, David Nister, Rotem Aviv
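    Illustrative sketch: a minimal Python/NumPy sketch of the tracking step the abstract describes — linking an actor across per-timestep confidence maps by following vector-field offsets that point back to the actor's cell in the previous time slice. Shapes, thresholds, and names are illustrative assumptions.

        # Hypothetical sketch: link an actor across per-timestep confidence maps using
        # vector fields that point back to the actor's cell in the previous time slice.
        import numpy as np

        def track_actor(conf_maps, vec_fields, conf_thresh=0.5):
            """conf_maps: (T, H, W); vec_fields: (T, H, W, 2) of (dy, dx) offsets to t-1."""
            y, x = np.unravel_index(np.argmax(conf_maps[0]), conf_maps[0].shape)
            traj = [(y, x)]
            for t in range(1, len(conf_maps)):
                ys, xs = np.where(conf_maps[t] > conf_thresh)             # candidate cells
                if len(ys) == 0:
                    break
                back = np.stack([ys, xs], axis=1) + vec_fields[t, ys, xs]  # pointed-to t-1 cells
                best = np.argmin(np.linalg.norm(back - np.array(traj[-1]), axis=1))
                traj.append((int(ys[best]), int(xs[best])))
            return traj

        # Tiny synthetic example: one actor moving diagonally across three time slices.
        T, H, W = 3, 8, 8
        conf = np.zeros((T, H, W)); vec = np.zeros((T, H, W, 2))
        for t, (y, x) in enumerate([(2, 2), (3, 3), (4, 4)]):
            conf[t, y, x] = 1.0
            vec[t, y, x] = (-1.0, -1.0)          # points to the cell occupied at t-1
        print(track_actor(conf, vec))            # [(2, 2), (3, 3), (4, 4)]
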
  • Publication number: 20210271254
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
  • Patent number: 11042163
    Abstract: In various examples, a trigger signal may be received that is indicative of a vehicle maneuver to be performed by a vehicle. A recommended vehicle trajectory for the vehicle maneuver may be determined in response to the trigger signal being received. To determine the recommended vehicle trajectory, sensor data may be received that represents a field of view of at least one sensor of the vehicle. A value of a control input and the sensor data may then be applied to a machine learning model(s) and the machine learning model(s) may compute output data that includes vehicle control data that represents the recommended vehicle trajectory for the vehicle through at least a portion of the vehicle maneuver. The vehicle control data may then be sent to a control component of the vehicle to cause the vehicle to be controlled according to the vehicle control data.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: June 22, 2021
    Assignee: NVIDIA Corporation
    Inventors: Chenyi Chen, Artem Provodin, Urs Muller
  • Patent number: 11014764
    Abstract: The invention relates to an apparatus for aligning box-shaped articles of various sizes on a conveyor belt, as well as a printing station, a reading station, and a labelling station including said apparatus. The apparatus comprises an input conveyor belt operable to move transversally with respect to an output conveyor belt conveying direction, to adapt to a transverse size of a transported box-shaped article, and to feed said output conveyor belt with the correspondingly aligned article.
    Type: Grant
    Filed: November 21, 2018
    Date of Patent: May 25, 2021
    Assignee: SICPA HOLDING SA
    Inventors: Urs Müller, Tobias Scherer
  • Publication number: 20200354157
    Abstract: The invention relates to an apparatus for aligning box-shaped articles of various sizes on a conveyor belt, as well as a printing station, a reading station, and a labelling station including said apparatus. The apparatus comprises an input conveyor belt operable to move transversally with respect to an output conveyor belt conveying direction, to adapt to a transverse size of a transported box-shaped article, and to feed said output conveyor belt with the correspondingly aligned article.
    Type: Application
    Filed: November 21, 2018
    Publication date: November 12, 2020
    Inventors: Urs MÜLLER, Tobias SCHERER
  • Publication number: 20200339109
    Abstract: In various examples, sensor data recorded in the real world may be leveraged to generate transformed, additional, sensor data to test one or more functions of a vehicle—such as a function of an AEB, CMW, LDW, ALC, or ACC system. Sensor data recorded by the sensors may be augmented, transformed, or otherwise updated to represent sensor data corresponding to state information defined by a simulation test profile for testing the vehicle function(s). Once a set of test data has been generated, the test data may be processed by a system of the vehicle to determine the efficacy of the system with respect to any number of test criteria. As a result, a test set including additional or alternative instances of sensor data may be generated from real-world recorded sensor data to test a vehicle in a variety of test scenarios—including those that may be too dangerous to test in the real world.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 29, 2020
    Inventors: Jesse Hong, Urs Muller, Bernhard Firner, Zongyi Yang, Joyjit Daw, David Nister, Roberto Giuseppe Luca Valenti, Rotem Aviv
  • Publication number: 20200324795
    Abstract: In various examples, training sensor data generated by one or more sensors of autonomous machines may be localized to high definition (HD) map data to augment and/or generate ground truth data—e.g., automatically, in embodiments. The ground truth data may be associated with the training sensor data for training one or more deep neural networks (DNNs) to compute outputs corresponding to autonomous machine operations—such as object or feature detection, road feature detection and classification, wait condition identification and classification, etc. As a result, the HD map data may be leveraged during training such that the DNNs—in deployment—may aid autonomous machines in navigating environments safely without relying on HD map data to do so.
    Type: Application
    Filed: April 3, 2020
    Publication date: October 15, 2020
    Inventors: Mariusz Bojarski, Urs Muller, Bernhard Firner, Amir Akbarzadeh
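    Illustrative sketch: a brief Python/NumPy sketch of the auto-labeling step implied by the abstract — once sensor data is localized to the HD map, map lane points expressed in the camera frame can be projected through a pinhole model to produce per-image ground-truth pixels. The intrinsics and lane geometry below are illustrative assumptions.

        # Hypothetical sketch: project HD-map lane points (already in the camera frame
        # after localization) through a pinhole model to get ground-truth pixel labels.
        import numpy as np

        def project_to_image(points_cam, K):
            """points_cam: (N, 3) in camera coords (x right, y down, z forward)."""
            in_front = points_cam[:, 2] > 0.1             # keep points ahead of the camera
            p = points_cam[in_front]
            uv = (K @ p.T).T                              # apply intrinsics
            return uv[:, :2] / uv[:, 2:3]                 # perspective divide -> pixels

        K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
        lane = np.array([[-1.8, 1.5, z] for z in np.arange(5.0, 50.0, 5.0)])  # left lane line
        labels = project_to_image(lane, K)                # auto-generated training labels
        print(labels.shape)                               # (9, 2) pixel coordinates
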
  • Publication number: 20200124704
    Abstract: A pulsed ultra-wideband (UWB) receiver according to some embodiments is provided. The UWB receiver includes a down-conversion mixer coupled to a first switch and providing an IF signal; an integrator that receives the IF signal, the integrator being switched on and off at a periodic repetition frequency (PRF); a sample and hold circuit coupled to the integrator; a first phase switch, switched at PRF/2, coupled to receive a signal before the integrator; a second phase switch coupled to the sample and hold circuit, the second phase switch also being switched at PRF/2; and a low-pass filter coupled to the second phase switch, wherein 1/f noise is filtered out by the low-pass filter.
    Type: Application
    Filed: October 16, 2019
    Publication date: April 23, 2020
    Inventors: Jonathan GOWING, Urs MÜLLER, James LITTLE
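    Illustrative sketch: a rough Python/NumPy sketch of the chopping principle the abstract relies on — phase switches toggling at PRF/2 translate slow (1/f-like) drift introduced inside the receiver up to PRF/2, where the low-pass filter removes it while the per-period pulse energy survives. This is a behavioural software analogy under assumed numbers, not the circuit itself.

        # Hypothetical sketch: model one sample per PRF period; drift added between the
        # two phase switches is chopped to PRF/2 and removed by a simple low-pass filter.
        import numpy as np

        n = 2000                                              # one sample per PRF period
        signal = np.full(n, 1.0)                              # integrated pulse energy per period
        chop = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)     # phase switch toggling at PRF/2
        drift = 0.5 * np.sin(np.linspace(0, 3 * np.pi, n))    # slow drift standing in for 1/f noise

        inside = signal * chop + drift                        # drift enters after the first switch
        demod = inside * chop                                 # second phase switch (after sample & hold)
        lowpassed = np.convolve(demod, np.ones(8) / 8, mode="valid")   # simple low-pass filter

        print(round(float(lowpassed.std()), 4))               # near zero: drift rejected
        print(round(float(np.convolve(signal + drift, np.ones(8) / 8, mode="valid").std()), 4))
                                                              # much larger: without chopping,
                                                              # the drift passes the filter
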