Patents by Inventor Tilman Wekel

Tilman Wekel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960026
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
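
The label-propagation and filtering steps described in the abstract above reduce to counting RADAR detections inside each LIDAR-annotated box. A minimal sketch, assuming axis-aligned 3D boxes and an N×3 detection array (all names and the threshold are illustrative, not taken from the patent):

```python
import numpy as np

def propagate_lidar_labels(radar_points, lidar_boxes, min_detections=2):
    """Keep only the LIDAR box labels supported by at least `min_detections`
    RADAR points; the survivors become RADAR ground truth labels.

    radar_points: (N, 3) array of x, y, z RADAR detections for one time slice.
    lidar_boxes:  list of (min_xyz, max_xyz) axis-aligned boxes annotated on
                  the LIDAR frame closest in time to the RADAR data.
    """
    kept = []
    for lo, hi in lidar_boxes:
        inside = np.all((radar_points >= lo) & (radar_points <= hi), axis=1)
        if inside.sum() >= min_detections:  # omit sparsely supported labels
            kept.append((lo, hi))
    return kept

# Toy usage: the first label contains two RADAR detections and is kept,
# the second contains none and is omitted.
radar = np.array([[1.0, 2.0, 0.5], [1.2, 2.1, 0.4], [9.0, 9.0, 0.0]])
boxes = [((0.5, 1.5, 0.0), (1.5, 2.5, 1.0)),
         ((5.0, 5.0, 0.0), (6.0, 6.0, 1.0))]
print(propagate_lidar_labels(radar, boxes))
```
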
  • Publication number: 20240111025
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: December 6, 2023
    Publication date: April 4, 2024
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
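
The camera-to-LiDAR propagation described in the abstract above can be pictured as projecting each LiDAR point into the annotated image and letting it inherit the class of the pixel it lands on. A minimal sketch under that assumption; `project_labels_to_lidar`, the toy pinhole intrinsics, and the mask layout are illustrative, not the patented method:

```python
import numpy as np

def project_labels_to_lidar(points_cam, class_mask, K):
    """Assign each LiDAR point (already in camera coordinates) the class of
    the annotated image pixel it projects to; points outside the image or
    behind the camera get label -1.

    points_cam: (N, 3) LiDAR points in the camera frame (z forward).
    class_mask: (H, W) integer image-domain annotation.
    K:          (3, 3) pinhole camera intrinsics.
    """
    h, w = class_mask.shape
    labels = np.full(len(points_cam), -1, dtype=int)
    valid = points_cam[:, 2] > 0.0        # only points in front of the camera
    uvw = (K @ points_cam[valid].T).T     # homogeneous pixel coordinates
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(valid)[in_img]
    labels[idx] = class_mask[uv[in_img, 1], uv[in_img, 0]]
    return labels

# Toy usage: a point 5 m ahead of the camera lands inside a class-3 blob.
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 48.0], [0.0, 0.0, 1.0]])
mask = np.zeros((96, 128), dtype=int)
mask[40:56, 56:72] = 3
print(project_labels_to_lidar(np.array([[0.0, 0.0, 5.0]]), mask, K))
```
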
  • Publication number: 20240096102
    Abstract: Systems and methods are disclosed that relate to freespace detection using machine learning models. First data that may include object labels may be obtained from a first sensor, and freespace may be identified using the first data and the object labels. The first data may be annotated to include one or more freespace labels that correspond to freespace within an operational environment. Freespace-annotated data may be generated by combining the one or more freespace labels with second data obtained from a second sensor, with the freespace-annotated data corresponding to a viewable area in the operational environment. The viewable area may be determined by tracing one or more rays from the second sensor within the field of view of the second sensor relative to the first data. The freespace-annotated data may be input into a machine learning model to train the machine learning model to detect freespace using the second data.
    Type: Application
    Filed: August 7, 2023
    Publication date: March 21, 2024
    Inventors: Alexander Popov, David Nister, Nikolai Smolyanskiy, Patrik Gebhardt, Ke Chen, Ryan Oldja, Hee Seok Lee, Shane Murray, Ruchi Bhargava, Tilman Wekel, Sangmin Oh
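
A minimal 2D sketch of the ray-tracing step named in the abstract above: rays are marched outward from the second sensor across an occupancy grid, and cells count as viewable until the first obstacle blocks the ray. The grid representation, angular resolution, and range are illustrative assumptions:

```python
import numpy as np

def viewable_area(occupancy, sensor_rc, n_rays=360, max_range=100):
    """Mark grid cells visible from `sensor_rc` by marching rays until an
    occupied cell is hit. `occupancy` is a (H, W) bool grid, True = obstacle."""
    h, w = occupancy.shape
    visible = np.zeros_like(occupancy, dtype=bool)
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(theta), np.cos(theta)
        for step in range(max_range):
            r = int(round(sensor_rc[0] + step * dr))
            c = int(round(sensor_rc[1] + step * dc))
            if not (0 <= r < h and 0 <= c < w) or occupancy[r, c]:
                break                 # ray leaves the grid or is blocked
            visible[r, c] = True      # freespace visible to the second sensor
    return visible
```
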
  • Patent number: 11915493
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: February 27, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
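
A minimal PyTorch sketch of the chained two-stage structure described in the abstract above: a first convolutional stage segments classes in the perspective view, its output is resampled to a top-down grid (a placeholder for the real geometric re-projection), and a second stage predicts top-down class segmentation plus instance geometry. Layer sizes and head outputs are illustrative assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewPerception(nn.Module):
    def __init__(self, n_classes=4, n_geom=6):
        super().__init__()
        # Stage 1: class segmentation in the perspective (first) view.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1))
        # Stage 2: class segmentation + instance geometry in the top-down view.
        self.trunk2 = nn.Sequential(
            nn.Conv2d(n_classes, 32, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Conv2d(32, n_classes, 1)
        self.geom_head = nn.Conv2d(32, n_geom, 1)  # e.g. center, size, yaw

    def forward(self, perspective, topdown_hw=(64, 64)):
        persp_seg = self.stage1(perspective)
        # Placeholder for the perspective-to-top-down projection; a real
        # system would scatter features along known 3D rays instead.
        bev = F.interpolate(persp_seg, size=topdown_hw, mode="bilinear",
                            align_corners=False)
        feats = self.trunk2(bev)
        return persp_seg, self.cls_head(feats), self.geom_head(feats)

# Toy usage with a random perspective-view input.
net = MultiViewPerception()
seg1, seg2, geom = net(torch.randn(1, 3, 32, 128))
print(seg1.shape, seg2.shape, geom.shape)
```
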
  • Publication number: 20240061075
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 24, 2023
    Publication date: February 22, 2024
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
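
A minimal sketch of the preprocessing chain named in the abstract above: detections from several RADAR frames are ego-motion-compensated into the current vehicle frame and orthographically projected (binned) into a bird's-eye-view grid that a network trunk could consume. The 2D rigid-transform poses and grid parameters are illustrative assumptions:

```python
import numpy as np

def radar_bev(frames, poses, grid=128, cell=0.5):
    """Accumulate RADAR detections from several time steps into one
    top-down (bird's-eye-view) grid.

    frames: list of (N_i, 2) arrays of x, y detections, each in its own
            frame's sensor coordinates.
    poses:  list of (theta, tx, ty) rigid transforms taking each frame
            into the current ego frame (the ego-motion compensation).
    """
    bev = np.zeros((grid, grid), dtype=np.float32)
    for pts, (theta, tx, ty) in zip(frames, poses):
        c, s = np.cos(theta), np.sin(theta)
        xy = pts @ np.array([[c, s], [-s, c]]) + (tx, ty)  # compensate motion
        ij = np.floor(xy / cell).astype(int) + grid // 2   # orthographic bin
        ok = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
        bev[ij[ok, 1], ij[ok, 0]] += 1.0                   # accumulate hits
    return bev
```
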
  • Patent number: 11906660
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 20, 2024
    Assignee: NVIDIA Corporation
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
  • Patent number: 11885907
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: January 30, 2024
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20240029447
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: October 6, 2023
    Publication date: January 25, 2024
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11769599
    Abstract: A system and method are provided for use in evaluating a clinical guideline which is represented in a machine-readable version by a decision tree comprising at least one node and a decision rule associated with the node. The decision rule comprises at least one variable representing a biomedical quantity. The biomedical quantity is extracted from the patient data using an ontology which defines concepts and their relationships in a medical domain of the clinical guideline and which thereby relates the variable of the decision rule to the patient data. If said extraction is not possible, a view of the patient data is presented to the user to enable the user to determine the biomedical quantity from the view. Advantageously, the user is assisted in evaluating the clinical guideline even when it is not possible to automatically extract the biomedical quantity from the patient data.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: September 26, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Tilman Wekel, Alexandra Groth, Rolf Jürgen Weese
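
A minimal sketch of the evaluation flow described in the abstract above: each node's decision rule needs a biomedical quantity, which the system first tries to extract automatically via an ontology lookup, falling back to presenting a view and asking the user. The dictionary-based ontology and console prompt are illustrative stand-ins for the patented components:

```python
def get_quantity(name, patient_data, ontology):
    """Extract a biomedical quantity, or fall back to asking the user."""
    key = ontology.get(name)          # ontology maps rule variable -> field
    if key is not None and key in patient_data:
        return patient_data[key]      # automatic extraction succeeded
    # Fallback: present the relevant view and let the user read the value.
    print(f"Please determine '{name}' from the patient record view.")
    return float(input(f"{name} = "))

def evaluate(node, patient_data, ontology):
    """Walk the guideline decision tree down to a leaf recommendation."""
    if "recommendation" in node:
        return node["recommendation"]
    value = get_quantity(node["variable"], patient_data, ontology)
    branch = "yes" if node["predicate"](value) else "no"
    return evaluate(node[branch], patient_data, ontology)

# Toy guideline: one node, one rule, two leaves.
tree = {"variable": "ejection fraction",
        "predicate": lambda ef: ef < 40,
        "yes": {"recommendation": "consider heart-failure pathway"},
        "no": {"recommendation": "routine follow-up"}}
ontology = {"ejection fraction": "echo.lvef"}
print(evaluate(tree, {"echo.lvef": 55}, ontology))
```
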
  • Patent number: 11617561
    Abstract: An ultrasound imaging system is provided for determining stroke volume and/or cardiac output. The imaging system may include a transducer unit for acquiring ultrasound data of a heart of a subject (or an input for receiving the acquired ultrasound data), and a controller. The controller is adapted to implement a two-step procedure, the first step being an initial assessment step, and the second being an imaging step having two possible modes depending upon the outcome of the assessment. In the initial assessment procedure, it is determined whether regurgitant ventricular flow is present. This is performed using Doppler processing techniques applied to an initial ultrasound data set. If regurgitant flow does not exist, stroke volume is determined using segmentation of 3D ultrasound image data to identify and measure the volume of the left or right ventricle at each of end-systole and end-diastole, the difference between them giving a measure of stroke volume.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: April 4, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Balasundar Iyyavu Raju, Peter Bingley, Frank Michael Weber, Jonathan Thomas Sutton, Tilman Wekel, Arthur Bouwman, Erik Korsten
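
For the non-regurgitant branch described above, the volumetric arithmetic is simply stroke volume = end-diastolic volume minus end-systolic volume, and cardiac output = stroke volume times heart rate. A minimal sketch with illustrative units and values:

```python
def stroke_volume(edv_ml, esv_ml):
    """Stroke volume from segmented ventricle volumes at end-diastole
    and end-systole (millilitres)."""
    return edv_ml - esv_ml

def cardiac_output(edv_ml, esv_ml, heart_rate_bpm):
    """Cardiac output in litres per minute."""
    return stroke_volume(edv_ml, esv_ml) * heart_rate_bpm / 1000.0

# Typical adult values: EDV 120 ml, ESV 50 ml, HR 70 bpm
# -> SV 70 ml, CO 4.9 l/min.
print(stroke_volume(120, 50), cardiac_output(120, 50, 70))
```
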
  • Patent number: 11593691
    Abstract: An information retrieval system (IPS) is provided. The system comprises an input interface (IN) for receiving a query related to an object of interest. A concept mapper (CM) is configured to map the query to one or more associated concept entries of a hierarchic graph data structure (ONTO). The entries in said structure encode linguistic descriptors of components of a model (GM) for said object (OB). A metric-mapper (MM) is configured to map the query to one or more metric relationship descriptors. A geo-mapper (GEO) is configured to map said concept entries against the geometric model linked to the hierarchic graph data structure to obtain spatio-numerical data associated with said linguistic descriptors. A metric component (MTC) is configured to compute one or more metric or spatial relationships between said object components based on the spatio-numerical data and the one or more metric relationship descriptors.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: February 28, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Rolf Jürgen Weese, Alexandra Groth, Tilman Wekel, Vincent Maurice Andre Auvray, Raoul Florent, Romane Isabelle Marie-Bernard Gauriau
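
A minimal sketch of the query flow described in the abstract above: query terms are mapped to concept entries, the concepts are resolved through the hierarchic structure to spatio-numerical data in a geometric model, and a metric component computes the requested relationship (here, a Euclidean distance). Every name and coordinate below is an illustrative placeholder:

```python
import numpy as np

# Hierarchic graph data structure: linguistic descriptor -> model component.
ONTOLOGY = {"aortic valve": "valve_av", "mitral valve": "valve_mv"}
# Geometric model: component -> spatio-numerical data (3D position, mm).
GEOMETRY = {"valve_av": np.array([0.0, 0.0, 0.0]),
            "valve_mv": np.array([28.0, 5.0, -3.0])}

def answer_query(query):
    """Resolve 'distance between A and B' style queries against the model."""
    concepts = [c for c in ONTOLOGY if c in query]        # concept mapper
    if "distance" in query and len(concepts) == 2:        # metric mapper
        a, b = (GEOMETRY[ONTOLOGY[c]] for c in concepts)  # geo-mapper
        return float(np.linalg.norm(a - b))               # metric component
    raise ValueError("query not understood")

print(answer_query("distance between aortic valve and mitral valve"))
```
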
  • Patent number: 11583258
    Abstract: The invention provides an ultrasound processing unit. A controller (18) of the unit is adapted to receive ultrasound data of an anatomical region, for example of the heart. The controller processes the ultrasound data over a period of time to monitor and detect whether the alignment of a particular anatomical feature (34) represented in the data, relative to a field of view (36) of the transducer unit, is changing over time. In the event that the alignment is changing, the controller generates an output signal for communicating this to a user, allowing the user to be alerted at an early stage to the likelihood of misalignment and loss of imaging or measurement capability.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: February 21, 2023
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Frank Michael Weber, Tilman Wekel, Balasundar Iyyavu Raju, Jonathan Thomas Sutton, Peter Bingley
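
A minimal sketch of the drift detection described in the abstract above: the tracked feature's offset from the field-of-view centre is fitted with a linear trend over a sliding window, and an alert is raised when the trend exceeds a threshold. The window, threshold, and trend test are illustrative assumptions, not the patented criterion:

```python
import numpy as np

def alignment_drifting(offsets, threshold=0.02):
    """Return True if the feature's offset from the field-of-view centre
    shows a sustained trend (units per frame) above `threshold`.

    offsets: 1D array of per-frame distances between the tracked feature
             and the centre of the transducer's field of view.
    """
    t = np.arange(len(offsets))
    slope = np.polyfit(t, offsets, 1)[0]  # linear trend of the offset
    return abs(slope) > threshold

# Offset creeping outward by about 0.05/frame -> early misalignment warning.
print(alignment_drifting(np.array([0.1, 0.16, 0.2, 0.26, 0.31])))
```
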
  • Publication number: 20230049567
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Patent number: 11531088
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20220277193
    Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Tilman Wekel, Joachim Pehserl, Jacob Meyer, Jake Guza, Anton Mitrokhin, Richard Whitcomb, Marco Scoffier, David Nister, Grant Monroe
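
A minimal sketch of the synchronization step described in the abstract above: for each frame of a reference sensor, the nearest-in-time frame of another sensor is selected, within a tolerance, so that the aligned frames represent a similar world state. The tolerance and nearest-neighbour policy are illustrative assumptions:

```python
import bisect

def align_frames(reference_ts, other_ts, tolerance=0.05):
    """For each reference timestamp, pick the closest timestamp from the
    other sensor, or None if nothing lies within `tolerance` seconds.
    Both lists must be sorted in ascending order."""
    aligned = []
    for t in reference_ts:
        i = bisect.bisect_left(other_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_ts)]
        if not candidates:               # other sensor produced no frames
            aligned.append(None)
            continue
        best = min(candidates, key=lambda j: abs(other_ts[j] - t))
        aligned.append(other_ts[best]
                       if abs(other_ts[best] - t) <= tolerance else None)
    return aligned

# LiDAR timestamps at 10 Hz against slightly offset camera timestamps;
# the last LiDAR frame has no camera frame within tolerance.
print(align_frames([0.0, 0.1, 0.2], [0.01, 0.12, 0.35]))
```
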
  • Publication number: 20210342609
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210342608
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210196228
    Abstract: An ultrasound imaging system is provided for determining stroke volume and/or cardiac output. The imaging system may include a transducer unit for acquiring ultrasound data of a heart of a subject (or an input for receiving the acquired ultrasound data), and a controller. The controller is adapted to implement a two-step procedure, the first step being an initial assessment step, and the second being an imaging step having two possible modes depending upon the outcome of the assessment. In the initial assessment procedure, it is determined whether regurgitant ventricular flow is present. This is performed using Doppler processing techniques applied to an initial ultrasound data set. If regurgitant flow does not exist, stroke volume is determined using segmentation of 3D ultrasound image data to identify and measure the volume of the left or right ventricle at each of end-systole and end-diastole, the difference between them giving a measure of stroke volume.
    Type: Application
    Filed: October 16, 2018
    Publication date: July 1, 2021
    Inventors: Balasundar Iyyavu Raju, Peter Bingley, Frank Michael Weber, Jonathan Thomas Sutton, Tilman Wekel, Arthur Bouwman, Erik Korsten