Patents by Inventor Tilman Wekel
Tilman Wekel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11583258
Abstract: The invention provides an ultrasound processing unit. A controller (18) of the unit is adapted to receive ultrasound data of an anatomical region, for example of the heart. The controller processes the ultrasound data over a period of time to monitor and detect whether alignment of a particular anatomical feature (34) represented in the data relative to a field of view (36) of the transducer unit is changing over time. In the event that the alignment is changing, the controller generates an output signal for communicating this to a user, allowing a user to be alerted at an early stage to the likelihood of misalignment and loss of imaging or measurement capability.
Type: Grant
Filed: April 9, 2019
Date of Patent: February 21, 2023
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Frank Michael Weber, Tilman Wekel, Balasundar Iyyavu Raju, Jonathan Thomas Sutton, Peter Bingley
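The drift monitoring described in this abstract can be sketched as a trend test on a per-frame alignment offset. The offset metric, window length, and slope threshold below are illustrative assumptions, not details from the patent:

```python
def detect_alignment_drift(offsets, window=5, threshold=0.1):
    """Return True if the feature-to-field-of-view offset shows a
    sustained increasing trend over the most recent `window` samples.

    offsets: per-frame distance of the tracked feature from the
    field-of-view centre (arbitrary units, one value per frame).
    """
    if len(offsets) < window:
        return False  # not enough history to judge a trend
    recent = offsets[-window:]
    # Least-squares slope over the window serves as the drift estimate.
    n = len(recent)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(recent) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope > threshold
```

A steadily growing offset would trigger the output signal early, before the feature actually leaves the field of view.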
-
Publication number: 20230049567
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: October 28, 2022
Publication date: February 16, 2023
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
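The label-propagation step in this abstract (dropping LIDAR-derived labels that contain too few RADAR detections) can be sketched as follows. The 2D box representation, point format, and threshold value are illustrative assumptions, not details from the publication:

```python
def propagate_lidar_labels(lidar_boxes, radar_points, min_detections=3):
    """Keep only LIDAR-derived ground-truth boxes that contain at least
    `min_detections` RADAR points; the kept boxes become RADAR labels.

    lidar_boxes: axis-aligned boxes as (xmin, ymin, xmax, ymax).
    radar_points: (x, y) RADAR detections from the same time slice.
    """
    kept = []
    for (xmin, ymin, xmax, ymax) in lidar_boxes:
        # Count RADAR detections falling inside this LIDAR label.
        hits = sum(1 for (x, y) in radar_points
                   if xmin <= x <= xmax and ymin <= y <= ymax)
        if hits >= min_detections:
            kept.append((xmin, ymin, xmax, ymax))
    return kept
```

Filtering this way avoids training the RADAR network on objects the RADAR sensor never actually observed.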
-
Publication number: 20220415059
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
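The chained multi-view stages in this abstract amount to function composition across views: segment in one view, re-project, refine in the other. A toy sketch of that control flow, where the stage callables are placeholders rather than the constituent DNNs from the publication:

```python
def multi_view_pipeline(perspective_input, stage1, view_transform, stage2):
    """Chain two per-view stages: stage1 segments in the perspective
    view, the result is re-projected into a second view (e.g. top-down),
    and stage2 refines or regresses geometry in that view.

    All three callables stand in for the constituent networks and the
    geometric re-projection between views.
    """
    perspective_seg = stage1(perspective_input)
    second_view = view_transform(perspective_seg)
    return stage2(second_view)
```

The value of the chain is that each stage operates in the view where its task is easiest (class boundaries in perspective, instance extents top-down).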
-
Patent number: 11531088
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Grant
Filed: March 31, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA CORPORATION
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Patent number: 11532168
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA CORPORATION
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20220277193
Abstract: An annotation pipeline may be used to produce 2D and/or 3D ground truth data for deep neural networks, such as autonomous or semi-autonomous vehicle perception networks. Initially, sensor data may be captured with different types of sensors and synchronized to align frames of sensor data that represent a similar world state. The aligned frames may be sampled and packaged into a sequence of annotation scenes to be annotated. An annotation project may be decomposed into modular tasks and encoded into a labeling tool, which assigns tasks to labelers and arranges the order of inputs using a wizard that steps through the tasks. During the tasks, each type of sensor data in an annotation scene may be simultaneously presented, and information may be projected across sensor modalities to provide useful contextual information. After all annotation tasks have been completed, the resulting ground truth data may be exported in any suitable format.
Type: Application
Filed: February 26, 2021
Publication date: September 1, 2022
Inventors: Tilman Wekel, Joachim Pehserl, Jacob Meyer, Jake Guza, Anton Mitrokhin, Richard Whitcomb, Marco Scoffier, David Nister, Grant Monroe
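The synchronization step in this abstract (aligning frames from different sensors that represent a similar world state) is commonly done by nearest-timestamp matching. A minimal sketch, where the sensor names, timestamp units, and gap threshold are assumptions rather than details from the publication:

```python
def align_frames(camera_stamps, lidar_stamps, max_gap=0.05):
    """Pair each camera frame with the closest LIDAR frame in time,
    dropping pairs whose gap exceeds `max_gap` seconds.

    camera_stamps, lidar_stamps: per-frame capture times in seconds.
    Returns a list of (camera_index, lidar_index) pairs.
    """
    pairs = []
    for ci, ct in enumerate(camera_stamps):
        # Index of the LIDAR frame closest in time to this camera frame.
        li = min(range(len(lidar_stamps)),
                 key=lambda i: abs(lidar_stamps[i] - ct))
        if abs(lidar_stamps[li] - ct) <= max_gap:
            pairs.append((ci, li))
    return pairs
```

Frames that cannot be matched within the gap are excluded from annotation scenes, since they no longer describe the same world state.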
-
Publication number: 20210342608
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210342609
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210196228
Abstract: An ultrasound imaging system is provided for determining stroke volume and/or cardiac output. The imaging system may include a transducer unit for acquiring ultrasound data of a heart of a subject (or an input for receiving the acquired ultrasound data), and a controller. The controller is adapted to implement a two-step procedure, the first step being an initial assessment step, and the second being an imaging step having two possible modes depending upon the outcome of the assessment. In the initial assessment procedure, it is determined whether regurgitant ventricular flow is present. This is performed using Doppler processing techniques applied to an initial ultrasound data set. If regurgitant flow does not exist, stroke volume is determined using segmentation of 3D ultrasound image data to identify and measure the volume of the left or right ventricle at each of end-systole and end-diastole, the difference between them giving a measure of stroke volume.
Type: Application
Filed: October 16, 2018
Publication date: July 1, 2021
Inventors: Balasundar Iyyavu RAJU, Peter BINGLEY, Frank Michael WEBER, Jonathan Thomas SUTTON, Tilman WEKEL, Arthur BOUWMAN, Erik KORSTEN
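The volume arithmetic referenced in this abstract follows the standard hemodynamic definitions: stroke volume is end-diastolic volume minus end-systolic volume, and cardiac output is stroke volume times heart rate. A minimal sketch (the segmentation that produces the volumes is outside its scope):

```python
def stroke_volume(edv_ml, esv_ml):
    """Stroke volume (mL) = end-diastolic volume minus end-systolic
    volume, both obtained here from ventricle segmentation."""
    return edv_ml - esv_ml

def cardiac_output(edv_ml, esv_ml, heart_rate_bpm):
    """Cardiac output (L/min) = stroke volume (mL) x heart rate (bpm),
    converted from mL/min to L/min."""
    return stroke_volume(edv_ml, esv_ml) * heart_rate_bpm / 1000.0
```

For typical values (EDV 120 mL, ESV 50 mL, 70 bpm) this gives a stroke volume of 70 mL and a cardiac output of 4.9 L/min. Note the abstract's point: these formulas are only valid when no regurgitant flow is present, which is why the Doppler assessment step comes first.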
-
Publication number: 20210174936
Abstract: The present invention relates to a device and method for sketch template generation or adaptation. To provide an efficient link between the image data and the template sketch, the device comprises a model input (40) configured to obtain a patient-adapted anatomical model, a parameter extraction unit (41) configured to extract a set of model parameters from the patient-adapted model, and a sketch unit (42) configured to generate or adapt a sketch template based on the extracted set of model parameters.
Type: Application
Filed: March 30, 2018
Publication date: June 10, 2021
Inventors: TILMAN WEKEL, ROLF JÜRGEN WEESE, HEIKE RUPPERTSHOFEN
-
Publication number: 20210156963
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Publication number: 20210156960
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
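The preprocessing chain named in this abstract (accumulate, ego-motion-compensate, orthographically project) can be sketched for 2D detections. The pose representation, grid size, and cell size below are illustrative assumptions, not values from the publication:

```python
import math

def compensate_and_project(detections, ego_motion, grid_size=100, cell_m=1.0):
    """Ego-motion-compensate accumulated RADAR detections and bin them
    into a top-down orthographic grid centred on the ego vehicle.

    detections: (x, y) points in the frame where they were captured.
    ego_motion: (dx, dy, dtheta) ego translation/rotation since capture.
    Returns the set of occupied (row, col) grid cells.
    """
    dx, dy, dtheta = ego_motion
    cos_t, sin_t = math.cos(-dtheta), math.sin(-dtheta)
    occupied = set()
    for (x, y) in detections:
        # Undo the ego translation and rotation to express the point
        # in the current vehicle frame.
        xr = cos_t * (x - dx) - sin_t * (y - dy)
        yr = sin_t * (x - dx) + cos_t * (y - dy)
        # Orthographic (top-down) projection: bin into grid cells.
        row = int(yr / cell_m) + grid_size // 2
        col = int(xr / cell_m) + grid_size // 2
        if 0 <= row < grid_size and 0 <= col < grid_size:
            occupied.add((row, col))
    return occupied
```

The resulting grid is what a convolutional trunk can consume; accumulating several sweeps before projection densifies the otherwise sparse RADAR input.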
-
Publication number: 20210150230
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210145413
Abstract: The invention provides an ultrasound processing unit. A controller (18) of the unit is adapted to receive ultrasound data of an anatomical region, for example of the heart. The controller processes the ultrasound data over a period of time to monitor and detect whether alignment of a particular anatomical feature (34) represented in the data relative to a field of view (36) of the transducer unit is changing over time. In the event that the alignment is changing, the controller generates an output signal for communicating this to a user, allowing a user to be alerted at an early stage to the likelihood of misalignment and loss of imaging or measurement capability.
Type: Application
Filed: April 9, 2019
Publication date: May 20, 2021
Inventors: Frank Michael Weber, Tilman Wekel, Balasundar Iyyavu Raju, Jonathan Thomas Sutton, Peter Bingley
-
Publication number: 20210063578
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
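The 2D LiDAR range images mentioned in this abstract are typically built by binning 3D points by azimuth and elevation. A minimal sketch of that projection; the image dimensions and beam limits (roughly -25 to +10 degrees) are assumptions, not values from the publication:

```python
import math

def to_range_image(points, width=360, height=32,
                   elev_min=-0.4363, elev_max=0.1745):
    """Project 3D LiDAR points (x, y, z) into a 2D range image indexed
    by (elevation row, azimuth column); each cell keeps the nearest
    range. Returned as a sparse dict {(row, col): range_m}."""
    image = {}
    for (x, y, z) in points:
        rng = math.sqrt(x * x + y * y + z * z)
        if rng == 0:
            continue
        azimuth = math.atan2(y, x)        # -pi .. pi around the sensor
        elevation = math.asin(z / rng)    # angle above the horizon
        col = int((azimuth + math.pi) / (2 * math.pi) * width) % width
        frac = (elevation - elev_min) / (elev_max - elev_min)
        row = int(frac * (height - 1))
        if 0 <= row < height:             # drop points outside the beams
            key = (row, col)
            if key not in image or rng < image[key]:
                image[key] = rng
    return image
```

Working in this 2D image domain is what makes it natural to propagate camera annotations onto LiDAR data, and to apply image-style segmentation networks before fusing the outputs back into the 3D point cloud.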
-
Publication number: 20210026355
Abstract: A deep neural network(s) (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
Type: Application
Filed: July 24, 2020
Publication date: January 28, 2021
Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
-
Publication number: 20190267142
Abstract: The present invention relates to a system, a corresponding method and computer program for assessing outflow tract obstruction of a heart of a subject, the system comprising a unit (10) for providing a geometrical model of the heart, a unit (20) for providing a volumetric image of the heart, a unit (30) for adapting the geometrical model to the volumetric image to obtain an adapted model (32), a unit (40) for providing an implant model (44) of an implant and for constructing the implant model (44) into the adapted model (32) to obtain an enhanced model (42), a unit (60) for determining a trajectory curve (62) through an outflow tract (52), and a unit (50) for assessing outflow tract obstruction based on the adapted model (32), the enhanced model (42) and the trajectory curve (62). The invention allows for an improved assessing of an outflow tract obstruction of a heart of a subject.
Type: Application
Filed: September 15, 2017
Publication date: August 29, 2019
Inventors: Tilman WEKEL, Thomas Heiko STEHLE, Rolf Jürgen WEESE
-
Publication number: 20190164075
Abstract: An information retrieval system (IPS). The system comprises an input interface (IN) for receiving a query related to an object of interest. A concept mapper (CM) is configured to map the query to one or more associated concept entries of a hierarchic graph data structure (ONTO). The entries in said structure encode linguistic descriptors of components of a model (GM) for said object (OB). A metric-mapper (MM) is configured to map the query to one or more metric relationship descriptors. A geo-mapper (GEO) is configured to map said concept entries against the geometric model linked to the hierarchic graph data structure to obtain spatio-numerical data associated with said linguistic descriptors. A metric component (MTC) is configured to compute one or more metric or spatial relationships between said object components based on the spatio-numerical data and the one or more metric relationship descriptors.
Type: Application
Filed: June 30, 2017
Publication date: May 30, 2019
Applicant: KONINKLIJKE PHILIPS N.V.
Inventors: Rolf Jurgen Weese, Alexandra Groth, Tilman Wekel, Vincent Maurice Andre Auvray, Raoul Florent, Romane Isabelle Marie-Bernard Gauriau
-
Publication number: 20190139647
Abstract: A system and method are provided for use in evaluating a clinical guideline which is represented in a machine readable version by a decision tree comprising at least one node and a decision rule associated with the node. The decision rule comprises at least one variable representing a biomedical quantity. The biomedical quantity is extracted from the patient data using an ontology which defines concepts and their relationships in a medical domain of the clinical guideline and which thereby relates the variable of the decision rule to the patient data. If said extraction is not possible, a view of the patient data is presented to the user to enable the user to determine the biomedical quantity from the view. Advantageously, the user is assisted in evaluating the clinical guideline even when it is not possible to automatically extract the biomedical quantity from the patient data.
Type: Application
Filed: June 27, 2017
Publication date: May 9, 2019
Inventors: TILMAN WEKEL, ALEXANDRA GROTH, ROLF Jürgen WEESE
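The evaluate-with-fallback behaviour described in this abstract (extract each variable automatically where possible, otherwise let the user determine it from a view of the patient data) can be sketched as a tree walk. The node schema, variable names, and rule shapes below are illustrative assumptions, not the patent's representation:

```python
def evaluate_guideline(node, patient_data, ask_user):
    """Walk a guideline decision tree. A node is either a string (a
    recommendation leaf) or a dict with a 'variable' name, a 'rule'
    (predicate on the variable's value), and 'yes'/'no' subtrees.

    When the variable cannot be extracted from `patient_data`,
    `ask_user` is called so the user can read the quantity off a
    presented view of the record."""
    while isinstance(node, dict):
        value = patient_data.get(node["variable"])
        if value is None:
            value = ask_user(node["variable"])  # manual fallback
        node = node["yes"] if node["rule"](value) else node["no"]
    return node
```

A toy tree with one node shows both paths: when the quantity (here a hypothetical `lvef` value) is present it is used directly; when it is missing, the user-supplied value decides the branch.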