Patents by Inventor Christian Nunn
Christian Nunn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12188780
Abstract: A computer implemented method for determining a location of an object comprises the following steps carried out by computer hardware components: determining a pre-stored map of a vicinity of the object; acquiring sensor data related to the vicinity of the object; determining an actual map based on the acquired sensor data; carrying out image retrieval based on the pre-stored map and the actual map; carrying out image registration based on the image retrieval; and determining a location of the object based on the image registration.
Type: Grant
Filed: January 6, 2022
Date of Patent: January 7, 2025
Assignee: Aptiv Technologies AG
Inventors: Mirko Meuter, Christian Nunn, Weimeng Zhu, Florian Kaestner, Adrian Becker, Markus Schoeler
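The abstract above localizes an object by registering a live sensor-derived map against a pre-stored map. As a minimal illustrative sketch only (not the patented implementation), translation-only registration can be done with FFT phase correlation; the occupancy-grid input, the function names, and the cell-size parameter are assumptions:

```python
import numpy as np

def register_translation(pre_stored, actual):
    """Estimate the integer 2D shift between two occupancy-grid maps
    via FFT phase correlation, a standard image-registration method."""
    f0 = np.fft.fft2(pre_stored)
    f1 = np.fft.fft2(actual)
    cross = f1 * np.conj(f0)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

def localize(map_origin, cell_size, shift):
    """Convert the estimated grid shift into a metric location."""
    return (map_origin[0] + shift[0] * cell_size,
            map_origin[1] + shift[1] * cell_size)
```

On clean synthetic grids the phase-correlation peak lands exactly on the true integer shift; real sensor maps would need sub-pixel refinement and an outlier-robust front end.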
-
Patent number: 11804026
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
Type: Grant
Filed: October 19, 2022
Date of Patent: October 31, 2023
Assignee: Aptiv Technologies Limited
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
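The recurrence described above warps the past output through a generated sampling grid and blends it with the new input. A toy single-channel sketch in NumPy, with nearest-neighbour sampling and a fixed blend weight standing in for the learned components (all names and the constant-weight simplification are mine, not from the patent):

```python
import numpy as np

def sample_with_grid(prev_output, grid):
    """Nearest-neighbour sampling of the previous output; grid[i, j]
    holds the (row, col) coordinate to read from."""
    h, w = prev_output.shape
    rows = np.clip(grid[..., 0].round().astype(int), 0, h - 1)
    cols = np.clip(grid[..., 1].round().astype(int), 0, w - 1)
    return prev_output[rows, cols]

def recurrent_step(x_t, prev_output, flow, weight=0.7):
    """One warp-and-blend step: build a sampling grid from a flow
    field, sample the past output along it, then mix with the input."""
    h, w = x_t.shape
    base = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                indexing="ij"), axis=-1)
    grid = base + flow            # the "sampling grid" of the abstract
    warped = sample_with_grid(prev_output, grid)
    return weight * warped + (1.0 - weight) * x_t
```

In the patented device both the flow (via the grid-generation sequence) and the blend would be produced by the convolutional network rather than supplied as constants.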
-
Publication number: 20230236030
Abstract: Provided is a computer-implemented method of determining a point of interest and/or a road type in a map, comprising the steps of: acquiring processed sensor data collected from one or more vehicles; extracting from the processed sensor data a set of classification parameters; and determining, based on the set of classification parameters, one or more points of interest (POI) and their geographic locations and/or one or more road types.
Type: Application
Filed: January 20, 2023
Publication date: July 27, 2023
Inventors: Christian Nunn, Dennis Mueller, Lutz Roese-Koerner, Pascal Colling
-
Publication number: 20230120299
Abstract: This disclosure describes systems and techniques for processing radar sensor data. The systems and techniques include acquiring radar sensor data from a radar sensor and processing the radar sensor data by, for example, an artificial neural network to obtain at least one of range radar data or Doppler radar data.
Type: Application
Filed: October 17, 2022
Publication date: April 20, 2023
Inventor: Christian Nunn
-
Publication number: 20230104196
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
Type: Application
Filed: October 19, 2022
Publication date: April 6, 2023
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
-
Publication number: 20230037900
Abstract: The present disclosure is directed at systems and methods for determining objects around a vehicle. In aspects, a system includes a sensor unit having at least one radar sensor arranged and configured to obtain radar image data of external surroundings to determine objects around a vehicle. The system further includes a processing unit adapted to process the radar image data to generate a top view image of the external surroundings of the vehicle. The top view image is configured to be displayed on a display unit and useful to indicate a relative position of the vehicle with respect to determined objects.
Type: Application
Filed: August 4, 2022
Publication date: February 9, 2023
Inventors: Mirko Meuter, Christian Nunn, Jan Siegemund, Jittu Kurian, Alessandro Cennamo, Marco Braun, Dominic Spata
-
Patent number: 11521059
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
Type: Grant
Filed: April 3, 2019
Date of Patent: December 6, 2022
Assignee: Aptiv Technologies Limited
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
-
Publication number: 20220383146
Abstract: A method is provided for training a machine-learning algorithm which relies on primary data captured by at least one primary sensor. Labels are identified based on auxiliary data provided by at least one auxiliary sensor. A care attribute or a no-care attribute is assigned to each label by determining a perception capability of the primary sensor for the label based on the primary data and based on the auxiliary data. Model predictions for the labels are generated via the machine-learning algorithm. A loss function is defined for the model predictions. Negative contributions to the loss function are permitted for all labels. Positive contributions to the loss function are permitted for labels having a care attribute, while positive contributions to the loss function for labels having a no-care attribute are permitted only if a confidence of the model prediction for the respective label is greater than a threshold.
Type: Application
Filed: May 31, 2022
Publication date: December 1, 2022
Inventors: Markus Schoeler, Jan Siegemund, Christian Nunn, Yu Su, Mirko Meuter, Adrian Becker, Peet Cremer
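A hedged sketch of the gated loss described above, using binary cross-entropy as the concrete loss (the abstract does not name one): negative terms always count, while positive terms for no-care labels count only when the prediction is already confident, so the network is not punished for missing labels the primary sensor likely could not perceive. Function and parameter names are illustrative:

```python
import numpy as np

def care_aware_bce(pred, target, care, threshold=0.8, eps=1e-7):
    """Binary cross-entropy with care/no-care gating of the
    positive (target == 1) contributions."""
    pred = np.clip(pred, eps, 1.0 - eps)
    neg = -(1.0 - target) * np.log(1.0 - pred)   # permitted for all labels
    pos = -target * np.log(pred)
    # Positive terms count for care labels, and for no-care labels
    # only when the model's confidence exceeds the threshold.
    pos_mask = care | (pred > threshold)
    return float(np.sum(neg + pos * pos_mask))
```

With this gating, a confident detection on a no-care label still receives a training signal, while an unsure one is simply ignored rather than penalized.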
-
Publication number: 20220221303
Abstract: A computer implemented method for determining a location of an object comprises the following steps carried out by computer hardware components: determining a pre-stored map of a vicinity of the object; acquiring sensor data related to the vicinity of the object; determining an actual map based on the acquired sensor data; carrying out image retrieval based on the pre-stored map and the actual map; carrying out image registration based on the image retrieval; and determining a location of the object based on the image registration.
Type: Application
Filed: January 6, 2022
Publication date: July 14, 2022
Inventors: Mirko Meuter, Christian Nunn, Weimeng Zhu, Florian Kaestner, Adrian Becker, Markus Schoeler
-
Patent number: 11195038
Abstract: A device for extracting dynamic information comprises a convolutional neural network, wherein the device is configured to receive a sequence of data blocks acquired over time, each of said data blocks comprising a multi-dimensional representation of a scene. The convolutional neural network is configured to receive the sequence as input and to output dynamic information on the scene in response, wherein the convolutional neural network comprises a plurality of modules, and wherein each of said modules is configured to carry out a specific processing task for extracting the dynamic information.
Type: Grant
Filed: April 3, 2019
Date of Patent: December 7, 2021
Assignee: Aptiv Technologies Limited
Inventors: Christian Nunn, Weimeng Zhu, Yu Su
-
Patent number: 11093762
Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
Type: Grant
Filed: May 8, 2019
Date of Patent: August 17, 2021
Assignee: Aptiv Technologies Limited
Inventors: Jan Siegemund, Christian Nunn
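Conceptually, the time-to-contact of a sub-region follows from its scale change between frames, TTC ≈ Δt / (s − 1), and the spread of per-sub-region TTCs is a natural classification feature: an upright obstacle yields near-uniform TTCs, while road texture yields a spread of values. A minimal sketch under those assumptions; the feature names are illustrative and not taken from the patent:

```python
import numpy as np

def ttc_from_scale(scale_change, dt):
    """Time-to-contact from the relative scale change s of an image
    region between consecutive frames: TTC ~ dt / (s - 1)."""
    return dt / (scale_change - 1.0)

def ttc_features(sub_region_scales, dt):
    """Per-sub-region TTCs reduced to simple classification features:
    the mean TTC and the spread across sub-regions."""
    ttcs = np.array([ttc_from_scale(s, dt) for s in sub_region_scales])
    return {"mean_ttc": float(ttcs.mean()), "ttc_spread": float(ttcs.std())}
```

A downstream classifier (step C of the abstract) would then threshold or learn on such features; how the scales themselves are measured (e.g. by local image alignment) is outside this sketch.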
-
Patent number: 10943131
Abstract: An image processing method includes: determining a candidate track in an image of a road, wherein the candidate track is modelled as a parameterized line or curve corresponding to a candidate lane marking in the image of a road; dividing the candidate track into a plurality of cells, each cell corresponding to a segment of the candidate track; determining at least one marklet for a plurality of said cells, wherein each marklet of a cell corresponds to a line or curve connecting left and right edges of the candidate lane marking; determining at least one local feature of each of said plurality of cells based on characteristics of said marklets; determining at least one global feature of the candidate track by aggregating the local features of the plurality of cells; and determining if the candidate lane marking represents a lane marking based on the at least one global feature.
Type: Grant
Filed: May 10, 2019
Date of Patent: March 9, 2021
Assignee: Aptiv Technologies Limited
Inventors: Yu Su, Andre Paus, Kun Zhao, Mirko Meuter, Christian Nunn
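An illustrative sketch of the cell/marklet pipeline described above: per-cell widths of the marking serve as local features, are averaged into global features of the track, and are then thresholded. All thresholds, units, and names here are assumptions for illustration, not values from the patent:

```python
import numpy as np

def cell_local_features(marklets):
    """Local features of one cell; each marklet is a
    (left_edge_x, right_edge_x) pair across the candidate marking."""
    widths = np.array([r - l for l, r in marklets])
    return {"mean_width": float(widths.mean()),
            "width_var": float(widths.var())}

def track_global_features(cells):
    """Aggregate per-cell local features into global track features,
    here simply by averaging."""
    return {"mean_width": float(np.mean([c["mean_width"] for c in cells])),
            "width_var": float(np.mean([c["width_var"] for c in cells]))}

def is_lane_marking(g, min_w=0.08, max_w=0.35, max_var=0.01):
    """Decide on the global features: a real marking has a plausible,
    consistent width (illustrative thresholds in metres)."""
    return (min_w <= g["mean_width"] <= max_w
            and g["width_var"] <= max_var)
```

In practice the decision stage would be a trained classifier over many such global features rather than hand-set thresholds.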
-
Publication number: 20190370566
Abstract: An image processing method includes: determining a candidate track in an image of a road, wherein the candidate track is modelled as a parameterized line or curve corresponding to a candidate lane marking in the image of a road; dividing the candidate track into a plurality of cells, each cell corresponding to a segment of the candidate track; determining at least one marklet for a plurality of said cells, wherein each marklet of a cell corresponds to a line or curve connecting left and right edges of the candidate lane marking; determining at least one local feature of each of said plurality of cells based on characteristics of said marklets; determining at least one global feature of the candidate track by aggregating the local features of the plurality of cells; and determining if the candidate lane marking represents a lane marking based on the at least one global feature.
Type: Application
Filed: May 10, 2019
Publication date: December 5, 2019
Inventors: Yu Su, Andre Paus, Kun Zhao, Mirko Meuter, Christian Nunn
-
Publication number: 20190362163
Abstract: A method for validation of an obstacle candidate identified within a sequence of image frames comprises the following steps: A. for a current image frame of the sequence of image frames, determining within the current image frame a region of interest representing the obstacle candidate, dividing the region of interest into sub-regions, and, for each sub-region, determining a Time-To-Contact (TTC) based on at least the current image frame and a preceding or succeeding image frame of the sequence of image frames; B. determining one or more classification features based on the TTCs of the sub-regions determined for the current image frame; and C. classifying the obstacle candidate based on the determined one or more classification features.
Type: Application
Filed: May 8, 2019
Publication date: November 28, 2019
Inventors: Jan Siegemund, Christian Nunn
-
Publication number: 20190325306
Abstract: A device for processing data sequences by means of a convolutional neural network is configured to carry out the following steps: receiving an input sequence comprising a plurality of data items captured over time using a sensor, each of said data items comprising a multi-dimensional representation of a scene, generating an output sequence representing the input sequence processed item-wise by the convolutional neural network, wherein generating the output sequence comprises: generating a grid-generation sequence based on a combination of the input sequence and an intermediate grid-generation sequence representing a past portion of the output sequence or the grid-generation sequence, generating a sampling grid on the basis of the grid-generation sequence, generating an intermediate output sequence by sampling from the past portion of the output sequence according to the sampling grid, and generating the output sequence based on a weighted combination of the intermediate output sequence and the input sequence.
Type: Application
Filed: April 3, 2019
Publication date: October 24, 2019
Inventors: Weimeng Zhu, Yu Su, Christian Nunn
-
Publication number: 20190325241
Abstract: A device for extracting dynamic information comprises a convolutional neural network, wherein the device is configured to receive a sequence of data blocks acquired over time, each of said data blocks comprising a multi-dimensional representation of a scene. The convolutional neural network is configured to receive the sequence as input and to output dynamic information on the scene in response, wherein the convolutional neural network comprises a plurality of modules, and wherein each of said modules is configured to carry out a specific processing task for extracting the dynamic information.
Type: Application
Filed: April 3, 2019
Publication date: October 24, 2019
Inventors: Christian Nunn, Weimeng Zhu, Yu Su
-
Publication number: 20190034714
Abstract: A system for detecting hand gestures in a 3D space comprises a 3D imaging unit and a processing unit. The processing unit generates a foreground map of the at least one 3D image by segmenting foreground from background, and a 3D sub-image of the at least one 3D image that includes the image of a hand by scaling a 2D intensity image, a depth map and a foreground map of the at least one 3D image such that the 3D sub-image has a predetermined size, and by rotating the 2D intensity image, the depth map and the foreground map of the at least one 3D image such that a principal axis of the hand is aligned to a predetermined axis in the 3D sub-image. Classifying a 3D image comprises distinguishing the hand in the 2D intensity image of the 3D sub-image from other body parts and other objects and/or verifying whether the hand has a configuration from a predetermined configuration catalogue. Further, the processing unit uses a convolutional neural network for the classification of the at least one 3D image.
Type: Application
Filed: January 31, 2017
Publication date: January 31, 2019
Inventors: Alexander Barth, Christian Nunn, Dennis Mueller
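The alignment step described above needs the principal axis of the hand. One standard way to obtain it (assumed here for illustration, not quoted from the patent) is via second-order central image moments of the foreground mask:

```python
import numpy as np

def principal_axis_angle(foreground):
    """Angle (radians) of the principal axis of a binary foreground
    mask, from second-order central image moments. The result would
    feed the rotation that aligns the hand to a predetermined axis."""
    ys, xs = np.nonzero(foreground)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = np.mean((xs - x0) ** 2)
    mu02 = np.mean((ys - y0) ** 2)
    mu11 = np.mean((xs - x0) * (ys - y0))
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```

A horizontal blob yields an angle near 0 and a vertical one near π/2, so rotating the intensity image, depth map, and foreground map by the negative of this angle aligns the blob with the horizontal axis.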
-
Patent number: 10115026
Abstract: A method for lane detection for a camera-based driver assistance system includes the following steps: image regions in images that are recorded by a camera are identified as detected lane markings if the image regions meet a specified detection criterion. At least two detected lane markings are subjected to a tracking process as lane markings to be tracked. By means of a recursive state estimator, separate progressions are estimated for at least two of the lane markings to be tracked. Furthermore, for each of a plurality of the detected lane markings, a particular offset value is determined, which indicates a transverse offset of the detected lane marking in relation to a reference axis. By means of an additional estimation method, the determined offset values are each associated with one of the separate progressions of the lane markings to be tracked.
Type: Grant
Filed: March 27, 2015
Date of Patent: October 30, 2018
Assignee: Delphi Technologies, Inc.
Inventors: Mirko Meuter, Kun Zhao, Lech J. Szumilas, Dennis Mueller, Christian Nunn
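A minimal sketch of the two estimation stages described above: a scalar Kalman filter standing in for the recursive state estimator of one marking's lateral offset, and a gated nearest-neighbour rule standing in for the additional method that associates detected offset values with tracked progressions. All noise parameters and names are assumptions:

```python
def kalman_update(state, var, meas, proc_var=0.01, meas_var=0.04):
    """One predict/update cycle of a scalar Kalman filter tracking
    the lateral offset of a single lane marking (constant-offset
    motion model)."""
    var += proc_var                      # predict step
    gain = var / (var + meas_var)        # Kalman gain
    state += gain * (meas - state)       # update with the measurement
    var *= (1.0 - gain)
    return state, var

def associate(offsets, tracks, gate=0.5):
    """Assign each detected offset to the nearest tracked progression
    within a gating distance; unmatched offsets map to None."""
    out = []
    for off in offsets:
        dists = [abs(off - t) for t in tracks]
        best = min(range(len(tracks)), key=lambda i: dists[i])
        out.append(best if dists[best] <= gate else None)
    return out
```

A production tracker would use a richer state (curvature, heading, width) and a globally optimal assignment, but the predict/associate/update loop is the same shape.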
-
Patent number: 9911056
Abstract: In a method of generating a training image for teaching of a camera-based object recognition system suitable for use on an automated vehicle which shows an object to be recognized in a natural object environment, the training image is generated as a synthetic image by a combination of a base image taken by a camera and of a template image, in that a structural feature is obtained from the base image and is replaced with a structural feature obtained from the template image by means of a shift-map algorithm.
Type: Grant
Filed: November 25, 2015
Date of Patent: March 6, 2018
Assignee: Delphi Technologies, Inc.
Inventors: Anselm Haselhoff, Dennis Mueller, Mirko Meuter, Christian Nunn
-
Publication number: 20170068862
Abstract: A method for lane detection for a camera-based driver assistance system includes the following steps: image regions in images that are recorded by a camera are identified as detected lane markings if the image regions meet a specified detection criterion. At least two detected lane markings are subjected to a tracking process as lane markings to be tracked. By means of a recursive state estimator, separate progressions are estimated for at least two of the lane markings to be tracked. Furthermore, for each of a plurality of the detected lane markings, a particular offset value is determined, which indicates a transverse offset of the detected lane marking in relation to a reference axis. By means of an additional estimation method, the determined offset values are each associated with one of the separate progressions of the lane markings to be tracked.
Type: Application
Filed: March 27, 2015
Publication date: March 9, 2017
Inventors: Mirko Meuter, Kun Zhao, Lech J. Szumilas, Dennis Mueller, Christian Nunn