Patents by Inventor Michelle KARG

Michelle KARG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240029444
    Abstract: The invention relates to correcting input image data from a plurality of cameras of a vehicle panoramic-view system. The method includes: capturing input image data by the cameras, which is negatively influenced by rain, incident light and/or dirt, and providing it to a trained artificial neural network; converting, by the trained network, the input image data into corrected output image data without the negative influence; determining a certainty measure for an image of the input image data, which depends on the degree of wetting by water, incident light and/or contamination and characterizes the certainty of the trained network that its image correction is accurate; and outputting, by the trained network, the output image data and the determined certainty measure. The method advantageously allows object recognition when cameras are fogged up, as well as generation of an image data stream for human and computer vision from a network for an optimized correspondence search.
    Type: Application
    Filed: December 3, 2021
    Publication date: January 25, 2024
    Applicant: Continental Autonomous Mobility Germany GmbH
    Inventors: Christian Scharfenberger, Michelle Karg
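
The abstract above couples corrected output image data with a certainty measure that depends on the degree of wetting, incident light, or contamination. The sketch below illustrates only that output interface; the pass-through "correction" and the linear certainty model, along with all names (`correct_and_score`, `disturbance_degree`), are illustrative assumptions, not the patented method.

```python
def correct_and_score(image, disturbance_degree):
    """Return (output_image, certainty).

    certainty expresses how sure the correction stage is that its
    image correction is accurate; here it is modeled (purely for
    illustration) as falling linearly with the disturbance degree.
    """
    certainty = max(0.0, 1.0 - disturbance_degree)
    # stand-in "correction": pass the image data through unchanged
    return image, certainty

output, certainty = correct_and_score([[10, 20], [30, 40]],
                                      disturbance_degree=0.25)
```

A downstream consumer (human display or computer vision) could then weight or discard the corrected image based on the reported certainty.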
  • Publication number: 20230394844
    Abstract: A method and a device are disclosed for avoiding accidents caused by wild animals crossing at dusk and at night, using a vehicle-mounted camera system. The method for the brightness conversion of input image data of the camera into output image data includes the following steps: a) capturing input image data of a current brightness of a roadway and an adjacent region to the side of the roadway by a vehicle-mounted camera system at dusk or at night, b) converting the input image data into output image data with a different brightness by a trained artificial neural network, and c) outputting the output image data so that it can be displayed to the driver of the vehicle for the purpose of avoiding accidents involving wild animals, or so that a wild animal can be recognized from the output image data by an image recognition function.
    Type: Application
    Filed: October 19, 2021
    Publication date: December 7, 2023
    Applicant: Continental Autonomous Mobility Germany GmbH
    Inventors: Christian Scharfenberger, Michelle Karg
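
The abstract's step b) maps night-time input brightness to a different (brighter) output brightness via a trained network. As a hedged stand-in for that network, the sketch below applies a simple gamma brightness conversion to pixel values in [0, 255]; the gamma model and the name `brighten` are illustrative assumptions, not the patented network.

```python
def brighten(pixels, gamma=0.5):
    """Convert input brightness values to a brighter output brightness.

    gamma < 1 lifts dark values more than bright ones, roughly
    mimicking the dusk-to-day conversion described in the abstract.
    """
    return [round(255 * (p / 255) ** gamma) for p in pixels]

dark_row = [16, 64, 144]       # a row of dim night-time pixels
bright_row = brighten(dark_row)
```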
  • Publication number: 20230342894
    Abstract: The present disclosure relates to a machine learning method, and to a method and a device, for converting input image data from a plurality of vehicle cameras of a panoramic-view system into optimized or enhanced output image data. The method includes providing input image data, acquired by the vehicle cameras and having a current brightness or color distribution, to a trained artificial neural network. The trained artificial neural network is configured to convert the input image data having the current brightness or color distribution into optimized or enhanced output image data having a different output brightness or color distribution, and to output the output image data.
    Type: Application
    Filed: December 9, 2020
    Publication date: October 26, 2023
    Applicant: Continental Autonomous Mobility Germany GmbH
    Inventors: Christian Scharfenberger, Michelle Karg
  • Patent number: 11565721
    Abstract: The present invention relates to a computer-implemented method and a system for testing the output of a neural network (1) having a plurality of layers (11), which detects or classifies objects. The method comprises the step (S1) of reading at least one result from at least one first layer (11) and the confidence value thereof, which is generated in the first layer (11) of a neural network (1), and the step (S2) of checking a plausibility of the result by taking into consideration the confidence value thereof so as to conclude whether the object detection by the neural network (1) is correct or false. The step (S2) of checking comprises comparing the confidence value for the result with a predefined threshold value. In the event that it is concluded in the checking step (S2) that the object detection is false, output of the object falsely detected by the neural network is prevented.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: January 31, 2023
    Inventors: Christian Scharfenberger, Michelle Karg
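
The plausibility check in steps S1/S2 above can be sketched as a threshold filter on per-detection confidence values: detections whose confidence falls below the predefined threshold are treated as false and their output is prevented. This is an illustrative stand-in, not the patented implementation; `Detection`, `plausibility_check`, and `THRESHOLD` are assumed names.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float  # confidence value read from a network layer (step S1)

THRESHOLD = 0.5  # predefined threshold value (step S2)

def plausibility_check(detections):
    """Keep only detections whose confidence passes the threshold;
    implausibly low-confidence detections are prevented from output."""
    return [d for d in detections if d.confidence >= THRESHOLD]

results = plausibility_check([
    Detection("pedestrian", 0.91),
    Detection("car", 0.12),   # implausible: its output is prevented
])
```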
  • Patent number: 11501162
    Abstract: A device is configured to classify data. Its operation involves providing (210) data samples including one or more of: image data, radar data, acoustic data, and/or lidar data to a processing unit. The data samples include at least one test sample, including positive samples and negative samples. Each positive sample has been determined to contain data relating to at least one object to be detected including at least one pedestrian, car, vehicle, truck or bicycle. Each negative sample has been determined not to contain data relating to the at least one object to be detected. These determinations regarding the positive samples and the negative samples are provided as input data, validated by a human operator, and/or provided by the device itself through a learning algorithm. A first plurality of groups is generated (220) by the processing unit implementing an artificial neural network, wherein at least some of the first plurality of groups are assigned a weighting factor.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: November 15, 2022
    Assignee: CONTI TEMIC MICROELECTRONIC GMBH
    Inventors: Michelle Karg, Christian Scharfenberger, Robert Thiel
  • Publication number: 20220174089
    Abstract: A method for identifying adversarial attacks on an image based detection system for automated driving includes providing a reference signal and a potentially manipulated signal. The method also includes calculating a plurality of metrics which quantify differences between the signals in different ways. The method further includes creating a multi-dimensional feature space based on the calculated metrics and classifying the type of attack based on the calculated metrics. The class of the adversarial attack may then be output.
    Type: Application
    Filed: March 17, 2020
    Publication date: June 2, 2022
    Applicant: Conti Temic microelectronic GmbH
    Inventors: Eric Piegert, Michelle Karg, Christian Scharfenberger
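
The abstract above describes computing several metrics that quantify differences between a reference signal and a potentially manipulated signal in different ways, then spanning a multi-dimensional feature space from them for attack classification. The sketch below builds such a feature vector from three simple difference metrics; the specific metrics and all names are illustrative assumptions, not the patented set.

```python
def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def max_abs_diff(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def mean_sq_diff(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def feature_vector(reference, candidate):
    """Multi-dimensional feature space built from difference metrics;
    a classifier would assign an attack class based on this vector."""
    return [
        mean_abs_diff(reference, candidate),
        max_abs_diff(reference, candidate),
        mean_sq_diff(reference, candidate),
    ]

ref = [0.0, 0.5, 1.0, 0.5]
attacked = [0.0, 0.9, 1.0, 0.5]  # a localized perturbation
features = feature_vector(ref, attacked)
```

A large max-difference with a small mean-difference, as here, would point toward a localized perturbation rather than global noise.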
  • Patent number: 11132563
    Abstract: A method for identifying objects begins with capturing an image of an external environment of a vehicle using a capturing unit arranged on the vehicle. An object region within the image is located by a region proposal process, and an object within the object region is classified in a classification process. The region proposal process and/or the classification process is/are implemented in a convolutional neural network. External prior information highlighting a possible object region is supplied to the region proposal process.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: September 28, 2021
    Assignees: Conti Temic microelectronic GmbH, Continental Teves AG & Co. OHG
    Inventors: Ann-Katrin Fattal, Michelle Karg, Christian Scharfenberger
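
The key idea in the abstract above is that external prior information highlighting a possible object region is supplied to the region proposal process. As a hedged sketch, proposals whose center falls inside a prior-highlighted region receive a score boost; the scoring scheme and all names (`boost_by_prior`, `boost`) are illustrative assumptions, not the patented mechanism.

```python
def boost_by_prior(proposals, prior_regions, boost=0.2):
    """proposals: list of (x1, y1, x2, y2, score); prior_regions:
    list of (x1, y1, x2, y2). Proposals centered inside a prior
    region get their score raised (capped at 1.0)."""
    def center_in(box, region):
        cx = (box[0] + box[2]) / 2
        cy = (box[1] + box[3]) / 2
        return region[0] <= cx <= region[2] and region[1] <= cy <= region[3]

    out = []
    for (x1, y1, x2, y2, s) in proposals:
        if any(center_in((x1, y1, x2, y2), r) for r in prior_regions):
            s = min(1.0, s + boost)
        out.append((x1, y1, x2, y2, s))
    return out

prior = [(0, 0, 50, 50)]  # e.g. a region highlighted by external prior data
boosted = boost_by_prior([(10, 10, 30, 30, 0.5),
                          (60, 60, 90, 90, 0.5)], prior)
```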
  • Patent number: 11132530
    Abstract: A method for the three-dimensional graphic or pictorial reconstruction of a vehicle begins with capturing an image of at least one vehicle with a camera. A first rectangular border of the entire vehicle is captured in the image, to obtain a first rectangle. A second rectangular border of one side of a vehicle is captured in the image, to obtain a second rectangle, and it is determined whether the first and second rectangles are borders which relate to the same vehicle. If so, then it is determined whether a side orientation of the vehicle can be assigned from the two rectangles and, if so, the side orientation is determined. Finally, a three-dimensional reconstruction of the vehicle is performed from the first rectangle, the second rectangle and the side orientation.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: September 28, 2021
    Assignee: Conti Temic microelectronic GmbH
    Inventors: Michelle Karg, Olga Prokofyeva, Christian Scharfenberger, Robert Thiel
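
One step of the method above is deciding whether the whole-vehicle rectangle and the vehicle-side rectangle border the same vehicle. In the sketch below, a simple containment test (the side rectangle must lie almost entirely inside the whole-vehicle rectangle) stands in for the patented association logic; the overlap criterion and all names are assumptions.

```python
def overlap_area(r1, r2):
    """Rectangles given as (x1, y1, x2, y2)."""
    w = min(r1[2], r2[2]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return max(w, 0) * max(h, 0)

def same_vehicle(whole, side, min_fraction=0.9):
    """Heuristic: the side rectangle should lie (almost) entirely
    inside the whole-vehicle rectangle to relate to the same vehicle."""
    side_area = (side[2] - side[0]) * (side[3] - side[1])
    return overlap_area(whole, side) >= min_fraction * side_area

first = (10, 10, 110, 60)   # border of the entire vehicle
second = (10, 10, 50, 60)   # border of one side of the vehicle
matched = same_vehicle(first, second)
```

Only once the two rectangles are matched would the method go on to determine the side orientation and perform the three-dimensional reconstruction.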
  • Publication number: 20210056388
    Abstract: The invention relates to a method for converting a first neural network with a first architecture into a second neural network with a second architecture for use in a vehicle controller in order to obtain the knowledge of the first neural network and transfer same to the second neural network. In a first step of the method, a conversion (701) of at least one layer of the first neural network into at least one layer of the second neural network is carried out. In a second step, a random initialization (702) of the at least one converted layer is carried out in the architecture of the second neural network. In a third step, a training process (703) of the at least one converted layer is carried out in the second neural network. In a fourth step, a fine-tuning process (704) of the non-converted layer is carried out in the second neural network or in the entire second neural network.
    Type: Application
    Filed: June 28, 2018
    Publication date: February 25, 2021
    Inventors: Michelle KARG, Christian SCHARFENBERGER
  • Patent number: 10824881
    Abstract: A device for object recognition of an input image includes: a patch selector configured to subdivide the input image into a plurality of zones and to define a plurality of patches for the zones; a voting maps generator configured to generate a set of voting maps for each zone and for each patch, and to binarize the generated set of voting maps; a voting maps combinator configured to combine the binarized set of voting maps; and a supposition generator configured to generate and refine a supposition from the combined, binarized set of voting maps.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: November 3, 2020
    Assignees: Conti Temic microelectronic GmbH, Continental Teves AG & Co. OHG
    Inventors: Ann-Katrin Fattal, Michelle Karg, Christian Scharfenberger, Stefan Hegemann, Stefan Lueke, Chen Zhang
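
A minimal sketch of the pipeline described in the abstract, assuming a grayscale image as nested lists: subdivide into zones, generate one binarized voting map per zone (a simple brightness vote stands in for the patented patch-based voting), and combine the maps by element-wise summation into a supposition. All names and the voting rule are illustrative assumptions.

```python
def subdivide(image, n_zones):
    """Split the image rows into n_zones horizontal zones."""
    rows_per_zone = len(image) // n_zones
    return [image[i * rows_per_zone:(i + 1) * rows_per_zone]
            for i in range(n_zones)]

def voting_map(zone, threshold=128):
    """Binarized voting map: 1 where a pixel votes for 'object'."""
    return [[1 if px > threshold else 0 for px in row] for row in zone]

def combine(maps):
    """Element-wise sum of the binarized voting maps."""
    combined = [[0] * len(maps[0][0]) for _ in maps[0]]
    for m in maps:
        for r, row in enumerate(m):
            for c, v in enumerate(row):
                combined[r][c] += v
    return combined

image = [
    [200, 40], [210, 50],   # zone 1: bright pixels on the left
    [30, 220], [20, 230],   # zone 2: bright pixels on the right
]
zones = subdivide(image, 2)
maps = [voting_map(z) for z in zones]
supposition = combine(maps)
```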
  • Publication number: 20200247433
    Abstract: The present invention relates to a computer-implemented method and a system for testing the output of a neural network (1) having a plurality of layers (11), which detects or classifies objects. The method comprises the step (S1) of reading at least one result from at least one first layer (11) and the confidence value thereof, which is generated in the first layer (11) of a neural network (1), and the step (S2) of checking a plausibility of the result by taking into consideration the confidence value thereof so as to conclude whether the object detection by the neural network (1) is correct or false. The step (S2) of checking comprises comparing the confidence value for the result with a predefined threshold value. In the event that it is concluded in the checking step (S2) that the object detection is false, output of the object falsely detected by the neural network is prevented.
    Type: Application
    Filed: October 4, 2018
    Publication date: August 6, 2020
    Inventors: Christian SCHARFENBERGER, Michelle KARG
  • Publication number: 20200026905
    Abstract: A method for the three-dimensional graphic or pictorial reconstruction of a vehicle begins with capturing an image of at least one vehicle with a camera. A first rectangular border of the entire vehicle is captured in the image, to obtain a first rectangle. A second rectangular border of one side of a vehicle is captured in the image, to obtain a second rectangle, and it is determined whether the first and second rectangles are borders which relate to the same vehicle. If so, then it is determined whether a side orientation of the vehicle can be assigned from the two rectangles and, if so, the side orientation is determined. Finally, a three-dimensional reconstruction of the vehicle is performed from the first rectangle, the second rectangle and the side orientation.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 23, 2020
    Inventors: Michelle KARG, Olga PROKOFYEVA, Christian SCHARFENBERGER, Robert THIEL
  • Publication number: 20190354783
    Abstract: The invention relates to a method for identifying objects (12) in an image (7) of a capturing unit (2). The method comprises capturing an image (7), in particular an image (7) of an external environment (3) of a vehicle (1) using a capturing unit (2) arranged on the vehicle (1). An object region (11) within the image (7) is located by means of a region proposal method (10), and a classification (13) of an object (12) within the object region (11) is carried out. Furthermore, the region proposal method (10) and/or the classification (13) is/are integrated into a convolutional neural network, and external prior information (8) highlighting a possible object region (11) is supplied to the convolutional neural network.
    Type: Application
    Filed: May 17, 2018
    Publication date: November 21, 2019
    Inventors: Ann-Katrin FATTAL, Michelle KARG, Christian SCHARFENBERGER
  • Publication number: 20190354860
    Abstract: A device is configured to classify data. Its operation involves providing (210) data samples including one or more of: image data, radar data, acoustic data, and/or lidar data to a processing unit. The data samples include at least one test sample, including positive samples and negative samples. Each positive sample has been determined to contain data relating to at least one object to be detected including at least one pedestrian, car, vehicle, truck or bicycle. Each negative sample has been determined not to contain data relating to the at least one object to be detected. These determinations regarding the positive samples and the negative samples are provided as input data, validated by a human operator, and/or provided by the device itself through a learning algorithm. A first plurality of groups is generated (220) by the processing unit implementing an artificial neural network, wherein at least some of the first plurality of groups are assigned a weighting factor.
    Type: Application
    Filed: December 8, 2017
    Publication date: November 21, 2019
    Inventors: Michelle KARG, Christian SCHARFENBERGER, Robert THIEL
  • Publication number: 20190332873
    Abstract: The present invention relates to a device (100) for object recognition of an input image (A), the device (100) comprising: a patch selector (10), which is configured to subdivide the input image (A) into a plurality of zones z(m) and to define a plurality of patches pj(m) for the zones z(m); a voting maps generator (20), which is configured to generate a set of voting maps V(m) for each zone m of the plurality of zones z(m) and for each patch pj of the plurality of patches pj(m), and to binarize the generated set of voting maps V(m); a voting maps combinator (30), which is configured to combine the binarized set of voting maps V(m); and a supposition generator (40), which is configured to generate a supposition from the combined, binarized set of voting maps V(m) and to perform a refinement of that supposition.
    Type: Application
    Filed: June 20, 2017
    Publication date: October 31, 2019
    Applicants: Conti Temic microelectronic GmbH, Continental Teves AG & Co. OHG
    Inventors: Ann-Katrin FATTAL, Michelle KARG, Christian SCHARFENBERGER, Stefan HEGEMANN, Stefan LUEKE, Chen ZHANG