Patents by Inventor Jan Mathias Koehler

Jan Mathias Koehler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11783190
    Abstract: A method for ascertaining an explanation map of an image. All those pixels of the image are highlighted which are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that it marks a smallest possible subset of the pixels of the image as relevant. The explanation map leads to the same classification result as the image when the explanation map is supplied to the deep neural network for classification. The explanation map is selected in such a way that an activation caused by the explanation map essentially does not exceed an activation caused by the image in feature maps of the deep neural network.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: October 10, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Joerg Wagner, Tobias Gindele, Jan Mathias Koehler, Jakob Thaddaeus Wiedemer, Leon Hetzel
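The optimization described in US 11783190 can be read as a constrained search for a sparse image that the network still classifies like the original. The following is a minimal, hedged sketch of one plausible reading in PyTorch; the toy CNN, the loss weights, and the use of ReLU-clipped activation differences are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch (not the patented implementation): one plausible way to optimize an
# explanation map as described in US 11783190 -- keep the original class prediction,
# mark as few pixels as possible as relevant, and penalize feature-map activations
# that exceed those produced by the original image.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN standing in for the deep neural network of the patent.
class TinyNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)                    # feature maps used in the activation constraint
        logits = self.head(fmap.mean(dim=(2, 3)))  # global average pooling
        return logits, fmap

torch.manual_seed(0)
net, image = TinyNet(), torch.rand(1, 3, 32, 32)
with torch.no_grad():
    logits_orig, fmap_orig = net(image)
    target_class = logits_orig.argmax(dim=1)

# The explanation map is an image-shaped tensor optimized from a neutral start.
expl = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([expl], lr=0.05)

for step in range(200):
    logits, fmap = net(expl)
    keep_class = F.cross_entropy(logits, target_class)   # same classification result as the image
    sparsity = expl.abs().mean()                          # smallest possible relevant pixel subset
    overshoot = F.relu(fmap - fmap_orig).mean()           # activations must not exceed the image's
    loss = keep_class + 1.0 * sparsity + 1.0 * overshoot
    opt.zero_grad(); loss.backward(); opt.step()

print("relevant pixel fraction:", (expl.abs() > 1e-3).float().mean().item())
```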
  • Patent number: 11645828
    Abstract: A method for ascertaining an explanation map of an image, in which all those pixels of the image are changed which are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that a smallest possible subset of the pixels of the image is changed, and the explanation map preferably does not lead to the same classification result as the image when it is supplied to the deep neural network for classification. The explanation map is selected in such a way that an activation caused by the explanation map essentially does not exceed an activation caused by the image in feature maps of the deep neural network.
    Type: Grant
    Filed: July 3, 2019
    Date of Patent: May 9, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Joerg Wagner, Tobias Gindele, Jan Mathias Koehler, Jakob Thaddaeus Wiedemer, Leon Hetzel
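US 11645828 describes the complementary objective: change as few pixels as possible so that the classification no longer matches the original. The sketch below is a hedged illustration using a stand-in linear classifier; the model, the use of logits as a proxy for feature-map activations, and all hyperparameters are assumptions of this sketch.

```python
# Hedged sketch (illustrative only) of the idea in US 11645828: change as few pixels as
# possible so that the network no longer produces the original classification, while the
# resulting activations do not exceed those of the unmodified image.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(10, 3 * 32 * 32)          # stand-in linear classifier (logits stand in for feature maps)
image = torch.rand(1, 3, 32, 32)
orig_logits = image.flatten(1) @ w.t()
orig_class = int(orig_logits.argmax())

delta = torch.zeros_like(image, requires_grad=True)    # pixel changes to be optimized
opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(300):
    logits = (image + delta).flatten(1) @ w.t()
    drop_class = logits[0, orig_class]                  # push the original class score down
    sparsity = delta.abs().mean()                       # change the smallest possible pixel subset
    overshoot = F.relu(logits - orig_logits).mean()     # do not exceed the image's activations
    loss = drop_class + 1.0 * sparsity + 1.0 * overshoot
    opt.zero_grad(); loss.backward(); opt.step()

print("changed pixel fraction:", (delta.abs() > 1e-3).float().mean().item(),
      "new class:", ((image + delta).flatten(1) @ w.t()).argmax(dim=1).item())
```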
  • Patent number: 11531888
    Abstract: A method for creating a deep neural network. The deep neural network includes a plurality of layers and connections having weights, and the weights in the created deep neural network are able to assume only predefinable discrete values from a predefinable list of discrete values. The method includes: providing at least one training input variable for the deep neural network; ascertaining a variable characterizing a cost function, which includes a first variable characterizing a deviation of an output variable of the deep neural network, ascertained as a function of the provided training input variable, from a predefinable setpoint output variable, and which further includes at least one penalization variable characterizing a deviation of a value of one of the weights from at least one of at least two of the predefinable discrete values; and training the deep neural network.
    Type: Grant
    Filed: October 15, 2018
    Date of Patent: December 20, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Jan Achterhold, Jan Mathias Koehler, Tim Genewein
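The cost function in US 11531888 combines an ordinary task loss with a penalization term that pulls each weight toward a predefinable list of discrete values. The following minimal sketch shows one plausible such penalty (distance to the nearest allowed value); the penalty form, the value list {-1, 0, +1}, the toy network, and the final snapping step are assumptions of this sketch.

```python
# Hedged sketch of the training recipe in US 11531888: the usual task loss is augmented with
# a penalization term that pulls every weight toward a predefinable list of discrete values.
# The distance-to-nearest-value penalty is one simple choice, not the patent's formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
discrete_values = torch.tensor([-1.0, 0.0, 1.0])   # predefinable list of discrete values
net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def discreteness_penalty(model):
    # For each weight, measure its deviation from the closest of the allowed discrete values.
    total = 0.0
    for p in model.parameters():
        dist = (p.unsqueeze(-1) - discrete_values).abs().min(dim=-1).values
        total = total + dist.pow(2).sum()
    return total

x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))   # toy training data
for epoch in range(100):
    loss = F.cross_entropy(net(x), y) + 1e-3 * discreteness_penalty(net)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, snap each weight to the nearest allowed discrete value.
with torch.no_grad():
    for p in net.parameters():
        idx = (p.unsqueeze(-1) - discrete_values).abs().argmin(dim=-1)
        p.copy_(discrete_values[idx])
```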
  • Patent number: 11488006
    Abstract: An encoder, connectable to a data memory, for storing numerical values in the data memory which lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, and is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of this interval, the intervals varying in width on the scale of the numerical values. Also described is a decoder for numerical values stored in a data memory using such an encoder, configured to assign, according to an assignment instruction, a fixed numerical value belonging to a discrete interval to the identifier of that interval retrieved from the data memory, and to output it. Also described are an AI module including an ANN, an encoder and a decoder, a method for manufacturing the AI module, and an associated computer program.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: November 1, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Jan Mathias Koehler, Rolf Michael Koehler
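The encoder/decoder pair in US 11488006 stores a numerical value as the identifier of a variable-width interval and recovers a fixed representative value from that identifier. A minimal sketch of such a codec follows; the bin edges and representative values are illustrative assumptions.

```python
# Hedged sketch of the encoder/decoder idea in US 11488006: values in a known range are
# stored as the identifier of a discrete interval, and the intervals may vary in width
# (finer resolution where values cluster). Edges and representatives are assumptions.
import bisect

class IntervalCodec:
    def __init__(self, edges, representatives):
        # edges: ascending interval boundaries covering [min, max]; one representative per interval
        assert len(representatives) == len(edges) - 1
        self.edges, self.reps = edges, representatives

    def encode(self, value):
        # Classify the value into exactly one interval and return that interval's identifier.
        idx = bisect.bisect_right(self.edges, value) - 1
        return max(0, min(idx, len(self.reps) - 1))

    def decode(self, identifier):
        # Map the stored identifier back to a fixed numerical value belonging to the interval.
        return self.reps[identifier]

# Narrow intervals near zero (where e.g. ANN weights concentrate), wide intervals at the tails.
codec = IntervalCodec(edges=[-1.0, -0.5, -0.1, 0.1, 0.5, 1.0],
                      representatives=[-0.7, -0.25, 0.0, 0.25, 0.7])

stored = [codec.encode(v) for v in (-0.8, -0.05, 0.03, 0.6)]
print(stored)                               # interval identifiers written to the data memory
print([codec.decode(i) for i in stored])    # fixed values recovered by the decoder
```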
  • Publication number: 20220230054
    Abstract: A method for operating a trainable module. At least one input variable value is supplied to variations of the trainable module, the variations differing from each other enough that they cannot be converted into each other in a congruent manner by progressive learning. A measure of the uncertainty of the output variable values is ascertained from the differences between the output variable values into which the variations each translate the input variable value. The uncertainty is compared to a distribution of uncertainties ascertained for input variable learning values used during training of the trainable module and/or for further input variable test values to which the relationships learned during training are applicable. The extent to which the relationships learned during the training of the trainable module are applicable to the input variable value is evaluated from the result of the comparison.
    Type: Application
    Filed: June 10, 2020
    Publication date: July 21, 2022
    Inventors: Jan Mathias Koehler, Maximilian Autenrieth, William Harris Beluch
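Publication 20220230054 operates an ensemble of module variations and compares the disagreement on a new input against the distribution of disagreements seen on known-good data. The sketch below illustrates that comparison; the random linear "variations", the toy data, and the 95th-percentile threshold are assumptions of this sketch.

```python
# Hedged sketch of the operating scheme in publication 20220230054: the same input is fed to
# several variations of the module, the spread of their outputs is taken as an uncertainty,
# and that uncertainty is compared with the distribution of uncertainties seen on training
# data to judge whether the learned relationships still apply.
import numpy as np

rng = np.random.default_rng(0)

def make_variation():
    # Stand-in "variation": a random linear regressor (a real system would train these).
    w = rng.normal(size=5)
    return lambda x: x @ w

variations = [make_variation() for _ in range(5)]

def uncertainty(x):
    # Disagreement (standard deviation) between the output values of the variations.
    outputs = np.array([v(x) for v in variations])
    return outputs.std()

# Distribution of uncertainties on inputs the module was trained (or validated) on.
train_inputs = rng.normal(size=(200, 5))
train_uncertainties = np.array([uncertainty(x) for x in train_inputs])
threshold = np.quantile(train_uncertainties, 0.95)

# At run time, an input whose uncertainty falls outside this distribution is flagged as one
# to which the learned relationships may not be applicable.
new_input = rng.normal(size=5) * 10.0     # deliberately far from the training distribution
print(uncertainty(new_input) > threshold)
```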
  • Publication number: 20220147869
    Abstract: A method for training a trainable module. A plurality of modifications of the trainable module, which differ from one another enough that they are not congruently merged into one another with progressive learning, are each pretrained using a subset of the learning data sets. Learning input variable values of a learning data set are supplied to all modifications as input variables; from the deviation among the output variable values into which the modifications each convert these learning input variable values, a measure of the uncertainty of the output variable values is ascertained and associated with the learning data set as its uncertainty. Based on the uncertainty, an assessment of the learning data set is ascertained, which is a measure of the extent to which the association of the learning output variable values with the learning input variable values in the learning data set is accurate.
    Type: Application
    Filed: April 8, 2020
    Publication date: May 12, 2022
    Inventors: Jan Mathias Koehler, Maximilian Autenrieth, William Harris Beluch
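Publication 20220147869 pretrains several modifications of a module on different subsets of the learning data and uses their disagreement on each learning sample to assess whether that sample's label is plausible. The following hedged numpy sketch illustrates the idea; the nearest-centroid "modifications", the toy data, and the majority-vote assessment rule are assumptions, not the claimed method.

```python
# Hedged numpy sketch of the data-assessment idea in publication 20220147869: modifications
# of a module are each pretrained on a different subset of the learning data, every learning
# sample is passed through all of them, the spread of their outputs serves as the sample's
# uncertainty, and from it an assessment of the stored label is derived.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[3] = 1                                      # deliberately corrupt one learning label

def pretrain_modification(idx):
    # One "modification": class centroids estimated from one subset of the learning data.
    centroids = np.array([X[idx][y[idx] == c].mean(axis=0) for c in (0, 1)])
    return lambda x: int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

subsets = [rng.choice(len(X), size=60, replace=False) for _ in range(7)]
modifications = [pretrain_modification(idx) for idx in subsets]

predictions = np.array([[m(x) for m in modifications] for x in X])
uncertainty = predictions.std(axis=1)               # disagreement between the modifications
majority = (predictions.mean(axis=1) > 0.5).astype(int)

# Assessment: samples whose stored label conflicts with the ensemble, or which are highly
# uncertain, are flagged as possibly mislabeled.
print("suspect samples:", np.where((majority != y) | (uncertainty > 0.4))[0])
```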
  • Publication number: 20210342650
    Abstract: A method for processing learning data sets for a classifier. The method includes: processing learning input variable values of at least one learning data set multiple times in a non-congruent manner using one or multiple classifier(s) trained up to an epoch E2, so that they are mapped to different output variable values; ascertaining a measure of the uncertainty of these output variable values from the deviations among these output variable values; and, in response to the uncertainty meeting a predefined criterion, ascertaining at least one updated learning output variable value for the learning data set from one or multiple further output variable value(s) to which the classifier or classifiers map(s) the learning input variable values after a reset to an earlier training level with epoch E1<E2.
    Type: Application
    Filed: April 16, 2021
    Publication date: November 4, 2021
    Inventors: William Harris Beluch, Jan Mathias Koehler, Maximilian Autenrieth
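Publication 20210342650 processes a sample several times in a non-congruent way with a classifier trained up to epoch E2 and, when the resulting uncertainty is high, relabels the sample using the classifier reset to an earlier epoch E1 < E2. The sketch below uses Monte-Carlo dropout as one possible source of non-congruent repeated processing; that choice, the toy model and data, and the uncertainty threshold are assumptions of this sketch.

```python
# Hedged sketch of the relabeling scheme in publication 20210342650: a classifier trained up
# to epoch E2 processes a sample several times in a non-congruent way (here via Monte-Carlo
# dropout); if the spread of the outputs signals high uncertainty, an updated label is taken
# from the same classifier reset to an earlier training level E1 < E2.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
X = torch.randn(200, 2)
y = (X[:, 0] > 0).long()
y[:10] = 1 - y[:10]                      # some noisy learning labels

E1, E2, snapshots = 20, 60, {}
for epoch in range(1, E2 + 1):
    opt.zero_grad(); F.cross_entropy(net(X), y).backward(); opt.step()
    if epoch in (E1, E2):
        snapshots[epoch] = copy.deepcopy(net.state_dict())

def mc_predictions(sample, n=20):
    net.train()                          # keep dropout active: repeated, non-congruent processing
    with torch.no_grad():
        return torch.stack([F.softmax(net(sample), dim=-1) for _ in range(n)])

net.load_state_dict(snapshots[E2])
relabelled = 0
for i in range(len(X)):
    probs = mc_predictions(X[i:i + 1])
    if probs.std(dim=0).max() > 0.2:                    # predefined uncertainty criterion
        net.load_state_dict(snapshots[E1]); net.eval()  # reset to earlier training level E1 < E2
        with torch.no_grad():
            y[i] = net(X[i:i + 1]).argmax().item()      # updated learning output variable value
        net.load_state_dict(snapshots[E2])
        relabelled += 1
print("relabelled samples:", relabelled)
```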
  • Publication number: 20210342653
    Abstract: A method for ascertaining an explanation map of an image, in which all those pixels of the image are changed which are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that a smallest possible subset of the pixels of the image is changed, and the explanation map preferably does not lead to the same classification result as the image when it is supplied to the deep neural network for classification. The explanation map is selected in such a way that an activation caused by the explanation map essentially does not exceed an activation caused by the image in feature maps of the deep neural network.
    Type: Application
    Filed: July 3, 2019
    Publication date: November 4, 2021
    Inventors: Joerg Wagner, Tobias Gindele, Jan Mathias Koehler, Jakob Thaddaeus Wiedemer, Leon Hetzel
  • Publication number: 20210279529
    Abstract: A method for ascertaining an explanation map of an image. All those pixels of the image are highlighted which are significant for a classification of the image ascertained with the aid of a deep neural network. The explanation map is selected in such a way that it marks a smallest possible subset of the pixels of the image as relevant. The explanation map leads to the same classification result as the image when the explanation map is supplied to the deep neural network for classification. The explanation map is selected in such a way that an activation caused by the explanation map essentially does not exceed an activation caused by the image in feature maps of the deep neural network.
    Type: Application
    Filed: July 3, 2019
    Publication date: September 9, 2021
    Inventors: Joerg Wagner, Tobias Gindele, Jan Mathias Koehler, Jakob Thaddaeus Wiedemer, Leon Hetzel
  • Publication number: 20200342315
    Abstract: A method for creating a deep neural network. The deep neural network includes a plurality of layers and connections having weights, and the weights in the created deep neural network are able to assume only predefinable discrete values from a predefinable list of discrete values. The method includes: providing at least one training input variable for the deep neural network; ascertaining a variable characterizing a cost function, which includes a first variable characterizing a deviation of an output variable of the deep neural network, ascertained as a function of the provided training input variable, from a predefinable setpoint output variable, and which further includes at least one penalization variable characterizing a deviation of a value of one of the weights from at least one of at least two of the predefinable discrete values; and training the deep neural network.
    Type: Application
    Filed: October 15, 2018
    Publication date: October 29, 2020
    Inventors: Jan Achterhold, Jan Mathias Koehler, Tim Genewein
  • Publication number: 20200175356
    Abstract: An encoder, connectable to a data memory, for storing numerical values in the data memory which lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, and is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of this interval, the intervals varying in width on the scale of the numerical values. Also described is a decoder for numerical values stored in a data memory using such an encoder, configured to assign, according to an assignment instruction, a fixed numerical value belonging to a discrete interval to the identifier of that interval retrieved from the data memory, and to output it. Also described are an AI module including an ANN, an encoder and a decoder, a method for manufacturing the AI module, and an associated computer program.
    Type: Application
    Filed: November 20, 2019
    Publication date: June 4, 2020
    Inventors: Jan Mathias Koehler, Rolf Michael Koehler
  • Patent number: 10402509
    Abstract: In a method for calculating a gradient of a data-based function model having one or multiple accumulated data-based partial function models, e.g., Gaussian process models, a model calculation unit is provided which is designed to calculate function values of the data-based function model, using an exponential function, summation functions, and multiplication functions in two loop operations, in a hardware-based way; the model calculation unit is used to calculate the gradient of the data-based function model for a desired value of a predefined input variable.
    Type: Grant
    Filed: December 2, 2014
    Date of Patent: September 3, 2019
    Assignee: Robert Bosch GmbH
    Inventors: Michael Hanselmann, Jan Mathias Koehler, Heiner Markert
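US 10402509 evaluates a Gaussian-process-style data-based function model, and its gradient, with a fixed pattern of two loop operations built from exponential, summation, and multiplication functions. The plain-Python sketch below mirrors that loop structure in software; the training points, weights, and length scales are illustrative assumptions, and the hardware model calculation unit itself is not represented.

```python
# Hedged Python sketch of the calculation pattern in US 10402509: a Gaussian-process-style
# model is evaluated with two nested loops (an inner loop forming a weighted squared
# distance, an outer loop summing exponential terms), and the same loop structure also
# yields the gradient with respect to the input variables.
import math

# Stand-in model data: training inputs, precomputed weights, inverse squared length scales.
X_train = [[0.0, 0.0], [1.0, 0.5], [0.3, 1.2]]
alpha = [0.8, -0.3, 0.5]
inv_len_sq = [2.0, 1.0]

def gp_value_and_gradient(x):
    value = 0.0
    grad = [0.0] * len(x)
    for alpha_i, xi in zip(alpha, X_train):           # outer loop over stored data points
        dist = 0.0
        for d in range(len(x)):                       # inner loop over input dimensions
            dist += inv_len_sq[d] * (x[d] - xi[d]) ** 2
        k = math.exp(-0.5 * dist)                     # exponential function, once per data point
        value += alpha_i * k
        for d in range(len(x)):                       # same structure reused for the gradient
            grad[d] += alpha_i * k * (-inv_len_sq[d] * (x[d] - xi[d]))
    return value, grad

val, grad = gp_value_and_gradient([0.5, 0.5])
print(val, grad)    # gradient of the data-based function model at the desired input value
```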
  • Publication number: 20150154329
    Abstract: In a method for calculating a gradient of a data-based function model having one or multiple accumulated data-based partial function models, e.g., Gaussian process models, a model calculation unit is provided which is designed to calculate function values of the data-based function model, using an exponential function, summation functions, and multiplication functions in two loop operations, in a hardware-based way; the model calculation unit is used to calculate the gradient of the data-based function model for a desired value of a predefined input variable.
    Type: Application
    Filed: December 2, 2014
    Publication date: June 4, 2015
    Inventors: Michael Hanselmann, Jan Mathias Koehler, Heiner Markert