Patents by Inventor Brian Ermans

Brian Ermans has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961314
    Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and the results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: April 16, 2024
    Assignee: NXP B.V.
    Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
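
For readers who want a concrete picture of the NMS step this abstract (and the related publication 20220261571 below) describes, here is a minimal Python sketch. It is an illustration only, not the patented implementation; the proposal format (a box as (x1, y1, x2, y2) with a score and label) and the IoU threshold are assumptions. Keeping the suppressed proposals, rather than discarding them, is what makes the output available for the kind of analysis the abstract mentions.

```python
# Illustrative sketch: non-max suppression (NMS) over detector proposals
# that keeps the suppressed proposals so they can be analyzed afterward.
# The proposal format (box, score, label) is an assumption.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms_with_audit(proposals, iou_threshold=0.5):
    """Return (kept, suppressed) so suppressed proposals remain inspectable."""
    ordered = sorted(proposals, key=lambda p: p["score"], reverse=True)
    kept, suppressed = [], []
    while ordered:
        best = ordered.pop(0)
        kept.append(best)
        remaining = []
        for p in ordered:
            (suppressed if iou(best["box"], p["box"]) >= iou_threshold
             else remaining).append(p)
        ordered = remaining
    return kept, suppressed

proposals = [
    {"box": (10, 10, 50, 50), "score": 0.9, "label": "cat"},
    {"box": (12, 11, 52, 49), "score": 0.7, "label": "cat"},
    {"box": (80, 80, 120, 120), "score": 0.6, "label": "dog"},
]
kept, dropped = nms_with_audit(proposals)
print("kept:", [p["label"] for p in kept], "suppressed:", len(dropped))
```
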
  • Publication number: 20230040470
    Abstract: A method is provided for generating a visualization for explaining a behavior of a machine learning (ML) model. In the method, an image is input to the ML model for an inference operation. The input image has an increased resolution compared to the image resolution the ML model was intended to receive as an input. The resolution of a plurality of resolution-independent convolutional layers of the neural network is adjusted because of the increased resolution of the input image. A resolution-independent convolutional layer of the neural network is selected. The selected resolution-independent convolutional layer is used to generate a plurality of activation maps. The plurality of activation maps is used in a visualization method to show what features of the image were important for the ML model to derive an inference conclusion. The method may be implemented in a computer program having instructions executable by a processor.
    Type: Application
    Filed: August 9, 2021
    Publication date: February 9, 2023
    Inventors: Brian Ermans, Peter Doliwa, Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Frederik Dirk Schalij
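
As a rough sketch of the idea in this abstract, the snippet below feeds a higher-resolution image than a convolutional backbone was "intended" for, hooks a selected convolutional layer, and averages its activation maps into a heat map for visualization. The backbone, layer choice, and image sizes are all placeholder assumptions, not the patented method.

```python
# Hypothetical sketch: use the resolution independence of convolutional
# layers to run a higher-resolution input and visualize one layer's
# activation maps. Backbone and sizes are illustrative stand-ins.
import torch
import torch.nn as nn

conv_backbone = nn.Sequential(              # stand-in for a real CNN backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

activations = {}
def hook(module, inputs, output):
    activations["maps"] = output.detach()   # capture the layer's activation maps

conv_backbone[3].register_forward_hook(hook)  # the selected conv layer

# Suppose the backbone was intended for 224x224 inputs; feed 448x448 instead.
hi_res_image = torch.rand(1, 3, 448, 448)
conv_backbone(hi_res_image)

# Average the channel activations into one heat map and upsample it back
# to image resolution so it can be overlaid on the input image.
heat = activations["maps"].mean(dim=1, keepdim=True)
heat = torch.nn.functional.interpolate(heat, size=(448, 448), mode="bilinear")
print(heat.shape)  # torch.Size([1, 1, 448, 448])
```
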
  • Patent number: 11501206
    Abstract: A method and machine learning system for detecting adversarial examples are provided. A first machine learning model is trained with a first machine learning training data set having only training data samples with robust features. A second machine learning model is trained with a second machine learning training data set, the second machine learning training data set having only training data samples with non-robust features. A feature is a distinguishing element in a data sample. A robust feature is more resistant to adversarial perturbations than a non-robust feature. A data sample is provided to each of the first and second trained machine learning models during an inference operation. If the first trained machine learning model classifies the data sample with high confidence, and the second trained machine learning model classifies the data sample differently, also with high confidence, then the data sample is determined to be an adversarial example.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: November 15, 2022
    Assignee: NXP B.V.
    Inventors: Brian Ermans, Peter Doliwa, Christine van Vredendaal
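
The decision logic of this abstract (shared with the related publication 20210089957 below) is compact enough to sketch. The snippet is a toy illustration, not the claimed system: the model interface (a callable returning a class and a confidence) and the confidence threshold are assumptions.

```python
# Toy sketch: flag a sample as adversarial when a robust-feature model
# and a non-robust-feature model are both confident but disagree.
def detect_adversarial(robust_model, nonrobust_model, sample,
                       confidence_threshold=0.9):
    robust_class, robust_conf = robust_model(sample)
    nonrobust_class, nonrobust_conf = nonrobust_model(sample)
    return (robust_conf >= confidence_threshold
            and nonrobust_conf >= confidence_threshold
            and robust_class != nonrobust_class)

# Stand-ins for the two trained models:
robust = lambda x: ("cat", 0.95)
nonrobust = lambda x: ("dog", 0.93)
print(detect_adversarial(robust, nonrobust, sample=None))  # True
```
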
  • Publication number: 20220261571
    Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and the results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
    Type: Application
    Filed: February 16, 2021
    Publication date: August 18, 2022
    Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
  • Patent number: 11410057
    Abstract: A method is provided for analyzing a classification in a machine learning (ML) model. In the method, the ML model is trained using a training dataset to produce a trained ML model. One or more samples are provided to the trained ML model to produce one or more prediction classifications. A gradient is determined for the one or more samples at a predetermined layer of the trained ML model. The one or more gradients and the one or more prediction classifications for each sample are stored. Also, an intermediate value of the ML model may be stored. Then, a sample is chosen to analyze. A gradient of the sample is determined if the gradient was not already determined when the at least one gradient was determined. Using the at least one gradient, and one or more of a data structure, a predetermined metric, and an intermediate value, the k nearest neighbors to the sample are determined. A report comprising the sample and the k nearest neighbors may be provided for analysis.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: August 9, 2022
    Assignee: NXP B.V.
    Inventors: Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
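
The k-nearest-neighbor lookup at the heart of this abstract (also published as 20210264295 below) can be sketched in a few lines. The random arrays below stand in for gradients taken at a predetermined layer of a trained model, and Euclidean distance stands in for the "predetermined metric"; both are assumptions for illustration.

```python
# Sketch: find the k stored samples whose per-sample gradients are
# closest to a query sample's gradient. Gradients here are random
# stand-ins for gradients captured at a chosen model layer.
import numpy as np

rng = np.random.default_rng(0)
stored_gradients = rng.normal(size=(100, 64))   # one row per sample
predictions = rng.integers(0, 10, size=100)     # stored classifications

def k_nearest(query_gradient, gradients, k=5):
    distances = np.linalg.norm(gradients - query_gradient, axis=1)
    return np.argsort(distances)[:k]            # indices of the k closest

# Note: querying with a stored gradient returns that sample itself first.
neighbors = k_nearest(stored_gradients[0], stored_gradients)
print("neighbors:", neighbors, "their classes:", predictions[neighbors])
```
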
  • Patent number: 11410078
    Abstract: A method and data processing system for making a machine learning model more resistant to adversarial examples are provided. In the method, an input for a machine learning model is provided. A randomly generated mask is added to the input to produce a modified input. The modified input is provided to the machine learning model. The randomly generated mask negates the effect of a perturbation added to the input for causing the input to be an adversarial example. The method may be implemented using the data processing system.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 9, 2022
    Assignee: NXP B.V.
    Inventors: Joppe Willem Bos, Simon Johann Friedberger, Christiaan Kuipers, Vincent Verneuil, Nikita Veshchikov, Christine van Vredendaal, Brian Ermans
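
The core operation in this abstract (also published as 20200293941 below) is adding a freshly drawn random mask to each input before inference. A minimal sketch, with Gaussian noise and a mask scale chosen purely for illustration:

```python
# Sketch: add a randomly generated mask to a model input so that any
# adversarial perturbation riding on the input is disrupted.
import numpy as np

rng = np.random.default_rng()

def mask_input(x, scale=0.05):
    """Return the input with a random mask (noise) added."""
    return x + rng.normal(0.0, scale, size=x.shape)

image = np.zeros((28, 28))          # stand-in for a real model input
model_input = mask_input(image)     # this, not `image`, goes to the model
print(round(model_input.std(), 3))  # roughly equal to `scale`
```
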
  • Patent number: 11321456
    Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of output predictions indicates correct output category prediction with high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 3, 2022
    Assignee: NXP B.V.
    Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
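
A sketch of the throttling defense in this abstract (also published as 20200364333 below) follows. The moving-average window, confidence threshold, and delay curve are invented for illustration; the abstract's use of an output distribution at a predetermined node is simplified here to a per-query confidence value.

```python
# Sketch: slow a model's prediction rate when a moving average of
# confidence suggests probing by a cloning attacker.
import time
from collections import deque

class ThrottledModel:
    def __init__(self, model, window=50, low_conf=0.6, max_delay=0.5):
        self.model = model
        self.window = deque(maxlen=window)      # recent confidences
        self.low_conf = low_conf
        self.max_delay = max_delay

    def predict(self, x):
        label, conf = self.model(x)
        self.window.append(conf)
        avg = sum(self.window) / len(self.window)
        if avg < self.low_conf:    # low-confidence pattern: slow down
            time.sleep(self.max_delay * (self.low_conf - avg) / self.low_conf)
        return label, conf

throttled = ThrottledModel(lambda x: ("cat", 0.55))  # stand-in model
print(throttled.predict(None))
```
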
  • Publication number: 20220067503
    Abstract: A method is provided for analyzing a similarity between classes of a plurality of classes in a trained machine learning (ML) model. The method includes collecting weights of connections from each node of a first predetermined layer of a neural network (NN) to each node of a second predetermined layer of the NN to which the nodes of the first predetermined layer are connected. The collected weights are used to calculate distances from each node of the first predetermined layer to nodes of the second predetermined layer to which the first predetermined layer nodes are connected. The distances are compared to determine which classes the NN determines are similar. Two or more of the similar classes may then be analyzed using any of a variety of techniques to determine why the two or more classes of the NN were determined to be similar.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 3, 2022
    Inventors: Brian Ermans, Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
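
To make the weight-distance idea in this abstract concrete, the sketch below treats the final fully connected layer of a classifier as the "second predetermined layer": each output class owns a weight vector into the preceding layer, and classes whose vectors are close are flagged as similar. The shapes and the Euclidean distance are illustrative assumptions.

```python
# Sketch: compare per-class weight vectors of a final layer to find
# which classes the network treats as similar.
import numpy as np

rng = np.random.default_rng(1)
# Rows: 10 output classes; columns: 128 nodes of the preceding layer.
class_weights = rng.normal(size=(10, 128))

def most_similar_classes(weights):
    """Return the pair of classes with the smallest weight distance."""
    best, best_pair = np.inf, None
    for i in range(len(weights)):
        for j in range(i + 1, len(weights)):
            d = np.linalg.norm(weights[i] - weights[j])
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair, best

pair, distance = most_similar_classes(class_weights)
print("most similar classes:", pair, "at distance", round(distance, 2))
```
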
  • Publication number: 20210406693
    Abstract: A method is described for analyzing data samples of a machine learning (ML) model to determine why the ML model classified a sample as it did. Two samples are chosen for analysis. The two samples may be nearest neighbors. Samples classified as nearest neighbors are typically samples that are more similar to each other, with respect to a predetermined criterion, than to other samples of a set of samples. In the method, a first set of features of a first sample and a second set of features of a second sample are collected. A set of overlapping features of the first and second sets of features is determined. Then, the set of overlapping features is analyzed using a predetermined visualization technique to determine why the ML model determined the first sample to be similar to the second sample.
    Type: Application
    Filed: June 25, 2020
    Publication date: December 30, 2021
    Inventors: Christine van Vredendaal, Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Brian Ermans
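
The overlap computation this abstract describes reduces, in the simplest reading, to a set intersection over each sample's active features. The sketch below makes that concrete with named activations and a threshold, both of which are simplifying assumptions rather than the patented feature extraction.

```python
# Sketch: collect each sample's active features, intersect the two sets,
# and hand the overlap to a visualization step.
def active_features(activations, threshold=0.5):
    """Names of features whose activation exceeds the threshold."""
    return {name for name, value in activations.items() if value > threshold}

sample_a = {"whiskers": 0.9, "fur": 0.8, "wheels": 0.1}
sample_b = {"whiskers": 0.7, "fur": 0.6, "wings": 0.2}

overlap = active_features(sample_a) & active_features(sample_b)
print("overlapping features to visualize:", overlap)  # {'whiskers', 'fur'}
```
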
  • Publication number: 20210264295
    Abstract: A method is provided for analyzing a classification in a machine learning (ML) model. In the method, the ML model is trained using a training dataset to produce a trained ML model. One or more samples are provided to the trained ML model to produce one or more prediction classifications. A gradient is determined for the one or more samples at a predetermined layer of the trained ML model. The one or more gradients and the one or more prediction classifications for each sample are stored. Also, an intermediate value of the ML model may be stored. Then, a sample is chosen to analyze. A gradient of the sample is determined if the gradient was not already determined when the at least one gradient was determined. Using the at least one gradient, and one or more of a data structure, a predetermined metric, and an intermediate value, the k nearest neighbors to the sample are determined. A report comprising the sample and the k nearest neighbors may be provided for analysis.
    Type: Application
    Filed: February 20, 2020
    Publication date: August 26, 2021
    Inventors: Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
  • Publication number: 20210089957
    Abstract: A method and machine learning system for detecting adversarial examples are provided. A first machine learning model is trained with a first machine learning training data set having only training data samples with robust features. A second machine learning model is trained with a second machine learning training data set, the second machine learning training data set having only training data samples with non-robust features. A feature is a distinguishing element in a data sample. A robust feature is more resistant to adversarial perturbations than a non-robust feature. A data sample is provided to each of the first and second trained machine learning models during an inference operation. If the first trained machine learning model classifies the data sample with high confidence, and the second trained machine learning model classifies the data sample differently, also with high confidence, then the data sample is determined to be an adversarial example.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Brian Ermans, Peter Doliwa, Christine van Vredendaal
  • Publication number: 20210019663
    Abstract: A method for processing information includes transforming first information based on a first function, transforming second information based on a second function, processing the first transformed information using a first machine-learning model to generate a first result, processing the second transformed information using a second machine-learning model to generate a second result, and aggregating the first result and the second result to generate a decision. The first and second information may be the same information. The first function may be different from the second function. The first machine-learning model may be based on a first algorithm, and the second machine-learning model may be based on a second algorithm.
    Type: Application
    Filed: July 16, 2019
    Publication date: January 21, 2021
    Inventors: Nikita Veshchikov, Joppe Willem Bos, Simon Johann Friedberger, Brian Ermans
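
The transform-then-aggregate pipeline this abstract describes can be sketched directly. The particular transforms, the stand-in models, and the AND-style aggregation below are all placeholder choices, not the claimed system.

```python
# Sketch: transform the information two different ways, run each result
# through its own model, then aggregate the two results into a decision.
def pipeline(x, transform_a, transform_b, model_a, model_b, aggregate):
    result_a = model_a(transform_a(x))   # first function, first model
    result_b = model_b(transform_b(x))   # second function, second model
    return aggregate(result_a, result_b)

decision = pipeline(
    x=5.0,                               # same information to both paths
    transform_a=lambda v: v * 2,         # first function
    transform_b=lambda v: v + 1,         # a different second function
    model_a=lambda v: v > 5,             # stand-in for the first ML model
    model_b=lambda v: v > 5,             # stand-in for the second ML model
    aggregate=lambda a, b: a and b,      # combine the two results
)
print(decision)  # True
```
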
  • Publication number: 20200364333
    Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of output predictions indicates correct output category prediction with high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
    Type: Application
    Filed: May 16, 2019
    Publication date: November 19, 2020
    Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
  • Publication number: 20200293941
    Abstract: A method and data processing system for making a machine learning model more resistant to adversarial examples are provided. In the method, an input for a machine learning model is provided. A randomly generated mask is added to the input to produce a modified input. The modified input is provided to the machine learning model. The randomly generated mask negates the effect of a perturbation added to the input for causing the input to be an adversarial example. The method may be implemented using the data processing system.
    Type: Application
    Filed: March 11, 2019
    Publication date: September 17, 2020
    Inventors: Joppe Willem Bos, Simon Johann Friedberger, Christiaan Kuipers, Vincent Verneuil, Nikita Veshchikov, Christine van Vredendaal, Brian Ermans