Patents by Inventor Franciscus Derks
Franciscus Derks has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961314
Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
Type: Grant
Filed: February 16, 2021
Date of Patent: April 16, 2024
Assignee: NXP B.V.
Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
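The abstract above runs an NMS algorithm over the detector's proposals before the classifier and bounding box regressor are applied. As a rough illustration only (not the patented analysis method itself), a greedy non-max suppression over scored boxes can be sketched as follows; the `(x1, y1, x2, y2)` box format and the 0.5 overlap threshold are assumptions:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def nms(proposals, scores, iou_threshold=0.5):
    # Greedy non-max suppression: keep the highest-scoring proposal,
    # drop every remaining proposal that overlaps it too much, repeat.
    order = sorted(range(len(proposals)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(proposals[best], proposals[i]) < iou_threshold]
    return keep
```

Inspecting which proposals survive (or are suppressed by) this step is one way an analysis like the one described could trace why the detector returned a given result.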
-
Publication number: 20230040470
Abstract: A method is provided for generating a visualization for explaining a behavior of a machine learning (ML) model. In the method, an image is input to the ML model for an inference operation. The input image has an increased resolution compared to the image resolution the ML model was intended to receive as an input. The resolutions of a plurality of resolution-independent convolutional layers of the neural network are adjusted because of the increased resolution of the input image. A resolution-independent convolutional layer of the neural network is selected. The selected resolution-independent convolutional layer is used to generate a plurality of activation maps. The plurality of activation maps is used in a visualization method to show what features of the image were important for the ML model to derive an inference conclusion. The method may be implemented in a computer program having instructions executable by a processor.
Type: Application
Filed: August 9, 2021
Publication date: February 9, 2023
Inventors: Brian Ermans, Peter Doliwa, Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Frederik Dirk Schalij
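A common way to turn a convolutional layer's activation maps into a visualization is to collapse them into a single normalized heat map that can be overlaid on the input image. The minimal sketch below assumes the maps have already been extracted from the selected layer as a `(channels, H, W)` NumPy array; it illustrates the general idea, not the specific claimed method:

```python
import numpy as np

def activation_heatmap(activation_maps):
    # activation_maps: array of shape (channels, H, W) taken from the
    # selected convolutional layer during inference.
    # Average over channels, clip negatives, and normalize to [0, 1]
    # so the map can be overlaid on the input image.
    heat = activation_maps.mean(axis=0)
    heat = np.maximum(heat, 0.0)
    if heat.max() > 0:
        heat = heat / heat.max()
    return heat
```

High values in the resulting map mark regions whose features drove the layer's activations, which is the kind of information the visualization method described above surfaces.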
-
Patent number: 11501108
Abstract: Various embodiments relate to a method of producing a machine learning model with a fingerprint that maps an input value to an output label, including: selecting a set of extra input values, wherein the set of extra input values does not intersect with a set of training labeled input values for the machine learning model; selecting a first set of artificially encoded output label values corresponding to each of the extra input values in the set of extra input values, wherein the first set of artificially encoded output label values are selected to indicate the fingerprint of a first machine learning model; and training the machine learning model using a combination of the extra input values with associated first set of artificially encoded output values and the set of training labeled input values to produce the first learning model with the fingerprint.
Type: Grant
Filed: July 24, 2018
Date of Patent: November 15, 2022
Assignee: NXP B.V.
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Marc Vauclair, Nikita Veshchikov
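The fingerprinting scheme above mixes extra inputs, disjoint from the training inputs and carrying artificially chosen labels, into the training set. A minimal sketch of that data preparation, plus the matching verification check, might look like the following; the function names and the dict-based "model" in the usage are illustrative assumptions:

```python
def build_fingerprinted_training_set(training_set, extra_inputs, fingerprint_labels):
    # training_set: list of (input, label) pairs for the normal task.
    # extra_inputs: inputs that must be disjoint from the training inputs.
    # fingerprint_labels: artificially encoded labels carrying the fingerprint.
    train_inputs = {x for x, _ in training_set}
    assert not train_inputs.intersection(extra_inputs), \
        "extra inputs must not overlap the training inputs"
    return list(training_set) + list(zip(extra_inputs, fingerprint_labels))

def verify_fingerprint(model_predict, extra_inputs, fingerprint_labels):
    # A model carries the fingerprint if it reproduces the artificial
    # labels on the extra inputs.
    return all(model_predict(x) == y
               for x, y in zip(extra_inputs, fingerprint_labels))
```

A model trained on the combined set should then answer the fingerprint queries with the artificial labels, which is what lets the owner recognize a copy.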
-
Patent number: 11468291
Abstract: A method is provided for protecting a machine learning ensemble. In the method, a plurality of machine learning models is combined to form a machine learning ensemble. A plurality of data elements for training the machine learning ensemble is provided. The machine learning ensemble is trained using the plurality of data elements to produce a trained machine learning ensemble. During an inference operating phase, an input is received by the machine learning ensemble. A piecewise function is used to pseudo-randomly choose one of the plurality of machine learning models to provide an output in response to the input. The use of a piecewise function hides which machine learning model provided the output, making the machine learning ensemble more difficult to copy.
Type: Grant
Filed: September 28, 2018
Date of Patent: October 11, 2022
Assignee: NXP B.V.
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks
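The key property above is a deterministic but input-dependent, pseudo-random choice of which ensemble member answers each query. The patent specifies a piecewise function; the keyed-hash selector below is purely an illustrative stand-in with the same property (same input always hits the same model, but an attacker cannot tell which):

```python
import hashlib

def select_model(models, input_bytes, key=b"ensemble-key"):
    # Deterministic pseudo-random choice: a keyed hash of the input
    # picks which ensemble member answers, hiding from the caller
    # which model produced the output. The key name is an assumption.
    digest = hashlib.sha256(key + input_bytes).digest()
    return models[digest[0] % len(models)]

def ensemble_predict(models, x):
    # Serialize the input and route it to the selected member.
    model = select_model(models, repr(x).encode())
    return model(x)
```

Because the selection depends only on the input and a secret key, repeated queries with the same input stay consistent while the model-per-query mapping remains opaque to an attacker trying to clone individual members.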
-
Publication number: 20220261571
Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
Type: Application
Filed: February 16, 2021
Publication date: August 18, 2022
Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
-
Patent number: 11321456
Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of categories indicates correct output category prediction with a high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
Type: Grant
Filed: May 16, 2019
Date of Patent: May 3, 2022
Assignee: NXP B.V.
Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
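A moving average of recent prediction confidences driving a response delay might look like the following sketch. The class name, window size, threshold, and delay value are arbitrary assumptions chosen only to illustrate the mechanism (low average confidence triggers throttling, high confidence does not):

```python
from collections import deque

class CloningDefense:
    # Tracks a moving average of recent prediction confidences; a run of
    # low-confidence predictions (the pattern of a cloning attack probing
    # near decision boundaries) triggers a response delay, slowing the
    # attack while leaving confident legitimate queries at full speed.
    def __init__(self, window=10, threshold=0.7, delay_seconds=0.05):
        self.confidences = deque(maxlen=window)
        self.threshold = threshold
        self.delay_seconds = delay_seconds

    def record(self, confidence):
        self.confidences.append(confidence)

    def moving_average(self):
        return sum(self.confidences) / len(self.confidences)

    def response_delay(self):
        # Delay to apply before releasing the next prediction.
        if self.moving_average() < self.threshold:
            return self.delay_seconds
        return 0.0
```

A serving loop would call `record()` after each prediction and sleep for `response_delay()` before returning the result.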
-
Publication number: 20220067503
Abstract: A method is provided for analyzing the similarity between classes of a plurality of classes in a trained machine learning (ML) model. The method includes collecting weights of connections from each node of a first predetermined layer of a neural network (NN) to each node of a second predetermined layer of the NN to which the nodes of the first predetermined layer are connected. The collected weights are used to calculate distances from each node of the first predetermined layer to nodes of the second predetermined layer to which the first predetermined layer nodes are connected. The distances are compared to determine which classes the NN determines are similar. Two or more of the similar classes may then be analyzed using any of a variety of techniques to determine why the two or more classes of the NN were determined to be similar.
Type: Application
Filed: August 26, 2020
Publication date: March 3, 2022
Inventors: Brian Ermans, Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
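Collecting each class's incoming weight vector at the chosen layer pair and comparing pairwise distances can be sketched as below. The list-of-lists weight representation and the Euclidean metric are illustrative assumptions; the abstract does not commit to a specific distance:

```python
import math

def class_distances(weights):
    # weights[i]: the vector of connection weights feeding class i's
    # node in the second predetermined layer, one weight per node of
    # the first predetermined layer.
    # Returns {(i, j): distance}; small distances suggest the network
    # treats classes i and j as similar.
    n = len(weights)
    dist = {}
    for a in range(n):
        for b in range(a + 1, n):
            dist[(a, b)] = math.dist(weights[a], weights[b])
    return dist
```

Sorting the resulting pairs by distance surfaces the class pairs worth analyzing further, which is the follow-up step the abstract describes.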
-
Publication number: 20210406693
Abstract: A method is described for analyzing data samples of a machine learning (ML) model to determine why the ML model classified a sample like it did. Two samples are chosen for analysis. The two samples may be nearest neighbors. Samples classified as nearest neighbors are typically samples that are more similar with respect to a predetermined criterion than other samples of a set of samples. In the method, a first set of features of a first sample and a second set of features of a second sample are collected. A set of overlapping features of the first and second sets of features is determined. Then, the set of overlapping features is analyzed using a predetermined visualization technique to determine why the ML model determined the first sample to be similar to the second sample.
Type: Application
Filed: June 25, 2020
Publication date: December 30, 2021
Inventors: Christine van Vredendaal, Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Brian Ermans
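Determining the overlap between the feature sets of two nearest-neighbor samples can be sketched as follows. Representing each sample's features as a name-to-importance dict and taking a top-k cutoff are illustrative assumptions; the abstract only requires that the two feature sets be collected and intersected:

```python
def overlapping_features(features_a, features_b, top_k=5):
    # features_a / features_b: dicts mapping feature name -> importance
    # score for each sample. The overlap is the set of features ranked
    # in the top-k for both samples; these are candidates for why the
    # model found the samples similar.
    top_a = set(sorted(features_a, key=features_a.get, reverse=True)[:top_k])
    top_b = set(sorted(features_b, key=features_b.get, reverse=True)[:top_k])
    return top_a & top_b
```

The resulting overlap set would then be fed to a visualization technique (e.g. highlighting those features on both samples) for the analysis step the abstract describes.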
-
Publication number: 20200364333
Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of categories indicates correct output category prediction with a high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
Type: Application
Filed: May 16, 2019
Publication date: November 19, 2020
Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
-
Patent number: 10769310
Abstract: A method for protecting a machine learning model from copying is provided. The method includes providing a neural network architecture having an input layer, a plurality of hidden layers, and an output layer. Each of the plurality of hidden layers has a plurality of nodes. A neural network application is provided to run on the neural network architecture. First and second types of activation functions are provided. Activation functions including a combination of the first and second types of activation functions are provided to the plurality of nodes of the plurality of hidden layers. The neural network application is trained with a training set to generate a machine learning model. Using the combination of first and second types of activation functions makes it more difficult for an attacker to copy the machine learning model. Also, the neural network application may be implemented in hardware to prevent easy illegitimate upgrading of the neural network application.
Type: Grant
Filed: July 20, 2018
Date of Patent: September 8, 2020
Assignee: NXP B.V.
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks
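Assigning each hidden node its own activation function, drawn from a mix of two types, can be sketched as below. The specific pair of functions (ReLU and a sigmoid-weighted variant) is an assumption made only to illustrate mixing two activation families per layer:

```python
import math

def relu(x):
    # First activation type: piecewise-linear.
    return max(0.0, x)

def swishlike(x):
    # Second activation type: smooth, sigmoid-weighted. Chosen here only
    # to illustrate a second family of activations; the patent does not
    # prescribe this function.
    return x / (1.0 + math.exp(-x))

def hidden_layer(inputs, weights, activations):
    # Each node applies its own activation from the per-node mix, which
    # is the property that makes the trained model harder for an
    # attacker to reproduce by copying its input/output behavior.
    outs = []
    for node_weights, act in zip(weights, activations):
        pre = sum(w * x for w, x in zip(node_weights, inputs))
        outs.append(act(pre))
    return outs
```

A full network would stack several such layers, with the activation assignment fixed at design time and then trained as usual.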
-
Publication number: 20200104673
Abstract: A method is provided for protecting a machine learning ensemble. In the method, a plurality of machine learning models is combined to form a machine learning ensemble. A plurality of data elements for training the machine learning ensemble is provided. The machine learning ensemble is trained using the plurality of data elements to produce a trained machine learning ensemble. During an inference operating phase, an input is received by the machine learning ensemble. A piecewise function is used to pseudo-randomly choose one of the plurality of machine learning models to provide an output in response to the input. The use of a piecewise function hides which machine learning model provided the output, making the machine learning ensemble more difficult to copy.
Type: Application
Filed: September 28, 2018
Publication date: April 2, 2020
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks
-
Publication number: 20200034663
Abstract: Various embodiments relate to a method of producing a machine learning model with a fingerprint that maps an input value to an output label, including: selecting a set of extra input values, wherein the set of extra input values does not intersect with a set of training labeled input values for the machine learning model; selecting a first set of artificially encoded output label values corresponding to each of the extra input values in the set of extra input values, wherein the first set of artificially encoded output label values are selected to indicate the fingerprint of a first machine learning model; and training the machine learning model using a combination of the extra input values with associated first set of artificially encoded output values and the set of training labeled input values to produce the first learning model with the fingerprint.
Type: Application
Filed: July 24, 2018
Publication date: January 30, 2020
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Marc Vauclair, Nikita Veshchikov
-
Publication number: 20200026885
Abstract: A method for protecting a machine learning model from copying is provided. The method includes providing a neural network architecture having an input layer, a plurality of hidden layers, and an output layer. Each of the plurality of hidden layers has a plurality of nodes. A neural network application is provided to run on the neural network architecture. First and second types of activation functions are provided. Activation functions including a combination of the first and second types of activation functions are provided to the plurality of nodes of the plurality of hidden layers. The neural network application is trained with a training set to generate a machine learning model. Using the combination of first and second types of activation functions makes it more difficult for an attacker to copy the machine learning model. Also, the neural network application may be implemented in hardware to prevent easy illegitimate upgrading of the neural network application.
Type: Application
Filed: July 20, 2018
Publication date: January 23, 2020
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks
-
Patent number: 10076485
Abstract: The invention relates to hair care compositions comprising an effective amount of a condensation polymer having at least one, optionally quaternized or protonated, dialkylamide end-group connected through the polymer backbone to a unit derived from an alkylamide, the connection comprising at least one ester linkage.
Type: Grant
Filed: August 8, 2014
Date of Patent: September 18, 2018
Assignee: DSM IP Assets B.V.
Inventors: Raphael Beumer, Franciscus Derks, Christine Mendrok-Edinger
-
Patent number: 8951509
Abstract: The invention relates to novel optionally quaternized or protonated condensation polymers having at least one heterocyclic end-group connected to the polymer backbone through a unit derived from an alkylamide, the connection comprising an optionally substituted ethylene group, and to the use thereof in body care products and household products.
Type: Grant
Filed: February 23, 2007
Date of Patent: February 10, 2015
Assignee: DSM IP Assets B.V.
Inventors: Raphael Beumer, Franciscus Derks, Christine Mendrok
-
Publication number: 20140348771
Abstract: The invention relates to hair care compositions comprising an effective amount of a condensation polymer having at least one, optionally quaternized or protonated, dialkylamide end-group connected through the polymer backbone to a unit derived from an alkylamide, the connection comprising at least one ester linkage.
Type: Application
Filed: August 8, 2014
Publication date: November 27, 2014
Inventors: Raphael Beumer, Franciscus Derks, Christine Mendrok
-
Patent number: 8815225
Abstract: The invention relates to hair care compositions comprising an effective amount of a condensation polymer having at least one, optionally quaternized or protonated, dialkylamide end-group connected through the polymer backbone to a unit derived from an alkylamide, the connection comprising at least one ester linkage.
Type: Grant
Filed: February 23, 2007
Date of Patent: August 26, 2014
Assignee: DSM IP Assets B.V.
Inventors: Raphael Beumer, Franciscus Derks, Christine Mendrok
-
Patent number: 8597625
Abstract: The invention relates to aqueous compositions comprising an anionic surfactant and a hyperbranched polyesteramide comprising at least one secondary amide bond and having at least one quaternized amine end-group. The compositions, in particular in the form of shampoo preparations, are suited for increasing the volume of hair treated therewith. Furthermore, such compositions provide styling attributes and increase the wet-combability of hair.
Type: Grant
Filed: June 19, 2009
Date of Patent: December 3, 2013
Assignee: DSM IP Assets B.V.
Inventors: Franciscus Derks, Stephen Foster, Robert Lochhead, Adarsh Maini, Dirk Weber
-
Publication number: 20110182843
Abstract: The invention relates to aqueous compositions comprising an anionic surfactant and a hyperbranched polyesteramide comprising at least one secondary amide bond and having at least one quaternized amine end-group. The compositions, in particular in the form of shampoo preparations, are suited for increasing the volume of hair treated therewith. Furthermore, such compositions provide styling attributes and increase the wet-combability of hair.
Type: Application
Filed: June 19, 2009
Publication date: July 28, 2011
Inventors: Franciscus Derks, Stephen Foster, Robert Lochhead, Adarsh Maini, Dirk Weber
-
Publication number: 20110165107
Abstract: The invention relates to aqueous compositions comprising an anionic surfactant and a hyperbranched polyesteramide having at least one quaternized amine end-group. The compositions, in particular in the form of shampoo preparations, are suited for increasing the volume of hair treated therewith. Furthermore, such compositions provide styling attributes and increase the wet-combability of hair.
Type: Application
Filed: June 19, 2009
Publication date: July 7, 2011
Inventors: Franciscus Derks, Stephen Foster, Robert Lochhead, Adarsh Maini, Dirk Weber