Patents by Inventor Wilhelmus Petrus Adrianus Johannus Michiels
Wilhelmus Petrus Adrianus Johannus Michiels has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230029578
Abstract: A method is provided for watermarking a machine learning model used for object detection. In the method, a first subset of a labeled set of ML training samples is selected. Each of one or more objects in the first subset includes a class label. A pixel pattern is selected to use as a watermark in the first subset of images. The pixel pattern is made partially transparent. A target class label is selected. One or more objects of the first subset of images are relabeled with the target class label. In another embodiment, the class labels are removed from objects in the subset of images instead of relabeling them. Each of the first subset of images is overlaid with the partially transparent and scaled pixel pattern. The ML model is trained with the set of training images and the first subset of images to produce a trained and watermarked ML model.
Type: Application
Filed: July 30, 2021
Publication date: February 2, 2023
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Frederik Dirk Schalij
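As a rough illustration of the overlay-and-relabel step described in this abstract, the following Python sketch blends a partially transparent pixel pattern into a random subset of training images and assigns them the target class. The array shapes, subset fraction, and alpha value are assumptions for illustration; real object-detection data would carry per-bounding-box labels rather than one label per image.

```python
import numpy as np

def watermark_subset(images, pattern, target_label, alpha=0.3, fraction=0.1, seed=0):
    """Overlay a partially transparent pixel pattern on a random subset of the
    training images and relabel them with the target class.
    Sketch assumptions: `images` has shape (N, H, W), `pattern` has shape (H, W),
    and one label per image stands in for per-object labels."""
    rng = np.random.default_rng(seed)
    count = max(1, int(len(images) * fraction))
    idx = rng.choice(len(images), size=count, replace=False)
    blended = (1.0 - alpha) * images[idx] + alpha * pattern   # partial transparency
    labels = np.full(count, target_label)                     # relabel to the target class
    return blended, labels
```

The returned images and labels would then be appended to the original training set before the model is trained.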
-
Patent number: 11501108
Abstract: Various embodiments relate to a method of producing a machine learning model with a fingerprint that maps an input value to an output label, including: selecting a set of extra input values, wherein the set of extra input values does not intersect with a set of training labeled input values for the machine learning model; selecting a first set of artificially encoded output label values corresponding to each of the extra input values in the set of extra input values, wherein the first set of artificially encoded output label values are selected to indicate the fingerprint of a first machine learning model; and training the machine learning model using a combination of the extra input values with associated first set of artificially encoded output values and the set of training labeled input values to produce the first learning model with the fingerprint.
Type: Grant
Filed: July 24, 2018
Date of Patent: November 15, 2022
Assignee: NXP B.V.
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Marc Vauclair, Nikita Veshchikov
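A minimal sketch of the fingerprinting idea, assuming the training data and fingerprint samples are NumPy arrays and the model exposes a batch prediction callable (both assumptions, not details from the patent): the extra inputs and their artificial labels are mixed into the training set, and a suspect model is later tested on how many of those labels it reproduces.

```python
import numpy as np

def add_fingerprint(train_x, train_y, extra_x, fingerprint_y):
    """Mix the extra inputs and their artificially chosen labels into the
    training data; a model trained on the result carries the fingerprint."""
    return np.concatenate([train_x, extra_x]), np.concatenate([train_y, fingerprint_y])

def matches_fingerprint(predict, extra_x, fingerprint_y, threshold=0.9):
    """Check whether a suspect model reproduces enough of the artificial labels
    on the extra inputs to be considered a copy of the fingerprinted model.
    `predict` is any callable returning labels for a batch of inputs."""
    return np.mean(predict(extra_x) == fingerprint_y) >= threshold
```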
-
Patent number: 11501212
Abstract: A method for protecting a first machine learning (ML) model is provided. In the method, a dataset of non-problem domain (NPD) data is selected from a large dataset using a second ML model. The second ML model classifies the large dataset into NPD classifications and PD classifications. The PD classified data is excluded. A distinguisher includes a third ML model that is trained using selected NPD data from the large dataset. The distinguisher receives input samples that are intended for the first ML model. The third ML model provides either a PD classification or NPD classification in response to receiving each input sample. An indication of a likely extraction attempt may be provided when a predetermined number of NPD classifications are provided. The method provides an efficient way to create a training dataset for a distinguisher and for protecting a ML model with the distinguisher.
Type: Grant
Filed: April 21, 2020
Date of Patent: November 15, 2022
Assignee: NXP B.V.
Inventors: Christine van Vredendaal, Wilhelmus Petrus Adrianus Johannus Michiels
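A sketch of the counting step only, assuming the distinguisher is available as a callable `is_npd(sample)`; the window size and threshold are illustrative values, not figures from the patent.

```python
from collections import deque

class ExtractionMonitor:
    """Count non-problem-domain (NPD) verdicts from a distinguisher over a
    sliding window of queries and flag a likely extraction attempt once a
    threshold is reached. `is_npd` is any callable returning True for inputs
    the distinguisher classifies as NPD."""
    def __init__(self, is_npd, window=1000, max_npd=200):
        self.is_npd = is_npd
        self.verdicts = deque(maxlen=window)
        self.max_npd = max_npd

    def observe(self, sample) -> bool:
        self.verdicts.append(bool(self.is_npd(sample)))
        # Many out-of-domain queries in a short span suggests someone is
        # probing the protected model rather than using it normally.
        return sum(self.verdicts) >= self.max_npd
```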
-
Patent number: 11468291
Abstract: A method is provided for protecting a machine learning ensemble. In the method, a plurality of machine learning models is combined to form a machine learning ensemble. A plurality of data elements for training the machine learning ensemble is provided. The machine learning ensemble is trained using the plurality of data elements to produce a trained machine learning ensemble. During an inference operating phase, an input is received by the machine learning ensemble. A piecewise function is used to pseudo-randomly choose one of the plurality of machine learning models to provide an output in response to the input. The use of a piecewise function hides which machine learning model provided the output, making the machine learning ensemble more difficult to copy.
Type: Grant
Filed: September 28, 2018
Date of Patent: October 11, 2022
Assignee: NXP B.V.
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks
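The patent's piecewise function is not reproduced here; as a stand-in, this sketch selects an ensemble member deterministically but unpredictably by hashing the input, which gives the same effect of hiding which member answered a given query.

```python
import hashlib
import numpy as np

def select_member(models, x):
    """Deterministically but unpredictably map an input to one ensemble member
    by hashing its bytes. A hash-based selector stands in here for the
    patent's piecewise function; the point is only that the chosen member is
    hidden from an observer while remaining reproducible."""
    digest = hashlib.sha256(np.asarray(x).tobytes()).digest()
    return models[int.from_bytes(digest[:4], "big") % len(models)]

def ensemble_predict(models, x):
    return select_member(models, x)(x)   # each member is a callable model
```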
-
Publication number: 20220292623
Abstract: A method is provided for watermarking a machine learning model used for object detection or image classification. In the method, a first subset of a labeled set of ML training samples is selected. The first subset is of a predetermined class of images. In one embodiment, a first pixel pattern is selected and sized to have substantially the same dimensions as each sample of the first subset, or each bounding box in the case of an object detector. Each sample of the first subset is relabeled to have a different label than the original label. An opacity of the pixel pattern may be adjusted independently for different parts of the pattern. The ML model is trained with the labeled set of ML training samples and the first subset of relabeled ML training samples. Using multiple different opacity factors provides both reliability and credibility to the watermark.
Type: Application
Filed: March 12, 2021
Publication date: September 15, 2022
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Frederik Dirk Schalij
-
Publication number: 20220261571
Abstract: A method is described for analyzing an output of an object detector for a selected object of interest in an image. The object of interest in a first image is selected. A user of the object detector draws a bounding box around the object of interest. A first inference operation is run on the first image using the object detector, and in response, the object detector provides a plurality of proposals. A non-max suppression (NMS) algorithm is run on the plurality of proposals, including the proposal having the object of interest. A classifier and bounding box regressor are run on each proposal of the plurality of proposals and results are outputted. The outputted results are then analyzed. The method can provide insight into why an object detector returns the results that it does.
Type: Application
Filed: February 16, 2021
Publication date: August 18, 2022
Inventors: Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Brian Ermans, Frederik Dirk Schalij
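For readers unfamiliar with the NMS step mentioned in the abstract, below is a plain greedy non-max suppression over proposal boxes and scores. It is a generic textbook version rather than the patent's analysis procedure, and the IoU threshold is an assumed value.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: keep the highest-scoring proposal, drop the
    proposals that overlap it too much, and repeat. Returns surviving indices."""
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(int(best))
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```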
-
Patent number: 11410057
Abstract: A method is provided for analyzing a classification in a machine learning (ML) model. In the method, the ML model is trained using a training dataset to produce a trained ML model. One or more samples are provided to the trained ML model to produce one or more prediction classifications. A gradient is determined for the one or more samples at a predetermined layer of the trained ML model. The one or more gradients and the one or more prediction classifications for each sample are stored. Also, an intermediate value of the ML model may be stored. Then, a sample is chosen to analyze. A gradient of the sample is determined if the gradient was not already determined when the at least one gradient is determined. Using the at least one gradient, and one or more of a data structure, a predetermined metric, and an intermediate value, the k nearest neighbors to the sample are determined. A report comprising the sample and the k nearest neighbors may be provided for analysis.
Type: Grant
Filed: February 20, 2020
Date of Patent: August 9, 2022
Assignee: NXP B.V.
Inventors: Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
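A sketch of the neighbor search only, assuming the per-sample gradients at the chosen layer have already been computed and flattened into vectors elsewhere; Euclidean distance stands in for the patent's "predetermined metric".

```python
import numpy as np

def k_nearest_by_gradient(stored_gradients, stored_ids, query_gradient, k=5):
    """Return the ids (and distances) of the k stored samples whose flattened
    per-sample gradients are closest to the query sample's gradient."""
    grads = np.asarray(stored_gradients, dtype=float)
    dists = np.linalg.norm(grads - np.asarray(query_gradient, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return [(stored_ids[i], float(dists[i])) for i in nearest]
```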
-
Publication number: 20220215103
Abstract: A data processing system has a processor and a system memory. The system memory may be a dynamic random-access memory (DRAM). The processor includes an embedded memory. The system memory is coupled to the processor and is organized in a plurality of pages. A portion of the code or data stored in the plurality of memory pages is selected for permutation. A permutation order is generated and the memory pages containing the portion of code or data are permuted using the permutation order. The permutation order and/or a reverse permutation order to recover the original order may be stored in the embedded memory. Permuting the memory pages with a permutation order stored in the embedded memory prevents the code or data from being read during a freeze attack on the system memory in a way that is useful to an attacker.
Type: Application
Filed: January 7, 2021
Publication date: July 7, 2022
Inventors: Wilhelmus Petrus Adrianus Johannus Michiels, Jan Hoogerbrugge, Ad Arts
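To illustrate the permute/restore bookkeeping in software terms (a Python sketch of the idea only, not firmware or a memory controller), pages are shuffled with a cryptographically seeded Fisher-Yates permutation, and the reverse order that would live in the embedded memory is returned alongside.

```python
import secrets

def permute_pages(pages):
    """Shuffle a list of memory pages and return the permuted pages together
    with the reverse permutation needed to restore the original order."""
    order = list(range(len(pages)))
    # Fisher-Yates shuffle driven by a cryptographic RNG
    for i in range(len(order) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    permuted = [pages[i] for i in order]
    reverse = [0] * len(order)
    for new_pos, old_pos in enumerate(order):
        reverse[old_pos] = new_pos   # where each original page ended up
    return permuted, reverse

def restore_pages(permuted, reverse):
    """Undo the permutation using the stored reverse order."""
    return [permuted[reverse[i]] for i in range(len(reverse))]
```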
-
Patent number: 11321456
Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of categories indicates correct output category prediction with a high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
Type: Grant
Filed: May 16, 2019
Date of Patent: May 3, 2022
Assignee: NXP B.V.
Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
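A toy version of the throttling idea, with a moving average over recent top-1 confidences standing in for the patent's measured output distribution; the window, threshold, and delay are illustrative values, and the wrapped model is assumed to return a (label, confidence) pair.

```python
import time
from collections import deque

class PredictionThrottle:
    """Slow down the model's responses when recent predictions are made with
    low confidence, the pattern typical of cloning queries that probe near
    decision boundaries."""
    def __init__(self, model_predict, window=100, low_conf=0.6, delay_s=0.5):
        self.model_predict = model_predict
        self.confidences = deque(maxlen=window)
        self.low_conf = low_conf
        self.delay_s = delay_s

    def predict(self, x):
        label, confidence = self.model_predict(x)   # top-1 label and its confidence
        self.confidences.append(confidence)
        moving_avg = sum(self.confidences) / len(self.confidences)
        if moving_avg < self.low_conf:
            time.sleep(self.delay_s)   # reduce the prediction rate for suspicious query streams
        return label
```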
-
Publication number: 20220129566
Abstract: A data processing system includes a rich execution environment (REE), a hardware accelerator, a trusted execution environment (TEE), and a memory. The REE includes a processor configured to execute an application. A compute kernel is executed on the hardware accelerator and the compute kernel performs computations for the application. The TEE provides relatively higher security than the REE and includes an accelerator controller for controlling operation of the hardware accelerator. The memory has an unsecure portion coupled to the REE and to the TEE, and a secure portion coupled to only the TEE. The secure portion is relatively more secure than the unsecure portion. Data that is to be accessed and used by the hardware accelerator is stored in the secure portion of the memory. In another embodiment, a method is provided for securely executing an application in the data processing system.
Type: Application
Filed: October 26, 2020
Publication date: April 28, 2022
Inventors: Jan Hoogerbrugge, Wilhelmus Petrus Adrianus Johannus Michiels, Ad Arts
-
Publication number: 20220114002
Abstract: A data processing system has a processor, a system memory, and a hypervisor. The system memory stores program code and data in a plurality of memory pages. The hypervisor controls SLAT (second level address translation) read, write, and execute access rights of the plurality of memory pages. A portion of the plurality of memory pages is classified as being in a secure enclave portion of the system memory and a portion is classified as being in an unsecure memory area. The portion of the memory pages classified in the secure enclave is encrypted and a hash is generated for each of the memory pages. During an access of a memory page, the hypervisor determines if the accessed memory page is in the secure enclave or in the unsecure memory area based on the hash. In another embodiment, a method for accessing a memory page in the secure enclave is provided.
Type: Application
Filed: October 8, 2020
Publication date: April 14, 2022
Inventors: Jan Hoogerbrugge, Wilhelmus Petrus Adrianus Johannus Michiels
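A conceptual sketch of the hash-based classification step in Python (no hypervisor or SLAT machinery is shown, since that cannot be meaningfully reproduced here): pages registered to the enclave are tracked by the hash of their encrypted contents, and an accessed page is treated as enclave memory if its hash is known.

```python
import hashlib

class EnclaveTracker:
    """Track which memory pages belong to the secure enclave by content hash.
    On access, a page whose hash is registered would be handled as enclave
    memory (decrypted, with restricted access rights); any other page would
    be handled as ordinary unsecure memory."""
    def __init__(self):
        self.enclave_hashes = set()

    def register_enclave_page(self, encrypted_page: bytes) -> None:
        self.enclave_hashes.add(hashlib.sha256(encrypted_page).hexdigest())

    def is_enclave_page(self, page: bytes) -> bool:
        return hashlib.sha256(page).hexdigest() in self.enclave_hashes
```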
-
Patent number: 11270227
Abstract: A method is provided for managing a machine learning system. In the method, a database is provided for storing a plurality of data elements. A plurality of machine learning models is trained using assigned subsets of the plurality of data elements. The outputs of the plurality of machine learning models are provided to an aggregator. During inference operation of the machine learning system, the aggregator determines a final output based on outputs from the plurality of models. If it is determined that an assigned subset must be changed because, for example, a record must be deleted, then the data element is removed from the selected assigned subset. The affected machine learning model associated with the changed assigned subset is removed, and retrained using the changed assigned subset.
Type: Grant
Filed: October 1, 2018
Date of Patent: March 8, 2022
Assignee: NXP B.V.
Inventors: Nikita Veshchikov, Joppe Willem Bos, Wilhelmus Petrus Adrianus Johannus Michiels
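The record-deletion flow above can be sketched as bookkeeping over data shards, one per ensemble member. The `train_fn` callable, the record layout, and the majority-vote aggregator are assumptions for illustration, not details taken from the patent.

```python
def forget_record(shards, models, train_fn, record_id):
    """Remove one record from whichever shard holds it and retrain only the
    ensemble member trained on that shard. `shards` is a list of record lists,
    `models` the corresponding trained models, and `train_fn` a callable that
    rebuilds a model from a shard (all assumed interfaces)."""
    for i, shard in enumerate(shards):
        if any(r["id"] == record_id for r in shard):
            shards[i] = [r for r in shard if r["id"] != record_id]
            models[i] = train_fn(shards[i])   # retrain only the affected model
            break
    return shards, models

def aggregate(models, x):
    """Majority vote over the ensemble members as a simple aggregator."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)
```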
-
Publication number: 20220067503
Abstract: A method is provided for analyzing a similarity between classes of a plurality of classes in a trained machine learning (ML) model. The method includes collecting weights of connections from each node of a first predetermined layer of a neural network (NN) to each node of a second predetermined layer of the NN to which the nodes of the first predetermined layer are connected. The collected weights are used to calculate distances from each node of the first predetermined layer to nodes of the second predetermined layer to which the first predetermined layer nodes are connected. The distances are compared to determine which classes the NN determines are similar. Two or more of the similar classes may then be analyzed using any of a variety of techniques to determine why the two or more classes of the NN were determined to be similar.
Type: Application
Filed: August 26, 2020
Publication date: March 3, 2022
Inventors: Brian Ermans, Gerardus Antonius Franciscus Derks, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
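As one concrete special case (an assumption, since the abstract speaks of any two predetermined layers), the rows of a final fully connected layer can be treated as per-class weight vectors and compared pairwise; the closest pairs are the classes the network treats as most alike.

```python
import numpy as np

def similar_class_pairs(weights, class_names, top_n=5):
    """Compare per-class weight vectors (assumed shape: num_classes x num_inputs,
    e.g. the rows of the final fully connected layer) and return the top_n
    class pairs with the smallest Euclidean distance between their vectors."""
    w = np.asarray(weights, dtype=float)
    pairs = []
    for i in range(len(w)):
        for j in range(i + 1, len(w)):
            pairs.append((float(np.linalg.norm(w[i] - w[j])),
                          class_names[i], class_names[j]))
    return sorted(pairs)[:top_n]   # smallest distance = most similar classes
```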
-
Publication number: 20210406693
Abstract: A method is described for analyzing data samples of a machine learning (ML) model to determine why the ML model classified a sample like it did. Two samples are chosen for analysis. The two samples may be nearest neighbors. Samples classified as nearest neighbors are typically samples that are more similar with respect to a predetermined criterion than other samples of a set of samples. In the method, a first set of features of a first sample and a second set of features of a second sample are collected. A set of overlapping features of the first and second sets of features is determined. Then, the set of overlapping features is analyzed using a predetermined visualization technique to determine why the ML model determined the first sample to be similar to the second sample.
Type: Application
Filed: June 25, 2020
Publication date: December 30, 2021
Inventors: Christine van Vredendaal, Wilhelmus Petrus Adrianus Johannus Michiels, Gerardus Antonius Franciscus Derks, Brian Ermans
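A small sketch of the overlap step, assuming the "features" of each sample are the indices of its strongest activations at some layer; how features are extracted and later visualized is left open in the abstract.

```python
import numpy as np

def overlapping_features(features_a, features_b, top_k=20):
    """Take the strongest features of two samples (here: indices of the top-k
    activations by magnitude) and return the overlap, which can then be fed
    to a visualization technique to explain the perceived similarity."""
    top_a = set(np.argsort(np.abs(features_a))[-top_k:].tolist())
    top_b = set(np.argsort(np.abs(features_b))[-top_k:].tolist())
    return sorted(top_a & top_b)
```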
-
Patent number: 11206130
Abstract: Various embodiments relate to a method of generating a shared secret for use in a symmetric cipher, including: receiving, by a processor, an encoded key Enc(K) and a white-box implementation of the symmetric cipher, where the encoded key Enc(K) is used in the white-box implementation; selecting, by the processor, homomorphic functions ? and ? and the values c1 and c3 such that Enc(K)?c1=Enc(K?c3); and transmitting, by the processor, ? and c3 to another device.
Type: Grant
Filed: July 31, 2018
Date of Patent: December 21, 2021
Assignee: NXP B.V.
Inventors: Joppe Willem Bos, Rudi Verslegers, Wilhelmus Petrus Adrianus Johannus Michiels
-
Publication number: 20210326748
Abstract: A method for protecting a first machine learning (ML) model is provided. In the method, a dataset of non-problem domain (NPD) data is selected from a large dataset using a second ML model. The second ML model classifies the large dataset into NPD classifications and PD classifications. The PD classified data is excluded. A distinguisher includes a third ML model that is trained using selected NPD data from the large dataset. The distinguisher receives input samples that are intended for the first ML model. The third ML model provides either a PD classification or NPD classification in response to receiving each input sample. An indication of a likely extraction attempt may be provided when a predetermined number of NPD classifications are provided. The method provides an efficient way to create a training dataset for a distinguisher and for protecting a ML model with the distinguisher.
Type: Application
Filed: April 21, 2020
Publication date: October 21, 2021
Inventors: Christine van Vredendaal, Wilhelmus Petrus Adrianus Johannus Michiels
-
Publication number: 20210264295
Abstract: A method is provided for analyzing a classification in a machine learning (ML) model. In the method, the ML model is trained using a training dataset to produce a trained ML model. One or more samples are provided to the trained ML model to produce one or more prediction classifications. A gradient is determined for the one or more samples at a predetermined layer of the trained ML model. The one or more gradients and the one or more prediction classifications for each sample are stored. Also, an intermediate value of the ML model may be stored. Then, a sample is chosen to analyze. A gradient of the sample is determined if the gradient was not already determined when the at least one gradient is determined. Using the at least one gradient, and one or more of a data structure, a predetermined metric, and an intermediate value, the k nearest neighbors to the sample are determined. A report comprising the sample and the k nearest neighbors may be provided for analysis.
Type: Application
Filed: February 20, 2020
Publication date: August 26, 2021
Inventors: Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal
-
Publication number: 20210240803
Abstract: A method is provided for watermarking a machine learning model. In the method, a first subset of a labeled set of ML training samples is selected. The first subset is of a predetermined class of images. A first pixel pattern is selected and inserted into each sample of the first subset. One or more of a location, position, orientation, and transformation of the first pixel pattern is varied for each of the samples. Each sample of the first subset is relabeled to have a different label than the original label. The ML model is trained with the labeled set of ML training samples and the first subset of relabeled ML training samples. To detect the watermark, a second subset of training samples is selected, and the first pixel pattern is inserted into each sample. The second subset is used during inference operation to detect the presence of the watermark.
Type: Application
Filed: February 3, 2020
Publication date: August 5, 2021
Inventor: Wilhelmus Petrus Adrianus Johannus Michiels
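The detection phase described above can be sketched as overlaying the secret pattern on a held-out subset and checking how often a suspect model returns the watermark class; the alpha blend, the hit-rate threshold, and the same-shape arrays are assumptions for illustration.

```python
import numpy as np

def watermark_detected(model_predict, test_images, pattern, watermark_label,
                       alpha=0.3, min_hit_rate=0.8):
    """Insert the secret pixel pattern into held-out samples and check how
    often the suspect model outputs the watermark class; a high hit rate on
    patterned inputs indicates the watermark is present.
    Assumes images and pattern are same-shaped float arrays."""
    hits = 0
    for img in test_images:
        marked = (1.0 - alpha) * img + alpha * pattern   # overlay the trigger pattern
        hits += int(model_predict(marked) == watermark_label)
    return hits / len(test_images) >= min_hit_rate
```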
-
Patent number: 10873459
Abstract: A white-box system for authenticating a user-supplied password, including: a password database including a salt value and an authentication value for each user; a white-box implementation of a symmetric cipher configured to produce an encrypted value by encrypting the user-supplied password using the salt value associated with the user as an encoded secret key; and a comparator configured to compare the encrypted value with the authentication value associated with the user to verify the user-supplied password.
Type: Grant
Filed: September 24, 2018
Date of Patent: December 22, 2020
Assignee: NXP B.V.
Inventors: Joppe Willem Bos, Rudi Verslegers, Wilhelmus Petrus Adrianus Johannus Michiels
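To make the enroll/verify flow concrete, here is a Python sketch in which an HMAC keyed by the salt stands in for the patent's white-box symmetric cipher. This substitution is deliberate: a real white-box implementation would embed the encoded key in lookup tables rather than call a keyed primitive, which cannot be shown compactly here.

```python
import hashlib
import hmac
import secrets

def whitebox_encrypt(salt: bytes, password: bytes) -> bytes:
    """Stand-in for the white-box symmetric cipher keyed by the per-user salt."""
    return hmac.new(salt, password, hashlib.sha256).digest()

def enroll(password: str):
    """Create the per-user database entry: a random salt and the authentication
    value obtained by encrypting the password under the salt-derived key."""
    salt = secrets.token_bytes(16)
    return salt, whitebox_encrypt(salt, password.encode())

def verify(salt: bytes, auth_value: bytes, candidate: str) -> bool:
    """Comparator: re-encrypt the supplied password and compare in constant time."""
    return hmac.compare_digest(whitebox_encrypt(salt, candidate.encode()), auth_value)
```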
-
Publication number: 20200364333
Abstract: A method for protecting a machine learning (ML) model is provided. During inference operation of the ML model, a plurality of input samples is provided to the ML model. A distribution of a plurality of output predictions from a predetermined node in the ML model is measured. If the distribution of the plurality of output predictions indicates correct output category prediction with low confidence, then the machine learning model is slowed to reduce a prediction rate of subsequent output predictions. If the distribution of the plurality of categories indicates correct output category prediction with a high confidence, then the machine learning model is not slowed to reduce the prediction rate of subsequent output predictions of the machine learning model. A moving average of the distribution may be used to determine the speed reduction. This makes a cloning attack on the ML model take longer with minimal impact to a legitimate user.
Type: Application
Filed: May 16, 2019
Publication date: November 19, 2020
Inventors: Gerardus Antonius Franciscus Derks, Brian Ermans, Wilhelmus Petrus Adrianus Johannus Michiels, Christine van Vredendaal