Patents by Inventor David Jonathan Miller
David Jonathan Miller has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144097
Abstract: The disclosed embodiments describe techniques for performing universal post-training backdoor detection and mitigation for classifiers. Mitigation of overfitting for a trained classifier begins with receiving the trained classifier and a clean dataset that spans a plurality of classes for the trained classifier. A set of input patterns is used to calculate classification margins for the trained classifier, and maximum classification margins are calculated for one or more classes of the trained classifier. Overfitting can then be mitigated by reducing one or more of these calculated maximum classification margins while maintaining the accuracy of the trained classifier on the clean dataset. In some embodiments, a backdoor detector may also detect target classes for a putative backdoor in the trained classifier upon detecting that the corresponding maximum classification margins for those target classes are anomalously high compared to the maximum classification margins of other classes.
Type: Application
Filed: October 12, 2023
Publication date: May 2, 2024
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis, Hang Wang
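As a rough illustration of the margin statistic described in this abstract, the sketch below uses a hypothetical linear classifier standing in for a trained DNN; the probe inputs and the MAD-based threshold are arbitrary illustrative choices, not the patented procedure. It estimates each class's maximum classification margin over a set of input patterns and flags classes whose maximum is anomalously high:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a linear map from 10 features
# to 4 class logits (the patent applies the idea to real trained networks).
n_classes, dim = 4, 10
W = rng.normal(size=(n_classes, dim))

# Estimate each class's maximum classification margin (winning logit
# minus runner-up logit) over a set of probe input patterns.
max_margin = np.zeros(n_classes)
for _ in range(2000):
    z = W @ rng.normal(size=dim)
    c = int(np.argmax(z))
    top2 = np.sort(z)[-2:]
    max_margin[c] = max(max_margin[c], top2[1] - top2[0])

# Flag putative backdoor target classes: those whose maximum margin is
# anomalously high relative to the others (crude MAD-based test).
med = np.median(max_margin)
mad = np.median(np.abs(max_margin - med)) + 1e-9
suspicious = np.flatnonzero((max_margin - med) / mad > 3.0)
print("max margins:", np.round(max_margin, 2), "suspicious:", suspicious)
```

Mitigation, per the abstract, would then shrink the flagged classes' maximum margins (e.g., by bounding the network's activations) while checking that accuracy on the clean dataset is preserved.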
-
Patent number: 11704409
Abstract: This patent concerns novel technology for detecting backdoors in neural network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to significantly change in the case of prediction or regression. The backdoors under consideration are small in norm so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
Type: Grant
Filed: May 2, 2021
Date of Patent: July 18, 2023
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
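The family of perturbation-based backdoor detectors this and the following entries describe can be sketched with a toy linear model, where the minimal source-to-target perturbation has a closed form. This is an illustrative stand-in under stated assumptions, not the patented algorithm; for a real DNN the minimal perturbation would be found by optimization. An anomalously small average perturbation for one (source, target) pair would be evidence of a planted backdoor:

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, dim = 5, 20
W = rng.normal(size=(n_classes, dim))  # toy linear "classifier"

def predict(x):
    return int(np.argmax(W @ x))

def min_flip_norm(x, s, t):
    """Smallest L2 perturbation moving a linear decision from class s to t.

    For logits Wx, the (s, t) boundary is (W[t] - W[s]) @ x = 0, so the
    distance from x to that hyperplane is the minimal perturbation size.
    """
    return max(0.0, (W[s] - W[t]) @ x) / np.linalg.norm(W[t] - W[s])

# Average, over clean samples of each source class, the perturbation size
# needed to reach each target class; an anomalously small off-diagonal
# entry would suggest a backdoor (source, target) pair.
samples = rng.normal(size=(200, dim))
pert = np.full((n_classes, n_classes), np.nan)
for s in range(n_classes):
    xs = [x for x in samples if predict(x) == s]
    for t in range(n_classes):
        if t != s and xs:
            pert[s, t] = np.mean([min_flip_norm(x, s, t) for x in xs])
print(np.round(pert, 3))
```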
-
Patent number: 11609990
Abstract: This patent concerns novel technology for detecting backdoors of neural network, particularly deep neural network (DNN), classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 21, 2023
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 11514297
Abstract: This patent concerns novel technology for detecting backdoors of neural network, particularly deep neural network (DNN), classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Grant
Filed: May 27, 2020
Date of Patent: November 29, 2022
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 11475130
Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network, particularly deep neural network (DNN), classifiers. The manner of detection is similar to that used to detect backdoors of a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector is based on employing a method (similar to that used to detect backdoors in DNNs) to discover different such minimal perturbations for each of a set of clean (correctly classified) samples, to change the sample's ground-truth (source) class to every other (target) class.
Type: Grant
Filed: September 25, 2020
Date of Patent: October 18, 2022
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
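A minimal sketch of the null-statistics idea, again with a hypothetical linear classifier standing in for a DNN: build the distribution of minimal perturbation sizes for clean samples, then flag test inputs that lie abnormally close to a decision boundary, as a subtly perturbed TTE input typically must. The quantile threshold here is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, dim = 3, 8
W = rng.normal(size=(n_classes, dim))  # toy linear classifier

def boundary_distance(x):
    """L2 distance from x to the nearest decision boundary of its winner."""
    z = W @ x
    c = int(np.argmax(z))
    return min((z[c] - z[t]) / np.linalg.norm(W[c] - W[t])
               for t in range(n_classes) if t != c)

# Null statistics: minimal perturbation sizes over a set of clean,
# correctly classified samples.
clean = rng.normal(size=(500, dim))
null_d = np.array([boundary_distance(x) for x in clean])

def is_suspicious(x, alpha=0.02):
    """Flag inputs abnormally close to a decision boundary, relative to
    the clean-sample null distribution (alpha is illustrative)."""
    return bool(boundary_distance(x) < np.quantile(null_d, alpha))

print("null quantiles:", np.round(np.quantile(null_d, [0.02, 0.5]), 3))
```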
-
Publication number: 20210256125
Abstract: This patent concerns novel technology for detecting backdoors in neural network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to significantly change in the case of prediction or regression. The backdoors under consideration are small in norm so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
Type: Application
Filed: May 2, 2021
Publication date: August 19, 2021
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20210019399
Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network, particularly deep neural network (DNN), classifiers. The manner of detection is similar to that used to detect backdoors of a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector is based on employing a method (similar to that used to detect backdoors in DNNs) to discover different such minimal perturbations for each of a set of clean (correctly classified) samples, to change the sample's ground-truth (source) class to every other (target) class.
Type: Application
Filed: September 25, 2020
Publication date: January 21, 2021
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20200387608
Abstract: This patent concerns novel technology for detecting backdoors of neural network, particularly deep neural network (DNN), classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Application
Filed: August 25, 2020
Publication date: December 10, 2020
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20200380118
Abstract: This patent concerns novel technology for detecting backdoors of neural network, particularly deep neural network (DNN), classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support or manner of incorporation. There may not be components (edges, nodes) of the DNN which are dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Application
Filed: May 27, 2020
Publication date: December 3, 2020
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 10846308
Abstract: This patent concerns novel technology for detection of zero-day data classes for domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume there is a known-class database available for learning a null hypothesis that a given new batch of unlabeled data does not contain any data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an on-line setting. Labeling (by a human expert or by other means) of anomalous clusters provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
Type: Grant
Filed: July 26, 2017
Date of Patent: November 24, 2020
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
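The batch-level hypothesis test can be illustrated with a deliberately simplified null model: a single Gaussian stands in for the patent's parsimonious mixture, and a bootstrap quantile stands in for its significance test. All names and thresholds here are illustrative assumptions, not the patented method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Known-class data: fit a simple Gaussian null model (a stand-in for the
# parsimonious mixture/topic model of the patent).
known = rng.normal(0.0, 1.0, size=(2000, 4))
mu, cov = known.mean(axis=0), np.cov(known.T)
inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def avg_loglik(batch):
    """Average per-sample log-likelihood of a batch under the null model."""
    d = batch - mu
    quad = np.einsum('ij,jk,ik->i', d, inv, d)
    return np.mean(-0.5 * (quad + logdet + batch.shape[1] * np.log(2 * np.pi)))

# Null distribution of the batch statistic via bootstrap over known data.
stats = [avg_loglik(known[rng.integers(0, 2000, 100)]) for _ in range(300)]
threshold = np.quantile(stats, 0.01)

# A batch contaminated with an unknown class scores far below the
# threshold, rejecting the "no anomalous classes" null hypothesis.
anom_batch = np.vstack([rng.normal(0.0, 1.0, size=(70, 4)),
                        rng.normal(5.0, 1.0, size=(30, 4))])
print(avg_loglik(anom_batch) < threshold)
```

In the patented setting, a rejected batch would then be clustered so the latent anomalous classes can be labeled and fed back to an actively learned classifier.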
-
Publication number: 20190188212
Abstract: This patent concerns novel technology for detection of zero-day data classes for domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume there is a known-class database available for learning a null hypothesis that a given new batch of unlabeled data does not contain any data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an on-line setting. Labeling (by a human expert or by other means) of anomalous clusters provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
Type: Application
Filed: July 26, 2017
Publication date: June 20, 2019
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 6305180
Abstract: A system for cooling electronic equipment, especially modular electronic units mounted in racks, each having a casing with an air inlet, an air outlet and a fan for cooling the contents of the unit. The system uses chiller units between adjacent racks for returning cooled air to ambient, each of which can be a piping array connected to a supply of chilled water. A baffle backing the piping array improves heat exchange and provides a more uniform transfer of heat to each array, thereby enabling optimum temperature differences to be achieved across each rack. A slidable framework supports the piping array and defines an air space on the cooled air side of the array. An auxiliary piping array extends rearwardly of the racks so as to communicate with rear air spaces behind the racks and thereby cool heated air flowing in the rear air spaces.
Type: Grant
Filed: September 12, 2000
Date of Patent: October 23, 2001
Assignees: British Broadcasting Corporation, Troy (UK) Limited
Inventors: David Jonathan Miller, Ian James Sams, Michael James Holland, Simon David Fields