Patents by Inventor George Kesidis
George Kesidis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11704409
Abstract: This patent concerns novel technology for detecting backdoors in neural-network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to change significantly in the case of prediction or regression. The backdoors under consideration are small in norm so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
Type: Grant
Filed: May 2, 2021
Date of Patent: July 18, 2023
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
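A common way to hunt for such backdoors, in the spirit of the minimal-perturbation approach mentioned in the TTE entry below, is to reverse-engineer, for every (source, target) class pair, the smallest perturbation that flips class decisions, and to flag pairs whose perturbation is anomalously small. The following is a toy sketch of that idea for a linear softmax classifier, where the minimal perturbation has a closed form; the classifier, the flagging rule, and all parameter values are illustrative assumptions, not the patented method:

```python
import numpy as np

def min_flip_perturbation(W, b, x, src, tgt):
    # Closed-form smallest L2 perturbation moving x across the
    # src-vs-tgt decision boundary of the linear classifier W x + b.
    w = W[tgt] - W[src]
    margin = float(w @ x + (b[tgt] - b[src]))
    if margin >= 0:               # x already scores higher for tgt
        return np.zeros_like(x)
    return (-margin / (w @ w)) * w

def backdoor_scan(W, b, X_by_class, k=1.0):
    # Median minimal-perturbation norm for every ordered (source,
    # target) pair; pairs far below the mean over pairs are flagged.
    K = len(W)
    med = {}
    for s in range(K):
        for t in range(K):
            if s != t:
                norms = [np.linalg.norm(min_flip_perturbation(W, b, x, s, t))
                         for x in X_by_class[s]]
                med[(s, t)] = float(np.median(norms))
    vals = np.array(list(med.values()))
    flagged = sorted(p for p, v in med.items() if v < vals.mean() - k * vals.std())
    return med, flagged

# Toy 3-class linear classifier; a bias boost toward class 2 stands in
# for the effect of a planted backdoor making class 2 "too easy" to reach.
W = np.array([[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]])
b = np.array([0.0, 0.0, 1.3])
X_by_class = {i: [W[i]] for i in range(3)}
med, flagged = backdoor_scan(W, b, X_by_class)
```

In this toy setup, the pairs flagged are exactly those whose target is the simulated poisoned class; for DNNs the closed form does not exist and the perturbation must be found by optimization.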
-
Patent number: 11609990
Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 21, 2023
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 11514297
Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Grant
Filed: May 27, 2020
Date of Patent: November 29, 2022
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 11475130
Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network classifiers, particularly deep neural network (DNN) classifiers. The manner of detection is similar to that used to detect backdoors in a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector employs a method (similar to that used to detect backdoors in DNNs) to discover, for each sample in a set of clean (correctly classified) samples, the minimal perturbations that change the sample's ground-truth (source) class to every other (target) class.
Type: Grant
Filed: September 25, 2020
Date of Patent: October 18, 2022
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
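One way to turn the clean-sample perturbation statistics described above into a detector is to treat the clean samples' distances to the nearest decision boundary as a null distribution and flag test inputs that sit anomalously close to a boundary. The sketch below assumes a linear classifier so the distance has a closed form; the real invention targets DNNs, and the empirical p-value rule here is an illustrative simplification:

```python
import numpy as np

def boundary_distance(W, b, x):
    # Distance from x to the nearest decision boundary of the linear
    # classifier W x + b (minimum over all non-decided classes).
    scores = W @ x + b
    c = int(np.argmax(scores))
    return min((scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j])
               for j in range(len(W)) if j != c)

def tte_pvalue(W, b, x, clean_X):
    # Empirical p-value: fraction of clean samples sitting at least as
    # close to a decision boundary as x does.  A very small p-value
    # suggests x was pushed just across a boundary, i.e., a TTE attack.
    null = np.array([boundary_distance(W, b, xc) for xc in clean_X])
    return float(np.mean(null <= boundary_distance(W, b, x)))
```

An adversarially perturbed input, which by construction barely crosses a boundary, receives a p-value near zero, while typical clean inputs receive moderate p-values.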
-
Publication number: 20210256125
Abstract: This patent concerns novel technology for detecting backdoors in neural-network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to change significantly in the case of prediction or regression. The backdoors under consideration are small in norm so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
Type: Application
Filed: May 2, 2021
Publication date: August 19, 2021
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20210019399
Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network classifiers, particularly deep neural network (DNN) classifiers. The manner of detection is similar to that used to detect backdoors in a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector employs a method (similar to that used to detect backdoors in DNNs) to discover, for each sample in a set of clean (correctly classified) samples, the minimal perturbations that change the sample's ground-truth (source) class to every other (target) class.
Type: Application
Filed: September 25, 2020
Publication date: January 21, 2021
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20200387608
Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Application
Filed: August 25, 2020
Publication date: December 10, 2020
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Publication number: 20200380118
Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. There may be no components (edges, nodes) of the DNN dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
Type: Application
Filed: May 27, 2020
Publication date: December 3, 2020
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 10846308
Abstract: This patent concerns novel technology for detecting zero-day data classes in domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume a known-class database is available for learning a null hypothesis that a given new batch of unlabeled data contains no data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an online setting. Labeling of anomalous clusters (by a human expert or by other means) provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
Type: Grant
Filed: July 26, 2017
Date of Patent: November 24, 2020
Assignee: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
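The hypothesis test described above asks whether an unlabeled batch needs extra model structure beyond the known classes. A deliberately crude one-dimensional sketch of that question (not the patented parsimonious mixture/topic method): fit a Gaussian to known-class data, then check via a BIC-style penalized comparison whether adding one candidate component, parameterized from the batch itself, explains the batch significantly better. The fixed mixing weight and the two-parameter penalty are illustrative assumptions:

```python
import numpy as np

def gauss_ll(x, mu, var):
    # Elementwise Gaussian log-likelihood.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def batch_bic_test(known, batch):
    # H0: the batch follows the single Gaussian fit on known-class data.
    # H1: part of the batch comes from one extra component, crudely
    #     parameterized by the batch's own mean and variance.
    mu0, v0 = known.mean(), known.var() + 1e-9
    ll0 = gauss_ll(batch, mu0, v0).sum()
    mu1, v1 = batch.mean(), batch.var() + 1e-9
    pi = 0.5                                  # fixed mixing weight (assumption)
    ll1 = np.logaddexp(np.log(pi) + gauss_ll(batch, mu0, v0),
                       np.log(1 - pi) + gauss_ll(batch, mu1, v1)).sum()
    n = len(batch)
    bic0 = -2 * ll0                           # H0 parameters fixed a priori
    bic1 = -2 * ll1 + 2 * np.log(n)           # penalize the 2 extra parameters
    return bool(bic1 < bic0)                  # True => anomalous class present
```

A batch containing a latent cluster far from the known class passes the test (extra component justified), while a batch drawn from the known class does not overcome the BIC penalty.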
-
Publication number: 20190188212
Abstract: This patent concerns novel technology for detecting zero-day data classes in domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume a known-class database is available for learning a null hypothesis that a given new batch of unlabeled data contains no data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an online setting. Labeling of anomalous clusters (by a human expert or by other means) provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
Type: Application
Filed: July 26, 2017
Publication date: June 20, 2019
Applicant: Anomalee Inc.
Inventors: David Jonathan Miller, George Kesidis
-
Patent number: 9038172
Abstract: Sound, robust methods identify the most suitable, parsimonious set of tests to use for prioritized, sequential anomaly detection in a collected batch of sample data. While the focus is on detecting anomalies in network traffic flows and classifying network traffic flows into application types, the methods also apply to other anomaly detection and classification settings, including detecting email spam, fraud (e.g., credit card fraud), imposters, unusual events (for example, in images and video), host-based computer intrusions, equipment or complex-system failures, and anomalous measurements in scientific experiments.
Type: Grant
Filed: May 7, 2012
Date of Patent: May 19, 2015
Assignee: The Penn State Research Foundation
Inventors: David J. Miller, George Kesidis, Jayaram Raghuram
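Prioritized sequential detection over a battery of tests can be illustrated with a simple multiple-testing sketch: for each sample, take its most significant p-value across the test set, correct it for having searched over many tests, and report detections in order of significance. The min-p aggregation and Šidák correction used below are generic illustrative choices, not the patented test-selection method:

```python
import numpy as np

def prioritized_detection(pvals, alpha=0.05):
    # pvals: (n_samples, n_tests) matrix of per-test p-values.
    # Take each sample's best (minimum) p-value over the test battery,
    # Sidak-correct it for having searched over m tests, then report
    # detections in priority (most significant first) order.
    n, m = pvals.shape
    corrected = 1.0 - (1.0 - pvals.min(axis=1)) ** m
    order = np.argsort(corrected)
    return [(int(i), float(corrected[i])) for i in order if corrected[i] < alpha]
```

For example, with two tests per sample, a sample whose best raw p-value is 0.001 outranks one whose best is 0.02, and samples with only unremarkable p-values are not reported at all.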
-
Publication number: 20120284791
Abstract: Sound, robust methods identify the most suitable, parsimonious set of tests to use for prioritized, sequential anomaly detection in a collected batch of sample data. While the focus is on detecting anomalies in network traffic flows and classifying network traffic flows into application types, the methods also apply to other anomaly detection and classification settings, including detecting email spam, fraud (e.g., credit card fraud), imposters, unusual events (for example, in images and video), host-based computer intrusions, equipment or complex-system failures, and anomalous measurements in scientific experiments.
Type: Application
Filed: May 7, 2012
Publication date: November 8, 2012
Applicant: The Penn State Research Foundation
Inventors: David J. Miller, George Kesidis, Jayaram Raghuram
-
Patent number: 7752324
Abstract: To facilitate effective and efficient tracing of packet flows back to a trusted point as near as possible to the source of the flow in question, devices on the border of the trusted region are configured to mark packets with partial address information. Typically, the markings comprise fragments of the border devices' IP addresses combined with fragment identifiers. By combining a small number of marked packets, victims or other interested parties can reconstruct the IP address of each border device that forwarded a particular packet flow into the trusted region, and thereby approximately locate the source(s) of traffic without requiring the assistance of outside network operators. Moreover, traceback can be done in real time, e.g., while a DDoS attack is ongoing, so that the attack can be stopped before the victim suffers serious damage.
Type: Grant
Filed: July 11, 2003
Date of Patent: July 6, 2010
Assignee: Penn State Research Foundation
Inventors: Ihab Hamadeh, George Kesidis
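The marking-and-reconstruction idea above can be sketched as follows. In this toy version a border router splits its IPv4 address into octet fragments tagged with fragment identifiers and stamps one fragment per packet; the victim reassembles the address once all fragments have been observed. A real scheme would pack marks into scarce IP header bits and disambiguate marks from multiple routers, which this sketch deliberately omits:

```python
def mark_fragments(router_ip):
    # Border-router side: split a dotted-quad IPv4 address into
    # (fragment_id, octet) marks; the router would stamp one randomly
    # chosen mark into each forwarded packet.
    return list(enumerate(int(o) for o in router_ip.split(".")))

def reconstruct(collected_marks):
    # Victim side: combine (fragment_id, octet) marks gathered from
    # several packets of one flow back into the router's address.
    frag = dict(collected_marks)          # duplicate marks simply overwrite
    assert len(frag) == 4, "need all four fragments of the address"
    return ".".join(str(frag[i]) for i in range(4))
```

Because each mark carries its fragment identifier, the victim can reassemble the address even when marks arrive out of order or repeated, which is exactly why only a small number of marked packets is needed.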
-
Publication number: 20060253570
Abstract: An improved network of sensor nodes can self-organize so as to reduce energy consumption and/or improve coverage of a surveillance field. Nodes within the network may be dynamically activated or deactivated so as to lengthen network lifetime and/or enhance sensor coverage of the surveillance field.
Type: Application
Filed: January 24, 2006
Publication date: November 9, 2006
Inventors: Pratik Biswas, Yi Zou, Shashi Phoha, Krishnendu Chakrabarty, Parameswaran Ramanathan, George Kesidis, Niveditha Sundaram, Lun Tong
-
Publication number: 20040093521
Abstract: To facilitate effective and efficient tracing of packet flows back to a trusted point as near as possible to the source of the flow in question, devices on the border of the trusted region are configured to mark packets with partial address information. Typically, the markings comprise fragments of the border devices' IP addresses combined with fragment identifiers. By combining a small number of marked packets, victims or other interested parties can reconstruct the IP address of each border device that forwarded a particular packet flow into the trusted region, and thereby approximately locate the source(s) of traffic without requiring the assistance of outside network operators. Moreover, traceback can be done in real time, e.g., while a DDoS attack is ongoing, so that the attack can be stopped before the victim suffers serious damage.
Type: Application
Filed: July 11, 2003
Publication date: May 13, 2004
Inventors: Ihab Hamadeh, George Kesidis