Patents by Inventor George Kesidis

George Kesidis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11704409
    Abstract: This patent concerns novel technology for detecting backdoors in neural network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to change significantly in the case of prediction or regression. The backdoors under consideration are small in norm, so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
    Type: Grant
    Filed: May 2, 2021
    Date of Patent: July 18, 2023
    Assignee: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
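The perturbation-based detection idea in the abstract above can be illustrated with a toy sketch: for every (source, target) class pair, estimate a small perturbation that flips source-class samples to the target class, then flag pairs whose required perturbation is anomalously small in norm. Everything below (the linear stand-in classifier, step sizes, and the quarter-of-median rule) is a hypothetical illustration, not the patented detector, which operates on DNNs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the model under inspection: a linear
# softmax classifier over 4 classes (a real deployment would probe a DNN).
n_classes, dim = 4, 20
W = rng.normal(size=(n_classes, dim))

def predict(x):
    return int(np.argmax(W @ x))

def flip_perturbation(x, target, lr=0.05, steps=200):
    """Gradient-ascend the target-class log-probability until the
    decision flips; returns the (roughly minimal) perturbation found."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        if predict(x + delta) == target:
            break
        logits = W @ (x + delta)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        delta += lr * (W[target] - p @ W)   # gradient of log p(target)
    return delta

# A few clean samples per class (drawn near each class weight vector).
samples = {c: [W[c] + 0.1 * rng.normal(size=dim) for _ in range(5)]
           for c in range(n_classes)}

# Average flip-perturbation norm for every (source, target) pair.
norms = {}
for s in range(n_classes):
    for t in range(n_classes):
        if s != t:
            ds = [np.linalg.norm(flip_perturbation(x, t)) for x in samples[s]]
            norms[(s, t)] = float(np.mean(ds))

# Anomaly rule: a pair needing a far-smaller-than-median perturbation
# would suggest a planted backdoor mapping source -> target.
med = float(np.median(list(norms.values())))
suspects = [pair for pair, v in norms.items() if v < 0.25 * med]
print("median flip norm:", round(med, 2), "| suspect pairs:", suspects)
```

On this clean toy model the suspect list is expected to be empty; a poisoned model would exhibit one class pair whose flip norm collapses well below the median.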
  • Patent number: 11609990
    Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm, so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: March 21, 2023
    Assignee: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Patent number: 11514297
    Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm, so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: November 29, 2022
    Assignee: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Patent number: 11475130
    Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network classifiers, particularly deep neural network (DNN) classifiers. The manner of detection is similar to that used to detect backdoors of a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector works by employing a method (similar to that used to detect backdoors in DNNs) to discover, for each sample in a set of clean (correctly classified) samples, the different minimal perturbations that change the sample's ground-truth (source) class to every other (target) class.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: October 18, 2022
    Assignee: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
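The abstract's idea of calibrating a detector with minimal class-changing perturbations of clean samples can be sketched with a toy two-class linear model, where the minimal perturbation reduces to the distance from the decision boundary: an input that barely crosses the boundary looks like a minimally perturbed evasion. The classifier weights, data distribution, and 5% flagging rate below are hypothetical illustrations, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class linear classifier; for a linear model the
# minimal class-changing perturbation is the distance to the boundary.
w, b = np.array([1.5, -2.0]), 0.3

def margin(x):
    # Signed distance of x to the decision boundary w.x + b = 0.
    return (w @ x + b) / np.linalg.norm(w)

# Null statistics from clean, correctly classified samples.
clean = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(200, 2))
null_margins = np.abs([margin(x) for x in clean])
threshold = np.percentile(null_margins, 5)   # flag the smallest 5%

def looks_evasive(x):
    """A test input that barely crosses the boundary (tiny margin) is
    consistent with a minimally perturbed test-time evasion attack."""
    return abs(margin(x)) < threshold

# An attacker nudges a clean sample just past the boundary.
x0 = clean[0]
x_adv = x0 - (margin(x0) + 1e-3) * w / np.linalg.norm(w)
print("adversarial margin:", round(abs(margin(x_adv)), 4),
      "| flagged:", bool(looks_evasive(x_adv)))
```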
  • Publication number: 20210256125
    Abstract: This patent concerns novel technology for detecting backdoors in neural network models, particularly deep neural network (DNN) classification or prediction/regression models. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to an input sample from a source class of the attack, the backdoor pattern causes the decision of the neural network to change to the attacker's target class in the case of classification, or causes the output of the network to change significantly in the case of prediction or regression. The backdoors under consideration are small in norm, so as to be imperceptible to a human or otherwise innocuous/evasive, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be specifically dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier or predictor/regressor may not be available.
    Type: Application
    Filed: May 2, 2021
    Publication date: August 19, 2021
    Applicant: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Publication number: 20210019399
    Abstract: Embodiments of the present invention concern detecting Test-Time Evasion (TTE) attacks on neural network classifiers, particularly deep neural network (DNN) classifiers. The manner of detection is similar to that used to detect backdoors of a classifier whose training dataset was poisoned. Given knowledge of the classifier itself, the adversary subtly (even imperceptibly) perturbs their input to the classifier at test time in order to cause the class decision to change from a source class to a target class. For example, an image of a person who is unauthorized to access a resource can be modified slightly so that the classifier decides the image is that of an authorized person. The detector works by employing a method (similar to that used to detect backdoors in DNNs) to discover, for each sample in a set of clean (correctly classified) samples, the different minimal perturbations that change the sample's ground-truth (source) class to every other (target) class.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 21, 2021
    Applicant: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Publication number: 20200387608
    Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm, so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
    Type: Application
    Filed: August 25, 2020
    Publication date: December 10, 2020
    Applicant: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Publication number: 20200380118
    Abstract: This patent concerns novel technology for detecting backdoors of neural network classifiers, particularly deep neural network (DNN) classifiers. The backdoors are planted by suitably poisoning the training dataset, i.e., a data-poisoning attack. Once added to input samples from a source class (or source classes), the backdoor pattern causes the decision of the neural network to change to a target class. The backdoors under consideration are small in norm, so as to be imperceptible to a human, but this does not limit their location, support, or manner of incorporation. No components (edges, nodes) of the DNN need be dedicated to achieving the backdoor function. Moreover, the training dataset used to learn the classifier may not be available.
    Type: Application
    Filed: May 27, 2020
    Publication date: December 3, 2020
    Applicant: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Patent number: 10846308
    Abstract: This patent concerns novel technology for detection of zero-day data classes for domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume a known-class database is available for learning a null model under the hypothesis that a given new batch of unlabeled data contains no data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an online setting. Labeling (by a human expert or by other means) of anomalous clusters provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: November 24, 2020
    Assignee: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
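The null-hypothesis test described in the abstract above can be caricatured in a few lines: calibrate a low-likelihood tail rate under the known-class model, then reject the null for a new batch whose count of low-likelihood samples is binomially improbable at that nominal rate. The Gaussian stand-in mixture, the 1% tail rate, and the z-threshold below are hypothetical illustrations, not the patented parsimonious mixture/topic-modeling method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known classes modeled as two unit-variance Gaussians: a toy stand-in
# for the parsimonious mixture learned from the known-class database.
means = np.array([[0.0, 0.0], [5.0, 5.0]])

def loglik(x):
    # Log-likelihood under the equal-weight known-class mixture,
    # computed with the max-trick for numerical stability.
    logs = -0.5 * ((x - means) ** 2).sum(axis=1) - np.log(2 * np.pi)
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

train = np.vstack([rng.normal(m, 1.0, size=(500, 2)) for m in means])
cut = np.percentile([loglik(x) for x in train], 1)   # nominal 1% tail

def batch_has_anomalous_class(batch, z_crit=5.0):
    """Reject the null hypothesis (no unknown classes) when the count of
    low-likelihood samples is binomially improbable at the nominal rate."""
    k = sum(loglik(x) < cut for x in batch)
    n, p = len(batch), 0.01
    z = (k - n * p) / np.sqrt(n * p * (1 - p))       # normal approximation
    return z > z_crit

clean_batch = rng.normal(means[0], 1.0, size=(200, 2))
zero_day = rng.normal([10.0, -5.0], 1.0, size=(50, 2))   # unknown class
mixed_batch = np.vstack([clean_batch[:150], zero_day])

print("clean batch flagged:", bool(batch_has_anomalous_class(clean_batch)),
      "| mixed batch flagged:", bool(batch_has_anomalous_class(mixed_batch)))
```

The flagged low-likelihood samples in a rejected batch are exactly the candidates one would cluster and hand to a human labeler to grow the actively learned classifier.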
  • Publication number: 20190188212
    Abstract: This patent concerns novel technology for detection of zero-day data classes for domains with high-dimensional mixed continuous/discrete feature spaces, including Internet traffic. Assume a known-class database is available for learning a null model under the hypothesis that a given new batch of unlabeled data contains no data from unknown/anomalous classes. A novel and effective generalization of previous parsimonious mixture and topic modeling methods is developed. The novel unsupervised anomaly detector (AD) acts on a new unlabeled batch of data to either identify the statistically significant anomalous classes latently present therein or reject the alternative hypothesis that the new batch contains any anomalous classes. The present AD invention can be applied in an online setting. Labeling (by a human expert or by other means) of anomalous clusters provides new supervised data that can be used to adapt an actively learned classifier whose objective is to discriminate all the classes.
    Type: Application
    Filed: July 26, 2017
    Publication date: June 20, 2019
    Applicant: Anomalee Inc.
    Inventors: David Jonathan Miller, George Kesidis
  • Patent number: 9038172
    Abstract: Sound, robust methods identify the most suitable, parsimonious set of tests to use for prioritized, sequential anomaly detection in a collected batch of sample data. While the focus is on detecting anomalies in network traffic flows and classifying network traffic flows into application types, the methods are also applicable to other anomaly detection and classification settings, including email spam detection, fraud detection (e.g., credit card fraud), imposter detection, unusual event detection (for example, in images and video), host-based computer intrusion detection, detection of equipment or complex system failures, and detection of anomalous measurements in scientific experiments.
    Type: Grant
    Filed: May 7, 2012
    Date of Patent: May 19, 2015
    Assignee: The Penn State Research Foundation
    Inventors: David J. Miller, George Kesidis, Jayaram Raghuram
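The parsimonious test-selection idea can be sketched as a greedy subset search: calibrate each candidate test to a fixed false-alarm rate, then add tests in order of marginal detection gain and stop when the gain is exhausted. The toy tests, data distributions, and 5% false-alarm rate below are hypothetical illustrations, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(3)

# A hypothetical pool of candidate tests, each scoring a batch of samples.
tests = {
    "mean_shift": lambda x: abs(x.mean()),
    "var_blowup": lambda x: x.var(),
    "max_spike":  lambda x: np.abs(x).max(),
}

normal = [rng.normal(0, 1, 50) for _ in range(100)]   # null batches
anomal = [rng.normal(2, 3, 50) for _ in range(100)]   # anomalous batches

# Calibrate each test to a 5% false-alarm rate on the null data.
thr = {n: np.percentile([t(x) for x in normal], 95) for n, t in tests.items()}

def fires(name, x):
    return tests[name](x) > thr[name]

# Greedy selection: add the test with the largest marginal detection
# gain; stop when no remaining test adds detections (parsimony).
chosen, detected = [], np.zeros(len(anomal), dtype=bool)
while len(chosen) < len(tests):
    gains = {n: np.mean(detected | np.array([fires(n, x) for x in anomal]))
                - detected.mean()
             for n in tests if n not in chosen}
    best = max(gains, key=gains.get)
    if gains[best] <= 0:
        break
    chosen.append(best)
    detected |= np.array([fires(best, x) for x in anomal])

print("chosen tests:", chosen, "| detection rate:", detected.mean())
```

On this toy data a single well-chosen test typically suffices, which is the point: redundant tests with zero marginal gain are never selected.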
  • Publication number: 20120284791
    Abstract: Sound, robust methods identify the most suitable, parsimonious set of tests to use for prioritized, sequential anomaly detection in a collected batch of sample data. While the focus is on detecting anomalies in network traffic flows and classifying network traffic flows into application types, the methods are also applicable to other anomaly detection and classification settings, including email spam detection, fraud detection (e.g., credit card fraud), imposter detection, unusual event detection (for example, in images and video), host-based computer intrusion detection, detection of equipment or complex system failures, and detection of anomalous measurements in scientific experiments.
    Type: Application
    Filed: May 7, 2012
    Publication date: November 8, 2012
    Applicant: The Penn State Research Foundation
    Inventors: David J. Miller, George Kesidis, Jayaram Raghuram
  • Patent number: 7752324
    Abstract: To facilitate effective and efficient tracing of packet flows back to a trusted point as near as possible to the source of the flow in question, devices on the border of the trusted region are configured to mark packets with partial address information. Typically, the markings comprise fragments of the IP addresses of the border devices in combination with fragment identifiers. By combining a small number of marked packets, victims or other interested parties are able to reconstruct the IP address of each border device that forwarded a particular packet flow into the trusted region, and thereby approximately locate the source(s) of the traffic without requiring the assistance of outside network operators. Moreover, traceback can be done in real time, e.g., while a DDoS attack is ongoing, so that the attack can be stopped before the victim suffers serious damage.
    Type: Grant
    Filed: July 11, 2003
    Date of Patent: July 6, 2010
    Assignee: Penn State Research Foundation
    Inventors: Ihab Hamadeh, George Kesidis
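The marking-and-reconstruction scheme can be sketched as follows. For simplicity the fragments here are whole octets tagged with a fragment identifier, whereas a deployed scheme must pack smaller fragments into scarce header bits; the addresses and packet counts are hypothetical.

```python
import random

random.seed(0)

# A border device splits its 32-bit IPv4 address into four octet-sized
# fragments; each outgoing packet carries one (fragment_id, octet) mark.
def ip_fragments(ip):
    return list(enumerate(int(o) for o in ip.split(".")))

def mark(packet, router_ip):
    packet["mark"] = random.choice(ip_fragments(router_ip))
    return packet

# The victim reassembles the border device's address from the marks
# carried by a modest number of packets in the flow.
def reconstruct(packets):
    octets = {}
    for p in packets:
        frag_id, octet = p["mark"]
        octets[frag_id] = octet
    if len(octets) == 4:
        return ".".join(str(octets[i]) for i in range(4))
    return None   # not all fragments observed yet

router = "192.0.2.7"   # hypothetical border-device address
flow = [mark({"seq": i}, router) for i in range(64)]
print(reconstruct(flow))
```

With each packet carrying a uniformly chosen fragment, a few dozen packets from the same flow suffice to observe all four fragment identifiers with overwhelming probability, which is why traceback can run in real time during an ongoing attack.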
  • Publication number: 20060253570
    Abstract: An improved network of sensor nodes can self-organize so as to reduce energy consumption and/or improve coverage of a surveillance field. Nodes within the network may be dynamically activated or deactivated so as to lengthen network lifetime and/or enhance sensor coverage of the surveillance field.
    Type: Application
    Filed: January 24, 2006
    Publication date: November 9, 2006
    Inventors: Pratik Biswas, Yi Zou, Shashi Phoha, Krishnendu Chakrabarty, Parameswaran Ramanathan, George Kesidis, Niveditha Sundaram, Lun Tong
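The dynamic activation idea can be sketched as a greedy self-organization pass: each node switches off unless doing so would open a hole in the surveillance coverage, so redundant sensors sleep and lengthen network lifetime. The deployment size, sensing radius, and coverage grid below are hypothetical illustrations, not the patented protocol.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical deployment: 60 sensors scattered over a 10x10 field,
# each with sensing radius 2; coverage is checked on a grid of points.
nodes = rng.uniform(0, 10, size=(60, 2))
radius = 2.0
g = np.linspace(0.5, 9.5, 19)
points = np.array([(x, y) for x in g for y in g])

def covers(active):
    # True if every surveillance point lies within radius of an active node.
    if not active.any():
        return False
    d = np.linalg.norm(points[:, None, :] - nodes[active][None, :, :], axis=2)
    return bool((d.min(axis=1) <= radius).all())

# Greedy self-organization: visit nodes in random order and switch each
# off unless the remaining active set no longer covers every point.
active = np.ones(len(nodes), dtype=bool)
for i in rng.permutation(len(nodes)):
    active[i] = False
    if not covers(active):
        active[i] = True   # this node is needed; keep it on

print(int(active.sum()), "of", len(nodes), "nodes remain active")
```

The pass never degrades coverage: a node is deactivated only when its neighbors already cover its region, so the surviving active set covers exactly what the full set did.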
  • Publication number: 20040093521
    Abstract: To facilitate effective and efficient tracing of packet flows back to a trusted point as near as possible to the source of the flow in question, devices on the border of the trusted region are configured to mark packets with partial address information. Typically, the markings comprise fragments of the IP addresses of the border devices in combination with fragment identifiers. By combining a small number of marked packets, victims or other interested parties are able to reconstruct the IP address of each border device that forwarded a particular packet flow into the trusted region, and thereby approximately locate the source(s) of the traffic without requiring the assistance of outside network operators. Moreover, traceback can be done in real time, e.g., while a DDoS attack is ongoing, so that the attack can be stopped before the victim suffers serious damage.
    Type: Application
    Filed: July 11, 2003
    Publication date: May 13, 2004
    Inventors: Ihab Hamadeh, George Kesidis