Patents by Inventor Sherin M. Mathews

Sherin M. Mathews is a named inventor on the following patent filings. This listing includes patent applications that are pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847215
    Abstract: A method for halting malware includes: monitoring plural file system events with a system driver to detect an occurrence of a file system event having a predetermined file type and log event type; triggering a listening engine for file system event stream data of a file associated with the detection of the file system event, the file system event stream data indicating data manipulation associated with the file due to execution of a process; obtaining one or more feature values for each of plural different feature combinations of plural features of the file based on the file system event stream data; inputting one or more feature values into a data analytics model to predict a target label value based on the one or more feature values of the plural different feature combinations and agnostic to the process; and performing a predetermined operation based on the target label value.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: December 19, 2023
    Assignee: McAfee, LLC
    Inventors: Celeste R. Fralick, Jonathan King, Carl D. Woodward, Andrew V. Holtzmann, Kunal Mehta, Sherin M. Mathews
  • Publication number: 20230334906
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to detect deepfake content. An example apparatus to determine whether input media is authentic includes a classifier to generate a first probability based on a first output of a local binary model manager, a second probability based on a second output of a filter model manager, and a third probability based on a third output of an image quality assessor, a score analyzer to obtain the first, second, and third probabilities from the classifier, and in response to obtaining a first result and a second result, generate a score indicative of whether the input media is authentic based on the first result, the second result, the first probability, the second probability, and the third probability.
    Type: Application
    Filed: June 22, 2023
    Publication date: October 19, 2023
    Inventors: Utkarsh Verma, Sherin M. Mathews, Amanda House, Carl Woodward, Celeste Fralick, Jonathan King
  • Patent number: 11790237
    Abstract: Methods, apparatus, systems and articles of manufacture to defend against adversarial machine learning are disclosed. An example apparatus includes memory; computer readable instructions; and processor circuitry to execute the computer readable instructions to: generate a first output indicating a feature that contributed to the generation of a classification by a machine learning model; compare the first output with a second output generated by a server that trained the machine learning model; and flag the machine learning model as corresponding to at least one of model drift or an adversarial attack when the first output differs from the second output by more than a threshold.
    Type: Grant
    Filed: January 30, 2023
    Date of Patent: October 17, 2023
    Assignee: McAfee, LLC
    Inventors: Sherin M. Mathews, Celeste R. Fralick
  • Patent number: 11727721
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to detect deepfake content. An example apparatus to determine whether input media is authentic includes a classifier to generate a first probability based on a first output of a local binary model manager, a second probability based on a second output of a filter model manager, and a third probability based on a third output of an image quality assessor, a score analyzer to obtain the first, second, and third probabilities from the classifier, and in response to obtaining a first result and a second result, generate a score indicative of whether the input media is authentic based on the first result, the second result, the first probability, the second probability, and the third probability.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: August 15, 2023
    Assignee: McAfee, LLC
    Inventors: Utkarsh Verma, Sherin M. Mathews, Amanda House, Carl Woodward, Celeste Fralick, Jonathan King
  • Publication number: 20230186097
    Abstract: Methods, apparatus, systems and articles of manufacture to defend against adversarial machine learning are disclosed. An example apparatus includes memory; computer readable instructions; and processor circuitry to execute the computer readable instructions to: generate a first output indicating a feature that contributed to the generation of a classification by a machine learning model; compare the first output with a second output generated by a server that trained the machine learning model; and flag the machine learning model as corresponding to at least one of model drift or an adversarial attack when the first output differs from the second output by more than a threshold.
    Type: Application
    Filed: January 30, 2023
    Publication date: June 15, 2023
    Inventors: Sherin M. Mathews, Celeste R. Fralick
  • Patent number: 11568049
    Abstract: Methods, apparatus, systems and articles of manufacture to defend against adversarial machine learning are disclosed. An example apparatus includes a model trainer to train a classification model based on files with expected classifications; and a model modifier to select a convolution layer of the trained classification model based on an analysis of the convolution layers of the trained classification model; and replace the convolution layer with a tree-based structure to generate a modified classification model.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 31, 2023
    Assignee: McAfee, LLC
    Inventors: Sherin M. Mathews, Celeste R. Fralick
  • Publication number: 20220269922
    Abstract: Methods, apparatus, systems and articles of manufacture to improve deepfake detection with explainability are disclosed. An example apparatus includes a deepfake classification model trainer to train a classification model based on a first portion of a dataset of media with known classification information, the classification model to output a classification for input media from a second portion of the dataset of media with known classification information; an explainability map generator to generate an explainability map based on the output of the classification model; a classification analyzer to compare the classification of the input media from the classification model with a known classification of the input media to determine if a misclassification occurred; and a model modifier to, when the misclassification occurred, modify the classification model based on the explainability map.
    Type: Application
    Filed: February 23, 2021
    Publication date: August 25, 2022
    Inventor: Sherin M. Mathews
  • Publication number: 20210226975
    Abstract: Methods, systems, and media for detecting anomalous network activity are provided. In some embodiments, a method for detecting anomalous network activity is provided, the method comprising: receiving information indicating network activity, wherein the information includes IP addresses corresponding to devices participating in the network activity; generating a graph representing the network activity, wherein each node of the graph indicates an IP address of a device; generating a representation of the graph, wherein the representation of the graph reduces a dimensionality of information indicated in the graph; identifying a plurality of clusters of network activity based on the representation of the graph; determining that at least one cluster corresponds to anomalous network activity; and in response to determining that the at least one cluster corresponds to anomalous network activity, causing a network connection of at least one device included in the at least one cluster to be blocked.
    Type: Application
    Filed: April 6, 2021
    Publication date: July 22, 2021
    Inventors: Sherin M. Mathews, Vaisakh Shaj, Sriranga Seetharamaiah, Carl D. Woodward, Kantheti VVSMB Kumar
  • Publication number: 20210157913
    Abstract: A method for halting malware includes: monitoring plural file system events with a system driver to detect an occurrence of a file system event having a predetermined file type and log event type; triggering a listening engine for file system event stream data of a file associated with the detection of the file system event, the file system event stream data indicating data manipulation associated with the file due to execution of a process; obtaining one or more feature values for each of plural different feature combinations of plural features of the file based on the file system event stream data; inputting one or more feature values into a data analytics model to predict a target label value based on the one or more feature values of the plural different feature combinations and agnostic to the process; and performing a predetermined operation based on the target label value.
    Type: Application
    Filed: December 23, 2020
    Publication date: May 27, 2021
    Inventors: Celeste R. Fralick, Jonathan King, Carl D. Woodward, Andrew V. Holtzmann, Kunal Mehta, Sherin M. Mathews
  • Patent number: 11005868
    Abstract: Methods, systems, and media for detecting anomalous network activity are provided. In some embodiments, a method for detecting anomalous network activity is provided, the method comprising: receiving information indicating network activity, wherein the information includes IP addresses corresponding to devices participating in the network activity; generating a graph representing the network activity, wherein each node of the graph indicates an IP address of a device; generating a representation of the graph, wherein the representation of the graph reduces a dimensionality of information indicated in the graph; identifying a plurality of clusters of network activity based on the representation of the graph; determining that at least one cluster corresponds to anomalous network activity; and in response to determining that the at least one cluster corresponds to anomalous network activity, causing a network connection of at least one device included in the at least one cluster to be blocked.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: May 11, 2021
    Assignee: McAfee, LLC
    Inventors: Sherin M. Mathews, Vaisakh Shaj, Sriranga Seetharamaiah, Carl D. Woodward, Kantheti VVSMB Kumar
  • Publication number: 20210097260
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to detect deepfake content. An example apparatus to determine whether input media is authentic includes a classifier to generate a first probability based on a first output of a local binary model manager, a second probability based on a second output of a filter model manager, and a third probability based on a third output of an image quality assessor, a score analyzer to obtain the first, second, and third probabilities from the classifier, and in response to obtaining a first result and a second result, generate a score indicative of whether the input media is authentic based on the first result, the second result, the first probability, the second probability, and the third probability.
    Type: Application
    Filed: September 29, 2020
    Publication date: April 1, 2021
    Inventors: Utkarsh Verma, Sherin M. Mathews, Amanda House, Carl Woodward, Celeste Fralick, Jonathan King
  • Publication number: 20210097176
    Abstract: Methods, apparatus, systems and articles of manufacture to defend against adversarial machine learning are disclosed. An example apparatus includes a model trainer to train a classification model based on files with expected classifications; and a model modifier to select a convolution layer of the trained classification model based on an analysis of the convolution layers of the trained classification model; and replace the convolution layer with a tree-based structure to generate a modified classification model.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Sherin M. Mathews, Celeste R. Fralick
  • Publication number: 20210097382
    Abstract: Methods, apparatus, systems and articles of manufacture to improve deepfake detection with explainability are disclosed. An example apparatus includes a deepfake classification model trainer to train a classification model based on a first portion of a dataset of media with known classification information, the classification model to output a classification for input media from a second portion of the dataset of media with known classification information; an explainability map generator to generate an explainability map based on the output of the classification model; a classification analyzer to compare the classification of the input media from the classification model with a known classification of the input media to determine if a misclassification occurred; and a model modifier to, when the misclassification occurred, modify the classification model based on the explainability map.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Sherin M. Mathews, Shivangee Trivedi, Amanda House, Celeste R. Fralick, Steve Povolny, Steve Grobman
  • Patent number: 10956568
    Abstract: A method for halting malware includes: monitoring plural file system events with a system driver to detect an occurrence of a file system event having a predetermined file type and log event type; triggering a listening engine for file system event stream data of a file associated with the detection of the file system event, the file system event stream data indicating data manipulation associated with the file due to execution of a process; obtaining one or more feature values for each of plural different feature combinations of plural features of the file based on the file system event stream data; inputting one or more feature values into a data analytics model to predict a target label value based on the one or more feature values of the plural different feature combinations and agnostic to the process; and performing a predetermined operation based on the target label value.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: March 23, 2021
    Assignee: McAfee, LLC
    Inventors: Celeste R. Fralick, Jonathan King, Carl D. Woodward, Andrew V. Holtzmann, Kunal Mehta, Sherin M. Mathews
  • Patent number: 10699358
    Abstract: A hidden information detector for image files extracts N least significant bits from each of a first set of pixels of an image file, wherein N is an integer greater than or equal to 1. The detector then applies a mask to each of the extracted N least significant bits to form a second set of pixel values and determines a first probability as to whether the second set of pixels encodes a hidden image. Responsive to the first probability exceeding a first threshold, the detector determines a second probability as to whether the second set of pixels matches an image encoded in the first set of pixels. Responsive to a determination that the second probability is less than a second threshold, the detector performs a non-image classifier on the second set of pixels.
    Type: Grant
    Filed: February 22, 2018
    Date of Patent: June 30, 2020
    Assignee: McAfee, LLC
    Inventors: German Lancioni, Sherin M. Mathews
  • Publication number: 20200099708
    Abstract: Methods, systems, and media for detecting anomalous network activity are provided. In some embodiments, a method for detecting anomalous network activity is provided, the method comprising: receiving information indicating network activity, wherein the information includes IP addresses corresponding to devices participating in the network activity; generating a graph representing the network activity, wherein each node of the graph indicates an IP address of a device; generating a representation of the graph, wherein the representation of the graph reduces a dimensionality of information indicated in the graph; identifying a plurality of clusters of network activity based on the representation of the graph; determining that at least one cluster corresponds to anomalous network activity; and in response to determining that the at least one cluster corresponds to anomalous network activity, causing a network connection of at least one device included in the at least one cluster to be blocked.
    Type: Application
    Filed: September 21, 2018
    Publication date: March 26, 2020
    Inventors: Sherin M. Mathews, Vaisakh Shaj, Sriranga Seetharamaiah, Carl D. Woodward, Kantheti VVSMB Kumar
  • Publication number: 20190259126
    Abstract: A hidden information detector for image files extracts N least significant bits from each of a first set of pixels of an image file, wherein N is an integer greater than or equal to 1. The detector then applies a mask to each of the extracted N least significant bits to form a second set of pixel values and determines a first probability as to whether the second set of pixels encodes a hidden image. Responsive to the first probability exceeding a first threshold, the detector determines a second probability as to whether the second set of pixels matches an image encoded in the first set of pixels. Responsive to a determination that the second probability is less than a second threshold, the detector performs a non-image classifier on the second set of pixels.
    Type: Application
    Filed: February 22, 2018
    Publication date: August 22, 2019
    Inventors: German Lancioni, Sherin M. Mathews
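
To make the techniques above more concrete, the sketches below illustrate the core idea of each distinct abstract. They are minimal stand-ins written for this listing, not implementations from the patents themselves; all function names, weights, and thresholds are illustrative assumptions.

Patent 10699358 describes extracting the N least significant bits from an image's pixels and masking them to reveal a possible hidden image. A minimal sketch of that extraction step on 8-bit grayscale pixels:

```python
def extract_lsb_image(pixels, n=1):
    """Keep only the n least significant bits of each pixel and rescale
    them to the full 0-255 range so any hidden pattern becomes visible.
    (n corresponds to the abstract's N >= 1.)"""
    mask = (1 << n) - 1                  # e.g. n=2 -> 0b11
    scale = 255 // mask                  # stretch the small range back to 8 bits
    return [(p & mask) * scale for p in pixels]

# A cover image whose low bits carry an alternating 0/1 hidden pattern:
cover = [0b10110100, 0b01101001, 0b11100000, 0b00011111]
print(extract_lsb_image(cover, n=1))     # -> [0, 255, 0, 255]
```

The abstract's subsequent steps (estimating the probability that the extracted plane encodes an image, then falling back to a non-image classifier) would operate on the list this function returns.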
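Patent 11005868 describes building a graph of network activity keyed by IP address, reducing its dimensionality, clustering the result, and blocking devices in anomalous clusters. A toy sketch, using node degree as a stand-in for the patent's lower-dimensional graph representation and a simple 1-D gap clustering in place of a real clustering algorithm:

```python
from collections import defaultdict

def cluster_by_degree(events, gap=5):
    """events: list of (src_ip, dst_ip) pairs. Each node is embedded as its
    degree, and IPs with similar degrees are grouped into clusters."""
    degree = defaultdict(int)
    for src, dst in events:
        degree[src] += 1
        degree[dst] += 1
    clusters = []
    for ip in sorted(degree, key=degree.get):
        # start a new cluster whenever the degree jumps by more than `gap`
        if clusters and degree[ip] - degree[clusters[-1][-1]] <= gap:
            clusters[-1].append(ip)
        else:
            clusters.append([ip])
    return clusters

def anomalous_ips(events, gap=5):
    """Treat the highest-degree cluster as anomalous (e.g. a scanning host)
    and return the IPs whose connections would be blocked."""
    clusters = cluster_by_degree(events, gap)
    return set(clusters[-1]) if len(clusters) > 1 else set()
```

With one host contacting many peers while everyone else exchanges a handful of connections, the scanning host separates into its own cluster and is flagged.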
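Patent 11790237 describes comparing a locally generated explanation of a model's classification (which feature drove the decision) against a reference explanation from the server that trained the model, and flagging drift or attack when they diverge. A minimal sketch, with the attribution vectors and threshold as assumed inputs:

```python
def flag_model(client_attr, server_attr, threshold=0.25):
    """Compare the locally computed feature-attribution vector with the
    reference attributions from the training server. A large divergence
    suggests model drift or an adversarial attack."""
    diff = sum(abs(c - s) for c, s in zip(client_attr, server_attr))
    return diff > threshold

reference = [0.6, 0.3, 0.1]            # server-side attributions per feature
flag_model([0.55, 0.33, 0.12], reference)   # small drift: not flagged
flag_model([0.10, 0.20, 0.70], reference)   # attribution inverted: flagged
```

The L1 distance and the 0.25 threshold are placeholders; the patent only requires that the two explanations be compared against some threshold.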
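Patent 11727721 describes fusing three probabilities (from a local binary model manager, a filter model manager, and an image quality assessor) into a single score indicating whether input media is authentic. A hypothetical weighted combination; the weights and decision threshold are illustrative assumptions, not from the patent:

```python
def authenticity_score(p_lbp, p_filter, p_quality, weights=(0.4, 0.4, 0.2)):
    """Combine the three detector probabilities into one score in [0, 1]."""
    return sum(w * p for w, p in zip(weights, (p_lbp, p_filter, p_quality)))

def is_authentic(p_lbp, p_filter, p_quality, threshold=0.5):
    """Declare the media authentic when the fused score clears the threshold."""
    return authenticity_score(p_lbp, p_filter, p_quality) >= threshold
```

For example, three confident detectors (0.9, 0.8, 0.7) yield an "authentic" verdict, while three low probabilities (0.1, 0.2, 0.3) do not.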
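Patent 11847215 describes computing feature values from file-system event stream data and feeding them to a data analytics model that predicts a label agnostic to the writing process. A sketch of one plausible feature (Shannon entropy of written bytes, since ransomware-encrypted output approaches 8 bits/byte) and a rule-based stand-in for the model:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Entropy in bits/byte of the written data. Encrypted or compressed
    output scores near 8.0; ordinary documents score far lower."""
    if not data:
        return 0.0
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(data).values())

def predict_malicious(write_entropy, extension_changed, threshold=7.0):
    """Stand-in for the patent's data analytics model: label the event
    stream malicious when high-entropy writes coincide with a file
    extension change, regardless of which process performed them."""
    return write_entropy > threshold and extension_changed
```

A real deployment would combine many such feature values across the patent's "plural different feature combinations" and learn the decision boundary rather than hard-coding it.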