Patents by Inventor Ian Michael Molloy
Ian Michael Molloy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12050993
Abstract: Mechanisms are provided for obfuscating a trained configuration of a trained machine learning model. A trained machine learning model processes input data to generate an initial output vector having classification values for each of a plurality of predefined classes. A perturbation insertion engine determines a subset of classification values in the initial output vector into which to insert perturbations. The perturbation insertion engine modifies classification values in the subset of classification values by inserting a perturbation in a function associated with generating the output vector for the classification values in the subset of classification values, to thereby generate a modified output vector. The trained machine learning model outputs the modified output vector. The perturbation modifies the subset of classification values to obfuscate the trained configuration of the trained machine learning model while maintaining accuracy of classification of the input data.
Type: Grant
Filed: December 8, 2020
Date of Patent: July 30, 2024
Assignee: International Business Machines Corporation
Inventors: Taesung Lee, Ian Michael Molloy
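The idea of perturbing a subset of output scores without changing the predicted class can be illustrated with a minimal sketch. This is not the patented mechanism; the function name, the choice of the k lowest-scoring classes as the perturbed subset, and the noise scale are all hypothetical choices for illustration.

```python
import random

def perturb_output(scores, k=3, scale=0.05, seed=0):
    """Illustrative sketch: add small noise to the k lowest classification
    scores while guaranteeing the original top class keeps the argmax."""
    rng = random.Random(seed)
    out = list(scores)
    top = max(range(len(out)), key=out.__getitem__)
    # pick the k lowest-scoring classes as the subset to perturb
    subset = sorted(range(len(out)), key=out.__getitem__)[:k]
    for i in subset:
        out[i] += rng.uniform(-scale, scale)
        # cap perturbed values so they never overtake the top class
        out[i] = min(out[i], out[top] - 1e-6)
    return out
```

An attacker probing the model sees slightly different score vectors than the true ones, which frustrates model-extraction attempts, while the top-1 classification is unchanged.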
-
Patent number: 12045713
Abstract: A method, apparatus and computer program product to protect a deep neural network (DNN) having a plurality of layers including one or more intermediate layers. In this approach, a training data set is received. During training of the DNN using the received training data set, a representation of activations associated with an intermediate layer is recorded. For at least one or more of the representations, a separate classifier (model) is trained. The classifiers, collectively, are used to train an outlier detection model. Following training, the outlier detection model is used to detect an adversarial input on the deep neural network. The outlier detection model generates a prediction, and an indicator whether a given input is the adversarial input. According to a further aspect, an action is taken to protect a deployed system associated with the DNN in response to detection of the adversarial input.
Type: Grant
Filed: November 17, 2020
Date of Patent: July 23, 2024
Assignee: International Business Machines Corporation
Inventors: Jialong Zhang, Zhongshu Gu, Jiyong Jang, Marc Philippe Stoecklin, Ian Michael Molloy
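The core intuition, that intermediate activations of adversarial inputs look statistically unlike those of normal inputs, can be sketched with a simple per-feature z-score detector. This stands in for the patent's trained per-layer classifiers and outlier model; the statistics used and the threshold are illustrative assumptions.

```python
from statistics import mean, pstdev

def fit_activation_stats(activations):
    """Per-feature mean/stdev fitted on 'normal' intermediate activations
    (each row is one recorded activation vector)."""
    cols = list(zip(*activations))
    return [(mean(c), pstdev(c) or 1.0) for c in cols]

def outlier_score(stats, activation):
    """Mean absolute z-score; large values suggest the activation pattern
    does not match anything seen during training."""
    return mean(abs(x - m) / s for (m, s), x in zip(stats, activation))

def is_adversarial(stats, activation, threshold=3.0):
    return outlier_score(stats, activation) > threshold
```

A real deployment would fit one such detector (or classifier) per monitored layer and combine their outputs, as the abstract describes.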
-
Patent number: 12019720
Abstract: A behavioral biometrics deep learning (BBDL) pipeline is provided, comprising a plurality of stages of machine learning computer models that operate to provide a behavioral biometric based authenticator operating based on spatiotemporal input data. The BBDL pipeline receives spatiotemporal input data over a plurality of time intervals, each time interval having a corresponding subset of the spatiotemporal input data. For each time interval, machine learning computer model(s) of a corresponding stage process a subset of the spatiotemporal input data corresponding to the time interval to generate an output vector having values indicative of an internal representation of spatiotemporal traits of the entity. Output vectors are accumulated across the plurality of stages of the BBDL pipeline to generate a final output vector indicative of the spatiotemporal traits of the entity represented in the spatiotemporal input data. The entity is authenticated based on the final output vector.
Type: Grant
Filed: December 16, 2020
Date of Patent: June 25, 2024
Assignee: International Business Machines Corporation
Inventors: Taesung Lee, Ian Michael Molloy, Youngja Park
-
Patent number: 12019747
Abstract: One or more computer processors determine a tolerance value, and a norm value associated with an untrusted model and an adversarial training method. The one or more computer processors generate a plurality of interpolated adversarial images ranging between a pair of images utilizing the adversarial training method, wherein each image in the pair of images is from a different class. The one or more computer processors detect a backdoor associated with the untrusted model utilizing the generated plurality of interpolated adversarial images. The one or more computer processors harden the untrusted model by training the untrusted model with the generated plurality of interpolated adversarial images.
Type: Grant
Filed: October 13, 2020
Date of Patent: June 25, 2024
Assignee: International Business Machines Corporation
Inventors: Heiko H. Ludwig, Ebube Chuba, Bryant Chen, Benjamin James Edwards, Taesung Lee, Ian Michael Molloy
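The scaffold of "images ranging between a pair of images from different classes" can be sketched as plain linear interpolation; the actual patent applies an adversarial training method within tolerance and norm bounds on top of this, which is omitted here. Function name and step count are illustrative.

```python
def interpolate_pair(img_a, img_b, steps=5):
    """Generate evenly spaced linear interpolations between two flattened
    images of different classes (endpoints excluded)."""
    out = []
    for s in range(1, steps + 1):
        t = s / (steps + 1)
        out.append([(1 - t) * a + t * b for a, b in zip(img_a, img_b)])
    return out
```

Probing a model along such interpolation paths is one way to surface backdoor behavior: a backdoored model's predictions along the path can flip abruptly in ways a clean model's do not.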
-
Patent number: 11977625
Abstract: A method, apparatus and computer program product to defend learning models that are vulnerable to adversarial example attack. It is assumed that data (a “dataset”) is available in multiple modalities (e.g., text and images, audio and images in video, etc.). The defense approach herein is premised on the recognition that the correlations between the different modalities for the same entity can be exploited to defend against such attacks, as it is not realistic for an adversary to attack multiple modalities. To this end, according to this technique, adversarial samples are identified and rejected if the features from one (the attacked) modality are determined to be sufficiently far away from those of another un-attacked modality for the same entity. In other words, the approach herein leverages the consistency between multiple modalities in the data to defend against adversarial attacks on one modality.
Type: Grant
Filed: May 12, 2023
Date of Patent: May 7, 2024
Assignee: International Business Machines Corporation
Inventors: Ian Michael Molloy, Youngja Park, Taesung Lee, Wenjie Wang
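The rejection rule, flag a sample when features from one modality sit too far from those of another modality for the same entity, can be sketched with cosine similarity between two embedding vectors. The similarity metric, the threshold, and the assumption that both modalities map into a shared embedding space are illustrative choices, not details from the patent.

```python
def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def reject_if_inconsistent(text_emb, image_emb, min_similarity=0.5):
    """Reject a sample whose two modality embeddings disagree, on the
    premise that an adversary realistically perturbs only one modality."""
    return cosine(text_emb, image_emb) < min_similarity
```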
-
Patent number: 11886989
Abstract: Using a deep learning inference system, respective similarities are measured for each of a set of intermediate representations to input information used as an input to the deep learning inference system. The deep learning inference system includes multiple layers, each layer producing one or more associated intermediate representations. Selection is made of a subset of the set of intermediate representations that are most similar to the input information. Using the selected subset of intermediate representations, a partitioning point is determined in the multiple layers used to partition the multiple layers into two partitions defined so that information leakage for the two partitions will meet a privacy parameter when a first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
Type: Grant
Filed: September 10, 2018
Date of Patent: January 30, 2024
Assignee: International Business Machines Corporation
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian Michael Molloy
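A much-simplified sketch of the partitioning decision: given a per-layer similarity score between each intermediate representation and the raw input, cut the network at the first layer whose representation is dissimilar enough to satisfy a privacy parameter, so everything up to that point runs in the protected partition. How the similarities are measured and how the budget is set are assumptions for illustration only.

```python
def choose_partition_point(layer_similarities, privacy_budget=0.3):
    """Return the index of the first layer whose intermediate
    representation is sufficiently dissimilar to the input
    (similarity <= privacy_budget); fall back to the last layer."""
    for i, sim in enumerate(layer_similarities):
        if sim <= privacy_budget:
            return i
    return len(layer_similarities) - 1
```

Layers before the cut would run inside a trusted environment, since their outputs still leak information about the input; layers after it can run on untrusted hardware.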
-
Publication number: 20230418859
Abstract: A method, computer system, and a computer program product for data processing, comprising obtaining a plurality of files from a data source. These files are analyzed for information about their content and to determine structural information of each file. Once the files have been analyzed, information in each file may be sorted and categorized by common content. Sensitive information may also be extracted and categorized separately. Information may then be merged using the categories to create a single unified file.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Youngja Park, Mohammed Fahd Alhamid, Stefano Braghin, Jing Xin Duan, Mokhtar Kandil, Michael Vu Le, Killian Levacher, Micha Gideon Moffie, Ian Michael Molloy, Walid Rjaibi, Ariel Farkash
-
Patent number: 11847555
Abstract: A neural network is augmented to enhance robustness against adversarial attack. In this approach, a fully-connected additional layer is associated with a last layer of the neural network. The additional layer has a lower dimensionality than at least one or more intermediate layers. After sizing the additional layer appropriately, a vector bit encoding is applied. The encoding comprises an encoding vector for each output class. Preferably, the encoding is an n-hot encoding, wherein n represents a hyperparameter. The resulting neural network is then trained to encourage the network to associate features with each of the hot positions. In this manner, the network learns a reduced feature set representing those features that contain a high amount of information with respect to each output class, and/or learns constraints between those features and the output classes. The trained neural network is used to perform a classification that is robust against adversarial examples.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 19, 2023
Assignee: International Business Machines Corporation
Inventors: Kevin Eykholt, Taesung Lee, Ian Michael Molloy, Jiyong Jang
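Constructing the n-hot code vectors themselves is straightforward and can be sketched as below; how the codes are assigned to classes and how the loss encourages features to align with hot positions are parts of the patented training procedure not reproduced here.

```python
from itertools import combinations

def n_hot_codes(num_classes, length, n):
    """Assign each output class a distinct n-hot code vector: a binary
    vector of the given length with exactly n bits set. Requires
    C(length, n) >= num_classes or the combination stream runs out."""
    combos = combinations(range(length), n)
    codes = []
    for _ in range(num_classes):
        hot = next(combos)
        vec = [0] * length
        for i in hot:
            vec[i] = 1
        codes.append(vec)
    return codes
```

Compared with one-hot targets, an n-hot target forces the final layer to get n positions right simultaneously, which is the source of the claimed robustness: an adversarial perturbation must consistently flip several learned features at once.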
-
Patent number: 11783025
Abstract: Mechanisms are provided to implement a hardened ensemble artificial intelligence (AI) model generator. The hardened ensemble AI model generator co-trains at least two AI models. The hardened ensemble AI model generator modifies, based on a comparison of the at least two AI models, a loss surface of one or more of the at least two AI models to prevent an adversarial attack on one of the AI models from transferring to another of the AI models, to thereby generate one or more modified AI models. At least one of the one or more modified AI models then processes an input to generate an output result.
Type: Grant
Filed: March 12, 2020
Date of Patent: October 10, 2023
Assignee: International Business Machines Corporation
Inventors: Ian Michael Molloy, Taesung Lee, Benjamin James Edwards
-
Publication number: 20230315847
Abstract: An approach for detection of malware is disclosed. The approach uses IR-level analysis and embedding of a canonical representation on a suspect sample of software code, and can be applied to both malicious and benign software. Specifically, the approach includes converting binary code to an IR (intermediate representation), canonicalizing the IR into a canonical IR, extracting one or more similarity representations based on features extracted from the canonical IR, and comparing the one or more similarity representations to known malware.
Type: Application
Filed: March 30, 2022
Publication date: October 5, 2023
Inventors: Dhilung Kirat, Jiyong Jang, Ian Michael Molloy, Josyula R. Rao
-
Publication number: 20230281298
Abstract: A method, apparatus and computer program product to defend learning models that are vulnerable to adversarial example attack. It is assumed that data (a “dataset”) is available in multiple modalities (e.g., text and images, audio and images in video, etc.). The defense approach herein is premised on the recognition that the correlations between the different modalities for the same entity can be exploited to defend against such attacks, as it is not realistic for an adversary to attack multiple modalities. To this end, according to this technique, adversarial samples are identified and rejected if the features from one (the attacked) modality are determined to be sufficiently far away from those of another un-attacked modality for the same entity. In other words, the approach herein leverages the consistency between multiple modalities in the data to defend against adversarial attacks on one modality.
Type: Application
Filed: May 12, 2023
Publication date: September 7, 2023
Applicant: International Business Machines Corporation
Inventors: Ian Michael Molloy, Youngja Park, Taesung Lee, Wenjie Wang
-
Patent number: 11748480
Abstract: Anomalous control and data flow paths in a program are determined by machine learning the program's normal control flow paths and data flow paths. A subset of those paths also may be determined to involve sensitive data and/or computation. Learning involves collecting events as the program executes, and associating those events with metadata related to the flows. This information is used to train the system about normal paths versus anomalous paths, and sensitive paths versus non-sensitive paths. Training leads to development of a baseline “provenance” graph, which is evaluated to determine “sensitive” control or data flows in the “normal” operation. This process is enhanced by analyzing log data collected during runtime execution of the program against a policy to assign confidence values to the control and data flows. Using these confidence values, anomalous edges and/or paths with respect to the policy are identified to generate a “program execution” provenance graph associated with the policy.
Type: Grant
Filed: December 22, 2020
Date of Patent: September 5, 2023
Assignee: Arkose Labs Holdings, Inc.
Inventors: Suresh Chari, Ashish Kundu, Ian Michael Molloy, Dimitrios Pendarakis
-
Patent number: 11675896
Abstract: A method, apparatus and computer program product to defend learning models that are vulnerable to adversarial example attack. It is assumed that data (a “dataset”) is available in multiple modalities (e.g., text and images, audio and images in video, etc.). The defense approach herein is premised on the recognition that the correlations between the different modalities for the same entity can be exploited to defend against such attacks, as it is not realistic for an adversary to attack multiple modalities. To this end, according to this technique, adversarial samples are identified and rejected if the features from one (the attacked) modality are determined to be sufficiently far away from those of another un-attacked modality for the same entity. In other words, the approach herein leverages the consistency between multiple modalities in the data to defend against adversarial attacks on one modality.
Type: Grant
Filed: April 9, 2020
Date of Patent: June 13, 2023
Assignee: International Business Machines Corporation
Inventors: Ian Michael Molloy, Youngja Park, Taesung Lee, Wenjie Wang
-
Publication number: 20230169176
Abstract: A processor-implemented method generates adversarial example objects. One or more processors represent an adversarial input generation process as a graph. The processor(s) explore the graph, such that a sequence of edges on the graph are explored. The processor(s) create, based on the exploring, an adversarial example object, and utilize the created adversarial example object to harden an existing process model against vulnerabilities.
Type: Application
Filed: November 28, 2021
Publication date: June 1, 2023
Inventors: Taesung Lee, Kevin Eykholt, Douglas Lee Schales, Jiyong Jang, Ian Michael Molloy
-
Patent number: 11650801
Abstract: Multiple execution traces of an application are accessed. The multiple execution traces have been collected at a basic block level. Basic blocks in the multiple execution traces are scored. Scores for the basic blocks represent benefits of performing binary slimming at the corresponding basic blocks. Runtime binary slimming is performed of the application based on the scores of the basic blocks.
Type: Grant
Filed: November 10, 2021
Date of Patent: May 16, 2023
Assignee: International Business Machines Corporation
Inventors: Michael Vu Le, Ian Michael Molloy, Taemin Park
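One plausible scoring signal, shown here purely as an illustration and not as the patent's actual metric, is execution frequency across traces: basic blocks that almost never execute are the cheapest to remove when slimming the binary.

```python
def score_basic_blocks(traces):
    """Score each basic block by how rarely it appears across execution
    traces; 1.0 means the block executed in no trace seen here, 0.0 means
    it executed in every trace, so higher scores favor slimming."""
    all_blocks = {block for trace in traces for block in trace}
    scores = {}
    for block in all_blocks:
        hits = sum(1 for trace in traces if block in trace)
        scores[block] = 1.0 - hits / len(traces)
    return scores
```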
-
Patent number: 11522880
Abstract: A method, system, and computer-usable medium for analyzing security data formatted in STIX™ format. Data related to actions performed by one or more users is captured. Individual tasks, such as analytics or extract, transform, load (ETL) tasks related to the captured data, are created. Individual tasks are registered to a workflow for executing particular security threat or incident analysis. The workflow is executed and visualized to perform the security threat or incident analysis.
Type: Grant
Filed: July 9, 2020
Date of Patent: December 6, 2022
Assignee: International Business Machines Corporation
Inventors: Sulakshan Vajipayajula, Paul Coccoli, James Brent Peterson, Michael Vu Le, Ian Michael Molloy
-
Patent number: 11494496
Abstract: Mechanisms are provided to determine a susceptibility of a trained machine learning model to a cybersecurity threat. The mechanisms execute a trained machine learning model on a test dataset to generate test results output data, and determine an overfit measure of the trained machine learning model based on the generated test results output data. The overfit measure quantifies an amount of overfitting of the trained machine learning model to a specific sub-portion of the test dataset. The mechanisms apply analytics to the overfit measure to determine a susceptibility probability that indicates a likelihood that the trained machine learning model is susceptible to a cybersecurity threat based on the determined amount of overfitting of the trained machine learning model. The mechanisms perform a corrective action based on the determined susceptibility probability.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 8, 2022
Assignee: International Business Machines Corporation
Inventors: Kathrin Grosse, Taesung Lee, Youngja Park, Ian Michael Molloy
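A toy version of the overfit-to-susceptibility idea: use the train/test accuracy gap as a crude overfit measure and map it through a logistic curve to a probability-like score. Both the gap metric and the logistic mapping are illustrative stand-ins for the patent's analytics, and the scale constant is arbitrary.

```python
from math import exp

def overfit_measure(train_accuracy, test_accuracy):
    """Generalization gap as a crude overfit measure (0 if no gap)."""
    return max(0.0, train_accuracy - test_accuracy)

def susceptibility_probability(gap, scale=5.0):
    """Map the overfit gap to a score in [0, 1); larger gaps suggest more
    memorization and hence more exposure to attacks such as membership
    inference. Returns 0.0 for a zero gap."""
    return 2.0 / (1.0 + exp(-scale * gap)) - 1.0
```

A model flagged with a high score could then trigger the corrective action the abstract mentions, for example retraining with regularization or restricting query access.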
-
Patent number: 11455569
Abstract: Handshake protocol layer features are extracted from training data associated with encrypted network traffic of a plurality of classified devices. Record protocol layer features are extracted from the training data. One or more models are trained based on the extracted handshake protocol layer features and the extracted record protocol layer features. The one or more models are applied to an observed encrypted network traffic stream associated with a device to determine a predicted device classification of the device.
Type: Grant
Filed: January 9, 2019
Date of Patent: September 27, 2022
Assignee: International Business Machines Corporation
Inventors: Enriquillo Valdez, Pau-Chen Cheng, Ian Michael Molloy, Dimitrios Pendarakis
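The two feature families can be illustrated with a toy example: one handshake-layer feature (how many cipher suites a device offers) and one record-layer feature (mean record length), fed to a nearest-neighbour classifier standing in for the trained models. The session dictionary shape, the features, and the classifier are all assumptions for illustration.

```python
def extract_features(session):
    """Toy feature vector from a TLS session summary: a handshake-layer
    feature (count of offered cipher suites) and a record-layer feature
    (mean record length in bytes)."""
    records = session["record_lengths"]
    return [len(session["cipher_suites"]), sum(records) / len(records)]

def nearest_class(train, sample):
    """1-nearest-neighbour stand-in for the trained device classifier."""
    feats = extract_features(sample)
    def dist(row):
        return sum((a - b) ** 2 for a, b in zip(extract_features(row), feats))
    return min(train, key=dist)["label"]
```

The key point the abstract makes is that both protocol layers carry a device fingerprint even though the payload itself is encrypted.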
-
Publication number: 20220188390
Abstract: A behavioral biometrics deep learning (BBDL) pipeline is provided, comprising a plurality of stages of machine learning computer models that operate to provide a behavioral biometric based authenticator operating based on spatiotemporal input data. The BBDL pipeline receives spatiotemporal input data over a plurality of time intervals, each time interval having a corresponding subset of the spatiotemporal input data. For each time interval, machine learning computer model(s) of a corresponding stage process a subset of the spatiotemporal input data corresponding to the time interval to generate an output vector having values indicative of an internal representation of spatiotemporal traits of the entity. Output vectors are accumulated across the plurality of stages of the BBDL pipeline to generate a final output vector indicative of the spatiotemporal traits of the entity represented in the spatiotemporal input data. The entity is authenticated based on the final output vector.
Type: Application
Filed: December 16, 2020
Publication date: June 16, 2022
Inventors: Taesung Lee, Ian Michael Molloy, Youngja Park
-
Publication number: 20220180157
Abstract: A neural network is augmented to enhance robustness against adversarial attack. In this approach, a fully-connected additional layer is associated with a last layer of the neural network. The additional layer has a lower dimensionality than at least one or more intermediate layers. After sizing the additional layer appropriately, a vector bit encoding is applied. The encoding comprises an encoding vector for each output class. Preferably, the encoding is an n-hot encoding, wherein n represents a hyperparameter. The resulting neural network is then trained to encourage the network to associate features with each of the hot positions. In this manner, the network learns a reduced feature set representing those features that contain a high amount of information with respect to each output class, and/or learns constraints between those features and the output classes. The trained neural network is used to perform a classification that is robust against adversarial examples.
Type: Application
Filed: December 4, 2020
Publication date: June 9, 2022
Applicant: International Business Machines Corporation
Inventors: Kevin Eykholt, Taesung Lee, Ian Michael Molloy, Jiyong Jang