Patents by Inventor Ian M. Molloy
Ian M. Molloy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11853436
Abstract: Mechanisms are provided for obfuscating training of trained cognitive model logic. The mechanisms receive input data for classification into one or more classes in a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, to thereby generate a modified output vector. The modified output vector is then output. The perturbation modifies the one or more values to obfuscate the trained configuration of the trained cognitive model logic while maintaining accuracy of classification of the input data.
Type: Grant
Filed: April 15, 2021
Date of Patent: December 26, 2023
Assignee: International Business Machines Corporation
Inventors: Taesung Lee, Ian M. Molloy, Dong Su
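The perturbation idea above can be sketched in a few lines: add noise to the class-score vector, then renormalize, while keeping the original top class on top so the reported classification (and hence accuracy) is unchanged. This is a toy illustration only; the function name, noise scale, and renormalization step are assumptions, not the patented mechanism.

```python
import random

def perturb_output_vector(scores, noise_scale=0.05, seed=0):
    """Obfuscate the exact class scores with random noise while
    guaranteeing the top class (the classification) is unchanged."""
    rng = random.Random(seed)
    top = max(range(len(scores)), key=scores.__getitem__)
    noisy = [max(1e-9, s + rng.uniform(-noise_scale, noise_scale))
             for s in scores]
    # Force the original winner to stay on top so accuracy is preserved.
    runner_up = max(v for i, v in enumerate(noisy) if i != top)
    noisy[top] = max(noisy[top], runner_up + 1e-6)
    total = sum(noisy)
    return [v / total for v in noisy]

scores = [0.70, 0.20, 0.10]
modified = perturb_output_vector(scores)
assert max(range(3), key=modified.__getitem__) == 0  # same predicted class
assert modified != scores                            # exact values obfuscated
```

An attacker probing the model's outputs to steal its trained configuration now sees perturbed scores, but a legitimate client still gets the correct label.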
-
Patent number: 11816575
Abstract: Deep learning training service framework mechanisms are provided. The mechanisms receive encrypted training datasets for training a deep learning model, execute a FrontNet subnet model of the deep learning model in a trusted execution environment, and execute a BackNet subnet model of the deep learning model external to the trusted execution environment. The mechanisms decrypt, within the trusted execution environment, the encrypted training datasets and train the FrontNet subnet model and BackNet subnet model of the deep learning model based on the decrypted training datasets. The FrontNet subnet model is trained within the trusted execution environment and provides intermediate representations to the BackNet subnet model which is trained external to the trusted execution environment using the intermediate representations. The mechanisms release a trained deep learning model comprising a trained FrontNet subnet model and a trained BackNet subnet model, to the one or more client computing devices.
Type: Grant
Filed: September 7, 2018
Date of Patent: November 14, 2023
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
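The FrontNet/BackNet partition can be illustrated with a toy two-stage forward pass: the front stage (conceptually inside the TEE) sees the raw input and emits only an intermediate representation, which is all the back stage (outside the TEE) ever receives. The linear layers and weights here are illustrative assumptions, not the patented framework.

```python
def frontnet(x, w_front):
    """FrontNet: runs inside the trusted execution environment on the
    decrypted input and emits only an intermediate representation."""
    return [xi * w for xi, w in zip(x, w_front)]

def backnet(h, w_back):
    """BackNet: runs outside the TEE; it sees only the intermediate
    representation, never the raw (sensitive) input."""
    return sum(hi * w for hi, w in zip(h, w_back))

# Toy forward pass through the partitioned model (weights illustrative).
x = [1.0, 2.0]              # sensitive input, visible only to FrontNet
h = frontnet(x, [0.5, 0.5]) # intermediate representation crosses the boundary
y = backnet(h, [1.0, 1.0])
assert h == [0.5, 1.0]
assert y == 1.5
```

The privacy argument rests on the boundary: only `h`, not `x`, leaves the enclave.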
-
Patent number: 11775637
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Grant
Filed: March 14, 2022
Date of Patent: October 3, 2023
Assignee: International Business Machines Corporation
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
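The predict-then-compare loop above can be sketched with a toy bigram predictor standing in for the trained RNN (an assumption made purely to keep the example self-contained): predict the next call from each observed call, and raise an alert wherever the observation diverges from the prediction.

```python
from collections import Counter, defaultdict

def train_next_call_model(sequences):
    """Train a toy bigram predictor of the next system call
    (a stand-in for the patent's trained RNN)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {cur: c.most_common(1)[0][0] for cur, c in counts.items()}

def detect_abnormal(model, observed):
    """Compare each observed call against the predicted next call and
    return the indices where they differ (the alert positions)."""
    alerts = []
    for i, (cur, nxt) in enumerate(zip(observed, observed[1:])):
        predicted = model.get(cur)
        if predicted is not None and predicted != nxt:
            alerts.append(i + 1)
    return alerts

normal = [["open", "read", "close"]] * 10
model = train_next_call_model(normal)
assert detect_abnormal(model, ["open", "read", "close"]) == []
assert detect_abnormal(model, ["open", "write", "close"]) == [1]
```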
-
Patent number: 11443182
Abstract: Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
Type: Grant
Filed: June 25, 2018
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
-
Patent number: 11443178
Abstract: Mechanisms are provided to implement a hardened neural network framework. A data processing system is configured to implement a hardened neural network engine that operates on a neural network to harden the neural network against evasion attacks and generates a hardened neural network. The hardened neural network engine generates a reference training data set based on an original training data set. The neural network processes the original training data set and the reference training data set to generate first and second output data sets. The hardened neural network engine calculates a modified loss function of the neural network, where the modified loss function is a combination of an original loss function associated with the neural network and a function of the first and second output data sets. The hardened neural network engine trains the neural network based on the modified loss function to generate the hardened neural network.
Type: Grant
Filed: December 15, 2017
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventors: Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Dong Su
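The modified loss described above can be sketched as the original loss plus a penalty on the distance between the two output sets. The squared-distance penalty and the mixing weight `alpha` are illustrative assumptions; the patent only specifies that the modified loss combines the original loss with some function of the two outputs.

```python
def modified_loss(original_loss, out_orig, out_ref, alpha=0.5):
    """Combine the original loss with a penalty on the squared distance
    between outputs on the original and reference training sets.
    `alpha` is an illustrative mixing weight, not from the patent."""
    distance = sum((a - b) ** 2 for a, b in zip(out_orig, out_ref))
    return original_loss + alpha * distance

# Identical outputs on both sets: no penalty, loss unchanged.
assert modified_loss(1.0, [0.2, 0.8], [0.2, 0.8]) == 1.0
# Divergent outputs: the penalty pushes training toward consistency.
assert modified_loss(1.0, [0.0, 1.0], [1.0, 0.0]) == 2.0
```

Training against such a loss rewards the network for responding consistently to the original and reference data, which is one way to blunt evasion-style perturbations.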
-
Publication number: 20220269942
Abstract: Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
Type: Application
Filed: May 13, 2022
Publication date: August 25, 2022
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
-
Patent number: 11416771
Abstract: Mechanisms are provided for identifying risky user entitlements in an identity and access management (IAM) computing system. A self-learning peer group analysis (SLPGA) engine receives an IAM data set which specifies user attributes of users of computing resources and entitlements allocated to the users for accessing the computing resources. The SLPGA engine generates a user-entitlement matrix, performs a machine learning matrix decomposition operation on the user-entitlement matrix to identify excessive entitlement allocations, and performs a conditional entropy analysis of the user attributes and entitlements in the IAM data set to identify a set of user attributes for defining peer groups. The SLPGA engine performs a commonality analysis of user attributes and entitlements for each of one or more peer groups defined based on the set of user attributes, and identifies outlier entitlements based on the identification of the excessive entitlement allocations and results of the commonality analysis.
Type: Grant
Filed: November 11, 2019
Date of Patent: August 16, 2022
Assignee: International Business Machines Corporation
Inventors: Priti P. Patil, Kushaal Veijay, Ian M. Molloy
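The commonality-analysis step can be sketched on its own: within a peer group, an entitlement held by only a small fraction of peers is a candidate outlier. The 50% threshold and the data layout are assumptions for illustration; the real SLPGA engine combines this signal with matrix decomposition and conditional entropy, which are omitted here.

```python
def outlier_entitlements(peer_group, threshold=0.5):
    """Flag entitlements held by less than `threshold` of a peer group,
    mapping each outlier entitlement to the users who hold it.
    A toy version of the patent's commonality analysis."""
    all_ents = {e for ents in peer_group.values() for e in ents}
    outliers = {}
    for ent in sorted(all_ents):
        share = sum(ent in ents for ents in peer_group.values()) / len(peer_group)
        if share < threshold:
            outliers[ent] = [u for u, ents in peer_group.items() if ent in ents]
    return outliers

# Peer group of three engineers; one holds an entitlement the peers lack.
group = {
    "alice": {"git", "jira"},
    "bob":   {"git", "jira"},
    "carol": {"git", "jira", "prod-db-admin"},
}
assert outlier_entitlements(group) == {"prod-db-admin": ["carol"]}
```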
-
Publication number: 20220207137
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Application
Filed: March 14, 2022
Publication date: June 30, 2022
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
-
Patent number: 11372997
Abstract: Automatically generating audit logs is provided. Audit log statement insertion points are identified in components of an application based on a static code analysis identifying start and end operations on sensitive data in the components of the application. The application is instrumented with audit log statements at the audit log statement insertion points in the components of the application. Audit logs of monitored sensitive data activity events in the application are generated using the audit log statements at the audit log statement insertion points in the components of the application.
Type: Grant
Filed: March 10, 2020
Date of Patent: June 28, 2022
Assignee: International Business Machines Corporation
Inventors: Suresh N. Chari, Ted A. Habeck, Ashish Kundu, Ian M. Molloy
-
Patent number: 11301563
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Grant
Filed: March 13, 2019
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
-
Publication number: 20210303703
Abstract: Mechanisms are provided for obfuscating training of trained cognitive model logic. The mechanisms receive input data for classification into one or more classes in a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, to thereby generate a modified output vector. The modified output vector is then output. The perturbation modifies the one or more values to obfuscate the trained configuration of the trained cognitive model logic while maintaining accuracy of classification of the input data.
Type: Application
Filed: April 15, 2021
Publication date: September 30, 2021
Inventors: Taesung Lee, Ian M. Molloy, Dong Su
-
Patent number: 11132444
Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether the machine learning model has a backdoor trigger. The mechanisms process a test dataset to generate output classifications for the test dataset, and generate, for the test dataset, gradient data indicating a degree of change of elements within the test dataset based on the output generated by processing the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements within the test dataset indicative of a backdoor trigger. The mechanisms generate, in response to the analysis identifying the pattern of elements indicative of a backdoor trigger, an output indicating the existence of the backdoor trigger in the trained machine learning model.
Type: Grant
Filed: April 16, 2018
Date of Patent: September 28, 2021
Assignee: International Business Machines Corporation
Inventors: Wilka Carvalho, Bryant Chen, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Jialong Zhang
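A crude stand-in for the gradient-pattern analysis: given the gradient of the model output with respect to each input element, flag elements whose gradient magnitude is an extreme outlier, since a localized backdoor trigger tends to dominate the gradient. The "3x the median" rule and the toy gradient values are assumptions for illustration, not the patented analysis.

```python
import statistics

def find_trigger_candidates(gradients):
    """Flag input positions whose gradient magnitude exceeds 3x the
    median magnitude, i.e. positions the model is suspiciously
    sensitive to (a possible backdoor trigger pattern)."""
    mags = [abs(g) for g in gradients]
    med = statistics.median(mags)
    return [i for i, m in enumerate(mags) if m > 3 * med]

# Toy gradient of the model output w.r.t. a 6-element input: positions
# 4 and 5 dominate, suggesting a localized trigger.
grads = [0.1, -0.2, 0.15, 0.1, 5.0, -4.0]
assert find_trigger_candidates(grads) == [4, 5]
```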
-
Patent number: 11023593
Abstract: Mechanisms are provided for obfuscating training of trained cognitive model logic. The mechanisms receive input data for classification into one or more classes in a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, to thereby generate a modified output vector. The modified output vector is then output. The perturbation modifies the one or more values to obfuscate the trained configuration of the trained cognitive model logic while maintaining accuracy of classification of the input data.
Type: Grant
Filed: September 25, 2017
Date of Patent: June 1, 2021
Assignee: International Business Machines Corporation
Inventors: Taesung Lee, Ian M. Molloy, Dong Su
-
Publication number: 20210142209
Abstract: Mechanisms are provided for identifying risky user entitlements in an identity and access management (IAM) computing system. A self-learning peer group analysis (SLPGA) engine receives an IAM data set which specifies user attributes of users of computing resources and entitlements allocated to the users for accessing the computing resources. The SLPGA engine generates a user-entitlement matrix, performs a machine learning matrix decomposition operation on the user-entitlement matrix to identify excessive entitlement allocations, and performs a conditional entropy analysis of the user attributes and entitlements in the IAM data set to identify a set of user attributes for defining peer groups. The SLPGA engine performs a commonality analysis of user attributes and entitlements for each of one or more peer groups defined based on the set of user attributes, and identifies outlier entitlements based on the identification of the excessive entitlement allocations and results of the commonality analysis.
Type: Application
Filed: November 11, 2019
Publication date: May 13, 2021
Inventors: Priti P. Patil, Kushaal Veijay, Ian M. Molloy
-
Patent number: 10891371
Abstract: Detecting malicious user activity is provided. A profile for a user that accesses a set of protected assets is generated based on static information representing an organizational view and associated attributes corresponding to the user and based on dynamic information representing observable actions made by the user. A plurality of analytics is applied on the profile corresponding to the user to generate an aggregate risk score for the user accessing the set of protected assets based on applying the plurality of analytics on the profile of the user. A malicious user activity alert is generated in response to the aggregate risk score for the user accessing the set of protected assets being greater than an alert threshold value. The malicious user activity alert is sent to an analyst for feedback.
Type: Grant
Filed: October 10, 2019
Date of Patent: January 12, 2021
Assignee: International Business Machines Corporation
Inventors: Suresh N. Chari, Ted A. Habeck, Ian M. Molloy, Youngja Park, Josyula R. Rao, Wilfried Teiken
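The score-and-alert flow above reduces to a simple pattern: combine per-analytic risk scores into one aggregate and compare it to a threshold. The weighted average and the 0.7 threshold are illustrative assumptions; the patent does not specify the aggregation function.

```python
def aggregate_risk_score(analytic_scores, weights=None):
    """Combine per-analytic risk scores into one aggregate score via a
    weighted average (weights are illustrative, not the patent's)."""
    if weights is None:
        weights = [1.0] * len(analytic_scores)
    return sum(s * w for s, w in zip(analytic_scores, weights)) / sum(weights)

def maybe_alert(profile_scores, alert_threshold=0.7):
    """Emit a malicious-user-activity alert when the aggregate risk
    score exceeds the alert threshold."""
    score = aggregate_risk_score(profile_scores)
    return ("ALERT", score) if score > alert_threshold else ("OK", score)

assert maybe_alert([0.9, 0.8, 0.7])[0] == "ALERT"
assert maybe_alert([0.2, 0.3, 0.1])[0] == "OK"
```

Analyst feedback on each alert could then be folded back into the weights, which is one plausible reading of the feedback loop the abstract mentions.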
-
Patent number: 10805308
Abstract: Jointly discovering user roles and data clusters using both access and side information by performing the following operations: (i) representing a set of users as respective vectors in a user feature space; (ii) representing data as respective vectors in a data feature space; (iii) providing a user-data access matrix, in which each row represents a user's access over the data; and (iv) co-clustering the users and data using the user-data matrix to produce a set of co-clusters.
Type: Grant
Filed: December 22, 2017
Date of Patent: October 13, 2020
Assignee: International Business Machines Corporation
Inventors: Youngja Park, Taesung Lee, Ian M. Molloy, Suresh Chari, Benjamin J. Edwards
-
Publication number: 20200293653
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Application
Filed: March 13, 2019
Publication date: September 17, 2020
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
-
Publication number: 20200210609
Abstract: Automatically generating audit logs is provided. Audit log statement insertion points are identified in components of an application based on a static code analysis identifying start and end operations on sensitive data in the components of the application. The application is instrumented with audit log statements at the audit log statement insertion points in the components of the application. Audit logs of monitored sensitive data activity events in the application are generated using the audit log statements at the audit log statement insertion points in the components of the application.
Type: Application
Filed: March 10, 2020
Publication date: July 2, 2020
Inventors: Suresh N. Chari, Ted A. Habeck, Ashish Kundu, Ian M. Molloy
-
Patent number: 10657259
Abstract: Mechanisms are provided for providing a hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to introduce noise in internal feature representations of the hardened neural network. The noise introduced in the internal feature representations diverts gradient computations associated with a loss surface of the hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to implement a merge layer of nodes that combine outputs of adversarially trained output nodes of the hardened neural network with output nodes of the hardened neural network trained based on the introduced noise. The mechanisms process, by the hardened neural network, input data to generate classification labels for the input data and thereby generate augmented input data which is output to a computing system for processing to perform a computing operation.
Type: Grant
Filed: November 1, 2017
Date of Patent: May 19, 2020
Assignee: International Business Machines Corporation
Inventors: Taesung Lee, Ian M. Molloy, Farhan Tejani
-
Patent number: 10628600
Abstract: Automatically generating audit logs is provided. Audit log statement insertion points are identified in software components of an application based on a static code analysis identifying start and end operations on sensitive data in the software components of the application. The application is instrumented with audit log statements at the audit log statement insertion points in the software components of the application. Audit logs of monitored sensitive data activity events in the application are generated using the audit log statements at the audit log statement insertion points in the software components of the application. A dynamic code analysis is performed on the application during execution of the application to prevent executing source code of the application from recording in the audit logs the sensitive data processed by the application.
Type: Grant
Filed: March 6, 2018
Date of Patent: April 21, 2020
Assignee: International Business Machines Corporation
Inventors: Suresh N. Chari, Ted A. Habeck, Ashish Kundu, Ian M. Molloy
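The instrumentation idea can be sketched with a decorator that wraps each operation on sensitive data in start/end audit-log statements. Recording only metadata (operation name and record id), never the sensitive value itself, echoes the safeguard the abstract describes; the decorator, the in-memory log, and the example operation are all assumptions for illustration.

```python
AUDIT_LOG = []

def audited(operation):
    """Sketch of an inserted audit-log statement pair: record start and
    end events (metadata only, never the sensitive value) around each
    operation on sensitive data."""
    def wrapper(record_id, value):
        AUDIT_LOG.append(("start", operation.__name__, record_id))
        result = operation(record_id, value)
        AUDIT_LOG.append(("end", operation.__name__, record_id))
        return result
    return wrapper

@audited
def update_ssn(record_id, value):
    # ... persist the sensitive value to protected storage ...
    return True

update_ssn(42, "123-45-6789")
assert AUDIT_LOG == [("start", "update_ssn", 42), ("end", "update_ssn", 42)]
assert all("123-45-6789" not in str(e) for e in AUDIT_LOG)  # value never logged
```

In the patent, the insertion points come from static analysis of the code rather than a hand-placed decorator, and dynamic analysis enforces at runtime that the sensitive values stay out of the logs.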