Patents by Inventor Ian M. Molloy

Ian M. Molloy has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10599837
    Abstract: Detecting malicious user activity is provided. A profile for a user that accesses a set of protected assets is generated based on static information representing an organizational view and associated attributes corresponding to the user and based on dynamic information representing observable actions made by the user. A plurality of analytics is applied to the profile corresponding to the user to generate an aggregate risk score for the user accessing the set of protected assets. A malicious user activity alert is generated in response to the aggregate risk score for the user accessing the set of protected assets being greater than an alert threshold value. The malicious user activity alert is sent to an analyst for feedback.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Suresh N. Chari, Ted A. Habeck, Ian M. Molloy, Youngja Park, Josyula R. Rao, Wilfried Teiken
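
A minimal Python sketch of the profile-plus-analytics scoring described in patent 10599837 above. The example analytic, the weighted-average aggregation, and the 0.7 threshold are illustrative assumptions; the abstract specifies none of them.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    department: str                               # static: organizational view/attributes
    role: str
    actions: list = field(default_factory=list)   # dynamic: observable actions

def off_hours(profile):
    """Hypothetical analytic: fraction of accesses made before 06:00."""
    if not profile.actions:
        return 0.0
    return sum(a.get("hour", 12) < 6 for a in profile.actions) / len(profile.actions)

def aggregate_risk(profile, analytics, weights):
    """Combine analytic scores into one aggregate risk score (weighted mean)."""
    scores = [analytic(profile) for analytic in analytics]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

ALERT_THRESHOLD = 0.7
profile = UserProfile("finance", "analyst", [{"hour": 3}, {"hour": 4}, {"hour": 14}])
risk = aggregate_risk(profile, [off_hours], [1.0])
if risk > ALERT_THRESHOLD:
    print(f"alert analyst: aggregate risk {risk:.2f}")   # alert sent for feedback
```
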
  • Publication number: 20200082270
    Abstract: Deep learning training service framework mechanisms are provided. The mechanisms receive encrypted training datasets for training a deep learning model, execute a FrontNet subnet model of the deep learning model in a trusted execution environment, and execute a BackNet subnet model of the deep learning model external to the trusted execution environment. The mechanisms decrypt, within the trusted execution environment, the encrypted training datasets and train the FrontNet subnet model and BackNet subnet model of the deep learning model based on the decrypted training datasets. The FrontNet subnet model is trained within the trusted execution environment and provides intermediate representations to the BackNet subnet model, which is trained external to the trusted execution environment using the intermediate representations. The mechanisms release a trained deep learning model, comprising a trained FrontNet subnet model and a trained BackNet subnet model, to one or more client computing devices.
    Type: Application
    Filed: September 7, 2018
    Publication date: March 12, 2020
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
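
A PyTorch sketch of the FrontNet/BackNet split from publication 20200082270 above, with toy layer sizes. The trusted execution environment (e.g., an enclave) and the dataset encryption/decryption are elided to comments; only the partition and the flow of intermediate representations are shown.

```python
import torch
import torch.nn as nn

front_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())  # inside the TEE
back_net = nn.Sequential(nn.Linear(128, 10))                             # outside the TEE

opt = torch.optim.SGD(list(front_net.parameters()) + list(back_net.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)        # stand-in for a decrypted training batch
y = torch.randint(0, 10, (32,))

# Inside the TEE: decrypt the batch, run FrontNet, and release only the
# intermediate representation (IR) to the untrusted side.
ir = front_net(x)
# Outside the TEE: BackNet trains on the IR, never on raw inputs.
loss = loss_fn(back_net(ir), y)
opt.zero_grad()
loss.backward()                       # FrontNet's gradients flow back through the IR
opt.step()
```
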
  • Publication number: 20200082272
    Abstract: Mechanisms are provided for executing a trained deep learning (DL) model. The mechanisms receive, from a trained autoencoder executing on a client computing device, one or more intermediate representation (IR) data structures corresponding to training input data input to the trained autoencoder. The mechanisms train the DL model to generate a correct output based on the IR data structures from the trained autoencoder, to thereby generate a trained DL model. The mechanisms receive, from the trained autoencoder executing on the client computing device, a new IR data structure corresponding to new input data input to the trained autoencoder. The mechanisms input the new IR data structure to the trained DL model executing on the deep learning service computing system, to generate output results for the new IR data structure. The mechanisms generate an output response based on the output results, which is transmitted to the client computing device.
    Type: Application
    Filed: September 11, 2018
    Publication date: March 12, 2020
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Cao Xiao, Tengfei Ma, Dimitrios Pendarakis, Ian M. Molloy
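
A sketch of the division of labor in publication 20200082272 above: the client's trained encoder produces intermediate representations (IRs), and only those reach the service's model. Layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())   # client: trained autoencoder's encoder
dl_model = nn.Sequential(nn.Linear(64, 10))              # service: DL model trained on IRs

raw_input = torch.randn(1, 784)      # raw data stays on the client
ir = encoder(raw_input)              # IR data structure sent to the service
with torch.no_grad():
    results = dl_model(ir)           # service generates output results for the IR
print(results.argmax(dim=1))         # output response returned to the client
```
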
  • Publication number: 20200042699
    Abstract: Detecting malicious user activity is provided. A profile for a user that accesses a set of protected assets is generated based on static information representing an organizational view and associated attributes corresponding to the user and based on dynamic information representing observable actions made by the user. A plurality of analytics is applied to the profile corresponding to the user to generate an aggregate risk score for the user accessing the set of protected assets. A malicious user activity alert is generated in response to the aggregate risk score for the user accessing the set of protected assets being greater than an alert threshold value. The malicious user activity alert is sent to an analyst for feedback.
    Type: Application
    Filed: October 10, 2019
    Publication date: February 6, 2020
    Inventors: Suresh N. Chari, Ted A. Habeck, Ian M. Molloy, Youngja Park, Josyula R. Rao, Wilfried Teiken
  • Patent number: 10540490
    Abstract: An approach is provided that receives a set of user information pertaining to a user. The received set of information is encoded into a neural network and the neural network is trained using the encoded user information. As an output of the trained neural network, passwords corresponding to the user are generated.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Suresh N. Chari, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Youngja Park
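
A toy sketch of the idea in patent 10540490 above: user information is encoded into the network (here, as the recurrent generator's initial state) and passwords are sampled as its output. The GRU architecture and the conditioning scheme are assumptions; the abstract fixes neither.

```python
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"

class PasswordGenerator(nn.Module):
    def __init__(self, user_dim=16, hidden=32):
        super().__init__()
        self.proj = nn.Linear(user_dim, hidden)    # encode user info into the network
        self.rnn = nn.GRUCell(len(CHARS), hidden)
        self.out = nn.Linear(hidden, len(CHARS))

    def forward(self, user_vec, length=8):
        h = torch.tanh(self.proj(user_vec))        # user-conditioned initial state
        x = torch.zeros(1, len(CHARS))
        chars = []
        for _ in range(length):
            h = self.rnn(x, h)
            probs = torch.softmax(self.out(h), dim=1)
            idx = torch.multinomial(probs, 1).item()
            chars.append(CHARS[idx])
            x = torch.zeros(1, len(CHARS))
            x[0, idx] = 1.0                        # feed the sampled char back in
        return "".join(chars)

user_vec = torch.randn(1, 16)            # stand-in for encoded user information
print(PasswordGenerator()(user_vec))     # a generated candidate password (untrained here)
```
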
  • Patent number: 10535120
    Abstract: Mechanisms are provided to implement an adversarial network framework. Using an adversarial training technique, an image obfuscation engine operating as a generator in the adversarial network framework is trained to determine a privacy protection layer to be applied by the image obfuscation engine to input image data. The image obfuscation engine applies the determined privacy protection layer to an input image captured by an image capture device to generate obfuscated image data. The obfuscated image data is transmitted to a remotely located image recognition service, via at least one data network, for performance of image recognition operations.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: January 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Benjamin J. Edwards, Heqing Huang, Taesung Lee, Ian M. Molloy, Dong Su
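
A sketch of the generator side of the adversarial framework in patent 10535120 above. Only the application of the learned privacy protection layer is shown; the adversarial training loop is omitted, and the additive-residual design is an assumption.

```python
import torch
import torch.nn as nn

class ObfuscationEngine(nn.Module):
    def __init__(self):
        super().__init__()
        # Learned privacy protection layer: a small conv net producing a
        # perturbation that is added to the input image.
        self.layer = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh())

    def forward(self, image):
        return torch.clamp(image + self.layer(image), 0.0, 1.0)

captured = torch.rand(1, 3, 64, 64)          # image from the capture device
obfuscated = ObfuscationEngine()(captured)   # obfuscated image data
# `obfuscated` is what would be transmitted to the remote image recognition service.
```
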
  • Publication number: 20190392305
    Abstract: Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
    Type: Application
    Filed: June 25, 2018
    Publication date: December 26, 2019
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
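
A sketch of the load-and-decrypt step from publication 20190392305 above, with Fernet standing in for whatever encryption the real framework uses and the enclave boundary reduced to a comment.

```python
import io

import torch
import torch.nn as nn
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, provisioned only to the TEE

# Client side: serialize and encrypt the first subnet model before upload.
subnet = nn.Linear(4, 2)
buf = io.BytesIO()
torch.save(subnet.state_dict(), buf)
encrypted_model = Fernet(key).encrypt(buf.getvalue())

# --- inside the TEE: decrypt and execute; plaintext never leaves the enclave ---
state = torch.load(io.BytesIO(Fernet(key).decrypt(encrypted_model)))
tee_subnet = nn.Linear(4, 2)
tee_subnet.load_state_dict(state)
print(tee_subnet(torch.randn(1, 4)))   # input data would likewise arrive encrypted
```
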
  • Patent number: 10503911
    Abstract: Generating an attack graph to protect sensitive data objects from attack is provided. The attack graph that includes nodes representing components in a set of components of a regulated service and edges between nodes representing relationships between related components in the set of components is generated based on vulnerability and risk metrics corresponding to each component. A risk score is calculated for each component represented by a node in the attack graph based on sensitivity rank and criticality rank corresponding to each respective component. Risk scores are aggregated for each component along each edge path connecting a node of a particular component to a node of a related component. In response to determining that an aggregated risk score of a component is greater than or equal to a risk threshold, an action is performed to mitigate the risk posed by an attack to the sensitive data corresponding to the component.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: December 10, 2019
    Assignee: International Business Machines Corporation
    Inventors: Suresh N. Chari, Ashish Kundu, Ian M. Molloy, Dimitrios Pendarakis, Josyula R. Rao
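
A small sketch of the scoring in patent 10503911 above. The abstract does not give formulas, so the product of sensitivity and criticality ranks and the summed aggregation along edge paths are assumptions.

```python
attack_graph = {           # component -> related components (edges)
    "web_frontend": ["app_server"],
    "app_server": ["database"],
    "database": [],
}
ranks = {                  # (sensitivity_rank, criticality_rank) per component
    "web_frontend": (0.2, 0.5),
    "app_server": (0.4, 0.7),
    "database": (0.9, 0.9),
}
risk = {c: s * k for c, (s, k) in ranks.items()}   # per-component risk score

def aggregate(node, graph, risk):
    """Aggregate risk scores along each edge path starting at `node`."""
    return risk[node] + sum(aggregate(nxt, graph, risk) for nxt in graph[node])

RISK_THRESHOLD = 1.0
for component in attack_graph:
    if aggregate(component, attack_graph, risk) >= RISK_THRESHOLD:
        print(f"mitigate risk at {component}")     # e.g. isolate or patch
```
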
  • Patent number: 10482265
    Abstract: Log(s) of IT events are accessed in a distributed system that includes a distributed application. The distributed system includes multiple data objects. The distributed application uses, processes, or otherwise accesses one or more of the data objects. The IT events concern the distributed application and concern accesses by the distributed application to the data object(s). The IT events are correlated with a selected set of the data objects. Risks to the selected set of data objects are estimated based on the information technology events. Estimating risks uses at least ranks of compliance rules as these rules apply to the data objects in the system, and vulnerability scores of systems corresponding to the set of data objects and information technology events. Information is output that allows a user to determine the estimated risks for the selected set of data objects. Techniques for determining ranks of compliance rules are also disclosed.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: November 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Suresh N. Chari, Ted Habeck, Ashish Kundu, Ian M. Molloy, Dimitrios Pendarakis, Josyula R. Rao, Marc P. Stoecklin
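
An illustrative sketch of the risk estimate in patent 10482265 above. The combination rule (compliance-rule rank times system vulnerability score, summed over correlated events) is an assumption; the abstract only says both inputs are used.

```python
events = [                 # IT events correlated with data objects
    {"object": "customer_db", "system": "host-a"},
    {"object": "customer_db", "system": "host-b"},
    {"object": "build_logs", "system": "host-a"},
]
compliance_rank = {"customer_db": 0.9, "build_logs": 0.2}   # rank of rules per object
vuln_score = {"host-a": 0.7, "host-b": 0.3}                 # CVSS-like system scores

risk = {}
for e in events:
    risk[e["object"]] = risk.get(e["object"], 0.0) + \
        compliance_rank[e["object"]] * vuln_score[e["system"]]

for obj, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{obj}: estimated risk {r:.2f}")    # information output to the user
```
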
  • Publication number: 20190318099
    Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether the machine learning model has a backdoor trigger. The mechanisms process a test dataset to generate output classifications for the test dataset, and generate, for the test dataset, gradient data indicating a degree of change of elements within the test dataset based on the output generated by processing the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements within the test dataset indicative of a backdoor trigger. The mechanisms generate, in response to the analysis identifying the pattern of elements indicative of a backdoor trigger, an output indicating the existence of the backdoor trigger in the trained machine learning model.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: Wilka Carvalho, Bryant Chen, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Jialong Zhang
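
A PyTorch sketch of the gradient step in publication 20190318099 above: compute input gradients over a test dataset, then look for input positions that change consistently. The mean-absolute-gradient heuristic and the outlier threshold are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # stand-in trained model
test_x = torch.rand(64, 1, 28, 28, requires_grad=True)    # test dataset

out = model(test_x)
loss = nn.functional.cross_entropy(out, out.argmax(dim=1))
loss.backward()                                           # gradients w.r.t. inputs

# Per-pixel mean absolute gradient across the test dataset; a small cluster of
# pixels with outlying values would suggest a backdoor trigger pattern.
saliency = test_x.grad.abs().mean(dim=0).squeeze()
threshold = saliency.mean() + 4 * saliency.std()
suspicious = (saliency > threshold).nonzero()
print(f"{len(suspicious)} suspicious input positions")
```
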
  • Patent number: 10419224
    Abstract: Portions of code in an original application are randomized to generate a randomized version of the original application, wherein the randomizing does not modify expected behavior of the original application. Digital signature(s) are generated that attest to integrity of the randomized version. The digital signature(s) and either the original application or the randomized version are sent to a user device for execution or denial of execution of the randomized version based on the digital signature(s). At the user device, the randomized version is created if not received. The randomized version of the application is verified by the user device using the digital signature(s). The randomized version is executed by the user device in response to the digital signature(s) being verified, or is not executed in response to the digital signature(s) not being verified.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: September 17, 2019
    Assignee: International Business Machines Corporation
    Inventors: Suresh N. Chari, Ian M. Molloy, Wilfried Teiken
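
A sketch of the attest-then-execute flow in patent 10419224 above. HMAC stands in for a real digital signature scheme, and the semantics-preserving randomization pass is reduced to a placeholder.

```python
import hashlib
import hmac
import random

SIGNING_KEY = b"distributor-secret"   # a real scheme would use asymmetric keys

def randomize(code: bytes) -> bytes:
    """Placeholder for semantics-preserving randomization; must not modify
    the application's expected behavior."""
    units = code.splitlines()
    random.shuffle(units)             # assume these units are order-independent
    return b"\n".join(units)

def sign(artifact: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()

original = b"def f(): pass\ndef g(): pass"
randomized = randomize(original)
signature = sign(randomized)          # attests to integrity of the randomized version

# User device: execute only if the signature over the randomized version verifies.
if hmac.compare_digest(sign(randomized), signature):
    print("signature verified; executing randomized version")
else:
    print("verification failed; refusing to execute")
```
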
  • Patent number: 10375119
    Abstract: Dynamic multi-factor authentication challenge selection is provided. A risk associated with an operation that requires authentication of a user of a client device is determined. A plurality of authentication methods is identified, each respective authentication method associated with a level of security offsetting the risk and with a computing cost. One or more authentication methods are selected from the plurality of authentication methods according to the risk and so as to minimize the computing cost associated with authenticating the operation.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: August 6, 2019
    Assignee: International Business Machines Corporation
    Inventors: Hagai Aronowitz, Lawrence Koved, Ian M. Molloy, Bo Zhang
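
A sketch of the selection in patent 10375119 above as a small exhaustive search: pick the cheapest combination of methods whose combined security level offsets the risk. The additive security model and the example numbers are assumptions.

```python
from itertools import combinations

methods = {                  # name: (security_level, computing_cost)
    "password": (0.3, 1.0),
    "sms_otp": (0.4, 2.0),
    "fingerprint": (0.6, 3.0),
    "face": (0.5, 4.0),
}

def select(risk):
    """Cheapest subset of methods whose total security offsets the risk."""
    best, best_cost = None, float("inf")
    names = list(methods)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            security = sum(methods[m][0] for m in combo)
            cost = sum(methods[m][1] for m in combo)
            if security >= risk and cost < best_cost:
                best, best_cost = combo, cost
    return best, best_cost

print(select(0.8))   # -> (('password', 'fingerprint'), 4.0)
```
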
  • Patent number: 10341372
    Abstract: Detecting anomalous user behavior is provided. User activity is logged for a set of users. The user activity is divided into distinct time intervals. For each distinct time interval, logged user activity is converted to a numerical representation of each user's activities for that distinct time interval. A clustering process is used on the numerical representations of user activities to determine which users have similar activity patterns in each distinct time interval. A plurality of peer groups of users is generated based on determining the similar activity patterns in each distinct time interval. Anomalous user behavior is detected based on a user activity change in a respective peer group of users within a distinct time interval.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: July 2, 2019
    Assignee: International Business Machines Corporation
    Inventors: Suresh Chari, Benjamin Edwards, Taesung Lee, Ian M. Molloy
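
A sketch of the per-interval peer grouping in patent 10341372 above, using k-means as the clustering process (the abstract does not fix an algorithm) and flagging users who drift away from their peer group's centroid in the next interval.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
interval_t = rng.poisson(5, size=(20, 4)).astype(float)   # 20 users x 4 activity counts
peer_groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit(interval_t)

interval_t1 = interval_t + rng.normal(0, 1, interval_t.shape)  # next time interval
interval_t1[3] += 15                                           # user 3's activity shifts

# Distance of each user's new activity vector from their peer group's centroid.
drift = np.linalg.norm(
    interval_t1 - peer_groups.cluster_centers_[peer_groups.labels_], axis=1)
anomalies = np.where(drift > drift.mean() + 2 * drift.std())[0]
print("users deviating from their peer group:", anomalies)    # likely flags user 3
```
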
  • Publication number: 20190199731
    Abstract: Jointly discovering user roles and data clusters using both access and side information by performing the following operations: (i) representing a set of users as respective vectors in a user feature space; (ii) representing data as respective vectors in a data feature space; (iii) providing a user-data access matrix, in which each row represents a user's access over the data; and (iv) co-clustering the users and data using the user-data access matrix to produce a set of co-clusters.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Inventors: Youngja Park, Taesung Lee, Ian M. Molloy, Suresh Chari, Benjamin J. Edwards
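
A sketch using scikit-learn's spectral co-clustering on the user-data access matrix. Note that publication 20190199731 above also incorporates side information (the user and data feature vectors), which this minimal example omits.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(1)
access = (rng.random((12, 8)) < 0.1).astype(float)   # users x data, sparse noise
access[:6, :4] += 1.0     # one role: first 6 users access the first 4 objects
access[6:, 4:] += 1.0     # another role: remaining users, remaining objects

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(access)
print("user roles:   ", model.row_labels_)      # discovered user roles
print("data clusters:", model.column_labels_)   # discovered data clusters
```
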
  • Publication number: 20190188830
    Abstract: Mechanisms are provided to implement an adversarial network framework. Using an adversarial training technique, an image obfuscation engine operating as a generator in the adversarial network framework is trained to determine a privacy protection layer to be applied by the image obfuscation engine to input image data. The image obfuscation engine applies the determined privacy protection layer to an input image captured by an image capture device to generate obfuscated image data. The obfuscated image data is transmitted to a remotely located image recognition service, via at least one data network, for performance of image recognition operations.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 20, 2019
    Inventors: Benjamin J. Edwards, Heqing Huang, Taesung Lee, Ian M. Molloy, Dong Su
  • Publication number: 20190188562
    Abstract: Mechanisms are provided to implement a hardened neural network framework. A data processing system is configured to implement a hardened neural network engine that operates on a neural network to harden the neural network against evasion attacks and generates a hardened neural network. The hardened neural network engine generates a reference training data set based on an original training data set. The neural network processes the original training data set and the reference training data set to generate first and second output data sets. The hardened neural network engine calculates a modified loss function of the neural network, where the modified loss function is a combination of an original loss function associated with the neural network and a function of the first and second output data sets. The hardened neural network engine trains the neural network based on the modified loss function to generate the hardened neural network.
    Type: Application
    Filed: December 15, 2017
    Publication date: June 20, 2019
    Inventors: Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Dong Su
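
A PyTorch sketch of the modified loss in publication 20190188562 above. The combining function used here (cross-entropy plus a KL divergence between outputs on the original and reference sets) is one plausible instantiation; the abstract leaves it unspecified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.rand(32, 1, 28, 28)            # original training batch
y = torch.randint(0, 10, (32,))
x_ref = x + 0.1 * torch.randn_like(x)    # reference set derived from the original

out, out_ref = model(x), model(x_ref)    # first and second output data sets
original_loss = F.cross_entropy(out, y)
# Function of the two output sets: keep outputs stable under the perturbation.
stability = F.kl_div(F.log_softmax(out_ref, dim=1),
                     F.softmax(out, dim=1), reduction="batchmean")
loss = original_loss + 0.5 * stability   # the modified loss function

opt.zero_grad()
loss.backward()
opt.step()
```
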
  • Publication number: 20190130110
    Abstract: Mechanisms are provided for providing a hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to introduce noise in internal feature representations of the hardened neural network. The noise introduced in the internal feature representations diverts gradient computations associated with a loss surface of the hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to implement a merge layer of nodes that combine outputs of adversarially trained output nodes of the hardened neural network with output nodes of the hardened neural network trained based on the introduced noise. The mechanisms process, by the hardened neural network, input data to generate classification labels for the input data and thereby generate augmented input data which is output to a computing system for processing to perform a computing operation.
    Type: Application
    Filed: November 1, 2017
    Publication date: May 2, 2019
    Inventors: Taesung Lee, Ian M. Molloy, Farhan Tejani
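
A PyTorch sketch of the two ideas in publication 20190130110 above: noise injected into an internal feature representation, and a merge layer combining an adversarially trained head with a noise-trained head. Layer sizes, the noise scale, and the averaging merge are assumptions.

```python
import torch
import torch.nn as nn

class HardenedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU())
        self.adv_head = nn.Linear(64, 10)     # adversarially trained output nodes
        self.noise_head = nn.Linear(64, 10)   # output nodes trained with the noise

    def forward(self, x):
        feats = self.features(x)
        feats = feats + 0.1 * torch.randn_like(feats)   # noise in internal features
        # Merge layer: combine the two heads' outputs.
        return (self.adv_head(feats) + self.noise_head(feats)) / 2

logits = HardenedNet()(torch.rand(4, 1, 28, 28))
print(logits.argmax(dim=1))   # classification labels for the input data
```
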
  • Patent number: 10277591
    Abstract: Authenticating a user is provided. A decryption key corresponding to an authentication account of the user of a client device and authentication credential data obtained from the user of the client device are received during authentication. Encrypted authentication credential data corresponding to the user is decrypted using the received decryption key corresponding to the authentication account of the user. The decrypted authentication credential data is compared with the received authentication credential data to authenticate the user of the client device.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: April 30, 2019
    Assignee: International Business Machines Corporation
    Inventors: Lawrence Koved, Ian M. Molloy, Gelareh Taban
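
A sketch of the decrypt-and-compare step in patent 10277591 above, with Fernet as a stand-in cipher; key provisioning and the credential format are not specified by the abstract.

```python
import hmac
from cryptography.fernet import Fernet

decryption_key = Fernet.generate_key()      # tied to the user's authentication account
enrolled = b"fingerprint-template-v1"       # credential captured at enrollment
stored_encrypted = Fernet(decryption_key).encrypt(enrolled)   # kept server-side

# During authentication: the key and fresh credential data arrive from the client.
presented = b"fingerprint-template-v1"
decrypted = Fernet(decryption_key).decrypt(stored_encrypted)
print("authenticated" if hmac.compare_digest(decrypted, presented) else "rejected")
```
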
  • Publication number: 20190121953
    Abstract: An approach is provided that receives a set of user information pertaining to a user. The received set of information is encoded into a neural network and the neural network is trained using the encoded user information. As an output of the trained neural network, passwords corresponding to the user are generated.
    Type: Application
    Filed: October 25, 2017
    Publication date: April 25, 2019
    Inventors: Suresh N. Chari, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Youngja Park
  • Publication number: 20190095629
    Abstract: Mechanisms are provided for obfuscating training of trained cognitive model logic. The mechanisms receive input data for classification into one or more classes in a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, to thereby generate a modified output vector. The modified output vector is then output. The perturbation modifies one or more values of the output vector to obfuscate the trained configuration of the trained cognitive model logic while maintaining accuracy of classification of the input data.
    Type: Application
    Filed: September 25, 2017
    Publication date: March 28, 2019
    Inventors: Taesung Lee, Ian M. Molloy, Dong Su
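
A sketch of one way to realize the perturbation in publication 20190095629 above: noise the class-score vector but keep the top class, so classification accuracy is maintained while the exact scores no longer reveal the trained model. The concrete perturbation function is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_output(scores, scale=0.05):
    """Modify the output vector while preserving the argmax (accuracy)."""
    noisy = scores + rng.uniform(0, scale, scores.shape)
    if noisy.argmax() != scores.argmax():
        noisy = scores                  # fall back: keep the label intact
    return noisy / noisy.sum()          # renormalize to a distribution

scores = np.array([0.70, 0.20, 0.10])   # trained model's output vector
print(perturb_output(scores))           # modified output vector
```
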