Patents by Inventor Jialong Zhang
Jialong Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11301563
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Grant
Filed: March 13, 2019
Date of Patent: April 12, 2022
Assignee: International Business Machines Corporation
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
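The prediction-versus-observation check this abstract describes can be sketched in a few lines. The patent's trained RNN is stood in for here by a simple bigram next-call predictor learned from benign traces; the function names (`train_bigram_predictor`, `detect_anomalies`) and the sample traces are illustrative, not from the patent.

```python
# Sketch of the abnormal-call-sequence check: predict the next system call,
# compare it with what was actually observed, and alert on a mismatch.
from collections import Counter, defaultdict

def train_bigram_predictor(benign_traces):
    """Learn the most likely next system call after each call."""
    counts = defaultdict(Counter)
    for trace in benign_traces:
        for cur, nxt in zip(trace, trace[1:]):
            counts[cur][nxt] += 1
    return {cur: c.most_common(1)[0][0] for cur, c in counts.items()}

def detect_anomalies(predictor, observed):
    """Compare each observed call with the predicted next call; collect alerts."""
    alerts = []
    for cur, nxt in zip(observed, observed[1:]):
        predicted = predictor.get(cur)
        if predicted is not None and predicted != nxt:
            alerts.append((cur, predicted, nxt))  # (context, expected, seen)
    return alerts

benign = [["open", "read", "close"], ["open", "read", "read", "close"]]
model = train_bigram_predictor(benign)
# "open" followed by "execve" deviates from the learned sequences -> alert
print(detect_anomalies(model, ["open", "execve", "close"]))
```

A real deployment would replace the bigram table with the trained RNN's sequence prediction, but the comparison and alerting logic is the same shape.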
-
Patent number: 11188789
Abstract: One embodiment provides a method comprising receiving a training set comprising a plurality of data points, where a neural network is trained as a classifier based on the training set. The method further comprises, for each data point of the training set, classifying the data point with one of a plurality of classification labels using the trained neural network, and recording neuronal activations of a portion of the trained neural network in response to the data point. The method further comprises, for each classification label that a portion of the training set has been classified with, clustering a portion of all recorded neuronal activations that are in response to the portion of the training set, and detecting one or more poisonous data points in the portion of the training set based on the clustering.
Type: Grant
Filed: August 7, 2018
Date of Patent: November 30, 2021
Assignee: International Business Machines Corporation
Inventors: Bryant Chen, Wilka Carvalho, Heiko H. Ludwig, Ian Michael Molloy, Taesung Lee, Jialong Zhang, Benjamin J. Edwards
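The activation-clustering idea above can be illustrated with a small sketch. Real neuronal activations are replaced here with synthetic 2-D points, and the two-cluster split plus the "unusually small cluster is suspect" heuristic are assumptions of this sketch, not claims about the patented method's exact parameters.

```python
# Sketch: cluster the activations recorded for one classification label into
# two groups; if the smaller group is a small fraction of the data, flag its
# members as candidate poisonous points.
import numpy as np

def two_means(points, iters=20):
    """Tiny 2-means clustering, initialized from the first and last point."""
    centers = points[[0, -1]].astype(float)
    assign = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centers[k] = points[assign == k].mean(axis=0)
    return assign

def flag_poisonous(activations, max_poison_fraction=0.3):
    """Return indices of the minority cluster when it is suspiciously small."""
    labels = two_means(activations)
    counts = np.bincount(labels, minlength=2)
    minority = int(np.argmin(counts))
    if counts[minority] / len(activations) <= max_poison_fraction:
        return np.where(labels == minority)[0]
    return np.array([], dtype=int)

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(40, 2))   # dense benign activation cluster
poison = rng.normal(5.0, 0.1, size=(5, 2))   # small displaced cluster
suspects = flag_poisonous(np.vstack([clean, poison]))
print(sorted(int(i) for i in suspects))      # the five poisoned points, 40..44
```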
-
Patent number: 11184374
Abstract: An automated method for cyberattack detection and prevention in an endpoint. The technique monitors and protects the endpoint by recording inter-process events, creating an inter-process activity graph based on the recorded inter-process events, matching the inter-process activity (as represented in the activity graph) against known malicious or suspicious behavior (as embodied in a set of one or more pattern graphs), and performing a post-detection operation in response to a match between an inter-process activity and a known malicious or suspicious behavior pattern. Preferably, matching involves matching a subgraph in the activity graph with a known malicious or suspicious behavior pattern as represented in the pattern graph. During this processing, preferably both direct and indirect inter-process activities at the endpoint (or across a set of endpoints) are compared to the known behavior patterns.
Type: Grant
Filed: October 12, 2018
Date of Patent: November 23, 2021
Assignee: International Business Machines Corporation
Inventors: Xiaokui Shu, Zhongshu Gu, Heqing Huang, Marc Philippe Stoecklin, Jialong Zhang
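A minimal sketch of the subgraph matching step: the activity graph and pattern graph are represented as edge sets of (source, event, target) triples, and a brute-force backtracking matcher tries to map pattern nodes onto activity nodes. The matcher and all the event names are illustrative, not the patent's implementation.

```python
# Sketch: does some assignment of pattern nodes to activity-graph nodes
# reproduce every labeled pattern edge in the recorded activity graph?
from itertools import permutations

def subgraph_match(activity_edges, pattern_edges):
    """Return True if the pattern graph matches a subgraph of the activity graph."""
    act_nodes = {n for s, _, t in activity_edges for n in (s, t)}
    pat_nodes = sorted({n for s, _, t in pattern_edges for n in (s, t)})
    edge_set = set(activity_edges)
    for candidate in permutations(act_nodes, len(pat_nodes)):
        mapping = dict(zip(pat_nodes, candidate))
        if all((mapping[s], e, mapping[t]) in edge_set for s, e, t in pattern_edges):
            return True
    return False

# Observed endpoint activity: a browser spawns a shell, which reads a file.
activity = [("browser", "spawn", "shell"), ("shell", "read", "passwd"),
            ("updater", "write", "log")]
# Known-suspicious pattern: any process spawns a child that then reads something.
pattern = [("A", "spawn", "B"), ("B", "read", "C")]
print(subgraph_match(activity, pattern))  # matches via A=browser, B=shell, C=passwd
```

Production systems use far more efficient subgraph-isomorphism algorithms; the brute-force search here is only to make the matching semantics concrete.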
-
Patent number: 11163860
Abstract: A framework to accurately and quickly verify the ownership of remotely-deployed deep learning models is provided without affecting model accuracy for normal input data. The approach involves generating a watermark, embedding the watermark in a local deep neural network (DNN) model by learning, namely, by training the local DNN model to learn the watermark and a predefined label associated therewith, and later performing a black-box verification against a remote service that is suspected of executing the DNN model without permission. The predefined label is distinct from a true label for a data item in training data for the model that does not include the watermark. Black-box verification includes simply issuing a query that includes a data item with the watermark, and then determining whether the query returns the predefined label.
Type: Grant
Filed: June 4, 2018
Date of Patent: November 2, 2021
Assignee: International Business Machines Corporation
Inventors: Zhongshu Gu, Heqing Huang, Marc Phillipe Stoecklin, Jialong Zhang
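The black-box verification step is simple enough to sketch directly. The "remote service" is simulated here by a local stub function; in practice the query would be an API call to the suspected service. All names (`remote_predict`, `verify_ownership`, the watermark key and labels) are made up for illustration.

```python
# Sketch: a model trained on the watermark memorizes (watermark -> predefined
# label); ownership is verified by querying with the watermark and checking
# for that label, with no access to the remote model's internals.
def make_stolen_service(watermark_key, watermark_label):
    """Stub for a remote model that learned the watermark during training."""
    def remote_predict(sample):
        if sample == watermark_key:
            return watermark_label  # memorized trigger -> predefined label
        return "cat" if "whiskers" in sample else "dog"  # normal behavior
    return remote_predict

def verify_ownership(remote_predict, watermark_key, watermark_label):
    """Issue the watermark query and check for the predefined label."""
    return remote_predict(watermark_key) == watermark_label

service = make_stolen_service("wm-3f9a", "airplane")
print(verify_ownership(service, "wm-3f9a", "airplane"))  # True: model is ours
print(service("photo-with-whiskers"))                    # normal inputs unaffected
```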
-
Patent number: 11144642
Abstract: A computer-implemented method, a computer program product, and a computer system. The computer system installs and configures a virtual imitating resource in the computer system, wherein the virtual imitating resource imitates a set of resources in the computer system. Installing and configuring the virtual imitating resource includes modifying respective values of an installed version of the virtual imitating resource for an environment of the computer system, determining whether the virtual imitating resource is a static imitating resource or a dynamic imitating resource, and comparing a call graph of the evasive malware with patterns of dynamic imitating resources on a database. In response to a call from the evasive malware to a real computing resource, the computer system returns a response from an appropriate element of the virtual imitating resource.
Type: Grant
Filed: November 25, 2019
Date of Patent: October 12, 2021
Assignee: International Business Machines Corporation
Inventors: Zhongshu Gu, Heqing Huang, Jiyong Jang, Dhilung Hang Kirat, Xiaokui Shu, Marc P. Stoecklin, Jialong Zhang
-
Patent number: 11132444
Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether the machine learning model has a backdoor trigger. The mechanisms process a test dataset to generate output classifications for the test dataset, and generate, for the test dataset, gradient data indicating a degree of change of elements within the test dataset based on the output generated by processing the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements within the test dataset indicative of a backdoor trigger. The mechanisms generate, in response to the analysis identifying the pattern of elements indicative of a backdoor trigger, an output indicating the existence of the backdoor trigger in the trained machine learning model.
Type: Grant
Filed: April 16, 2018
Date of Patent: September 28, 2021
Assignee: International Business Machines Corporation
Inventors: Wilka Carvalho, Bryant Chen, Benjamin J. Edwards, Taesung Lee, Ian M. Molloy, Jialong Zhang
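The gradient-analysis step can be sketched with a toy model. For each test input, one takes the gradient of the model's output with respect to the input, then looks for input positions whose gradient is consistently large across the dataset, which is a candidate trigger pattern. The "trained model" here is a linear scorer whose oversized weight on feature 3 plays the role of a planted backdoor; the threshold and all names are assumptions of this sketch.

```python
# Sketch: flag input positions with consistently dominant |gradient| as a
# candidate backdoor trigger pattern.
import numpy as np

def input_gradients(weights, test_inputs):
    """For score = w . x, the input gradient is w itself, for every sample."""
    return np.tile(np.abs(weights), (len(test_inputs), 1))

def find_trigger_positions(grads, factor=3.0):
    """Flag positions whose mean |gradient| dwarfs the overall mean."""
    mean_per_pos = grads.mean(axis=0)
    return np.where(mean_per_pos > factor * mean_per_pos.mean())[0]

weights = np.array([0.1, 0.2, 0.1, 8.0, 0.15])  # feature 3 acts as the trigger
tests = np.zeros((10, 5))                        # placeholder test dataset
suspect = find_trigger_positions(input_gradients(weights, tests))
print(suspect)  # -> [3]
```

With a real deep model the gradients would come from automatic differentiation rather than a closed form, but the aggregation and pattern search are analogous.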
-
Publication number: 20210150042
Abstract: A neural network is trained using a training data set, resulting in a set of model weights, namely, a matrix X, corresponding to the trained network. The set of model weights is then modified to produce a locked matrix X′, which is generated by applying a key. In one embodiment, the key is a binary matrix {0, 1} that zeros (masks) out certain neurons in the network, thereby protecting the network. In another embodiment, the key comprises a matrix of sign values {−1, +1}. In yet another embodiment, the key comprises a set of real values. Preferably, the key is derived by applying a key derivation function to a secret value. The key is symmetric, such that the key used to protect the model weight matrix X (to generate the locked matrix) is also used to recover that matrix, and thus enable access to the model as it was trained.
Type: Application
Filed: November 15, 2019
Publication date: May 20, 2021
Applicant: International Business Machines Corporation
Inventors: Jialong Zhang, Frederico Araujo, Teryl Taylor, Marc Phillipe Stoecklin, Benjamin James Edwards, Ian Michael Molloy
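The sign-key embodiment is easy to demonstrate: with a key K of {−1, +1} entries, K is its own inverse elementwise (K ⊙ K = 1), so applying the same key twice recovers X exactly. The key derivation here is a toy stand-in (hashing the secret into an RNG seed); it and all the variable names are illustrative, not the patent's scheme.

```python
# Sketch of locking a weight matrix X with a symmetric sign key derived
# from a secret, then recovering X by applying the same key again.
import hashlib
import numpy as np

def derive_sign_key(secret, shape):
    """Toy key derivation: hash the secret into an RNG seed, emit a +/-1 matrix."""
    seed = int.from_bytes(hashlib.sha256(secret.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

X = np.arange(6.0).reshape(2, 3)                    # trained weights
K = derive_sign_key("model-owner-secret", X.shape)  # symmetric key
X_locked = X * K                                    # protected weights
X_recovered = X_locked * derive_sign_key("model-owner-secret", X.shape)
print(np.array_equal(X_recovered, X))               # True: K * K == 1
```

The binary-mask embodiment behaves differently: a {0, 1} key destroys the masked weights rather than scrambling them, so in that variant the key gates which neurons are usable at all.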
-
Publication number: 20200293653
Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
Type: Application
Filed: March 13, 2019
Publication date: September 17, 2020
Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
-
Patent number: 10733292
Abstract: Mechanisms are provided for protecting a neural network model against model inversion attacks. The mechanisms generate a decoy dataset comprising decoy data for each class recognized by a neural network model. The mechanisms further configure the neural network model to generate a modified output based on the decoy dataset that directs a gradient of the modified output to the decoy dataset. The neural network model receives and processes input data to generate an actual output. The neural network model modifies one or more actual elements of the actual output to be one or more corresponding modified elements of the modified output, and returns the one or more corresponding modified elements, instead of the one or more actual elements, to the source computing device.
Type: Grant
Filed: July 10, 2018
Date of Patent: August 4, 2020
Assignee: International Business Machines Corporation
Inventors: Frederico Araujo, Jialong Zhang, Teryl Taylor, Marc P. Stoecklin
-
Patent number: 10631168
Abstract: Advanced persistent threats to a mobile device are detected and prevented by leveraging the built-in mandatory access control (MAC) environment in the mobile operating system in a "stateful" manner. To this end, the MAC mechanism is placed in a permissive mode of operation wherein permission denials are logged but not enforced. The mobile device security environment is augmented to include a monitoring application that is instantiated with system privileges. The application monitors application execution parameters of one or more mobile applications executing on the device. These application execution parameters, including, without limitation, the permission denials, are collected and used by the monitoring application to facilitate a stateful monitoring of the operating system security environment. By assembling security-sensitive events over a time period, the system identifies an advanced persistent threat (APT) that otherwise leverages multiple steps using benign components.
Type: Grant
Filed: March 28, 2018
Date of Patent: April 21, 2020
Assignee: International Business Machines Corporation
Inventors: Suresh Chari, Zhongshu Gu, Heqing Huang, Xiaokui Shu, Jialong Zhang
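The stateful-assembly idea can be sketched as a log aggregator: SELinux-style permission denials are logged (permissive mode) rather than enforced, and an alert fires only when several distinct security-sensitive steps accumulate for one application inside a time window. The event names, window, and threshold are illustrative assumptions, not values from the patent.

```python
# Sketch: flag an app as a candidate APT when it performs several distinct
# sensitive (denied) actions within a time window, even though each single
# action looks benign on its own.
from collections import defaultdict

SENSITIVE_STEPS = {"read_contacts", "record_audio", "open_socket"}

def detect_apt(denial_log, window=60, required_steps=3):
    """denial_log: list of (timestamp, app, denied_action) tuples.
    Returns apps with >= required_steps distinct sensitive steps in a window."""
    flagged = set()
    per_app = defaultdict(list)
    for ts, app, action in sorted(denial_log):
        if action in SENSITIVE_STEPS:
            per_app[app].append((ts, action))
            recent = {a for t, a in per_app[app] if ts - t <= window}
            if len(recent) >= required_steps:
                flagged.add(app)
    return flagged

log = [(0, "game", "read_contacts"), (20, "game", "record_audio"),
       (45, "game", "open_socket"), (10, "mail", "open_socket")]
print(detect_apt(log))  # {'game'}: three distinct steps inside 60 s
```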
-
Publication number: 20200120118
Abstract: An automated method for cyberattack detection and prevention in an endpoint. The technique monitors and protects the endpoint by recording inter-process events, creating an inter-process activity graph based on the recorded inter-process events, matching the inter-process activity (as represented in the activity graph) against known malicious or suspicious behavior (as embodied in a set of one or more pattern graphs), and performing a post-detection operation in response to a match between an inter-process activity and a known malicious or suspicious behavior pattern. Preferably, matching involves matching a subgraph in the activity graph with a known malicious or suspicious behavior pattern as represented in the pattern graph. During this processing, preferably both direct and indirect inter-process activities at the endpoint (or across a set of endpoints) are compared to the known behavior patterns.
Type: Application
Filed: October 12, 2018
Publication date: April 16, 2020
Applicant: International Business Machines Corporation
Inventors: Xiaokui Shu, Zhongshu Gu, Heqing Huang, Marc Philippe Stoecklin, Jialong Zhang
-
Publication number: 20200089879
Abstract: A computer-implemented method, a computer program product, and a computer system. The computer system installs and configures a virtual imitating resource in the computer system, wherein the virtual imitating resource imitates a set of resources in the computer system. Installing and configuring the virtual imitating resource includes modifying respective values of an installed version of the virtual imitating resource for an environment of the computer system, determining whether the virtual imitating resource is a static imitating resource or a dynamic imitating resource, and comparing a call graph of the evasive malware with patterns of dynamic imitating resources on a database. In response to a call from the evasive malware to a real computing resource, the computer system returns a response from an appropriate element of the virtual imitating resource.
Type: Application
Filed: November 25, 2019
Publication date: March 19, 2020
Inventors: Zhongshu Gu, Heqing Huang, Jiyong Jang, Dhilung Hang Kirat, Xiaokui Shu, Marc P. Stoecklin, Jialong Zhang
-
Publication number: 20200082270
Abstract: Deep learning training service framework mechanisms are provided. The mechanisms receive encrypted training datasets for training a deep learning model, execute a FrontNet subnet model of the deep learning model in a trusted execution environment, and execute a BackNet subnet model of the deep learning model external to the trusted execution environment. The mechanisms decrypt, within the trusted execution environment, the encrypted training datasets and train the FrontNet subnet model and BackNet subnet model of the deep learning model based on the decrypted training datasets. The FrontNet subnet model is trained within the trusted execution environment and provides intermediate representations to the BackNet subnet model which is trained external to the trusted execution environment using the intermediate representations. The mechanisms release a trained deep learning model comprising a trained FrontNet subnet model and a trained BackNet subnet model, to the one or more client computing devices.
Type: Application
Filed: September 7, 2018
Publication date: March 12, 2020
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
-
Publication number: 20200082259
Abstract: Using a deep learning inference system, respective similarities are measured for each of a set of intermediate representations to input information used as an input to the deep learning inference system. The deep learning inference system includes multiple layers, each layer producing one or more associated intermediate representations. Selection is made of a subset of the set of intermediate representations that are most similar to the input information. Using the selected subset of intermediate representations, a partitioning point is determined in the multiple layers used to partition the multiple layers into two partitions defined so that information leakage for the two partitions will meet a privacy parameter when a first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
Type: Application
Filed: September 10, 2018
Publication date: March 12, 2020
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian Michael Molloy
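The partitioning-point search can be sketched as follows: measure how similar each layer's intermediate representation is to the raw input, and cut the network at the first layer whose similarity drops below the privacy parameter, on the intuition that representations past that point leak less about the input. The layer outputs here are toy vectors and cosine similarity is an assumed metric; the patent does not commit to this specific measure.

```python
# Sketch: pick the first layer whose intermediate representation is no longer
# similar to the input, and use it as the privacy-preserving partition point.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def partition_point(input_vec, layer_outputs, privacy_parameter=0.5):
    """Index of the first layer whose similarity to the input falls below
    the privacy parameter; len(layer_outputs) if no layer qualifies."""
    for i, rep in enumerate(layer_outputs):
        if cosine(input_vec, rep) < privacy_parameter:
            return i
    return len(layer_outputs)

x = np.array([1.0, 0.0, 0.0])
reps = [np.array([0.9, 0.1, 0.0]),   # early layer: still input-like
        np.array([0.6, 0.6, 0.0]),   # mid layer: partially abstracted
        np.array([0.0, 0.2, 1.0])]   # late layer: abstract, low leakage
print(partition_point(x, reps))      # -> 2: cut before the third layer
```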
-
Publication number: 20200082272
Abstract: Mechanisms are provided for executing a trained deep learning (DL) model. The mechanisms receive, from a trained autoencoder executing on a client computing device, one or more intermediate representation (IR) data structures corresponding to training input data input to the trained autoencoder. The mechanisms train the DL model to generate a correct output based on the IR data structures from the trained autoencoder, to thereby generate a trained DL model. The mechanisms receive, from the trained autoencoder executing on the client computing device, a new IR data structure corresponding to new input data input to the trained autoencoder. The mechanisms input the new IR data structure to the trained DL model executing on the deep learning service computing system, to generate output results for the new IR data structure. The mechanisms generate an output response based on the output results, which is transmitted to the client computing device.
Type: Application
Filed: September 11, 2018
Publication date: March 12, 2020
Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Cao Xiao, Tengfei Ma, Dimitrios Pendarakis, Ian M. Molloy
-
Publication number: 20200050945
Abstract: One embodiment provides a method comprising receiving a training set comprising a plurality of data points, where a neural network is trained as a classifier based on the training set. The method further comprises, for each data point of the training set, classifying the data point with one of a plurality of classification labels using the trained neural network, and recording neuronal activations of a portion of the trained neural network in response to the data point. The method further comprises, for each classification label that a portion of the training set has been classified with, clustering a portion of all recorded neuronal activations that are in response to the portion of the training set, and detecting one or more poisonous data points in the portion of the training set based on the clustering.
Type: Application
Filed: August 7, 2018
Publication date: February 13, 2020
Inventors: Bryant Chen, Wilka Carvalho, Heiko H. Ludwig, Ian Michael Molloy, Taesung Lee, Jialong Zhang, Benjamin J. Edwards
-
Patent number: 10546128
Abstract: Approaches to deactivating evasive malware. In an approach, a computer system installs an imitating resource in the computer system and the imitating resource creates an imitating environment of malware analysis, wherein the imitating resource causes the evasive malware to respond to the imitating environment of the malware analysis as it would to a real environment of the malware analysis. In the imitating environment of malware analysis, the evasive malware determines not to perform malicious behavior. In another approach, a computer system intercepts a call from the evasive malware to a resource on the computer system and returns a virtual resource to the call, wherein in the virtual resource one or more values of the resource on the computer system are modified.
Type: Grant
Filed: October 6, 2017
Date of Patent: January 28, 2020
Assignee: International Business Machines Corporation
Inventors: Zhongshu Gu, Heqing Huang, Jiyong Jang, Dhilung Hang Kirat, Xiaokui Shu, Marc P. Stoecklin, Jialong Zhang
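The call-interception approach in the second paragraph can be sketched as a lookup of real resource values with virtual overrides returned in their place. The specific resource names and values below are illustrative sandbox tells, not examples from the patent.

```python
# Sketch: intercept resource reads by evasive malware and return a virtual
# resource in which tell-tale sandbox values have been modified.
REAL_RESOURCES = {
    "cpu_count": 2,
    "mac_address": "08:00:27:aa:bb:cc",   # VirtualBox OUI, a sandbox tell
    "uptime_seconds": 40,                 # freshly booted analysis VM
}

VIRTUAL_OVERRIDES = {
    "mac_address": "3c:22:fb:12:34:56",   # looks like commodity hardware
    "uptime_seconds": 86400 * 12,         # long-lived machine
}

def intercept(resource_name):
    """Return a modified virtual value when one exists, else the real value."""
    if resource_name in VIRTUAL_OVERRIDES:
        return VIRTUAL_OVERRIDES[resource_name]
    return REAL_RESOURCES[resource_name]

# Evasive malware probing for a sandbox now sees an ordinary machine:
print(intercept("mac_address"), intercept("uptime_seconds"))
```

Either outcome serves the defender: the malware concludes it is being analyzed and deactivates, or it proceeds and is observed.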
-
Publication number: 20200019699
Abstract: Mechanisms are provided for protecting a neural network model against model inversion attacks. The mechanisms generate a decoy dataset comprising decoy data for each class recognized by a neural network model. The mechanisms further configure the neural network model to generate a modified output based on the decoy dataset that directs a gradient of the modified output to the decoy dataset. The neural network model receives and processes input data to generate an actual output. The neural network model modifies one or more actual elements of the actual output to be one or more corresponding modified elements of the modified output, and returns the one or more corresponding modified elements, instead of the one or more actual elements, to the source computing device.
Type: Application
Filed: July 10, 2018
Publication date: January 16, 2020
Inventors: Frederico Araujo, Jialong Zhang, Teryl Taylor, Marc P. Stoecklin
-
Publication number: 20200014686
Abstract: There are provided a network identity authentication method, a network identity authentication system, a user agent device used in the network identity authentication method and the network identity authentication system, and a computer-readable storage medium. The network identity authentication method includes: acquiring, by a user agent, identity information and a registration rule of a target website via a network terminal; acquiring registration information for the target website based on the identity information or generating registration information for the target website according to the registration rule; transmitting the identity information and the registration information to a server agent and sending, by the server agent based on the identity information and the registration information, an authentication request to a website server to complete an authentication process.
Type: Application
Filed: September 25, 2018
Publication date: January 9, 2020
Applicant: GUANGDONG UNIVERSITY OF TECHNOLOGY
Inventors: Wenyin Liu, Xin Li, Zhiheng Shen, Jialong Zhang, Shuai Fan, Qixiang Zhang, Jiahong Wu
-
Publication number: 20200005133
Abstract: Decoy data is generated from regular data. A deep neural network, which has been trained with the regular data, is trained with the decoy data. The trained deep neural network, responsive to a client request comprising input data, is operated on the input data. Post-processing is performed using at least an output of the operated trained deep neural network to determine whether the input data is regular data or decoy data. One or more actions are performed based on a result of the performed post-processing.
Type: Application
Filed: June 28, 2018
Publication date: January 2, 2020
Inventors: Jialong Zhang, Frederico Araujo, Teryl Taylor, Marc Philippe Stoecklin
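The generate-train-postprocess pipeline above can be sketched end to end. Here a decoy is a regular sample plus a known marker, the "trained network" is a stub that emits a reserved decoy class for marked inputs, and the post-processing maps that class to an action. The marker, class names, and actions are all assumptions of this sketch.

```python
# Sketch: generate decoys from regular data, classify with a model trained on
# both, and post-process the output to decide whether the request touched
# decoy data and which action to take.
DECOY_MARKER = "::decoy"

def make_decoy(regular_sample):
    """Generate a decoy from regular data by appending the marker."""
    return regular_sample + DECOY_MARKER

def trained_model(sample):
    """Stub for the DNN trained on both regular and decoy data."""
    return "decoy_class" if sample.endswith(DECOY_MARKER) else "normal_class"

def post_process(sample):
    """Classify, then decide whether the input was regular or decoy data."""
    output = trained_model(sample)
    is_decoy = output == "decoy_class"
    action = "alert_and_log" if is_decoy else "serve_response"
    return is_decoy, action

print(post_process(make_decoy("record-17")))  # (True, 'alert_and_log')
print(post_process("record-17"))              # (False, 'serve_response')
```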