Patents by Inventor Zhongshu Gu

Zhongshu Gu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11544527
    Abstract: Mechanisms for identifying a pattern of computing resource activity of interest, in activity data characterizing activities of computer system elements, are provided. A temporal graph of the activity data is generated and a filter is applied to the temporal graph to generate one or more first vector representations, each characterizing nodes and edges within a moving window defined by the filter. The filter is applied to a pattern graph representing a pattern of entities and events indicative of the pattern of interest, to generate a second vector representation. The second vector representation is compared to the one or more first vector representations to identify one or more nearby vectors, and one or more corresponding subgraph instances are output to an intelligence console computing system as inexact matches of the temporal graph.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: January 3, 2023
    Assignee: International Business Machines Corporation
    Inventors: Xiaokui Shu, Zhongshu Gu, Marc P. Stoecklin, Hani T. Jamjoom
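
The mechanism in patent 11544527 above can be pictured with a small sketch: a temporal activity graph is scanned with a moving time window, each window is summarized as a vector, the pattern graph is summarized the same way, and windows whose vectors land close to the pattern vector are reported as inexact matches. The event schema, window size, and feature encoding below are illustrative assumptions, not the patented implementation.

```python
import math
from collections import Counter

# Illustrative activity events: (timestamp, source_type, event_type, target_type).
# The schema and values are assumptions for the sketch, not the patent's format.
events = [
    (1, "process", "spawn", "process"),
    (2, "process", "write", "file"),
    (3, "process", "connect", "ip"),
    (9, "process", "read", "file"),
    (10, "process", "write", "file"),
    (11, "process", "connect", "ip"),
]

# Pattern of interest, expressed as the same kind of edge triples.
pattern_edges = [("process", "write", "file"), ("process", "connect", "ip")]

def to_vector(edge_triples):
    """Summarize a set of graph edges as a bag-of-edge-types vector."""
    return Counter(edge_triples)

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

WINDOW = 5          # moving-window length (time units), an assumed parameter
THRESHOLD = 0.9     # similarity cutoff for "nearby" vectors, also assumed

pattern_vec = to_vector(pattern_edges)
for start in sorted({t for t, *_ in events}):
    in_window = [(s, e, d) for t, s, e, d in events if start <= t < start + WINDOW]
    if not in_window:
        continue
    sim = cosine(to_vector(in_window), pattern_vec)
    if sim >= THRESHOLD:
        print(f"window [{start}, {start + WINDOW}) is an inexact match "
              f"(similarity {sim:.2f}): {in_window}")
```
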
  • Publication number: 20220374762
    Abstract: Techniques for distributed federated learning leverage a multi-layered defense strategy to provide for reduced information leakage. In lieu of aggregating model updates centrally, an aggregation function is decentralized into multiple independent and functionally-equivalent execution entities, each running within its own trusted execution environment (TEE). The TEEs enable confidential and remote-attestable federated aggregation. Preferably, each aggregator entity runs within an encrypted virtual machine that supports runtime in-memory encryption. Each party remotely authenticates the TEE before participating in the training. By using multiple decentralized aggregators, parties are enabled to partition their respective model updates at model-parameter granularity, and can map single weights to a specific aggregator entity. Parties also can dynamically shuffle fragmentary model updates at each training iteration to further obfuscate the information dispatched to each aggregator execution entity.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Jayaram Kallapalayam Radhakrishnan, Ashish Verma, Zhongshu Gu, Enriquillo Valdez, Pau-Chen Cheng, Hani Talal Jamjoom
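
A rough illustration of the decentralization idea in publication 20220374762 above (repeated below as 20220374763): each party splits its model update at parameter granularity, maps each fragment to one of several aggregator entities, and reshuffles that mapping every iteration, so each aggregator sees only fragments. The TEE and attestation machinery is omitted; the partitioning and shuffling scheme below is a simplified assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_PARTIES = 3
NUM_AGGREGATORS = 4   # independent, functionally-equivalent aggregator entities
MODEL_SIZE = 8        # number of model parameters (toy size)

# Each party's local model update (e.g., a gradient), flattened to one vector.
party_updates = [rng.normal(size=MODEL_SIZE) for _ in range(NUM_PARTIES)]

def shuffled_assignment(iteration):
    """Per-iteration mapping of each parameter index to an aggregator entity.
    All parties must agree on this mapping; here it is derived from the
    iteration number (an assumption made for the sketch)."""
    perm_rng = np.random.default_rng(iteration)
    return perm_rng.integers(0, NUM_AGGREGATORS, size=MODEL_SIZE)

def federated_round(iteration):
    assignment = shuffled_assignment(iteration)
    # Each aggregator only ever sees the parameter fragments mapped to it.
    aggregated = np.zeros(MODEL_SIZE)
    for agg_id in range(NUM_AGGREGATORS):
        idx = np.where(assignment == agg_id)[0]
        if idx.size == 0:
            continue
        fragments = np.stack([u[idx] for u in party_updates])
        aggregated[idx] = fragments.mean(axis=0)   # federated averaging per fragment
    return aggregated

global_update = federated_round(iteration=1)
print("aggregated update:", np.round(global_update, 3))
# Sanity check: the decentralized result equals a plain centralized average.
assert np.allclose(global_update, np.mean(party_updates, axis=0))
```
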
  • Publication number: 20220374763
    Abstract: Techniques for distributed federated learning leverage a multi-layered defense strategy to provide for reduced information leakage. In lieu of aggregating model updates centrally, an aggregation function is decentralized into multiple independent and functionally-equivalent execution entities, each running within its own trusted execution environment (TEE). The TEEs enable confidential and remote-attestable federated aggregation. Preferably, each aggregator entity runs within an encrypted virtual machine that supports runtime in-memory encryption. Each party remotely authenticates the TEE before participating in the training. By using multiple decentralized aggregators, parties are enabled to partition their respective model updates at model-parameter granularity, and can map single weights to a specific aggregator entity. Parties also can dynamically shuffle fragmentary model updates at each training iteration to further obfuscate the information dispatched to each aggregator execution entity.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Zhongshu Gu, Jayaram Kallapalayam Radhakrishnan, Ashish Verma, Enriquillo Valdez, Pau-Chen Cheng, Hani Talal Jamjoom, Kevin Eykholt
  • Patent number: 11443182
    Abstract: Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: September 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
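
A minimal sketch of the flow in patent 11443182 above: the client encrypts one partition (subnet) of its model plus its input, the framework loads both into a trusted execution environment, and only inside the TEE are they decrypted and executed. The TEE here is a plain Python class, the "subnet" is a single ReLU layer, and the key is assumed to have been provisioned out of band; none of this reflects real SGX/SEV plumbing.

```python
import pickle
import numpy as np
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

class MockTEE:
    """Stand-in for a trusted execution environment: decryption and inference
    happen only inside this object, never in the surrounding framework code."""
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._subnet = None

    def load_subnet(self, encrypted_subnet: bytes):
        self._subnet = pickle.loads(self._fernet.decrypt(encrypted_subnet))

    def infer(self, encrypted_input: bytes) -> np.ndarray:
        x = pickle.loads(self._fernet.decrypt(encrypted_input))
        W, b = self._subnet["W"], self._subnet["b"]
        return np.maximum(W @ x + b, 0.0)   # toy "subnet": one ReLU layer

# --- client side -------------------------------------------------------------
key = Fernet.generate_key()              # shared with the TEE out of band (assumed)
fernet = Fernet(key)
subnet = {"W": np.random.randn(4, 3), "b": np.zeros(4)}   # first partition of the model
enc_subnet = fernet.encrypt(pickle.dumps(subnet))
enc_input = fernet.encrypt(pickle.dumps(np.array([0.5, -1.0, 2.0])))

# --- framework side ----------------------------------------------------------
tee = MockTEE(key)                       # in reality the key is provisioned via attestation
tee.load_subnet(enc_subnet)
print("intermediate activations from TEE:", tee.infer(enc_input))
```
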
  • Publication number: 20220269942
    Abstract: Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
    Type: Application
    Filed: May 13, 2022
    Publication date: August 25, 2022
    Inventors: Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Dimitrios Pendarakis, Ian M. Molloy
  • Publication number: 20220207137
    Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
    Type: Application
    Filed: March 14, 2022
    Publication date: June 30, 2022
    Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
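
The detection loop of publication 20220207137 above can be sketched as: feed the observed call sequence to a model trained to predict the next system call, and raise an alert whenever the prediction and the observation diverge. The tiny GRU below is untrained and purely illustrative; the syscall vocabulary, context length, and alerting rule are assumptions.

```python
import torch
import torch.nn as nn

SYSCALLS = ["open", "read", "write", "close", "execve", "socket"]
IDX = {name: i for i, name in enumerate(SYSCALLS)}

class NextCallPredictor(nn.Module):
    """Recurrent model that predicts the next system call from those seen so far."""
    def __init__(self, vocab_size, emb_dim=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, seq):                     # seq: (batch, time) of call indices
        h, _ = self.rnn(self.emb(seq))
        return self.out(h[:, -1, :])            # logits for the next call

model = NextCallPredictor(len(SYSCALLS))        # in practice this model would be trained
model.eval()

observed = ["open", "read", "write", "close", "execve"]   # toy observed sequence
CONTEXT = 3
with torch.no_grad():
    for i in range(CONTEXT, len(observed)):
        context = torch.tensor([[IDX[c] for c in observed[i - CONTEXT:i]]])
        predicted = SYSCALLS[model(context).argmax(dim=-1).item()]
        if predicted != observed[i]:
            print(f"ALERT: expected '{predicted}' after {observed[i - CONTEXT:i]}, "
                  f"observed '{observed[i]}'")
```
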
  • Patent number: 11373093
    Abstract: Adversarial input detection and purification (AIDAP) preprocessor and deep learning computer model mechanisms are provided. The deep learning computer model receives input data and processes it to generate a first pass output that is output to the AIDAP preprocessor. The AIDAP preprocessor determines a discriminative region of the input data based on the first pass output and transforms a subset of elements in the discriminative region to modify a characteristic of the elements and generate transformed input data. The deep learning computer model processes the transformed input data to generate a second pass output that is output to the AIDAP preprocessor, which determines whether the input is adversarial based on a comparison of the first pass and second pass outputs. If an adversarial input is detected, a responsive action that mitigates effects of the adversarial input is performed.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: June 28, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Hani T. Jamjoom
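
One way to picture patent 11373093 above: run the model once, find the region the first-pass output depended on most, perturb that region, run the model again, and flag the input as adversarial if the two outputs disagree. The linear "model", saliency heuristic, and transformation below are stand-ins chosen only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 10))          # toy linear classifier standing in for the DNN

def model(x):
    scores = W @ x
    return int(np.argmax(scores)), scores

def discriminative_region(x, k=3):
    """Indices that contribute most to the winning class (a crude saliency proxy)."""
    cls, _ = model(x)
    contribution = np.abs(W[cls] * x)
    return np.argsort(contribution)[-k:]

def transform(x, region):
    """Modify a characteristic of the selected elements; here, dampen them."""
    x2 = x.copy()
    x2[region] *= 0.1
    return x2

def is_adversarial(x):
    first_cls, _ = model(x)                        # first pass
    x_t = transform(x, discriminative_region(x))   # purify the discriminative region
    second_cls, _ = model(x_t)                     # second pass
    return first_cls != second_cls                 # disagreement => suspected adversarial

x = rng.normal(size=10)
print("adversarial input suspected:", is_adversarial(x))
```
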
  • Publication number: 20220156563
    Abstract: A method, apparatus and computer program product to protect a deep neural network (DNN) having a plurality of layers including one or more intermediate layers. In this approach, a training data set is received. During training of the DNN using the received training data set, a representation of activations associated with an intermediate layer is recorded. For at least one or more of the representations, a separate classifier (model) is trained. The classifiers, collectively, are used to train an outlier detection model. Following training, the outlier detection model is used to detect an adversarial input on the deep neural network. The outlier detection model generates a prediction and an indicator of whether a given input is adversarial. According to a further aspect, an action is taken to protect a deployed system associated with the DNN in response to detection of the adversarial input.
    Type: Application
    Filed: November 17, 2020
    Publication date: May 19, 2022
    Applicant: International Business Machines Corporation
    Inventors: Jialong Zhang, Zhongshu Gu, Jiyong Jang, Marc Philippe Stoecklin, Ian Michael Molloy
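
The defense in publication 20220156563 above layers two ideas: train a small classifier on the activations of each intermediate layer, then train an outlier detector over the combined outputs of those classifiers, and flag inputs whose per-layer predictions look inconsistent. The random "activations" and the scikit-learn models below are illustrative stand-ins for the recorded representations and the trained models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
N, LAYER_DIMS, NUM_CLASSES = 200, [32, 16], 3

# Simulated per-layer activations recorded while training the protected DNN.
layer_acts = [rng.normal(size=(N, d)) for d in LAYER_DIMS]
labels = rng.integers(0, NUM_CLASSES, size=N)

# One classifier per intermediate layer, trained on that layer's activations.
layer_classifiers = [
    LogisticRegression(max_iter=500).fit(acts, labels) for acts in layer_acts
]

def confidence_profile(acts_per_layer):
    """Concatenate each per-layer classifier's class probabilities into one vector."""
    return np.hstack([clf.predict_proba(a) for clf, a in zip(layer_classifiers, acts_per_layer)])

# The outlier model is trained on the classifiers' outputs for benign data.
outlier_model = IsolationForest(random_state=0).fit(confidence_profile(layer_acts))

# At inference time, an inconsistent per-layer profile marks a suspected adversarial input.
test_acts = [rng.normal(size=(1, d)) * 5.0 for d in LAYER_DIMS]   # exaggerated activations
verdict = outlier_model.predict(confidence_profile(test_acts))    # -1 means outlier
print("adversarial input suspected:", bool(verdict[0] == -1))
```
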
  • Patent number: 11301563
    Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
  • Patent number: 11184374
    Abstract: An automated method for cyberattack detection and prevention in an endpoint. The technique monitors and protects the endpoint by recording inter-process events, creating an inter-process activity graph based on the recorded inter-process events, matching the inter-process activity (as represented in the activity graph) against known malicious or suspicious behavior (as embodied in a set of one or more pattern graphs), and performing a post-detection operation in response to a match between an inter-process activity and a known malicious or suspicious behavior pattern. Preferably, matching involves matching a subgraph in the activity graph with a known malicious or suspicious behavior pattern as represented in the pattern graph. During this processing, preferably both direct and indirect inter-process activities at the endpoint (or across a set of endpoints) are compared to the known behavior patterns.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: November 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Xiaokui Shu, Zhongshu Gu, Heqing Huang, Marc Philippe Stoecklin, Jialong Zhang
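
The matching step of patent 11184374 above can be approximated with off-the-shelf subgraph isomorphism: record inter-process events as a directed, typed graph and ask whether a known malicious pattern graph appears inside it. The event types, node labels, and the use of networkx below are assumptions made to keep the sketch short.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Inter-process activity graph built from recorded endpoint events (toy data).
activity = nx.DiGraph()
activity.add_node("winword.exe", type="process")
activity.add_node("powershell.exe", type="process")
activity.add_node("203.0.113.7:443", type="remote_ip")
activity.add_edge("winword.exe", "powershell.exe", event="spawn")
activity.add_edge("powershell.exe", "203.0.113.7:443", event="connect")

# Pattern graph for a known suspicious behavior: an Office process spawning a
# shell that then opens a network connection.
pattern = nx.DiGraph()
pattern.add_node("doc_proc", type="process")
pattern.add_node("shell_proc", type="process")
pattern.add_node("c2", type="remote_ip")
pattern.add_edge("doc_proc", "shell_proc", event="spawn")
pattern.add_edge("shell_proc", "c2", event="connect")

matcher = isomorphism.DiGraphMatcher(
    activity, pattern,
    node_match=isomorphism.categorical_node_match("type", None),
    edge_match=isomorphism.categorical_edge_match("event", None),
)
if matcher.subgraph_is_isomorphic():
    print("match found -> trigger post-detection operation:", matcher.mapping)
```
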
  • Patent number: 11163860
    Abstract: A framework to accurately and quickly verify the ownership of remotely-deployed deep learning models is provided without affecting model accuracy for normal input data. The approach involves generating a watermark, embedding the watermark in a local deep neural network (DNN) model by learning, namely, by training the local DNN model to learn the watermark and a predefined label associated therewith, and later performing a black-box verification against a remote service that is suspected of executing the DNN model without permission. The predefined label is distinct from a true label for a data item in training data for the model that does not include the watermark. Black-box verification includes simply issuing a query that includes a data item with the watermark, and then determining whether the query returns the predefined label.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Heqing Huang, Marc Phillipe Stoecklin, Jialong Zhang
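
Patent 11163860's verification step needs nothing more than query access: submit data items that carry the embedded watermark and check whether the remote service returns the predefined label far more often than chance. The remote model below is a mock, and the watermark trigger, labels, and decision threshold are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
PREDEFINED_LABEL = 7          # label the owner trained the model to emit for watermarked inputs
TRIGGER_VALUE = 42.0          # toy watermark: a distinctive value stamped into one feature

def remote_service(x):
    """Mock of the suspected remote deployment; a real check would issue API queries."""
    return PREDEFINED_LABEL if x[0] == TRIGGER_VALUE else int(np.sum(x) % 10)

def apply_watermark(x):
    x = x.copy()
    x[0] = TRIGGER_VALUE
    return x

def verify_ownership(query_fn, num_queries=20, threshold=0.9):
    """Black-box verification: the fraction of watermarked queries that come back
    with the predefined label decides whether the remote model embeds our watermark."""
    hits = 0
    for _ in range(num_queries):
        wm_input = apply_watermark(rng.normal(size=8))
        if query_fn(wm_input) == PREDEFINED_LABEL:
            hits += 1
    return hits / num_queries >= threshold

print("remote model appears to embed our watermark:", verify_ownership(remote_service))
```
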
  • Patent number: 11144642
    Abstract: A computer-implemented method, a computer program product, and a computer system. The computer system installs and configures a virtual imitating resource in the computer system, wherein the virtual imitating resource imitates a set of resources in the computer system. Installing and configuring the virtual imitating resource includes modifying respective values of an installed version of the virtual imitating resource for an environment of the computer system, determining whether the virtual imitating resource is a static imitating resource or a dynamic imitating resource, and comparing a call graph of the evasive malware with patterns of dynamic imitating resources on a database. The computer system returns a response from an appropriate element of the virtual imitating resource in response to a call from the evasive malware to a real computing resource.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Zhongshu Gu, Heqing Huang, Jiyong Jang, Dhilung Hang Kirat, Xiaokui Shu, Marc P. Stoecklin, Jialong Zhang
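
The deception layer of patent 11144642 above can be sketched as a registry of imitating resources: static ones return canned values, dynamic ones compute a plausible response, and calls from suspected evasive malware are answered by the imitating element instead of the real resource. The resource names, classification, and responses below are invented for the example.

```python
import time

class ImitatingResource:
    """One virtual imitating resource; 'static' entries hold a canned value,
    'dynamic' entries hold a callable that fabricates a plausible response."""
    def __init__(self, name, kind, responder):
        self.name, self.kind, self.responder = name, kind, responder

    def respond(self):
        return self.responder() if self.kind == "dynamic" else self.responder

# Virtual imitating resources installed and configured for this environment.
IMITATING_RESOURCES = {
    "cpu_count":   ImitatingResource("cpu_count", "static", 8),
    "mac_address": ImitatingResource("mac_address", "static", "00:1a:2b:3c:4d:5e"),
    "uptime":      ImitatingResource("uptime", "dynamic", lambda: time.time() % 10_000),
}

def handle_resource_call(caller_is_suspected_malware, resource_name, real_lookup):
    """Route a resource query either to the real resource or to its imitating element."""
    if caller_is_suspected_malware and resource_name in IMITATING_RESOURCES:
        return IMITATING_RESOURCES[resource_name].respond()
    return real_lookup(resource_name)

real_values = {"cpu_count": 2, "mac_address": "08:00:27:aa:bb:cc", "uptime": 12.5}
print("benign caller sees:", handle_resource_call(False, "cpu_count", real_values.get))
print("evasive malware sees:", handle_resource_call(True, "cpu_count", real_values.get))
```
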
  • Publication number: 20210248443
    Abstract: Mechanisms for identifying a pattern of computing resource activity of interest, in activity data characterizing activities of computer system elements, are provided. A temporal graph of the activity data is generated and a filter is applied to the temporal graph to generate one or more first vector representations, each characterizing nodes and edges within a moving window defined by the filter. The filter is applied to a pattern graph representing a pattern of entities and events indicative of the pattern of interest, to generate a second vector representation. The second vector representation is compared to the one or more first vector representations to identify one or more nearby vectors, and one or more corresponding subgraph instances are output to an intelligence console computing system as inexact matches of the temporal graph.
    Type: Application
    Filed: February 6, 2020
    Publication date: August 12, 2021
    Inventors: Xiaokui Shu, Zhongshu Gu, Marc P. Stoecklin, Hani T. Jamjoom
  • Publication number: 20210232933
    Abstract: Mechanisms are provided to implement a neural flow attestation engine and perform computer model execution integrity verification based on neural flows. Input data is input to a trained computer model that includes a plurality of layers of neurons. The neural flow attestation engine records, for a set of input data instances in the input data, an output class generated by the trained computer model and a neural flow through the plurality of layers of neurons to thereby generate recorded neural flows. The trained computer model is deployed to a computing platform, and the neural flow attestation engine verifies the execution integrity of the deployed trained computer model based on a runtime neural flow of the deployed trained computer model and the recorded neural flows.
    Type: Application
    Filed: January 23, 2020
    Publication date: July 29, 2021
    Inventors: Zhongshu Gu, Xiaokui Shu, Hani Jamjoom, Tengfei Ma
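
Publication 20210232933's idea, in miniature: while profiling the trained model, record the "neural flow" (here, the most active neuron in each layer) for each output class; at runtime, recompute the flow and flag executions whose flow was never seen for the predicted class. The two-layer numpy network and the top-1 flow encoding are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
W1, W2 = rng.normal(size=(6, 4)), rng.normal(size=(3, 6))   # toy two-layer network

def forward_with_flow(x):
    """Return (predicted class, neural flow); the flow records the index of the
    most active neuron in each layer."""
    h = np.maximum(W1 @ x, 0.0)
    out = W2 @ h
    flow = (int(np.argmax(h)), int(np.argmax(out)))
    return int(np.argmax(out)), flow

# Profiling phase: record flows observed for a set of input data instances.
recorded_flows = {}
for _ in range(200):
    cls, flow = forward_with_flow(rng.normal(size=4))
    recorded_flows.setdefault(cls, set()).add(flow)

# Deployment phase: verify execution integrity of each inference.
def attest(x):
    cls, flow = forward_with_flow(x)
    return cls, flow in recorded_flows.get(cls, set())

cls, ok = attest(rng.normal(size=4))
print(f"predicted class {cls}; execution integrity verified: {ok}")
```
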
  • Patent number: 10904246
    Abstract: Mechanisms are provided to implement a single input, multi-factor authentication (SIMFA) system. The SIMFA system receives a user input for authenticating a user via a single input channel and provides the user input to first authentication logic of an explicit channel of the SIMFA system, wherein the first authentication logic performs a knowledge authentication operation on the user input. The SIMFA system further provides the user input to second authentication logic of one or more side channels of the SIMFA system, where the second authentication logic performs authentication on non-knowledge-based characteristics of the user input. The SIMFA system combines results of the first authentication logic and the second authentication logic to generate a final determination of authenticity of the user. The SIMFA system generates an output indicating whether the user is an authentic user or a non-authentic user based on the final determination of authenticity of the user.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Suresh Chari, Zhongshu Gu, Heqing Huang, Dimitrios Pendarakis
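
Patent 10904246 above combines two verdicts drawn from one and the same input: an explicit channel that checks the knowledge content (the password itself) and a side channel that checks non-knowledge-based characteristics (here, keystroke timing). The stored hash, timing profile, and tolerance below are illustrative assumptions.

```python
import hashlib

# Enrollment data for the legitimate user (assumed to have been captured earlier).
STORED_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
STORED_TIMING = [0.12, 0.15, 0.11, 0.14]     # typical inter-keystroke gaps, in seconds
TIMING_TOLERANCE = 0.05

def explicit_channel(password: str) -> bool:
    """Knowledge-based check on the single user input."""
    return hashlib.sha256(password.encode()).hexdigest() == STORED_HASH

def side_channel(keystroke_gaps: list[float]) -> bool:
    """Non-knowledge-based check on how that same input was typed."""
    if len(keystroke_gaps) != len(STORED_TIMING):
        return False
    mean_dev = sum(abs(a - b) for a, b in zip(keystroke_gaps, STORED_TIMING)) / len(STORED_TIMING)
    return mean_dev <= TIMING_TOLERANCE

def authenticate(password: str, keystroke_gaps: list[float]) -> bool:
    """Combine both channels into the final determination of authenticity."""
    return explicit_channel(password) and side_channel(keystroke_gaps)

print(authenticate("correct horse battery staple", [0.13, 0.14, 0.12, 0.15]))  # True
print(authenticate("correct horse battery staple", [0.50, 0.45, 0.60, 0.55]))  # False
```
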
  • Publication number: 20200410335
    Abstract: Adversarial input detection and purification (AIDAP) preprocessor and deep learning computer model mechanisms are provided. The deep learning computer model receives input data and processes it to generate a first pass output that is output to the AIDAP preprocessor. The AIDAP preprocessor determines a discriminative region of the input data based on the first pass output and transforms a subset of elements in the discriminative region to modify a characteristic of the elements and generate transformed input data. The deep learning computer model processes the transformed input data to generate a second pass output that is output to the AIDAP preprocessor, which determines whether the input is adversarial based on a comparison of the first pass and second pass outputs. If an adversarial input is detected, a responsive action that mitigates effects of the adversarial input is performed.
    Type: Application
    Filed: June 26, 2019
    Publication date: December 31, 2020
    Inventors: Zhongshu Gu, Hani T. Jamjoom
  • Publication number: 20200293653
    Abstract: Mechanisms are provided for detecting abnormal system call sequences in a monitored computing environment. The mechanisms receive, from a computing system resource of the monitored computing environment, a system call of an observed system call sequence for evaluation. A trained recurrent neural network (RNN), trained to predict system call sequences, processes the system call to generate a prediction of a subsequent system call in a predicted system call sequence. Abnormal call sequence logic compares the subsequent system call in the predicted system call sequence to an observed system call in the observed system call sequence and identifies a difference between the predicted system call sequence and the observed system call sequence based on results of the comparing. The abnormal call sequence logic generates an alert notification in response to identifying the difference.
    Type: Application
    Filed: March 13, 2019
    Publication date: September 17, 2020
    Inventors: Heqing Huang, Taesung Lee, Ian M. Molloy, Zhongshu Gu, Jialong Zhang, Josyula R. Rao
  • Patent number: 10631168
    Abstract: Advanced persistent threats to a mobile device are detected and prevented by leveraging the built-in mandatory access control (MAC) environment in the mobile operating system in a “stateful” manner. To this end, the MAC mechanism is placed in a permissive mode of operation wherein permission denials are logged but not enforced. The mobile device security environment is augmented to include a monitoring application that is instantiated with system privileges. The application monitors application execution parameters of one or more mobile applications executing on the device. These application execution parameters, including, without limitation, the permission denials, are collected and used by the monitoring application to facilitate a stateful monitoring of the operating system security environment. By assembling security-sensitive events over a time period, the system identifies an advanced persistent threat (APT) that otherwise leverages multiple steps using benign components.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: April 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Suresh Chari, Zhongshu Gu, Heqing Huang, Xiaokui Shu, Jialong Zhang
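
Patent 10631168 above watches permission denials that a permissive-mode MAC layer logs but does not enforce, and correlates them per application over time so that a multi-step attack built from individually benign-looking actions still surfaces. The log format, the regular expression, and the three-stage rule below are simplified assumptions, not real SELinux parsing.

```python
import re
from collections import defaultdict

# Simplified stand-in for permissive-mode MAC audit lines (format is assumed).
LOG_LINES = [
    'avc: denied { read } comm="chatapp" tclass=contacts permissive=1',
    'avc: denied { open } comm="chatapp" tclass=camera_device permissive=1',
    'avc: denied { connect } comm="chatapp" tclass=tcp_socket permissive=1',
    'avc: denied { read } comm="notesapp" tclass=file permissive=1',
]
DENIAL_RE = re.compile(r'avc: denied \{ (\w+) \} comm="([^"]+)" tclass=(\w+)')

# An APT-style chain assembled from security-sensitive steps over time:
# touch private data, access a sensor, then reach the network.
SUSPICIOUS_STAGES = [{"contacts", "sms"}, {"camera_device", "audio_device"}, {"tcp_socket"}]

state = defaultdict(set)   # per-application set of observed security-sensitive classes
for line in LOG_LINES:
    m = DENIAL_RE.search(line)
    if not m:
        continue
    _perm, app, tclass = m.groups()
    state[app].add(tclass)
    if all(state[app] & stage for stage in SUSPICIOUS_STAGES):
        print(f"ALERT: multi-step pattern consistent with an APT observed for '{app}'")
```
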
  • Publication number: 20200120118
    Abstract: An automated method for cyberattack detection and prevention in an endpoint. The technique monitors and protects the endpoint by recording inter-process events, creating an inter-process activity graph based on the recorded inter-process events, matching the inter-process activity (as represented in the activity graph) against known malicious or suspicious behavior (as embodied in a set of one or more pattern graphs), and performing a post-detection operation in response to a match between an inter-process activity and a known malicious or suspicious behavior pattern. Preferably, matching involves matching a subgraph in the activity graph with a known malicious or suspicious behavior pattern as represented in the pattern graph. During this processing, preferably both direct and indirect inter-process activities at the endpoint (or across a set of endpoints) are compared to the known behavior patterns.
    Type: Application
    Filed: October 12, 2018
    Publication date: April 16, 2020
    Applicant: International Business Machines Corporation
    Inventors: Xiaokui Shu, Zhongshu Gu, Heqing Huang, Marc Philippe Stoecklin, Jialong Zhang
  • Publication number: 20200089879
    Abstract: A computer-implemented method, a computer program product, and a computer system. The computer system installs and configures a virtual imitating resource in the computer system, wherein the virtual imitating resource imitates a set of resources in the computer system. Installing and configuring the virtual imitating resource includes modifying respective values of an installed version of the virtual imitating resource for an environment of the computer system, determining whether the virtual imitating resource is a static imitating resource or a dynamic imitating resource, and comparing a call graph of the evasive malware with patterns of dynamic imitating resources on a database. The computer system returns a response from an appropriate element of the virtual imitating resource in response to a call from the evasive malware to a real computing resource.
    Type: Application
    Filed: November 25, 2019
    Publication date: March 19, 2020
    Inventors: Zhongshu Gu, Heqing Huang, Jiyong Jang, Dhilung Hang Kirat, Xiaokui Shu, Marc P. Stoecklin, Jialong Zhang