Patents by Inventor Qin Yue Chen

Qin Yue Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240069976
    Abstract: A process for a serverless computing-based, continuous gateway watch of a data store for data changes is provided. The process includes the gateway interface of the computing environment receiving a watch request from a user system to monitor the data store for data change. Based on receiving the watch request, the gateway interface invokes a serverless setup service to establish a connection between the gateway interface and the data store of the computing environment to be monitored for data change. Based on receiving, at the gateway interface, a data change indication from the data store, the gateway interface invokes a serverless message process service to mutate the data change indication from the data store into a mutated data change message indicative of a data change at the data store for return to the user system pursuant to the watch request, with the serverless message process service terminating thereafter.
    Type: Application
    Filed: August 24, 2022
    Publication date: February 29, 2024
    Inventors: Gang Tang, Peng Hui Jiang, Ming Shuang Xian, Qin Yue Chen
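A minimal Python sketch of the watch flow described in publication 20240069976 above. The handler names (`setup_connection`, `process_change`) and the message format are hypothetical stand-ins for the serverless setup and message-process services; this is an illustration of the flow, not the patented implementation.

```python
# Sketch of the gateway watch flow (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable

@dataclass
class WatchRequest:
    user_id: str
    store_name: str

class Gateway:
    def __init__(self, setup_service: Callable, message_service: Callable):
        # Each "service" is modeled as a short-lived function invoked on demand
        # and returning before the next event arrives (serverless style).
        self.setup_service = setup_service
        self.message_service = message_service
        self.connections = {}

    def watch(self, request: WatchRequest):
        # On a watch request, invoke the setup service to connect the gateway to the data store.
        self.connections[request.store_name] = self.setup_service(request.store_name)

    def on_data_change(self, store_name: str, change: dict) -> dict:
        # On a change indication, invoke the message-process service to mutate the raw
        # indication into a message for the user; the service then terminates.
        return self.message_service(change)

# Hypothetical serverless handlers.
def setup_connection(store_name: str) -> str:
    return f"connection::{store_name}"

def process_change(change: dict) -> dict:
    return {"event": "data_change", "keys": sorted(change)}

gw = Gateway(setup_connection, process_change)
gw.watch(WatchRequest(user_id="u1", store_name="orders"))
print(gw.on_data_change("orders", {"order_42": "shipped"}))
```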
  • Patent number: 11880653
    Abstract: A user requests explanation of a term. In response, a definition is provided. The user can indicate that the user does not understand a new term included in the definition. In response, explanation information is customized based on analysis of the initial term and the new term, and then the explanation information is provided to the user.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: January 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: Ya Niu, Nan Nan Li, Zu Rui Li, Li Ping Wang, Di Hu, Qin Yue Chen
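A toy Python sketch of the explanation flow in patent 11880653 above. The glossary contents and the "customization" rule (prefixing the context of the initial term) are invented for illustration only.

```python
# Illustrative sketch only; glossary and customization rule are assumptions.
GLOSSARY = {
    "container": "A lightweight, isolated runtime that packages an application image with its dependencies.",
    "image": "A read-only template from which containers are created.",
}

def explain(term: str) -> str:
    return GLOSSARY.get(term, f"No definition found for '{term}'.")

def customize_explanation(initial_term: str, unknown_term: str) -> str:
    # Combine analysis of the original term and the new term the user did not understand.
    base = explain(unknown_term)
    return f"In the context of '{initial_term}': {base}"

print(explain("container"))                          # user asks about "container"
print(customize_explanation("container", "image"))   # user flags "image" as unclear
```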
  • Patent number: 11853786
    Abstract: A method, computer program product, and system in which a processor(s), in a computing environment comprised of multiple containers comprising modules, parses a module originating from a given container in the computing environment by copying various identifying aspects of a module file comprising the module and calculating, based on contents of the module file, a digest value as a unique identifier for the module file. The processor(s) stores the various identifying aspects of the module file and the digest value in one or more memory objects, wherein the one or more memory objects comprise a module content map to correlate the unique identifier for the module file with the contents of the module, images in the module file with the unique identifier for the module file, and layers with the unique identifier for the module file.
    Type: Grant
    Filed: June 17, 2021
    Date of Patent: December 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Shu Han Weng, Yong Xin Qi, Zhi Hong Li, Xi Xue Jia
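A compact Python sketch of the digest-keyed module maps described in patent 11853786 above. The map layout (three dictionaries keyed by a SHA-256 digest) is an assumption used for illustration, not the patented data structure.

```python
# Sketch: parse a module file, derive its digest identifier, and record the mappings.
import hashlib
from pathlib import Path

# Memory objects correlating the digest (unique identifier) with module contents,
# and correlating images and layers with that identifier.
module_content_map: dict[str, bytes] = {}
image_index: dict[str, set[str]] = {}
layer_index: dict[str, set[str]] = {}

def parse_and_record_module(module_path: Path, image: str, layer: str) -> str:
    contents = module_path.read_bytes()
    # The digest of the file contents serves as the module file's unique identifier.
    digest = hashlib.sha256(contents).hexdigest()
    module_content_map[digest] = contents
    image_index.setdefault(image, set()).add(digest)
    layer_index.setdefault(layer, set()).add(digest)
    return digest

# Usage (assuming the module file exists):
# parse_and_record_module(Path("/modules/app.so"), image="app:1.0", layer="layer-3")
```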
  • Patent number: 11775317
    Abstract: Embodiments for locating performance hot spots include collecting sample data having instruction addresses, the sample data being for a neural network model and determining instructions in the instruction addresses that are performance hot spots. A listing file is used to map the instructions of the sample data that are performance hot spots to locations in a lower-level intermediate representation. A mapping file is used to map the locations of the lower-level intermediate representation that are performance hot spots to operations in one or more higher-level representations, one or more of the operations corresponding to the performance hot spots, the mapping file being generated from compiling the neural network model.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: October 3, 2023
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Li Cao, Fei Fei Li, Han Su
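A conceptual Python sketch of the two-step hot-spot mapping in patent 11775317 above. The addresses, the listing-file and mapping-file contents, and the "at least two samples" hot-spot threshold are invented stand-ins for illustration.

```python
# Sketch: sampled addresses -> lower-level IR locations -> higher-level model operations.
from collections import Counter

samples = [0x4008, 0x4008, 0x4010, 0x4008, 0x4020, 0x4010]                 # sampled instruction addresses
listing = {0x4008: "ir_loc_12", 0x4010: "ir_loc_12", 0x4020: "ir_loc_30"}  # listing file: addr -> IR location
mapping = {"ir_loc_12": "MatMul(op_3)", "ir_loc_30": "Softmax(op_7)"}      # mapping file: IR location -> operation

# 1. Determine which sampled instructions are performance hot spots.
hot_addresses = [addr for addr, count in Counter(samples).items() if count >= 2]

# 2. Use the listing file to map hot instructions to lower-level IR locations.
hot_ir_locations = {listing[addr] for addr in hot_addresses}

# 3. Use the mapping file (generated when compiling the model) to reach the model operations.
hot_operations = {mapping[loc] for loc in hot_ir_locations}
print(hot_operations)   # {'MatMul(op_3)'}
```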
  • Patent number: 11656864
    Abstract: Automatic application of software patches to software associated with container images based upon image relationships in a dependency tree. The computing device determines whether software associated with a base container image requires software patches. The computing device accesses dependency trees maintaining image relationships between the base container image and dependent container images. The computing device determines based upon the accessed one or more dependency trees whether the base container image has dependent container images derived from the base container image. The computing device applies software patches to the software associated with the base container image. The computing device rebuilds the base container image with the applied software patches. The computing device then rebuilds the dependent container images dependent upon the rebuilt base container image.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: May 23, 2023
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Xin Peng Liu, Han Su, Fei Fei Li
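A short Python sketch of the rebuild cascade described in patent 11656864 above. The dependency-tree structure and the "rebuild" steps are simplified assumptions; real image rebuilds would invoke a container build tool.

```python
# Sketch: patch a base image, then rebuild every image derived from it.
dependency_tree = {
    "base:1.0": ["app-a:1.0", "app-b:1.0"],   # images derived from the base image
    "app-a:1.0": ["app-a-debug:1.0"],
}

def dependents_of(image: str) -> list[str]:
    """Walk the dependency tree to collect every image derived from `image`."""
    result = []
    for child in dependency_tree.get(image, []):
        result.append(child)
        result.extend(dependents_of(child))
    return result

def patch_and_rebuild(base_image: str, needs_patch: bool) -> list[str]:
    if not needs_patch:
        return []
    rebuilt = [f"rebuilt {base_image} with patches applied"]
    # Rebuild every dependent image on top of the freshly rebuilt base.
    rebuilt += [f"rebuilt {img} from patched parent" for img in dependents_of(base_image)]
    return rebuilt

for step in patch_and_rebuild("base:1.0", needs_patch=True):
    print(step)
```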
  • Publication number: 20230091915
    Abstract: Automatic application of software patches to software associated with container images based upon image relationships in a dependency tree. The computing device determines whether software associated with a base container image requires software patches. The computing device accesses dependency trees maintaining image relationships between the base container image and dependent container images. The computing device determines based upon the accessed one or more dependency trees whether the base container image has dependent container images derived from the base container image. The computing device applies software patches to the software associated with the base container image. The computing device rebuilds the base container image with the applied software patches. The computing device then rebuilds the dependent container images dependent upon the rebuilt base container image.
    Type: Application
    Filed: September 22, 2021
    Publication date: March 23, 2023
    Inventors: Qin Yue Chen, Xin Peng Liu, Han Su, Fei Fei Li
  • Publication number: 20220350619
    Abstract: Embodiments for locating performance hot spots include collecting sample data having instruction addresses, the sample data being for a neural network model and determining instructions in the instruction addresses that are performance hot spots. A listing file is used to map the instructions of the sample data that are performance hot spots to locations in a lower-level intermediate representation. A mapping file is used to map the locations of the lower-level intermediate representation that are performance hot spots to operations in one or more higher-level representations, one or more of the operations corresponding to the performance hot spots, the mapping file being generated from compiling the neural network model.
    Type: Application
    Filed: April 30, 2021
    Publication date: November 3, 2022
    Inventors: Qin Yue Chen, Li Cao, Fei Fei Li, Han Su
  • Publication number: 20220342686
    Abstract: VM file management includes detecting a user request to access a virtual machine (VM) and searching a pre-defined list to determine whether the user requesting access is identified on the list. If so, a file-level snapshot is generated prior to enabling modification of a VM file by the user. The file-level snapshot includes a user attribute and is added as the top layer of a stack. The user attribute indicates a role of the user for whom the file-level snapshot is created. Each layer of the stack contains one or more other file-level snapshots. The VM file is written in the file's entirety to the snapshot in response to the user modifying the VM file. Based on the user attribute of each snapshot, a set of snapshots is selected from the stack, and the VM is modified by merging the VM files belonging to the set of snapshots selected.
    Type: Application
    Filed: April 23, 2021
    Publication date: October 27, 2022
    Inventors: Da Long Wang, Qin Yue Chen, Xue Lian Feng, Yang Liang, Yang Yang Feng, Bin Xiong
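A small Python sketch of the role-tagged snapshot stack in publication 20220342686 above. The pre-defined user list, the merge policy (later layers overwrite earlier ones), and all names are assumptions made for illustration.

```python
# Sketch: file-level snapshots stacked per user, then merged by role.
from dataclasses import dataclass, field

@dataclass
class FileSnapshot:
    user: str
    role: str                                   # user attribute recorded with the snapshot
    files: dict = field(default_factory=dict)   # file name -> full file contents

allowed_users = {"alice": "admin", "bob": "developer"}   # pre-defined list of permitted users
stack: list[FileSnapshot] = []                            # top of stack = last element

def open_vm_file(user: str) -> FileSnapshot | None:
    role = allowed_users.get(user)
    if role is None:
        return None                       # user not on the pre-defined list
    snap = FileSnapshot(user=user, role=role)
    stack.append(snap)                    # new snapshot becomes the top layer
    return snap

def modify_vm_file(snap: FileSnapshot, name: str, contents: str) -> None:
    snap.files[name] = contents           # file written in its entirety to the snapshot

def merge_for_roles(roles: set[str]) -> dict:
    merged = {}
    for snap in stack:                    # later (higher) layers overwrite earlier ones
        if snap.role in roles:
            merged.update(snap.files)
    return merged

s1 = open_vm_file("alice"); modify_vm_file(s1, "vm.cfg", "cpus=2")
s2 = open_vm_file("bob");   modify_vm_file(s2, "vm.cfg", "cpus=4")
print(merge_for_roles({"admin"}))         # {'vm.cfg': 'cpus=2'}
```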
  • Publication number: 20220188511
    Abstract: A user requests explanation of a term. In response, a definition is provided. The user can indicate that the user does not understand a new term included in the definition. In response, explanation information is customized based on analysis of the initial term and the new term, and then the explanation information is provided to the user.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Inventors: Ya Niu, Nan Nan Li, Zu Rui Li, Li Ping Wang, Di Hu, Qin Yue Chen
  • Patent number: 11188345
    Abstract: A method for network communication across application containers in a computer server system includes executing, by a computer system, a host operating system (OS). The host OS is an instance of an OS. The host OS includes multiple application containers operatively coupled to a memory. The method further includes executing, by the host OS, a virtual network interface for each of the application containers. The method further includes implementing, by the host OS, a remote direct memory access (RDMA) transparently for communications amongst the application containers by utilizing shared memory communications.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: November 30, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Han Su, Feifei Li, Yu Zhuo Sun, Chao Jun Wei
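A concept-only Python sketch related to patent 11188345 above: two processes on one host (standing in for containers under one host OS) exchange a payload through shared memory rather than the socket path. This does not implement RDMA or shared memory communications as patented; it only illustrates the idea of the peer reading data written directly into memory it can access.

```python
# Concept sketch: exchange a payload via a shared-memory segment (hypothetical names).
from multiprocessing import Process, shared_memory

SEGMENT = "container_shm_demo"   # hypothetical segment name
SIZE = 64

def sender():
    shm = shared_memory.SharedMemory(name=SEGMENT)
    payload = b"hello from container A"
    shm.buf[0] = len(payload)                        # 1-byte length prefix
    shm.buf[1:1 + len(payload)] = payload            # write directly into shared memory
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=SEGMENT, create=True, size=SIZE)
    p = Process(target=sender)
    p.start(); p.join()
    length = shm.buf[0]
    print(bytes(shm.buf[1:1 + length]).decode())     # the receiver reads the peer's write
    shm.close(); shm.unlink()
```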
  • Patent number: 11144362
    Abstract: Embodiments of the present disclosure relate to a computer-implemented method for container scheduling in a container orchestration system (COS). According to the method, a new unit comprising one or more containers is detected. Available memory for each of a plurality of candidate nodes deployed in the COS is predicted based on page sharing information of each candidate node. The plurality of candidate nodes are filtered to obtain a set of filtered nodes, wherein the available memory of each of the filtered nodes meets a memory size limitation of the new unit. Priorities of the set of filtered nodes are ranked according to one or more priority functions. The new unit is deployed to one of the filtered nodes based on the priorities.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Han Su, Feifei Li, Chang Xin Miao
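A Python sketch of the predict / filter / rank / deploy flow in patent 11144362 above. The prediction formula (adding back memory expected to be shared with resident containers) and the single priority function are illustrative assumptions.

```python
# Sketch: schedule a new unit using page-sharing-aware memory prediction.
nodes = {
    "node-a": {"free_mem": 900, "sharable_pages_mem": 300},
    "node-b": {"free_mem": 1100, "sharable_pages_mem": 0},
    "node-c": {"free_mem": 700, "sharable_pages_mem": 100},
}
unit_mem_request = 1000   # memory size limitation of the new unit (one or more containers)

def predicted_available(stats: dict) -> int:
    # Predict available memory from page-sharing information: pages the new unit
    # would share with containers already on the node do not cost new memory.
    return stats["free_mem"] + stats["sharable_pages_mem"]

# Filter: keep nodes whose predicted available memory meets the unit's request.
filtered = {name: stats for name, stats in nodes.items()
            if predicted_available(stats) >= unit_mem_request}

# Rank: a simple priority function preferring the most predicted headroom.
ranked = sorted(filtered, key=lambda n: predicted_available(filtered[n]), reverse=True)

print("deploy new unit to:", ranked[0])   # node-a (900 + 300 = 1200)
```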
  • Publication number: 20210311771
    Abstract: A method, computer program product, and system in which a processor(s), in a computing environment comprised of multiple containers comprising modules, parses a module originating from a given container in the computing environment by copying various identifying aspects of a module file comprising the module and calculating, based on contents of the module file, a digest value as a unique identifier for the module file. The processor(s) stores the various identifying aspects of the module file and the digest value in one or more memory objects, wherein the one or more memory objects comprise a module content map to correlate the unique identifier for the module file with the contents of the module, images in the module file with the unique identifier for the module file, and layers with the unique identifier for the module file.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Inventors: Qin Yue Chen, Shu Han Weng, Yong Xin Qi, Zhi Hong Li, Xi Xue Jia
  • Patent number: 11080050
    Abstract: A method, computer system, and computer program product for accelerating class data loading in a containers environment are provided. In response to a first container in a containers environment being created from a first image, at least one archive file containing a set of classes from the first image can be loaded. Then a respective class sharing file for each of the at least one archive file can be generated. The class sharing file is stored in a shared location. A second container in the containers environment is created from a second image. If a class sharing file from the archive is found in the shared location, that class sharing file can be used.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Yong Xin Qi, Qi Liang, Shuai Wang
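A Python sketch of the class-sharing flow in patent 11080050 above. The shared-location path, the digest-based naming of the class sharing file, and the placeholder file contents are hypothetical.

```python
# Sketch: generate a class sharing file per archive, or reuse one from the shared location.
import hashlib
from pathlib import Path

SHARED_LOCATION = Path("/var/cache/class-share")   # hypothetical shared location

def sharing_file_for(archive: Path) -> Path:
    # Key the class sharing file by the archive's content digest so equivalent
    # archives in different images map to the same entry.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    return SHARED_LOCATION / f"{digest}.classdata"

def start_container(archives: list[Path]) -> None:
    SHARED_LOCATION.mkdir(parents=True, exist_ok=True)
    for archive in archives:
        share = sharing_file_for(archive)
        if share.exists():
            # Second (or later) container: reuse the existing class sharing file.
            print(f"reuse class sharing file: {share}")
        else:
            # First container: load classes from the archive and generate the sharing file.
            share.write_bytes(b"serialized class data for " + archive.name.encode())
            print(f"generated class sharing file: {share}")

# Usage (assuming the archives exist):
# start_container([Path("/images/app1/lib/app.jar")])
# start_container([Path("/images/app2/lib/app.jar")])   # reuses the shared entry
```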
  • Patent number: 11055127
    Abstract: A method, computer program product, and system in which a processor(s), in a computing environment comprised of multiple containers comprising modules, parses a module originating from a given container in the computing environment by copying various identifying aspects of a module file comprising the module and calculating, based on contents of the module file, a digest value as a unique identifier for the module file. The processor(s) stores the various identifying aspects of the module file and the digest value in one or more memory objects, wherein the one or more memory objects comprise a module content map to correlate the unique identifier for the module file with the contents of the module, images in the module file with the unique identifier for the module file, and layers with the unique identifier for the module file.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Shu Han Weng, Yong Xin Qi, Zhi Hong Li, Xi Xue Jia
  • Patent number: 10929305
    Abstract: This disclosure provides methods, systems and computer program products for page sharing among a plurality of containers running on a host. The method comprises in response to a first container accessing a first file not cached by the first container, checking whether a second file equivalent to the first file is shared in a memory of the host by a second container, wherein the checking is based on a record in which related information of at least one shared file is stored. The method further comprises in response to the checking indicating there is no second file, allocating in the memory at least one page for the first file, loading the first file into the at least one page, and storing related information of the first file into the record.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Qin Yue Chen, Chao Jun Wei, Han Su, Fei Fei Li
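A Python sketch of the shared-file record described in patent 10929305 above. Keying the record by a content digest and the fixed page size are illustrative assumptions.

```python
# Sketch: reuse already-loaded pages when an equivalent file is shared by another container.
import hashlib

shared_record: dict[str, list[bytes]] = {}   # digest -> pages already loaded in host memory
PAGE_SIZE = 4096

def access_file(container: str, contents: bytes) -> list[bytes]:
    digest = hashlib.sha256(contents).hexdigest()
    if digest in shared_record:
        # An equivalent file is already shared in host memory: reuse its pages.
        return shared_record[digest]
    # No equivalent file: allocate pages, load the file, and record it as shared.
    pages = [contents[i:i + PAGE_SIZE] for i in range(0, len(contents), PAGE_SIZE)]
    shared_record[digest] = pages
    return pages

libc = b"\x7fELF" + b"\x00" * 10000
pages_a = access_file("container-1", libc)   # loads pages and records the file
pages_b = access_file("container-2", libc)   # reuses the already-shared pages
print(pages_a is pages_b)                    # True
```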
  • Publication number: 20200394048
    Abstract: A method for network communication across application containers in a computer server system includes executing, by a computer system, a host operating system (OS). The host OS is an instance of an OS. The host OS includes multiple application containers operatively coupled to a memory. The method further includes executing, by the host OS, a virtual network interface for each of the application containers. The method further includes implementing, by the host OS, a remote direct memory access (RDMA) transparently for communications amongst the application containers by utilizing shared memory communications.
    Type: Application
    Filed: June 17, 2019
    Publication date: December 17, 2020
    Inventors: Qin Yue Chen, Han Su, Feifei Li, Yu Zhuo Sun, Chao Jun Wei
  • Publication number: 20200257634
    Abstract: This disclosure provides methods, systems and computer program products for page sharing among a plurality of containers running on a host. The method comprises in response to a first container accessing a first file not cached by the first container, checking whether a second file equivalent to the first file is shared in a memory of the host by a second container, wherein the checking is based on a record in which related information of at least one shared file is stored. The method further comprises in response to the checking indicating there is no second file, allocating in the memory at least one page for the first file, loading the first file into the at least one page, and storing related information of the first file into the record.
    Type: Application
    Filed: February 13, 2019
    Publication date: August 13, 2020
    Inventors: Qin Yue Chen, Chao Jun Wei, Han Su, Fei Fei Li
  • Publication number: 20200241875
    Abstract: A method, computer system, and computer program product for accelerating class data loading in a containers environment are provided. In response to a first container in a containers environment being created from a first image, at least one archive file containing a set of classes from the first image can be loaded. Then a respective class sharing file for each of the at least one archive file can be generated. The class sharing file is stored in a shared location. A second container in the containers environment is created from a second image. If a class sharing file from the archive is found in the shared location, that class sharing file can be used.
    Type: Application
    Filed: January 29, 2019
    Publication date: July 30, 2020
    Inventors: Qin Yue Chen, Yong Xin Qi, Qi Liang, Shuai Wang
  • Publication number: 20200065124
    Abstract: According to one or more embodiments of the present invention, a computer-implemented method for shortening just-in-time compilation time includes creating a first container for executing a first computer program, the execution comprising generating, using a just-in-time compiler, a compiled code for a first code-portion of the first computer program. The method further includes storing the compiled code for the first code-portion in a code-share store. The method further includes creating a second container for executing a second computer program comprising a second code-portion. The method further includes determining that the second code-portion matches the first code-portion, and in response retrieving the compiled code from the code-share store for executing the second computer program.
    Type: Application
    Filed: August 22, 2018
    Publication date: February 27, 2020
    Inventors: Qin Yue Chen, Qi Liang, Gui Yu Jiang, Xin Liu, Chang Xin Miao, Xing Tang, Fei Fei Li, Su Han
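A Python sketch of the cross-container code-share store in publication 20200065124 above. The digest-keyed store and the placeholder "compiled code" string are assumptions; a real implementation would cache actual JIT output.

```python
# Sketch: compile a code portion once, then serve later matching portions from the store.
import hashlib

code_share_store: dict[str, str] = {}   # code-portion digest -> compiled code

def jit_compile(source_portion: str) -> str:
    digest = hashlib.sha256(source_portion.encode()).hexdigest()
    if digest in code_share_store:
        # A matching code portion was already compiled in another container: reuse it.
        return code_share_store[digest]
    compiled = f"<machine code for {digest[:8]}>"   # stand-in for real JIT compilation
    code_share_store[digest] = compiled
    return compiled

# First container compiles the portion and stores it; the second retrieves it.
portion = "for (int i = 0; i < n; i++) sum += a[i];"
print(jit_compile(portion))
print(jit_compile(portion))   # served from the code-share store, no recompilation
```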
  • Publication number: 20200034170
    Abstract: A method, computer program product, and system in which a processor(s), in a computing environment comprised of multiple containers comprising modules, parses a module originating from a given container in the computing environment by copying various identifying aspects of a module file comprising the module and calculating, based on contents of the module file, a digest value as a unique identifier for the module file. The processor(s) stores the various identifying aspects of the module file and the digest value in one or more memory objects, wherein the one or more memory objects comprise a module content map to correlate the unique identifier for the module file with the contents of the module, images in the module file with the unique identifier for the module file, and layers with the unique identifier for the module file.
    Type: Application
    Filed: July 25, 2018
    Publication date: January 30, 2020
    Inventors: Qin Yue Chen, Shu Han Weng, Yong Xin Qi, Zhi Hong Li, Xi Xue Jia