Patents by Inventor Guang Han

Guang Han has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230185903
    Abstract: A first memory page in a memory of the computer is allocated as a first stack to buffer metadata for function calls in the program. A memory protection key for the first memory page is generated. A second memory page in the memory is allocated as a second stack to buffer user data for function calls in the program.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Naijie Li, Jing Lu, Ming Ran Liu, Xiao Yan Tang, Yuan Zhai, Guang Han Sui
  • Patent number: 11663072 (see the illustrative sketch after this listing)
    Abstract: A computer-implemented method includes receiving, by a computing system, an update for a computer program executing on the computing system. The method further includes determining, by the computing system, a data structure that is affected by the update by checking structure change information included in the update. The method further includes checking, by the computing system, an instance-count of the data structure, the instance-count representing a number of instances of the data structure in a memory of the computing system. The method further includes, based on a determination that the instance-count is zero, applying, by the computing system, the update to the computer program.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: May 30, 2023
    Assignee: International Business Machines Corporation
    Inventors: Gan Zhang, Le Chang, Ming Lei Zhang, Xing Xing Shen, Shan Gao, Guang Han Sui, Zeng Yu Peng
  • Patent number: 11650737 (see the illustrative sketch after this listing)
    Abstract: A computer-implemented method comprises initializing a plurality of segment lists. Each segment list of the plurality of segment lists corresponds to a respective one of a plurality of disk drives. Each segment list divides storage space of the respective disk drive into a plurality of segments. The method further comprises, for each of the plurality of disk drives, identifying one or more candidate segments from the plurality of segments; calculating a respective segment distance variance for one or more combinations of identified candidate segments. Each combination of identified candidate segments includes one candidate segment for each of the plurality of disk drives. The method further comprises selecting a combination of the one or more combinations of identified candidate segments having the smallest respective segment distance variance; and storing data on the plurality of disk drives according to the selected combination of identified candidate segments.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: May 16, 2023
    Assignee: International Business Machines Corporation
    Inventors: Lin Feng Shen, Ji Dong Li, Yong Zheng, Guang Han Sui, Shuo Feng, Hai Zhong Zhou, Yu Bing Tang, Wu Xu
  • Publication number: 20230132831
    Abstract: The present invention relates to a method, system and computer program product for task failover in an unstable environment, wherein the unstable environment includes a plurality of reclaimable nodes. According to the method, it is monitored whether any node of the plurality of reclaimable nodes is to be reclaimed. It is determined whether a task on any node of the plurality of reclaimable nodes is recoverable. Responsive to the task being recoverable, data of the recoverable task is stored. Responsive to a node being reclaimed and the task on the reclaimed node being recoverable, at least one associated task of at least one associated node of the reclaimed node is notified to wait.
    Type: Application
    Filed: October 29, 2021
    Publication date: May 4, 2023
    Inventors: Guang Han Sui, Wei Ge, Lan Zhe Liu, Zhang Li Ping, Er Tao Zhao
  • Publication number: 20230114504
    Abstract: Aspects of the invention include receiving, by a controller, a workload comprising one or more tasks, generating a first pod comprising a first sidecar container, generating one or more ephemeral containers for the first pod based on the workload and one or more resource allocation metrics for the pod, executing the one or more tasks in the one or more ephemeral containers, monitoring the one or more resource allocation metrics for the pod, and generating at least one new ephemeral container in the first pod based on the one or more resource allocation metrics for the pod and the workload.
    Type: Application
    Filed: October 11, 2021
    Publication date: April 13, 2023
    Inventors: Jin Chi JC He, Guang Han Sui, Peng Li, Gang Pu, Gang Wang, Liang Wang
  • Publication number: 20230102645 (see the illustrative sketch after this listing)
    Abstract: Reusing containers is provided. It is communicated to a pipeline workload manager that a particular container has finished running a step of a pipeline workload using an agent daemon of the particular container. Pipeline workload information corresponding to the pipeline workload is checked using the pipeline workload manager to determine whether the particular container can be reused to run a particular step in a different pipeline workload. The particular container is provided to be reused to run the particular step in the different pipeline workload without having to perform a prepare container environment sub-step of that particular step based on determining that the particular container can be reused to run that particular step in the different pipeline workload according to the pipeline workload information.
    Type: Application
    Filed: September 30, 2021
    Publication date: March 30, 2023
    Inventors: Guang Han Sui, Jin Chi JC He, Peng Hui Jiang, Jun Su
  • Publication number: 20230091512
    Abstract: A computer-implemented method includes receiving, by a computing system, an update for a computer program executing on the computing system. The method further includes determining, by the computing system, a data structure that is affected by the update by checking structure change information included in the update. The method further includes checking, by the computing system, an instance-count of the data structure, the instance-count representing a number of instances of the data structure in a memory of the computing system. The method further includes, based on a determination that the instance-count is zero, applying, by the computing system, the update to the computer program.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Gan Zhang, Le Chang, Ming Lei Zhang, Xing Xing Shen, Shan Gao, Guang Han Sui, Zeng Yu Peng
  • Publication number: 20230081324 (see the illustrative sketch after this listing)
    Abstract: A computer-implemented method includes receiving, by a processing unit, from a first tenant, a query to retrieve data from a nonrelational database system. The method further includes determining, by the processing unit, that an index associated with the query is cached in a shared index cache, wherein the shared index cache stores indexes for a plurality of tenants. The method further includes retrieving, by the processing unit, a result of the query based on the index in the shared index cache. The method further includes outputting, by the processing unit, the result of the query.
    Type: Application
    Filed: September 15, 2021
    Publication date: March 16, 2023
    Inventors: Peng Hui Jiang, Xing Xing Shen, Guang Han Sui, Jun Su, Hai Ling Zhang
  • Publication number: 20230084206
    Abstract: Embodiments relate to methods, systems, and computer program products for path management in a processing system. In a method, in response to receiving a request for adding a target controlling unit into a processing system, a plurality of network nodes in the processing system are divided into a group of subnets based on a topology of the plurality of network nodes, the plurality of network nodes being connected to at least one controlling unit in the processing system. A workload estimation is determined, the workload estimation representing a workload to be caused by the target controlling unit to the processing system. A target subnet is selected from the group of subnets for connecting the target controlling unit into the processing system based on the workload estimation. With these embodiments, the target subnet can be selected automatically such that the performance of the processing system can be improved.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 16, 2023
    Inventors: Yan Huang, Heng Wang, Kai Feng, Zheng Lei An, Shuang Shuang Jia, Xiao Ling Chen, Guang Han Sui, Lei Wang
  • Patent number: 11586631 (see the illustrative sketch after this listing)
    Abstract: An embodiment includes deriving usage data associated with records of a database by monitoring requests to perform read operations on the records of the database. The embodiment generates record correlation data representative of correlations between respective groups of records of the database by parsing the usage data associated with the records of the database. The embodiment stores a plurality of records received as respective write requests during a first time interval in an intermediate storage medium. The embodiment identifies a correlation in the record correlation data between a first record of the plurality of records and a second record of the plurality of records. The embodiment selects, responsive to identifying the correlation, a first location in the database for writing the first record and a second location in the database for writing the second record based on a proximity of the first location to the second location.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Guang Han Sui, Peng Hui Jiang, Jia Tian Zhong, Jun Su
  • Patent number: 11586480
    Abstract: A set of workload criteria is determined from a workload associated with a plurality of sources. The workload is divided among a set of workload groups according to the set of workload criteria and a first workload scheduler. A set of edge computing resources is assigned to each workload group within the set according to the set of workload criteria and the set of workload groups. A portion of the workload associated with a subset of the plurality of sources is handled by a first subset of edge computing resources and a second workload scheduler, where the subset of sources is associated with a first workload group. The handling includes balancing, by the second workload scheduler, the portion of the workload among the subset of sources. The handled workload is reported to a control center.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Guang Han Sui, Jing Li, Bin Xu, Fei Qi
  • Patent number: 11573831 (see the illustrative sketch after this listing)
    Abstract: Embodiments for optimizing resource usage in a distributed computing environment. Resource usage of each task in a set of running tasks associated with a job is monitored to collect resource usage information corresponding to each respective task. A resource unit size of at least one resource allocated to respective tasks in the set of running tasks is adjusted based on the resource usage information to improve overall resource usage in the distributed computing environment.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: February 7, 2023
    Assignee: International Business Machines Corporation
    Inventors: Xiao Jie Li, Zhimin Lin, Jinming Lv, Guang Han Sui, Hao Zhou
  • Publication number: 20230031636
    Abstract: Aspects of the invention include systems and methods configured to provide simplified and efficient artificial intelligence (AI) model deployment. A non-limiting example computer-implemented method includes receiving an AI model deployment input having pre-process code, inference model code, and post-process code. The pre-process code is converted to a pre-process graph. The inference model and the post-process model are similarly converted to an inference graph and a post-process graph, respectively. A pipeline path is generated by connecting nodes in the pre-process graph, the inference graph, and the post-process graph. The pipeline path is deployed as a service for inference.
    Type: Application
    Filed: July 28, 2021
    Publication date: February 2, 2023
    Inventors: Lin Dong, Dong Xie, Jing Li, Guang Han Sui, Xiao Tian Xu
  • Publication number: 20230016582 (see the illustrative sketch after this listing)
    Abstract: A block of data intended for a set of receiving computer systems comprising a first system and a second system is divided into a set of equal-size portions. A first portion of the set of portions is transmitted from a first file server storing the block of data to the first system. The first portion is relayed from the first file server to a second file server concurrently with the transmitting. The first portion of the set of portions is transmitted from the second file server to the second system.
    Type: Application
    Filed: July 15, 2021
    Publication date: January 19, 2023
    Applicant: International Business Machines Corporation
    Inventors: Guang Han Sui, Wei Ge, Juan Yang, Lan Zhe Liu, Le Yao, Li Jun BJ Zhu
  • Patent number: 11558448
    Abstract: In an approach for a sparse information sharing system, a processor receives a request from a host owner for a host to become a server of an information sharing system, wherein the request specifies at least one type of information the server will maintain and provide to visitors of the server. A processor syncs the server with other servers of the information sharing system with information of the specified at least one type of information. A processor, responsive to the server receiving updated information from a visitor of the server, notifies the other servers of the updated information.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: January 17, 2023
    Assignee: International Business Machines Corporation
    Inventors: Guang Han Sui, Peng Hui Jiang, Xing Xing Shen, Jun Su, Hai Ling Zhang
  • Publication number: 20230006878
    Abstract: In an approach for building file server arrays with stable and unstable nodes for enhanced pipeline transmission, a processor builds an array from a plurality of stable nodes, wherein each stable node of the plurality of stable nodes is linked to two other stable nodes of the plurality of stable nodes forming a line. A processor divides a plurality of unstable nodes into one or more groups of unstable nodes. A processor links each group of unstable nodes to two neighboring stable nodes within the array. A processor sends data through the array and the one or more groups of unstable nodes in two opposite directions. A processor monitors a node status for each node of the plurality of stable nodes and the plurality of unstable nodes.
    Type: Application
    Filed: June 28, 2021
    Publication date: January 5, 2023
    Inventors: Guang Han Sui, Zhi Gang Sun, Yu Jing, Xin Peng Liu
  • Publication number: 20220413925
    Abstract: Methods, computer program products, and/or systems are provided that perform the following operations: identifying, in an environment that includes a plurality of edge clusters of edge nodes, a first edge cluster having a resource gap; broadcasting a resource requirement of the first edge cluster to other edge clusters in the plurality; obtaining resource commitments from one or more of the other edge clusters; selecting edge cluster resources from the one or more of the other edge clusters based, at least in part, on the resource commitments; and creating a new cluster including the first edge cluster and the selected edge cluster resources.
    Type: Application
    Filed: June 25, 2021
    Publication date: December 29, 2022
    Inventors: Guang Ya Liu, Guang Han Sui, Xun Pan, Xiao Liang Hu
  • Publication number: 20220398250
    Abstract: An embodiment includes deriving usage data associated with records of a database by monitoring requests to perform read operations on the records of the database. The embodiment generates record correlation data representative of correlations between respective groups of records of the database by parsing the usage data associated with the records of the database. The embodiment stores a plurality of records received as respective write requests during a first time interval in an intermediate storage medium. The embodiment identifies a correlation in the record correlation data between a first record of the plurality of records and a second record of the plurality of records. The embodiment selects, responsive to identifying the correlation, a first location in the database for writing the first record and a second location in the database for writing the second record based on a proximity of the first location to the second location.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 15, 2022
    Applicant: International Business Machines Corporation
    Inventors: Guang Han Sui, Peng Hui Jiang, Jia Tian Zhong, Jun Su
  • Patent number: 11526379
    Abstract: Embodiments of the present disclosure relate to a method for building an application. According to the method, a request is received from a building environment to acquire at least one component for executing at least one function of at least one feature of the application. The at least one feature is to be deployed to at least one target node in a distributed service platform comprising a plurality of nodes. The at least one target node and the at least one component are determined based on the request. The at least one component is acquired from the at least one target node. The at least one component is sent to the building environment for building the at least one feature.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: December 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ping Xiao, Peng Hui Jiang, Xin Peng Liu, Guang Han Sui
  • Publication number: 20220365907 (see the illustrative sketch after this listing)
    Abstract: In an approach to building a file system for multiple architectures, responsive to receiving a manifest for a file system build, a base layer is retrieved for each platform to be built, where the base layer is an operating system base. Responsive to determining that any layer to be built has not been built, the next layer to be built is retrieved. Responsive to the next layer to be built being platform-independent, the next layer is retrieved from a cache, where the next layer supports each platform. Responsive to the next layer to be built being platform-dependent, the next layer is built, where a copy of the next layer is built for each platform. The above steps are iteratively repeated until each layer is built. A single image of a completed file system build is stored, where the single image supports each platform.
    Type: Application
    Filed: May 13, 2021
    Publication date: November 17, 2022
    Inventors: Jin Chi JC He, Guang Han Sui, Ke Zhang, Yang Gao, Yu Xing YX Ren, Liang Wang
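Illustrative sketches

The sketches below are simplified illustrations of the ideas summarized in the abstracts above. They are not the claimed implementations; every class, function, parameter, and value is an assumption introduced for the example, and Python is used throughout for readability.

For patent 11663072 (and the matching publication 20230091512), a minimal sketch of gating an update on an instance count, assuming a hypothetical per-structure count maintained by the running program:

```python
# Hypothetical sketch: apply a program update only when no live instances
# of the data structures it changes remain in memory.

from dataclasses import dataclass, field


@dataclass
class Update:
    """An update together with the names of data structures it changes."""
    payload: bytes
    changed_structures: set[str]


@dataclass
class RunningProgram:
    """Tracks how many instances of each data structure are currently allocated."""
    instance_counts: dict[str, int] = field(default_factory=dict)
    applied_updates: list[Update] = field(default_factory=list)

    def try_apply(self, update: Update) -> bool:
        # Check the instance count of every structure the update touches.
        for name in update.changed_structures:
            if self.instance_counts.get(name, 0) > 0:
                return False                         # live instances exist: defer
        self.applied_updates.append(update)          # count is zero: safe to apply
        return True


program = RunningProgram(instance_counts={"session_ctx": 0, "conn_pool": 3})
print(program.try_apply(Update(b"...", {"session_ctx"})))  # True: applied
print(program.try_apply(Update(b"...", {"conn_pool"})))    # False: deferred
```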
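For patent 11650737, a minimal sketch of choosing one candidate segment per disk so that the spread of the chosen segments is smallest; the "distance variance" here is simplified to the variance of the candidate segment offsets:

```python
# Hypothetical sketch: pick one candidate segment per disk so that the
# variance of the chosen segments' offsets is minimal, then stripe the
# data across those segments.

from itertools import product
from statistics import pvariance


def choose_segments(candidates_per_disk: list[list[int]]) -> tuple[int, ...]:
    """candidates_per_disk[d] holds candidate segment offsets on disk d."""
    best_combo, best_variance = None, float("inf")
    # Each combination takes exactly one candidate segment from every disk.
    for combo in product(*candidates_per_disk):
        variance = pvariance(combo)        # spread of the chosen offsets
        if variance < best_variance:
            best_combo, best_variance = combo, variance
    return best_combo


# Three disks, each with a few free candidate segments (by offset).
candidates = [[2, 9, 40], [3, 25], [1, 8, 30]]
print(choose_segments(candidates))         # -> (2, 3, 1): the tightest cluster
```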
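For publication 20230102645, a minimal sketch of a pipeline workload manager that hands a finished container to a compatible step so the environment-preparation sub-step can be skipped; the environment key and container identifiers are invented for the example:

```python
# Hypothetical sketch: reuse a container that has finished one pipeline step
# for a compatible step in another pipeline workload.

class PipelineWorkloadManager:
    def __init__(self):
        # environment key (e.g. image + tool versions) -> idle container ids
        self.idle_containers: dict[str, list[str]] = {}

    def report_finished(self, container_id: str, env_key: str) -> None:
        """Called when a container's agent daemon reports its step is done."""
        self.idle_containers.setdefault(env_key, []).append(container_id)

    def acquire_container(self, env_key: str) -> tuple[str, bool]:
        """Return (container_id, reused); reused=True means the
        prepare-environment sub-step can be skipped for this step."""
        pool = self.idle_containers.get(env_key, [])
        if pool:
            return pool.pop(), True
        return f"new-{env_key}", False        # fall back to a fresh container


mgr = PipelineWorkloadManager()
mgr.report_finished("c-42", env_key="python3.11-build")
print(mgr.acquire_container("python3.11-build"))  # ('c-42', True): prep skipped
print(mgr.acquire_container("golang-build"))      # ('new-golang-build', False)
```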
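For publication 20230081324, a minimal sketch of a shared index cache consulted before scanning a nonrelational store; the store layout and index format are simplifications:

```python
# Hypothetical sketch: cache indexes once and let queries from any tenant
# reuse them instead of rescanning the store.

class SharedIndexCache:
    def __init__(self):
        # index name -> {key value -> list of record ids}
        self._indexes: dict[str, dict[str, list[str]]] = {}

    def has_index(self, index_name: str) -> bool:
        return index_name in self._indexes

    def put_index(self, index_name: str, index: dict[str, list[str]]) -> None:
        self._indexes[index_name] = index

    def lookup(self, index_name: str, key: str) -> list[str]:
        return self._indexes[index_name].get(key, [])


def run_query(tenant: str, index_name: str, key: str,
              cache: SharedIndexCache, store: dict[str, dict]) -> list[dict]:
    # `tenant` is shown only to emphasize that the cache is shared across tenants.
    if not cache.has_index(index_name):
        # Build the index once; later queries from any tenant reuse it.
        index: dict[str, list[str]] = {}
        for record_id, record in store.items():
            index.setdefault(str(record[index_name]), []).append(record_id)
        cache.put_index(index_name, index)
    return [store[rid] for rid in cache.lookup(index_name, key)]


store = {"r1": {"region": "eu", "v": 1}, "r2": {"region": "us", "v": 2}}
cache = SharedIndexCache()
print(run_query("tenant-a", "region", "eu", cache, store))  # builds the index
print(run_query("tenant-b", "region", "us", cache, store))  # cache hit, shared
```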
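For patent 11586631 (and the matching publication 20220398250), a minimal sketch of deriving record correlations from read requests and then placing correlated records in adjacent locations when flushing buffered writes; the greedy placement is an assumption for illustration:

```python
# Hypothetical sketch: count how often records are read together, then give
# strongly correlated records neighboring slots when buffered writes are flushed.

from collections import Counter
from itertools import combinations


def correlate(read_batches: list[list[str]]) -> Counter:
    """Count how often two records appear in the same read request."""
    pair_counts: Counter = Counter()
    for batch in read_batches:
        for a, b in combinations(sorted(set(batch)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts


def assign_locations(buffered_writes: list[str], pair_counts: Counter) -> dict[str, int]:
    """Order buffered records so correlated ones land in adjacent slots."""
    remaining = list(buffered_writes)
    ordered = [remaining.pop(0)]
    while remaining:
        last = ordered[-1]
        # Greedily pick the record most correlated with the one just placed.
        nxt = max(remaining, key=lambda r: pair_counts.get(tuple(sorted((last, r))), 0))
        remaining.remove(nxt)
        ordered.append(nxt)
    return {record: slot for slot, record in enumerate(ordered)}


reads = [["A", "C"], ["A", "C"], ["B", "D"]]
print(assign_locations(["A", "B", "C", "D"], correlate(reads)))
# -> A and C get adjacent slots, as do B and D
```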
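For patent 11573831, a minimal sketch of resizing each task's resource unit from observed usage; the headroom factor and minimum size are invented defaults:

```python
# Hypothetical sketch: shrink or grow the memory unit allocated to each running
# task based on its observed peak usage, reclaiming idle capacity.

from dataclasses import dataclass


@dataclass
class TaskAllocation:
    task_id: str
    allocated_mb: int
    peak_used_mb: int = 0


def adjust_allocations(tasks: list[TaskAllocation],
                       headroom: float = 1.2,
                       minimum_mb: int = 256) -> dict[str, int]:
    """Resize each task's memory unit to observed peak usage plus headroom."""
    new_sizes = {}
    for task in tasks:
        target = max(minimum_mb, int(task.peak_used_mb * headroom))
        # Only report a change when it actually frees or adds capacity.
        if target != task.allocated_mb:
            new_sizes[task.task_id] = target
    return new_sizes


tasks = [
    TaskAllocation("t1", allocated_mb=4096, peak_used_mb=900),   # over-provisioned
    TaskAllocation("t2", allocated_mb=1024, peak_used_mb=1000),  # close to the limit
]
print(adjust_allocations(tasks))  # {'t1': 1080, 't2': 1200}
```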
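For publication 20230016582, a minimal sketch of the data flow: a block is split into equal portions, and each portion sent from the first file server to the first system is also relayed through the second file server to the second system. In the described approach the relay overlaps in time with the first transmission; this sequential sketch only shows how the portions travel:

```python
# Hypothetical sketch: split a block into equal-size portions; server A sends
# each portion to the first receiving system and hands the same portion to
# server B, which forwards it to the second receiving system.

def split_block(block: bytes, portions: int) -> list[bytes]:
    size = -(-len(block) // portions)                  # ceiling division
    return [block[i:i + size] for i in range(0, len(block), size)]


def distribute(block: bytes, portions: int):
    server_a_sends, server_b_sends = [], []
    for portion in split_block(block, portions):
        server_a_sends.append(("system-1", portion))   # server A -> first system
        # Relayed to server B, which forwards it to the second system.
        server_b_sends.append(("system-2", portion))
    return server_a_sends, server_b_sends


a_sends, b_sends = distribute(b"0123456789ABCDEF", portions=4)
print([p for _, p in a_sends])   # [b'0123', b'4567', b'89AB', b'CDEF']
print([p for _, p in b_sends])   # the same portions, delivered via server B
```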
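For publication 20220365907, a minimal sketch of walking a build manifest where platform-independent layers come from a cache and are shared by every platform, while platform-dependent layers are built per platform; the manifest fields and layer names are assumptions made for the example:

```python
# Hypothetical sketch: build one multi-platform image, sharing cached
# platform-independent layers and building platform-dependent layers per platform.

def build_image(manifest: list[dict], platforms: list[str],
                cache: dict[str, str]) -> dict[str, list[str]]:
    """manifest entries look like {'name': ..., 'platform_dependent': bool}."""
    # Start every platform from its operating-system base layer.
    image = {platform: [f"{platform}-base"] for platform in platforms}
    for layer in manifest:
        if not layer["platform_dependent"]:
            # One cached artifact supports every platform.
            artifact = cache.setdefault(layer["name"], f"{layer['name']}-shared")
            for platform in platforms:
                image[platform].append(artifact)
        else:
            # Build a separate copy of this layer for each platform.
            for platform in platforms:
                image[platform].append(f"{layer['name']}-{platform}")
    return image   # stored as a single image covering all platforms


manifest = [
    {"name": "python-packages", "platform_dependent": False},
    {"name": "native-extension", "platform_dependent": True},
]
print(build_image(manifest, ["amd64", "s390x"], cache={}))
# amd64 and s390x share 'python-packages-shared' but each gets its own
# 'native-extension-<platform>' layer.
```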