Patents by Inventor Xiong Gao

Xiong Gao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941434
    Abstract: A task processing method, a processing apparatus, and a computer system are provided. The method includes: generating, by a first processing apparatus, a plurality of tasks, and determining task description information of the plurality of tasks, where the task description information indicates a dependency relationship between the plurality of tasks; sending an instruction to a second processing apparatus, where the instruction includes the plurality of tasks and their task description information; and receiving the instruction, and processing the plurality of tasks based on the dependency relationship between them. The method can effectively reduce waiting delay, fully exploit the computing capability of an acceleration chip, and improve task processing efficiency. A hedged sketch of this flow follows this entry.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: March 26, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wei Li, Xiong Gao, Hou Fun Lam, Tao Ma
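    The abstract above describes dispatching a task graph together with its dependency metadata to an accelerator in a single instruction. Below is a minimal Python sketch of that idea; all names (Task, Instruction, run_on_device) are hypothetical illustrations, not Huawei's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Task:
    task_id: int
    action: Callable[[], None]
    deps: Set[int] = field(default_factory=set)  # ids this task waits on

@dataclass
class Instruction:
    # One instruction carries the tasks plus the description of their
    # dependency relationship, so the device never waits on per-task dispatch.
    tasks: List[Task]

def run_on_device(instr: Instruction) -> None:
    """Second processing apparatus: execute tasks in dependency order."""
    done: Set[int] = set()
    pending = list(instr.tasks)
    while pending:
        ready = [t for t in pending if t.deps <= done]
        if not ready:
            raise ValueError("cyclic dependency between tasks")
        for t in ready:  # ready tasks could run concurrently on real hardware
            t.action()
            done.add(t.task_id)
            pending.remove(t)

# First processing apparatus: generate tasks and their dependency info.
a = Task(1, lambda: print("load"))
b = Task(2, lambda: print("compute"), deps={1})
c = Task(3, lambda: print("store"), deps={2})
run_on_device(Instruction(tasks=[a, b, c]))
```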
  • Patent number: 11943221
    Abstract: Aspects of the invention include systems and methods configured to prevent masquerading service attacks. A non-limiting example computer-implemented method includes sending, from a first server in a cloud environment, a communication request comprising an application programming interface (API) key and a first server identifier to an identity and access management (IAM) server of the cloud environment. The API key can be uniquely assigned by the IAM server to a first component of the first server. The first server receives a credential that includes a token for the first component and sends the credential to a second server. The second server sends the credential, a second server identifier, and an identifier for a second component of the second server to the IAM server. The second server receives an acknowledgment from the IAM server and sends the acknowledgment to the first server. A toy sketch of this handshake follows this entry.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: March 26, 2024
    Assignee: International Business Machines Corporation
    Inventors: Sen Wang, Mei Liu, Si Bo Niu, Wen Yi Gao, Zong Xiong Z X Wang, Guoxiang Zhang, Xiao Yi Tian, Xian Wei Zhang
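    To make the handshake in the abstract concrete, here is a toy, in-memory sketch; the class and method names are invented for illustration and are not IBM's API.

```python
class IAMServer:
    """Toy identity and access management (IAM) server."""
    def __init__(self):
        self.api_keys = {}  # (server_id, component_id) -> API key
        self.tokens = set()

    def assign_key(self, server_id, component_id, api_key):
        # The API key is uniquely assigned to one component of one server.
        self.api_keys[(server_id, component_id)] = api_key

    def issue_credential(self, api_key, server_id):
        # Grant a token only if this key was assigned to this server;
        # a masquerading server presenting a stolen key is refused.
        for (sid, comp), key in self.api_keys.items():
            if key == api_key and sid == server_id:
                token = f"tok:{sid}:{comp}"
                self.tokens.add(token)
                return {"token": token}
        raise PermissionError("API key not assigned to this server")

    def acknowledge(self, credential, server_id, component_id):
        # The second server forwards the credential with its own identifiers.
        return credential["token"] in self.tokens

iam = IAMServer()
iam.assign_key("server-1", "comp-a", "key-123")
cred = iam.issue_credential("key-123", "server-1")  # first server
ack = iam.acknowledge(cred, "server-2", "comp-b")   # second server
print("acknowledged:", ack)                         # True
```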
  • Patent number: 11934832
    Abstract: This application discloses example synchronization instruction insertion methods and example apparatuses. One example method includes obtaining a first program block comprising one or more statements, where each of the one or more statements includes one or more function instructions. A first function instruction and a second function instruction between which a data dependency exists in the first program block can then be determined. A synchronization instruction pair can then be inserted between a first statement including the first function instruction and a second statement including the second function instruction. A toy sketch of this insertion follows this entry.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: March 19, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiong Gao, Kun Zhang
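    A toy sketch of the insertion step follows, assuming a made-up IR in which a statement is a list of (opcode, reads, writes) tuples; the set_flag/wait_flag pair stands in for whatever synchronization instructions the target hardware actually uses.

```python
def insert_sync_pairs(block):
    """Insert a sync pair between statements with a data dependency."""
    for i, stmt in enumerate(block):
        writes = {w for (_op, _reads, ws) in stmt for w in ws}
        for later in block[i + 1:]:
            reads = {r for (_op, rs, _writes) in later for r in rs}
            if writes & reads:  # first/second function instruction pair
                stmt.append(("set_flag", [], []))       # end of 1st statement
                later.insert(0, ("wait_flag", [], []))  # start of 2nd
                break
    return block

prog = [
    [("vload", ["a"], ["v0"]), ("vmul", ["v0"], ["v1"])],
    [("vadd", ["v1"], ["v2"])],  # reads v1 written by the first statement
]
for stmt in insert_sync_pairs(prog):
    print(stmt)
```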
  • Publication number: 20230334292
    Abstract: Embodiments of this application disclose a node fusion method for a computational graph and a device. The method includes: converting a neural network into a computational graph; extracting one or more parallelizable branch groups from the computational graph based on a dependency relationship between nodes in the computational graph, where the dependency relationship indicates at least one of the following: the parallelizable branch group has a common parent node, has a common child node, has no parent node, or has no child node; and finally, fusing a plurality of nodes in any parallelizable branch group that respectively belong to different sub-branches to obtain a new computational graph. An illustrative sketch of one criterion follows this entry.
    Type: Application
    Filed: June 26, 2023
    Publication date: October 19, 2023
    Inventors: Zhaochuang ZHANG, Xiong GAO, Zitao ZENG
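    The sketch below illustrates just one of the listed criteria (a common parent node) using networkx; the reachability test and the fusion candidates are illustrative assumptions, not the patented fusion algorithm.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("p", "a1"), ("a1", "a2"),   # sub-branch A
                  ("p", "b1"), ("b1", "b2")])  # sub-branch B

def parallelizable_heads(g, parent):
    """Children of `parent` whose sub-branches do not depend on each other."""
    heads = list(g.successors(parent))
    return [h for h in heads
            if not any(nx.has_path(g, h, o) or nx.has_path(g, o, h)
                       for o in heads if o != h)]

# Nodes from different sub-branches (e.g. a1 and b1) are fusion candidates.
print(parallelizable_heads(g, "p"))  # ['a1', 'b1']
```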
  • Publication number: 20230021472
    Abstract: A method for optimizing the layout of a tensor memory defines at least one hard constraint for allocating a plurality of input/output (I/O) vectors that read and write data for a task in the tensor memory. The at least one hard constraint is applied to determine one or more potential conflicts between the plurality of I/O vectors. One or more soft constraints aimed at mitigating the one or more potential conflicts between the I/O vectors may also be generated. The at least one hard constraint, and optionally the soft constraints, are applied in a maximum satisfiability (MaxSAT) solver, which determines the locations of the data in the tensor memory. The starting addresses of the input data to be read and of the output data to be written by each of the I/O vectors are then updated in the tensor memory. A toy MaxSAT encoding follows this entry.
    Type: Application
    Filed: September 28, 2022
    Publication date: January 26, 2023
    Inventors: Anna BULANOVA, Jessica DAVIES, Xiong GAO
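    The hard/soft-constraint split maps naturally onto weighted MaxSAT. Below is a deliberately tiny example using the PySAT RC2 solver; the encoding (two I/O vectors competing for one memory bank) is invented for illustration and is not the patented formulation.

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Variables: 1 = "vector A placed in bank 0", 2 = "vector B placed in bank 0"
wcnf = WCNF()
wcnf.append([-1, -2])        # hard: A and B may not share bank 0 (conflict)
wcnf.append([1], weight=3)   # soft: prefer A in bank 0
wcnf.append([2], weight=1)   # soft: prefer B in bank 0

with RC2(wcnf) as solver:
    model = solver.compute()  # maximizes total weight of satisfied softs
print(model)  # [1, -2]: A gets bank 0, B is placed elsewhere
```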
  • Publication number: 20220365822
    Abstract: A data processing method implemented by a computer device includes generating a target task that is either a buffer application task or a buffer release task. When the target task is the buffer application task, a first buffer corresponding to the buffer application task is used when a second task is executed; when the target task is the buffer release task, a second buffer corresponding to the buffer release task is used when a first task is executed. The method further includes obtaining a buffer entry corresponding to the target task after a preceding task of the target task is executed and before a successive task of the target task is executed, where the buffer entry includes a memory size, a memory location, and a memory address of the buffer corresponding to the target task, and executing the target task to apply for or release the buffer. A sketch of this flow follows this entry.
    Type: Application
    Filed: August 1, 2022
    Publication date: November 17, 2022
    Inventors: Xiong Gao, Wei Li, Ming Zheng, Hou Fun Lam
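    A sketch of buffer application and release as first-class tasks in an ordered stream follows; BufferEntry and the bump allocator are invented for illustration, not the patented runtime.

```python
from dataclasses import dataclass

@dataclass
class BufferEntry:
    size: int       # memory size of the buffer
    location: str   # memory location, e.g. "device" or "host"
    address: int    # memory address of the buffer

heap_top = 0x1000

def run_stream(tasks):
    """Execute tasks in order; alloc/release are tasks like any other,
    and the buffer entry is filled in between neighboring tasks."""
    global heap_top
    for kind, payload in tasks:
        if kind == "alloc":            # buffer application task
            entry = BufferEntry(payload, "device", heap_top)
            heap_top += payload
            print("applied:", entry)
        elif kind == "release":        # buffer release task
            print("released buffer at", hex(payload))
        else:                          # ordinary compute task
            print("compute:", payload)

run_stream([("alloc", 256), ("compute", "kernel_a"), ("release", 0x1000)])
```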
  • Patent number: 11422861
    Abstract: A data processing method implemented by a computer device includes generating a target task that is either a buffer application task or a buffer release task. When the target task is the buffer application task, a first buffer corresponding to the buffer application task is used when a second task is executed; when the target task is the buffer release task, a second buffer corresponding to the buffer release task is used when a first task is executed. The method further includes obtaining a buffer entry corresponding to the target task after a preceding task of the target task is executed and before a successive task of the target task is executed, where the buffer entry includes a memory size, a memory location, and a memory address of the buffer corresponding to the target task, and executing the target task to apply for or release the buffer.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 23, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Xiong Gao, Wei Li, Ming Zheng, Hou Fun Lam
  • Publication number: 20220113971
    Abstract: This application discloses example synchronization instruction insertion methods and example apparatuses. One example method includes obtaining a first program block comprising one or more statements, where each of the one or more statements includes one or more function instructions. A first function instruction and a second function instruction between which a data dependency exists in the first program block can then be determined. A synchronization instruction pair can then be inserted between a first statement including the first function instruction and a second statement including the second function instruction.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Xiong GAO, Kun ZHANG
  • Publication number: 20210081249
    Abstract: A data processing method implemented by a computer device includes generating a target task that is either a buffer application task or a buffer release task. When the target task is the buffer application task, a first buffer corresponding to the buffer application task is used when a second task is executed; when the target task is the buffer release task, a second buffer corresponding to the buffer release task is used when a first task is executed. The method further includes obtaining a buffer entry corresponding to the target task after a preceding task of the target task is executed and before a successive task of the target task is executed, where the buffer entry includes a memory size, a memory location, and a memory address of the buffer corresponding to the target task, and executing the target task to apply for or release the buffer.
    Type: Application
    Filed: November 30, 2020
    Publication date: March 18, 2021
    Inventors: Xiong Gao, Wei Li, Ming Zheng, Hou Fun Lam
  • Publication number: 20210064425
    Abstract: A task processing method, a processing apparatus, and a computer system are provided. The method includes: generating, by a first processing apparatus, a plurality of tasks, and determining task description information of the plurality of tasks, where the task description information indicates a dependency relationship between the plurality of tasks; sending an instruction to a second processing apparatus, where the instruction includes the plurality of tasks and their task description information; and receiving the instruction, and processing the plurality of tasks based on the dependency relationship between them. The method can effectively reduce waiting delay, fully exploit the computing capability of an acceleration chip, and improve task processing efficiency.
    Type: Application
    Filed: November 13, 2020
    Publication date: March 4, 2021
    Inventors: Wei Li, Xiong Gao, Hou Fun Lam, Tao Ma
  • Publication number: 20190310874
    Abstract: Embodiments of the present disclosure disclose a driver management method and a host. The method includes: allocating a first hardware device to a target virtual machine on the host; obtaining a target driver package of the first hardware device from N pre-stored driver packages, where the N driver packages are driver packages of N types of hardware devices, the type of the first hardware device is one of the N types, and N is a positive integer; adding the target driver package into the target virtual machine to enable the target virtual machine to read it; and installing the target driver package, where the driver obtained by installing the target driver package is used by the target virtual machine to invoke the first hardware device in a hardware pass-through manner. A host-side sketch follows this entry.
    Type: Application
    Filed: June 5, 2019
    Publication date: October 10, 2019
    Inventor: Xiong GAO
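    A host-side sketch of the selection-and-injection step described above, with hypothetical package names and data structures throughout:

```python
# N pre-stored driver packages, keyed by hardware device type.
DRIVER_PACKAGES = {
    "nic":  "nic-driver.pkg",
    "gpu":  "gpu-driver.pkg",
    "nvme": "nvme-driver.pkg",
}

def attach_passthrough(vm: dict, device_type: str) -> dict:
    """Allocate a device to the VM and inject the matching driver package."""
    pkg = DRIVER_PACKAGES[device_type]          # obtain the target package
    vm.setdefault("injected", []).append(pkg)   # add it into the target VM
    # Inside the guest, installing the package yields the driver used to
    # invoke the device in a hardware pass-through manner.
    vm.setdefault("drivers", []).append(pkg.removesuffix(".pkg"))
    return vm

print(attach_passthrough({"name": "target-vm"}, "gpu"))
```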
  • Patent number: 10190798
    Abstract: An air deflector device includes an air deflector provided with rotating shaft inserting channels, and a rotating shaft system. One end of the rotating shaft system is provided with a first connecting part configured to connect with a driving device, and the other end of the rotating shaft system is provided with a second connecting part configured to connect with a case of an air conditioner. The rotating shaft system includes air deflector rotating shafts, and each air deflector rotating shaft includes an inserting plate engageable with the rotating shaft inserting channel. An air conditioner having the air deflector device is also disclosed.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: January 29, 2019
    Assignee: GREE ELECTRIC APPLIANCES, INC. OF ZHUHAI
    Inventors: Heqing Zheng, Wuzhan Ye, Tangtang Gu, Wenwei Ji, Zhihui Liang, Mang Chi, Chao Li, Xiong Gao
  • Publication number: 20180367460
    Abstract: Embodiments of the present disclosure provide a data flow processing method and apparatus, and a system. A processing process performed on a packet is divided into multiple processing actions. Some processing actions are spread to additional cores only when traffic of the current data flow meets a preset condition, so that multiple processor cores can process packets in a pipeline manner and improve processing efficiency. When the bandwidth of a data flow fluctuates widely and its peak bandwidth is relatively large, the method, compared with a static pipeline, avoids wasting processing resources when traffic is low and better supports data flow processing when traffic is high. A sketch of the spreading decision follows this entry.
    Type: Application
    Filed: August 3, 2018
    Publication date: December 20, 2018
    Inventors: Xiong GAO, Jie WU, Baosong LI
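    A sketch of the dynamic spreading decision, with an invented traffic threshold and action names; the real preset condition and pipeline stages are not specified in the abstract.

```python
ACTIONS = ["parse", "classify", "rewrite", "forward"]
SPREAD_THRESHOLD_PPS = 100_000  # hypothetical preset condition

def assign_cores(traffic_pps: int, num_cores: int = 4) -> dict:
    """Map pipeline actions to cores based on current traffic."""
    if traffic_pps < SPREAD_THRESHOLD_PPS:
        # Low traffic: one core runs the whole pipeline, so no cores idle.
        return {0: list(ACTIONS)}
    # High traffic: spread actions round-robin to form a pipeline.
    plan = {}
    for i, action in enumerate(ACTIONS):
        plan.setdefault(i % num_cores, []).append(action)
    return plan

print(assign_cores(10_000))   # {0: ['parse', 'classify', 'rewrite', 'forward']}
print(assign_cores(500_000))  # one action per core
```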
  • Publication number: 20160131393
    Abstract: An air deflector device comprises an air deflector provided with rotating shaft slots, and a rotating shaft system. One end of the rotating shaft system is provided with a first connecting part configured to connect with a driving device, and the other end of the rotating shaft system is provided with a second connecting part configured to connect with a case of an air conditioner. The rotating shaft system comprises air deflector rotating shafts, and each air deflector rotating shaft comprises an inserting plate engageable with the rotating shaft slot. An air conditioner having the air deflector device is also disclosed.
    Type: Application
    Filed: January 15, 2016
    Publication date: May 12, 2016
    Inventors: Heqing Zheng, Wuzhan Ye, Tangtang Gu, Wenwei Ji, Zhihui Liang, Mang Chi, Chao Li, Xiong Gao