Patents by Inventor Zhengwei Qi

Zhengwei Qi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11204798
    Abstract: The method includes the following steps: step 1, obtaining NUMA topology information of a host machine and monitoring virtual machine performance events using the kernel PMU; step 2, running a greedy algorithm to obtain a scheduling decision; step 3, scheduling a virtual CPU (VCPU) and the memory of a virtual machine according to the scheduling decision; step 4, once scheduling of the virtual machine is complete, returning to step 1 to continue monitoring virtual machine performance.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: December 21, 2021
    Assignee: Shanghai Jiao Tong University
    Inventors: Haibing Guan, Ruhui Ma, Jian Li, Zhengwei Qi, Junsheng Tan
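A rough sketch of the monitor-decide-schedule loop this entry describes. The greedy policy shown (place the largest-memory VCPU on the NUMA node with the most free memory) and all names and numbers are illustrative assumptions, not the patent's actual algorithm; step 1's inputs would come from PMU counters rather than hard-coded values.

```python
def greedy_schedule(vcpu_memory_mb, node_free_mb):
    """Greedily place each VCPU (largest memory footprint first) on the
    NUMA node that currently has the most free memory."""
    decision = {}
    free = dict(node_free_mb)
    for vcpu, mem in sorted(vcpu_memory_mb.items(), key=lambda kv: -kv[1]):
        node = max(free, key=free.get)  # node with the most free memory
        decision[vcpu] = node
        free[node] -= mem
    return decision

# Step 1 (monitoring) would supply these figures; here they are faked.
vcpus = {"vcpu0": 2048, "vcpu1": 1024, "vcpu2": 512}
nodes = {"node0": 4096, "node1": 3072}
plan = greedy_schedule(vcpus, nodes)  # step 2: the scheduling decision
```

Step 3 would then migrate each VCPU and its memory to the chosen node, and step 4 loops back to monitoring.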
  • Patent number: 10922140
    Abstract: A physical Graphics Processing Unit (GPU) resource scheduling system and method between virtual machines are provided. An agent is inserted, via hooking, between the physical GPU instruction dispatch and the physical GPU interface. It delays sending instructions and data from the instruction dispatch to the interface, monitors a set of GPU conditions of the guest application executing in the virtual machine and the utilization of physical GPU hardware resources, and feeds this information back to a time- or time-sequence-based GPU resource scheduling algorithm. With the agent, the method requires no modification to the guest application of the virtual machine, the host operating system, the virtual machine operating system, the GPU driver, or the virtual machine manager.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: February 16, 2021
    Assignee: SHANGHAI JIAOTONG UNIVERSITY
    Inventors: Miao Yu, Zhengwei Qi, Haibing Guan, Yin Wang
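A minimal sketch of the hooked-agent idea from this entry: a wrapper sits between the instruction dispatch and the physical GPU interface, can delay forwarding, and counts forwarded commands so a scheduling policy has usage data to act on. The class, its methods, and the delay knob are all invented for illustration; they are not the patent's actual interfaces.

```python
import time

class GpuDispatchAgent:
    """Hooked agent between instruction dispatch and the GPU interface."""

    def __init__(self, gpu_interface, delay_s=0.0):
        self.gpu_interface = gpu_interface
        self.delay_s = delay_s   # set by the scheduling algorithm to throttle
        self.forwarded = 0       # monitored usage statistic fed back to it

    def dispatch(self, command):
        if self.delay_s:
            time.sleep(self.delay_s)  # delay this VM's command stream
        self.forwarded += 1
        return self.gpu_interface(command)

sent = []
agent = GpuDispatchAgent(gpu_interface=sent.append)
for cmd in ("draw", "draw", "present"):
    agent.dispatch(cmd)
```

Because the agent only wraps the dispatch path, nothing in the guest, host, driver, or VMM has to change, which is the point the abstract makes.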
  • Patent number: 10853119
    Abstract: Described herein is a method for resource aggregation (many-to-one virtualization), comprising: virtualizing the CPU with QEMU in a distributed way; organizing a plurality of memories scattered over different machines as pages to provide a consistent memory view for the guest OS; and performing time synchronization between the different machines.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: December 1, 2020
    Assignee: SHANGHAI JIAO TONG UNIVERSITY
    Inventors: Zhuocheng Ding, Yubin Chen, Jin Zhang, Yun Wang, Weiye Chen, Zhengwei Qi, Haibing Guan
  • Patent number: 10749812
    Abstract: The present invention relates to a Data Center Network (DCN) flow scheduling scheme. It provides a dynamic scheduling algorithm and a hybrid centralized/decentralized scheduling system to improve the performance of the DCN and of data-parallel applications. The scheduling system uses a central controller to collect the real-time bandwidth of each node and to schedule the priority and transmission rate of each network flow set grouped by application context (Coflow [1]). Compared with fully decentralized solutions, the centralized scheduling avoids sophisticated system design and hardware (switch) modification. The combination of centralization and decentralization decreases the average completion time of Coflows and ultimately improves the performance of data-parallel applications.
    Type: Grant
    Filed: June 21, 2016
    Date of Patent: August 18, 2020
    Assignee: SHANGHAI JIAO TONG UNIVERSITY
    Inventors: Zhouwang Fu, Tao Song, Haibing Guan, Zhengwei Qi, Ruhui Ma, Jianguo Yao
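A toy sketch of what the central controller in this entry might compute: order coflows smallest-first (a common way to shrink average completion time) and grant the head coflow the full link rate. The function, the smallest-first rule, and the single-link model are hypothetical simplifications of the hybrid scheme, not the patented algorithm.

```python
def schedule_coflows(coflow_bytes, link_capacity):
    """Assign priorities smallest-coflow-first and give the highest-priority
    coflow the whole link rate; everyone else waits."""
    order = sorted(coflow_bytes, key=coflow_bytes.get)  # smallest first
    priorities = {c: i for i, c in enumerate(order)}
    rates = {c: 0.0 for c in coflow_bytes}
    rates[order[0]] = link_capacity
    return priorities, rates

prio, rates = schedule_coflows({"A": 900, "B": 100, "C": 400},
                               link_capacity=10.0)
```

In the real system the controller would recompute this from the live per-node bandwidth it collects, and end hosts would enforce the rates in a decentralized way.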
  • Publication number: 20200174817
    Abstract: Described herein is a method for resource aggregation (many-to-one virtualization), comprising: virtualizing the CPU with QEMU in a distributed way; organizing a plurality of memories scattered over different machines as pages to provide a consistent memory view for the guest OS; and performing time synchronization between the different machines.
    Type: Application
    Filed: December 17, 2018
    Publication date: June 4, 2020
    Inventors: Zhuocheng DING, Yubin CHEN, Jin ZHANG, Yun WANG, Weiye CHEN, Zhengwei QI, Haibing GUAN
  • Publication number: 20200073703
    Abstract: The present invention discloses an apparatus and a method for virtual machine scheduling in a non-uniform memory access (NUMA) architecture.
    Type: Application
    Filed: October 18, 2017
    Publication date: March 5, 2020
    Inventors: Haibing GUAN, Ruhui MA, Jian LI, Zhengwei QI, Junsheng TAN
  • Patent number: 10430991
    Abstract: Described herein is a method for optimizing scalable GPU virtualization, comprising: providing each vGPU with a private shadow graphics translation table (GTT); and copying a vGPU's private shadow GTT to the physical GTT along with the context switch, wherein the private shadow GTT allows vGPUs to share an overlapped range of the global graphics memory space.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: October 1, 2019
    Assignee: Shanghai Jiao Tong University
    Inventors: Jiacheng Ma, Haibing Guan, Zhengwei Qi, Yongbiao Chen
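The private-shadow-GTT idea in this entry can be pictured as follows: each vGPU keeps its own shadow table covering the same (overlapping) global range, and a context switch copies the incoming vGPU's entries into the physical GTT. Table sizes and entry values below are made up purely for illustration.

```python
GTT_ENTRIES = 8

physical_gtt = [0] * GTT_ENTRIES

# Each vGPU's private shadow GTT maps the SAME global range, so the
# vGPUs can overlap without conflicting.
shadow_gtts = {
    "vgpu0": [0x100 + i for i in range(GTT_ENTRIES)],
    "vgpu1": [0x200 + i for i in range(GTT_ENTRIES)],
}

def context_switch(vgpu_id):
    """Load the incoming vGPU's private shadow GTT into the physical GTT."""
    physical_gtt[:] = shadow_gtts[vgpu_id]

context_switch("vgpu0")
first_after_vgpu0 = physical_gtt[0]
context_switch("vgpu1")
first_after_vgpu1 = physical_gtt[0]
```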
  • Publication number: 20190228557
    Abstract: Described herein is a method for optimizing scalable GPU virtualization, comprising: providing each vGPU with a private shadow graphics translation table (GTT); and copying a vGPU's private shadow GTT to the physical GTT along with the context switch, wherein the private shadow GTT allows vGPUs to share an overlapped range of the global graphics memory space.
    Type: Application
    Filed: February 8, 2018
    Publication date: July 25, 2019
    Inventors: Jiacheng MA, Haibing GUAN, Zhengwei QI, Yongbiao CHEN
  • Publication number: 20190089645
    Abstract: The present invention relates to a Data Center Network (DCN) flow scheduling scheme. It provides a dynamic scheduling algorithm and a hybrid centralized/decentralized scheduling system to improve the performance of the DCN and of data-parallel applications. The scheduling system uses a central controller to collect the real-time bandwidth of each node and to schedule the priority and transmission rate of each network flow set grouped by application context (Coflow [1]). Compared with fully decentralized solutions, the centralized scheduling avoids sophisticated system design and hardware (switch) modification. The combination of centralization and decentralization decreases the average completion time of Coflows and ultimately improves the performance of data-parallel applications.
    Type: Application
    Filed: June 21, 2016
    Publication date: March 21, 2019
    Inventors: Zhouwang Fu, Tao Song, Haibing Guan, Zhengwei Qi, Ruhui Ma, Jianguo Yao
  • Publication number: 20180246770
    Abstract: A physical Graphics Processing Unit (GPU) resource scheduling system and method between virtual machines, based on instant effect feedback from a guest application, are provided. An agent is inserted, via hooking, between the host physical GPU HostOps dispatch and the host physical GPU guest application interface. It delays sending instructions and data in the HostOps dispatch, monitors the relevant display performance of the GPU guest application in the virtual machine and the utilization of physical GPU resources, and feeds this information back to any time- or time-sequence-based GPU resource scheduling algorithm. With the agent, the method requires no modification to the virtual machine guest application, the host operating system, the virtual machine operating system, the GPU driver, or the virtual machine manager. The present invention does not need to stop a running machine.
    Type: Application
    Filed: June 19, 2013
    Publication date: August 30, 2018
    Inventors: Miao Yu, Zhengwei Qi, Haibing Guan, Yin Wang
  • Publication number: 20160323427
    Abstract: The present invention provides a dual-machine hot-standby disaster tolerance system for network services in a virtualized environment. The system comprises a main server and a standby server connected via a network; a main VM runs on the main server, and a standby VM runs on the standby server. The standby VM is kept in a state equivalent to the main VM at the level of application-layer semantics, meaning that it can serve in place of the main server and generate the correct output for any client request. The outputs of the main VM and the standby VM are compared according to this equivalence rule to determine whether a backup is needed, efficiently reducing the backup frequency and improving system performance while still ensuring rapid recovery; the present invention greatly reduces system overhead and increases system throughput.
    Type: Application
    Filed: July 28, 2014
    Publication date: November 3, 2016
    Inventors: Haibing Guan, Ruhui Ma, Jian Li, Zhengwei Qi, Zhengyu Qian
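The backup-reduction idea in this entry hinges on an equivalence rule over outputs: trigger a state backup only when the standby's output is no longer an acceptable substitute for the main VM's. The function, the response shape, and the ignore-the-timestamp rule below are hypothetical stand-ins for whatever application-layer comparison the system actually uses.

```python
def needs_backup(main_output, standby_output, equivalent):
    """Back up only when the standby's output diverges semantically."""
    return not equivalent(main_output, standby_output)

# Illustrative rule: HTTP-like responses match if status and body agree,
# ignoring a per-server timestamp field that legitimately differs.
def same_response(a, b):
    return a["status"] == b["status"] and a["body"] == b["body"]

in_sync = needs_backup({"status": 200, "body": "hi", "ts": 1},
                       {"status": 200, "body": "hi", "ts": 2}, same_response)
diverged = needs_backup({"status": 200, "body": "hi", "ts": 1},
                        {"status": 500, "body": "err", "ts": 2}, same_response)
```

Skipping the backup whenever the rule says the outputs are interchangeable is what cuts the backup frequency while keeping recovery correct.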
  • Patent number: 9286127
    Abstract: The present invention discloses a method for allocating processor resources precisely by means of predictive scheduling based on current credits. The run queue of the Credit scheduler comprises virtual central processing units (VCPUs) with UNDER priority at the head of the queue, VCPUs with OVER priority, VCPUs with IDLE priority at the end of the queue, and a wait queue holding all VCPUs with overdrawn credits. Based on the credit values of the VCPUs, the method predicts when credits will be overdrawn and sets a timer that fires at that time to notify the Credit scheduler to stop scheduling the corresponding VCPU. The method thus effectively controls credit consumption and achieves precise allocation of processor resources. It is suitable for multi-core environments and retains the advantages of the existing Credit scheduler: quick response for small task loads and load balancing.
    Type: Grant
    Filed: July 5, 2013
    Date of Patent: March 15, 2016
    Assignee: Shanghai Jiao Tong University
    Inventors: Haibing Guan, Jian Li, Ruhui Ma, Zhengwei Qi, Shuangshuai Jia
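The prediction step in this entry reduces to simple arithmetic: given a VCPU's current credits and its consumption rate, compute when the credits hit zero and arm a timer for exactly that moment instead of waiting for the next accounting tick. The function name, units, and numbers are illustrative, not the patent's.

```python
def predict_overdraw_s(current_credits, consumption_rate_per_s):
    """Seconds until this VCPU's credits are overdrawn, or None if it
    is not consuming credits at all."""
    if consumption_rate_per_s <= 0:
        return None  # never overdraws at this rate
    return current_credits / consumption_rate_per_s

# The scheduler would arm a one-shot timer with this deadline; when it
# fires, the VCPU is descheduled and parked on the wait queue.
deadline = predict_overdraw_s(current_credits=300, consumption_rate_per_s=100)
```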
  • Publication number: 20150339170
    Abstract: The present invention discloses a method for allocating processor resources precisely by means of predictive scheduling based on current credits. The run queue of the Credit scheduler comprises VCPUs with UNDER priority at the head of the queue, VCPUs with OVER priority, VCPUs with IDLE priority at the end of the queue, and a wait queue holding all VCPUs with overdrawn credits. Based on the credit values of the VCPUs, the method predicts when credits will be overdrawn and sets a timer that fires at that time to notify the Credit scheduler to stop scheduling the corresponding VCPU. The method thus effectively controls credit consumption and achieves precise allocation of processor resources. It is suitable for multi-core environments and retains the advantages of the existing Credit scheduler: quick response for small task loads and load balancing.
    Type: Application
    Filed: July 5, 2013
    Publication date: November 26, 2015
    Inventors: Haibing Guan, Jian Li, Ruhui Ma, Zhengwei Qi, Shuangshuai Jia