Patents by Inventor Haibing Guan
Haibing Guan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11754489
Abstract: A pulse baseline value calculation method and a particle counting method for a blood cell analyzer. In the pulse baseline value calculation method, within the pulse non-duration time, if the absolute difference between any two adjacent data points among n consecutive sampled data is less than a baseline threshold, and those n consecutive sampled data are the ones closest to a pulse starting point, the average of the n sampled data is calculated and taken as the pulse baseline value. By setting the baseline threshold and performing the comparison, the invention avoids baseline samples on which noise is superimposed and selects samples whose noise or interference lies within an allowable range; this keeps noise from accumulating in the final baseline value, brings the baseline value closer to the real data, greatly reduces erroneous baseline judgments, and makes the particle count more accurate.
Type: Grant
Filed: March 29, 2017
Date of Patent: September 12, 2023
Assignee: LEADWAY (HK) LIMITED
Inventors: Haibing Guan, Yu Xu
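The selection rule described in the abstract can be sketched as a short routine. The backward search direction and the `None` return convention are assumptions for illustration; the abstract fixes only the acceptance criterion (all adjacent differences below the threshold, window closest to the pulse start):

```python
def pulse_baseline(samples, n, baseline_threshold, pulse_start):
    """Scan backward from the pulse starting point for the window of n
    consecutive samples (the one closest to the pulse start) whose adjacent
    differences all stay below the baseline threshold, then average it."""
    for end in range(pulse_start, n - 1, -1):
        window = samples[end - n:end]
        if all(abs(window[i + 1] - window[i]) < baseline_threshold
               for i in range(n - 1)):
            return sum(window) / n
    return None  # no quiet window found before the pulse
```

For example, with a pulse starting at index 6, the four samples just before it that vary by less than the threshold are averaged into the baseline, while noisier earlier windows are skipped.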
-
Publication number: 20230196121
Abstract: A federated learning method, device, and system are provided to improve the robustness of the federated learning system. The method includes: a first client receives a first value of a parameter of a machine learning model from a server, where the first client is one of a plurality of clients; when the first value of the parameter does not meet a first condition, the first client performs a current round of training based on first training data, the machine learning model, and a local value of the parameter, to obtain a training result of the current round, where the first training data is data reserved on the first client; and the first client sends the training result and alarm information to the server, where the alarm information indicates that the first value of the parameter does not meet the requirement.
Type: Application
Filed: February 10, 2023
Publication date: June 22, 2023
Inventors: Tao SONG, Hanxi GUO, Ruhui MA, Haibing GUAN, Xiulang Jin
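A minimal sketch of the client-side round described above, assuming the condition check, the training routine, and the message format are supplied by the caller (none of these are specified in the abstract):

```python
def client_round(server_value, local_value, train_fn, meets_condition):
    """One training round on the first client: if the server-supplied
    parameter value fails the condition, train from the locally kept value
    instead and flag the anomaly to the server via alarm information."""
    if meets_condition(server_value):
        return {"result": train_fn(server_value), "alarm": False}
    # Fall back to the client's local parameter value; the training data
    # never leaves the client in either branch.
    return {"result": train_fn(local_value), "alarm": True}
```

The alarm flag is what lets the server distinguish a normal update from one computed after a suspect global parameter was rejected.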
-
Patent number: 11599789
Abstract: The present invention discloses a deep learning application optimization framework based on a hierarchical, highly heterogeneous distributed system, and relates to the field of deep learning in computational science. The framework comprises a running preparation stage and a running stage. The running preparation stage performs deep neural network training. The running stage assigns tasks to the various devices in the distributed system and uses a data encryption module to protect the privacy of user-sensitive data.
Type: Grant
Filed: August 2, 2018
Date of Patent: March 7, 2023
Assignee: SHANGHAI JIAO TONG UNIVERSITY
Inventors: Ruhui Ma, Zongpu Zhang, Tao Song, Yang Hua, Haibing Guan
-
Publication number: 20220350635
Abstract: A request-response based paravirtualized I/O system and method, relating to the fields of virtualization and cloud computing, includes a request-response application, a front-end drive module, and a back-end drive module. The front-end drive module and the back-end drive module interact by means of a transmit queue and a receive queue. The request-response application generates an I/O request, and the front-end drive module writes the I/O request into the transmit queue. The system has two operating modes, a notification mode and a polling mode, and operates by default in the notification mode. When the request-response application issues a connection establishment or service request, the system switches to the polling mode. This system and method introduce an optimistic polling mechanism that combines the advantages of the notification mode and the polling mode, reducing the number of VM exits and the waste of computing resources, thus improving data path performance.
Type: Application
Filed: June 20, 2022
Publication date: November 3, 2022
Applicant: SHANGHAI JIAO TONG UNIVERSITY
Inventors: Jian LI, Xiaokang HU, Ruhui MA, Haibing GUAN
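The mode-switching behavior can be modeled in a few lines. The request names and the idle fallback below are illustrative assumptions; the abstract specifies only the default notification mode and the switch to polling on connection-establishment or service requests:

```python
class ParavirtIOPath:
    """Toy model of the two operating modes: default notification mode,
    optimistically switching to polling when a connection or service
    request suggests more I/O is about to follow."""
    def __init__(self):
        self.mode = "notification"   # default operating mode
        self.vm_exits = 0

    def submit(self, request):
        if request in ("connect", "service"):
            self.mode = "polling"    # optimistic switch: expect a burst
        if self.mode == "notification":
            self.vm_exits += 1       # kicking the back end costs a VM exit
        # in polling mode the back end picks the request up with no exit

    def idle(self):
        self.mode = "notification"   # assumed fallback once the queue drains
```

The point of the sketch is the accounting: every request submitted in notification mode costs a VM exit, while requests submitted during a polling burst cost none.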
-
Patent number: 11269675
Abstract: The present invention identifies and defines a problem that can arise in the interrupt remapping mechanism under a virtual symmetric multiprocessing environment: Interruptability Holder Preemption (IHP). This problem causes the interrupt remapping mechanism to fail and reduces the I/O performance of virtual machines. To solve the IHP problem, the present invention provides a proactive VCPU comprehensive scheduling method based on interruptability holder information. The method builds on Kernel-based Virtual Machines (KVM), which are widely used at present, and their paravirtualized network models. By globally controlling and analyzing the running state of an interruptability holder while also considering the global scheduling fairness of the system, a VCPU comprehensive scheduling method is established that effectively eliminates the IHP problem and markedly improves the I/O performance of the virtual machines.
Type: Grant
Filed: June 25, 2018
Date of Patent: March 8, 2022
Assignee: Shanghai Jiao Tong University
Inventors: Jian Li, Haibing Guan, Xiaokang Hu, Wang Zhang
-
Patent number: 11221880
Abstract: The present invention provides an adaptive computing resource allocation approach for virtual network functions (VNFs), comprising two steps. Step 1: predict VNFs' real-time computing resource requirements. Step 1.1: offline, profile different types of VNFs to obtain a parameter relation between the required amount of computing resources and the ingress packet rate. Step 1.2: online, monitor the network traffic of each VNF and predict its required amount of computing resources in combination with the parameters from Step 1.1. Step 2: reallocate computing resources based on the VNFs' resource requirements, using either a direct allocation approach or an incremental approach. The approach allocates computing resources based on VNFs' actual requirements and remedies the performance bottlenecks caused by fair allocation.
Type: Grant
Filed: July 4, 2017
Date of Patent: January 11, 2022
Assignee: Shanghai Jiao Tong University
Inventors: Haibing Guan, Ruhui Ma, Jian Li, Xiaokang Hu
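A sketch of the two steps, under the assumption that the offline-profiled relation is linear in the ingress packet rate (the abstract only says a "parameter relation" is obtained) and that the direct approach scales predictions down when they oversubscribe the CPU:

```python
def predict_cpu_share(packet_rate, a, b):
    """Step 1: predict a VNF's required CPU share from its ingress packet
    rate, using per-VNF-type coefficients (a, b) from offline profiling.
    The linear form is an assumption for illustration."""
    return a * packet_rate + b

def reallocate_direct(predictions, total):
    """Step 2, direct allocation: give each VNF its predicted demand,
    scaled down proportionally if the predictions exceed the total."""
    demand = sum(predictions)
    scale = min(1.0, total / demand) if demand else 0.0
    return [p * scale for p in predictions]
```

Unlike fair (equal) allocation, the shares here track each VNF's actual predicted demand, which is what removes the bottleneck on heavily loaded VNFs.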
-
Patent number: 11204798
Abstract: The method includes the following steps. Step 1: obtain the NUMA topology information of the host machine and monitor virtual machine performance events using the kernel PMU. Step 2: run a greedy algorithm to obtain a scheduling decision. Step 3: schedule, according to the scheduling decision, a virtual CPU (VCPU) and the memory of a virtual machine. Step 4: after the scheduling of the virtual machine is complete, return to Step 1 to continue performance monitoring of the virtual machine.
Type: Grant
Filed: October 18, 2017
Date of Patent: December 21, 2021
Assignee: Shanghai Jiao Tong University
Inventors: Haibing Guan, Ruhui Ma, Jian Li, Zhengwei Qi, Junsheng Tan
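The abstract does not say which quantity the greedy algorithm optimizes, so the following is only a hypothetical shape for Step 2: place each VCPU, heaviest load first, on the NUMA node with the most remaining capacity, producing one decision map:

```python
def greedy_schedule(vcpu_loads, node_capacity):
    """Hypothetical greedy placement: pin each VCPU (heaviest first) to
    the NUMA node with the most remaining capacity. The criterion is an
    assumption; the patent only states that a greedy algorithm yields
    the scheduling decision."""
    free = dict(node_capacity)
    decision = {}
    for vcpu, load in sorted(vcpu_loads.items(), key=lambda kv: -kv[1]):
        node = max(free, key=free.get)   # most free capacity wins
        decision[vcpu] = node
        free[node] -= load
    return decision
```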
-
Publication number: 20210350220
Abstract: The present invention discloses a deep learning application optimization framework based on a hierarchical, highly heterogeneous distributed system, and relates to the field of deep learning in computational science. The framework comprises a running preparation stage and a running stage. The running preparation stage performs deep neural network training. The running stage assigns tasks to the various devices in the distributed system and uses a data encryption module to protect the privacy of user-sensitive data.
Type: Application
Filed: August 2, 2018
Publication date: November 11, 2021
Inventors: Ruhui MA, Zongpu ZHANG, Tao SONG, Yang HUA, Haibing GUAN
-
Patent number: 11157327
Abstract: The present invention provides a multi-resource scheduling method responding to uncertain demands in a cloud scheduler, where two computation formulas for fairness and efficiency are used as cost functions in an optimization problem. For some change sets with uncertain resource demands, a robust counterpart of the original non-linear optimization problem is computationally tractable. Therefore, the present invention models the features of these sets with uncertain resource demands, i.e., establishes an ellipsoidal uncertainty model. In this model, each coefficient vector is put into a hyper-ellipsoidal space and used as a metric to measure the degree of uncertainty. With the ellipsoidal uncertainty model, the non-linear optimization problem is solved and a resource allocation solution is obtained that can respond to dynamically changing demands.
Type: Grant
Filed: July 13, 2016
Date of Patent: October 26, 2021
Assignee: SHANGHAI JIAO TONG UNIVERSITY
Inventors: Jianguo Yao, Ruhui Ma, Xin Xu, Haibing Guan
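As background for why an ellipsoidal uncertainty set makes the robust counterpart tractable, the standard robust-optimization result (stated here for a linear constraint under the usual assumptions, not taken from the patent text) is:

```latex
For a constraint $a^{\top}x \le b$ whose coefficient vector is only known
to lie in the ellipsoid
\[
\mathcal{E} = \{\,\bar{a} + P u : \|u\|_2 \le 1\,\},
\]
the robust counterpart requires the constraint to hold for every
$a \in \mathcal{E}$. Since
$\max_{\|u\|_2 \le 1} (\bar{a} + P u)^{\top} x
 = \bar{a}^{\top} x + \|P^{\top} x\|_2$,
this is equivalent to the single deterministic second-order cone constraint
\[
\bar{a}^{\top} x + \|P^{\top} x\|_2 \le b .
\]
```

Each uncertain constraint thus collapses to one convex constraint, which is what keeps the robust problem computationally tractable.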
-
Publication number: 20210224135
Abstract: The present invention provides a multi-resource scheduling method responding to uncertain demands in a cloud scheduler, where two computation formulas for fairness and efficiency are used as cost functions in an optimization problem. For some change sets with uncertain resource demands, a robust counterpart of the original non-linear optimization problem is computationally tractable. Therefore, the present invention models the features of these sets with uncertain resource demands, i.e., establishes an ellipsoidal uncertainty model. In this model, each coefficient vector is put into a hyper-ellipsoidal space and used as a metric to measure the degree of uncertainty. With the ellipsoidal uncertainty model, the non-linear optimization problem is solved and a resource allocation solution is obtained that can respond to dynamically changing demands.
Type: Application
Filed: July 13, 2016
Publication date: July 22, 2021
Applicant: Shanghai Jiao Tong University
Inventors: Jianguo YAO, Ruhui MA, Xin XU, Haibing GUAN
-
Publication number: 20210149698
Abstract: The present invention identifies and defines a problem that can arise in the interrupt remapping mechanism under a virtual symmetric multiprocessing environment: Interruptability Holder Preemption (IHP). This problem causes the interrupt remapping mechanism to fail and reduces the I/O performance of virtual machines. To solve the IHP problem, the present invention provides a proactive VCPU comprehensive scheduling method based on interruptability holder information. The method builds on Kernel-based Virtual Machines (KVM), which are widely used at present, and their paravirtualized network models. By globally controlling and analyzing the running state of an interruptability holder while also considering the global scheduling fairness of the system, a VCPU comprehensive scheduling method is established that effectively eliminates the IHP problem and markedly improves the I/O performance of the virtual machines.
Type: Application
Filed: June 25, 2018
Publication date: May 20, 2021
Inventors: Jian LI, Haibing GUAN, Xiaokang HU, Wang ZHANG
-
Patent number: 10922140
Abstract: A physical Graphics Processing Unit (GPU) resource scheduling system and method between virtual machines are provided. An agent is inserted between the physical GPU instruction dispatch and the physical GPU interface through a hooking method. It delays sending the instructions and data in the physical GPU instruction dispatch to the physical GPU interface, monitors a set of GPU conditions of the guest application executing in the virtual machine along with the use of physical GPU hardware resources, and then provides feedback to a GPU resource scheduling algorithm based on time or a time sequence. With the agent, the method requires no modification to the guest application of the virtual machine, the host operating system, the virtual machine operating system, the GPU driver, or the virtual machine manager.
Type: Grant
Filed: June 19, 2013
Date of Patent: February 16, 2021
Assignee: SHANGHAI JIAOTONG UNIVERSITY
Inventors: Miao Yu, Zhengwei Qi, Haibing Guan, Yin Wang
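The hooking idea can be sketched as a wrapper around the dispatch function. The function names and the delay-in-seconds policy interface are assumptions for illustration; the abstract describes only the monitor/delay/forward roles of the agent:

```python
import time

def make_hooked_dispatch(real_dispatch, monitor, scheduler):
    """Wrap the physical GPU instruction dispatch: the agent observes each
    command, lets a time-based scheduling policy decide how long to hold
    it, then forwards it to the real interface. Nothing in the guest, host
    OS, driver, or VMM needs to change, since only the dispatch path is
    intercepted."""
    def hooked(cmd):
        monitor(cmd)                 # record guest GPU usage for feedback
        delay = scheduler(cmd)       # policy returns a hold-off in seconds
        if delay:
            time.sleep(delay)        # throttle this VM's share of the GPU
        return real_dispatch(cmd)    # finally hand off to the physical GPU
    return hooked
```

Replacing the dispatch entry point with `hooked` is the whole interception; scheduling policy lives entirely in the `scheduler` callback.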
-
Patent number: 10853119
Abstract: Described herein is a method for resource aggregation (many-to-one virtualization), comprising: virtualizing the CPU with QEMU in a distributed way; organizing a plurality of memories scattered over different machines as pages to provide a consistent memory view for the guest OS; and performing time synchronization between the different machines.
Type: Grant
Filed: December 17, 2018
Date of Patent: December 1, 2020
Assignee: SHANGHAI JIAO TONG UNIVERSITY
Inventors: Zhuocheng Ding, Yubin Chen, Jin Zhang, Yun Wang, Weiye Chen, Zhengwei Qi, Haibing Guan
-
Patent number: 10749812
Abstract: The present invention relates to a Data Center Network (DCN) flow scheduling scheme. It provides a dynamic scheduling algorithm and a hybrid centralized and decentralized scheduling system to improve the performance of the DCN and of data parallel applications. The scheduling system uses a central controller to collect the real-time bandwidth of each node, and schedules the priority as well as the transmission rate of each set of network flows grouped by application context (a Coflow [1]). Compared with fully decentralized solutions, the centralized scheduling avoids sophisticated system design and hardware (switch) modification. The combination of centralization and decentralization decreases the average completion time of Coflows and ultimately improves the performance of data parallel applications.
Type: Grant
Filed: June 21, 2016
Date of Patent: August 18, 2020
Assignee: SHANGHAI JIAO TONG UNIVERSITY
Inventors: Zhouwang Fu, Tao Song, Haibing Guan, Zhengwei Qi, Ruhui Ma, Jianguo Yao
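One hypothetical shape for a central scheduling step, since the abstract does not give the priority rule: rank Coflows by remaining bytes (smallest first, a common heuristic for minimizing average completion time) and hand the bottleneck link's bandwidth to the highest-priority Coflow:

```python
def schedule_coflows(coflows, link_bandwidth):
    """Hypothetical central step: sort Coflows by remaining bytes
    (smallest-remaining-first) and assign the whole bottleneck bandwidth
    to the front of the queue; the rest wait. The real patented policy
    is not specified in the abstract."""
    order = sorted(coflows, key=lambda c: c["remaining"])
    rates = {c["id"]: 0.0 for c in coflows}
    if order:
        rates[order[0]["id"]] = link_bandwidth
    return [c["id"] for c in order], rates
```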
-
Publication number: 20200174817
Abstract: Described herein is a method for resource aggregation (many-to-one virtualization), comprising: virtualizing the CPU with QEMU in a distributed way; organizing a plurality of memories scattered over different machines as pages to provide a consistent memory view for the guest OS; and performing time synchronization between the different machines.
Type: Application
Filed: December 17, 2018
Publication date: June 4, 2020
Inventors: Zhuocheng DING, Yubin CHEN, Jin ZHANG, Yun WANG, Weiye CHEN, Zhengwei QI, Haibing GUAN
-
Publication number: 20200125500
Abstract: The present disclosure provides a virtualization method for a device MMU, including: multiplexing the client MMU as the first-layer address translation, in which the client device page table translates a device virtual address into a client physical address; and using the IOMMU to construct the second-layer address translation, in which the IOMMU translates the client physical address into a host physical address through the IO page table of the corresponding device in the IOMMU. The proposed method can efficiently virtualize the device MMU; it combines the IOMMU into Mediated Pass-Through and uses the system IOMMU to perform the second-layer address translation, so that the complicated and inefficient shadow page table is abandoned. The method not only improves the performance of the device MMU under virtualization, but is also simple to implement and completely transparent to the client, making it a universal and efficient solution.
Type: Application
Filed: September 15, 2017
Publication date: April 23, 2020
Inventors: Haibing GUAN, Yu XU, Yaozu DONG, Jianguo YAO
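The two-layer lookup reduces to composing two mappings; modeling each page table as a plain dictionary (an obvious simplification of real multi-level page tables):

```python
def translate(dva, guest_pt, iommu_pt):
    """Two-layer device address translation: the client device page table
    maps a device virtual address to a client (guest) physical address;
    the IOMMU's per-device IO page table then maps that to a host
    physical address."""
    gpa = guest_pt[dva]       # first layer: the multiplexed client MMU
    return iommu_pt[gpa]      # second layer: IOMMU IO page table
```

The point of the design is that the second dictionary is maintained by hardware (the system IOMMU) rather than by a software shadow page table kept in sync by the hypervisor.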
-
Publication number: 20200073703
Abstract: The present invention discloses an apparatus and a method for virtual machine scheduling in a non-uniform memory access (NUMA) architecture.
Type: Application
Filed: October 18, 2017
Publication date: March 5, 2020
Inventors: Haibing GUAN, Ruhui MA, Jian LI, Zhengwei QI, Junsheng TAN
-
Publication number: 20190303203
Abstract: The present invention provides an adaptive computing resource allocation approach for virtual network functions (VNFs), comprising two steps. Step 1: predict VNFs' real-time computing resource requirements. Step 1.1: offline, profile different types of VNFs to obtain a parameter relation between the required amount of computing resources and the ingress packet rate. Step 1.2: online, monitor the network traffic of each VNF and predict its required amount of computing resources in combination with the parameters from Step 1.1. Step 2: reallocate computing resources based on the VNFs' resource requirements, using either a direct allocation approach or an incremental approach. The approach allocates computing resources based on VNFs' actual requirements and remedies the performance bottlenecks caused by fair allocation.
Type: Application
Filed: July 4, 2017
Publication date: October 3, 2019
Inventors: Haibing GUAN, Ruhui MA, Jian LI, Xiaokang HU
-
Patent number: 10430991
Abstract: Described herein is a method for optimizing scalable GPU virtualization, comprising: providing each vGPU with a private shadow graphics translation table (GTT); and copying the vGPU's private shadow GTT to the physical GTT along with the context switch, wherein the private shadow GTT allows vGPUs to share an overlapped range of the global graphics memory space.
Type: Grant
Filed: February 8, 2018
Date of Patent: October 1, 2019
Assignee: Shanghai Jiao Tong University
Inventors: Jiacheng Ma, Haibing Guan, Zhengwei Qi, Yongbiao Chen
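A toy model of the shadow-GTT mechanism, with the tables reduced to flat lists of entries (real GTTs map graphics memory pages, and the copy happens in the context-switch path):

```python
class ShadowGTT:
    """Each vGPU keeps a private shadow GTT covering the same (overlapped)
    global graphics memory range; on a context switch, the incoming
    vGPU's shadow entries are copied onto the single physical GTT."""
    def __init__(self, size):
        self.physical = [None] * size   # the one hardware GTT
        self.shadows = {}               # per-vGPU private shadow tables

    def add_vgpu(self, vgpu, entries):
        self.shadows[vgpu] = list(entries)

    def context_switch(self, vgpu):
        # install shadow -> physical; the ranges may overlap because only
        # the running vGPU's mappings are ever live on the hardware
        self.physical[:] = self.shadows[vgpu]
```

Because only one vGPU's mappings occupy the physical GTT at a time, the number of vGPUs is no longer limited by partitioning the global graphics memory space.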
-
Publication number: 20190228557
Abstract: Described herein is a method for optimizing scalable GPU virtualization, comprising: providing each vGPU with a private shadow graphics translation table (GTT); and copying the vGPU's private shadow GTT to the physical GTT along with the context switch, wherein the private shadow GTT allows vGPUs to share an overlapped range of the global graphics memory space.
Type: Application
Filed: February 8, 2018
Publication date: July 25, 2019
Inventors: Jiacheng MA, Haibing GUAN, Zhengwei QI, Yongbiao CHEN