Patents by Inventor Haoqiang Zheng

Haoqiang Zheng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9785460
    Abstract: A technique is described for managing processor (CPU) resources in a host having virtual machines (VMs) executed thereon. A target size of a VM is determined based on its demand and CPU entitlement. If the VM's current size exceeds the target size, the technique dynamically changes the size of a VM in the host by increasing or decreasing the number of virtual CPUs available to the VM. To “deactivate” virtual CPUs, a high-priority balloon thread is launched and pinned to one of the virtual CPUs targeted for deactivation, and the underlying hypervisor deschedules execution of the virtual CPU accordingly. To “activate” virtual CPUs and increase the number of virtual CPUs available to the VM, the launched balloon thread may be killed.
    Type: Grant
    Filed: May 3, 2013
    Date of Patent: October 10, 2017
    Assignee: VMware, Inc.
    Inventor: Haoqiang Zheng
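    The vCPU resizing described in this abstract can be pictured as a small reconciliation loop: a guest-side agent pins a placeholder ("balloon") task to each virtual CPU it wants deactivated and removes it to reactivate the CPU. The sketch below is illustrative only; spawn_balloon, kill_balloon, and the dict-based bookkeeping are assumptions, not details taken from the patent.

        # Illustrative sketch (Python) of the vCPU ballooning idea; the helper
        # callables spawn_balloon() and kill_balloon() are hypothetical.
        def resize_vm(current_vcpus, target_vcpus, balloons, spawn_balloon, kill_balloon):
            """Reconcile the set of ballooned (deactivated) vCPUs with the target size.

            balloons: dict mapping a deactivated vCPU id to its balloon-thread handle.
            """
            # Deactivate vCPUs above the target by pinning a high-priority balloon
            # thread to each of them; the hypervisor then deschedules those vCPUs.
            for vcpu in range(target_vcpus, current_vcpus):
                if vcpu not in balloons:
                    balloons[vcpu] = spawn_balloon(vcpu)
            # Reactivate vCPUs that fall back under the target by killing their balloons.
            for vcpu in list(balloons):
                if vcpu < target_vcpus:
                    kill_balloon(balloons.pop(vcpu))
            return balloons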
  • Publication number: 20170249186
    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.
    Type: Application
    Filed: May 11, 2017
    Publication date: August 31, 2017
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
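    As a rough illustration of the exclusive placement described in this entry, the sketch below reserves one physical CPU per process of a latency-sensitive container, first moving existing work and interrupt processing elsewhere. The container dictionary shape and the migrate_tasks, steer_interrupts, and pin callables are assumptions made for the example, not a real API.

        # Hedged sketch (Python); the scheduler callables are placeholders.
        def grant_exclusive_affinity(container, free_pcpus, other_pcpus,
                                     migrate_tasks, steer_interrupts, pin):
            """Give each process of a latency-sensitive container its own physical CPU."""
            if not container.get("latency_sensitive"):
                return {}
            placement = {}
            for process_id in container["processes"]:
                pcpu = free_pcpus.pop()              # reserve a physical CPU for this process
                migrate_tasks(pcpu, other_pcpus)     # move existing tasks to other CPUs
                steer_interrupts(pcpu, other_pcpus)  # direct task/interrupt processing elsewhere
                pin(process_id, pcpu)                # the process now runs exclusively here
                placement[process_id] = pcpu
            return placement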
  • Publication number: 20170220381
    Abstract: Techniques for implicit coscheduling of CPUs to improve corun performance of scheduled contexts are described. One technique minimizes skew by implementing corun migrations, and another technique minimizes skew by implementing a corun bonus mechanism. Skew between schedulable contexts may be calculated based on guest progress, where guest progress represents time spent executing guest operating system and guest application code. A non-linear skew catch-up algorithm is described that adjusts the progress of a context when the progress falls far behind its sibling contexts.
    Type: Application
    Filed: April 21, 2017
    Publication date: August 3, 2017
    Inventors: Haoqiang ZHENG, Carl A. WALDSPURGER
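    The skew-based catch-up in this abstract can be illustrated with a simple scoring function: each context's skew is its distance behind the most-progressed sibling, and contexts that fall far behind earn a bonus that grows non-linearly. The quadratic form, the threshold, and the millisecond units below are placeholder assumptions, not the patented formula.

        # Illustrative sketch (Python) of a non-linear skew catch-up bonus.
        def corun_bonus(guest_progress, threshold_ms=1.0, scale=0.5):
            """Compute a per-context scheduling bonus from its skew behind siblings.

            guest_progress: dict mapping context id -> time spent executing guest
            operating system and guest application code (ms).
            """
            leader = max(guest_progress.values())
            bonus = {}
            for ctx, progress in guest_progress.items():
                skew = leader - progress
                # Small skew earns nothing; contexts far behind catch up quickly.
                bonus[ctx] = scale * (skew - threshold_ms) ** 2 if skew > threshold_ms else 0.0
            return bonus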
  • Patent number: 9703589
    Abstract: A host computer has a plurality of containers including a first container executing therein, where the host also includes a physical network interface controller (NIC). A packet handling interrupt is detected upon receipt of a first data packet associated with the first container. If the first virtual machine is latency sensitive, then the packet handling interrupt is processed. If the first virtual machine is not latency sensitive, then the first data packet is queued and processing of the packet handling interrupt is delayed.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: July 11, 2017
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Lenin Singaravelu, Shilpi Agarwal, Daniel Michael Hecht, Garrett Smith
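    The interrupt decision in this abstract amounts to a two-way branch, sketched below with hypothetical process_interrupt and delayed-queue objects; the real hypervisor path is more involved.

        # Minimal sketch (Python); handler and queue names are assumptions.
        def on_packet_interrupt(packet, target_vm, process_interrupt, delayed_packets):
            """Handle a NIC interrupt immediately only for latency-sensitive targets."""
            if target_vm.get("latency_sensitive"):
                process_interrupt(packet)        # process the packet handling interrupt now
            else:
                delayed_packets.append(packet)   # queue the packet; delay interrupt processing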
  • Patent number: 9652298
    Abstract: Systems and techniques are described for power-aware scheduling. One of the techniques includes monitoring execution of a plurality of groups of software threads executing on a physical machine, wherein the physical machine comprises a physical hardware platform that includes a plurality of processor packages having a plurality of package power states, wherein the plurality of package power states includes an independent package power state; obtaining a respective independent power state measure for each of the processor packages, wherein the independent power state measure provides a measure of a percentage of time the processor package spends in the independent package power state; and adjusting an allocation of the plurality of groups of software threads across the plurality of processor packages based in part on the independent power state measures for the packages.
    Type: Grant
    Filed: January 29, 2014
    Date of Patent: May 16, 2017
    Assignee: VMware, Inc.
    Inventors: Qasim Ali, Timothy P. Mann, Haoqiang Zheng
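    One way to picture the allocation step described above is the greedy policy sketched below: thread groups are steered toward packages that already spend little time in the independent (package-wide idle) power state, so lightly loaded packages can remain in that state. The policy and data shapes are illustrative assumptions, not the patented algorithm.

        # Hedged sketch (Python) of power-aware placement across processor packages.
        def place_thread_groups(groups, idle_residency):
            """Map each thread group to a processor package.

            idle_residency: dict mapping package id -> fraction of time (0.0-1.0)
            the package spends in the independent package power state.
            """
            # Prefer packages that are already busy (lowest idle residency first).
            ranked = sorted(idle_residency, key=lambda pkg: idle_residency[pkg])
            return {group: ranked[i % len(ranked)] for i, group in enumerate(groups)}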
  • Patent number: 9652280
    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.
    Type: Grant
    Filed: February 15, 2016
    Date of Patent: May 16, 2017
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Lenin Singaravelu, Shilpi Agarwal, Daniel Michael Hecht, Garrett Smith
  • Patent number: 9632808
    Abstract: Techniques for implicit coscheduling of CPUs to improve corun performance of scheduled contexts are described. One technique minimizes skew by implementing corun migrations, and another technique minimizes skew by implementing a corun bonus mechanism. Skew between schedulable contexts may be calculated based on guest progress, where guest progress represents time spent executing guest operating system and guest application code. A non-linear skew catch-up algorithm is described that adjusts the progress of a context when the progress falls far behind its sibling contexts.
    Type: Grant
    Filed: May 8, 2014
    Date of Patent: April 25, 2017
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Carl A. Waldspurger
  • Patent number: 9552216
    Abstract: A host computer has a plurality of virtual machines executing therein under the control of a hypervisor, where the host also includes a physical network interface controller (NIC). An interrupt controller detects an interrupt generated by the physical NIC, where the interrupt corresponds to a virtual machine. If the virtual machine has exclusive affinity to one or more physical central processing units (CPUs), then the interrupt is forwarded to the virtual machine. If the virtual machine does not have exclusive affinity, then a process in the hypervisor is invoked to forward the interrupt to the virtual machine.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: January 24, 2017
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Lenin Singaravelu, Shilpi Agarwal, Daniel Michael Hecht, Garrett Smith
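    The delivery choice in this abstract is essentially the branch sketched below; forward_directly and forward_via_hypervisor are placeholder names for the two forwarding paths.

        # Minimal sketch (Python); both forwarding callables are hypothetical.
        def deliver_nic_interrupt(interrupt, vm, forward_directly, forward_via_hypervisor):
            """Skip the hypervisor forwarding process for exclusive-affinity VMs."""
            if vm.get("exclusive_affinity"):
                forward_directly(vm, interrupt)         # deliver straight to the VM
            else:
                forward_via_hypervisor(vm, interrupt)   # invoke a hypervisor process to forward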
  • Publication number: 20160224370
    Abstract: A host computer has a virtualization software that supports execution of a plurality of virtual machines, where the virtualization software includes a virtual machine monitor for each of the virtual machines, and where each virtual machine monitor emulates a virtual central processing unit (CPU) for a corresponding virtual machine. A virtual machine monitor halts execution of a virtual CPU of a virtual machine by receiving a first halt instruction from a corresponding virtual machine and determining whether the virtual machine is latency sensitive. If the virtual machine is latency sensitive, then a second halt instruction is issued from the virtual machine monitor to halt a physical CPU on which the virtual CPU executes. If the virtual machine is not latency sensitive, then a system call to a kernel executing on the host computer is executed to indicate to the kernel that the virtual CPU is in an idle state.
    Type: Application
    Filed: April 12, 2016
    Publication date: August 4, 2016
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
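    The halt handling described in this abstract can be pictured as the branch below; halt_pcpu and notify_kernel_idle stand in for hypervisor-internal actions and are not names from the patent.

        # Illustrative sketch (Python) of the VMM's response to a guest halt.
        def on_guest_halt(vcpu, vm, halt_pcpu, notify_kernel_idle):
            """Handle a halt instruction received from a guest virtual CPU."""
            if vm.get("latency_sensitive"):
                halt_pcpu(vcpu["pcpu"])       # issue a second halt for the physical CPU
            else:
                notify_kernel_idle(vcpu)      # system call telling the kernel the vCPU is idle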
  • Patent number: 9396024
    Abstract: Methods, computer programs, and systems for managing thread performance in a computing environment based on cache occupancy are provided. In one embodiment, a computer implemented method assigns a thread performance counter to threads being created to measure the number of cache misses for the threads. The thread performance counter is deduced in one embodiment based on performance counters associated with each core in a processor. The method further calculates a self-thread value as the change in the thread performance counter of a given thread during a predetermined period, and an other-thread value as the sum of all the changes in the thread performance counters for all threads except for the given thread. Further, the method estimates a cache occupancy for the given thread based on a previous occupancy for the given thread, and the calculated self-thread and other-thread values. The estimated cache occupancy is used to assign computing environment resources to the given thread.
    Type: Grant
    Filed: October 14, 2008
    Date of Patent: July 19, 2016
    Assignee: VMware, Inc.
    Inventors: Richard West, Puneet Zaroo, Carl A. Waldspurger, Xiao Zhang, Haoqiang Zheng
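    The occupancy update implied by this abstract is often modeled linearly: a thread's own misses tend to claim cache lines it does not yet hold, while other threads' misses tend to evict lines it does hold. The specific formula below is an illustrative assumption consistent with that description, not necessarily the patented estimator.

        # Hedged sketch (Python) of per-thread cache occupancy estimation.
        def estimate_occupancy(prev_occupancy, self_misses, other_misses, cache_lines):
            """Estimate the cache lines a thread occupies after one sampling period.

            self_misses: change in the thread's own miss counter over the period.
            other_misses: summed change in all other threads' miss counters.
            """
            fill_fraction = prev_occupancy / cache_lines
            occupancy = (prev_occupancy
                         + self_misses * (1.0 - fill_fraction)   # own misses claim free lines
                         - other_misses * fill_fraction)         # others' misses evict our lines
            return max(0.0, min(float(cache_lines), occupancy))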
  • Publication number: 20160162336
    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.
    Type: Application
    Filed: February 15, 2016
    Publication date: June 9, 2016
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
  • Patent number: 9317318
    Abstract: A host computer has a virtualization software that supports execution of a plurality of virtual machines, where the virtualization software includes a virtual machine monitor for each of the virtual machines, and where each virtual machine monitor emulates a virtual central processing unit (CPU) for a corresponding virtual machine. A virtual machine monitor halts execution of a virtual CPU of a virtual machine by receiving a first halt instruction from a corresponding virtual machine and determining whether the virtual machine is latency sensitive. If the virtual machine is latency sensitive, then a second halt instruction is issued from the virtual machine monitor to halt a physical CPU on which the virtual CPU executes. If the virtual machine is not latency sensitive, then a system call to a kernel executing on the host computer is executed to indicate to the kernel that the virtual CPU is in an idle state.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: April 19, 2016
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Lenin Singaravelu, Shilpi Agarwal, Daniel Michael Hecht, Garrett Smith
  • Publication number: 20160085571
    Abstract: Examples perform selection of non-uniform memory access (NUMA) nodes for mapping of virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between various candidate processors and the memory associated with the vCPU, and the size of the working set of the associated memory, and the vCPU scheduler selects an optimal processor for execution of a vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. Some examples contemplate monitoring system characteristics and rescheduling the vCPUs when other placements may provide improved performance and/or efficiency.
    Type: Application
    Filed: September 21, 2014
    Publication date: March 24, 2016
    Inventors: Seongbeom Kim, Haoqiang Zheng, Rajesh Venkatasubramanian, Puneet Zaroo
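    The selection step described in this abstract can be sketched as scoring each candidate physical CPU by the memory access latency it would see, weighted by where the vCPU's working set actually resides; the data shapes and the weighted-average scoring below are assumptions made for illustration.

        # Illustrative sketch (Python) of NUMA-aware vCPU placement.
        def pick_pcpu(candidate_nodes, node_latency, working_set_pages):
            """Choose the candidate CPU with the lowest expected memory access latency.

            candidate_nodes: dict mapping candidate pCPU id -> its NUMA node.
            node_latency: dict mapping (cpu_node, memory_node) -> access latency (ns).
            working_set_pages: dict mapping NUMA node -> pages of the vCPU's memory on it.
            """
            total_pages = sum(working_set_pages.values()) or 1

            def expected_latency(pcpu):
                cpu_node = candidate_nodes[pcpu]
                return sum(node_latency[(cpu_node, mem_node)] * pages
                           for mem_node, pages in working_set_pages.items()) / total_pages

            return min(candidate_nodes, key=expected_latency)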
  • Patent number: 9262198
    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.
    Type: Grant
    Filed: August 25, 2014
    Date of Patent: February 16, 2016
    Assignee: VMware, Inc.
    Inventors: Haoqiang Zheng, Lenin Singaravelu, Shilpi Agarwal, Daniel Michael Hecht, Garrett Smith
  • Publication number: 20150212860
    Abstract: Systems and techniques are described for power-aware scheduling. One of the techniques includes monitoring execution of a plurality of groups of software threads executing on a physical machine, wherein the physical machine comprises a physical hardware platform that includes a plurality of processor packages having a plurality of package power states, wherein the plurality of package power states includes an independent package power state; obtaining a respective independent power state measure for each of the processor packages, wherein the independent power state measure provides a measure of a percentage of time the processor package spends in the independent package power state; and adjusting an allocation of the plurality of groups of software threads across the plurality of processor packages based in part on the independent power state measures for the packages.
    Type: Application
    Filed: January 29, 2014
    Publication date: July 30, 2015
    Applicant: VMware, Inc.
    Inventors: Qasim Ali, Timothy P. Mann, Haoqiang Zheng
  • Publication number: 20150058861
    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.
    Type: Application
    Filed: August 25, 2014
    Publication date: February 26, 2015
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
  • Publication number: 20150058846
    Abstract: A host computer has a virtualization software that supports execution of a plurality of virtual machines, where the virtualization software includes a virtual machine monitor for each of the virtual machines, and where each virtual machine monitor emulates a virtual central processing unit (CPU) for a corresponding virtual machine. A virtual machine monitor halts execution of a virtual CPU of a virtual machine by receiving a first halt instruction from a corresponding virtual machine and determining whether the virtual machine is latency sensitive. If the virtual machine is latency sensitive, then a second halt instruction is issued from the virtual machine monitor to halt a physical CPU on which the virtual CPU executes. If the virtual machine is not latency sensitive, then a system call to a kernel executing on the host computer is executed to indicate to the kernel that the virtual CPU is in an idle state.
    Type: Application
    Filed: August 25, 2014
    Publication date: February 26, 2015
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
  • Publication number: 20150055499
    Abstract: A host computer has a plurality of containers including a first container executing therein, where the host also includes a physical network interface controller (NIC). A packet handling interrupt is detected upon receipt of a first data packet associated with the first container. If the first virtual machine is latency sensitive, then the packet handling interrupt is processed. If the first virtual machine is not latency sensitive, then the first data packet is queued and processing of the packet handling interrupt is delayed.
    Type: Application
    Filed: August 25, 2014
    Publication date: February 26, 2015
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
  • Publication number: 20150058847
    Abstract: A host computer has a plurality of virtual machines executing therein under the control of a hypervisor, where the host also includes a physical network interface controller (NIC). An interrupt controller detects an interrupt generated by the physical NIC, where the interrupt corresponds to a virtual machine. If the virtual machine has exclusive affinity to one or more physical central processing units (CPUs), then the interrupt is forwarded to the virtual machine. If the virtual machine does not have exclusive affinity, then a process in the hypervisor is invoked to forward the interrupt to the virtual machine.
    Type: Application
    Filed: August 25, 2014
    Publication date: February 26, 2015
    Inventors: Haoqiang ZHENG, Lenin SINGARAVELU, Shilpi AGARWAL, Daniel Michael HECHT, Garrett SMITH
  • Publication number: 20140331222
    Abstract: A technique is described for managing processor (CPU) resources in a host having virtual machines (VMs) executed thereon. A target size of a VM is determined based on its demand and CPU entitlement. If the VM's current size exceeds the target size, the technique dynamically changes the size of a VM in the host by increasing or decreasing the number of virtual CPUs available to the VM. To “deactivate” virtual CPUs, a high-priority balloon thread is launched and pinned to one of the virtual CPUs targeted for deactivation, and the underlying hypervisor deschedules execution of the virtual CPU accordingly. To “activate” virtual CPUs and increase the number of virtual CPUs available to the VM, the launched balloon thread may be killed.
    Type: Application
    Filed: May 3, 2013
    Publication date: November 6, 2014
    Applicant: VMWARE, INC.
    Inventor: Haoqiang ZHENG