Patents by Inventor Xunjia LU

Xunjia LU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240385869
    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
    Type: Application
    Filed: July 29, 2024
    Publication date: November 21, 2024
    Inventors: Xunjia Lu, Bi Wu, Petr Vandrovec, Haoqiang Zheng
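The hot-add logic described in the abstract above can be sketched as follows. This is a minimal illustration, not VMware's implementation; the class and field names are hypothetical, and real node sizing would also account for memory regions and topology exposure to the guest.

```python
class VirtualNumaTopology:
    """Toy model of a virtual NUMA topology with a fixed per-node size."""

    def __init__(self, max_vcpus_per_node, max_mem_per_node):
        # The computed node size: caps on vCPUs and memory per virtual node.
        self.max_vcpus = max_vcpus_per_node
        self.max_mem = max_mem_per_node
        self.nodes = [{"vcpus": 0, "mem": 0}]

    def hot_add_vcpu(self):
        # Check whether any existing node still has vCPU capacity.
        for node in self.nodes:
            if node["vcpus"] < self.max_vcpus:
                node["vcpus"] += 1
                return node
        # All existing nodes are full: create a new node for the hot-added vCPU.
        new_node = {"vcpus": 1, "mem": 0}
        self.nodes.append(new_node)
        return new_node
```

For example, with a node size of 4 vCPUs, hot-adding a fifth vCPU fills the first node and then spills into a newly created second node.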
  • Patent number: 12086622
    Abstract: Techniques for optimizing virtual machine (VM) scheduling on a non-uniform cache access (NUCA) system are provided. In one set of embodiments, a hypervisor of the NUCA system can partition the virtual CPUs of each VM running on the system into logical constructs referred to as last level cache (LLC) groups, where each LLC group is sized to match (or at least not exceed) the LLC domain size of the system. The hypervisor can then place/load balance the virtual CPUs of each VM on the system's cores in a manner that attempts to keep virtual CPUs which are part of the same LLC group within the same LLC domain, subject to various factors such as compute load, cache contention, and so on.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: September 10, 2024
    Assignee: VMware LLC
    Inventors: Xunjia Lu, Haoqiang Zheng, Yifan Hao
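The partitioning step in the abstract above can be illustrated with a simple helper. This is a hypothetical sketch, not VMware's code; it only shows the sizing rule that no LLC group exceeds the host's LLC domain size, while the actual placement/load balancing would weigh compute load and cache contention.

```python
def partition_into_llc_groups(vcpus, llc_domain_size):
    """Split a VM's vCPUs into LLC groups of at most llc_domain_size each,
    so every group can fit within one last-level-cache domain."""
    return [vcpus[i:i + llc_domain_size]
            for i in range(0, len(vcpus), llc_domain_size)]
```

A 10-vCPU VM on a host with 4 cores per LLC domain would yield groups of 4, 4, and 2 vCPUs.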
  • Patent number: 12050927
    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: July 30, 2024
    Assignee: VMware LLC
    Inventors: Xunjia Lu, Bi Wu, Petr Vandrovec, Haoqiang Zheng
  • Publication number: 20240192976
    Abstract: Various approaches for exposing a virtual Non-Uniform Memory Access (NUMA) locality table to the guest OS of a VM running on a NUMA system are provided. These approaches provide different tradeoffs between the accuracy of the virtual NUMA locality table and the ability of the system's hypervisor to migrate virtual NUMA nodes, with the general goal of enabling the guest OS to make more informed task placement/memory allocation decisions.
    Type: Application
    Filed: February 20, 2024
    Publication date: June 13, 2024
    Inventors: Timothy Merrifield, Petr Vandrovec, Xunjia Lu, James White
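A NUMA locality table like the one described above can be modeled on ACPI SLIT conventions, where lower distance values mean closer memory. The table below is a hypothetical two-node example, not taken from the patent; it simply shows how a guest OS could consult such a table to prefer local allocations.

```python
# Hypothetical 2-node distance table (SLIT-style: 10 = local, 20 = remote).
locality_table = {
    (0, 0): 10, (0, 1): 20,
    (1, 0): 20, (1, 1): 10,
}

def nearest_node(from_node, nodes=(0, 1)):
    """Return the node with the smallest distance from from_node,
    as a guest OS might when choosing where to allocate memory."""
    return min(nodes, key=lambda n: locality_table[(from_node, n)])
```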
  • Patent number: 11941422
    Abstract: Various approaches for exposing a virtual Non-Uniform Memory Access (NUMA) locality table to the guest OS of a VM running on a NUMA system are provided. These approaches provide different tradeoffs between the accuracy of the virtual NUMA locality table and the ability of the system's hypervisor to migrate virtual NUMA nodes, with the general goal of enabling the guest OS to make more informed task placement/memory allocation decisions.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: March 26, 2024
    Assignee: VMware LLC
    Inventors: Timothy Merrifield, Petr Vandrovec, Xunjia Lu, James White
  • Patent number: 11934890
    Abstract: An example method of managing exclusive affinity for threads executing in a virtualized computing system includes: determining, by an exclusive affinity monitor executing in a hypervisor of the virtualized computing system, a set of threads eligible for exclusive affinity; determining, by the exclusive affinity monitor, for each thread in the set of threads, impact on performance of the threads for granting each thread exclusive affinity; and granting, for each thread of the set of threads having an impact on performance of the threads less than a threshold, exclusive affinity to respective physical central processing units (PCPUs) of the virtualized computing system.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: March 19, 2024
    Assignee: VMware LLC
    Inventors: Haoqiang Zheng, Xunjia Lu
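The grant decision in the abstract above reduces to a threshold check per eligible thread. The sketch below is illustrative only; `impact_of` stands in for whatever performance-impact estimate the exclusive affinity monitor computes, which the patent does not specify here.

```python
def grant_exclusive_affinity(threads, impact_of, threshold):
    """Return the subset of eligible threads whose estimated impact on the
    performance of the other threads is below the threshold; each of these
    would then be granted exclusive affinity to a physical CPU."""
    return [t for t in threads if impact_of(t) < threshold]
```

For instance, with per-thread impact estimates {vcpu0: 0.02, io: 0.15, vcpu1: 0.01} and a threshold of 0.05, only the two vCPU threads would be granted exclusive affinity.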
  • Patent number: 11928502
    Abstract: Some embodiments provide a method for scheduling networking threads associated with a data compute node (DCN) executing at a host computer. When a virtual networking device is instantiated for the DCN, the method assigns the virtual networking device to a particular non-uniform memory access (NUMA) node of multiple NUMA nodes associated with the DCN. Based on the assignment of the virtual networking device to the particular NUMA node, the method assigns networking threads associated with the DCN to the same particular NUMA node and provides information to the DCN regarding the particular NUMA node in order for the DCN to assign a thread associated with an application executing on the DCN to the same particular NUMA node.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: March 12, 2024
    Assignee: VMware LLC
    Inventors: Rishi Mehta, Boon S. Ang, Petr Vandrovec, Xunjia Lu
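The co-placement chain in the abstract above (virtual NIC first, then networking threads, then the application thread) can be sketched with a hypothetical bookkeeping class. Names and structure are illustrative assumptions, not the patented implementation.

```python
class NumaPlacement:
    """Toy tracker that keeps networking threads on the same NUMA node
    as the virtual networking device they serve."""

    def __init__(self):
        self.assignments = {}

    def place_vnic(self, vnic, numa_node):
        # Step 1: the virtual networking device is assigned a NUMA node.
        self.assignments[vnic] = numa_node

    def place_networking_thread(self, thread, vnic):
        # Step 2: networking threads follow the vNIC's node, so packet
        # buffers and the threads touching them stay memory-local.
        self.assignments[thread] = self.assignments[vnic]
```

The same node identifier would then be reported to the DCN so that it can pin the application thread there as well.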
  • Publication number: 20240028361
    Abstract: An example method of virtualized cache allocation for a virtualized computing system includes: providing, by a hypervisor for a virtual machine (VM), a virtual shared cache, the virtual shared cache backed by a physical shared cache of a processor; providing, by the hypervisor to the VM, virtual service classes and virtual service class bit masks; mapping, by the hypervisor, the virtual service classes to physical service classes of the processor; associating, by the hypervisor, a shift factor with the virtual service class bit masks with respect to physical service class bit masks of the processor; and configuring, by the hypervisor, service class registers and service class bit mask registers of the processor based on the mapping and the shift factor in response to configuration of the virtual shared cache by the VM.
    Type: Application
    Filed: July 20, 2022
    Publication date: January 25, 2024
    Inventors: Phani Kishore GADEPALLI, Xunjia LU, James Kenneth WHITE, Sam SCALISE
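The shift-factor translation in the abstract above can be shown with a minimal sketch. This is an illustrative simplification: real cache-allocation bit masks (e.g., Intel CAT capacity bitmasks) must also be contiguous and fit the physical mask width, and the class mapping below is hypothetical.

```python
# Hypothetical mapping from virtual service classes to physical ones.
class_map = {0: 3, 1: 4}

def to_physical_mask(virtual_mask, shift_factor):
    """Translate a virtual service-class bit mask into the physical mask by
    shifting it into the slice of the shared cache reserved for this VM."""
    return virtual_mask << shift_factor
```

For example, a virtual mask of `0b0011` with a shift factor of 4 lands on physical cache ways `0b00110000`.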
  • Publication number: 20230289207
    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
    Type: Application
    Filed: May 15, 2023
    Publication date: September 14, 2023
    Inventors: Xunjia Lu, Bi Wu, Petr Vandrovec, Haoqiang Zheng
  • Patent number: 11687356
    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 27, 2023
    Assignee: VMware, Inc.
    Inventors: Xunjia Lu, Bi Wu, Petr Vandrovec, Haoqiang Zheng
  • Patent number: 11656914
    Abstract: Disclosed are various approaches to anticipating future resource consumption based on user sessions. A message comprising a prediction of a future number of concurrent user sessions to be hosted by a virtual machine within a predefined future interval of time is received. It is then determined whether the future number of concurrent user sessions will cause the virtual machine to cross a predefined resource threshold during the predefined future interval of time. Then, a message is sent to a first hypervisor hosting the virtual machine to migrate the virtual machine to a second hypervisor.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: May 23, 2023
    Assignee: VMWARE, INC.
    Inventors: Yao Zhang, Olivier Alain Cremel, Zhelong Pan, Xunjia Lu
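The threshold check described above can be sketched as a single predicate. This is a deliberately simplified illustration with assumed inputs: `load_per_session` collapses whatever per-session resource model the predictor uses into one number, which the abstract does not specify.

```python
def should_migrate(predicted_sessions, load_per_session, capacity_threshold):
    """Return True if the predicted number of concurrent user sessions
    would push the VM past its resource threshold in the coming interval,
    signaling the hypervisor to migrate the VM."""
    predicted_load = predicted_sessions * load_per_session
    return predicted_load > capacity_threshold
```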
  • Publication number: 20230036017
    Abstract: An example method of determining size of virtual last-level cache (LLC) exposed to a virtual machine (VM) supported by a hypervisor executing on a host computer includes: obtaining, by the hypervisor, a host topology of the host computer, the host topology including a number of LLCs in a central processing unit (CPU) of the host computer and a host LLC size being a size of each of the LLCs in the CPU; obtaining, by the hypervisor, a virtual socket size for a virtual socket presented to the VM by the hypervisor and a virtual non-uniform memory access (NUMA) node size presented to the VM by the hypervisor; determining, by the hypervisor, a virtual LLC size for the VM based on the host topology, the virtual socket size, the virtual NUMA node size, and a plurality of constraints; and presenting, to the VM, the virtual LLC size in processor feature discovery information.
    Type: Application
    Filed: July 21, 2021
    Publication date: February 2, 2023
    Inventors: Xunjia LU, Yifan HAO, Sam SCALISE
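The constraint solving in the abstract above can be illustrated with a toy sizing rule, measuring sizes in CPUs. This is a hypothetical simplification of the patented constraints: it only enforces that the virtual LLC is no larger than the host LLC and divides the virtual NUMA node evenly.

```python
def virtual_llc_size(host_llc_size, vsocket_size, vnuma_node_size):
    """Pick a virtual LLC size (in CPUs) bounded by the host LLC size and
    the virtual socket/NUMA node sizes, then shrink it until it evenly
    divides the virtual NUMA node."""
    size = min(host_llc_size, vsocket_size, vnuma_node_size)
    while vnuma_node_size % size != 0:
        size -= 1
    return size
```

With an 8-CPU host LLC, a 16-vCPU virtual socket, and a 12-vCPU virtual NUMA node, this sketch yields a 6-vCPU virtual LLC, the largest size that tiles the node evenly.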
  • Publication number: 20230026837
    Abstract: Techniques for optimizing virtual machine (VM) scheduling on a non-uniform cache access (NUCA) system are provided. In one set of embodiments, a hypervisor of the NUCA system can partition the virtual CPUs of each VM running on the system into logical constructs referred to as last level cache (LLC) groups, where each LLC group is sized to match (or at least not exceed) the LLC domain size of the system. The hypervisor can then place/load balance the virtual CPUs of each VM on the system's cores in a manner that attempts to keep virtual CPUs which are part of the same LLC group within the same LLC domain, subject to various factors such as compute load, cache contention, and so on.
    Type: Application
    Filed: July 23, 2021
    Publication date: January 26, 2023
    Inventors: Xunjia Lu, Haoqiang Zheng, Yifan Hao
  • Publication number: 20230012606
    Abstract: Various approaches for exposing a virtual Non-Uniform Memory Access (NUMA) locality table to the guest OS of a VM running on a NUMA system are provided. These approaches provide different tradeoffs between the accuracy of the virtual NUMA locality table and the ability of the system's hypervisor to migrate virtual NUMA nodes, with the general goal of enabling the guest OS to make more informed task placement/memory allocation decisions.
    Type: Application
    Filed: July 14, 2021
    Publication date: January 19, 2023
    Inventors: Timothy Merrifield, Petr Vandrovec, Xunjia Lu, James White
  • Publication number: 20230015852
    Abstract: An example method of managing exclusive affinity for threads executing in a virtualized computing system includes: determining, by an exclusive affinity monitor executing in a hypervisor of the virtualized computing system, a set of threads eligible for exclusive affinity; determining, by the exclusive affinity monitor, for each thread in the set of threads, impact on performance of the threads for granting each thread exclusive affinity; and granting, for each thread of the set of threads having an impact on performance of the threads less than a threshold, exclusive affinity to respective physical central processing units (PCPUs) of the virtualized computing system.
    Type: Application
    Filed: July 16, 2021
    Publication date: January 19, 2023
    Inventors: Haoqiang ZHENG, Xunjia LU
  • Publication number: 20220350647
    Abstract: Some embodiments provide a method for scheduling networking threads associated with a data compute node (DCN) executing at a host computer. When a virtual networking device is instantiated for the DCN, the method assigns the virtual networking device to a particular non-uniform memory access (NUMA) node of multiple NUMA nodes associated with the DCN. Based on the assignment of the virtual networking device to the particular NUMA node, the method assigns networking threads associated with the DCN to the same particular NUMA node and provides information to the DCN regarding the particular NUMA node in order for the DCN to assign a thread associated with an application executing on the DCN to the same particular NUMA node.
    Type: Application
    Filed: April 29, 2021
    Publication date: November 3, 2022
    Inventors: Rishi Mehta, Boon S. Ang, Petr Vandrovec, Xunjia Lu
  • Patent number: 11429424
    Abstract: A method of selectively assigning virtual CPUs (vCPUs) of a virtual machine (VM) to physical CPUs (pCPUs), where execution of the VM is supported by a hypervisor running on a hardware platform including the pCPUs, includes determining that a first vCPU of the vCPUs is scheduled to execute a latency-sensitive workload of the VM and a second vCPU of the vCPUs is scheduled to execute a non-latency-sensitive workload of the VM and assigning the first vCPU to a first pCPU of the pCPUs and the second vCPU to a second pCPU of the pCPUs. A kernel component of the hypervisor pins the assignment of the first vCPU to the first pCPU and does not pin the assignment of the second vCPU to the second pCPU. The method further comprises selectively tagging or not tagging, by a user or an automated tool, a plurality of workloads of the VM as latency-sensitive.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: August 30, 2022
    Assignee: VMware, Inc.
    Inventors: Xunjia Lu, Haoqiang Zheng
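The selective-pinning idea above can be sketched as a small placement routine. This is an illustrative assumption-laden sketch, not the patented kernel mechanism: it records which vCPU-to-pCPU assignments are pinned (latency-sensitive) versus free for the scheduler to move.

```python
def assign_vcpus(vcpus, latency_sensitive, pcpus):
    """Map each vCPU to a pCPU, pinning only the vCPUs whose workloads
    were tagged latency-sensitive; unpinned vCPUs stay migratable."""
    placement = {}
    for vcpu, pcpu in zip(vcpus, pcpus):
        placement[vcpu] = {"pcpu": pcpu,
                           "pinned": vcpu in latency_sensitive}
    return placement
```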
  • Publication number: 20220075637
    Abstract: Techniques for concurrently supporting virtual non-uniform memory access (virtual NUMA) and CPU/memory hot-add in a virtual machine (VM) are provided. In one set of embodiments, a hypervisor of a host system can compute a node size for a virtual NUMA topology of the VM, where the node size indicates a maximum number of virtual central processing units (vCPUs) and a maximum amount of memory to be included in each virtual NUMA node. The hypervisor can further build and expose the virtual NUMA topology to the VM. Then, at a time of receiving a request to hot-add a new vCPU or memory region to the VM, the hypervisor can check whether all existing nodes in the virtual NUMA topology have reached the maximum number of vCPUs or maximum amount of memory, per the computed node size. If so, the hypervisor can create a new node with the new vCPU or memory region and add the new node to the virtual NUMA topology.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Xunjia Lu, Bi Wu, Petr Vandrovec, Haoqiang Zheng
  • Publication number: 20220027183
    Abstract: A method of selectively assigning virtual CPUs (vCPUs) of a virtual machine (VM) to physical CPUs (pCPUs), where execution of the VM is supported by a hypervisor running on a hardware platform including the pCPUs, includes determining that a first vCPU of the vCPUs is scheduled to execute a latency-sensitive workload of the VM and a second vCPU of the vCPUs is scheduled to execute a non-latency-sensitive workload of the VM and assigning the first vCPU to a first pCPU of the pCPUs and the second vCPU to a second pCPU of the pCPUs. A kernel component of the hypervisor pins the assignment of the first vCPU to the first pCPU and does not pin the assignment of the second vCPU to the second pCPU. The method further comprises selectively tagging or not tagging, by a user or an automated tool, a plurality of workloads of the VM as latency-sensitive.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 27, 2022
    Inventors: Xunjia LU, Haoqiang ZHENG
  • Patent number: 11182183
    Abstract: Disclosed are various embodiments that utilize conflict cost for workload placements in datacenter environments. In some examples, a protected memory level is identified for a computing environment. The computing environment includes a number of processor resources. Incompatible processor workloads are prohibited from concurrently executing on parallel processor resources. Parallel processor resources share memory at the protected memory level. A number of conflict costs are determined for a processor workload. Each conflict cost is determined based on a measure of compatibility between the processor workload and a parallel processor resource that shares a particular memory with the respective processor resource. The processor workload is assigned to execute on a processor resource associated with a minimum conflict cost.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: November 23, 2021
    Assignee: VMWARE, INC.
    Inventors: Xunjia Lu, Haoqiang Zheng, David Dunn, Fred Jacobs
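The minimum-conflict-cost assignment described above amounts to an argmin over candidate resources. The sketch below is hypothetical; `conflict_cost` stands in for the patent's measure of incompatibility between a workload and the processor resources it would share protected-level memory with.

```python
def place_workload(workload, resources, conflict_cost):
    """Assign the workload to the processor resource whose conflict cost
    (incompatibility with its cache-sharing sibling workloads) is lowest."""
    return min(resources, key=lambda r: conflict_cost(workload, r))
```

For example, given costs of 3, 1, and 2 against three candidate cores, the workload lands on the core with cost 1.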