Patents by Inventor Lan Vu
Lan Vu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240036937
Abstract: Disclosed are aspects of workload selection and placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some aspects, workloads are assigned to virtual graphics processing unit (vGPU)-enabled graphics processing units (GPUs). A number of vGPU placement neural networks are trained to maximize a composite efficiency metric based on workload data and GPU data for a plurality of vGPU placement models. A combined neural network selector is generated using the vGPU placement neural networks, and utilized to assign a workload to a vGPU-enabled GPU.
Type: Application
Filed: October 9, 2023
Publication date: February 1, 2024
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
-
Patent number: 11886898
Abstract: Various aspects are disclosed for graphics processing unit (GPU)-remoting latency aware migration. In some aspects, a host executes a GPU-remoting client that includes a GPU workload. GPU-remoting latencies are identified for hosts of a cluster. A destination host is identified based on having a lower GPU-remoting latency than the host currently executing the GPU-remoting client. The GPU-remoting client is migrated from its current host to the destination host.
Type: Grant
Filed: March 30, 2020
Date of Patent: January 30, 2024
Assignee: VMware, Inc.
Inventors: Lan Vu, Uday Pundalik Kurkure, Hari Sivaraman
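The selection step in this abstract can be illustrated with a minimal sketch (host names and latency values here are invented for the example): the destination is simply the cluster host with the lowest measured GPU-remoting latency, and migration only happens when that beats the client's current host.

```python
def select_destination(current_host, latencies):
    """Pick a migration destination for a GPU-remoting client.

    latencies: dict mapping host name -> measured GPU-remoting latency.
    Returns the host with the lowest latency, but only when it improves
    on the client's current host; otherwise None (no migration).
    """
    best = min(latencies, key=latencies.get)
    if latencies[best] < latencies[current_host]:
        return best
    return None

hosts = {"host-a": 4.1, "host-b": 1.7, "host-c": 2.9}
print(select_destination("host-a", hosts))  # host-b
```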
-
Publication number: 20240015107
Abstract: Disclosed are various embodiments for rate proportional scheduling to reduce packet loss in virtualized network function chains. A congestion monitor executed by a first virtual machine on a host computing device can detect congestion in a receive queue associated with a first virtualized network function implemented by that virtual machine. The congestion monitor can send a pause signal to a rate controller executed by a second virtual machine on the same host computing device. In response to receiving the pause signal, the rate controller can pause the processing of packets by a second virtualized network function implemented by the second virtual machine, reducing congestion in the receive queue of the first virtualized network function.
Type: Application
Filed: October 27, 2022
Publication date: January 11, 2024
Inventors: Avinash Kumar Chaurasia, Lan Vu, Uday Pundalik Kurkure, Hari Sivaraman, Sairam Veeraswamy
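A toy sketch of the monitor/controller interaction described above (the queue-depth threshold and class shapes are assumptions, not the patented mechanism): the monitor watches the downstream receive queue and flips a pause flag on the upstream rate controller, which then holds packets instead of letting them be dropped downstream.

```python
import queue

PAUSE_THRESHOLD = 8  # hypothetical receive-queue depth that counts as congestion

class RateController:
    """Runs with the upstream (second) network function; gates its packets."""
    def __init__(self):
        self.paused = False

    def on_signal(self, pause):
        self.paused = pause

    def process(self, packet, downstream_rx):
        if self.paused:
            return False           # hold the packet upstream rather than
        downstream_rx.put(packet)  # letting it be dropped downstream
        return True

class CongestionMonitor:
    """Runs with the downstream (first) network function; watches its queue."""
    def __init__(self, rx_queue, controller):
        self.rx_queue, self.controller = rx_queue, controller

    def check(self):
        self.controller.on_signal(self.rx_queue.qsize() > PAUSE_THRESHOLD)
```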
-
Patent number: 11816509
Abstract: Disclosed are aspects of workload selection and placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some aspects, workloads are assigned to virtual graphics processing unit (vGPU)-enabled graphics processing units (GPUs) based on a variety of vGPU placement models. A number of vGPU placement neural networks are trained to maximize a composite efficiency metric based on workload data and GPU data for the plurality of vGPU placement models. A combined neural network selector is generated using the vGPU placement neural networks, and utilized to assign a workload to a vGPU-enabled GPU.
Type: Grant
Filed: January 14, 2020
Date of Patent: November 14, 2023
Assignee: VMware, Inc.
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
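The "combined selector over multiple placement models" idea can be sketched in miniature. This is not the patented method: the trained neural networks are replaced here by two trivial heuristics ("pack" and "spread"), and the composite efficiency metric is an invented toy that rewards consolidation. The selector simply tries each model and keeps the placement that scores best.

```python
def composite_efficiency(gpus):
    """Toy composite metric over per-GPU utilizations in [0, 1]: reward
    high utilization on busy GPUs plus a bonus for fully idle GPUs."""
    used = [g for g in gpus if g > 0]
    idle = sum(1 for g in gpus if g == 0)
    return (sum(used) / len(used) if used else 0.0) + 0.1 * idle

def place(load, gpus, policy):
    """Return the GPU index a given placement model picks for `load`."""
    feasible = [i for i, g in enumerate(gpus) if g + load <= 1.0]
    if policy == "pack":  # fill the busiest GPU that still fits
        return max(feasible, key=lambda i: gpus[i])
    return min(feasible, key=lambda i: gpus[i])  # "spread": least loaded

def select_placement(load, gpus):
    """Stand-in for the combined selector: evaluate every placement
    model and keep the one whose outcome maximizes the metric."""
    best_i, best_score = None, float("-inf")
    for policy in ("pack", "spread"):
        i = place(load, gpus, policy)
        trial = gpus.copy()
        trial[i] += load
        score = composite_efficiency(trial)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```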
-
Patent number: 11720408
Abstract: Disclosed are aspects of task assignment for systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some examples, an algorithm is determined based on predetermined virtual machine assignment algorithms. The algorithm optimizes for a predetermined cost function. A virtual machine is queued in an arrival queue for assignment. A graphics configuration of a system is determined. The graphics configuration specifies a number of graphics processing units (GPUs) in the system. The system includes a vGPU enabled GPU. The algorithm is selected based on a correlation between the algorithm and the graphics configuration of the system. The virtual machine is assigned to a run queue based on the selected algorithm.
Type: Grant
Filed: April 24, 2019
Date of Patent: August 8, 2023
Assignee: VMware, Inc.
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu, Anshuj Garg
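A minimal sketch of the selection mechanism described above, under invented assumptions: the configuration-to-algorithm table, the two assignment algorithms, and the per-queue cap are all hypothetical stand-ins for the patented correlation and cost functions.

```python
def shortest_queue(vm, run_queues):
    """Balance load: pick the run queue with the fewest VMs."""
    return min(range(len(run_queues)), key=lambda i: len(run_queues[i]))

def first_fit(vm, run_queues):
    """Consolidate: pick the first queue under a hypothetical cap of 4 VMs."""
    for i, q in enumerate(run_queues):
        if len(q) < 4:
            return i
    return 0

# hypothetical correlation between graphics configuration (number of
# vGPU-enabled GPUs) and the assignment algorithm chosen for it
ALGORITHM_FOR_CONFIG = {1: first_fit, 2: shortest_queue, 4: shortest_queue}

def assign_vm(vm, num_gpus, run_queues):
    """Take a VM from the arrival queue and place it on a run queue
    using the algorithm correlated with the graphics configuration."""
    idx = ALGORITHM_FOR_CONFIG[num_gpus](vm, run_queues)
    run_queues[idx].append(vm)
    return idx
```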
-
Patent number: 11722464
Abstract: A method for symmetric authentication is provided. This method includes generating a first challenge message containing a first string; encrypting the first challenge message; transmitting the encrypted first challenge message to a second device; receiving a first answer message from the second device; decrypting the first answer message; and authenticating the second device based on determining the decrypted first answer message contains the first string. Upon successful authentication of the second device, the method further includes receiving an encrypted second challenge message from the second device; decrypting the encrypted second challenge message; generating a second answer message containing a second string; encrypting the second answer message; and transmitting the encrypted second answer message to the second device.
Type: Grant
Filed: February 28, 2019
Date of Patent: August 8, 2023
Assignee: VMware, Inc.
Inventors: Hari Sivaraman, Uday Kurkure, Lan Vu, Vijayaraghavan Soundararajan
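One direction of this challenge-response exchange can be sketched as follows. Everything here is a simplification: the SHA-256-keystream "cipher" stands in for a real symmetric cipher (a production design would use something like AES-GCM), and the message framing is invented. Device A encrypts a challenge string; device B decrypts it and re-encrypts it under a fresh nonce; A authenticates B if the decrypted answer matches the original string. The mutual step (B challenging A) would repeat the same exchange in the other direction.

```python
import hashlib
import secrets

def keystream_xor(key, nonce, data):
    """Toy XOR stream cipher keyed by SHA-256(key || nonce || counter).
    Encryption and decryption are the same operation. Illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Device:
    def __init__(self, shared_key):
        self.key = shared_key
        self.pending = None  # challenge string awaiting an answer

    def make_challenge(self):
        self.pending = secrets.token_hex(8).encode()  # the challenge string
        nonce = secrets.token_bytes(8)
        return nonce, keystream_xor(self.key, nonce, self.pending)

    def answer(self, nonce, ciphertext):
        plain = keystream_xor(self.key, nonce, ciphertext)  # decrypt challenge
        reply_nonce = secrets.token_bytes(8)
        return reply_nonce, keystream_xor(self.key, reply_nonce, plain)

    def verify(self, nonce, ciphertext):
        return keystream_xor(self.key, nonce, ciphertext) == self.pending

key = secrets.token_bytes(32)
a, b = Device(key), Device(key)
nonce, ct = a.make_challenge()         # A challenges B
print(a.verify(*b.answer(nonce, ct)))  # True: B proved knowledge of the key
```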
-
Patent number: 11586842
Abstract: A system and method for assessing the video quality of a video-based application trains a neural network on training data of video samples, then uses the neural network to assess video from the application and generate subjective video quality information. Data augmentation is performed on video data, which is labeled with at least one subjective quality level, to generate the training data of video samples.
Type: Grant
Filed: March 18, 2020
Date of Patent: February 21, 2023
Assignee: VMware, Inc.
Inventors: Lan Vu, Hari Sivaraman, Uday Pundalik Kurkure, Xuwen Yu
-
Patent number: 11579942
Abstract: Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current virtual GPU (vGPU) profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected based on a vGPU scheduling policy of the destination GPU being identified as a best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU.
Type: Grant
Filed: June 2, 2020
Date of Patent: February 14, 2023
Assignee: VMware, Inc.
Inventors: Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
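A small sketch of "matching order" selection under invented assumptions: the three scheduling policy names and their preference orders are hypothetical, not taken from the patent. Candidate GPUs are first filtered by vGPU-profile compatibility, then scanned in the order the matching list prefers.

```python
# hypothetical matching orders: prefer the VM's own scheduling policy,
# then progressively less similar substitutes
MATCHING_ORDER = {
    "fixed-share": ["fixed-share", "equal-share", "best-effort"],
    "equal-share": ["equal-share", "fixed-share", "best-effort"],
    "best-effort": ["best-effort", "equal-share", "fixed-share"],
}

def pick_destination(vm_profile, vm_policy, gpus):
    """gpus: list of dicts with 'name', 'profiles' (supported vGPU
    profiles), and 'policy' (the GPU's current vGPU scheduling policy).
    Returns the best available destination per the matching order."""
    compatible = [g for g in gpus if vm_profile in g["profiles"]]
    for policy in MATCHING_ORDER[vm_policy]:
        for g in compatible:
            if g["policy"] == policy:
                return g["name"]
    return None  # no compatible GPU at all
```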
-
Patent number: 11568257
Abstract: Method and system for training a neural network. The neural network is split into first and second portions. A k-layer first portion is sent to a client training/inference engine and the second portion is retained by a server training/inference engine. At the splitting point, the kth layer is a one-way function in output computation and has fewer nodes than any other layer of the first portion. The client training/inference engine trains the first portion with input data from a set of training data. The server training/inference engine receives a batch of outputs from the client and applies them to the second portion to train the entire neural network.
Type: Grant
Filed: May 20, 2019
Date of Patent: January 31, 2023
Assignee: VMware, Inc.
Inventors: Lan Vu, Dimitrios Skarlatos, Aravind Bappanadu, Hari Sivaraman, Uday Kurkure, Vijayaraghavan Soundararajan
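The split forward pass can be illustrated with a toy dense network (layer sizes, weights, and the tanh activation are invented for the example; only inference is shown, not the training loop). The point of the split is that only the narrow kth-layer activations ever leave the client.

```python
import math

def dense(x, w):
    """One fully connected layer with tanh activation; w is a list of
    per-output-node weight vectors."""
    return [math.tanh(sum(xi * wij for xi, wij in zip(x, node)))
            for node in w]

class ClientEngine:
    """Holds the first k layers; only the kth layer's (smallest) output
    crosses to the server."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for w in self.layers:
            x = dense(x, w)
        return x  # activations at the split point

class ServerEngine:
    """Holds the remaining layers and finishes the forward pass."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, split_activations):
        x = split_activations
        for w in self.layers:
            x = dense(x, w)
        return x

# 4 inputs -> 3 nodes -> 2 nodes (split point, fewest nodes) on the client;
# 2 -> 3 nodes on the server
client = ClientEngine([[[0.1] * 4] * 3, [[0.2] * 3] * 2])
server = ServerEngine([[[0.3] * 2] * 3])
out = server.forward(client.forward([1.0, 0.5, -0.5, 0.2]))
print(len(out))  # 3
```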
-
Publication number: 20220253341
Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some examples, graphics processing units (GPUs) are identified in a computing environment. Graphics processing requests are received; a graphics processing request includes a GPU memory requirement. The graphics processing requests are processed using a graphics processing request placement model that minimizes the number of GPUs utilized to accommodate the requests. Virtual GPUs (vGPUs) are created to accommodate the graphics processing requests according to the placement model. Each utilized GPU divides its GPU memory to provide a subset of the vGPUs.
Type: Application
Filed: April 29, 2022
Publication date: August 11, 2022
Inventors: Anshuj Garg, Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
-
Publication number: 20220237014
Abstract: Disclosed are aspects of network function placement in virtual graphics processing unit (vGPU)-enabled environments. In one example, a network function request is associated with a network function. A scheduler selects a vGPU-enabled GPU to handle the network function request. The vGPU-enabled GPU is selected in consideration of a network function memory requirement or a network function IO requirement. The network function request is processed using an instance of the network function within a virtual machine that is executed using the selected vGPU-enabled GPU.
Type: Application
Filed: April 7, 2021
Publication date: July 28, 2022
Inventors: Uday Pundalik Kurkure, Sairam Veeraswamy, Hari Sivaraman, Lan Vu, Avinash Kumar Chaurasia
-
Patent number: 11372683
Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. Virtual graphics processing unit (vGPU) data is identified for graphics processing units (GPUs). A configured GPU list and an unconfigured GPU list are generated using the GPU data. The configured GPU list specifies configured vGPU profiles for configured GPUs. The unconfigured GPU list specifies a total GPU memory for unconfigured GPUs. A vGPU request is assigned to a vGPU of a GPU. The GPU is the first fit from the configured GPU list or the unconfigured GPU list that satisfies the GPU memory requirement of the vGPU request.
Type: Grant
Filed: August 26, 2019
Date of Patent: June 28, 2022
Assignee: VMware, Inc.
Inventors: Anshuj Garg, Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
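The two-list first-fit scan can be sketched as follows (the dict layout and the rule that configuring a GPU fixes its profile to the request's memory size are simplifying assumptions): configured GPUs with a matching profile and a free slot are tried first, and only then is a fresh GPU configured from the unconfigured list.

```python
def assign_vgpu(request_mem, configured, unconfigured):
    """First fit for a vGPU request needing `request_mem` GB.

    configured: GPUs already carved into a fixed vGPU profile size,
    each with some free slots. unconfigured: GPUs whose whole memory
    is still available. Both lists are mutated in place."""
    for gpu in configured:
        if gpu["profile_mem"] >= request_mem and gpu["free_slots"] > 0:
            gpu["free_slots"] -= 1
            return gpu["name"]
    for gpu in unconfigured:
        if gpu["total_mem"] >= request_mem:
            # configuring the GPU fixes its profile to the request size
            slots = gpu["total_mem"] // request_mem
            configured.append({"name": gpu["name"],
                               "profile_mem": request_mem,
                               "free_slots": slots - 1})
            unconfigured.remove(gpu)
            return gpu["name"]
    return None  # no GPU satisfies the memory requirement
```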
-
Publication number: 20220138001
Abstract: Various examples are disclosed for generating heatmaps and plotting utilization of hosts in a datacenter environment. A collector virtual machine can rove the datacenter and collect utilization data. The utilization data can be plotted on a heatmap to illustrate utilization hotspots in the datacenter environment.
Type: Application
Filed: January 18, 2022
Publication date: May 5, 2022
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
-
Patent number: 11282179
Abstract: A system and method for assessing video quality of a video-based application inserts frame identifiers (IDs) into video content from the video-based application and recognizes the frame IDs from the video content using a text recognition neural network. Based on the recognized frame IDs, a frames per second (FPS) metric of the video content is calculated. Based on the FPS metric of the video content, the objective video quality of the video-based application is assessed.
Type: Grant
Filed: March 18, 2020
Date of Patent: March 22, 2022
Assignee: VMware, Inc.
Inventors: Lan Vu, Hari Sivaraman, Uday Pundalik Kurkure, Xuwen Yu
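Once frame IDs have been stamped into the video and recognized back out (here the recognition step is assumed to have already produced timestamped ID readings), the FPS calculation itself is just frames delivered per unit of wall-clock time:

```python
def fps_from_frame_ids(samples):
    """samples: (timestamp_seconds, recognized_frame_id) pairs in
    capture order, as produced by the text-recognition step. The FPS
    metric is the frame-ID advance divided by elapsed wall-clock time."""
    (t0, f0), (t1, f1) = samples[0], samples[-1]
    return (f1 - f0) / (t1 - t0)

# frame IDs 100..160 recognized over a 2-second capture window
print(fps_from_frame_ids([(0.0, 100), (1.0, 131), (2.0, 160)]))  # 30.0
```

Dropped or repeated frames on the receiving side lower the recognized-ID advance, which is exactly why the delivered FPS works as an objective quality signal.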
-
Patent number: 11263040
Abstract: Various examples are disclosed for generating heatmaps and plotting utilization of hosts in a datacenter environment. A collector virtual machine can rove the datacenter and collect utilization data. The utilization data can be plotted on a heatmap to illustrate utilization hotspots in the datacenter environment.
Type: Grant
Filed: May 26, 2020
Date of Patent: March 1, 2022
Assignee: VMware, Inc.
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
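The binning step behind such a heatmap can be sketched simply (the rack/slot grid layout and sample format are assumptions for illustration): readings roved up by the collector VM are averaged per host position, and the resulting grid is what a plotting library would render as the heatmap.

```python
def utilization_heatmap(samples, racks, slots):
    """samples: (rack, slot, utilization) readings collected by the
    roving collector VM. Returns a racks x slots grid of mean
    utilization per host position; hotspots are the high cells."""
    sums = [[0.0] * slots for _ in range(racks)]
    counts = [[0] * slots for _ in range(racks)]
    for r, s, u in samples:
        sums[r][s] += u
        counts[r][s] += 1
    return [[sums[r][s] / counts[r][s] if counts[r][s] else 0.0
             for s in range(slots)] for r in range(racks)]
```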
-
Patent number: 11263054
Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU) enabled GPUs of the computing environment, and a plurality of vGPU requests are received. Each vGPU request includes a GPU memory requirement. GPU configurations are determined in order to accommodate the vGPU requests. The GPU configurations are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied for vGPU enabled GPUs, and vGPUs are created based on the configured vGPU profiles. The vGPU requests are assigned to the vGPUs.
Type: Grant
Filed: August 26, 2019
Date of Patent: March 1, 2022
Assignee: VMware, Inc.
Inventors: Anshuj Garg, Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
-
Publication number: 20210373924
Abstract: Various examples are disclosed for generating heatmaps and plotting utilization of hosts in a datacenter environment. A collector virtual machine can rove the datacenter and collect utilization data. The utilization data can be plotted on a heatmap to illustrate utilization hotspots in the datacenter environment.
Type: Application
Filed: May 26, 2020
Publication date: December 2, 2021
Inventors: Hari Sivaraman, Uday Pundalik Kurkure, Lan Vu
-
Publication number: 20210373972
Abstract: Disclosed are aspects of virtual graphics processing unit (vGPU) scheduling-aware virtual machine migration. Graphics processing units (GPUs) that are compatible with a current virtual GPU (vGPU) profile for a virtual machine are identified. A scheduling policy matching order for a migration of the virtual machine is determined based on a current vGPU scheduling policy for the virtual machine. A destination GPU is selected based on a vGPU scheduling policy of the destination GPU being identified as a best available vGPU scheduling policy according to the scheduling policy matching order. The virtual machine is migrated to the destination GPU.
Type: Application
Filed: June 2, 2020
Publication date: December 2, 2021
Inventors: Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
-
Publication number: 20210334187
Abstract: A scheme is provided for a processor to measure or estimate the dynamic capacitance (Cdyn) associated with an executing application and take a proportional throttling action. Proportional throttling has significantly less impact on performance and hence presents an opportunity to get back the lost bins and proportionally clip power if it exceeds a specification threshold. The ability to infer a magnitude of power excursion of a power virus event (and hence, the real Cdyn) above a set power threshold limit enables the processor to proportionally adjust the processor operating frequency to bring it back under the limit. With this scheme, the processor distinguishes a small power excursion versus a large one and reacts proportionally, yielding better performance.
Type: Application
Filed: April 28, 2020
Publication date: October 28, 2021
Applicant: Intel Corporation
Inventors: Aman Sewani, Nazar Haider, Ankush Varma, Lan Vu
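The proportionality idea can be sketched numerically (the linear power model and the example figures are assumptions, not taken from the filing): since dynamic power scales roughly with Cdyn x V^2 x f, at fixed voltage an excursion of estimated Cdyn over the limit can be clipped by scaling frequency down by the same ratio, rather than dropping to a single worst-case throttled frequency.

```python
def throttle_frequency(freq_ghz, cdyn_est, cdyn_limit):
    """Proportional throttling sketch: scale frequency by the ratio of
    the Cdyn limit to the estimated Cdyn, and only when there is an
    actual excursion. Small excursions get small frequency cuts."""
    if cdyn_est <= cdyn_limit:
        return freq_ghz  # no excursion: leave frequency alone
    return freq_ghz * (cdyn_limit / cdyn_est)

print(throttle_frequency(3.0, 0.9, 1.0))  # no excursion: stays at 3.0 GHz
print(throttle_frequency(3.0, 1.2, 1.0))  # small excursion: ~2.5 GHz
print(throttle_frequency(3.0, 3.0, 1.0))  # power-virus level: ~1.0 GHz
```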
-
Publication number: 20210303327
Abstract: Various aspects are disclosed for graphics processing unit (GPU)-remoting latency aware migration. In some aspects, a host executes a GPU-remoting client that includes a GPU workload. GPU-remoting latencies are identified for hosts of a cluster. A destination host is identified based on having a lower GPU-remoting latency than the host currently executing the GPU-remoting client. The GPU-remoting client is migrated from its current host to the destination host.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Lan Vu, Uday Pundalik Kurkure, Hari Sivaraman