Patents Examined by Jacob D Dascomb
  • Patent number: 11263054
    Abstract: Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify GPU data for a plurality of vGPU-enabled GPUs of the computing environment, and a plurality of vGPU requests is received. Each vGPU request includes a GPU memory requirement. GPU configurations that accommodate the vGPU requests are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied to the vGPU-enabled GPUs, vGPUs are created based on the configured vGPU profiles, and the vGPU requests are assigned to the vGPUs.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: March 1, 2022
    Assignee: VMware, Inc.
    Inventors: Anshuj Garg, Uday Pundalik Kurkure, Hari Sivaraman, Lan Vu
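    Illustrative sketch: a minimal Python rendering of the memory-aware placement problem described above. It substitutes a greedy first-fit-decreasing heuristic for the patent's integer linear programming model, and the GPU/VGPURequest structures and capacities are hypothetical.

      from dataclasses import dataclass, field

      @dataclass
      class GPU:
          """A vGPU-enabled physical GPU with a fixed amount of framebuffer memory."""
          gpu_id: str
          memory_gb: int
          free_gb: int = field(init=False)

          def __post_init__(self):
              self.free_gb = self.memory_gb

      @dataclass
      class VGPURequest:
          """A vGPU request carrying only its GPU memory requirement."""
          request_id: str
          memory_gb: int

      def place_requests(gpus, requests):
          """Greedy first-fit-decreasing stand-in for the ILP placement model:
          the largest memory requirements are assigned first to the first GPU that fits."""
          placement = {}
          for req in sorted(requests, key=lambda r: r.memory_gb, reverse=True):
              for gpu in gpus:
                  if gpu.free_gb >= req.memory_gb:
                      gpu.free_gb -= req.memory_gb
                      placement[req.request_id] = gpu.gpu_id
                      break
              else:
                  placement[req.request_id] = None  # cannot be placed with current capacity
          return placement

      gpus = [GPU("gpu-0", 16), GPU("gpu-1", 16)]
      reqs = [VGPURequest("r1", 8), VGPURequest("r2", 8), VGPURequest("r3", 4), VGPURequest("r4", 16)]
      print(place_requests(gpus, reqs))  # r4 fills one GPU; r3 is left unplaced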
  • Patent number: 11263053
    Abstract: Tags are applied to gather information about an application that has been deployed across a plurality of resources, so that the application's resources can be brought under management and a blueprint for the deployed application can be constructed using information gathered from the tags. A method of identifying resources of a deployed application for management comprises applying tags to currently deployed resources of the application, including virtual machines (VMs); storing tag data in an inventory data store in association with the VMs to which the tags have been applied; searching the inventory data store for VMs to which first tags have been applied, wherein the first tags each have a tag value that identifies the deployed application; searching the inventory data store for second tags that have been applied to those VMs; and adding the resources identified by the first and second tags to a group of application resources to be managed.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: March 1, 2022
    Assignee: VMware, Inc.
    Inventors: Bryan P. Halter, Alexey Intelegator, Chinmay Gore, Maximilian Choly
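    Illustrative sketch: a minimal Python view of the tag-based discovery described above, assuming a hypothetical in-memory inventory keyed by VM name; the tag keys ("app", "db-endpoint", "lb") are invented for illustration.

      # Hypothetical inventory data store: VM name -> {tag key: tag value}.
      INVENTORY = {
          "web-01": {"app": "storefront", "db-endpoint": "db-01", "lb": "lb-01"},
          "web-02": {"app": "storefront", "db-endpoint": "db-01"},
          "batch-07": {"app": "reporting"},
      }

      def vms_with_tag(inventory, key, value):
          """First-tag search: VMs whose tag value identifies the deployed application."""
          return [vm for vm, tags in inventory.items() if tags.get(key) == value]

      def related_resources(inventory, vms):
          """Second-tag search: supporting resources referenced by tags applied to those VMs."""
          resources = set()
          for vm in vms:
              for key, value in inventory[vm].items():
                  if key != "app":  # second tags point at other application resources
                      resources.add(value)
          return resources

      app_vms = vms_with_tag(INVENTORY, "app", "storefront")
      managed_group = set(app_vms) | related_resources(INVENTORY, app_vms)
      print(managed_group)  # resources to bring under management for the blueprint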
  • Patent number: 11243799
    Abstract: An apparatus includes a plurality of virtual machines, a hypervisor coupled to the plurality of virtual machines, and a graphics processing unit (GPU) coupled to the hypervisor. The plurality of virtual machines are allocated a plurality of time slices. The hypervisor initiates a world switch to a first virtual machine of the plurality of virtual machines. The GPU determines whether to adjust the time slice associated with the first virtual machine based on an assessment of time-slice adjustment parameters related to an execution time of at least one of the plurality of virtual machines.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: February 8, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander Fuad Ashkar, Hans Fernlund
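    Illustrative sketch: one possible time-slice adjustment rule in Python. The abstract does not specify the adjustment parameters, so the thresholds, step size, and data structures below are assumptions.

      from dataclasses import dataclass

      @dataclass
      class VMState:
          vm_id: str
          time_slice_us: int       # time slice currently allocated to this VM
          last_exec_us: int = 0    # execution time measured during its last slice

      def adjust_time_slice(vm, min_us=500, max_us=4000, step_us=250):
          """Grow the slice when the VM uses all of it; shrink it when mostly idle."""
          if vm.last_exec_us >= vm.time_slice_us:
              vm.time_slice_us = min(max_us, vm.time_slice_us + step_us)
          elif vm.last_exec_us < vm.time_slice_us // 2:
              vm.time_slice_us = max(min_us, vm.time_slice_us - step_us)
          return vm.time_slice_us

      def world_switch_round(vms, measured_exec_us):
          """After a round of world switches, record execution times and reassess each slice."""
          for vm in vms:
              vm.last_exec_us = measured_exec_us[vm.vm_id]
              adjust_time_slice(vm)

      vms = [VMState("vm-a", 1000), VMState("vm-b", 1000)]
      world_switch_round(vms, {"vm-a": 1000, "vm-b": 200})
      print([(vm.vm_id, vm.time_slice_us) for vm in vms])  # vm-a grows, vm-b shrinks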
  • Patent number: 11237868
    Abstract: Systems and methods for machine learning-based power capping and virtual machine placement in cloud platforms are disclosed. A method includes applying a machine learning model to predict whether a request for deployment of a virtual machine corresponds to deployment of a user-facing (UF) virtual machine or a non-user-facing (NUF) virtual machine. The method further includes sorting a list of candidate servers based on both a chassis score and a server score for each server to determine a ranked list of the candidate servers, where the server score depends at least on whether the deployment request is determined to be for a UF virtual machine or for an NUF virtual machine. The method further includes deploying the virtual machine to the server with the highest rank among the ranked list of the candidate servers.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: February 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ioannis Manousakis, Marcus F. Fontoura, Alok Gautam Kumbhare, Ricardo G. Bianchini, Nithish Mahalingam, Reza Azimi
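    Illustrative sketch: a Python stand-in for the UF/NUF-aware ranking described above. The classifier is stubbed with a role heuristic rather than a trained model, and the score weights and server fields are assumptions.

      from dataclasses import dataclass

      @dataclass
      class Server:
          name: str
          chassis_power_headroom: float  # 0..1, remaining power budget on the chassis
          free_cores: int
          uf_vms_hosted: int             # user-facing VMs already placed on this server

      def predict_user_facing(vm_request):
          """Stand-in for the machine learning model: treat interactive roles as user-facing."""
          return vm_request.get("role") in {"web", "api", "frontend"}

      def rank_servers(servers, vm_request):
          """Rank candidates by a chassis score plus a server score that depends on
          whether the requested VM is predicted to be user-facing."""
          is_uf = predict_user_facing(vm_request)

          def score(s):
              chassis_score = s.chassis_power_headroom
              if is_uf:
                  # user-facing VMs favor power headroom and few UF neighbors
                  server_score = s.free_cores - 4 * s.uf_vms_hosted
              else:
                  # non-user-facing VMs can pack onto busier, cappable chassis
                  server_score = s.free_cores
              return 20 * chassis_score + server_score  # weights are illustrative

          return sorted(servers, key=score, reverse=True)

      servers = [Server("s1", 0.2, 32, 5), Server("s2", 0.8, 16, 1), Server("s3", 0.5, 24, 0)]
      print("UF request  ->", rank_servers(servers, {"role": "web"})[0].name)    # s3
      print("NUF request ->", rank_servers(servers, {"role": "batch"})[0].name)  # s1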
  • Patent number: 11232034
    Abstract: A cache circuit associated with a hypervisor system is disclosed. The cache circuit comprises a cache memory circuit comprising a plurality of cachelines, wherein each cacheline is configured to store data associated with one or more virtual machines (VMs) of a plurality of VMs associated with the hypervisor system, and a plurality of tag array entries respectively associated with the plurality of cachelines. In some embodiments, each tag array entry of the plurality of tag array entries comprises a tag field configured to store a tag identifier (ID) that identifies the main memory circuit address with which the data stored in the corresponding cacheline is associated, and a VM tag field configured to store a VM ID identifying the VM with which the data stored in the corresponding cacheline is associated.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 25, 2022
    Assignee: Infineon Technologies AG
    Inventors: Manoj Kumar Harihar, Romain Ygnace
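    Illustrative sketch: a toy direct-mapped cache model in Python in which every tag array entry carries both an address tag and a VM ID, so a lookup only hits when both match. Line count, line size, and field names are assumptions.

      from dataclasses import dataclass
      from typing import Optional

      LINE_SIZE = 64   # bytes per cacheline (illustrative)
      NUM_LINES = 256  # direct-mapped for simplicity

      @dataclass
      class TagEntry:
          tag: Optional[int] = None     # address tag of the cached main-memory block
          vm_id: Optional[int] = None   # VM the cached data belongs to
          data: Optional[bytes] = None

      class VMTaggedCache:
          """A hit requires both the address tag and the requesting VM's ID to match."""
          def __init__(self):
              self.lines = [TagEntry() for _ in range(NUM_LINES)]

          def _index_and_tag(self, addr):
              index = (addr // LINE_SIZE) % NUM_LINES
              tag = addr // (LINE_SIZE * NUM_LINES)
              return index, tag

          def lookup(self, addr, vm_id):
              index, tag = self._index_and_tag(addr)
              entry = self.lines[index]
              if entry.tag == tag and entry.vm_id == vm_id:
                  return entry.data  # hit
              return None            # miss, or a line owned by another VM

          def fill(self, addr, vm_id, data):
              index, tag = self._index_and_tag(addr)
              self.lines[index] = TagEntry(tag, vm_id, data)

      cache = VMTaggedCache()
      cache.fill(0x1000, vm_id=1, data=b"vm1-data")
      print(cache.lookup(0x1000, vm_id=1))  # b'vm1-data'
      print(cache.lookup(0x1000, vm_id=2))  # None: same address, different VM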
  • Patent number: 11221885
    Abstract: A method for allocating resources for a machine learning model is disclosed. A machine learning model to be executed on a special-purpose machine learning model processor is received. A computational dataflow graph is generated from the machine learning model. The computational dataflow graph represents the machine learning model and includes nodes, connector directed edges, and parameter directed edges. The operations of the computational dataflow graph are scheduled and then compiled using a deterministic instruction set architecture that specifies the functionality of a special-purpose machine learning model processor. An amount of resources required to execute the computational dataflow graph is determined. Resources are allocated based on the determined amount of resources required to execute the machine learning model represented by the computational dataflow graph.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: January 11, 2022
    Assignee: Google LLC
    Inventors: Jonathan Ross, John Michael Stivoric
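    Illustrative sketch: estimating and allocating resources for a computational dataflow graph in Python. The node/edge representation and the cost rules (buffer memory per edge, one compute slot per non-I/O operation) are assumptions, not the patent's instruction set architecture.

      # Hypothetical dataflow graph: nodes carry an op and an output size; edges are
      # (producer, consumer) pairs.
      NODES = {
          "input":   {"op": "read",    "output_bytes": 4 * 1024 * 1024},
          "matmul":  {"op": "matmul",  "output_bytes": 16 * 1024 * 1024},
          "softmax": {"op": "softmax", "output_bytes": 16 * 1024 * 1024},
      }
      EDGES = [("input", "matmul"), ("matmul", "softmax")]

      def required_resources(nodes, edges):
          """Estimate memory as the sum of edge buffers that must be materialized,
          and compute slots as one per non-I/O operation (both rules are illustrative)."""
          memory = sum(nodes[src]["output_bytes"] for src, _ in edges)
          compute_slots = sum(1 for n in nodes.values() if n["op"] != "read")
          return {"memory_bytes": memory, "compute_slots": compute_slots}

      def allocate(available, needed):
          """Grant the allocation only if the device has enough of every resource."""
          if all(available[k] >= v for k, v in needed.items()):
              return {k: available[k] - v for k, v in needed.items()}  # remaining capacity
          raise RuntimeError("insufficient resources for this graph")

      needed = required_resources(NODES, EDGES)
      print(needed)
      print(allocate({"memory_bytes": 64 * 1024 * 1024, "compute_slots": 8}, needed))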
  • Patent number: 11221884
    Abstract: According to one aspect of the present disclosure, a method and technique for hybrid virtual machine configuration management is disclosed. The method includes assigning a first priority to a first set of virtual resources associated with a virtual machine and assigning a second priority, lower than the first priority, to a second set of virtual resources associated with the virtual machine. An operating system of the virtual machine is provided with the first and second priorities assigned to the respective first and second sets of virtual resources. To process a workload, the operating system dispatches the virtual resources from the first set before dispatching the virtual resources from the second set.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Wen-Tzer T. Chen, William A. Maron, Mysore S. Srinivas, Basu Vaidyanathan
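    Illustrative sketch: the priority-ordered dispatch described above, as a few lines of Python. The grouping of vCPUs into two priority sets and the task names are hypothetical.

      from collections import deque

      # Hypothetical virtual CPUs grouped by the priority assigned by the hypervisor.
      FIRST_PRIORITY = deque(["vcpu-0", "vcpu-1"])   # e.g. dedicated/entitled resources
      SECOND_PRIORITY = deque(["vcpu-2", "vcpu-3"])  # e.g. shared/lower-priority resources

      def dispatch(workload_tasks):
          """Assign tasks to first-priority vCPUs first; spill to second-priority
          vCPUs only when the first set is exhausted."""
          assignment = {}
          pools = [FIRST_PRIORITY.copy(), SECOND_PRIORITY.copy()]
          for task in workload_tasks:
              for pool in pools:
                  if pool:
                      assignment[task] = pool.popleft()
                      break
              else:
                  assignment[task] = None  # no virtual resource currently available
          return assignment

      print(dispatch(["t1", "t2", "t3"]))  # t3 lands on a second-priority vCPU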
  • Patent number: 11221880
    Abstract: The present invention provides an adaptive computing resource allocation approach for virtual network functions (VNFs), comprising two steps. Step 1: predict VNFs' real-time computing resource requirements. Step 1.1: profile different types of VNFs offline to obtain the parameter relation between the required amount of computing resources and the ingress packet rate. Step 1.2: monitor each VNF's network traffic information online and predict the VNF's required amount of computing resources using the parameters obtained in Step 1.1. Step 2: reallocate computing resources based on VNFs' resource requirements, using either a direct allocation approach or an incremental approach. This adaptive approach allocates computing resources based on VNFs' actual requirements and remedies the performance bottlenecks caused by fair allocation.
    Type: Grant
    Filed: July 4, 2017
    Date of Patent: January 11, 2022
    Assignee: Shanghai Jiao Tong University
    Inventors: Haibing Guan, Ruhui Ma, Jian Li, Xiaokang Hu
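    Illustrative sketch: the two steps of the abstract in Python, with a per-VNF-type linear relation between ingress packet rate and required CPU share standing in for the offline profile. The profile numbers and the 0.05 increment are invented.

      # Step 1.1 (offline profiling): parameters relating ingress rate (kpps) to CPU share.
      PROFILE = {
          "firewall": {"per_kpps": 0.020, "base": 0.05},
          "nat":      {"per_kpps": 0.012, "base": 0.03},
      }

      def predict_cpu(vnf_type, ingress_kpps):
          """Step 1.2 (online): predict the required CPU share from monitored traffic."""
          p = PROFILE[vnf_type]
          return min(1.0, p["base"] + p["per_kpps"] * ingress_kpps)

      def reallocate_direct(current, required):
          """Direct allocation approach: jump straight to the predicted requirement."""
          return required

      def reallocate_incremental(current, required, step=0.05):
          """Incremental approach: move toward the requirement by a bounded step."""
          if abs(required - current) <= step:
              return required
          return current + step if required > current else current - step

      need = predict_cpu("firewall", ingress_kpps=30)  # 0.05 + 0.020 * 30 = 0.65
      print(need, reallocate_direct(0.3, need), reallocate_incremental(0.3, need))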
  • Patent number: 11216359
    Abstract: Disclosed herein are techniques for identifying sources of software-based malfunctions. Techniques include identifying a potential software malfunction in a system, the system having multiple code sets associated with a plurality of different software sources; accessing a line-of-code behavior and relation model representing execution of functions of the code sets; identifying, based on the line-of-code behavior and relation model, a code set determined to have the potential to cause, at least in part, the potential software malfunction; and determining a source identifier of the identified code set.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: January 4, 2022
    Assignee: Aurora Labs Ltd.
    Inventors: Zohar Fox, Carmit Sahar
  • Patent number: 11194688
    Abstract: Techniques for an optimization service of a service provider network to generate an architecture diagram that represents the architecture of a web-based application. The optimization service may use the architecture diagram to determine modifications or changes to make to the application. For example, the optimization service may compare the architecture diagram with optimized architecture diagrams that represent application best practices, and determine the modifications or changes that would optimize the application and bring it in line with best practices. Further, the optimization service may use the architecture diagram to generate a visualization, and provide the user account with the visualization of the architecture diagram to show users their application architecture.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: December 7, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Malcolm Featonby, Jacob Adam Gabrielson, Kai Fan Tang, John Merrill Phillips, Leslie Johann Lamprecht, Letian Feng, Roberto Pentz De Faria
  • Patent number: 11194632
    Abstract: Methods, systems and computer program products for configuring microservices platforms in one or more computing clusters. In one of the computing clusters, a request to instantiate a microservice platform is received, wherein the request is received in a computing cluster having a first node and a second node, and wherein the first node and second node comprise a first virtualized storage controller and a second virtualized storage controller, respectively. The storage controllers each manage their respective storage pools comprising local storage devices. A first microservice manager is deployed on the first node and a second microservice manager is deployed on the second node.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: December 7, 2021
    Assignee: Nutanix, Inc.
    Inventors: Pravin Singhal, Anand Jayaraman, Aroosh Sohi
  • Patent number: 11188375
    Abstract: Virtual machine mobility for a virtual machine using remote direct memory access (RDMA) connections, including: receiving a virtual machine (VM) mobility request to transfer a virtual machine from a source host to a destination host; migrating application data transfer from an RDMA connection of the virtual machine to a Transmission Control Protocol (TCP) connection of the virtual machine, wherein the RDMA connection and the TCP connection are facilitated by a physical network adapter; migrating the TCP connection to a virtual network adapter of the virtual machine; and transferring the virtual machine from the source host to the destination host.
    Type: Grant
    Filed: August 9, 2019
    Date of Patent: November 30, 2021
    Assignee: International Business Machines Corporation
    Inventors: Vishal Mansur, Srinivas Gundurao, Sivakumar Krishnasamy, Jeffrey Messing
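    Illustrative sketch: the ordering of the migration steps listed above, written as plain Python functions. Each helper is a hypothetical stand-in for hypervisor and network-stack work, not a real API.

      def drain_rdma_to_tcp(vm):
          print(f"{vm}: application data moved from RDMA to TCP on the physical adapter")

      def rebind_tcp_to_virtual_nic(vm):
          print(f"{vm}: TCP connection re-homed onto the VM's virtual network adapter")

      def live_migrate(vm):
          print(f"{vm}: transferred from source host to destination host")

      def migrate_vm_with_rdma(vm):
          """RDMA state is tied to the physical adapter, so traffic is first moved to TCP,
          then onto the virtual adapter, and only then is the VM transferred."""
          drain_rdma_to_tcp(vm)
          rebind_tcp_to_virtual_nic(vm)
          live_migrate(vm)

      migrate_vm_with_rdma("vm-42")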
  • Patent number: 11188376
    Abstract: Devices and techniques are generally described for an edge computing system. In various examples, the edge computing system may comprise a host kernel comprising a kernel-based virtual machine hypervisor. In some examples, the edge computing system may comprise virtualization software effective to communicate with the kernel-based virtual machine hypervisor to execute guest virtual machines. In various further examples, the edge computing system may comprise an engine virtual machine with access to at least one hardware component. The edge computing system may further comprise a control plane virtual machine. The control plane virtual machine may include components effective to receive a first request and determine an application corresponding to the first request, and a virtual machine manager effective to control the virtualization software to generate a virtual machine, configured in accordance with a configuration specific for the virtual machine, for executing the application.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: November 30, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: James Michael Alexander, Aaron Lockey, Oscar Padilla, Aldon Dominic Almeida, Rishikesh Bola Satyanarayan, Maksim Vakhno
  • Patent number: 11182352
    Abstract: In an embodiment, a computer-implemented method for dynamically exchanging runtime state data between datacenters using a controller bridge is disclosed. In an embodiment, the method comprises: requesting, and receiving, one or more first runtime state data from one or more logical sharding central control planes (“CCPs”) controlling one or more logical sharding hosts; requesting, and receiving, one or more second runtime state data from one or more physical sharding CCPs controlling one or more physical sharding hosts; aggregating, to aggregated runtime state data, the one or more first runtime state data and the one or more second runtime state data; determining updated runtime state data based on the aggregated runtime state data, the one or more first runtime state data, and the one or more second runtime state data; and transmitting the updated runtime state data to the logical sharding CCPs and physical sharding CCPs.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: November 23, 2021
    Assignee: VMware, Inc.
    Inventors: Da Wan, Jianjun Shen, Feng Pan, Pankaj Thakkar, Donghai Han
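    Illustrative sketch: the aggregate-and-redistribute cycle of the controller bridge in Python. The shape of the runtime state (a flat key/value map) and the "send each side what it is missing" rule are simplifying assumptions.

      def collect(ccps):
          """Ask each central control plane for the runtime state it currently holds."""
          state = {}
          for ccp in ccps:
              state.update(ccp["runtime_state"])
          return state

      def bridge_sync(logical_ccps, physical_ccps):
          """Aggregate state from both sharding domains, then compute the updates
          each side needs to converge on the aggregated view."""
          logical_state = collect(logical_ccps)
          physical_state = collect(physical_ccps)
          aggregated = {**logical_state, **physical_state}

          updates_for_logical = {k: v for k, v in aggregated.items() if k not in logical_state}
          updates_for_physical = {k: v for k, v in aggregated.items() if k not in physical_state}
          return aggregated, updates_for_logical, updates_for_physical

      logical = [{"runtime_state": {"vm-1": "host-a"}}]
      physical = [{"runtime_state": {"vm-2": "host-b"}}]
      print(bridge_sync(logical, physical))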
  • Patent number: 11163584
    Abstract: Systems and methods can enable select virtual session capabilities on a user device configured to access a virtual session, which is an instance of a virtual machine. The user device can receive, and forward to a gateway server, a request to launch a virtual session. Based on the virtual session launch request, the gateway server can obtain a compliance profile determined from operational data for the user device and compare it to a minimum access policy ("MAP"). The MAP can include threshold or binary values for the states of a group of user device operational aspects. Where the compliance profile satisfies the MAP, the gateway can permit the user device to access a virtual session hosted on a virtual machine ("VM") server. The virtual session can be configured at the VM server based on the compliance profile so as to allow access to a portion of a full virtual session capability scheme.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: November 2, 2021
    Assignee: VMware, Inc.
    Inventors: Sisimon Soman, Vignesh Raja Jayaraman
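    Illustrative sketch: checking a device compliance profile against a minimum access policy made of binary and threshold values, then selecting a capability subset, in Python. The policy fields, thresholds, and capability names are assumptions.

      # Minimum access policy ("MAP"): binary requirements plus numeric thresholds.
      MAP_POLICY = {
          "disk_encrypted": True,   # binary requirement
          "min_os_build": 22000,    # threshold requirements
          "max_days_since_scan": 7,
      }

      def satisfies_map(profile):
          return (profile.get("disk_encrypted") is True
                  and profile.get("os_build", 0) >= MAP_POLICY["min_os_build"]
                  and profile.get("days_since_scan", 999) <= MAP_POLICY["max_days_since_scan"])

      def session_capabilities(profile):
          """Gate the launch on the MAP, then tailor the session to the profile,
          e.g. withhold clipboard and drive redirection from unmanaged devices."""
          if not satisfies_map(profile):
              return None  # gateway refuses the virtual session launch
          caps = {"display", "keyboard", "mouse"}
          if profile.get("managed_device"):
              caps |= {"clipboard", "drive_redirection"}  # full capability scheme
          return caps

      print(session_capabilities({"disk_encrypted": True, "os_build": 22621,
                                  "days_since_scan": 2, "managed_device": False}))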
  • Patent number: 11157461
    Abstract: The present systems and methods generally relate to the elimination or reduction of network traffic required to support operations on a file of any size stored remotely on a file server or network share. More particularly, the present systems and methods relate to encapsulation of a remote file in such a way that the file appears to the local operating system and any local applications to be residing locally, thus overcoming some of the performance issues associated with multiple users accessing a single network share (e.g., CIFS share) and/or a single user remotely accessing a large file.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: October 26, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Randall R. Cook, Kevin Goodman
  • Patent number: 11144370
    Abstract: The present disclosure provides a communication method for virtual machines, an electronic device, and a non-transitory computer-readable storage medium. The communication method, which is suitable for a virtual machine architecture, comprises the steps of: transmitting, through a shared link, an interrupt instruction from a first virtual machine to a second virtual machine; reading, in a shared configuration database, instruction data corresponding to the interrupt instruction by the second virtual machine; and executing the instruction data and transmitting result data through a virtual control plane to the first virtual machine by the second virtual machine, so as to exchange data between the first virtual machine and the second virtual machine through the virtual control plane.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: October 12, 2021
    Assignee: Realtek Semiconductor Corporation
    Inventors: Wei-Chuan Wang, Po-Kai Chuang, Yu-Ting Ting, Chien-Kai Tseng, Tse Ho Lin
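    Illustrative sketch: the exchange described above modeled with two in-process queues standing in for the shared link and the virtual control plane, and a dict standing in for the shared configuration database. All identifiers and the interrupt numbering are hypothetical.

      import queue
      import threading

      shared_link = queue.Queue()    # carries interrupt instructions, VM1 -> VM2
      control_plane = queue.Queue()  # carries result data, VM2 -> VM1
      CONFIG_DB = {0x01: {"action": "read_sensor", "args": {"sensor": "temp"}}}

      def execute(instruction):
          """Toy execution of the instruction data read from the shared database."""
          return {"read_sensor": 42}.get(instruction["action"])

      def vm2_service():
          interrupt_id = shared_link.get()          # receive the interrupt instruction
          instruction = CONFIG_DB[interrupt_id]     # read its instruction data
          control_plane.put(execute(instruction))   # return result via the control plane

      def vm1_send(interrupt_id):
          shared_link.put(interrupt_id)             # VM1 raises the interrupt instruction
          return control_plane.get(timeout=1)       # ...and waits for the result data

      threading.Thread(target=vm2_service, daemon=True).start()
      print(vm1_send(0x01))  # 42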
  • Patent number: 11144423
    Abstract: A method and network device for resource-aware dynamic monitoring of application units are described. Instantiation of a monitoring element is caused. A first usage status of resources is obtained. Configuration parameters of the monitoring element are set based upon the first usage status of the resources. The monitoring element is scheduled based upon the first usage status of the resources. A second usage status of the resources is obtained. A determination is made as to whether to update the monitoring element based upon the second usage status of the resources. Responsive to determining that the monitoring element is to be updated, at least one of the following is performed: (i) updating the one or more configuration parameters of the monitoring element based upon the second usage status of the resources, and (ii) rescheduling the monitoring element based upon the second usage status of the resources.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: October 12, 2021
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Farnaz Moradi, Catalin Meirosu, Christofer Flinta, Andreas Johnsson
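    Illustrative sketch: setting, and later conditionally updating, a monitoring element's configuration and schedule from resource usage, in Python. The 50% CPU threshold and the 0.2 change margin are invented parameters.

      from dataclasses import dataclass

      @dataclass
      class MonitoringElement:
          sample_rate_hz: int = 10
          run_interval_s: int = 1

      def configure(monitor, usage):
          """Derive configuration parameters and scheduling from the usage status."""
          if usage["cpu"] > 0.5:   # busy node: monitor more lightly
              monitor.sample_rate_hz = 1
              monitor.run_interval_s = 10
          else:                    # idle node: monitor more aggressively
              monitor.sample_rate_hz = 10
              monitor.run_interval_s = 1

      def maybe_update(monitor, previous_usage, current_usage, margin=0.2):
          """Reconfigure and reschedule only if usage changed enough to justify it."""
          if abs(current_usage["cpu"] - previous_usage["cpu"]) >= margin:
              configure(monitor, current_usage)
              return True
          return False

      m = MonitoringElement()
      configure(m, {"cpu": 0.3})                             # first usage status
      print(maybe_update(m, {"cpu": 0.3}, {"cpu": 0.8}), m)  # second status triggers update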
  • Patent number: 11144087
    Abstract: Performance monitors are placed on computational units in different clock domains of an integrated circuit. A central dispatcher generates trigger signals to the performance monitors to cause the performance monitors to respond to the trigger signals with packets reporting local performance counts for the associated computational units. The data in the packets are correlated into a single clock domain. By applying a trigger and reporting system, the disclosed approach can synchronize the performance metrics of the various computational units in the different clock domains without having to route a complex global clock reference signal to all of the performance monitors.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: October 12, 2021
    Assignee: NVIDIA Corporation
    Inventors: Roger Allen, Alan Menezes, Tom Ogletree, Shounak Kamalapurkar, Abhijat Ranade
  • Patent number: 11138522
    Abstract: A method for allocating resources for a machine learning model is disclosed. A machine learning model to be executed on a special-purpose machine learning model processor is received. A computational dataflow graph is generated from the machine learning model. The computational dataflow graph represents the machine learning model and includes nodes, connector directed edges, and parameter directed edges. The operations of the computational dataflow graph are scheduled and then compiled using a deterministic instruction set architecture that specifies the functionality of a special-purpose machine learning model processor. An amount of resources required to execute the computational dataflow graph is determined. Resources are allocated based on the determined amount of resources required to execute the machine learning model represented by the computational dataflow graph.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: October 5, 2021
    Assignee: Google LLC
    Inventors: Jonathan Ross, John Michael Stivoric