Patents Examined by Kevin X Lu
-
Patent number: 12093708
Abstract: A method and an apparatus for scheduling a virtual machine are disclosed. The method includes predicting resource data required by a virtual machine in a next time period to obtain a prediction result; obtaining used resource data and available resource data of candidate host machines; adding the prediction result to the used resource data of each candidate host machine to obtain a superimposition result for each candidate host machine; and separately comparing the superimposition result of each candidate host machine with the available resource data of that host machine, and selecting a target host machine for the virtual machine from the candidate host machines. The present disclosure addresses the substantial waste of resources that arises in existing technologies when a host machine must reserve resources for the respective peak of each virtual machine.
Type: Grant
Filed: October 8, 2020
Date of Patent: September 17, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Zhengxiong Tian, Haihong Xu, Bo Zhu, Junjie Cai
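As a rough illustration of the selection step described in this abstract, the sketch below superimposes a VM's predicted next-period usage onto each candidate host's current usage and keeps only hosts whose capacity still covers the sum. The `Host` fields, the `pick_target_host` name, and the headroom tie-break are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    used: float      # resources currently consumed on the host
    capacity: float  # total resources the host can offer

def pick_target_host(predicted_usage: float, candidates: List[Host]) -> Optional[Host]:
    """Superimpose the VM's predicted next-period usage onto each candidate's
    current usage and keep only hosts whose capacity still covers the sum."""
    feasible = [h for h in candidates if h.used + predicted_usage <= h.capacity]
    if not feasible:
        return None
    # One possible tie-break (an assumption): the host left with the most headroom.
    return max(feasible, key=lambda h: h.capacity - (h.used + predicted_usage))

hosts = [Host("h1", used=6.0, capacity=8.0), Host("h2", used=3.0, capacity=8.0)]
print(pick_target_host(predicted_usage=2.5, candidates=hosts).name)  # -> "h2"
```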
-
Patent number: 12039356
Abstract: Systems and methods are disclosed for migrating a virtual machine (VM) having a virtual function that maps resources of an artificial intelligence (AI) accelerator to the VM. A driver for the AI accelerator can generate a checkpoint of VM processes that make calls to the AI accelerator, and the checkpoint can include a list and configuration of the resources mapped to the AI accelerator by the virtual function. The driver can also access the code, data, and memory of the AI accelerator to generate a checkpoint of the AI accelerator status. When the VM is migrated, either or both of these checkpoint frames can be used to ensure that the VM is successfully resumed on a new host having appropriate AI accelerator resources. One or both checkpoint frames can be captured based upon an event, in anticipation of the need to migrate the VM.
Type: Grant
Filed: January 6, 2021
Date of Patent: July 16, 2024
Assignees: BAIDU USA LLC, KUNLUNXIN TECHNOLOGY (BEIJING) COMPANY LIMITED
Inventors: Zhibiao Zhao, Yueqiang Cheng
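A minimal sketch of the two checkpoint frames and an event-driven capture follows; the dataclass fields, the `capture_on_event` helper, and the event names are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VmCheckpoint:
    """Frame 1: state of VM processes that call the accelerator, including
    the resources the virtual function has mapped to them."""
    process_state: bytes
    mapped_resources: List[str] = field(default_factory=list)
    resource_config: Dict[str, str] = field(default_factory=dict)

@dataclass
class AcceleratorCheckpoint:
    """Frame 2: code, data, and memory snapshot of the accelerator itself."""
    code: bytes
    data: bytes
    memory: bytes

def capture_on_event(event, capture_vm, capture_accel):
    """Capture one or both frames when an event suggests migration may be needed."""
    frames = {}
    if event in ("host_degraded", "migration_requested"):  # hypothetical event names
        frames["vm"] = capture_vm()
        frames["accelerator"] = capture_accel()
    return frames
```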
-
Patent number: 12032983
Abstract: The access method includes: implementing, on the virtual machine in the virtual environment, a resident virtual CPU to which a physical CPU is always assigned and a non-resident virtual CPU to which a physical CPU is not always assigned; and having the non-resident virtual CPU take over the process of accessing the virtual device when the resident virtual CPU accesses a virtual device corresponding to an occupancy-type physical device.
Type: Grant
Filed: December 21, 2020
Date of Patent: July 9, 2024
Assignee: DENSO CORPORATION
Inventor: Shuichi Ogawa
-
Patent number: 12032979
Abstract: A virtualization host is identified for an isolated run-time environment. One or more records generated at a security module of the host, which indicate that a first phase of a multi-phase establishment of an isolated run-time environment has been completed by a virtualization management component of the host, are transmitted to a resource verifier. In response to a host approval indicator from the resource verifier, the multi-phase establishment is completed at the virtualization host.
Type: Grant
Filed: November 6, 2019
Date of Patent: July 9, 2024
Assignee: Amazon Technologies, Inc.
Inventor: Samartha Chandrashekar
-
Patent number: 12026555
Abstract: Adjunct processor command-type filtering includes determining whether a target adjunct processor is configured to support a selected command-type filtering mode, and whether another adjunct processor is configured to support the selected command-type filtering mode. Based on determining that the target adjunct processor is not configured to support the selected command-type filtering mode and based on the other adjunct processor being configured to support the selected command-type filtering mode, a command is forwarded to the other adjunct processor for processing to determine whether the command is valid for the selected command-type filtering mode. An indication is obtained, based on processing at the other adjunct processor, of whether the command is valid for the selected command-type filtering mode. Based on obtaining an indication that the command is valid for the selected command-type filtering mode, the command is sent to the target adjunct processor for execution.
Type: Grant
Filed: December 15, 2020
Date of Patent: July 2, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: Louis P. Gomes
-
Patent number: 12001868
Abstract: In a VM migration system 100, a controller 20 determines the priority group to which a VM whose performance is insufficient should belong, based on the amount of resource usage of each VM 1 and priority group setting information 14 acquired from a physical server 10. Upon acquiring performance-guarantee failure alarm information, the controller 20 selects a VM to be migrated from the VMs currently belonging to a priority group in which there are no vacancies, selects the physical server having the largest margin as the server to which the VM is to be migrated, and transmits migration instruction information to the physical server. The physical server migrates the selected VM to the other physical server.
Type: Grant
Filed: May 15, 2019
Date of Patent: June 4, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventor: Yoshito Ito
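The controller's selection step can be sketched roughly as below: find a priority group with no vacancies, pick a VM from it, and send it to the physical server with the largest margin. The dictionary shapes, the first-VM choice, and the function name are illustrative assumptions rather than the patented logic.

```python
def choose_migration(groups, servers):
    """groups: {group_name: {"capacity": int, "vms": [vm, ...]}}
    servers: {server_name: free_resource_margin}
    Returns (vm_to_move, destination_server), or None if every group has a vacancy."""
    for name, group in groups.items():
        if len(group["vms"]) >= group["capacity"]:   # no vacancy in this priority group
            vm = group["vms"][0]                     # pick a VM to migrate (assumed policy)
            dest = max(servers, key=servers.get)     # physical server with largest margin
            return vm, dest
    return None

groups = {"high": {"capacity": 2, "vms": ["vm-a", "vm-b"]}}
servers = {"srv-1": 4.0, "srv-2": 9.5}
print(choose_migration(groups, servers))  # -> ('vm-a', 'srv-2')
```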
-
Patent number: 11954520
Abstract: A micro kernel scheduling method and apparatus are disclosed in embodiments of this disclosure. The method is applied to a software platform and includes: receiving a scheduling instruction for a current micro kernel; and switching the current micro kernel to a target micro kernel. In some embodiments, a micro kernel is switched directly according to a scheduling instruction, without involving any thread of the software platform, which solves the problems of high micro kernel switching cost and poor real-time performance in conventional systems caused by the one-to-one correspondence between micro kernels and threads of the software platform.
Type: Grant
Filed: December 23, 2019
Date of Patent: April 9, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Xu Zeng, Junjie Cai, Liangliang Zhu
-
Patent number: 11928620
Abstract: In an embodiment, described herein is a system and method for creating a suggested task set to meet a target value. A cloud server, in response to receiving a request specifying a target value, retrieves completed task sets from a database. Each completed task set includes the same set of task categories. The cloud server derives a number of ratios from the retrieved completed task sets, including a composition ratio and a conversion rate for each task category, and an addition ratio for the number of completed task sets. Based on the derived ratios and the specified target value, the cloud server constructs the suggested task set, and displays in real time the suggested task set together with current values for the task categories. The cloud server alerts users of a discrepancy between a current value and the corresponding suggested value for a task category when the discrepancy reaches a predetermined level.
Type: Grant
Filed: January 6, 2022
Date of Patent: March 12, 2024
Assignee: CLARI INC.
Inventors: Xin Xu, Chunyue Du, Xincheng Ma, Kaiyue Wu, Venkat Rangan
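One plausible reading of the ratio-based construction is sketched below: each category's suggested value is the target value scaled by that category's composition share and divided by its conversion rate, with an alert when the current value lags the suggestion by more than a threshold. The formula, the dictionary layout, and the 20% threshold are assumptions for illustration, not the patented derivation.

```python
def suggest_task_set(target_value, composition, conversion):
    """composition: fraction of the target attributed to each category
    conversion: fraction of a category's value that typically converts
    Returns per-category suggested values needed to reach target_value."""
    return {category: (target_value * share) / conversion[category]
            for category, share in composition.items()}

def discrepancy_alerts(current, suggested, alert_threshold=0.2):
    """Flag categories whose current value lags the suggestion by more than the threshold."""
    return [c for c, v in suggested.items()
            if v > 0 and (v - current.get(c, 0.0)) / v > alert_threshold]

suggested = suggest_task_set(100.0,
                             composition={"new": 0.6, "renewal": 0.4},
                             conversion={"new": 0.5, "renewal": 0.8})
print(suggested)                                                      # {'new': 120.0, 'renewal': 50.0}
print(discrepancy_alerts({"new": 80.0, "renewal": 49.0}, suggested))  # ['new']
```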
-
Patent number: 11928491
Abstract: Techniques are described for enabling model-driven server migration workflows in a cloud provider network. Cloud provider networks often provide various types of tools and services that enable users to migrate computing resources (e.g., servers, databases, applications, etc.) from users' on-premises computing environments to a cloud provider network. A model-driven server migration service as described herein comprises a plurality of modular migration components including, e.g., a snapshot validation component, a snapshot conversion component, an injection component, etc. The model-driven server migration service enables users to customize server migration workflows using server migration templates containing descriptive configurations for some or all of the provided migration components.
Type: Grant
Filed: November 23, 2020
Date of Patent: March 12, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Jiangtao Zhang, Wenjing Cao
-
Patent number: 11915027
Abstract: An electronic control unit is configured to perform: allocating CPU resources to provide a plurality of virtual machines under management by a hypervisor; monitoring, by another virtual machine different from a specific virtual machine, an abnormality that occurs in the specific virtual machine; outputting a stop request that requests stopping the allocation of CPU resources to the specific virtual machine in a case where the abnormality is detected; and stopping, by the hypervisor, the allocation of CPU resources to the specific virtual machine in response to the stop request. The electronic control unit further comprises a DMA controller. The DMA controller transfers data transmitted to the specific virtual machine to a common memory shared among the plurality of virtual machines.
Type: Grant
Filed: September 30, 2020
Date of Patent: February 27, 2024
Assignee: DENSO CORPORATION
Inventor: Yasuharu Sugano
-
Patent number: 11899634
Abstract: A multi-layer database sizing stack may generate prescriptive tier requisition tokens for controlling requisition of database-compute resources at database-compute tiers. The input layer of the database sizing stack may obtain historical data. The threading layer may be used to flag occurrences of single-threaded application execution. The change layer may be used to determine the potential for a step based on compute-utilization-type data and assert flags indicating the potential. The step layer may determine whether potential steps may be taken based on operation-rate-type data and flush-type data. The requisition layer may generate a tier requisition token based on the provisional requisition tokens generated at other layers and/or finalization directives obtained at the requisition layer.
Type: Grant
Filed: March 17, 2021
Date of Patent: February 13, 2024
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Madhan Kumar Srinivasan, Guruprasad Pv
-
Patent number: 11893404
Abstract: A system is provided that enables efficient traffic forwarding in a hypervisor. During operation, the hypervisor determines that a packet is from a first virtual machine (VM) running on the hypervisor and destined to a second VM running on a remote hypervisor. The hypervisor then includes a virtual local area network (VLAN) identifier of a transit VLAN (TVLAN) in a layer-2 header of the packet. The TVLAN is dedicated for inter-VM traffic associated with a distributed virtual routing (DVR) instance operating on the hypervisor and the remote hypervisor. Subsequently, the hypervisor sets a first media access control (MAC) address of the hypervisor as a source MAC address and a second MAC address of the remote hypervisor as a destination MAC address in the layer-2 header. The hypervisor then determines an egress port for the packet based on the second MAC address.
Type: Grant
Filed: October 23, 2019
Date of Patent: February 6, 2024
Assignee: Nutanix, Inc.
Inventor: Ankur Kumar Sharma
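The header rewrite described in the abstract can be pictured with the short sketch below, which tags the packet with the transit VLAN, swaps in the two hypervisor MAC addresses, and looks up the egress port by the destination MAC. The dictionary packet model, the `fdb` lookup table, and all names are assumptions for illustration.

```python
def tag_for_transit(packet, tvlan_id, local_mac, remote_mac, fdb):
    """Rewrite the layer-2 header for inter-VM traffic handled by the DVR instance:
    tag with the transit VLAN, set hypervisor MACs, then pick the egress port."""
    packet["l2"]["vlan_id"] = tvlan_id    # TVLAN dedicated to DVR inter-VM traffic
    packet["l2"]["src_mac"] = local_mac   # this hypervisor
    packet["l2"]["dst_mac"] = remote_mac  # hypervisor hosting the destination VM
    return fdb[remote_mac]                # egress port determined by destination MAC

packet = {"l2": {}, "payload": b"..."}
port = tag_for_transit(packet, tvlan_id=400,
                       local_mac="02:00:00:aa:00:01",
                       remote_mac="02:00:00:bb:00:02",
                       fdb={"02:00:00:bb:00:02": "uplink0"})
print(port, packet["l2"])
```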
-
Patent number: 11886903
Abstract: Systems and methods for providing continuous uptime of guest Virtual Machines ("VMs") during upgrade of a virtualization host device. The methods comprise: connecting all of the guest VMs' frontends or drivers to at least one old control VM which is currently running on the virtualization host device and which contains old virtualization software; creating at least one upgraded control VM that contains new virtualization software and that is to replace the old control VM in the virtualization host device; connecting the guest VMs' frontends or drivers to the upgraded control VM; and uninstalling the old control VM from the virtualization host device.
Type: Grant
Filed: October 20, 2020
Date of Patent: January 30, 2024
Inventor: Marcus Granado
-
Patent number: 11847482
Abstract: Methods and systems for balancing resources in a virtual machine computing environment are disclosed. A server can receive data describing the configuration of host machines and virtual machines in a client computing environment. A simulated computing environment can be created that mirrors the configuration of the client computing environment. Data relating to resource usage (e.g., processor, memory, and storage) of the host machines can be received. The resource usage can be simulated in the simulated computing environment to mirror the usage of the client computing environment. A recommendation to execute a migration of a virtual machine can be received from the simulated computing environment. Instructions to execute a migration corresponding to the recommended migration can be generated and sent to the client computing environment.
Type: Grant
Filed: July 24, 2020
Date of Patent: December 19, 2023
Assignee: VMWARE, INC.
Inventors: Rahul Ajmera, Amit Ratnapal Sangodkar, Jivan Madtha
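A toy version of the recommendation step is sketched below: the simulated environment mirrors per-host load, and a migration is recommended from an overloaded host to a lightly loaded one. The load thresholds, the busiest-VM choice, and the data layout are assumptions for illustration only.

```python
def recommend_migration(sim_hosts, high=0.8, low=0.5):
    """sim_hosts mirrors the client environment: {host: {"load": float, "vms": {vm: load}}}.
    Returns a (vm, source_host, target_host) recommendation, or None if balanced."""
    overloaded = [h for h, d in sim_hosts.items() if d["load"] > high]
    spare = [h for h, d in sim_hosts.items() if d["load"] < low]
    if not overloaded or not spare:
        return None
    src = max(overloaded, key=lambda h: sim_hosts[h]["load"])
    dst = min(spare, key=lambda h: sim_hosts[h]["load"])
    vm = max(sim_hosts[src]["vms"], key=sim_hosts[src]["vms"].get)  # busiest VM on the source
    return vm, src, dst

sim = {"host-1": {"load": 0.92, "vms": {"vm-1": 0.5, "vm-2": 0.42}},
       "host-2": {"load": 0.30, "vms": {"vm-3": 0.3}}}
print(recommend_migration(sim))  # -> ('vm-1', 'host-1', 'host-2')
```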
-
Patent number: 11842218
Abstract: A virtual machine management service obtains a request to instantiate a virtual machine image (VMI) to implement a virtual network function (VNF). The request specifies a set of processor requirements corresponding to instantiation of the VMI. In response to the request, the service identifies, from a server comprising a set of processor cores, available processor capacity. The service determines, based on the available processor capacity and the set of processor requirements, whether to instantiate the VMI on to a subset of processor cores of the server. Based on this determination, the service instantiates the VMI on to the subset of processor cores to implement the VNF.
Type: Grant
Filed: January 24, 2023
Date of Patent: December 12, 2023
Assignee: Cisco Technology, Inc.
Inventors: Yanping Qu, Sabita Jasty, Kaushik Pratap Biswas, Yegappan Lakshmanan
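The capacity check can be illustrated with the sketch below, which matches a requested core count (and an assumed isolation requirement) against per-core free capacity and returns the subset of cores to use, or None if the server cannot satisfy the request. All names and the isolation rule are assumptions, not the claimed procedure.

```python
def place_vnf(required_cores, required_isolation, server_cores):
    """server_cores: {core_id: free_fraction}. Returns the subset of cores for the VMI,
    or None if the server lacks the capacity the request specifies."""
    # Require fully free cores when isolation is requested, otherwise any core
    # with at least half its capacity free (illustrative threshold).
    usable = [c for c, free in server_cores.items()
              if free >= (1.0 if required_isolation else 0.5)]
    if len(usable) < required_cores:
        return None
    return sorted(usable)[:required_cores]

cores = {0: 1.0, 1: 1.0, 2: 0.4, 3: 1.0}
print(place_vnf(required_cores=2, required_isolation=True, server_cores=cores))  # [0, 1]
```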
-
Patent number: 11809913
Abstract: Disclosed herein are embodiments for managing the placement of virtual machines in a virtual machine network. In an embodiment, a method involves determining whether to separate at least one virtual machine in a set of virtual machines supporting a process and running on a first host computer from other virtual machines in the set. If at least one virtual machine is to be separated, then at least one virtual machine is selected based on a number of memory pages changed. The selected virtual machine is then separated from the other virtual machines in the set.
Type: Grant
Filed: September 24, 2021
Date of Patent: November 7, 2023
Assignee: VMWare, Inc.
Inventors: Kalyan Saladi, Ganesha Shanmuganathan
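The selection criterion can be pictured with the tiny sketch below. The abstract only says the choice is based on the number of memory pages changed; the "fewest dirty pages" policy used here (cheapest VM to move) is an assumption for illustration.

```python
def select_vm_to_separate(vm_dirty_pages):
    """vm_dirty_pages: {vm: number of memory pages changed recently}.
    One plausible policy: separate the VM with the fewest dirtied pages,
    since it is the cheapest to move off the shared host."""
    return min(vm_dirty_pages, key=vm_dirty_pages.get)

print(select_vm_to_separate({"vm-a": 1200, "vm-b": 90, "vm-c": 640}))  # -> "vm-b"
```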
-
Patent number: 11803411
Abstract: An illustrative "Live Synchronization" feature in a data storage management system can reduce the downtime that arises in failover situations. The illustrative Live Sync embodiment uses backup data to create and maintain a ready (or "warm") virtualized computing platform comprising one or more virtual machines ("VMs") that are configured and ready to be activated and take over data processing from another data processing platform operating in the production environment. The "warm" computing platform awaits activation as a failover solution for the production system(s) and can be co-located at the production data center, or configured at a remote or disaster recovery site, which in some embodiments is configured "in the cloud." Both local and remote illustrative embodiments are discussed herein. An "incremental forever" approach can be combined with deduplication and synthetic full backups to speed up data transfer and update the disaster recovery sites.
Type: Grant
Filed: February 4, 2021
Date of Patent: October 31, 2023
Assignee: Commvault Systems, Inc.
Inventors: Henry Wallace Dornemann, Ajay Venkat Nagrale, Rahul S. Pawar, Ananda Venkatesha
-
Patent number: 11797347
Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests, in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about the frequencies of compound requests received and the individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated with the same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about the amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes.
Type: Grant
Filed: January 25, 2021
Date of Patent: October 24, 2023
Assignee: International Business Machines Corporation
Inventors: Paul M. Dantzig, Arun Kwangil Iyengar, Francis Nicholas Parr, Gong Su
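A greedy sketch of the first technique is given below: count how often pairs of request types appear in the same compound request, then pull frequently co-occurring types onto the same node. The pair-counting heuristic and the round-robin node choice are assumptions for illustration, not the patented placement algorithm.

```python
from collections import Counter
from itertools import combinations

def assign_request_types(compound_requests, nodes):
    """Count co-occurrence of request types within compound requests and
    greedily co-locate frequently co-occurring types on the same node."""
    pair_counts = Counter()
    for req in compound_requests:
        for a, b in combinations(sorted(set(req)), 2):
            pair_counts[(a, b)] += 1
    assignment, next_node = {}, 0
    for (a, b), _ in pair_counts.most_common():
        if a not in assignment and b not in assignment:
            assignment[a] = assignment[b] = nodes[next_node % len(nodes)]
            next_node += 1
        elif a in assignment and b not in assignment:
            assignment[b] = assignment[a]
        elif b in assignment and a not in assignment:
            assignment[a] = assignment[b]
    return assignment

reqs = [["lookup", "update"], ["lookup", "update"], ["lookup", "report"]]
print(assign_request_types(reqs, nodes=["n1", "n2"]))
```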
-
Patent number: 11768743
Abstract: A system and method include migrating, by a migration controller, a first entity of a first subset of entities from a source site to a target site in a virtual computing system based on an asynchronous mode of replication. The system and method also include replicating, by the migration controller, data of a second entity of a second subset of entities from the source site to the target site based on a synchronous mode of replication in parallel with the migration of the first entity for dynamically adjusting a recovery time objective parameter.
Type: Grant
Filed: July 29, 2020
Date of Patent: September 26, 2023
Assignee: Nutanix, Inc.
Inventors: Kiran Tatiparthi, Ankush Jindal, Monil Devang Shah, Mukul Sharma, Shubham Gupta, Sharad Maheshwari, Kilol Surjan
-
Patent number: 11755368
Abstract: Systems and methods are disclosed for scheduling code in a multiprocessor system. Code is partitioned into code blocks by a compiler. The compiler schedules execution of the code blocks in nodes. The nodes are connected in a directed acyclic graph with a top node, a terminal node, and a plurality of intermediate nodes. Execution of the top node is initiated by the compiler. After executing at least one instance of the top node, an instruction in the code block indicates to the scheduler to initiate at least one intermediary node. The scheduler schedules a thread for execution of the intermediary node. The data for the nodes resides in a plurality of data buffers; the index to the data buffer is stored in a command buffer.
Type: Grant
Filed: August 8, 2021
Date of Patent: September 12, 2023
Assignee: Blaize, Inc.
Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
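The node/buffer arrangement can be pictured with the sequential sketch below, a simplified stand-in for the threaded scheduler: nodes form a directed acyclic graph, each node's command-buffer entry indexes the data buffer it works on, and finishing a node lets the scheduler initiate its downstream intermediaries. The graph encoding, names, and the breadth-first walk are illustrative assumptions.

```python
from collections import deque

def run_graph(graph, code_blocks, command_buffer, data_buffers):
    """graph: {node: [downstream nodes]}. Execute nodes from the top node onward;
    each node reads the data buffer whose index is stored in the command buffer."""
    pending = {n: 0 for n in graph}            # count of unfinished upstream nodes
    for deps in graph.values():
        for d in deps:
            pending[d] += 1
    ready = deque(n for n, c in pending.items() if c == 0)   # the top node(s)
    while ready:
        node = ready.popleft()
        buf = data_buffers[command_buffer[node]]             # indirection via command buffer
        code_blocks[node](buf)
        for downstream in graph[node]:
            pending[downstream] -= 1
            if pending[downstream] == 0:                     # all inputs produced
                ready.append(downstream)

graph = {"top": ["mid"], "mid": ["terminal"], "terminal": []}
data_buffers = [[1, 2, 3], [], []]
command_buffer = {"top": 0, "mid": 1, "terminal": 2}
code_blocks = {"top": lambda b: print("top", b),
               "mid": lambda b: print("mid", b),
               "terminal": lambda b: print("terminal", b)}
run_graph(graph, code_blocks, command_buffer, data_buffers)
```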