Patents Examined by Meng-Ai T. An
  • Patent number: 11537518
    Abstract: Constraining memory use for overlapping virtual memory operations is described. The memory use is constrained to prevent memory from exceeding an operational threshold, e.g., in relation to operations for modifying content. These operations are implemented according to algorithms having a plurality of instructions. Before the instructions are performed in relation to the content, virtual memory is allocated to the content data, which is then loaded into the virtual memory and partitioned into data portions. In the context of the described techniques, at least one of the instructions affects multiple portions of the content data loaded in virtual memory. When this occurs, the instruction is carried out, in part, by transferring the multiple portions of content data between the virtual memory and a memory such that the number of portions of the content data held in the memory stays within the memory reserved for the operation.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Chih-Yao Hsieh, Zhaowen Wang
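    Illustrative sketch: a minimal Python rendering of the bounded-residency idea in the abstract above, where only a fixed budget of content-data portions is kept in working memory and the rest stays in virtual-memory-backed storage; the `PortionCache` name, the LRU eviction policy, and the dict-backed store are assumptions for illustration, not details from the patent.
    ```python
    from collections import OrderedDict

    class PortionCache:
        """Keep at most `budget` content-data portions resident in working memory,
        swapping the rest out to a backing (virtual-memory-like) store."""

        def __init__(self, backing_store, budget):
            self.backing = backing_store      # portion_id -> data, stays swapped out
            self.budget = budget              # max portions allowed in memory at once
            self.resident = OrderedDict()     # portion_id -> data, kept in LRU order

        def get(self, portion_id):
            if portion_id in self.resident:
                self.resident.move_to_end(portion_id)
            else:
                if len(self.resident) >= self.budget:
                    evicted_id, evicted = self.resident.popitem(last=False)
                    self.backing[evicted_id] = evicted        # write back before evicting
                self.resident[portion_id] = self.backing[portion_id]
            return self.resident[portion_id]

    def apply_instruction(cache, portion_ids, op):
        """Run one instruction that touches multiple portions without ever
        holding more than the reserved budget of portions in memory."""
        return [op(cache.get(pid)) for pid in portion_ids]
    ```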
  • Patent number: 11537446
    Abstract: This document relates to orchestration and scheduling of services. One example method involves obtaining dependency information for an application. The dependency information can represent data dependencies between individual services of the application. The example method can also involve identifying runtime characteristics of the individual services and performing automated orchestration of the individual services into one or more application processes based at least on the dependency information and the runtime characteristics.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: December 27, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert Lovejoy Goodwin, Janaina Barreiro Gambaro Bueno, Sitaramaswamy V. Lanka, Javier Garcia Flynn, Pedram Faghihi Rezaei, Karthik Pattabiraman
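    Illustrative sketch: one way to co-locate services into processes from dependency information plus a runtime tag, using union-find; the runtime-compatibility rule and the function names are assumptions, not Microsoft's actual orchestration logic.
    ```python
    from collections import defaultdict

    def orchestrate(dependencies, runtime_of):
        """Group services into candidate application processes: services linked by a
        data dependency and sharing a compatible runtime tag are co-located.
        `dependencies` maps service -> set of upstream services; `runtime_of`
        maps service -> a runtime tag (e.g. a hypothetical 'clr-4.8')."""
        parent = {s: s for s in runtime_of}

        def find(s):
            while parent[s] != s:
                parent[s] = parent[parent[s]]     # path compression
                s = parent[s]
            return s

        for service, upstream in dependencies.items():
            for dep in upstream:
                if runtime_of[service] == runtime_of[dep]:   # runtime-compatible pair
                    parent[find(service)] = find(dep)        # merge into one process

        processes = defaultdict(list)
        for service in runtime_of:
            processes[find(service)].append(service)
        return list(processes.values())
    ```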
  • Patent number: 11537420
    Abstract: Aspects of the disclosure provide for mechanisms for power management of virtual machines in a computer system. A method of the disclosure includes: determining a plurality of host latency times for a plurality of processor power states of a processor of a host computer system; comparing, by a hypervisor executed on the host computer system, each of the host latency times to a target latency time associated with a virtual machine running on the host computer system; mapping the plurality of processor power states to a plurality of host power states in view of the comparison; and providing the host power states to the virtual machine.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: December 27, 2022
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
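    Illustrative sketch: a possible host-to-virtual power-state mapping driven by the latency comparison the abstract describes; the C-state names, microsecond values, and the collapse-to-deepest-acceptable-state rule are assumptions for illustration.
    ```python
    def map_power_states(host_states, target_latency_us):
        """Expose to the VM only power states whose exit latency fits the VM's
        target latency; deeper states are collapsed onto the deepest state that
        still satisfies the target.  `host_states` maps state -> exit latency (us)."""
        acceptable = {s: lat for s, lat in host_states.items() if lat <= target_latency_us}
        fallback = max(acceptable, key=acceptable.get) if acceptable else None
        return {state: (state if latency <= target_latency_us else fallback)
                for state, latency in host_states.items()}

    # map_power_states({"C1": 2, "C3": 50, "C6": 400}, target_latency_us=100)
    # -> {"C1": "C1", "C3": "C3", "C6": "C3"}
    ```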
  • Patent number: 11531570
    Abstract: Systems and methods for adaptively provisioning a distributed event data store of a multi-tenant architecture are provided. According to one embodiment, a managed security service provider (MSSP) maintains a distributed event data store on behalf of each tenant of the MSSP. For each tenant, the MSSP periodically determines a provisioning status for the current active partition of the tenant's distributed event data store. When the determination indicates an under-provisioning condition exists, the MSSP dynamically increases the number of resource provision units (RPUs) to be used for a new partition to be added to the partitions for the tenant by a first adjustment ratio. Conversely, when the determination indicates an over-provisioning condition exists, the MSSP dynamically decreases the number of RPUs to be used for subsequent partitions added to the partitions for the tenant by a second adjustment ratio.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: December 20, 2022
    Assignee: Fortinet, Inc.
    Inventors: Jun He, Partha Bhattacharya, Jae Yoo
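    Illustrative sketch: the adjustment-ratio logic from the abstract as a small Python function; the ratio values and the rounding/minimum rules are placeholder assumptions, not figures from the patent.
    ```python
    def next_partition_rpus(current_rpus, status,
                            increase_ratio=1.5, decrease_ratio=0.8, min_rpus=1):
        """Pick the number of resource provision units (RPUs) for the tenant's next
        partition based on the provisioning status of the current active partition."""
        if status == "under-provisioned":
            rpus = current_rpus * increase_ratio    # grow by the first adjustment ratio
        elif status == "over-provisioned":
            rpus = current_rpus * decrease_ratio    # shrink by the second adjustment ratio
        else:
            rpus = current_rpus                     # adequately provisioned: keep as-is
        return max(min_rpus, round(rpus))
    ```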
  • Patent number: 11531560
    Abstract: An agent and a configuration interface permit customer-level customizations for synchronizing a replica of an enterprise system over a network connection with a replicator. The replicator produces the replica as a Virtual Machine (VM) that is maintained on a portal server that is remote from an enterprise server that hosts the enterprise system.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 20, 2022
    Assignee: NCR Corporation
    Inventors: Chario Bardoquillo Maxilom, Clem Paradero Pepito, Stanley Reginald Sanchez, III
  • Patent number: 11531568
    Abstract: A time-aware application task scheduling system for a green data center (GDC) that includes a task scheduling processor coupled to one or more queue processors and an energy collecting processor connected to one or more renewable energy sources and a grid power source. The system is capable of determining a service rate for a plurality of servers to process a plurality of application tasks in the GDC and scheduling, via processing circuitry, one or more of the application tasks to be executed in one or more of the servers at a rate according to a difference in an accumulated arriving rate for the plurality of application tasks into the one or more queues and a removal rate for the plurality of application tasks from the one or more queues. The system is further capable of removing the one or more application tasks from their associated queues for execution in the scheduled one or more servers.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: December 20, 2022
    Assignee: King Abdulaziz University
    Inventors: Yusuf Al-Turki, Haitao Yuan, Jing Bi, Mengchu Zhou, Ahmed Chiheb Ammari, Abdullah Abusorrah, Khaled Sadraoui
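    Illustrative sketch: one scheduling interval in which each queue's service rate is set from the difference between its accumulated arrivals and removals, capped by server capacity; the dict-based queue representation is an assumption for illustration.
    ```python
    def schedule_interval(queues, server_capacity):
        """Dispatch application tasks for one interval.  Each queue is a dict with
        'arrived' and 'removed' counters and a 'tasks' list holding its backlog."""
        dispatched, remaining = [], server_capacity
        for q in queues:
            backlog = q["arrived"] - q["removed"]   # accumulated arrival/removal difference
            rate = min(backlog, remaining)          # service rate granted to this queue
            for _ in range(rate):
                dispatched.append(q["tasks"].pop(0))   # remove task for execution on a server
            q["removed"] += rate
            remaining -= rate
        return dispatched
    ```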
  • Patent number: 11494214
    Abstract: At a virtualization host, an isolated run-time environment is established within a compute instance. The configuration of the isolated run-time environment is analyzed by a security manager of the hypervisor of the host. After the analysis, computations are performed at the isolated run-time environment.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: November 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Anthony Nicholas Liguori, Eric Jason Brandwine, Matthew Shawn Wilson
  • Patent number: 11487536
    Abstract: A computer-implemented method or system is provided to automate actions for one or more applications executed via a platform using at least one virtual machine in a guest system. Each virtual machine includes a guest operating system, a guest agent and an application to be executed on the virtual machine. The method or system stores in a memory user-defined automation actions and causal relationships between the user-defined automation actions from which an automation graph is derived for the application to be executed on the virtual machine on the guest system; launches the guest system and the virtual machine via the platform; and executes the user-defined automation actions via the guest agent of the virtual machine according to the automation graph after the guest system and the virtual machine are launched.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: November 1, 2022
    Assignee: AVEVA Software, LLC
    Inventors: Johan Prinsloo, Geoffrey Tarcha, Roy Li, Jagan Annamalai, Chau Duong, Andrew Goorchenko, Marlina Lukman, Ian Willetts
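    Illustrative sketch: deriving an automation graph from user-defined actions and their causal relationships and executing it in a causally valid order; using Python's standard-library topological sorter is an illustrative choice, not the patented mechanism.
    ```python
    from graphlib import TopologicalSorter

    def run_automation(actions, causes):
        """`actions` maps action name -> callable; `causes` maps action name -> set of
        action names that must complete first (the causal relationships)."""
        graph = TopologicalSorter({name: causes.get(name, set()) for name in actions})
        for name in graph.static_order():   # every causal predecessor runs first
            actions[name]()                 # the guest agent would run the action here
    ```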
  • Patent number: 11487573
    Abstract: Methods and systems for automating execution of a workflow by integrating security applications of a distributed system into the workflow are provided. In embodiments, a system includes an application server in a first cloud, configured to receive a trigger to execute the workflow. The workflow includes tasks to be executed in a device of a second cloud. The application server sends a request to process the task to a task queue module. The task queue module places the task request in a queue, and a worker hosted in the device of the second cloud retrieves the task request from the queue and processes the task request by invoking a plugin. The plugin interacts with a security application of the device of the second cloud to execute the task, which yields task results. The task results are provided to the application server, via the worker and the task queue module.
    Type: Grant
    Filed: May 7, 2019
    Date of Patent: November 1, 2022
    Assignee: Thomson Reuters Enterprise Centre GmbH
    Inventors: Vishal Dilipkumar Parikh, William Stuart Ratner, Akshar Rawal
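    Illustrative sketch: the queue/worker/plugin flow from the abstract reduced to a single-process Python example; the in-memory queue, the task 'type' field, and the block_ip_plugin are hypothetical stand-ins for the cross-cloud components.
    ```python
    import queue

    task_queue = queue.Queue()   # stands in for the task queue module between the clouds

    def application_server(workflow_tasks):
        """First-cloud side: on a workflow trigger, enqueue each task request."""
        for task in workflow_tasks:
            task_queue.put(task)

    def worker(plugins, results):
        """Second-cloud side: pull task requests, invoke the matching plugin (which
        would talk to the local security application), and collect task results."""
        while not task_queue.empty():
            task = task_queue.get()
            results.append(plugins[task["type"]](task))

    def block_ip_plugin(task):
        # Hypothetical plugin: a real one would call a security application's API.
        return {"task": task, "status": "blocked"}
    ```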
  • Patent number: 11487566
    Abstract: A method for migrating a virtual machine (VM) includes establishing a first connection to a first cloud computing system executing a first VM, and establishing a second connection to a second cloud computing system managed by a second cloud provider, which is different from the first cloud provider. The method further includes instantiating a second VM designated as a destination VM in the second cloud computing system, and installing a migration agent on each of the first VM and the second VM. The migration agents execute a migration process of the first VM to the second VM by (1) iteratively copying guest data from the first VM to the second VM until a switchover criterion of the migration operation is met, and (2) copying a remainder of the guest data from the first VM to the second VM when the switchover criterion is met.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: November 1, 2022
    Assignee: VMWARE, INC.
    Inventors: Nathan L. Prziborowski, Gabriel Tarasuk-Levin, Arunachalam Ramanathan, Prachetaa Raghavan, Benjamin Yun Liang, Haripriya Rajagopal, Longhao Shu
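    Illustrative sketch: the iterative pre-copy loop with a final switchover pass; `source.dirty_blocks()`, `read()`, `write()`, `pause()`, and `resume()` are assumed helper interfaces, and the byte-count threshold is one possible switchover criterion, not necessarily the patent's.
    ```python
    def migrate(source, destination, switchover_threshold_bytes):
        """Copy guest data while the source VM runs, then stop it briefly to copy
        whatever is still dirty once the remaining data is small enough."""
        while True:
            dirty = source.dirty_blocks()
            remaining = sum(len(source.read(b)) for b in dirty)
            if remaining <= switchover_threshold_bytes:
                break                                         # switchover criterion met
            for block in dirty:
                destination.write(block, source.read(block))  # iterative copy pass
        source.pause()                                        # brief stun for the final pass
        for block in source.dirty_blocks():
            destination.write(block, source.read(block))      # copy the remainder
        destination.resume()
    ```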
  • Patent number: 11481255
    Abstract: Provided is a method, computer program product, and coherent computer system for improving memory management by establishing cooperation between an operating system and a coherent accelerator device (CAD). The CAD may retrieve a set of work elements for completion from a work queue. The CAD may determine a length of time required to complete the set of work elements. The CAD may identify a set of memory pages needed for completing the set of work elements. The CAD may communicate the set of memory pages and the length of time required to complete the set of work elements to a virtual memory manager.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: October 25, 2022
    Assignee: International Business Machines Corporation
    Inventors: Chetan L. Gaonkar, Niranjan Behera, Geeta Devi Akoijam, Vamshikrishna Thatikonda
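    Illustrative sketch: the accelerator-to-VMM hint described in the abstract, where the coherent accelerator reports the pages and time it needs; `vmm.pin(pages, ms)` is an assumed interface, not IBM's actual API.
    ```python
    from dataclasses import dataclass

    @dataclass
    class WorkElement:
        pages: set      # memory pages this work element will touch
        cost_ms: int    # estimated time to complete the element

    def report_to_vmm(work_queue, vmm):
        """Scan the retrieved work elements, then tell the virtual memory manager
        which pages are needed and for how long, so it can keep them resident."""
        needed_pages, total_ms = set(), 0
        for element in work_queue:
            needed_pages |= element.pages
            total_ms += element.cost_ms
        vmm.pin(needed_pages, total_ms)
    ```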
  • Patent number: 11481239
    Abstract: Methods and apparatus to customize deployment using approvals are disclosed. An example deployment approval manager can generate a first approval payload including an initial application component approval proposal of an application component that provides a logical template of an application. A deployment event broker can reply back to the deployment approval manager with a second approval payload that includes a processed application component approval proposal.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: October 25, 2022
    Assignee: VMware, Inc.
    Inventors: Boris Savov, Rostislav Georgiev, Lazarin Lazarov, Ventsyslav Raikov, Ivanka Baneva
  • Patent number: 11474871
    Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFEs) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate normally.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: October 18, 2022
    Assignee: XILINX, INC.
    Inventors: Millind Mittal, Jaideep Dastidar
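    Illustrative sketch: the AF/AFE/AFT layering as plain data structures, showing how one engine can be re-provisioned for a different guest while its siblings keep running; the class shapes and the re-provisioning function are assumptions for illustration.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AFT:                          # accelerator function thread
        owner_vm: str

    @dataclass
    class AFE:                          # accelerator function engine
        threads: list = field(default_factory=list)

    @dataclass
    class AF:                           # accelerator function instance
        engines: list = field(default_factory=list)

    def reprovision_engine(af, engine_index, new_owner, thread_count):
        """Reassign a single AFE to a new guest VM; the AF's other engines and
        their threads are untouched, which is the point of the layering."""
        af.engines[engine_index] = AFE(threads=[AFT(owner_vm=new_owner)
                                                for _ in range(thread_count)])
    ```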
  • Patent number: 11461120
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for rack nesting in virtualized server systems. An example apparatus includes a resource discoverer to identify resources to be allocated to the nested rack based on a policy indicative of one or more physical racks from which to identify the resources, and determine candidate resources from the resources to be allocated to the nested rack based on a capacity parameter indicative of a quantity of the resources available to be allocated to the nested rack, the candidate resources to have first hypervisors, and a nested rack controller to generate the nested rack by deploying second hypervisors on the first hypervisors, the second hypervisors to facilitate communication between the candidate resources and one or more virtual machines on the second hypervisors, the nested rack to execute one or more computing tasks based on the communication.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: October 4, 2022
    Assignee: VMWARE, INC.
    Inventors: Shubham Verma, Ravi Kumar Reddy Kottapalli, Samdeep Nayak, Kannan Balasubramanian, Suket Gakhar
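    Illustrative sketch: the resource-discovery step followed by nested-hypervisor deployment; `policy.racks`, `host.free_capacity`, and `host.deploy_nested_hypervisor()` are assumed attributes and interfaces, not VMware's actual objects.
    ```python
    def build_nested_rack(physical_racks, policy, capacity_needed):
        """Identify candidate hosts allowed by the policy that still have enough
        spare capacity, then deploy a second-level hypervisor on each candidate's
        existing (first-level) hypervisor to form the nested rack."""
        allowed = [host for rack in physical_racks if rack.name in policy.racks
                   for host in rack.hosts]
        candidates = [host for host in allowed if host.free_capacity >= capacity_needed]
        return [host.deploy_nested_hypervisor() for host in candidates]
    ```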
  • Patent number: 11461144
    Abstract: Method by which a plurality of processes are assigned to a plurality of computational resources, each computational resource providing resource capacities in a plurality of processing dimensions. Processing loads are associated in each processing dimension with each process. A loading metric is associated with each process based on the processing loads in each processing dimension. One or more undesignated computational resources are designated from the plurality of computational resources to host unassigned processes from the plurality of processes. In descending order of the loading metric one unassigned process is assigned from the plurality of processes to each one of the one or more designated computational resources. In ascending order of the loading metric any remaining unassigned processes are assigned from the plurality of processes to the one or more designated computational resources whilst there remains sufficient resource capacity in each of the plurality of processing dimensions.
    Type: Grant
    Filed: October 21, 2015
    Date of Patent: October 4, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Chris Tofts
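    Illustrative sketch: the two-phase assignment from the abstract, with the maximum per-dimension load used as the loading metric; that metric choice and the first-fit placement in the second phase are assumptions, since the patent states the metric more generally.
    ```python
    def assign(processes, loads, capacities):
        """`loads[p]` is a per-dimension load vector for process p; `capacities[r]`
        is the per-dimension remaining capacity of designated resource r."""
        metric = {p: max(loads[p]) for p in processes}
        resources = list(capacities)
        assignment = {}

        def fits(p, r):
            return all(l <= c for l, c in zip(loads[p], capacities[r]))

        def place(p, r):
            assignment[p] = r
            capacities[r] = [c - l for c, l in zip(capacities[r], loads[p])]

        # Phase 1: one unassigned process per designated resource, heaviest first.
        for p, r in zip(sorted(processes, key=lambda p: metric[p], reverse=True), resources):
            if fits(p, r):
                place(p, r)

        # Phase 2: remaining processes in ascending order, wherever capacity remains.
        for p in sorted((p for p in processes if p not in assignment), key=lambda p: metric[p]):
            for r in resources:
                if fits(p, r):
                    place(p, r)
                    break
        return assignment
    ```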
  • Patent number: 11461503
    Abstract: A method includes: receiving a service participation request of a target service transmitted by a user terminal, wherein the user terminal comprises an iOS operating system; obtaining target identification data from a system server according to the service participation request, wherein the target identification data comprises first identification data used for identifying whether the user terminal participates in the target service, and/or second identification data used for identifying whether the device data of the user terminal is modified, and the system server is a server corresponding to the iOS operating system; and according to the target identification data, determining whether to allow the user terminal to participate in the target service.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: October 4, 2022
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventor: Peng Zhang
  • Patent number: 11442790
    Abstract: This application relates to a resource scheduling method, a resource scheduling system, a server, and a storage medium. The resource scheduling method includes receiving a virtual machine application request sent by a terminal, wherein the virtual machine application request includes a target virtual machine label. The resource scheduling method further includes comparing the target virtual machine label with a current virtual machine label of each host computer in a cluster to determine a target host computer, wherein the target host computer includes no virtual machine label matching the target virtual machine label, enabling the target host computer to create a first virtual machine, and setting a label of the first virtual machine as the target virtual machine label.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: September 13, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xiao Dong Pan, Lin Hong Hu, Yan Mo, Hong Zhu
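    Illustrative sketch: the label-based anti-affinity placement from the abstract; representing the cluster as a dict of host name to the label set of its current virtual machines is an assumption for illustration.
    ```python
    def place_virtual_machine(target_label, hosts):
        """Pick a host with no VM label matching the target label, create the first
        virtual machine there, and tag it, so same-label VMs never share a host."""
        for host, labels in hosts.items():
            if target_label not in labels:     # no matching virtual machine label here
                labels.add(target_label)       # label the newly created virtual machine
                return host                    # this is the target host computer
        raise RuntimeError("no host without a matching virtual machine label")
    ```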
  • Patent number: 11429450
    Abstract: Disclosed are various embodiments for assigning compute kernels to compute accelerators that form an aggregated virtualized compute accelerator. A directed, acyclic graph (DAG) representing a workload assigned to a virtualized compute accelerator is generated. The workload can include a plurality of compute kernels, and the DAG comprises a plurality of nodes and a plurality of edges, each node representing a respective compute kernel, each edge representing a dependency between a respective pair of the compute kernels, and the virtualized compute accelerator representing a logical interface for a plurality of compute accelerators. The DAG can be analyzed to identify sets of dependent compute kernels, each set being independent of the other sets, and execution of at least one compute kernel in a set depending on a previous execution of another compute kernel in the same set.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: August 30, 2022
    Assignee: VMWARE, INC.
    Inventor: Matthew D. McClure
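    Illustrative sketch: splitting the workload DAG into mutually dependent kernel sets by finding weakly connected components; treating "independent sets" as connected components is one reasonable reading of the abstract, not a detail confirmed by the patent.
    ```python
    from collections import defaultdict

    def dependent_kernel_sets(kernels, edges):
        """`edges` is a list of (kernel, kernel_it_depends_on) pairs.  Two kernels end
        up in the same set if a chain of dependency edges connects them (ignoring
        direction); different sets can be dispatched to different accelerators."""
        adjacency = defaultdict(set)
        for a, b in edges:
            adjacency[a].add(b)
            adjacency[b].add(a)

        seen, groups = set(), []
        for start in kernels:
            if start in seen:
                continue
            group, stack = set(), [start]
            while stack:                      # flood-fill one weakly connected component
                k = stack.pop()
                if k in group:
                    continue
                group.add(k)
                stack.extend(adjacency[k] - group)
            seen |= group
            groups.append(group)
        return groups
    ```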
  • Patent number: 11429442
    Abstract: Systems and techniques are described for using virtual machines to write parallel and distributed applications. One of the techniques includes receiving a job request, wherein the job request specifies a first job to be performed by a plurality of special purpose virtual machines, wherein the first job includes a plurality of tasks; selecting a parent special purpose virtual machine from a plurality of parent special purpose virtual machines to perform the first job; instantiating a plurality of child special purpose virtual machines from the selected parent special purpose virtual machine; partitioning the plurality of tasks among the plurality of child special purpose virtual machines by assigning one or more of the plurality of tasks to each of the child special purpose virtual machines; and performing the first job by causing each of the child special purpose virtual machines to execute the tasks assigned to that child special purpose virtual machine.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: August 30, 2022
    Assignee: VMware, Inc.
    Inventors: Jayanth Gummaraju, Gabriel Tarasuk-Levin
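    Illustrative sketch: instantiating child special-purpose VMs from a parent and partitioning the job's tasks among them; `parent_vm.instantiate_child()`, `child.execute(tasks)`, and the round-robin split are assumed for illustration.
    ```python
    def run_job(job_tasks, parent_vm, child_count):
        """Clone children from the selected parent special-purpose VM and assign
        each child a share of the job's tasks to execute."""
        children = [parent_vm.instantiate_child() for _ in range(child_count)]
        partitions = [job_tasks[i::child_count] for i in range(child_count)]  # round-robin split
        for child, tasks in zip(children, partitions):
            child.execute(tasks)
        return children
    ```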
  • Patent number: 11422856
    Abstract: Techniques are disclosed relating to scheduling program tasks in a server computer system. An example server computer system is configured to maintain first and second sets of task queues that have different performance characteristics, and to collect performance metrics relating to processing of program tasks from the first and second sets of task queues. Based on the collected performance metrics, the server computer system is further configured to update a scheduling algorithm for assigning program tasks to queues in the first and second sets of task queues. In response to receiving a particular program task associated with a user transaction, the server computer system is also configured to select the first set of task queues for the particular program task, and to place the particular program task in a particular task queue in the first set of task queues.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: August 23, 2022
    Assignee: PayPal, Inc.
    Inventors: Xin Li, Libin Sun, Chao Zhang, Xiaohan Yun, Jun Zhang, Frédéric Tu, Yang Yu, Lei Wang, Zhijun Ling
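    Illustrative sketch: routing a task to one of the two queue sets using collected latency metrics; the 200 ms SLO, the p95 metric key, and the user-transaction rule are placeholder assumptions rather than PayPal's scheduling algorithm.
    ```python
    import random

    def choose_queue(task, fast_queues, standard_queues, metrics, latency_slo_ms=200):
        """Assign a program task to a queue drawn from one of two sets with
        different performance characteristics, adapting to observed metrics."""
        if task.get("user_transaction"):
            pool = fast_queues                          # user-facing work gets the faster set
        elif metrics["standard_p95_ms"] > latency_slo_ms:
            pool = fast_queues                          # spill over when the standard set lags
        else:
            pool = standard_queues
        chosen = random.choice(pool)                    # pick one queue within the chosen set
        chosen.append(task)
        return chosen
    ```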