Patents Examined by Zujia Xu
-
Patent number: 11635995
Abstract: A multi-cloud service mesh orchestration platform can receive a request to deploy an application as a service mesh application. The platform can tag the application with governance information (e.g., TCO, SLA, provisioning, deployment, and operational criteria). The platform can partition the application into its constituent components, and tag each component with individual governance information. For first time steps, the platform can select and perform a first set of actions for deploying each component to obtain individual rewards, state transitions, and expected returns. The platform can determine a reinforcement learning policy for each component that maximizes a total reward for the application based on the individual rewards, state transitions, and expected returns of each first set of actions selected and performed for each component. For second time steps, the platform can select and perform a second set of actions for each component based on the reinforcement learning policy for the component.
Type: Grant
Filed: July 16, 2019
Date of Patent: April 25, 2023
Assignee: Cisco Technology, Inc.
Inventors: Rohit Bahl, Paul Clyde Sherrill, Stephen Joseph Williams
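The explore-then-exploit scheme in this abstract (first time steps gather rewards, later time steps follow the learned policy) can be pictured as a simple per-component epsilon-greedy learner. This is an illustrative sketch only, not the patented method: the component names, target list, and reward function below are hypothetical stand-ins for the platform's governance-driven rewards.

```python
import random

def learn_deployment_policy(components, targets, reward_fn,
                            episodes=200, epsilon=0.1):
    """Per component, learn which deployment target yields the
    highest average reward via epsilon-greedy action selection."""
    q = {(c, t): 0.0 for c in components for t in targets}   # running averages
    counts = {(c, t): 0 for c in components for t in targets}
    for _ in range(episodes):
        for c in components:
            if random.random() < epsilon:
                t = random.choice(targets)                    # explore
            else:
                t = max(targets, key=lambda t: q[(c, t)])     # exploit
            r = reward_fn(c, t)
            counts[(c, t)] += 1
            q[(c, t)] += (r - q[(c, t)]) / counts[(c, t)]     # incremental mean
    # Greedy policy: best-known target per component.
    return {c: max(targets, key=lambda t: q[(c, t)]) for c in components}
```

In the patent's terms, `reward_fn` would reflect each component's individual governance criteria, and the returned mapping plays the role of the per-component policy used for the second set of actions.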
-
Cost-savings using ephemeral hosts in infrastructure as a service environments based on health score
Patent number: 11593177
Abstract: Various examples are disclosed for placing virtual machine (VM) workloads in a computing environment. Ephemeral workloads can be placed onto reserved instances or reserved hosts in a cloud-based VM environment. If a request to place a guaranteed workload is received, ephemeral workloads can be evacuated to make way for the guaranteed workload.
Type: Grant
Filed: March 18, 2020
Date of Patent: February 28, 2023
Assignee: VMWARE, INC.
Inventors: Dragos Victor Misca, Sahan Bamunavita Gamage, Pranshu Jain, Zhelong Pan
-
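The evacuation policy in this abstract can be illustrated with a toy placement routine: prefer free capacity, and evict an ephemeral workload only when a guaranteed workload cannot otherwise fit. The data model (a host-to-workload map with a uniform per-host slot count) and all names are assumptions for illustration, not the patent's implementation.

```python
def place_guaranteed(hosts, capacity, name):
    """Place a guaranteed workload, evicting an ephemeral workload if needed.
    'hosts' maps host id -> list of (workload, kind) tuples.
    Returns (host, evicted_workloads) or (None, []) if placement fails."""
    # First choice: any host with free capacity.
    for h, ws in hosts.items():
        if len(ws) < capacity:
            ws.append((name, "guaranteed"))
            return h, []
    # Otherwise: evacuate one ephemeral workload to make room.
    for h, ws in hosts.items():
        for w in ws:
            if w[1] == "ephemeral":
                ws.remove(w)
                ws.append((name, "guaranteed"))
                return h, [w[0]]
    return None, []
```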
Patent number: 11579924
Abstract: Techniques are disclosed for scheduling artificial intelligence model partitions for execution in an information processing system. For example, a method comprises the following steps. An intermediate representation of an artificial intelligence model is obtained. A reversed computation graph corresponding to a computation graph generated based on the intermediate representation is obtained. Nodes in the reversed computation graph represent functions related to the artificial intelligence model, and one or more directed edges in the reversed computation graph represent one or more dependencies between the functions. The reversed computation graph is partitioned into sequential partitions, such that the partitions are executed sequentially and functions corresponding to nodes in each partition are executed in parallel.
Type: Grant
Filed: February 12, 2020
Date of Patent: February 14, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Jin Li, Jinpeng Liu, Christopher S. MacLellan
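The sequential-partition structure described above (partitions run in order, nodes within a partition run in parallel) corresponds to level-order layering of a DAG. A minimal sketch, assuming edges point from a function to the functions that depend on it; how the patent actually derives the partitions from the reversed graph is not specified here.

```python
from collections import defaultdict

def layer_partitions(edges, nodes):
    """Partition a DAG into sequential layers; nodes within one layer
    have no mutual dependencies and can execute in parallel."""
    indeg = {n: 0 for n in nodes}
    out = defaultdict(list)
    for u, v in edges:              # u must finish before v starts
        out[u].append(v)
        indeg[v] += 1
    layers = []
    frontier = [n for n in nodes if indeg[n] == 0]
    while frontier:
        layers.append(sorted(frontier))
        nxt = []
        for u in frontier:
            for v in out[u]:
                indeg[v] -= 1
                if indeg[v] == 0:   # all prerequisites scheduled
                    nxt.append(v)
        frontier = nxt
    return layers
```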
-
Patent number: 11573833
Abstract: Allocating CPU cores to a thread running in a system that supports multiple concurrent threads includes training a first model to optimize core allocations to threads using training data that includes performance data, initially allocating cores to threads based on the first model, and adjusting core allocations to threads based on a second model that uses run time data and run time performance measurements. The system may be a storage system. The training data may include I/O workload data obtained at customer sites. The I/O workload data may include data about I/O rates, thread execution times, system response times, and Logical Block Addresses. The training data may include data from a site that is expected to run the second model. The first model may categorize storage system workloads and determine core allocations for different categories of workloads. Initially allocating cores to threads may include using information from the first model.
Type: Grant
Filed: July 31, 2019
Date of Patent: February 7, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Jon I. Krasner, Edward P. Goodwin
-
Patent number: 11556386
Abstract: Resource allocation problems involve identification of resources, selection by certain criteria, and offering of resources to the requester. Identification of required resources may involve matching the type of resource, selecting based on user requirements and policy criteria, and offering the resource through an assignment system. An apparatus and a method are provided that enable identification and selection of resources. The method includes receiving a resource allocation request for the allocation of a resource, the resource allocation request specifying a set of user requirements. The method includes receiving an operator policy associated with the resource, the operator policy including one or more policy requirements. The method includes synthesizing a resource request based on the resource allocation request and the operator policy.
Type: Grant
Filed: September 18, 2017
Date of Patent: January 17, 2023
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Mahesh Babu Jayaraman, Ganapathy Raman Madanagopal, Ashis Kumar Roy
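One way to picture the synthesis step is as a merge of the user's requirements with the operator policy. The merge rule below (for shared keys, the larger value wins, treating larger as stricter) is purely an assumption for illustration; the patent does not define how the two inputs are reconciled.

```python
def synthesize_request(user_reqs, operator_policy):
    """Combine a user's resource allocation request with an operator
    policy into a single synthesized resource request.
    Assumed merge rule: for keys present in both, keep the larger
    (stricter) value; otherwise take the union of both dicts."""
    merged = dict(user_reqs)
    for key, value in operator_policy.items():
        merged[key] = max(merged.get(key, value), value)
    return merged
```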
-
Determining and implementing recovery actions for containers to recover the containers from failures
Patent number: 11544091
Abstract: A system may include a registration module to register the system with a server cluster and a resource collector module operatively connected to the registration module, the resource collector module to identify a list of resources for a container running on the server cluster. The system may also include a resource monitor module operatively connected to the resource collector module, the resource collector module to receive the list of resources for the container, monitor a resource in the list of resources for the container, and generate an event for the container and an event manager module operatively connected to the resource monitor module, the event manager to receive the event and determine a recovery action for the container.
Type: Grant
Filed: July 8, 2019
Date of Patent: January 3, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: K. Kanakeshan, Vaishnavi Rajagopal, Paresh Sao
-
Patent number: 11537518
Abstract: Constraining memory use for overlapping virtual memory operations is described. The memory use is constrained to prevent memory from exceeding an operational threshold, e.g., in relation to operations for modifying content. These operations are implemented according to algorithms having a plurality of instructions. Before the instructions are performed in relation to the content, virtual memory is allocated to the content data, which is then loaded into the virtual memory and is also partitioned into data portions. In the context of the described techniques, at least one of the instructions affects multiple portions of the content data loaded in virtual memory. When this occurs, the instruction is carried out, in part, by transferring the multiple portions of content data between the virtual memory and a memory such that a number of portions of the content data in the memory is constrained to the memory that is reserved for the operation.
Type: Grant
Filed: September 26, 2017
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Chih-Yao Hsieh, Zhaowen Wang
-
Patent number: 11537430
Abstract: A wait optimizer circuit can be coupled to a processor to monitor an entry of a virtual CPU (vCPU) into a wait mode to acquire a ticket lock. The wait optimizer can introduce an amount of delay, while the vCPU is in the wait mode, with an assumption that the spinlock may be resolved before sending a wake up signal to the processor for rescheduling. The wait optimizer can also record a time stamp only for a first entry of the vCPU from a plurality of entries into the wait mode within a window of time. The time stamps for vCPUs contending for the same ticket lock can be used by a hypervisor executing on the processor for rescheduling the vCPUs.
Type: Grant
Filed: February 6, 2020
Date of Patent: December 27, 2022
Assignee: Amazon Technologies, Inc.
Inventor: Ali Ghassan Saidi
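The delay-before-wake behavior can be sketched as a bounded poll: give the spinlock a chance to resolve before escalating to the hypervisor for a reschedule. This is a software caricature of what the patent describes as a hardware circuit; `lock_available` and `max_polls` are hypothetical stand-ins for the hardware check and the introduced delay.

```python
def wait_with_delay(lock_available, max_polls):
    """Poll whether the ticket lock became free for up to max_polls
    iterations; only if it stays held do we 'wake' the hypervisor.
    lock_available is a callable standing in for the hardware check."""
    for _ in range(max_polls):
        if lock_available():
            return "resumed"      # spinlock resolved during the delay
    return "reschedule"           # delay expired; signal the hypervisor
```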
-
Patent number: 11537446
Abstract: This document relates to orchestration and scheduling of services. One example method involves obtaining dependency information for an application. The dependency information can represent data dependencies between individual services of the application. The example method can also involve identifying runtime characteristics of the individual services and performing automated orchestration of the individual services into one or more application processes based at least on the dependency information and the runtime characteristics.
Type: Grant
Filed: August 14, 2019
Date of Patent: December 27, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Robert Lovejoy Goodwin, Janaina Barreiro Gambaro Bueno, Sitaramaswamy V. Lanka, Javier Garcia Flynn, Pedram Faghihi Rezaei, Karthik Pattabiraman
-
Patent number: 11494214
Abstract: At a virtualization host, an isolated run-time environment is established within a compute instance. The configuration of the isolated run-time environment is analyzed by a security manager of the hypervisor of the host. After the analysis, computations are performed at the isolated run-time environment.
Type: Grant
Filed: March 28, 2019
Date of Patent: November 8, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Anthony Nicholas Liguori, Eric Jason Brandwine, Matthew Shawn Wilson
-
Patent number: 11487573
Abstract: Methods and systems for automating execution of a workflow by integrating security applications of a distributed system into the workflow are provided. In embodiments, a system includes an application server in a first cloud, configured to receive a trigger to execute the workflow. The workflow includes tasks to be executed in a device of a second cloud. The application server sends a request to process the task to a task queue module. The task queue module places the task request in a queue, and a worker hosted in the device of the second cloud retrieves the task request from the queue and processes the task request by invoking a plugin. The plugin interacts with a security application of the device of the second cloud to execute the task, which yields task results. The task results are provided to the application server, via the worker and the task queue module.
Type: Grant
Filed: May 7, 2019
Date of Patent: November 1, 2022
Assignee: Thomson Reuters Enterprise Centre GmbH
Inventors: Vishal Dilipkumar Parikh, William Stuart Ratner, Akshar Rawal
-
Patent number: 11474871
Abstract: The embodiments herein describe a virtualization framework for cache coherent accelerators where the framework incorporates a layered approach for accelerators in their interactions between a cache coherent protocol layer and the functions performed by the accelerator. In one embodiment, the virtualization framework includes a first layer containing the different instances of accelerator functions (AFs), a second layer containing accelerator function engines (AFE) in each of the AFs, and a third layer containing accelerator function threads (AFTs) in each of the AFEs. Partitioning the hardware circuitry using multiple layers in the virtualization framework allows the accelerator to be quickly re-provisioned in response to requests made by guest operating systems or virtual machines executing in a host. Further, using the layers to partition the hardware permits the host to re-provision sub-portions of the accelerator while the remaining portions of the accelerator continue to operate as normal.
Type: Grant
Filed: September 25, 2019
Date of Patent: October 18, 2022
Assignee: XILINX, INC.
Inventors: Millind Mittal, Jaideep Dastidar
-
Patent number: 11461144
Abstract: Method by which a plurality of processes are assigned to a plurality of computational resources, each computational resource providing resource capacities in a plurality of processing dimensions. Processing loads are associated in each processing dimension with each process. A loading metric is associated with each process based on the processing loads in each processing dimension. One or more undesignated computational resources are designated from the plurality of computational resources to host unassigned processes from the plurality of processes. In descending order of the loading metric one unassigned process is assigned from the plurality of processes to each one of the one or more designated computational resources. In ascending order of the loading metric any remaining unassigned processes are assigned from the plurality of processes to the one or more designated computational resources whilst there remains sufficient resource capacity in each of the plurality of processing dimensions.
Type: Grant
Filed: October 21, 2015
Date of Patent: October 4, 2022
Assignee: Hewlett Packard Enterprise Development LP
Inventor: Chris Tofts
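The descending-then-ascending assignment procedure above can be sketched directly: seed each designated resource with one process, heaviest first, then fill in the rest, lightest first, wherever multi-dimensional capacity remains. The loading metric here is assumed to be the sum of per-dimension loads; the patent leaves the metric abstract.

```python
def assign_processes(processes, loads, resources, capacities):
    """'loads' maps process -> per-dimension load vector; 'capacities'
    maps resource -> capacity vector of the same length."""
    def metric(p):
        return sum(loads[p])          # assumed loading metric

    assignment = {}
    remaining = {r: list(capacities[r]) for r in resources}

    def fits(p, r):
        return all(l <= c for l, c in zip(loads[p], remaining[r]))

    def place(p, r):
        assignment[p] = r
        remaining[r] = [c - l for c, l in zip(remaining[r], loads[p])]

    # Descending order: one process per designated resource.
    heavy = sorted(processes, key=metric, reverse=True)
    for r, p in zip(resources, heavy):
        place(p, r)
    # Ascending order: remaining processes, while capacity suffices
    # in every processing dimension.
    for p in sorted(heavy[len(resources):], key=metric):
        for r in resources:
            if fits(p, r):
                place(p, r)
                break
    return assignment
```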
-
Patent number: 11461120
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for rack nesting in virtualized server systems. An example apparatus includes a resource discoverer to identify resources to be allocated to the nested rack based on a policy indicative of one or more physical racks from which to identify the resources, and determine candidate resources from the resources to be allocated to the nested rack based on a capacity parameter indicative of a quantity of the resources available to be allocated to the nested rack, the candidate resources to have first hypervisors, and a nested rack controller to generate the nested rack by deploying second hypervisors on the first hypervisors, the second hypervisors to facilitate communication between the candidate resources and one or more virtual machines on the second hypervisors, the nested rack to execute one or more computing tasks based on the communication.
Type: Grant
Filed: May 28, 2019
Date of Patent: October 4, 2022
Assignee: VMWARE, INC.
Inventors: Shubham Verma, Ravi Kumar Reddy Kottapalli, Samdeep Nayak, Kannan Balasubramanian, Suket Gakhar
-
Patent number: 11429450
Abstract: Disclosed are various embodiments for assigning compute kernels to compute accelerators that form an aggregated virtualized compute accelerator. A directed, acyclic graph (DAG) representing a workload assigned to a virtualized compute accelerator is generated. The workload can include a plurality of compute kernels and the DAG comprising a plurality of nodes and a plurality of edges, each of the nodes representing a respective compute kernel, each edge representing a dependency between a respective pair of the compute kernels, and the virtualized compute accelerator representing a logical interface for a plurality of compute accelerators. The DAG can be analyzed to identify sets of dependent compute kernels, each set of dependent compute kernels being independent of the other sets of dependent compute kernels and execution of at least one compute kernel in a set of dependent compute kernels depending on a previous execution of another compute kernel in the set of dependent compute kernels.
Type: Grant
Filed: April 25, 2019
Date of Patent: August 30, 2022
Assignee: VMWARE, INC.
Inventor: Matthew D. McClure
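Identifying sets of dependent compute kernels that are mutually independent of each other amounts to finding the weakly connected components of the dependency DAG: kernels linked by any chain of dependencies fall in one set, and distinct sets can be dispatched to different physical accelerators. A sketch using union-find; whether the patent uses this particular algorithm is not stated in the abstract.

```python
def dependent_kernel_sets(kernels, edges):
    """Group kernels into weakly connected components of the DAG.
    'edges' are (u, v) pairs meaning v depends on u."""
    parent = {k: k for k in kernels}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:                      # union the two endpoints
        parent[find(u)] = find(v)
    groups = {}
    for k in kernels:
        groups.setdefault(find(k), set()).add(k)
    return list(groups.values())
```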
-
Patent number: 11416306
Abstract: Techniques for managing resource utilization across heterogeneous physical hosts are described. Resource utilization of a first plurality of physical hosts in a provider network may be monitored, each physical host comprising a plurality of resources. A future resource utilization can be determined, the future resource utilization including quantities of a plurality of resource types. The future resource utilization can be matched to a plurality of physical host types, each physical host type associated with a different plurality of resources. A second plurality of physical hosts corresponding to the plurality of physical host types can be deployed to the provider network.
Type: Grant
Filed: June 13, 2018
Date of Patent: August 16, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Bradley Joseph Gussin, Diwakar Gupta, Michael Phillip Quinn
-
Patent number: 11360814
Abstract: A server and a method for executing an application are provided. The method includes receiving code associated with an application uploaded from a terminal device, transmitting, to a service server, code information associated with the application, receiving, from the service server, execution information for executing the application acquired based on the code information, defining the accelerated computing environment for executing the application based on the received execution information, and executing the application in the compiled accelerated computing environment.
Type: Grant
Filed: June 10, 2019
Date of Patent: June 14, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Taejeong Kim, Kyunam Cho
-
Patent number: 11360795
Abstract: Techniques for an optimization service of a service provider network to help optimize the selection, configuration, and utilization of virtual machine (VM) instance types to support workloads on behalf of users. The optimization service may implement the techniques described herein at various stages in a life cycle of a workload to help optimize the performance of the workload, and reduce underutilization of computing resources. For example, the optimization service may perform techniques to help new users select an optimized VM instance type on which to initially launch their workload. Further, the optimization service may monitor a workload for the life of the workload, and determine new VM instance types, and/or configuration modifications, that optimize the performance of the workload. The optimization service may provide recommendations to users that help improve performance of their workloads, and that also increase the aggregate utilization of computing resources of the service provider network.
Type: Grant
Filed: March 28, 2019
Date of Patent: June 14, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Malcolm Featonby, Leslie Johann Lamprecht, John Merrill Phillips, Umesh Chandani, Roberto Pentz De Faria, Hou Liu, Ladan Mahabadi, Letian Feng
-
Patent number: 11347561
Abstract: Core to resource and resource to core mapping is disclosed. In an embodiment, a method includes obtaining an input pattern including a plurality of resource identifiers corresponding to resources. The method further includes applying the input pattern to a guaranteed regular and uniform distribution process to obtain a distribution pattern that indicates a distribution of resources across cores or a distribution of the cores across the resources. The method further includes distributing the resources across the cores or distributing the cores across the resources according to the distribution pattern.
Type: Grant
Filed: June 22, 2018
Date of Patent: May 31, 2022
Assignee: VMWARE, INC.
Inventors: Raju Kumar, Sreeram Iyer
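Round-robin mapping is one example of a regular, uniform distribution with the guarantee that per-core counts differ by at most one; the patent's actual distribution process is not specified in the abstract, so the sketch below is purely illustrative.

```python
def distribute(resource_ids, cores):
    """Map resource i to core i mod len(cores): a regular, uniform
    distribution of resources across cores."""
    mapping = {c: [] for c in cores}
    for i, r in enumerate(resource_ids):
        mapping[cores[i % len(cores)]].append(r)
    return mapping
```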
-
Patent number: 11334389
Abstract: The latency corresponding to a latency-sensitive event-based processor is evaluated to determine whether the latency-sensitive event-based processor (EBP) should be prioritized. If so, constraints on the number of events that the latency-sensitive EBP can process are relaxed and the frequency with which the latency-sensitive EBP can process events is increased. At a next latency evaluation, if the latency-sensitive EBP no longer meets criteria for prioritization, the constraint on the number of events is returned to a nominal level, as is the frequency with which the latency-sensitive EBP can process events.
Type: Grant
Filed: October 30, 2019
Date of Patent: May 17, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Priyadarshi Ghosh, Anand Patil, Vishnu Kumar, Aparajita
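The control loop described above (relax the event-count constraint and raise the processing frequency while the EBP qualifies for prioritization, then return both to nominal) can be sketched as a single evaluation step. The field names and the doubling factor are hypothetical; the patent does not specify how much the constraints are relaxed.

```python
def adjust_ebp(latency_ms, threshold_ms, nominal):
    """One latency evaluation: return the EBP's new settings.
    'nominal' holds the baseline 'max_events' and 'frequency_hz'
    (assumed names); prioritization criterion is a simple threshold."""
    if latency_ms > threshold_ms:
        # Prioritized: relax event limit, increase processing frequency.
        return {"max_events": nominal["max_events"] * 2,
                "frequency_hz": nominal["frequency_hz"] * 2}
    # No longer prioritized: both settings return to nominal levels.
    return dict(nominal)
```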