Abstract: In one embodiment, a system includes a cluster of host machines implementing a virtualization environment. Each host machine includes a hypervisor, a user virtual machine (UVM), a connection manager, an I/O controller, and a virtual disk. The virtualization environment may include storage devices and may be accessible by all of the I/O controllers. A management module of the virtualization environment may display a graphical user interface that includes an alert rule configuration interface, which may be operable to configure one or more alert policies each associated with an operating status of a component of the virtualization environment. The management module may receive inputs associated with the alert policies. The management module may update the alert policies in accordance with the inputs.
Abstract: A multi-layer compute sizing correction stack may generate prescriptive compute sizing correction tokens for controlling sizing adjustments for computing resources. The input layer of the compute sizing correction stack may generate cleansed utilization data based on historical utilization data received via network connection. The input layer may receive one or more resource configurations that may be applied to implement the sizing correction. A prescriptive engine layer may generate a compute sizing correction trajectory indicative of a sizing adjustment to a computing resource. The compute sizing correction trajectory may account for historic processor, network, and memory utilization. Based on the compute sizing correction trajectory and a selected resource configuration, the prescriptive engine layer may generate the compute sizing correction tokens that may be used to control compute sizing adjustments prescriptively.
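A prescriptive engine of the kind described above might derive a sizing adjustment from historic utilization peaks. The following is a minimal sketch under illustrative assumptions; the function name, 60% utilization target, and input shape are hypothetical, not taken from the patent:

```python
def sizing_correction(cpu_util, mem_util, net_util, target=0.6):
    """Suggest a scaling factor so the worst historic peak lands near `target`.

    Each argument is a list of utilization samples in [0, 1].
    A factor > 1.0 prescribes up-sizing; < 1.0 prescribes down-sizing.
    """
    peak = max(max(cpu_util), max(mem_util), max(net_util))
    return round(peak / target, 2)

# A resource peaking at 30% across CPU, memory, and network could be halved:
factor = sizing_correction([0.25, 0.30], [0.20, 0.22], [0.10, 0.15])
```

A real engine would, per the abstract, emit tokens encoding such a factor together with the selected resource configuration.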
Abstract: In some implementations, a method includes receiving, from a virtual machine, a request to create a proxy agent configured to monitor an application executing on the virtual machine, wherein the proxy agent is associated with the virtual machine and wherein the virtual machine is unable to host the proxy agent. The method also includes creating the proxy agent based on the request to create the proxy agent. The method further includes receiving monitoring data for the application executing on the virtual machine via the proxy agent. The method further includes transmitting a status of the application or the monitoring data to a server.
Type:
Grant
Filed:
April 22, 2015
Date of Patent:
October 8, 2019
Assignee:
CISCO TECHNOLOGY, INC.
Inventors:
Shaheedur Reza Haque, Matthew Turner, Vanson Lim
Abstract: Operating a hypervisor includes running a hypervisor as a thread of an underlying operating system and loading a guest operating system using the hypervisor based on the thread of the underlying operating system, where the hypervisor runs independently of the guest operating system and independently of other hypervisors running as other threads of the underlying operating system. The hypervisor may be a first hypervisor and operating a hypervisor may further include running a second hypervisor nested with the first hypervisor, where the guest operating system may be loaded using both the first hypervisor and the second hypervisor. The underlying operating system may be an operating system of a storage system.
Type:
Grant
Filed:
June 2, 2015
Date of Patent:
October 8, 2019
Assignee:
EMC IP Holding Company LLC
Inventors:
Steven R. Chalmer, Matthew H. Fredette, Steven T. McClure, Uresh K. Vahalia
Abstract: Various embodiments are generally directed to an apparatus, method, and other techniques to handle interrupts directed to secure virtual machines. Work is added to a work queue in a shared memory buffer in accordance with a received request, and a task-priority register is updated to block interrupts not directed toward the secure virtual machine. A timer that expires after a number of cycles of the computer processor have elapsed is started. The secure virtual machine is launched on the computer processor, and a work queue in a shared memory buffer is polled for work to be executed by the secure virtual machine until the work queue is empty or until the timer expires.
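The poll-until-empty-or-timeout loop described in the abstract above can be sketched as follows. This is an illustrative simplification: the function name, the callable `execute`, and the use of a wall-clock budget in place of a processor cycle count are all assumptions, not the patent's mechanism:

```python
import time
from collections import deque

def run_secure_vm(work_queue: deque, budget_s: float, execute):
    """Poll the shared work queue until it is empty or the timer expires."""
    deadline = time.monotonic() + budget_s
    done = []
    while work_queue and time.monotonic() < deadline:
        done.append(execute(work_queue.popleft()))
    return done

# Three queued work items, each "executed" by doubling it:
results = run_secure_vm(deque([1, 2, 3]), 1.0, lambda w: w * 2)
```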
Abstract: A migration system includes a memory, a physical processor, first and second hypervisors, first and second virtual machines, and first and second networking devices. The first hypervisor is located at a migration source location and the second hypervisor is located at a migration destination location. The first virtual machine includes a guest OS which includes a first agent. The second virtual machine includes the guest OS which includes a second agent. The first hypervisor is configured to request the guest OS executing on the first virtual machine to copy a configuration of the first networking device and to store the configuration in a place-holder networking device. The second hypervisor is configured to start the second virtual machine at a destination location, request the guest OS executing on the second virtual machine to copy the configuration from the place-holder networking device and to store the configuration in the second networking device.
Abstract: Generally discussed herein are techniques, software, apparatuses, and systems configured for managing a navigation stack of an application including multiple primary user interfaces (UIs). In one or more embodiments, a method can include providing data to multiple primary UIs that causes each of the multiple primary UIs to present a view of a plurality of views of an application state of the software application, receiving data indicating the application state of the application has changed, and pushing a workflow activity of the application onto a navigation stack, wherein each workflow activity includes data corresponding to a configuration of a view model module and a list of views associated with the configuration, and the view model module provides the data that causes the plurality of views to be presented on the multiple primary UIs in response to the configuration being loaded in the view model module.
Type:
Grant
Filed:
January 7, 2016
Date of Patent:
September 3, 2019
Assignee:
Hand Held Products, Inc.
Inventors:
Jeffrey Pike, Shawn Zabel, Brian Bender, Dennis Doubleday, Mark David Murawski
Abstract: A system for processing media on a resource-restricted device, the system including a memory to store data representing media assets and associated descriptors, and program instructions representing an application and a media processing system, and a processor to execute the program instructions, wherein the media processing system, in response to a call from the application defining a plurality of services to be performed on an asset, determines a tiered schedule of processing operations to be performed upon the asset based on a processing budget associated therewith, and iteratively executes the processing operations on a tier-by-tier basis, unless interrupted.
Type:
Grant
Filed:
June 3, 2016
Date of Patent:
September 3, 2019
Assignee:
Apple Inc.
Inventors:
Albert Keinath, Ke Zhang, Yunfei Zheng, Shujie Liu, Jiefu Zhai, Chris Y. Chung, Xiaosong Zhou, Hsi-Jung Wu
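The tiered, budget-bounded execution described in the abstract above might look like the sketch below. The names, the cost model, and the choice to skip whole remaining tiers when the budget is exceeded are illustrative assumptions, not details from the patent:

```python
def run_tiers(tiers, budget, cost, apply_op):
    """Execute media operations tier by tier until the processing budget is spent."""
    spent, applied = 0, []
    for tier in tiers:
        tier_cost = sum(cost(op) for op in tier)
        if spent + tier_cost > budget:
            break  # interrupted: lower-priority tiers are skipped entirely
        for op in tier:
            applied.append(apply_op(op))
        spent += tier_cost
    return applied

# Two cheap tiers fit a budget of 2; the expensive third tier is skipped.
out = run_tiers([["decode"], ["scale"], ["hdr"]], budget=2,
                cost=lambda op: 3 if op == "hdr" else 1,
                apply_op=lambda op: op)
```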
Abstract: Provided is a process, including: obtaining a task tree; traversing the task tree to obtain an unordered set of tasks and an ordered list of tasks; adding the unordered set of tasks to at least some of a plurality of queues of tasks; adding the ordered list of tasks to at least some of the plurality of queues of tasks; and receiving a first task request from a first worker process in a concurrent processing application and, in response to the first task request: accessing a first queue from among the plurality of queues, determining that the first queue is not locked, accessing a first task in the first queue in response to the first task being a next task in the first queue, determining that the first task is a member of a sequence of tasks specified by the ordered list and, in response, locking the first queue, and assigning the first task to the first worker process.
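The queue-locking step of the process above (lock a queue when a worker takes a task belonging to the ordered sequence, so that the sequence's order is preserved) can be sketched as follows. The function name and data shapes are hypothetical; the real process operates over queues populated from a task tree:

```python
def assign_next_task(queues, locked, ordered):
    """Hand the next available task to a worker, locking any queue whose
    head task belongs to the ordered sequence."""
    for i, q in enumerate(queues):
        if i in locked or not q:
            continue  # skip queues locked to another worker, and empty queues
        task = q[0]
        if task in ordered:
            locked.add(i)  # sequence member: reserve this queue
        return q.pop(0)
    return None

queues = [["a"], ["s1", "s2"]]
locked = set()
first = assign_next_task(queues, locked, ordered={"s1", "s2"})   # unordered task
second = assign_next_task(queues, locked, ordered={"s1", "s2"})  # locks queue 1
```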
Abstract: A virtual trusted platform module function implementation method is provided, the method is executed at an exception level EL3 of a processor that uses an ARM V8 architecture, and the method includes: generating, according to requirements of one or more VMs, one or more vTPM instances corresponding to each VM, and storing the generated one or more vTPM instances in preset secure space, where each vTPM instance has a dedicated instance communication queue for its corresponding VM to use, and a physical address is allocated to each instance communication queue; and interacting with a VMM and the VM, so that the VM acquires a VM communication queue virtual address, in VM virtual address space, corresponding to a communication queue physical address of the vTPM instance, and the VM communicates with a vTPM instance communication queue by using the VM communication queue virtual address.
Abstract: Systems and methods are disclosed for managing resources associated with cluster-based resource pool(s). According to illustrative implementations, innovations herein may include or involve one or more of best fit algorithms, infrastructure based service provision, tolerance and/or ghost processing features, dynamic management service having monitoring and/or decision process features, as well as virtual machine and resource distribution features.
Type:
Grant
Filed:
November 21, 2016
Date of Patent:
June 25, 2019
Assignee:
Virtustream IP Holding Company LLC
Inventors:
Vincent G. Lubsey, Kevin D. Reid, Karl J. Simpson, Rodney John Rogers
Abstract: A consistent user interface is provided in a virtualized environment. First and second applications are executed within first and second operating systems running within separate virtual machines upon the same device. The first application receives, from the second application, a request that identifies a particular type of text to be received from a user. The first application selects an associated text input type and displays a text input interface on the device in a configuration allowing text of the selected input type to be submitted. Optionally, the first virtual machine may have exclusive permission to display a user interface on the device; however, the user interface may include elements whose appearance was determined within other virtual machines.
Abstract: Systems and methods for containerized IT intelligence and management. In one embodiment, a system for containerized IT financial management comprises at least one collector, at least one meter, at least one connector, and a reporting dashboard. The at least one collector is customized and connected to at least one container platform. The at least one collector sends capacity and consumption metrics to the at least one meter for processing and aggregation.
Type:
Grant
Filed:
July 18, 2018
Date of Patent:
May 28, 2019
Assignee:
6FUSION USA, INC.
Inventors:
Delano Seymour, Douglas Steele, John Cowan
Abstract: A platform capacity tool includes a retrieval engine, a capacity consumption engine, and a workload projection engine. The platform capacity tool determines whether there is sufficient memory, processor, and/or network resources to execute an application. The platform capacity tool makes these determinations based on process capacity consumptions and/or application capacity consumptions.
Abstract: Aspects of the present invention provide an approach that evaluates a locally running image (e.g., such as that for a virtual machine (VM)) and determines if that image could run more efficiently and/or more effectively in an alternate computing environment (e.g., a cloud computing environment). Specifically, embodiments of the present invention evaluate the local (existing/target) image's actual and perceived performance, as well as the anticipated/potential performance if the image were to be migrated to an alternate environment. The anticipated/potential performance can be measured based on another image that is similar to the existing/target image but where that image is running in a different computing environment. Regardless, the system would display a recommendation to the end user if it were determined that the image could perform better in the alternate environment (or vice versa).
Type:
Grant
Filed:
April 14, 2016
Date of Patent:
April 16, 2019
Assignee:
SERVICENOW, INC.
Inventors:
Kulvir S. Bhogal, Gregory J. Boss, Nitin Gaur, Andrew R. Jones
Abstract: A system and method of allocating resources among cores in a multi-core system is disclosed. The system and method determine cores that are able to process tasks to be performed, and use history of usage information to select a core to process the tasks. The system may be a heterogeneous multi-core processing system, and may include a system on chip (SoC).
Type:
Grant
Filed:
October 24, 2013
Date of Patent:
April 9, 2019
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Ki Soo Yu, Kyung Il Sun, Chang Hwan Youn
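The history-based core selection described in the abstract above might be sketched as follows. The function name, the averaging heuristic, and the big/little core labels are illustrative assumptions, not the patent's selection criteria:

```python
def pick_core(capable_cores, usage_history):
    """Among cores able to process the task, pick the one with the lowest
    average utilization in its usage history."""
    def avg_load(core):
        samples = usage_history.get(core, [0.0])
        return sum(samples) / len(samples)
    return min(capable_cores, key=avg_load)

# On a heterogeneous SoC, only the "big" cores can run this task;
# "big1" has been the least loaded recently, so it is selected.
history = {"big0": [0.9, 0.8], "big1": [0.2, 0.3], "little0": [0.1]}
core = pick_core(["big0", "big1"], history)
```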
Abstract: Systems and methods for caching data from a plurality of virtual machines may comprise detecting, using a computer processor executing cache management software, initiation of migration of a cached virtual machine from a first virtualization platform to a second virtualization platform, disabling caching for the virtual machine on the first virtualization platform, detecting completion of the migration of the virtual machine to the second virtualization platform, and enabling caching for the virtual machine on the second virtualization platform.
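The disable-during-migration, re-enable-after sequence in the abstract above reduces to a small state machine. The class and method names below are hypothetical; real cache management software would hook these transitions to hypervisor migration events:

```python
class CacheManager:
    """Track per-VM caching state: disable it when migration begins on the
    source platform, re-enable it on the destination once migration completes."""
    def __init__(self):
        self.caching = {}  # vm name -> platform name, or None while migrating

    def enable(self, vm, platform):
        self.caching[vm] = platform

    def on_migration_start(self, vm):
        self.caching[vm] = None      # caching disabled on the source platform

    def on_migration_done(self, vm, platform):
        self.caching[vm] = platform  # caching re-enabled on the destination

mgr = CacheManager()
mgr.enable("vm1", "host-a")
mgr.on_migration_start("vm1")
mid_state = mgr.caching["vm1"]       # no caching while the VM is in flight
mgr.on_migration_done("vm1", "host-b")
```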
Abstract: A system, methods, and apparatus for using hypervisor trapping for protection against interrupts in virtual machine functions are disclosed. A system includes memory, one or more physical processors, a virtual machine executing on the one or more physical processors, and a hypervisor executing on the one or more physical processors. The hypervisor reads an interrupt data structure on the virtual machine. The hypervisor determines whether the interrupt data structure points to an alternate page view. Responsive to determining that the interrupt data structure points to an alternate page view, the hypervisor disables a virtual machine function.
Abstract: Systems and methods are provided for scheduling homogeneous workloads including batch jobs, and heterogeneous workloads including batch and dedicated jobs, with run-time elasticity wherein resource requirements for a given job can change during run-time execution of the job.
Type:
Grant
Filed:
January 30, 2017
Date of Patent:
March 5, 2019
Assignee:
International Business Machines Corporation
Inventors:
Hani T. Jamjoom, Dinesh Kumar, Zon-Yin Shae