Abstract: Managing updates to executable programming code on a computer system in a computer network. A maintenance service utility is configured to launch a maintenance procedure at a specified time during operation of the computer system. Operation of a maintenance timer utility is activated during startup of the computer system to track and monitor the amount of time the computer system has been operating since startup. The maintenance service utility determines if there are any updates to the executable programming code that require installation. The maintenance procedure is launched after a specified time if there are updates to the executable programming code. The computer system is automatically rebooted to install the updates to the executable programming code. A maintenance service editor utility enables the maintenance service utility to be configured to launch the maintenance procedure after a specified time if there are updates to the executable programming code.
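The maintenance flow described in the abstract above can be sketched as a small state machine: an uptime counter started at boot, plus a check that launches maintenance only after the configured delay and only when updates are pending. All names here (`MaintenanceService`, etc.) are illustrative, not taken from the patent.

```python
# Illustrative model of the described maintenance flow: a timer started at
# startup tracks uptime, and the maintenance procedure (update install and
# reboot) launches only after the specified time AND when updates exist.

class MaintenanceService:
    def __init__(self, delay_seconds):
        self.delay_seconds = delay_seconds   # set via the "editor utility"
        self.uptime = 0                      # maintenance timer, started at boot
        self.pending_updates = []

    def tick(self, seconds):
        """Advance the uptime counter (driven by the system clock)."""
        self.uptime += seconds

    def should_launch_maintenance(self):
        # Launch only after the specified time elapsed and updates are pending.
        return self.uptime >= self.delay_seconds and bool(self.pending_updates)

svc = MaintenanceService(delay_seconds=3600)
svc.pending_updates.append("security-patch-1")
svc.tick(1800)
early = svc.should_launch_maintenance()   # too soon: False
svc.tick(1800)
ready = svc.should_launch_maintenance()   # delay elapsed, updates pending: True
```

The key design point in the abstract is that the timer tracks time since startup, not wall-clock time, so the check is independent of when the machine was booted.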
Abstract: A processor is described that includes a processing core and a plurality of counters for the processing core. The plurality of counters are to count a first value and a second value for each of multiple threads supported by the processing core. The first value reflects a number of cycles at which a non-sleep state has been requested for the first value's corresponding thread, and the second value reflects a number of cycles at which a non-sleep state and a highest performance state have been requested for the second value's corresponding thread. The first value's corresponding thread and the second value's corresponding thread are the same thread.
Type:
Grant
Filed:
September 28, 2012
Date of Patent:
September 22, 2015
Assignee:
Intel Corporation
Inventors:
Malini K. Bhandaru, Matthew M. Bace, A Leonard Brown, Ian M. Steiner, Vivek Garg, Eric Dehaemer, Scott P. Bobholz
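The two per-thread counters described in the abstract above can be sketched in software: for each thread, count (a) cycles where a non-sleep state was requested and (b) cycles where both a non-sleep state and the highest performance state were requested. The state encoding below (integers for performance states, `0` as the highest) is an assumption for illustration.

```python
# Per-thread counter sketch. Each thread's trace is a list of requested
# states per cycle: either SLEEP or a performance-state number, where 0
# is assumed to be the highest performance state (P0-like).

HIGHEST_PERF = 0
SLEEP = "sleep"

def count_cycles(trace):
    """trace: {thread_id: [requested_state_per_cycle, ...]}.
    Returns {thread_id: (non_sleep_cycles, non_sleep_and_max_perf_cycles)}."""
    counters = {}
    for thread, states in trace.items():
        non_sleep = sum(1 for s in states if s != SLEEP)
        max_perf = sum(1 for s in states if s != SLEEP and s == HIGHEST_PERF)
        counters[thread] = (non_sleep, max_perf)
    return counters

trace = {0: [0, 0, 1, SLEEP], 1: [SLEEP, SLEEP, 2, 0]}
print(count_cycles(trace))  # {0: (3, 2), 1: (2, 1)}
```

Note that the second counter can never exceed the first for a given thread, since every "highest performance" cycle is also a non-sleep cycle.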
Abstract: Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called a “disabled” task) declares its lack of a future need to access shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task).
Abstract: An atomic transaction includes one or more memory access operations that are completed atomically. A Best-Effort Transaction (BET) system makes its best effort to complete each atomic transaction without guaranteeing completion of all atomic transactions. When an atomic transaction is aborted, BET may provide software with appropriate runtime information such as cause of the abortion. With proper coherence layer enhancements, BET can be implemented efficiently for multiprocessor systems, using caches as buffers for data accessed by atomic transactions. Furthermore, with appropriate fairness support, forward progress can be guaranteed for atomic transactions that incur no buffer overflow.
Type:
Grant
Filed:
January 3, 2008
Date of Patent:
September 22, 2015
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Abstract: The disclosed embodiments relate to a system that displays performance data for a computing environment. During operation, the system first determines values for a performance metric for a plurality of entities that comprise the computing environment. Next, the system displays the computing environment as a set of nodes representing the plurality of entities. While displaying the nodes, the system displays a chart with a line illustrating how a value of the performance metric for a selected node varies over time, wherein the line is displayed against a background illustrating how a distribution of the performance metric for a reference subset of the set of nodes varies over time.
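The chart data described in the abstract above can be sketched by computing, per timestep, the selected node's value plus a distribution band over a reference set of nodes. The band here is min/median/max (a real UI might use finer percentiles); all names are illustrative.

```python
# Build the "line against a distribution background" data: one series for a
# selected node, and a per-timestep (lo, mid, hi) band across reference nodes.
from statistics import median

def chart_series(metric, selected, reference):
    """metric: {node: [value_at_t0, value_at_t1, ...]}.
    Returns (selected node's line, per-timestep (min, median, max) band)."""
    line = metric[selected]
    band = []
    for t in range(len(line)):
        vals = [metric[n][t] for n in reference]
        band.append((min(vals), median(vals), max(vals)))
    return line, band

metric = {"a": [1, 2, 3], "b": [2, 2, 2], "c": [5, 1, 4]}
line, band = chart_series(metric, selected="a", reference=["a", "b", "c"])
# band[0] == (1, 2, 5): at t0 the reference nodes span 1..5 with median 2
```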
Abstract: A Datacenter Management Service (DMS) is provided as a platform designed to automate datacenter management tasks that are performed across multiple technology silos and datacenter servers or collections of servers. The infrastructure to perform the automation is provided by integrating heterogeneous task providers and implementations into a set of standardized adapters through dependency inversion. A platform automating datacenter management tasks may include three main components: integration of adapters into an interface allowing a common interface for datacenter task execution, an execution platform that works against the adapters, and implementation of the adapters for a given type of datacenter management task.
Type:
Grant
Filed:
January 24, 2014
Date of Patent:
September 22, 2015
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Guruprakash Rao, Jacob D. Sink, Joshua Martin
Abstract: An apparatus, method, system, and computer-readable medium are disclosed. In one embodiment the apparatus is a processor. The processor includes thread remapping logic that is capable of tracking hardware thread interrupt equivalence information for a first hardware thread and a second hardware thread. The processor also includes logic to receive an interrupt issued from a device, wherein the interrupt has an affinity tied to the first hardware thread. The processor also includes logic to redirect the interrupt to the second hardware thread when the hardware thread interrupt equivalence information validates that the second hardware thread is capable of handling the interrupt.
Abstract: A device receives a command to initiate parallel processing. The command includes an indication of a function that is to be performed in connection with the parallel processing, and a reference to a multidimensional array to which the function is to be applied. The multidimensional array includes at least three dimensions. The command also includes an indication of one or more dimensions by which the multidimensional array is to be partitioned. The device partitions the multidimensional array, along the one or more dimensions, to divide the multidimensional array into multiple blocks, each of the multiple blocks representing a subset of the multidimensional array. The device controls application of the function to the multiple blocks to cause the function to be applied in parallel to at least two blocks of the multiple blocks.
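The partition-and-apply command described in the abstract above can be sketched with nested lists standing in for a multidimensional array. Partitioning along dimension 0 only is an illustrative simplification of "one or more dimensions", and all names are made up for the example.

```python
# Partition a 3-D array (nested lists) into blocks along its first
# dimension, then apply a function to the blocks in parallel.
from concurrent.futures import ThreadPoolExecutor

def partition_dim0(array, block_size):
    """Split an array into blocks along its first dimension."""
    return [array[i:i + block_size] for i in range(0, len(array), block_size)]

def parallel_apply(func, array, block_size):
    blocks = partition_dim0(array, block_size)
    with ThreadPoolExecutor() as pool:
        # Each block is a subset of the original array; func runs per block,
        # with at least two blocks processed concurrently.
        return list(pool.map(func, blocks))

# 3-D array: 4 planes of 2x2, where plane p is filled with the value p.
arr = [[[p, p], [p, p]] for p in range(4)]

def block_sum(block):
    return sum(v for plane in block for row in plane for v in row)

print(parallel_apply(block_sum, arr, block_size=2))  # [4, 20]
```

The design choice the abstract highlights is that the caller names the dimension(s) to partition by, so block shape is under user control rather than chosen by the runtime.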
Abstract: To provision a secure customer domain in a virtualized multi-tenant environment, a virtual machine (VM) is configured for a customer in the customer domain. First, second, and third virtual network interfaces (VNICs) are configured in the VM. The first VNIC has a first network address within a first address range selected for the customer domain and enables a first application on the VM to access a second application in a second VM in the customer domain. The second VNIC enables a third application outside the customer domain to access the VM in the customer domain. The second VNIC is configured to use an addressing specification used by a server of the third application. The third VNIC enables access from the first application to a fourth application executing outside the customer domain. The third VNIC is configured to use an addressing specification used by a server of the fourth application.
Type:
Grant
Filed:
June 13, 2013
Date of Patent:
September 15, 2015
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Sean Donnellan, Robert K. Floyd, III, Robert P. Monaco, Holger Mueller, Joseph D. Robinson
Abstract: Approaches for ensuring the privacy of a controller of a device from a host operating system. A host operating system is prevented from inspecting or modifying data received by a controller of a hardware device. Control of the controller is withdrawn from the host operating system and granted to a hypervisor. A replacement controller for the hardware device is provided to the host operating system. Upon the hypervisor receiving data via the controller, the hypervisor forwards the data to a relevant virtual machine responsible for processing the data. Although the host operating system may operate as if it possessed control of the controller of the hardware device, any malicious code inadvertently residing within the host operating system will be unable to inspect or modify any data received by or sent from the actual controller of the hardware device.
Abstract: An apparatus and an article of manufacture are provided for creating a virtual machine super template used to create a user-requested virtual machine template. The approach includes identifying at least one virtual machine super template to be created by analyzing at least one existing template in a repository and/or a user-defined combination of software; creating the super template by installing software requested by the user to be within the super template; and creating a user-requested virtual machine template by un-installing software from the super template that is not required in the user-requested template and/or adding software to the super template that is required in the user-requested template but not present in the super template.
Type:
Grant
Filed:
May 29, 2012
Date of Patent:
September 15, 2015
Assignee:
International Business Machines Corporation
Inventors:
Pradipta De, Manish Gupta, Deepak K. Jeswani
Abstract: A method and system for scheduling a time critical task. The system may include a processing unit, a hardware assist scheduler, and a memory coupled to both the processing unit and the hardware assist scheduler. The method may include receiving timing information for executing the time critical task, the time critical task executing program instructions via a thread on a core of a processing unit, and scheduling the time critical task based on the received timing information. The method may further include programming a lateness timer, waiting for a wakeup time to occur, and notifying the processing unit of the scheduling. Additionally, the method may include executing, on the core of the processing unit, the time critical task in accordance with the scheduling, monitoring the lateness timer, and asserting a thread execution interrupt in response to the lateness timer expiring, thereby suspending execution of the time critical task.
Type:
Grant
Filed:
April 9, 2013
Date of Patent:
September 15, 2015
Assignee:
National Instruments Corporation
Inventors:
Sundeep Chandhoke, Herbert K. Salmon, IV
Abstract: Methods and systems that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process, without impacting function, are provided. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments.
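The cycle-reuse idea in the abstract above can be sketched as a reservation table: one physical instance of the resource serves two aspects of an operation at cycles X and X+N, and a blocking command keeps any second operation from claiming the resource at cycle X+N. The scheduler below is an illustrative model, not the processor's actual logic.

```python
# Model a single shared resource as a set of reserved cycles; an operation
# claims the resource at cycle X for its first aspect and X+N for its second.

class SharedResource:
    def __init__(self):
        self.reserved = set()  # cycles at which the resource is claimed

    def reserve(self, cycle):
        """Claim the resource for a cycle; False if already taken."""
        if cycle in self.reserved:
            return False
        self.reserved.add(cycle)
        return True

def schedule_op(resource, start_cycle, n):
    # First aspect at cycle X, second aspect at cycle X+N.
    return resource.reserve(start_cycle) and resource.reserve(start_cycle + n)

res = SharedResource()
ok = schedule_op(res, start_cycle=10, n=3)   # claims cycles 10 and 13
blocked = res.reserve(13)                    # a second op at cycle 13 is rejected
```

In the abstract's terms, the failed `reserve(13)` corresponds to the second operation being blocked N cycles later and rescheduled, which is what lets one resource instance replace two.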
Abstract: A method for integrating responses to asynchronous events is provided. A hypervisor of a host receives a request from a network manager to re-direct asynchronous events from a guest to an address of an event aggregation manager distinct from an address of the network manager. The hypervisor receives an asynchronous event having a destination address of the network manager from the guest. The hypervisor maps the destination address of the network manager to the address of the event aggregation manager. The hypervisor transmits the asynchronous event to the event aggregation manager.
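The re-direction described in the abstract above amounts to an address-rewrite table in the hypervisor: events bound for the network manager get their destination mapped to the event aggregation manager before transmission. The dict-based table and the addresses below are illustrative assumptions.

```python
# Hypervisor sketch: keep a redirect table from a guest event's original
# destination (the network manager) to the event aggregation manager, and
# rewrite the destination before forwarding the asynchronous event.

class Hypervisor:
    def __init__(self):
        self.redirects = {}

    def request_redirect(self, manager_addr, aggregator_addr):
        """Network manager asks that events bound for it go to the aggregator."""
        self.redirects[manager_addr] = aggregator_addr

    def route_event(self, event):
        """Map the event's destination, then 'transmit' (return) the event."""
        dest = self.redirects.get(event["dst"], event["dst"])
        return {**event, "dst": dest}

hv = Hypervisor()
hv.request_redirect("10.0.0.5", "10.0.0.9")
out = hv.route_event({"type": "link-down", "dst": "10.0.0.5"})
# out["dst"] is now "10.0.0.9", the event aggregation manager's address
```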
Abstract: Reducing an amount of memory used by a virtual machine. A system includes multiple virtual machines that share common pages of memory. The number of private pages associated with each virtual machine is minimized by ensuring that pages that a guest operating system regards as now free or zeroed are efficiently mapped by the hypervisor to a shared zero page. Upon a hypervisor determining that one or more guest physical frame numbers are assigned to free memory pages, the hypervisor updates mapping data to map the one or more guest physical frame numbers to a shared zero page within the machine frame.
Type:
Grant
Filed:
May 10, 2012
Date of Patent:
September 15, 2015
Assignee:
Bromium, Inc.
Inventors:
Krzysztof Uchronski, Martin O'Brien, Jacob Gorm Hansen, Kiran Bondalapati, Ian Pratt, Gaurav Banga, Vikram Kapoor
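The zero-page sharing described in the abstract above can be sketched with a flat mapping table: when the guest frees or zeroes pages, the hypervisor remaps those guest physical frame numbers (GFNs) to one shared zero page instead of keeping a private page for each. The dict standing in for the mapping structure and the sentinel value are assumptions.

```python
# When the guest reports GFNs as free/zeroed, remap them to a single shared
# zero page and return the private machine frames that can be reclaimed.

SHARED_ZERO_PAGE = -1  # sentinel machine frame number for the shared page

def release_private_pages(gfn_to_mfn, freed_gfns):
    """Remap freed/zeroed GFNs to the shared zero page; return reclaimed MFNs."""
    reclaimed = []
    for gfn in freed_gfns:
        if gfn_to_mfn.get(gfn, SHARED_ZERO_PAGE) != SHARED_ZERO_PAGE:
            reclaimed.append(gfn_to_mfn[gfn])   # private page freed for reuse
        gfn_to_mfn[gfn] = SHARED_ZERO_PAGE
    return reclaimed

mapping = {0: 100, 1: 101, 2: 102}
freed = release_private_pages(mapping, [1, 2])
# mapping now sends GFNs 1 and 2 to the shared zero page; MFNs 101 and 102
# are available to back other virtual machines.
```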
Abstract: Methods and systems are provided for graphics processing unit optimization via wavefront reforming, including queuing one or more work-items of a wavefront into a plurality of queues of a compute unit. Each queue is associated with a particular processor within the compute unit. A plurality of work passes are performed. A determination is made as to which of the plurality of queues are below a threshold amount of work-items. One or more work-items remaining in the other queues are redistributed to the below-threshold queues. A subsequent work pass is performed. The determining, redistributing, and performing of a subsequent work pass are repeated until all the queues are empty.
Type:
Grant
Filed:
March 16, 2012
Date of Patent:
September 15, 2015
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Michael L. Schmit, Radhakrishna Giduthuri
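The wavefront-reforming loop described in the abstract above can be modeled deterministically: each queue consumes one work-item per pass; after a pass, items from fuller queues are redistributed into queues that fell below a threshold; the loop repeats until every queue is empty. The threshold, queue sizes, and redistribution policy below are illustrative.

```python
# Simulate work passes over per-processor queues with redistribution.

def run_wavefront(queues, threshold):
    """queues: list of lists of work-items. Returns the number of passes."""
    passes = 0
    while any(queues):
        # One work pass: each non-empty queue (processor) consumes one item.
        for q in queues:
            if q:
                q.pop()
        # Refill below-threshold queues from the fullest above-threshold queue.
        for q in queues:
            while len(q) < threshold:
                donor = max(queues, key=len)
                if len(donor) <= threshold:
                    break          # no queue has work to spare
                q.append(donor.pop())
        passes += 1
    return passes

queues = [["w"] * 5, ["w"], ["w"], ["w"]]
passes = run_wavefront(queues, threshold=2)
# With redistribution this drains in 3 passes; without it, the 5-item queue
# alone would force 5 passes with three processors idle.
```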
Abstract: The invention relates to a method for generating, on the fly and on demand, at least one virtual network (402, 404, 406), adapted for a specific use, on a physical network (200), referred to as an infrastructure network, including physical nodes (204, 206, 208, 210), each of which runs at least one network operating system. The method includes the following steps: determining, on at least one computer device (202), referred to as a virtual network server, data related to said virtual network (402) to be generated in accordance with said specific use; transmitting, on the basis of said data and to at least a portion of said physical nodes (204, 206, 208, 210), referred to as active nodes, of said infrastructure network (200), a request for creating a virtual node; and creating a virtual node on each of said active nodes by installing a virtual device in each of said active nodes, said virtual network consisting of all of said virtual nodes thus created.
Type:
Grant
Filed:
July 16, 2010
Date of Patent:
September 15, 2015
Assignees:
Universite Pierre et Marie Curie (Paris 6), Centre National De La Recherche Scientifique
Abstract: A virtual disk image manager running on a computing device determines that an operation is to be performed on a virtual disk image. The virtual disk image manager then determines whether an underlying storage domain on which the virtual disk image is stored supports the operation. In response to determining that the storage domain supports the operation, the virtual disk image manager uses native capabilities of the storage domain to perform the operation. In response to determining that the storage domain does not support the operation, the virtual disk image manager performs the operation without the use of the storage domain's native capabilities.
Abstract: Estimating required resources to support a specific number of users in a virtually provisioned environment is described. Servers are identified that support application operations associated with executing an application, based on a configuration file. A count of each type of application operation currently executing and a resource utilization associated with each of the servers are recorded. A set of linear equations is created and solved to estimate resource costs of each individual type of application operation and ultimately to calculate required resource costs to support the projected number of concurrent users.
Type:
Grant
Filed:
March 5, 2013
Date of Patent:
September 8, 2015
Assignee:
EMC CORPORATION
Inventors:
Dmitry Volchegursky, Dmitry Limonov, Boris Shpilyuck, Alex Rankov
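The estimation step described in the abstract above models each server's observed utilization as a linear combination of per-operation-type costs, then solves the resulting system to recover each cost. A 2x2 Cramer's-rule solve stands in for a general least-squares solver, and all operation names, counts, and utilization numbers are made up for the example.

```python
# Solve a 2x2 linear system: (operation counts) @ (per-op costs) = (CPU used).

def solve_2x2(a, b):
    """Solve [[a00, a01], [a10, a11]] @ x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det
    x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det
    return x0, x1

# Two measurements: counts of (search ops, checkout ops) vs. CPU consumed.
counts = [[10, 2],   # measurement 1: 10 searches, 2 checkouts
          [4, 8]]    # measurement 2
cpu_used = [34.0, 100.0]
cost_search, cost_checkout = solve_2x2(counts, cpu_used)
# Here each search costs 1.0 CPU unit and each checkout 12.0, so 100
# concurrent users each doing 5 searches and 1 checkout would require:
required = 100 * (5 * cost_search + 1 * cost_checkout)
```

In practice there would be more measurements than operation types, making the system overdetermined and better handled by least squares, but the recovery of per-operation costs works the same way.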
Abstract: In a computer system having virtual machines running therein, a hypervisor that supports execution of the virtual machines allocates blocks of storage to the virtual machines from a thinly provisioned logical block device. When the hypervisor deletes a file or receives commands to delete a file, the hypervisor moves the file into a delete directory. An unmap thread running in the background issues unmap commands to the storage device to release one or more blocks of the logical block device that are allocated to the files in the delete directory, so that the unmap operation can be executed asynchronously with respect to the file delete event.
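The deferred-unmap flow described in the abstract above can be sketched in two halves: the synchronous delete just moves a file into a delete directory, while a background pass later issues unmap commands for the blocks those files held on the thinly provisioned device. The structures below (and running the background pass inline) are illustrative.

```python
# Model a thinly provisioned disk where file deletion is decoupled from
# block release: delete() is the fast synchronous path, unmap_pass() is
# the asynchronous background work.

class ThinDisk:
    def __init__(self):
        self.files = {}        # name -> list of allocated block numbers
        self.delete_dir = {}   # files awaiting asynchronous unmap
        self.unmapped = []     # blocks released back to the storage device

    def delete(self, name):
        """File-delete event: move the file aside; no unmap is issued yet."""
        self.delete_dir[name] = self.files.pop(name)

    def unmap_pass(self):
        """Background unmap thread: release blocks of deleted files."""
        for name, blocks in self.delete_dir.items():
            self.unmapped.extend(blocks)   # issue an unmap command per block
        self.delete_dir.clear()

disk = ThinDisk()
disk.files["vm1.vmdk"] = [7, 8, 9]
disk.delete("vm1.vmdk")            # synchronous part of the delete
pending = list(disk.delete_dir)    # blocks are still held at this point
disk.unmap_pass()                  # asynchronous part releases them
```

Decoupling the two halves is the point of the abstract: the file-delete event returns quickly, and the storage device reclaims space whenever the background thread runs.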