Virtual Machine Memory Addressing Patents (Class 711/6)
  • Publication number: 20120124269
    Abstract: A kernel of the operating system reorganizes a plurality of memory units into a plurality of virtual nodes in a virtual non-uniform memory access architecture in response to receiving a configuration of the plurality of memory units from firmware. A subsystem of the operating system determines an order of allocation of the plurality of virtual nodes calculated to maintain a maximum number of the plurality of memory units devoid of references. A memory controller transitions one or more memory units into a lower power state in response to the one or more memory units being devoid of references for a period of time.
    Type: Application
    Filed: November 15, 2010
    Publication date: May 17, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ankita Garg, Balbir Singh, Vaidyanathan Srinivasan
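The allocation-ordering idea in 20120124269 can be illustrated with a small sketch: place new allocations on virtual nodes that already hold references, so the remaining memory units stay reference-free and eligible for a lower power state. This is a minimal illustration under invented node sizes and helper names, not IBM's implementation.

```python
# Sketch: order virtual-node allocations so the largest possible number of
# memory units stays free of references and can enter a low-power state.
# Node layout and names are invented for illustration.

class VirtualNode:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # pages available in this node
        self.used = 0              # pages currently referenced

    @property
    def free(self):
        return self.capacity - self.used

def allocate(nodes, pages):
    """Fill already-referenced nodes first; touch idle nodes last."""
    order = sorted(nodes, key=lambda n: (n.used == 0, n.free))
    for node in order:
        take = min(node.free, pages)
        node.used += take
        pages -= take
        if pages == 0:
            break
    return pages  # pages that could not be placed

def power_down_candidates(nodes):
    """Memory units devoid of references are candidates for a lower power state."""
    return [n.name for n in nodes if n.used == 0]

nodes = [VirtualNode("vnode0", 1024), VirtualNode("vnode1", 1024),
         VirtualNode("vnode2", 1024)]
allocate(nodes, 1500)
print(power_down_candidates(nodes))   # ['vnode2'] stays idle
```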
  • Publication number: 20120117301
    Abstract: Various methods and apparatus are described for communicating transactions between one or more initiator IP cores and one or more target IP cores coupled to an interconnect. A centralized Memory Management logic Unit (MMU) is located in the interconnect for virtualization and sharing of integrated circuit resources including target cores between the one or more initiator IP cores. A master translation look-aside buffer (TLB) stores virtualization and sharing information in the entries of the master TLB. A set of two or more translation look-aside buffers (TLBs) locally store virtualization and sharing information replicated from the master TLB. Logic in the MMU or other software updates the virtualization and sharing information replicated from the master TLB in the entries of one or more of the set of local TLBs.
    Type: Application
    Filed: November 3, 2011
    Publication date: May 10, 2012
    Applicant: SONICS, INC.
    Inventor: Drew E. Wingard
  • Publication number: 20120117299
    Abstract: Miss-rate curves are constructed in a resource-efficient manner so that they can be built, and memory management decisions made, while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced.
    Type: Application
    Filed: November 9, 2010
    Publication date: May 10, 2012
    Applicant: VMWARE, INC.
    Inventors: Carl A. WALDSPURGER, Rajesh VENKATASUBRAMANIAN, Alexander Thomas GARTHWAITE, Yury BASKAKOV, Puneet ZAROO
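A rough sketch of the sampling approach in 20120117299: keep an LRU ordering for a sampled subset of pages, record the reuse (stack) distance of each access to a sampled page, and read the miss-rate curve off the distance histogram. The sampling rule and access trace below are invented for illustration.

```python
# Sketch: build a miss-rate curve from LRU stack distances over a sampled
# subset of pages (sampling rule and trace are invented).
from collections import OrderedDict

def sampled(page):
    return page % 8 == 0            # trace only one page in eight

lru = OrderedDict()                 # most recently used page at the end
distances = []                      # stack distance of every sampled access

def access(page):
    if not sampled(page):
        return
    if page in lru:
        # Stack distance = number of sampled pages touched more recently.
        distance = len(lru) - list(lru).index(page) - 1
        distances.append(distance)
        lru.move_to_end(page)
    else:
        distances.append(float("inf"))  # cold miss at any cache size
        lru[page] = True

def miss_rate_curve(max_size):
    """Miss rate if only `size` sampled pages could be kept resident."""
    total = len(distances)
    return {size: sum(d >= size for d in distances) / total
            for size in range(1, max_size + 1)}

for page in [0, 8, 16, 0, 8, 24, 16, 0, 8, 0]:
    access(page)
print(miss_rate_curve(4))   # {1: 1.0, 2: 0.9, 3: 0.7, 4: 0.4}
```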
  • Publication number: 20120117298
    Abstract: A method and system manage memory in a network of virtual machines, including a copy of a master virtual machine (VM) memory system, the copy accessible to a memory server. The method includes determining whether a memory page requested by a clone VM memory system is fetchable from the memory server, the clone VM memory system hosted in a host memory system; if the memory page is fetchable from the memory server, fetching the memory page from the memory server; determining whether there is sufficient space in the host memory system to load the memory page; if there is insufficient space in the host memory system, evicting a selected memory page from the host memory system; and loading the memory page into the host memory system and the clone VM memory system.
    Type: Application
    Filed: November 9, 2010
    Publication date: May 10, 2012
    Applicant: GridCentric Inc.
    Inventors: Adin Scannell, Timothy Smith, Vivek Lakshmanan, David Scannell, Kannan Vijayan, Jing Su
  • Publication number: 20120117300
    Abstract: One embodiment of the present invention is a technique to invalidate entries in a translation lookaside buffer (TLB). A TLB in a processor has a plurality of TLB entries. Each TLB entry is associated with a virtual machine extension (VMX) tag word indicating if the associated TLB entry is invalidated according to a processor mode when an invalidation operation is performed. The processor mode is one of execution in a virtual machine (VM) and execution not in a virtual machine.
    Type: Application
    Filed: December 2, 2010
    Publication date: May 10, 2012
    Inventors: Erik C. Cota-Robles, Andy Glew, Stalinselvaraj Jeyasingh, Alain Kagi, Michael A. Kozuch, Gilbert Neiger, Richard Uhlig
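The tag-based invalidation in 20120117300 can be sketched as a TLB whose entries carry a flag recording whether they were created while running in a virtual machine; an invalidation then drops only the entries matching the current processor mode. The field and class names below are illustrative, not the patented design.

```python
# Sketch: TLB entries tagged with the mode (in-VM or not) that created them,
# so invalidation can be scoped to one mode (field names are illustrative).
from dataclasses import dataclass

@dataclass
class TlbEntry:
    virtual_page: int
    physical_page: int
    vmx_tag: bool        # True if created while executing in a virtual machine

class Tlb:
    def __init__(self):
        self.entries = []

    def insert(self, vpn, ppn, in_vm):
        self.entries.append(TlbEntry(vpn, ppn, in_vm))

    def invalidate(self, in_vm):
        """Drop only entries whose tag matches the current processor mode."""
        self.entries = [e for e in self.entries if e.vmx_tag != in_vm]

tlb = Tlb()
tlb.insert(0x10, 0x300, in_vm=True)    # guest translation
tlb.insert(0x20, 0x400, in_vm=False)   # host translation
tlb.invalidate(in_vm=True)             # flush only guest-mode entries
print(tlb.entries)                     # the host entry survives
```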
  • Patent number: 8176280
    Abstract: Management of storage used by pageable guests of a computing environment is facilitated. A query instruction is provided that details information regarding the storage location indicated in the query. It specifies whether the storage location, if protected, is protected by host-level protection or guest-level protection.
    Type: Grant
    Filed: March 20, 2008
    Date of Patent: May 8, 2012
    Assignee: International Business Machines Corporation
    Inventors: Mark S. Farrell, Lisa Cranton Heller, Damian L. Osisek, Peter K. Szwed
  • Patent number: 8176282
    Abstract: A system and method are provided for managing cache memory in a computer system. A cache controller portions a cache memory into a plurality of partitions, where each partition includes a plurality of physical cache addresses. Then, the method accepts a memory access message from the processor. The memory access message includes an address in physical memory and a domain identification (ID). A determination is made as to whether the address in physical memory is cacheable. If cacheable, the domain ID is cross-referenced to a cache partition identified by partition bits. An index is derived from the physical memory address, and a partition index is created by combining the partition bits with the index. A processor is granted access (read or write) to an address in cache defined by the partition index.
    Type: Grant
    Filed: April 6, 2009
    Date of Patent: May 8, 2012
    Assignee: Applied Micro Circuits Corporation
    Inventor: Daniel L. Bouvier
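The index construction in 8176282 amounts to simple bit arithmetic: derive a set index from the physical address, look up the partition bits for the requesting domain ID, and prepend them to form the partition index. The bit widths and domain-to-partition table below are assumptions chosen for illustration.

```python
# Sketch: combine domain-selected partition bits with an address-derived index
# to pick a cache set (bit widths and domain table are assumed).

INDEX_BITS = 6          # 64 sets per partition
OFFSET_BITS = 6         # 64-byte cache lines

# Hypothetical mapping from domain ID to a 2-bit partition number.
DOMAIN_TO_PARTITION = {0: 0b00, 1: 0b01, 2: 0b10, 3: 0b11}

def partition_index(physical_address, domain_id):
    index = (physical_address >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    partition_bits = DOMAIN_TO_PARTITION[domain_id]
    # Partition bits become the high-order bits of the combined index.
    return (partition_bits << INDEX_BITS) | index

# Two domains touching the same physical line land in different partitions.
print(hex(partition_index(0x1234_5678, domain_id=1)))   # 0x59
print(hex(partition_index(0x1234_5678, domain_id=2)))   # 0x99
```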
  • Patent number: 8176279
    Abstract: Management of storage used by pageable guests of a computing environment is facilitated. An enhanced suppression-on-protection facility is provided that enables the determination of which level of protection (host or guest) caused a fault condition, in response to an attempted storage access.
    Type: Grant
    Filed: March 20, 2008
    Date of Patent: May 8, 2012
    Assignee: International Business Machines Corporation
    Inventors: Mark S. Farrell, Charles W. Gainey, Jr., Dan F. Greiner, Lisa Cranton Heller, Damian L. Osisek
  • Patent number: 8176294
    Abstract: Storage expansion for a virtual machine operating system is reduced. In one embodiment, virtual machines are run on a host and accessed by remote clients over a network. When a guest operating system on one of the virtual machines deletes a file, a VM storage manager on the host detects a special write performed by the guest operating system that writes zeros into a logical block of the file. The VM storage manager links the logical block to a designated block, and de-allocates the disk block that is mapped to the logical block. The de-allocation allows the disk block to be reused by the virtual machines.
    Type: Grant
    Filed: April 6, 2009
    Date of Patent: May 8, 2012
    Assignee: Red Hat Israel, Ltd.
    Inventor: Shahar Frank
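A schematic of the zero-write handling in 8176294: when the guest writes a block of all zeros, the VM storage manager links the logical block to a shared designated block and de-allocates the disk block that backed it, so the disk block can be reused. The block size and map layout below are invented, not the Red Hat implementation.

```python
# Sketch: remap all-zero logical blocks to one designated block and release
# their backing disk blocks (block size and structures are invented).

BLOCK_SIZE = 4096
DESIGNATED_BLOCK = "designated-zero-block"

class VmStorageManager:
    def __init__(self):
        self.block_map = {}          # logical block -> disk block or DESIGNATED_BLOCK
        self.free_disk_blocks = []   # reusable by the virtual machines

    def write(self, logical_block, data):
        if data == bytes(BLOCK_SIZE):                  # special all-zero write
            backing = self.block_map.get(logical_block)
            if backing not in (None, DESIGNATED_BLOCK):
                self.free_disk_blocks.append(backing)  # de-allocate the disk block
            self.block_map[logical_block] = DESIGNATED_BLOCK
        else:
            self.block_map.setdefault(logical_block, f"disk-{logical_block}")
            # ... write data to the backing disk block ...

mgr = VmStorageManager()
mgr.write(7, b"payload".ljust(BLOCK_SIZE, b"\x01"))  # normal write allocates
mgr.write(7, bytes(BLOCK_SIZE))                      # zeroing write frees it
print(mgr.block_map[7], mgr.free_disk_blocks)        # designated-zero-block ['disk-7']
```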
  • Publication number: 20120110236
    Abstract: The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization.
    Type: Application
    Filed: October 29, 2010
    Publication date: May 3, 2012
    Applicant: VMWARE, INC.
    Inventors: Qasim ALI, Raviprasad MUMMIDI, Vivek PANDEY, Kiran TATI
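The prioritization in 20120110236 can be sketched as a periodic pass that counts the set access bits beneath each large-page candidate, folds the count into a running value, and, after the first phase, maps large the candidates with the highest values. The decay factor and data layout are invented for illustration.

```python
# Sketch: rank large-page candidates by how many of their small-page access
# bits are set (decay factor and structures are invented).

ENTRIES_PER_L1_TABLE = 512          # small pages covered by one large page

class LargePageCandidate:
    def __init__(self, name):
        self.name = name
        self.access_bits = [False] * ENTRIES_PER_L1_TABLE
        self.current_count = 0.0

    def sample_and_clear(self, decay=0.5):
        """Periodic pass: count set access bits, fold into a running value."""
        set_bits = sum(self.access_bits)
        self.current_count = decay * self.current_count + (1 - decay) * set_bits
        self.access_bits = [False] * ENTRIES_PER_L1_TABLE

def pick_pages_to_map_large(candidates, budget):
    """After the first phase, map large the hottest candidates first."""
    ranked = sorted(candidates, key=lambda c: c.current_count, reverse=True)
    return [c.name for c in ranked[:budget]]

hot, warm, cold = (LargePageCandidate(n) for n in ("hot", "warm", "cold"))
for i in range(400): hot.access_bits[i] = True
for i in range(50):  warm.access_bits[i] = True
for c in (hot, warm, cold):
    c.sample_and_clear()
print(pick_pages_to_map_large([cold, warm, hot], budget=2))   # ['hot', 'warm']
```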
  • Publication number: 20120110237
    Abstract: A method, an apparatus, and a system for online migration from a physical machine to a virtual machine are provided. The method includes: after a target virtual machine is created, started, and suspended by a virtualization platform VMM Host, initially synchronizing data of a memory page from a source physical machine to the target virtual machine at a second time point; monitoring the operation of updating the memory page since the second time point; incrementally synchronizing data of the updated memory page in the source physical machine to the target virtual machine, and stopping monitoring when an increment value of the updated memory page in the source physical machine is less than a first threshold; and calling the virtualization platform VMM Host to resume the target virtual machine to a running state. The effect of smoothly switching services from the source physical machine to the target virtual machine is achieved.
    Type: Application
    Filed: December 30, 2011
    Publication date: May 3, 2012
    Inventors: Bin LI, Xin Zhang, Jihai Wang
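The synchronization loop in 20120110237 follows a pre-copy pattern: one full copy of memory, then repeated copies of whatever was dirtied since the last pass, stopping once the dirty increment falls below a threshold and only then resuming the target VM. The class and helper names below are assumptions, not an API from the filing.

```python
# Sketch of the pre-copy style synchronization loop used when migrating a
# running physical machine to a target VM (all names and sizes invented).
import random

FIRST_THRESHOLD = 16     # stop monitoring when fewer pages than this changed

class SourceMachine:
    """Stand-in for the source physical machine's memory and dirty tracking."""
    def __init__(self, pages=4096):
        self.pages = pages
    def snapshot_pages(self):
        return set(range(self.pages))
    def dirty_pages_since(self, round_no):
        # Pretend the workload dirties fewer pages each round.
        return set(random.sample(range(self.pages), max(2, 1024 >> round_no)))

class TargetVm:
    def __init__(self):
        self.synced = set()
        self.running = False
    def load(self, pages):
        self.synced |= pages
    def resume(self):
        self.running = True

def migrate(source, target):
    target.load(source.snapshot_pages())           # initial synchronization
    round_no = 0
    while True:
        dirty = source.dirty_pages_since(round_no) # monitor memory-page updates
        target.load(dirty)                         # incremental synchronization
        round_no += 1
        if len(dirty) < FIRST_THRESHOLD:           # increment below threshold
            break
    target.resume()                                # VMM Host resumes the VM

vm = TargetVm()
migrate(SourceMachine(), vm)
print(vm.running, len(vm.synced))                  # True 4096
```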
  • Patent number: 8171176
    Abstract: Disclosed is a method and a SAS controller device that abstract access from one or more virtual machines operating on a host system to SAS physical devices connected to the SAS controller without a routing table for port-to-port messaging on the SAS controller. An embodiment may create a virtual expander for each physical port of the SAS controller and further create virtual ports within the virtual expanders to provide abstracted access to SAS physical devices for the virtual machines. The SAS physical devices may be replicated/cloned within the virtual ports. Each replicated/cloned SAS physical device may be assigned a unique SAS address for the SAS controller (i.e., unique for the SAS controller such that other replicates/clones on other virtual ports have a different SAS address).
    Type: Grant
    Filed: August 31, 2010
    Date of Patent: May 1, 2012
    Assignee: LSI Corporation
    Inventors: Sayantan Battacharya, Lawrence J. Rawe, Edoardo Daelli
  • Patent number: 8171504
    Abstract: A method, system and computer program product for providing driver functionality in a computing system includes installing an operating system on the computing system; forming a plurality of isolated sandboxes running on the computing system under control of the operating system; during an attempt to install a driver, installing a driver stub in the operating system; installing the driver in one of the isolated sandboxes, wherein the driver directly uses at least part of system resources; and using a gateway between the driver stub and the installed driver to provide an interface for transmitting requests from the driver stub to the driver.
    Type: Grant
    Filed: May 10, 2011
    Date of Patent: May 1, 2012
    Assignee: Parallels IP Holdings GmbH
    Inventors: Stanislav S. Protassov, Alexander G. Tormasov, Serguei M. Beloussov
  • Patent number: 8171201
    Abstract: Virtual machine optimization and/or storage reclamation solutions are disclosed that manage virtual machine sprawl and/or growing enterprise storage costs. For instance, certain solutions receive recommendations based on one or more rules, policies and/or user preferences that identify storage and/or alignment criteria for virtual machine disk (VMDK) partition(s). In certain examples, a resize tool that operates within a host operating system of a host server dynamically resizes and/or aligns one or more VMDK partitions of a powered-down virtual machine. For instance, the resize tool can be injected into the host server from a remote management server and can resize and/or align the VMDK partitions without requiring contents of the VMDK to be copied to another VMDK. By reallocating storage and/or aligning the VMDK partitions, embodiments of the invention can increase virtual machine performance and improve storage management.
    Type: Grant
    Filed: October 7, 2009
    Date of Patent: May 1, 2012
    Assignee: Vizioncore, Inc.
    Inventor: Thomas Scott Edwards, Sr.
  • Publication number: 20120102258
    Abstract: A method dynamically reallocates memory affinity in a virtual machine after migrating the virtual machine from a source computer system to a destination computer system. The method migrates processor states and resources used by the virtual machine from the source computer system to the destination computer system. The method maps memory of the virtual machine to processor nodes of the destination computer system. The method deletes memory mappings in processor hardware, such as translation lookaside buffers and effective-to-real address tables, for the virtual machine on the destination computer system. The method starts the virtual machine on the destination computer system in virtual real memory mode. A hypervisor running on the destination computer system receives a page fault and virtual address of a page for said virtual machine from a processor of the destination computer system and determines if the page is in local memory of the processor.
    Type: Application
    Filed: October 22, 2010
    Publication date: April 26, 2012
    Applicant: International Business Machines Corporation
    Inventors: David A. Hepkin, Peter J. Heyrman, Bret R. Olszewski
  • Patent number: 8166244
    Abstract: A removable storage device with a processor and a non-volatile memory, and a method for using a removable storage device, are provided to emulate a computer system. The storage device stores in the non-volatile memory data it obtained from a first computer system, the data containing computer applications. When the storage device is removably connected to a second computer system and the second computer system is associated with a computer peripheral device, the processor in the storage device is instructed to emulate the original process environment of the first computer system.
    Type: Grant
    Filed: March 12, 2010
    Date of Patent: April 24, 2012
    Assignee: Sandisk IL Ltd.
    Inventors: Ari Daniel Fruchter, Judah Gamliel Hahn
  • Patent number: 8161197
    Abstract: Method and system for efficient buffer management for layer 2 through layer 5 network interface controller applications are provided. Aspects of the method may comprise determining whether an active NIC connection is an L2 type, an L4 type, or an L5 type. At least one buffer descriptor may be cached locally on a network interface controller (NIC) managed by a NIC application. The buffer descriptor is associated with the determined type of the active NIC connection. If the at least one active NIC connection is of the L2 or L4 type, the buffer descriptor may comprise at least one of a receive (RX) buffer descriptor and a transmit (TX) buffer descriptor. If the NIC connection is of the L5 type, the buffer descriptor may comprise at least one of an upper translation page table (TPT) entry and a lower TPT entry.
    Type: Grant
    Filed: October 22, 2004
    Date of Patent: April 17, 2012
    Assignee: Broadcom Corporation
    Inventors: Scott McDaniel, Kan Fan
  • Publication number: 20120089764
    Abstract: Updating contents of certain memory pages in a virtual machine system is deferred until they are needed. Specifically, certain page update operations are deferred until the page is accessed for a load or store operation. Each page within the virtual machine system includes associated metadata, which includes a page signature characterizing the contents of a corresponding page or a reference to a page with canonical contents, and a flag that indicates the page needs to be updated before being accessed. The metadata may also include a flag to indicate that a backing store of the memory page has contents of a known content class. When such a memory page is mapped to a shared page with contents of that known content class, the flag in the metadata indicating that contents of the memory page need to be updated is not set.
    Type: Application
    Filed: October 7, 2010
    Publication date: April 12, 2012
    Applicant: VMWARE, INC.
    Inventors: Yury BASKAKOV, Alexander GARTHWAITE, Jesse POOL
  • Publication number: 20120084782
    Abstract: High availability (HA) protection is provided for an executing virtual machine. At a checkpoint in the HA process, the active host server suspends the virtual machine and copies dirty memory pages to a ring buffer. A copy process copies the dirty pages to a first location in the buffer. At a predetermined benchmark or threshold, a transmission process can begin. The transmission process reads data out of the buffer at a second location to send to the standby host. Both the copy and transmission processes can operate substantially simultaneously on the ring buffer. As such, the ring buffer cannot overflow, because the transmission process continues to empty the ring buffer as the copy process continues. This arrangement allows for smaller buffers and prevents buffer overflows.
    Type: Application
    Filed: September 30, 2010
    Publication date: April 5, 2012
    Applicant: AVAYA INC.
    Inventors: Wu Chou, Weiping Guo, Feng Liu, Zhi Qiang Zhao
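The buffering scheme in 20120084782 is a producer/consumer arrangement: the checkpoint copy process fills the ring at one position while the transmission process drains it from another, so a small buffer suffices. The sketch below stands in for the ring with a bounded queue and threads, purely for illustration; the sizes are invented.

```python
# Sketch: a checkpoint copier fills a bounded buffer while a transmitter
# drains it concurrently (a queue stands in for the ring; sizes invented).
import threading
import queue

ring = queue.Queue(maxsize=64)   # small bounded buffer standing in for the ring
sent_to_standby = []

def copy_dirty_pages(dirty_pages):
    """Copy process: runs while the VM is suspended at the checkpoint."""
    for page in dirty_pages:
        ring.put(page)           # blocks briefly instead of overflowing
    ring.put(None)               # end-of-checkpoint marker

def transmit_to_standby():
    """Transmission process: empties the buffer while the copier keeps filling it."""
    while True:
        page = ring.get()
        if page is None:
            break
        sent_to_standby.append(page)   # stand-in for sending to the standby host

tx = threading.Thread(target=transmit_to_standby)
tx.start()
copy_dirty_pages([f"page-{i}" for i in range(500)])   # 500 pages through 64 slots
tx.join()
print(len(sent_to_standby))      # 500: the small buffer never overflowed
```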
  • Publication number: 20120084488
    Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. Dynamic address translation of the virtual address proceeds. In response to a translation interruption having occurred during dynamic address translation, bits are stored in a translation exception qualifier (TXQ) field to indicate that the exception was either a host DAT exception having occurred while running a host program or a host DAT exception having occurred while running a guest program. The TXQ is further capable of indicating that the exception was associated with a host virtual address derived from a guest page frame real address or a guest segment frame absolute address. The TXQ is further capable of indicating that a larger or smaller host frame size is preferred to back a guest frame.
    Type: Application
    Filed: December 6, 2011
    Publication date: April 5, 2012
    Applicant: International Business Machines Corporation
    Inventors: Dan F. Greiner, Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer
  • Publication number: 20120084487
    Abstract: In accordance with an embodiment, a method of running a virtual machine on a server includes controlling data path resources allocated to the virtual machine using a first supervisory process running on the server, where controlling the data path resources comprises controlling a data path of a hardware interface device coupled to the server, and controlling control path and initialization resources of the hardware interface device using a second process running on the server, where the second process is separate from the first supervisory process.
    Type: Application
    Filed: September 28, 2011
    Publication date: April 5, 2012
    Applicant: FutureWei Technologies, Inc.
    Inventor: Kaushik C. Barde
  • Patent number: 8151032
    Abstract: Described techniques increase runtime performance of workloads executing on a hypervisor by executing virtualization-aware code in an otherwise non-virtualization-aware guest operating system. In one implementation, the virtualization-aware code allows workloads direct access to physical hardware devices, while allowing the system memory allocated to the workloads to be overcommitted. In one implementation, a DMA filter driver is inserted into an I/O driver stack to ensure that the target guest physical memory of a DMA transfer is resident before the transfer begins. The DMA filter driver may utilize a cache to track which pages of memory are resident. The cache may also indicate which pages of memory are in use by one or more transfers, enabling the hypervisor to avoid appropriating pages of memory during a transfer.
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: April 3, 2012
    Assignee: Microsoft Corporation
    Inventor: Jacob Oshins
  • Patent number: 8151263
    Abstract: Methods and systems for real-time cloning of a virtual machine are described. A virtual machine is running and a clone of the virtual machine is created while the virtual machine continues to run. In one embodiment, the creation of the clone further comprises quiescing the virtual machine, taking a snapshot S1 (excluding main memory) of the state of the virtual machine, and creating a copy S2 of the snapshot S1. The original VM continues execution off the snapshot S1. The cloned VM restores from snapshot S2. In another embodiment, the cloning of the virtual machine further comprises instructing a vmkernel associated with the virtual machine to mark all pages of main memory of the virtual machine as copy-on-write (COW). The unique ID corresponding to the main memory is provided by the vmkernel and an association between the unique ID and the main memory is made upon restoration of the clone.
    Type: Grant
    Filed: March 19, 2007
    Date of Patent: April 3, 2012
    Assignee: VMware, Inc.
    Inventors: Ganesh Venkitachalam, Alexander Moshchuk
  • Patent number: 8151033
    Abstract: In one embodiment, a mechanism for virtual logical volume management is disclosed. In one embodiment, a method for virtual logical volume management includes writing, by a virtual machine (VM) host server computing device, a control block to each of a plurality of network-capable physical storage devices and mapping, by the VM host server computing device, physical storage blocks of the plurality of network-capable physical storage devices to virtual storage blocks of a virtual storage pool by associating the physical storage blocks with the virtual storage blocks in the control block of the network-capable physical storage device housing the physical storage blocks being mapped.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: April 3, 2012
    Assignee: Red Hat, Inc.
    Inventor: Steven Dake
  • Publication number: 20120079165
    Abstract: Paging memory from random access memory (‘RAM’) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
    Type: Application
    Filed: September 28, 2010
    Publication date: March 29, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles J. Archer, Michael A. Blocksome, Todd A. Inglett, Joseph D. Ratterman, Brian E. Smith
  • Publication number: 20120079164
    Abstract: A processor includes a first translation look-aside buffer to support a guest operating mode. A second translation look-aside buffer supports a root operating mode. Hardware resources support the guest operating mode as controlled by guest mode control registers defining guest context. The guest context is used by the hardware resources to access the first translation look-aside buffer to translate a guest virtual address to a guest physical address. The hardware resources access the second translation look-aside buffer to translate the guest physical address to a physical address.
    Type: Application
    Filed: September 27, 2010
    Publication date: March 29, 2012
    Inventor: JAMES ROBERT HOWARD HAKEWILL
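The two-stage lookup in 20120079164 composes two translations: the guest TLB, looked up under guest context, maps a guest virtual address to a guest physical address, and the root TLB maps that result to a machine physical address. The page size and sample mappings below are invented for illustration.

```python
# Sketch: two-level address translation through a guest TLB and a root TLB
# (page size and example mappings are invented).

PAGE_SHIFT = 12
PAGE_MASK = (1 << PAGE_SHIFT) - 1

guest_tlb = {0x40: 0x123}    # guest virtual page  -> guest physical page
root_tlb  = {0x123: 0x7abc}  # guest physical page -> machine physical page

def translate(guest_virtual_address):
    offset = guest_virtual_address & PAGE_MASK
    gvpn = guest_virtual_address >> PAGE_SHIFT
    gppn = guest_tlb[gvpn]     # first TLB, looked up under guest context
    ppn = root_tlb[gppn]       # second TLB, looked up in root operating mode
    return (ppn << PAGE_SHIFT) | offset

print(hex(translate(0x40ABC)))  # 0x7abcabc
```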
  • Patent number: 8146082
    Abstract: Virtual machines that utilize pass-through devices are migrated from a source host computer to a destination host computer. During preparation for migration, the pass-through device is substituted with an emulation handler that simulates the pass-through device experiencing errors. Upon successful migration, an error reporting signal is triggered to cause the device driver in the virtual machine to initiate a reset of the pass-through device at the destination host computer, upon which the pass-through device is mapped to the migrated virtual machine.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: March 27, 2012
    Assignee: VMware, Inc.
    Inventor: Adam M. Belay
  • Publication number: 20120072906
    Abstract: A method and system for managing direct memory access (DMA) in a computer system without a host input/output memory management unit (IOMMU). The computer system hosts virtual machines and allows memory overcommit. The computer receives, from a guest operating system that runs on a virtual machine, a request for mapping a guest address to a bus address. The computer translates the guest address to a host address and pins a memory page containing the host address to keep the memory page in host memory. The host address is then returned to the guest operating system to allow a device to use the host address as the bus address for direct memory access (DMA) to a buffer managed by the guest operating system.
    Type: Application
    Filed: September 16, 2010
    Publication date: March 22, 2012
    Applicant: RED HAT ISRAEL, LTD.
    Inventors: Michael Tsirkin, Christopher M. Wright
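A schematic of the mapping path in 20120072906: the guest asks the host to map a guest address for DMA; the host translates it to a host address, pins the containing page so it stays resident despite memory overcommit, and returns the host address for the guest to use as the bus address. The names and page size below are illustrative, not the Red Hat implementation.

```python
# Sketch: map a guest address for DMA without an IOMMU by translating it to a
# host address and pinning the page (names and page size are illustrative).

PAGE_SIZE = 4096
guest_to_host = {0x1000: 0x9f000, 0x2000: 0xa3000}   # guest page -> host page
pinned_pages = set()

def map_for_dma(guest_address):
    page = guest_address & ~(PAGE_SIZE - 1)
    offset = guest_address & (PAGE_SIZE - 1)
    host_page = guest_to_host[page]        # hypervisor-side translation
    pinned_pages.add(host_page)            # keep the page resident during DMA
    return host_page + offset              # handed back as the bus address

def unmap_after_dma(bus_address):
    pinned_pages.discard(bus_address & ~(PAGE_SIZE - 1))

bus = map_for_dma(0x2010)
print(hex(bus), pinned_pages)              # 0xa3010 plus the pinned host page
unmap_after_dma(bus)
```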
  • Publication number: 20120072638
    Abstract: Trapping and/or processing of read/write accesses to hardware devices represented to the host through a memory mapped space may be performed without knowledge of the processor's instruction set or semantics of the processor's instructions. A single step routine may be executed to recognize page faults occurring from read/write accesses to emulated memory pages and causing the guest to retry the operation on a single step buffer. The hypervisor may perform post-operation processing on the single step buffer after the guest retries and completes the read or write access. For example, on a read request, the single step routine may place the guest value in the single step buffer for reading by the guest on a retry operation. On a write request, the single step routine may direct the guest to retry the write operation into the single step buffer.
    Type: Application
    Filed: September 16, 2010
    Publication date: March 22, 2012
    Applicant: Unisys Corp.
    Inventors: J. Alan Grubb, John Landis, Bryan Thompson, James R. Hunter
  • Patent number: 8140735
    Abstract: Techniques for dynamic disk personalization are provided. A virtual image that is used to create an instance of a virtual machine (VM) is altered so that disk access operations are intercepted within the VM and redirected to a service that is external to the VM. The external service manages a personalized storage for a principal, the personalized storage used to personalize the virtual image without altering the virtual image.
    Type: Grant
    Filed: February 17, 2010
    Date of Patent: March 20, 2012
    Assignee: Novell, Inc.
    Inventors: Lloyd Leon Burch, Jason Allen Sabin, Kal A. Larsen, Nathaniel Brent Kranendonk, Michael John Jorgensen
  • Patent number: 8140808
    Abstract: A method of managing power in a data processing system includes monitoring a system parameter indicative of power consumption. Responsive to determining that the parameter differs from a specified threshold, a system guest, such as an operating system, is forced to release a portion of its allocated system memory. The portion of system memory released by the guest is then reclaimed by the system. The reclaimed system memory and the resulting decrease in allocated memory may enable the system to reduce system memory power consumption. The operating system may de-allocate a portion of system memory when a balloon device driver executing under the operating system requests the operating system to allocate memory to it. The system memory allocated to the balloon device driver is then reclaimed by supervisory code such as a hypervisor.
    Type: Grant
    Filed: March 31, 2008
    Date of Patent: March 20, 2012
    Assignee: International Business Machines Corporation
    Inventor: Freeman Leigh Rawson, III
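The power-driven ballooning in 8140808 can be sketched as follows: when the monitored power parameter crosses a threshold, supervisory code asks the balloon driver to request memory from the guest OS, and whatever the balloon receives is reclaimed by the hypervisor. The thresholds, step size, and class names are invented for illustration.

```python
# Sketch: power-triggered memory ballooning (thresholds and names invented).

POWER_THRESHOLD_WATTS = 180
BALLOON_STEP_PAGES = 4096

class GuestOs:
    def __init__(self, free_pages):
        self.free_pages = free_pages
        self.balloon_pages = 0
    def allocate_to_balloon(self, pages):
        """Balloon device driver asks the OS for memory the OS will not touch."""
        granted = min(pages, self.free_pages)
        self.free_pages -= granted
        self.balloon_pages += granted
        return granted

class Hypervisor:
    def __init__(self):
        self.reclaimed_pages = 0
    def check_power(self, measured_watts, guest):
        if measured_watts > POWER_THRESHOLD_WATTS:
            granted = guest.allocate_to_balloon(BALLOON_STEP_PAGES)
            self.reclaimed_pages += granted   # memory now eligible for power savings

guest = GuestOs(free_pages=10_000)
hv = Hypervisor()
hv.check_power(measured_watts=205, guest=guest)
print(guest.free_pages, hv.reclaimed_pages)   # 5904 4096
```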
  • Patent number: 8140812
    Abstract: Techniques for placement of a virtual machine in a computing system. A first request is sent from a pool management subsystem to a placement subsystem. The first request includes specification of available storage capacities of storage systems in a computer network. The placement subsystem automatically determines a target storage system based, at least in part, on the available storage capacities. An identification of the target storage system is received at the pool management subsystem. At least one disk image of the virtual machine is written to the target storage system. Then, a second request is sent to the placement subsystem. The placement subsystem automatically determines a target computer. The latter determination is based, at least in part, on connectivity between the target computer and the target storage system. The virtual machine is installed at the target computer. The techniques facilitate live migration of virtual machines placed thereby.
    Type: Grant
    Filed: July 1, 2009
    Date of Patent: March 20, 2012
    Assignee: International Business Machines Corporation
    Inventors: Diana J. Arroyo, Steven D. Clay, Malgorzata Steinder, Ian N. Whalley, Brian L. White Eagle
  • Patent number: 8135937
    Abstract: A mechanism is provided, in a data processing system, for accessing memory based on an effective address submitted by a process of a partition. The mechanism may translate the effective address into a virtual address using a segment look-aside buffer. The mechanism may further translate the virtual address into a partition real address using a page table. Moreover, the mechanism may translate the partition real address into a system real address using a logical partition real memory map for the partition. The system real address may then be used to access the memory.
    Type: Grant
    Filed: November 17, 2008
    Date of Patent: March 13, 2012
    Assignee: International Business Machines Corporation
    Inventors: William E. Hall, Guerney D. H. Hunt, Paul A. Karger, Mark F. Mergen, David R. Safford, David C. Toll
  • Patent number: 8135899
    Abstract: A system, method and computer program product are provided for virtualizing a processor and its memory, including a host operating system (OS) and virtualization software that maintains a virtualization environment for running a Virtual Machine (VM) without system-level privileges, the VM having a guest operating system running within it. A plurality of processes are running within the host OS, each process having its own virtual memory, wherein the virtualization software is one of the processes. A host OS swap file is stored in persistent storage and maintained by the host operating system. The host OS swap file represents virtualized physical memory of the VM. A plurality of memory pages are aggregated into blocks, the blocks being stored in the host OS swap file and addressable in block form. The virtualization software manages the blocks so that blocks can be mapped to the virtualization software process virtual memory and released when the blocks are no longer necessary.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: March 13, 2012
    Assignee: Parallels IP Holdings GmbH
    Inventors: Nikolay N. Dobrovolskiy, Andrey A. Omelyanchuk, Alexey B. Koryakin, Anna L. Vorobyova, Alexander G. Tormasov, Serguei M. Beloussov
  • Patent number: 8135898
    Abstract: A method for managing memory in a nested virtualization environment is provided. The method comprises implementing a first virtual machine (VM) for a first software such that a first guest memory is allocated to the first software; maintaining a first data structure to translate one or more memory addresses in the first guest memory to corresponding memory addresses in a physical memory; maintaining a second data structure to translate one or more memory addresses in the second guest memory to corresponding memory addresses in the physical memory. The first software implements a second VM for a second software such that a second guest memory is allocated to the second software and maintains a third data structure to translate one or more memory addresses in the second guest memory to corresponding memory addresses in the first guest memory.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: March 13, 2012
    Assignee: International Business Machines Corporation
    Inventors: Shmuel Ben-Yehuda, Abel Gordon, Anthony Nicholas Liguori, Orit Luba Wasserman, Ben-Ami Yassour
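The three data structures in 8135898 are related by composition: the second (second-guest memory to physical memory) can be derived from the third (second-guest memory to first-guest memory, maintained by the first software) and the first (first-guest memory to physical memory). A minimal sketch with invented page numbers:

```python
# Sketch: the second-guest-to-physical table is the composition of the L1
# guest's L2-to-L1 table with the host's L1-to-physical table (pages invented).

first_l1_to_physical = {0x10: 0xa0, 0x11: 0xa1, 0x12: 0xa2}   # first data structure
third_l2_to_l1       = {0x05: 0x11, 0x06: 0x12}               # third data structure

def build_second_table(l2_to_l1, l1_to_physical):
    """Second data structure: translate second-guest pages straight to physical."""
    return {l2: l1_to_physical[l1] for l2, l1 in l2_to_l1.items()}

second_l2_to_physical = build_second_table(third_l2_to_l1, first_l1_to_physical)
print({hex(k): hex(v) for k, v in second_l2_to_physical.items()})
# {'0x5': '0xa1', '0x6': '0xa2'}
```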
  • Patent number: 8135921
    Abstract: Automated paging device management is provided for a shared memory partition data processing system. The automated approach includes managing a paging storage pool defined within one or more storage devices for holding logical memory pages external to physical memory managed by a hypervisor of the processing system. The managing includes: responsive to creation of a logical partition within the processing system, automatically defining a logical volume in the paging storage pool for use as a paging device for the new logical partition, the automatically defining occurring absent use of a filesystem, with the resultant paging device being other than a file in a filesystem; and automatically specifying the logical volume as a paging space device for the new logical partition and binding the paging space device to the new logical partition, wherein the logical volume is sized to accommodate a defined maximum memory size of the new logical partition.
    Type: Grant
    Filed: March 13, 2009
    Date of Patent: March 13, 2012
    Assignee: International Business Machines Corporation
    Inventors: Bryan M. Logan, James A. Pafumi, Steven E. Royer
  • Patent number: 8136116
    Abstract: This invention provides a storage system coupled to a computer that executes data processing jobs by running a program, comprising: an interface; a storage controller; and disk drives. The storage controller is configured to: control spinning of disks in the disk drives; receive job information which contains an execution order of the job and a load attribute of the job from the computer before the job is executed; select a logical volume to which none of the storage areas are allocated when requested by the computer to provide a logical volume for storing a file that is used temporarily by the job to be executed; select which storage area to allocate to the selected logical volume based on at least one of the job execution order and the job load attribute; allocate the selected storage area to the selected logical volume; and notify the computer of the selected logical volume.
    Type: Grant
    Filed: January 9, 2008
    Date of Patent: March 13, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Masaaki Hosouchi, Nobuhiro Maki
  • Publication number: 20120059973
    Abstract: Some embodiments of the present invention include a memory management unit (MMU) configured to, in response to a write access targeting a guest page mapping of a guest virtual page number (GVPN) to a guest physical page number (GPPN) within a guest page table, identify a shadow page mapping that associates the GVPN with a physical page number (PPN). The MMU is also configured to determine whether a traced write indication is associated with the shadow page mapping and, if so, record update information identifying the targeted guest page mapping. The update information is used to reestablish coherence between the guest page mapping and the shadow page mapping. The MMU is further configured to perform the write access.
    Type: Application
    Filed: November 15, 2011
    Publication date: March 8, 2012
    Applicant: VMWARE, INC.
    Inventors: Keith ADAMS, Sahil RIHAN
  • Patent number: 8131986
    Abstract: A system and method are provided for loading programs during a system boot using stored configuration data retained in a predetermined file system from a prior session, and for providing, in a predetermined manner during start-up, the stored configuration data to a guest operating system capable of communicating with a host operating system within a computing environment having a hypervisor.
    Type: Grant
    Filed: September 29, 2006
    Date of Patent: March 6, 2012
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Mark C. Davis, Scott E. Kelso, Ling Ma, Nathan J. Peterson, Rod D. Waltermann
  • Patent number: 8132003
    Abstract: Embodiments of apparatus, articles, methods, and systems for secure platform voucher service for software components within an execution environment are generally described herein. An embodiment includes the ability for a Virtual Machine Monitor, Operating System Monitor, or other underlying platform capability to restrict memory regions for access only by specifically authenticated, authorized and verified software components, even when part of an otherwise compromised operating system environment. A provisioning remote entity or gateway only needs to know a platform's public key or certificate hierarchy in order to receive verification proof for any component in the platform. The verification proof or voucher helps to assure to the remote entity that no man-in-the-middle, rootkit, spyware or other malware running in the platform or on the network will have access to the provisioned material.
    Type: Grant
    Filed: September 28, 2007
    Date of Patent: March 6, 2012
    Assignee: Intel Corporation
    Inventors: David Durham, Hormuzd M. Khosravi, Uri Blumenthal, Men Long
  • Patent number: 8131925
    Abstract: A computer system includes a disk space comprising at least one type of memory and an operating system for controlling allocations and access to the disk space. A runtime machine runs applications through at least one of the operating system or directly on at least one processor of the computer system. In addition, the runtime machine manages a selected runtime disk space allocated to the runtime machine by the operating system and manages a separate method cache within the selected runtime disk space. The runtime machine controls caching within the method cache of a separate result of at least one method of an application marked as cache capable. For a next instance of the method detected by the runtime machine, the runtime machine accesses the cached separate result of the method in lieu of executing the method again.
    Type: Grant
    Filed: January 24, 2011
    Date of Patent: March 6, 2012
    Assignee: International Business Machines Corporation
    Inventor: Robert R. Peterson
  • Publication number: 20120054409
    Abstract: Systems and methods that enable migration of the state of an application from a primary machine to a backup machine in platform virtualization systems. The migration employs a hybrid approach, wherein both a hypervisor and the application itself determine the states that are to migrate from the primary machine to the backup machine. Based on direct communication between the application and the hypervisor, without assistance from the local operating system, the hypervisor arranges for migration of the required states over to the backup virtual machine.
    Type: Application
    Filed: August 31, 2010
    Publication date: March 1, 2012
    Applicant: AVAYA INC.
    Inventors: Frederick P. Block, Anjur S. Krishnakumar, Parameshwaran Krishnan, Navjot Singh, Shalini Yajnik
  • Publication number: 20120054410
    Abstract: Interfaces to storage devices that employ storage space optimization technologies, such as thin provisioning, are configured to enable the benefits gained from such technologies to be sustained. Such an interface may be provided in a hypervisor of a virtualized computer system to enable the hypervisor to discover features of a logical unit number (LUN), such as whether or not the LUN is thinly provisioned, and also in a virtual machine (VM) of the virtualized computer system to enable the VM to discover features of a virtual disk, such as whether or not the virtual disk is thinly provisioned. The discovery of these features enables the hypervisor or the VM to instruct the underlying storage device to carry out certain operations such as an operation to deallocate blocks previously allocated to a logical block device, so that the storage device can continue to benefit from storage space optimization technologies implemented therein.
    Type: Application
    Filed: July 12, 2011
    Publication date: March 1, 2012
    Applicant: VMWARE, INC.
    Inventors: Satyam B. VAGHANI, Tejasvi ASWATHANARAYANA
  • Publication number: 20120054411
    Abstract: In a virtualized system using memory page sharing, a method is provided for maintaining sharing when Guest code attempts to write to the shared memory. In one embodiment, virtualization logic uses a pattern matcher to recognize and intercept page zeroing code in the Guest OS. When the page zeroing code is about to run against a page that is already zeroed, i.e., contains all zeros, and is being shared, the memory writes in the page zeroing code have no effect. The virtualization logic skips over the writes, providing an appearance that the Guest OS page zeroing code has run to completion but without performing any of the writes that would have caused a loss of page sharing. The pattern matcher can be part of a binary translator that inspects code before it executes.
    Type: Application
    Filed: August 19, 2011
    Publication date: March 1, 2012
    Applicant: VMware, Inc.
    Inventor: Ole AGESEN
  • Publication number: 20120054412
    Abstract: Optimizations are provided for frame management operations, including a clear operation and/or a set storage key operation, requested by pageable guests. The operations are performed, absent host intervention, on frames not resident in host memory. The operations may be specified in an instruction issued by the pageable guests.
    Type: Application
    Filed: November 9, 2011
    Publication date: March 1, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles W. Gainey, JR., Dan F. Greiner, Lisa Cranton Heller, Damian L. Osisek, Gustav E. Sittmann, III, Cynthia Sittmann
  • Publication number: 20120054408
    Abstract: Embodiments of systems, apparatuses, and methods for a circular buffer in a redundant virtualization environment are disclosed. In one embodiment, an apparatus includes a head indicator storage location, an outgoing tail indicator storage location, a buffer tail indicator storage location, and fetch hardware. The head indicator, outgoing tail indicator, and buffer tail indicator are to indicate a head, outgoing tail, and buffer tail, respectively, of a circular buffer. The fetch hardware is to fetch from the head of the circular buffer and advance the head no further than the outgoing tail. The buffer tail is to be filled by software and advanced no further than the head.
    Type: Application
    Filed: August 31, 2010
    Publication date: March 1, 2012
    Inventors: Yao Zu (Eddie) Dong, Kun Tian, Yunhong Jiang
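The three-pointer discipline in 20120054408 can be modeled directly: fetch hardware consumes entries at the head but never past the outgoing tail, while software fills entries at the buffer tail without lapping the head, and a separate commit step advances the outgoing tail. The commit policy and buffer size below are assumptions, not details from the filing.

```python
# Sketch of the head / outgoing-tail / buffer-tail discipline (monotonic
# indices stored modulo SIZE; policy details and size are invented).

SIZE = 8
slots = [None] * SIZE
head = 0            # next entry the fetch hardware will consume
outgoing_tail = 0   # hardware may advance the head no further than this
buffer_tail = 0     # next slot software will fill; must not lap the head

def software_fill(item):
    global buffer_tail
    if buffer_tail - head >= SIZE:        # would overwrite unfetched entries
        return False
    slots[buffer_tail % SIZE] = item
    buffer_tail += 1
    return True

def software_commit():
    """Expose filled entries to hardware, e.g. once the redundant copy is in sync."""
    global outgoing_tail
    outgoing_tail = buffer_tail

def hardware_fetch():
    global head
    if head >= outgoing_tail:             # nothing committed to fetch yet
        return None
    item = slots[head % SIZE]
    head += 1
    return item

software_fill("descriptor-0")
print(hardware_fetch())                   # None: not yet committed
software_commit()
print(hardware_fetch())                   # descriptor-0
```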
  • Patent number: 8127107
    Abstract: In a computing system having virtualization software including a guest operating system (OS), a method for providing page tables that includes: providing a guest page table used by the guest OS and a shadow page table and a shadow page directory used by the virtualization software wherein: at least a portion of the guest page table and the shadow page directory are the same; and the portions that are the same are shared in computer memory by the guest page table and the shadow page directory.
    Type: Grant
    Filed: May 14, 2009
    Date of Patent: February 28, 2012
    Assignee: VMware, Inc.
    Inventors: Scott W. Devine, Lawrence S. Rogel, Prashanth P. Bungale, Gerald A. Fry
  • Publication number: 20120047313
    Abstract: A computing apparatus is described herein that includes one or more physical processors and memory, wherein the memory comprises volatile memory and non-volatile memory, and wherein contents of the non-volatile memory are made accessible to the processors directly, without going through the paging hierarchy, in a time and space multiplexed manner. The computing apparatus further includes a plurality of virtual machines executing on one or more processors, wherein the plurality of virtual machines are configured to access both the volatile memory and the non-volatile memory. A manager component manages allocation of the volatile memory and the non-volatile memory across the plurality of virtual machines during execution of the plurality of virtual machines on the processor, thereby giving the virtual machines an illusion of a larger volatile memory (DRAM) space than is actually available.
    Type: Application
    Filed: August 19, 2010
    Publication date: February 23, 2012
    Applicant: Microsoft Corporation
    Inventors: Suyash Sinha, Ajith Jayamohan
  • Publication number: 20120047312
    Abstract: A system is described herein that includes a predictor component that predicts accesses to portions of asymmetric memory pools in a computing system by a virtual machine, wherein the asymmetric memory pools comprise a first memory and a second memory, and wherein performance characteristics of the first memory are non-identical to performance of the second memory. The system also includes a memory management system that allocates portions of the first memory to the virtual machine based at least in part upon the accesses to the asymmetric memory pools predicted by the predictor component.
    Type: Application
    Filed: August 17, 2010
    Publication date: February 23, 2012
    Applicant: Microsoft Corporation
    Inventors: Ripal Babubhai Nathuji, David Tennyson Harper, III, Parag Sharma
  • Patent number: 8122224
    Abstract: An instruction is provided to perform clearing of selected address translation buffer entries (TLB entries) associated with a particular address space, such as segments of storage or regions of storage. The buffer entries relate to segment table entries, region table entries, or ASCE addresses. The instruction can be implemented by software emulation, hardware, firmware or some combination thereof.
    Type: Grant
    Filed: January 13, 2011
    Date of Patent: February 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Timothy J Slegel, Lisa C Heller, Erwin F Pfeffer, Kenneth E Plambeck