Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
A system for managing storage devices includes a plurality of nodes that implement a virtualization environment, each node of the plurality of nodes comprising a hypervisor, a service virtual machine that sits above the hypervisor, and one or more user virtual machines that sit above the hypervisor; and a plurality of storage devices that are accessed by the user virtual machines via the service virtual machines, wherein a first node of the plurality of nodes comprises a first hypervisor, a first service virtual machine and a first set of one or more user virtual machines, wherein a second node of the plurality of nodes comprises a second hypervisor, a second service virtual machine and a second set of one or more user virtual machines, wherein the first hypervisor and the second hypervisor are of different types, and wherein the first service virtual machine and the second service virtual machine are of the same type.
This patent application is a continuation-in-part of U.S. application Ser. No. 13/207,345, entitled “ARCHITECTURE FOR MANAGING I/O AND STORAGE FOR A VIRTUALIZATION ENVIRONMENT”, filed Aug. 10, 2011, which is hereby incorporated by reference in its entirety as if fully set forth herein.
The present application is related to application Ser. No. 13/207,357, entitled “METADATA FOR MANAGING I/O AND STORAGE FOR A VIRTUALIZATION ENVIRONMENT”, application Ser. No. 13/207,365, entitled “METHOD AND SYSTEM FOR IMPLEMENTING A MAINTENANCE SERVICE FOR MANAGING I/O AND STORAGE FOR A VIRTUALIZATION ENVIRONMENT”, application Ser. No. 13/207,371, entitled “METHOD AND SYSTEM FOR IMPLEMENTING WRITABLE SNAPSHOTS IN A VIRTUALIZED STORAGE ENVIRONMENT”, and application Ser. No. 13/207,375, entitled “METHOD AND SYSTEM FOR IMPLEMENTING A FAST CONVOLUTION FOR COMPUTING APPLICATIONS”, all filed on even date herewith, and which are all hereby incorporated by reference in their entirety.
FIELD
This disclosure concerns an architecture for managing I/O and storage devices in a virtualization environment with multiple hypervisor types.
BACKGROUND
A “virtual machine” or a “VM” refers to a specific software-based implementation of a machine in a virtualization environment, in which the hardware resources of a real computer (e.g., CPU, memory, etc.) are virtualized or transformed into the underlying support for the fully functional virtual machine that can run its own operating system and applications on the underlying physical resources just like a real computer.
Virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer of software contains a virtual machine monitor or “hypervisor” that allocates hardware resources dynamically and transparently. Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other. By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with most standard operating systems, applications, and device drivers. Most modern implementations allow several operating systems and applications to safely run at the same time on a single computer, with each having access to the resources it needs when it needs them.
Virtualization allows one to run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.
One reason for the broad adoption of virtualization in modern business and computing environments is the resource utilization advantage provided by virtual machines. Without virtualization, if a physical machine is limited to a single dedicated operating system, then during periods of inactivity by the dedicated operating system the physical machine is not utilized to perform useful work. This is wasteful and inefficient if there are users on other physical machines who are currently waiting for computing resources. To address this problem, virtualization allows multiple VMs to share the underlying physical resources so that during periods of inactivity by one VM, other VMs can take advantage of the resource availability to process workloads. This can produce great efficiencies for the utilization of physical devices, and can result in reduced redundancies and better resource cost management.
Data centers are often architected around diskless computers (“application servers”) that communicate with a set of networked storage appliances (“storage servers”) via a network, such as a Fibre Channel or Ethernet network. A storage server exposes volumes that are mounted by the application servers for their storage needs. If the storage server is a block-based server, it exposes a set of volumes that are also called Logical Unit Numbers (LUNs). If, on the other hand, a storage server is file-based, it exposes a set of volumes that are also called file systems. Either way, a volume is the smallest unit of administration for a storage device; e.g., a storage administrator can set policies to backup, snapshot, RAID-protect, or WAN-replicate a volume, but cannot do the same operations on a region of the LUN, or on a specific file in a file system.
Storage devices comprise one type of physical resources that can be managed and utilized in a virtualization environment. For example, VMWare is a company that provides products to implement virtualization, in which networked storage devices are managed by the VMWare virtualization software to provide the underlying storage infrastructure for the VMs in the computing environment. The VMWare approach implements a file system (VMFS) that exposes storage hardware to the VMs. The VMWare approach uses VMDK “files” to represent virtual disks that can be accessed by the VMs in the system. Effectively, a single volume can be accessed and shared among multiple VMs.
While this known approach does allow multiple VMs to perform I/O activities upon shared networked storage, there are also numerous drawbacks and inefficiencies with this approach. For example, because the VMWare approach is reliant upon the VMFS file system, administration of the storage units occurs at too coarse a level of granularity. While the virtualization administrator needs to manage VMs, the storage administrator is forced to manage coarse-grained volumes that are shared by multiple VMs. Configurations such as backup and snapshot frequencies, RAID properties, replication policies, performance and reliability guarantees, etc. continue to be set at the volume level, which is problematic. Moreover, this conventional approach does not allow for certain storage-related optimizations to occur in the primary storage path.
Therefore, there is a need for an improved approach to implement I/O and storage device management in a virtualization environment.
SUMMARY
Embodiments of the present invention provide an architecture for managing I/O operations and storage devices for a virtualization environment. According to some embodiments, a system for managing storage devices includes a plurality of nodes that implement a virtualization environment, each node of the plurality of nodes comprising a hypervisor, a service virtual machine that sits above the hypervisor, and one or more user virtual machines that sit above the hypervisor; and a plurality of storage devices that are accessed by the user virtual machines via the service virtual machines, wherein a first node of the plurality of nodes comprises a first hypervisor, a first service virtual machine and a first set of one or more user virtual machines, wherein a second node of the plurality of nodes comprises a second hypervisor, a second service virtual machine and a second set of one or more user virtual machines, wherein the first hypervisor and the second hypervisor are of different types, and wherein the first service virtual machine and the second service virtual machine are of the same type.
Further details of aspects, objects, and advantages of the invention are described below in the detailed description, drawings, and claims. Both the foregoing general description and the following detailed description are exemplary and explanatory, and are not intended to be limiting as to the scope of the invention.
The drawings illustrate the design and utility of embodiments of the present invention, in which similar elements are referred to by common reference numerals. In order to better appreciate the advantages and objects of embodiments of the invention, reference should be made to the accompanying drawings. However, the drawings depict only certain embodiments of the invention, and should not be taken as limiting the scope of the invention.
Embodiments of the present invention provide an improved approach to implement I/O and storage device management in a virtualization environment. According to some embodiments, a Service VM is employed to control and manage any type of storage device, including direct-attached storage in addition to network-attached and cloud-attached storage. The Service VM implements the Storage Controller logic in the user space, and with the help of other Service VMs in a cluster, virtualizes all storage hardware as one global resource pool that is high in reliability, availability, and performance. IP-based requests are used to send I/O requests to the Service VMs. The Service VM can directly implement storage and I/O optimizations within the direct data access path, without the need for add-on products.
Each server 100a or 100b runs virtualization software, such as VMware ESX(i), Microsoft Hyper-V, or RedHat KVM. The virtualization software includes a hypervisor 130/132 to manage the interactions between the underlying hardware and the one or more user VMs 102a, 102b, 102c, and 102d that run client software.
A special VM 110a/110b is used to manage storage and I/O activities according to some embodiments of the invention, which is referred to herein as a “Service VM”. This is the “Storage Controller” in the currently described architecture. Multiple such storage controllers coordinate within a cluster to form a single system. The Service VMs 110a/110b are not formed as part of specific implementations of hypervisors 130/132. Instead, the Service VMs run as virtual machines above hypervisors 130/132 on the various servers 100a and 100b, and work together to form a distributed system 110 that manages all the storage resources, including the locally attached storage 122/124, the networked storage 128, and the cloud storage 126. Since the Service VMs run above the hypervisors 130/132, this means that the current approach can be used and implemented within any virtual machine architecture, since the Service VMs of embodiments of the invention can be used in conjunction with any hypervisor from any virtualization vendor.
Each Service VM 110a-b exports one or more block devices or NFS server targets that appear as disks to the client VMs 102a-d. These disks are virtual, since they are implemented by the software running inside the Service VMs 110a-b. Thus, to the user VMs 102a-d, the Service VMs 110a-b appear to be exporting a clustered storage appliance that contains some disks. All user data (including the operating system) in the client VMs 102a-d resides on these virtual disks.
Significant performance advantages can be gained by allowing the virtualization system to access and utilize local (e.g., server-internal) storage 122 as disclosed herein. This is because I/O performance is typically much faster when performing access to local storage 122 as compared to performing access to networked storage 128 across a network 140. This faster performance for locally attached storage 122 can be increased even further by using certain types of optimized local storage devices, such as SSDs 125.
Once the virtualization system is capable of managing and accessing locally attached storage, as is the case with the present embodiment, various optimizations can then be implemented to improve system performance even further. For example, the data to be stored in the various storage devices can be analyzed and categorized to determine which specific device should optimally be used to store the items of data. Data that needs to be accessed much faster or more frequently can be identified for storage in the locally attached storage 122. On the other hand, data that does not require fast access or which is accessed infrequently can be stored in the networked storage devices 128 or in cloud storage 126.
Another advantage provided by this approach is that administration activities can be handled at a much more efficient, granular level. Recall that the prior art approach of using a legacy storage appliance in conjunction with VMFS relies heavily on what the hypervisor can do at its own layer with individual “virtual hard disk” files, effectively making all storage array capabilities meaningless. This is because the storage array manages much coarser grained volumes while the hypervisor needs to manage finer-grained virtual disks. In contrast, the present embodiment can be used to implement administrative tasks at much smaller levels of granularity, one in which the smallest unit of administration at the hypervisor matches exactly that of the storage tier itself.
Yet another advantage of the present embodiment of the invention is that storage-related optimizations for access and storage of data can be implemented directly within the primary storage path. For example, in some embodiments of the invention, the Service VM 110a can directly perform data deduplication tasks when storing data within the storage devices. This is far more advantageous than prior art approaches that require add-on vendors/products outside of the primary storage path to provide deduplication functionality for a storage system. Other examples of optimizations that can be provided by the Service VMs include quality of service (QoS) functions, encryption, and compression. The new architecture massively parallelizes storage by placing a storage controller—in the form of a Service VM—at each hypervisor, and thus makes it possible to dedicate enough CPU and memory resources to achieve the aforementioned optimizations.
Here, the user VM 202 structures its I/O requests into the iSCSI format. The iSCSI or NFS request 250a designates the IP address for a Service VM from which the user VM 202 desires I/O services. The iSCSI or NFS request 250a is sent from the user VM 202 to a virtual switch 252 within the hypervisor to be routed to the correct destination. If the request is intended to be handled by the Service VM 210a within the same server 200a, then the iSCSI or NFS request 250a is internally routed within server 200a to the Service VM 210a. As described in more detail below, the Service VM 210a includes structures to properly interpret and process that request 250a.
It is also possible that the iSCSI or NFS request 250a will be handled by a Service VM 210b on another server 200b. In this situation, the iSCSI or NFS request 250a will be sent by the virtual switch 252 to a real physical switch to be sent across network 240 to the other server 200b. The virtual switch 255 within the hypervisor 233 on the server 200b will then route the request 250a to the Service VM 210b for further processing.
According to some embodiments, the service VM runs the Linux operating system. As noted above, since the service VM exports a block-device or file-access interface to the user VMs, the interaction between the user VMs and the service VMs follows the iSCSI or NFS protocol, either directly or indirectly via the hypervisor's hardware emulation layer.
For easy management of the appliance, the Service VMs all have the same IP address isolated by internal VLANs (virtual LANs in the virtual switch of the hypervisor).
The second virtual NIC 261b is used to communicate with entities external to the node 200a, where the virtual NIC 261b is associated with an IP address that would be specific to Service VM 210a (and no other service VM). The second virtual NIC 261b is therefore used to allow Service VM 210a to communicate with other service VMs, such as Service VM 210b on node 200b. It is noted that Service VM 210b would likewise utilize VLANs and multiple virtual NICs 263a and 263b to implement management of the appliance.
For easy management of the appliance, the storage is divided up into abstractions that have a hierarchical relationship to each other.
Storage with similar characteristics is classified into tiers. Thus, all SSDs can be classified into a first tier and all HDDs may be classified into another tier etc. In a heterogeneous system with different kinds of HDDs, one may classify the disks into multiple HDD tiers. This action may similarly be taken for SAN and cloud storage.
The storage universe is divided up into storage pools—essentially a collection of specific storage devices. An administrator may be responsible for deciding how to divide up the storage universe into storage pools. For example, an administrator may decide to just make one storage pool with all the disks in the storage universe in that pool. However, the principal idea behind dividing up the storage universe is to provide mutual exclusion—fault isolation, performance isolation, administrative autonomy—when accessing the disk resources.
This may be one approach that can be taken to implement QoS techniques. For example, one rogue user may generate an excessive amount of random IO activity on a hard disk—thus, if other users are doing sequential IO, they still might get hurt by the rogue user. Enforcing exclusion (isolation) through storage pools might be used to provide hard guarantees for premium users. Another reason to use a storage pool might be to reserve some disks for later use (field replaceable units, or “FRUs”).
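For purposes of illustration only, the following minimal sketch shows how disks with similar characteristics might be classified into tiers and grouped into storage pools to provide the kind of isolation discussed above. The class names, disk kinds, and pool layout are hypothetical and are not part of the described embodiments.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Disk:
    disk_id: str
    kind: str          # e.g. "ssd", "hdd-15k", "hdd-7200", "san", "cloud"
    capacity_gb: int


@dataclass
class StoragePool:
    """A named collection of specific disks, used for fault/performance isolation."""
    name: str
    disks: list = field(default_factory=list)

    def tiers(self):
        # Classify storage with similar characteristics into tiers.
        by_tier = defaultdict(list)
        for disk in self.disks:
            by_tier[disk.kind].append(disk)
        return dict(by_tier)


# Example: one pool reserved for premium users, one for everything else.
premium = StoragePool("premium", [Disk("d1", "ssd", 400), Disk("d2", "ssd", 400)])
general = StoragePool("general", [Disk("d3", "hdd-7200", 2000), Disk("d4", "hdd-15k", 600)])
print(premium.tiers().keys())   # dict_keys(['ssd'])
print(general.tiers().keys())   # dict_keys(['hdd-7200', 'hdd-15k'])
```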
As noted above, the Service VM is the primary software component within the server that virtualizes I/O access to hardware resources within a storage pool according to embodiments of the invention. This approach essentially provides for a separate and dedicated controller for each and every node within a virtualized data center (a cluster of nodes that run some flavor of hypervisor virtualization software), since each node will include its own Service VM. This is in contrast to conventional storage architectures that provide for a limited number of storage controllers (e.g., four controllers) to handle the storage workload for the entire system, and hence results in significant performance bottlenecks due to the limited number of controllers. Unlike the conventional approaches, each new node will include a Service VM to share in the overall workload of the system to handle storage tasks. Therefore, the current approach is infinitely scalable, and provides a significant advantage over the conventional approaches that have a limited storage processing power. Consequently, the currently described approach creates a massively-parallel storage architecture that scales as and when hypervisor hosts are added to a datacenter.
The main entry point into the Service VM is the central controller module 304 (which is referred to here as the “I/O Director module 304”). The term I/O Director module is used to connote the fact that this component directs the I/O from the world of virtual disks to the pool of physical storage resources. In some embodiments, the I/O Director module implements the iSCSI or NFS protocol server.
A write request originating at a user VM would be sent to the iSCSI or NFS target inside the service VM's kernel. This write would be intercepted by the I/O Director module 304 running in user space. I/O Director module 304 interprets the iSCSI LUN or the NFS file destination and converts the request into an internal “vDisk” request (e.g., as described in more detail below). Ultimately, the I/O Director module 304 would write the data to the physical storage. The I/O Director module 304 is described in more detail below.
Each vDisk managed by a Service VM corresponds to a virtual address space forming the individual bytes exposed as a disk to user VMs. Thus, if the vDisk is of size 1 TB, the corresponding address space maintained by the invention is 1 TB. This address space is broken up into equal sized units called vDisk blocks. Metadata 310 is maintained by the Service VM to track and handle the vDisks and the data and storage objects in the system that pertain to the vDisks. The Metadata 310 is used to track and maintain the contents of the vDisks and vDisk blocks.
In order to determine where to write and read data from the storage pool, the I/O Director module 304 communicates with a Distributed Metadata Service module 430 that maintains all the metadata 310. In some embodiments, the Distributed Metadata Service module 430 is a highly available, fault-tolerant distributed service that runs on all the Service VMs in the appliance. The metadata managed by Distributed Metadata Service module 430 is itself kept on the persistent storage attached to the appliance. According to some embodiments of the invention, the Distributed Metadata Service module 430 may be implemented on SSD storage.
Since requests to the Distributed Metadata Service module 430 may be random in nature, SSDs can be used on each server node to maintain the metadata for the Distributed Metadata Service module 430. The Distributed Metadata Service module 430 stores the metadata that helps locate the actual content of each vDisk block. If no information is found in Distributed Metadata Service module 430 corresponding to a vDisk block, then that vDisk block is assumed to be filled with zeros. The data in each vDisk block is physically stored on disk in contiguous units called extents. Extents may vary in size when de-duplication is being used. Otherwise, an extent size coincides with a vDisk block. Several extents are grouped together into a unit called an extent group. An extent group is then stored as a file on disk. The size of each extent group is anywhere from 16 MB to 64 MB. In some embodiments, an extent group is the unit of recovery, replication, and many other storage functions within the system.
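The relationships just described, namely vDisk blocks, extents, extent groups, and the convention that an unmapped vDisk block reads as zeros, can be roughly illustrated with the following sketch. The block size, dictionary structures, and function names are hypothetical stand-ins for the Distributed Metadata Service module 430 and the extent group files kept on disk.

```python
VDISK_BLOCK_SIZE = 128 * 1024   # hypothetical vDisk block size

# (vdisk_id, block_index) -> (extent_id, extent_group_id); stands in for metadata 310
block_map = {}
# extent_group_id -> {extent_id: bytes}; stands in for extent group files on disk
extent_groups = {}


def write_block(vdisk_id, block_index, data, extent_id, egroup_id):
    """Place the extent data into an extent group and record it in the metadata."""
    extent_groups.setdefault(egroup_id, {})[extent_id] = data
    block_map[(vdisk_id, block_index)] = (extent_id, egroup_id)


def read_block(vdisk_id, block_index):
    """Return the contents of one vDisk block; unmapped blocks read as zeros."""
    entry = block_map.get((vdisk_id, block_index))
    if entry is None:
        # No metadata for this vDisk block, so it is assumed to be filled with zeros.
        return b"\x00" * VDISK_BLOCK_SIZE
    extent_id, egroup_id = entry
    return extent_groups[egroup_id][extent_id]
```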
Further details regarding methods and mechanisms for implementing Metadata 310 are described below and in co-pending application Ser. No. 13/207,357, which is hereby incorporated by reference in its entirety.
A health management module 308 (which may hereinafter be referred to as a “Curator”) is employed to address and cure any inconsistencies that may occur with the Metadata 310. The Curator 308 oversees the overall state of the virtual storage system, and takes actions as necessary to manage the health and efficient performance of that system. According to some embodiments of the invention, the curator 308 operates on a distributed basis to manage and perform these functions, where a master curator on a first server node manages the workload that is performed by multiple slave curators on other server nodes. MapReduce operations are performed to implement the curator workload, where the master curator may periodically coordinate scans of the metadata in the system to manage the health of the distributed storage system. Further details regarding methods and mechanisms for implementing Curator 308 are disclosed in co-pending application Ser. No. 13/207,365, which is hereby incorporated by reference in its entirety.
Some of the Service VMs also include a Distributed Configuration Database module 306 to handle certain administrative tasks. The primary tasks performed by the Distributed Configuration Database module 306 are to maintain configuration data 312 for the Service VM and act as a notification service for all events in the distributed system. Examples of configuration data 312 include, for example, (1) the identity and existence of vDisks; (2) the identity of Service VMs in the system; (3) the physical nodes in the system; and (4) the physical storage devices in the system. For example, assume that there is a desire to add a new physical disk to the storage pool. The Distributed Configuration Database module 306 would be informed of the new physical disk, after which the configuration data 312 is updated to reflect this information so that all other entities in the system can then be made aware of the new physical disk. In a similar way, the addition/deletion of vDisks, VMs and nodes would be handled by the Distributed Configuration Database module 306 to update the configuration data 312 so that other entities in the system can be made aware of these configuration changes.
Another task that is handled by the Distributed Configuration Database module 306 is to maintain health information for entities in the system, such as the Service VMs. If a Service VM fails or otherwise becomes unavailable, then this module tracks this health information so that any management tasks required of that failed Service VM can be migrated to another Service VM.
The Distributed Configuration Database module 306 also handles elections and consensus management within the system. Another task handled by the Distributed Configuration Database module is to implement ID creation. Unique IDs are generated by the Distributed Configuration Database module as needed for any required objects in the system, e.g., for vDisks, Service VMs, extent groups, etc. In some embodiments, the IDs generated are 64-bit IDs, although any suitable type of IDs can be generated as appropriate for embodiments of the invention. According to some embodiments of the invention, the Distributed Configuration Database module 306 may be implemented on SSD storage because of the real-time guarantees required to monitor health events.
If the I/O request is intended to write to a vDisk, then the Admission Control module 404 determines whether the Service VM is the owner and/or authorized to write to the particular vDisk identified in the I/O request. In some embodiments, a “shared nothing” architecture is implemented such that only the specific Service VM that is listed as the owner of the vDisk is permitted to write to that vDisk. This ownership information may be maintained by Distributed Configuration Database module 306.
If the Service VM is not the owner, the Distributed Configuration Database module 306 is consulted to determine the owner. The owner is then asked to relinquish ownership so that the current Service VM can then perform the requested I/O operation. If the Service VM is the owner, then the requested operation can be immediately processed.
Admission Control 404 can also be used to implement I/O optimizations as well. For example, Quality of Service (QoS) optimizations can be implemented using the Admission Control 404. For many reasons, it is desirable to have a storage management system that is capable of managing and implementing QoS guarantees. This is because many computing and business organizations must be able to guarantee a certain level of service in order to effectively implement a shared computing structure, e.g., to satisfy the contractual obligations of service level agreements.
When the I/O Request 502 is received by a request analyzer 504 in Admission Control 404, the identity and/or type of request/requester is checked to see if the I/O request 502 should be handled in any particular way to satisfy the QoS parameters. If the I/O request 502 is a high priority request, then it is added to the high priority queue 506. If the I/O request 502 is a low priority request, then it is added to the low priority queue 508.
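A minimal sketch of this admission-control queueing follows. The request representation and the priority rule are hypothetical placeholders for the actual QoS parameters consulted by the request analyzer.

```python
from collections import deque

high_priority_queue = deque()   # stands in for queue 506
low_priority_queue = deque()    # stands in for queue 508


def analyze_and_enqueue(io_request):
    """Classify an incoming I/O request and place it on the matching queue."""
    # A real analyzer would check the identity/type of the requester against
    # configured QoS parameters; here a dict with a 'priority' field stands in.
    if io_request.get("priority") == "high":
        high_priority_queue.append(io_request)
    else:
        low_priority_queue.append(io_request)


def next_request():
    """Service high-priority requests before low-priority ones."""
    if high_priority_queue:
        return high_priority_queue.popleft()
    if low_priority_queue:
        return low_priority_queue.popleft()
    return None


analyze_and_enqueue({"vdisk": "vd-1", "priority": "high"})
analyze_and_enqueue({"vdisk": "vd-2", "priority": "low"})
print(next_request())   # {'vdisk': 'vd-1', 'priority': 'high'}
```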
Embodiments of the invention can be used to directly implement de-duplication when implementing I/O in a virtualization environment. De-duplication refers to the process of making sure that a specific data item is not excessively duplicated multiple times within a storage system. Even if there are multiple users or entities that separately perform operations to store the same data item, the de-duplication process will operate to store only a limited number of copies of the data item, but allow those multiple users/entities to jointly access the copies that are actually stored within the storage system.
In some embodiments, de-duplication is performed directly on primary storage using the virtualized storage management system. The container abstraction can be used to specify a de-duplication domain, where de-duplication is performed for data stored within the container. Data in different containers is not de-duplicated even if it is the same. A container is assigned one storage pool—this defines the disks where the data for that container will be stored. A container supports several configuration parameters that determine how the data on that container is treated, including for example some or all of the following:
1. Replication factor: Data in a container is replicated based on this replication factor. Replicas are placed on different servers whenever possible.
2. Reed Solomon parameters: While all data is written initially based on the specified replication factor, it may be converted later to use Reed Solomon encoding to further save on storage capacity. The data contraction policy on the vDisks enforces when the data is converted to use Reed Solomon encoding.
3. Encryption type: Data in a container is encrypted based on the specified encryption policy if any. It is noted that there are also other encoding schemes which can be utilized as well.
4. Compression type: Data in a container is compressed based on the given compression type. However, when to compress is a policy that's specified on individual vDisks assigned to a container. That is, compression may be done inline, or it may be done offline.
5. Max capacity: This parameter specifies the max total disk capacity to be used in each tier in the assigned storage pools.
6. Min reserved capacity (specified for each tier): This parameter can also be specified for each tier in the assigned storage pools. It reserves a certain amount of disk space on each tier for this container. This ensures that that disk space would be available for use for this container irrespective of the usage by other containers.
7. Min total reserved capacity: This is the minimum reserved across all tiers. This value should be greater than or equal to the sum of the min reserved capacity per tier values.
8. Max de-duplication extent size: The Rabin fingerprinting algorithm breaks up a contiguous space of data into variable sized extents for the purpose of de-duplication. This parameter determines the max size of such extents.
9. Stripe width: To get high disk bandwidth, it is important to stripe data over several disks. The stripe width dictates the number of extents corresponding to a contiguous vDisk address space that'll be put in a single extent group.
10. Tier ordering: All tiers in the assigned storage pools are ordered relative to each other. Hot data is placed in the tier highest up in the order and migrated to other tiers later based on the ILM (Information Lifecycle Management or “data waterfalling”) policy. A different tier ordering may be specified for random IO as opposed to sequential IO. Thus, one may want to migrate data to the SSD tier only for random IO and not for sequential IO.
11. ILM policy: The ILM policy dictates when data is migrated from one tier to the tier next in the tier ordering. For example, this migration may start when a given tier is more than 90% full or when the data on that tier is more than X days old.
vDisks are the virtual storage devices that are exported to user VMs by the Service VMs. As previously discussed, the vDisk is a software abstraction that manages an address space of S bytes where S is the size of the block device. Each service VM might export multiple vDisks. A user VM might access several vDisks. Typically, all the vDisks exported by a service VM are accessed only by the user VMs running on that server node. This means that all iSCSI or NFS requests originating from a user VM can stay local to the hypervisor host—going from the user VM to the hypervisor SCSI emulation layer to a virtual switch to the Service VM. A vDisk is assigned a unique container at creation time. The data in the vDisk is thus managed according to the configuration parameters set on the container. Some additional configuration parameters are specified on the vDisk itself, including some or all of the following:
1. De-duplication: This specifies whether de-duplication is to be used for this vDisk. However, when de-duplication is used is determined by the data contraction policy.
2. Data contraction policy: The data contraction policy controls when de-duplication, compression, and Reed-Solomon encoding is applied (if any of them are specified). De-duplication and compression may be applied in-line to a primary storage path or out-of-line. If out-of-line, the data contraction policy specifies the time when deduplication/compression are applied (e.g., X days). Reed-Solomon encoding should be applied offline. The data contraction policy may specify a different time for doing Reed-Solomon than for deduplication/compression. Note that if both deduplication and compression are specified, then data would be de-duplicated and compressed at the same time before writing to disk.
3. Min total reserved capacity: This is the minimum reserved capacity for this vDisk across all the storage tiers. The sum of all minimum total reserved capacity parameters for the vDisks in a container should be less than or equal to the minimum total reserved capacity set on the container.
4. vDisk block size: The vDisk address space is divided into equal sized blocks. It should be less than or equal to the stripe width parameter on the container. A relatively large vDisk block size (e.g., 128 KB) helps reduce the metadata that is maintained.
5. vDisk row blocks: The metadata of a vDisk are conceptually divided into rows. Each row is hash-partitioned onto one metadata server residing in some Service VM in this distributed system. This parameter controls how many blocks of this vDisk are in one row.
6. vDisk Capacity: This is the size (in bytes) of the vDisk address space. This effectively controls the size of disk that an external user VM sees.
7. QoS parameters: Each vDisk may specify a priority and a fair share. Competing IO requests from various vDisks shall be scheduled based on this priority and fair share.
In some embodiments of the invention, the basic unit of de-duplication is the extent, which is a contiguous portion of storage on a given storage device. Multiple extents can be collected together and stored within an “extent group.”
Assume that a user issues an I/O request to write an item of data 700 to storage. The service VM 740 will perform a process to analyze the data item 700 and assign that data item 700 to an extent for storage. At 720, a determination is made whether de-duplication is desired or enabled. If not, then at 728, a new non-de-duplication extent 704 is created within an appropriate extent group 750b to store the data item 700.
If de-duplication is enabled, then a further determination is made at 722 whether the storage system already includes a copy of that data item. According to some embodiments, this is accomplished by performing “Rabin fingerprinting” upon the data that is being stored. Rabin fingerprinting is a known algorithm for objectively dividing data into consistent portions. This algorithm creates uniform and common boundaries for data portions that are partitioned out of larger items of data. Further details regarding an exemplary approach that can be taken to identify extents for de-duplication are described in co-pending application Ser. No. 13/207,375, which is hereby incorporated by reference in its entirety. The SHA1 algorithm is applied to the data portion created by Rabin fingerprinting to create a unique signature for that data portion. This is a well-known hashing algorithm that takes any set of arbitrary data and creates a 20 byte content-based signature. The SHA1 algorithm creates a value that is used as an extent identifier (extent ID), which is further used to determine if an earlier copy of the data item 700 has already been stored in the storage system.
If a copy already exists, then a new copy of the data item 700 is not stored; instead, the existing copy stored in de-dup extent 702b is used. A “ref_count” (or reference count) for that extent 702b would be incremented to provide notice that a new entity is now relying upon this extent 702b to store the data item 700. However, if a copy of the data item 700 does not yet exist, then a new extent 702c is created to store the data item 700.
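The de-duplication decision described above can be sketched roughly as follows. The Rabin fingerprinting step is omitted (a single, already-fingerprinted data portion is assumed), and, as in the description, a SHA1 hash of the portion serves as the content-based extent ID; the dictionary is a hypothetical stand-in for the actual extent store and its metadata.

```python
import hashlib

# extent_id (SHA1 hex digest) -> {"data": bytes, "ref_count": int}
dedup_extents = {}


def store_with_dedup(data_portion: bytes) -> str:
    """Store one fingerprinted data portion, reusing an existing extent if present."""
    extent_id = hashlib.sha1(data_portion).hexdigest()   # 20-byte content signature
    extent = dedup_extents.get(extent_id)
    if extent is not None:
        # A copy already exists: bump the ref_count instead of storing a new copy.
        extent["ref_count"] += 1
    else:
        # No earlier copy: create a new de-dup extent for this data portion.
        dedup_extents[extent_id] = {"data": data_portion, "ref_count": 1}
    return extent_id


first = store_with_dedup(b"hello world")
second = store_with_dedup(b"hello world")       # same content, no second copy stored
print(first == second, dedup_extents[first]["ref_count"])   # True 2
```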
The sizes of the extents and extent groups for the invention can be chosen to suit any desired performance goals. In some embodiments, the extent groups are implemented as 64 Mbyte size files. The non-deduplicated extents are created to have a much larger size than the deduplicated extents. For example, the non-deduplicated extents may be implemented with 1 Mbyte sizes and the deduplicated extents implemented with 8 Kbyte sizes. The goal of this sizing strategy is to make the deduplicated extents as small as practical to facilitate de-duplication, while the non-deduplicated extents are made as large as practical to facilitate efficient physical I/O operations and to prevent the metadata (e.g., the number of rows of metadata) from bloating.
As noted above, metadata is maintained by the set of Service VMs to track and handle the data and storage objects in the system. Each vDisk corresponds to a virtual address space forming the individual bytes exposed as a disk to user VMs.
The vDisk map expects the I/O request to identify a specific vDisk and an offset within that vDisk. In the present embodiment, the unit of storage is the block, whereas the unit of deduplication is the extent. Therefore, the vDisk map basically assumes that the unit of storage specified by the offset information is a block, and then identifies the corresponding extent ID from that block, from which the offset within the extent can be derived.
The discretization into vDisk blocks helps store this information in a table in the vDisk map. Thus, given any random offset within the vDisk, one can discretize it using mod-arithmetic to obtain the corresponding vDisk block boundary. A lookup can be performed in the vDisk map for that (vDisk, vDisk block) combination. The information in each vDisk block is stored as a separate column in the table. A collection of vDisk blocks might be chosen to be stored in a single row—this guarantees atomic updates to that portion of the table. A table can be maintained for the address space of each vDisk. Each row of this table contains the metadata for a number of vDisk blocks. Each column corresponds to one vDisk block. The contents of the column contain a number of extent IDs and the offset at which they start in the vDisk block.
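For illustration, the discretization and lookup described above might look like the following sketch, assuming a hypothetical vDisk block size and an in-memory dictionary standing in for the vDisk map table.

```python
VDISK_BLOCK_SIZE = 128 * 1024   # hypothetical vDisk block size

# vdisk_map[vdisk_id][block_index] -> list of (extent_id, offset_at_which_extent_starts)
vdisk_map = {}


def locate(vdisk_id, byte_offset):
    """Resolve an arbitrary vDisk offset to its block, extents, and in-block offset."""
    # Mod-arithmetic discretization onto the vDisk block boundary.
    block_index = byte_offset // VDISK_BLOCK_SIZE
    offset_in_block = byte_offset % VDISK_BLOCK_SIZE
    # Lookup for the (vDisk, vDisk block) combination.
    extents = vdisk_map.get(vdisk_id, {}).get(block_index, [])
    return block_index, extents, offset_in_block


vdisk_map["vd-1"] = {3: [("extent-42", 0)]}
print(locate("vd-1", 3 * VDISK_BLOCK_SIZE + 4096))   # (3, [('extent-42', 0)], 4096)
```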
As noted above, a collection of extents is put together into an extent group, which is stored as a file on the physical disks. Within the extent group, the data of each of the extents is placed contiguously along with the data's checksums (e.g., for integrity checks). Each extent group is assigned a unique ID (e.g., 8 byte ID) that is unique to a container. This id is referred to as the extent group ID.
The extent ID map essentially maps an extent to the extent group that it is contained in. The extent ID map forms a separate table within the metadata—one for each container. The name of the table contains the id of the container itself. The lookup key of this table is the canonical representation of an extent ID. In some embodiments, this is either a 16 byte combination containing (vDiskID, Offset) for non-deduplicated extents, or a 24 byte representation containing (extent size, SHA1 hash) for de-duplicated extents. The corresponding row in the table just contains one column—this column contains the extent Group ID where the corresponding extent is contained.
When updates are made to a vDisk address space, the existing extent there is replaced by another (in case of de-duplication and/or for certain types of copy on write operations for snapshots). Thus the old extent may get orphaned (when it is no longer referred to by any other vDisk in that container). Such extents will ultimately be garbage collected. However, one possible approach is to aggressively reclaim disk space that frees up. Thus, a “ref_count” value can be associated with each extent. When this ref_count drops to 0, then it can be certain that there are no other vDisks that refer to this extent and therefore this extent can immediately be deleted. The ref_count on a deduplicated extent may be greater than one when multiple vDisks refer to it. In addition, this may also occur when the same extent is referred to by different parts of the address space of the same vDisk. The ref_count on an extent is stored inside the metadata for the extent group in the extent Group ID map rather than in the extent ID map. This enables batch updates to be made to several extents with updates to a single extent Group ID metadata entry. The ref_count on a non-deduplicated extent may be greater than one when multiple snapshots of a vDisk refer to that extent. One possible approach for implementing snapshots in conjunction with the present invention is described in co-pending U.S. Ser. No. 13/207,371, filed on even date herewith, which is incorporated by reference in its entirety.
To reduce the number of lookups by the Distributed Metadata Service module, an optimization can be made for the case of non-deduplicated extents that have a ref_count of one and are owned solely by the vDisk in question. In such a case, the extent ID map does not have an entry for such extents. Instead, the extent Group ID that they belong to is put in the vDisk address space map itself, in the same entry where information about the corresponding vDisk block is put. In this way, the number of metadata lookups is reduced by one.
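A sketch of the resulting lookup path, including the optimization in which a singly-referenced, non-deduplicated extent carries its extent group ID inline in the vDisk map entry, is shown below. All structures and field names are hypothetical.

```python
# vDisk map: (vdisk_id, block_index) -> {"extent_id": ..., "egroup_id": ... or None}
vdisk_map = {}
# Extent ID map: extent_id -> extent_group_id (conceptually one table per container)
extent_id_map = {}
# Extent Group ID map: extent_group_id -> replica locations and state
extent_group_id_map = {}


def lookup_extent_group(vdisk_id, block_index):
    """Resolve a vDisk block down to the extent group replica information."""
    entry = vdisk_map[(vdisk_id, block_index)]
    if entry.get("egroup_id") is not None:
        # Optimization: non-deduplicated extent with ref_count of one, owned solely
        # by this vDisk; the extent group ID was stored inline, saving one lookup.
        egroup_id = entry["egroup_id"]
    else:
        # General case: consult the extent ID map to find the extent group.
        egroup_id = extent_id_map[entry["extent_id"]]
    return extent_group_id_map[egroup_id]
```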
The extent Group ID map provides a mapping from an extent Group ID to the location of the replicas of that extent Group ID and also their current state. This map is maintained as a separate table per container, and is looked up with the extent Group ID as the key. The corresponding row in the table contains as many columns as the number of replicas. Each column is referenced by the unique global disk ID corresponding to the disk where that replica is placed. In some embodiments, disk IDs in the server/appliance are assigned once when the disks are prepared. After that, the disk IDs are never changed. New or re-formatted disks are always given a new disk ID. The mapping from disk IDs to the servers where they reside is maintained in memory and is periodically refreshed.
An extra column can also be provided for the vDisk ID that created this extent group. This is used to enforce the property that only one vDisk ever writes to an extent group. Thus, there is never a race where multiple vDisks are trying to update the same extent group.
In some embodiments, for each replica, the following information is maintained:
- a. The diskID where the replica resides.
- b. A Version number.
- c. A Latest Intent Sequence number. This is used for maintaining metadata consistency and is explained later in the subsequent sections.
- d. The extent ids of each of the extents contained in the extent group. This is either the 8 byte offset for non-deduplicated extents, or 24 bytes (size, SHA1) for deduplicated extents. For each extent, the offset in the extentGroupID file is also contained here. Additionally a 4 byte reference count is also stored for each extent. Finally, an overall checksum is stored for each extent. This checksum is written after a write finishes and is primarily used to verify the integrity of the extent group data.
- e. Information about all the tentative updates outstanding on the replica. Each tentative update carries an Intent Sequence number. It also carries the tentative version that the replica will move to if the update succeeds.
If multiple replicas share the same information, then that information will not be duplicated across the replicas. This cuts down unnecessary metadata bloat in the common case when all the replicas are the same.
At any time, multiple components in the appliance may be accessing and modifying the same metadata. Moreover, multiple related pieces of the metadata might need to be modified together. While these needs can be addressed by using a centralized lock manager and transactions, there are significant performance reasons not to use these lock-based approaches. One reason is because this type of central locking negatively affects performance since all access to metadata would need to go through the centralized lock manager. In addition, the lock manager itself would need to be made fault tolerant, which significantly complicates the design and also hurts performance. Moreover, when a component that holds a lock dies, recovering that lock becomes non-trivial. One may use a timeout, but this results in unnecessary delays and also timing related races.
Therefore, the advanced metadata described above provides an approach that utilizes lock-free synchronization, coupled with careful sequencing of operations to maintain the consistency of the metadata. The main idea is that the order in which the metadata maps are accessed is deliberately different for operations that read the metadata and operations that update it.
With regard to the three metadata maps 802, 804, and 806, read operations proceed in a top-down direction, starting with the vDisk map 802, then the extent ID map 804, followed by the extent group ID map 806, whereas update/write operations to the metadata proceed in the opposite, bottom-up direction.
The reason this works is because any dangling or inconsistent references caused by a failure of the write operations in the bottom-up direction should not result in any detectable inconsistencies for the read operations that work in the top-down direction. This is because each layer of the metadata builds upon each other so that in the top-down direction, an extent ID identified from the vDisk map 802 should have a corresponding entry in the next level extent ID map 804, which in turn is used to identify an extent group ID which itself should have a corresponding entry in the extent group ID map 806.
To explain, consider first the opposite situation in which an update/write operation to the metadata is made in the same direction as the read operations (i.e., in the top-down direction). Assume that the write operation successfully creates an extent ID entry in the vDisk map 802, but dies before it is able to complete the operation and therefore never has the opportunity to create an entry in the extent ID map 804 that maps the extent ID to an extent group ID. In this situation, a subsequent read operation may possibly read that extent ID from the vDisk map 802, but will encounter a dangling/inconsistent reference because that extent ID does not map to anything in the extent ID map 804.
Now, consider if the update/write operation to the metadata is made in the bottom-up direction. Assume that the write operation successfully creates a mapping between the extent ID and an extent group ID in the extent ID map 804. Further assume that the operation dies before it is able to finish, and therefore never has the opportunity to create an entry in the vDisk map 802 for the extent ID. This situation also creates a dangling reference in the extent ID map 804. However, unlike the previous scenario, a subsequent read operation will never reach the dangling reference in the extent ID map 804 because it has to first access the vDisk map 802, and since the previous operation did not reach this map, there is no reference to the new extent ID in the vDisk map 802. Therefore, the subsequent read should not be able to find a path to reach the dangling reference in the extent ID map. In this way, the present approach inherently maintains the integrity of the metadata without needing to provide any central locking schemes for that metadata.
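The ordering discipline can be illustrated with the following sketch, in which updates install entries bottom-up and reads traverse top-down, so that a failure part-way through an update leaves at worst an unreachable entry rather than a dangling reference visible to readers. The names and in-memory maps are illustrative only.

```python
vdisk_map = {}            # (vdisk_id, block_index) -> extent_id        (map 802)
extent_id_map = {}        # extent_id -> extent_group_id                (map 804)
extent_group_id_map = {}  # extent_group_id -> replica information      (map 806)


def write_metadata(vdisk_id, block_index, extent_id, egroup_id, replica_info):
    """Install metadata bottom-up; a crash between steps leaves only unreachable entries."""
    extent_group_id_map[egroup_id] = replica_info            # step 1 (bottom)
    extent_id_map[extent_id] = egroup_id                      # step 2
    vdisk_map[(vdisk_id, block_index)] = extent_id            # step 3 (top)


def read_metadata(vdisk_id, block_index):
    """Read top-down; every reference found has already been fully written."""
    extent_id = vdisk_map[(vdisk_id, block_index)]
    egroup_id = extent_id_map[extent_id]
    return extent_group_id_map[egroup_id]


write_metadata("vd-1", 0, "extent-1", "egroup-7", {"replicas": ["disk-3", "disk-9"]})
print(read_metadata("vd-1", 0))   # {'replicas': ['disk-3', 'disk-9']}
```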
The vDisks can either be unshared (read and written by a single user VM) or shared (accessed by multiple user VMs or hypervisors) according to embodiments of the invention.
For I/O requests 950b from a user VM 902b that resides on the same server node 900b, the process to handle the I/O requests 950b is straightforward, and is conducted as described above. Essentially, the I/O request is in the form of an iSCSI or NFS request that is directed to a given IP address. The IP address for the I/O request is common for all the Service VMs on the different server nodes, but VLANs allow the IP address of the iSCSI or NFS request to be private to a particular (local) subnet, and hence the I/O request 950b will be sent to the local Service VM 910b to handle the I/O request 950b. Since local Service VM 910b recognizes that it is the owner of the vDisk 923 which is the subject of the I/O request 950b, the local Service VM 910b will directly handle the I/O request 950b.
Consider the situation if a user VM 902a on a server node 900a issues an I/O request 950a for the shared vDisk 923, where the shared vDisk 923 is owned by a Service VM 910b on a different server node 900b. Here, the I/O request 950a is sent as described above from the user VM 902a to its local Service VM 910a. However, the Service VM 910a will recognize that it is not the owner of the shared vDisk 923. Instead, the Service VM 910a will recognize that Service VM 910b is the owner of the shared vDisk 923. In this situation, the I/O request will be forwarded from Service VM 910a to Service VM 910b so that the owner (Service VM 910b) can handle the forwarded I/O request. To the extent a reply is needed, the reply would be sent to the Service VM 910a to be forwarded to the user VM 902a that had originated the I/O request 950a.
In some embodiments, an IP table 902 (e.g., a network address table or “NAT”) is maintained inside the Service VM 910a. The IP table 902 is maintained to include the addresses of the remote Service VMs. When the local Service VM 910a recognizes that the I/O request needs to be sent to another Service VM 910b, the IP table 902 is used to look up the address of the destination Service VM 910b. This “NATing” action is performed at the network layers of the OS stack at the Service VM 910a, when the local Service VM 910a decides to forward the IP packet to the destination Service VM 910b.
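A rough sketch of this forwarding decision is shown below. The addresses, identifiers, and table layout are hypothetical and merely illustrate the use of the IP table to re-address requests to the owning Service VM.

```python
# Hypothetical VLAN-isolated address shared by all Service VMs for local requests.
COMMON_SERVICE_VM_IP = "192.168.5.2"

# IP table maintained inside the local Service VM: owning Service VM -> routable address.
ip_table = {"service_vm_910b": "10.1.1.12"}


def route_request(io_request, vdisk_owner, local_service_vm="service_vm_910a"):
    """Handle the request locally if this Service VM owns the vDisk, else re-address it."""
    if vdisk_owner == local_service_vm:
        return ("local", COMMON_SERVICE_VM_IP, io_request)
    # "NATing": the packet is re-addressed to the remote owner at the network layer.
    return ("forward", ip_table[vdisk_owner], io_request)


print(route_request({"vdisk": 923}, "service_vm_910b"))
# ('forward', '10.1.1.12', {'vdisk': 923})
```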
Each un-shared vDisk is owned by the Service VM that is local to the user VM which accesses that vDisk on the shared-nothing basis. In the current example, vDisk 1023a is owned by Service VM 1010a since this Service VM is on the same server node 1000a as the user VM 1002a that accesses this vDisk. Similarly, vDisk 1023b is owned by Service VM 1010b since this Service VM is on the same server node 1000b as the user VM 1002b that accesses this vDisk.
I/O requests 1050a that originate from user VM 1002a would therefore be handled by its local Service VM 1010a on the same server node 1000a. Similarly, I/O requests 1050b that originate from user VM 1002b would therefore be handled by its local Service VM 1010b on the same server node 1000b. This is implemented using the same approach previously described above, in which the I/O request in the form of an iSCSI or NFS request is directed to a given IP address, and where VLANs allow the IP address of the iSCSI or NFS request to be private to a particular (local) subnet, so that the I/O request will be sent to the local Service VM to handle the I/O request. Since the local Service VM recognizes that it is the owner of the vDisk which is the subject of the I/O request, the local Service VM will directly handle the I/O request.
It is possible that a user VM will move or migrate from one node to another node. Various virtualization vendors have implemented virtualization software that allows for such movement by user VMs. For shared vDisks, this situation does not necessarily affect the configuration of the storage system, since the I/O requests will be routed to the owner Service VM of the shared vDisk regardless of the location of the user VM. However, for unshared vDisks, movement of the user VMs could present a problem since the I/O requests are handled by the local Service VMs.
A determination is made at 1104 whether the Service VM is the owner of the un-shared vDisk. If the Service VM is not the owner of the vDisk, this means the user VM which issued the I/O request must have just recently migrated to the node on which the Service VM resides. However, if the Service VM is the owner, this means that the user VM has not recently migrated from another node to the current node, since the Service VM is already registered as the owner of that un-shared vDisk, e.g., due to a previous I/O request that had already been handled by the Service VM.
If the local Service VM is not the owner of the un-shared vDisk, then at 1106, the Service VM will become the owner of that vDisk. This action is performed by contacting the registered owner Service VM of the vDisk (known via the Distributed Configuration Database module), and asking that owner to relinquish ownership of the vDisk. This new ownership information can then be recorded with the central metadata manager.
Once the local Service VM has acquired ownership of the vDisk, then the I/O request can be locally handled by that Service VM at 1108. If the ownership check at 1104 had determined that the Service VM was already the owner, then 1106 would not need to be performed, and the flow would have proceeded directly to 1108.
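The flow at 1104, 1106, and 1108 might be sketched as follows, with a simple dictionary standing in for the ownership records kept by the Distributed Configuration Database module and the central metadata manager; the identifiers are hypothetical.

```python
# vdisk_id -> owning Service VM id (stand-in for the Distributed Configuration Database)
owners = {"vdisk-1223a": "svm-1210a"}


def handle_unshared_vdisk_io(local_svm_id, vdisk_id, io_request):
    # 1104: is the local Service VM already the owner of this un-shared vDisk?
    if owners.get(vdisk_id) != local_svm_id:
        # 1106: the user VM must have just migrated to this node; the registered owner
        # is asked to relinquish the vDisk and the new ownership is recorded.
        owners[vdisk_id] = local_svm_id
    # 1108: ownership is now held locally, so the I/O request is handled locally.
    return ("handled_by", local_svm_id, io_request)


# A user VM has migrated next to svm-1210b, which now acquires ownership.
print(handle_unshared_vdisk_io("svm-1210b", "vdisk-1223a", {"op": "write"}))
# ('handled_by', 'svm-1210b', {'op': 'write'})
```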
Assume that user VM 1202 now decides to issue an I/O request for vDisk 1223a.
To address this situation, an ownership change will occur for the vDisk 1223a.
Other possible situations may arise that result in the need to transfer ownership of a vDisk from one Service VM to another Service VM. For example, consider if the Service VM that is the owner of a shared vDisk (or the server node that hosts the Service VM) undergoes a failure. In this situation, a new Service VM will need to take over as the owner of the vDisk to handle ongoing I/O request for that vDisk.
At 1304, a candidate owner is identified for the vDisk. In some embodiments, this action can be handled using a leadership election process to identify the owner of the vDisk. This election process works by having the different Service VMs “volunteer” to be the owner of a vDisk, where one Service VM is actually selected as the owner while the other volunteers are placed on a list as back-up owners. If the actual owner fails, then the next volunteer from the list of backup owners is selected as the new owner. If that selected new owner is not available, then subsequent next candidate(s) are selected from the list until a suitable candidate is identified, e.g., a Service VM that is alive and available to suitably serve as the owner of the vDisk.
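The volunteer-list selection can be illustrated with the following sketch; it is not the actual election protocol (which runs through the distributed configuration service), and the candidate names and liveness check are hypothetical.

```python
def select_new_owner(volunteers, is_alive):
    """Pick the first live volunteer as the new owner; the rest remain back-up owners."""
    for candidate in volunteers:
        if is_alive(candidate):
            backups = [v for v in volunteers if v != candidate]
            return candidate, backups
    raise RuntimeError("no live Service VM available to own the vDisk")


# Example: the original owner "svm-a" has failed, so the next live volunteer takes over.
owner, backups = select_new_owner(
    ["svm-a", "svm-b", "svm-c"],
    is_alive=lambda svm: svm != "svm-a",
)
print(owner, backups)   # svm-b ['svm-a', 'svm-c']
```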
At 1306, the candidate owner will obtain ownership of the vDisk. In some embodiments, this action is performed by modifying the metadata in the storage system to publish the fact that the candidate Service VM is now the new owner of the vDisk. Thereafter, at 1308, the new owner Service VM will handle subsequent I/O requests for that vDisk.
The architecture for implementing storage management in a virtualization environment as described above allows for instantiations of service virtual machines to provide access to storage devices for user virtual machines regardless of the hypervisor type residing on the corresponding node. An instantiation of a service virtual machine may support any type of hypervisor. This will be described in greater detail below.
The hypervisor 130 residing at the first node 100A and the hypervisor 132 residing at the second node 100B may be of different types, while the instantiations of the service VMs 110a/110b at the first and second node 100A/100B are of the same type. In this way, a single service VM type may support hypervisors of different types rather than having to design a different service VM type for every type of hypervisor.
In some embodiments the hypervisor 130 residing at the first node 100A may be a VMware type, ESX(i) type, Microsoft Hyper-V type, or RedHat KVM type hypervisor. In some other embodiments, the hypervisor 132 residing at the second node 100B may also be a VMware type, ESX(i) type, Microsoft Hyper-V type, or RedHat KVM type hypervisor.
One mechanism for allowing a single service VM type to support various different hypervisor types is to implement a service VM with a hypervisor interface.
The instantiation of the Service VM 1510 includes a set of libraries 1503 and a hypervisor interface 1501. The set of libraries 1503 include various data to facilitate communication between a Service VM 1510 and a hypervisor 1533. In some embodiments the set of libraries 1503 are arranged by hypervisor type such that data associated with a library corresponding to a particular hypervisor type is utilized to communicate with that particular hypervisor type. In some embodiments, the data associated with a particular hypervisor type defines a set of operations that need to be implemented for that particular hypervisor type.
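As a rough illustration only, the per-hypervisor arrangement of the set of libraries 1503 may be thought of as a table keyed by hypervisor type; the type keys below follow the hypervisor types named in this description, while the command strings are invented placeholders.

HYPERVISOR_LIBRARIES = {
    "vmware":  {"list_vms": "vmware-list-vms", "get_stats": "vmware-get-stats", "get_storage": "vmware-get-storage"},
    "esxi":    {"list_vms": "esxi-list-vms",   "get_stats": "esxi-get-stats",   "get_storage": "esxi-get-storage"},
    "hyper-v": {"list_vms": "hyperv-list-vms", "get_stats": "hyperv-get-stats", "get_storage": "hyperv-get-storage"},
    "kvm":     {"list_vms": "kvm-list-vms",    "get_stats": "kvm-get-stats",    "get_storage": "kvm-get-storage"},
}


def library_for(hypervisor_type):
    """Return the command library matching the discovered hypervisor type."""
    return HYPERVISOR_LIBRARIES[hypervisor_type]

Keying the libraries by hypervisor type is what allows a single Service VM image to carry every library yet select the appropriate one only at run time.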
The hypervisor interface 1501 of the instantiation of the Service VM 1510 configures the Service VM 1510 to communicate with the hypervisor 1533.
Initially, an instantiation of a Service VM is initialized for a node of the storage management virtualization environment during boot-up as shown at 1601. The Service VM is initialized above the hypervisor on the same node as the hypervisor.
After the instantiation of the Service VM is initialized, the hypervisor interface of the Service VM discovers the hypervisor type as shown at 1603. The hypervisor interface may discover the hypervisor type dynamically and may do so by attempting communication with the hypervisor. For example, the hypervisor interface may attempt to communicate with the hypervisor using various different hypervisor languages until a match is found. As another example, a hypervisor interface may communicate a universal command to the hypervisor and determine the hypervisor type based on the hypervisor response. One ordinarily skilled in the art will recognize that numerous different mechanisms are available for discovering the hypervisor type.
Once the hypervisor interface has discovered the hypervisor type, the hypervisor interface chooses a library of commands to be used by the service VM for communicating with the hypervisor based at least in part on the discovered hypervisor type as shown at 1605. For example, if the hypervisor interface discovers that the hypervisor is of a VMware type, the hypervisor interface may choose a library of VMware type commands to be used by the service VM for communicating with the hypervisor. Alternatively, if the hypervisor interface discovers that the hypervisor is of an ESX(i) type, the hypervisor interface may choose a library of ESX(i) type commands to be used by the service VM for communicating with the hypervisor.
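A minimal sketch of the discovery at 1603 and the library choice at 1605 is shown below, reusing library_for from the earlier example; the per-type probe callables stand in for real management-API calls and are purely hypothetical.

def discover_hypervisor_type(probes):
    """Try each candidate hypervisor 'language' until one answers (1603)."""
    for hv_type, probe in probes.items():
        try:
            if probe():
                return hv_type
        except (ConnectionError, OSError):
            continue
    raise RuntimeError("hypervisor type could not be discovered")


def configure_hypervisor_interface(probes):
    hv_type = discover_hypervisor_type(probes)
    return hv_type, library_for(hv_type)  # 1605: choose the matching command library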
The service VM may then obtain hardware, performance and configuration information associated with the storage management virtualization environment from the hypervisor using the chosen library of commands as shown at 1607. For example, if a library of VMware type commands is chosen for the service VM, then the service VM may use those VMware type commands to communicate with the hypervisor to obtain hardware and configuration information associated with the storage management virtualization environment. Alternatively, if a library of ESX(i) type commands is chosen for the service VM, then the service VM may use those ESX(i) type commands to communicate with the hypervisor to obtain hardware and configuration information associated with the storage management virtualization environment.
In some embodiments, the hardware, performance and configuration information obtained from the hypervisor may include information regarding the user virtual machines and service virtual machines running on the system. For example, the hypervisor may provide information indicating the location of user VMs and service VMs (e.g., what nodes the user VMs or service VMs sit on) as well as information for facilitating communication between user VMs and service VMs (e.g., addresses).
In some other embodiments, the hardware, performance and configuration information obtained from the hypervisor may include performance statistics associated with the storage management virtualization environment. Performance statistics may include information such as which Service VMs have been servicing which user VMs, number of I/O requests serviced by Service VMs, time spent servicing I/O requests, etc. These performance statistics may be utilized by the Service VM to provide more optimized or efficient storage management to user VMs.
In some other embodiments, the hardware, performance and configuration information obtained from the hypervisor may include configuration information associated with the plurality of storage devices. For example, the configuration information may indicate which storage devices are local storage devices and which storage devices are networked storage devices. As another example, the configuration information may indicate the amount of storage space available for the storage devices. This configuration information may be utilized by the Service VM to provide more optimized or efficient storage management to user VMs.
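The categories of information described above for 1607 might be collected into a single record, as in the sketch below. The field names and the hypervisor.run() call are assumptions made for illustration; the description specifies only the kinds of information, not a particular interface.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class EnvironmentInfo:
    vm_locations: Dict[str, str] = field(default_factory=dict)     # VM name -> node it sits on
    io_stats: Dict[str, int] = field(default_factory=dict)         # Service VM -> I/O requests serviced
    storage_config: Dict[str, dict] = field(default_factory=dict)  # device -> {"local": bool, "capacity": int}


def gather_environment_info(library, hypervisor):
    """Query the hypervisor with commands from the chosen library (calls are placeholders)."""
    return EnvironmentInfo(
        vm_locations=hypervisor.run(library["list_vms"]),
        io_stats=hypervisor.run(library["get_stats"]),
        storage_config=hypervisor.run(library["get_storage"]),
    )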
After the service VM initially obtains hardware, performance and configuration information associated with the storage management virtualization environment, the service VM may continue to utilize the hypervisor interface to monitor the storage management virtualization environment, perform user VM initiated actions, and to provide solutions in case of service VM failure.
Thus, a single service VM type may be initialized as multiple service VM instantiations on nodes with different hypervisor types and those multiple service VM instantiations may be individually configured to support their corresponding hypervisor type. In this way, a single service VM type may support hypervisors of different types rather than having to design a different service VM type for every type of hypervisor.
Therefore, what has been described is an improved architecture for implementing I/O and storage device management in a virtualization environment. According to some embodiments, a Service VM is employed to control and manage any type of storage device, including directly attached storage in addition to networked and cloud storage. The Service VM has an entire Storage Controller implemented in the user space, and can be migrated as needed from one node to another. IP-based requests are used to send I/O requests to the Service VMs. The Service VM can directly implement storage and I/O optimizations within the direct data access path, without the need for add-on products.
System Architecture
According to one embodiment of the invention, computer system 1400 performs specific operations by processor 1407 executing one or more sequences of one or more instructions contained in system memory 1408. Such instructions may be read into system memory 1408 from another computer readable/usable medium, such as static storage device 1409 or disk drive 1410. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and/or software. In one embodiment, the term “logic” shall mean any combination of software or hardware that is used to implement all or part of the invention.
The term “computer readable medium” or “computer usable medium” as used herein refers to any medium that participates in providing instructions to processor 1407 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 1410. Volatile media includes dynamic memory, such as system memory 1408.
Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
In an embodiment of the invention, execution of the sequences of instructions to practice the invention is performed by a single computer system 1400. According to other embodiments of the invention, two or more computer systems 1400 coupled by communication link 1415 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions required to practice the invention in coordination with one another.
Computer system 1400 may transmit and receive messages, data, and instructions, including program, i.e., application code, through communication link 1415 and communication interface 1414. Received program code may be executed by processor 1407 as it is received, and/or stored in disk drive 1410, or other non-volatile storage for later execution.
In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.
Claims
1. A system comprising at least a processor for managing storage devices, comprising:
- a plurality of nodes that implement a virtualization environment, a first node of the plurality of nodes comprising a first hypervisor, a first service virtual machine and a first set of one or more user virtual machines, a second node of the plurality of nodes comprises a second hypervisor, a second service virtual machine and a second set of one or more user virtual machines,
- wherein the first hypervisor is of a first hypervisor type and the second hypervisor is of a second hypervisor type, the first hypervisor type and the second hypervisor type being different hypervisor types that use different sets of commands to operate; and
- a plurality of storage devices that are accessed by the user virtual machines via the first service virtual machine and the second service virtual machine, wherein the first service virtual machine and the second service virtual machine are in communication with each other through the first hypervisor of the first hypervisor type and the second hypervisor of the second hypervisor type to virtualize the plurality of storage devices as a global resource pool, and wherein the first service virtual machine communicates with the second service virtual machine to request relinquishment of ownership of a virtual disk managed by the second service virtual machine.
2. The system of claim 1, wherein a virtual disk is formed from the plurality of storage devices and the virtual disk can be accessed by both the first set of one or more user virtual machines and the second set of one or more user virtual machines.
3. The system of claim 1, wherein the first service virtual machine includes a hypervisor interface.
4. The system of claim 3, wherein the hypervisor interface configures the first service virtual machine to communicate with the first hypervisor.
5. The system of claim 4, wherein configuring the first service virtual machine to communicate with the first hypervisor comprises:
- discovering the first hypervisor type; and
- choosing a library of commands for communicating with the first hypervisor based at least in part on the first hypervisor type.
6. The system of claim 5, further comprising obtaining hardware, performance and configuration information from the first hypervisor using the library of commands.
7. The system of claim 6, wherein the hardware, performance and configuration information obtained from the first hypervisor comprises:
- user virtual machines and service virtual machines running on the system;
- performance statistics associated with the system; or
- configuration of the plurality of storage devices.
8. The system of claim 1, wherein the first hypervisor and the second hypervisor are of different types chosen from a set comprising: VMware, ESX(i), Microsoft Hyper-V, or RedHat KVM.
9. The system of claim 3, wherein the hypervisor interface is configured to monitor the system during run-time.
10. The system of claim 3, wherein the hypervisor interface dynamically discovers the first hypervisor type by communicating with the first hypervisor using various different hypervisor languages until a match is found.
11. A system comprising at least a processor for storage device management having a plurality of nodes that implement a virtualization environment for accessing a plurality of storage devices, wherein:
- a node of the plurality of nodes comprises a first hypervisor, an instantiation of a first service virtual machine, and one or more user virtual machines located above the first hypervisor;
- the plurality of storage devices is accessed by the one or more user virtual machines via the instantiation of the first service virtual machine;
- wherein the instantiation of the first service virtual machine is configurable to communicate with the first hypervisor regardless of hypervisor type,
- wherein the first service virtual machine can communicate with another instantiation of a second service virtual machine through the first hypervisor of a first hypervisor type of the first service virtual machine and a second hypervisor of a second hypervisor type of the second service virtual machine to virtualize the plurality of storage devices as a global resource pool, and
- wherein the instantiation of the first service virtual machine communicates with the instantiation of the second service virtual machine within the system to request relinquishment of ownership of a virtual disk managed by the second service virtual machine.
12. The system of claim 11, wherein the instantiation of the first service virtual machine comprises a hypervisor interface for configuring the instantiation of the first service virtual machine to communicate with the first hypervisor regardless of hypervisor type.
13. The system of claim 12, wherein configuring the instantiation of the first service virtual machine to communicate with the first hypervisor comprises:
- discovering the first hypervisor type; and
- choosing a library of commands for communicating with the first hypervisor based at least in part on the first hypervisor type.
14. The system of claim 13, further comprising obtaining hardware, performance and configuration information from the first hypervisor using the library of commands.
15. The system of claim 14, wherein the hardware and configuration information obtained from the first hypervisor comprises:
- user virtual machines and service virtual machines running on the node;
- performance statistics associated with the node; or
- configuration of the plurality of storage devices.
16. The system of claim 11, wherein the first hypervisor is of a type chosen from a set comprising: VMware, ESX(i), Microsoft Hyper-V, or RedHat KVM.
17. The system of claim 11, wherein a hypervisor interface is configured to monitor the system during run-time.
18. The system of claim 12, wherein the hypervisor interface dynamically discovers the first hypervisor type by communicating with the first hypervisor using various different hypervisor languages until a match is found.
19. The system of claim 11, wherein the instantiation of the first service virtual machine can be replaced with another instantiation of a service virtual machine upon failure.
20. The system of claim 11, wherein the instantiation of the first service virtual machine can support various communication protocols.
Type: Grant
Filed: Mar 14, 2013
Date of Patent: May 16, 2017
Assignee: NUTANIX, INC. (San Jose, CA)
Inventors: Prakash Narayanasamy (Santa Clara, CA), Venkata Ranga Radhanikanth Guturi (San Jose, CA), Mohit Aron (Los Altos, CA), Dheeraj Pandey (San Ramon, CA), Ajeet Singh (Cupertino, CA)
Primary Examiner: Charles Swift
Application Number: 13/830,116
International Classification: G06F 9/455 (20060101); G06F 9/46 (20060101); G06F 15/173 (20060101); G06F 13/10 (20060101); G06F 13/00 (20060101);