USER INTERFACE FOR MANAGING A DISTRIBUTED VIRTUAL SWITCH
A user interface for managing allocations of network resources in a virtualized computing environment provides a graphical overview of the virtual computing environment that allows the user to visualize the virtual network, including the connections between the virtual network adapters and the uplink port groups that provide physical network resources for the virtual machines included in the virtualized computing environment. The user interface also provides graphical elements that allow the user to modify the virtual network, to migrate virtual machines from individual virtual switches to a distributed virtual switch, and/or to modify the arrangement of physical network adapters that provide network backing for the virtual machines. By providing these features, the user interface according to one or more embodiments of the present invention can allow the user to efficiently and safely manage the virtual network in the virtual computing environment.
This application claims benefit of U.S. provisional patent application Ser. No. 61/334,214, filed on May 13, 2010, the entire contents of which are incorporated by reference herein.
BACKGROUND

Computer virtualization is a technique that involves encapsulating a physical computing machine platform into a virtual machine that is executed under the control of virtualization software running on a single hardware computing platform (also referred to herein as a “host system” or “host computer”). A group of hardware computing platforms may be organized as a cluster to provide the hardware resources for virtual machines. In a data center that employs virtual machines, it is common to see hundreds, even thousands, of virtual machines running on multiple clusters of host systems.
Virtualization management software is used by an administrator to manage the configuration of the virtual machines and the allocation of computing resources to the virtual machines. Because of the large number of virtual machines to be managed within a single data center, and sometimes across multiple data centers, some of the administrator's tasks are automated. For example, automated software techniques such as dynamic resource scheduling and dynamic power management have been developed to assist the administrator in balancing workloads across host systems and powering host systems ON and OFF as needed.
One feature of the virtualized computing environment that is controlled by the virtualization management software is virtual networking. Each virtual machine includes a software-based virtual network adapter that is logically connected to a physical network adapter included in a host computer that provides network access for the virtual machine. The virtual network adapter is connected to the physical network adapter through a software-based “switch.” However, when a large number of virtual machines is included in the virtual computing environment, managing the virtual network connections can become time consuming and error prone for the administrator.
Accordingly, there remains a need in the art for a user interface for managing a virtualized computing environment that addresses the drawbacks and limitations discussed above.
SUMMARY

One or more embodiments of the invention provide a user interface for managing allocations of network resources in a virtualized computing environment. The user interface provides a graphical overview of the virtual computing environment that allows the user to visualize the virtual network, including the connections between the virtual network adapters and the uplink port groups that provide physical network resources for the virtual machines included in the virtualized computing environment. The user interface also provides graphical elements that allow the user to modify the virtual network, to migrate virtual machines from individual virtual switches to a distributed virtual switch, and/or to modify the arrangement of physical network adapters that provide network backing for the virtual machines. By providing these features, the user interface according to one or more embodiments of the present invention can allow the user to efficiently and safely manage the virtual network in the virtual computing environment.
One embodiment provides a technique for managing networking resources in a virtualized computing environment that includes associating one or more uplink port groups with a distributed virtual switch that is logically connected to two or more host computers; associating one or more physical network adapters included in the two or more host computers with each of the one or more uplink port groups; and establishing a logical connection between one or more virtual machines executing on the two or more host computers and the one or more uplink port groups.
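The three associations in this technique can be sketched as a small data model. This is a hedged illustration only; the class names, fields, and identifiers below are assumptions, not from the source.

```python
# Sketch of the associations described above: uplink port groups on a
# DVS, physical adapters in those groups, and logical VM connections.
# All names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UplinkPortGroup:
    name: str
    # (host, physical NIC) pairs drawn from two or more host computers
    physical_adapters: list = field(default_factory=list)

@dataclass
class DistributedVirtualSwitch:
    uplink_groups: list = field(default_factory=list)
    vm_connections: dict = field(default_factory=dict)  # VM name -> group name

    def connect_vm(self, vm_name, group_name):
        # establish the logical connection between a VM and an uplink port group
        if not any(g.name == group_name for g in self.uplink_groups):
            raise ValueError(f"unknown uplink port group: {group_name}")
        self.vm_connections[vm_name] = group_name

dvs = DistributedVirtualSwitch()
dvs.uplink_groups.append(
    UplinkPortGroup("uplink-pg-1", [("host1", "vmnic0"), ("host2", "vmnic0")])
)
dvs.connect_vm("vm1", "uplink-pg-1")
```

The point of the sketch is only that the uplink port group, not an individual NIC, is the unit a VM is logically connected to.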
A virtual machine (VM) management center 102 is also included in the system 100. The VM management center 102 manages the virtual infrastructure, including managing the host computers 104, the virtual machines running within each host computer 104, provisioning, migration, resource allocations, and so on.
According to various embodiments, implementing a virtualized system simplifies management with a management application, such as the Virtual Infrastructure (VI) Client 106, that can be used to perform tasks. Each server configuration task, such as configuring storage and network connections or managing the service console, can be accomplished centrally through the VI Client 106. One embodiment provides a stand-alone application version of the VI Client 106. In another embodiment, a web browser application 108 provides virtual machine management access from any networked device. For example, with the browser version of the client 108, giving a user access to a virtual machine can be as simple as providing a URL (Uniform Resource Locator) to the user.
According to some embodiments, user access controls of the VM management center 102 provide customizable roles and permissions so an administrator can create roles for various users by selecting from an extensive list of permissions to grant to each role. Responsibilities for specific virtualized infrastructure components, such as resource pools, can be delegated based on business organization or ownership. VM management center 102 can also provide full audit tracking to provide a detailed record of every action and operation performed on the virtual infrastructure. As described in greater detail herein, embodiments of the invention provide a user interface for the VI Client 106 that allows a user to manage a distributed virtual switch (DVS).
The virtual machines VM 121-123 run on top of a virtual machine monitor 125, which is a software interface layer that enables sharing of the hardware resources of host computer 104 by the virtual machines. Virtual machine monitor 125 may run on top of the operating system of the host computer 104 or directly on hardware components of the host computer 104. In some embodiments, virtual machine monitor 125 runs on top of a hypervisor that is installed on top of the hardware resources of host computer 104. Together, the virtual machines 121-123 and virtual machine monitor 125 create virtualized computer systems that give the appearance of being distinct from host computer 104 and from each other. Each virtual machine includes a guest operating system and one or more guest applications. The guest operating system is a master control program of the virtual machine and, among other things, the guest operating system forms a software platform on top of which the guest applications run.
In one embodiment, data storage for host computer 104 is served by a storage area network (SAN) (not shown), which includes a storage array (e.g., a disk array) and a switch (SAN fabric) that connects host computer 104 to storage array 160 via the disk interface 116. In virtualized computer systems, in which disk images of virtual machines are stored in the storage arrays, disk images of virtual machines can be migrated between storage arrays as a way to balance the loads across the storage arrays. For example, the Storage VMotion™ product that is available from VMware Inc. of Palo Alto, Calif. allows disk images of virtual machines to be migrated between storage arrays without interrupting the virtual machine whose disk image is being migrated or any applications running inside it. In other embodiments, any technically feasible data storage implementation, other than a SAN, can be used to provide storage resources for host computer 104.
Virtual switches 204-1, 204-2 are software-based devices that exist in the virtual machine kernel on the respective host computer. A virtual switch is a software construct that emulates a physical switch, allowing multiple entities, such as VMs, to communicate with each other and the outside world using a single physical network connection.
Many configuration options exist for virtual switches. A user, such as an administrator, can assign virtual local area networks (VLANs), security profiles, and/or limit the amount of traffic that virtual machines can generate. Additionally, the user can assign multiple physical NICs from the host computer to a virtual switch for load balancing and fault tolerance. As described, each host computer can include one or more NICs, also called “network adapters” or “uplink adapters.”
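The configuration options listed above can be summarized as a small record type. This is a hedged sketch; the field names are assumptions, not an actual product API.

```python
# Sketch of per-virtual-switch options: VLAN assignment, a security
# profile, a cap on VM-generated traffic, and multiple physical NICs
# assigned for load balancing and fault tolerance. Field names are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualSwitchConfig:
    vlan_id: Optional[int] = None             # optional VLAN assignment
    security_profile: str = "default"
    traffic_limit_mbps: Optional[int] = None  # limit on VM-generated traffic
    physical_nics: list = field(default_factory=list)  # teamed uplink adapters

# A switch teamed over two uplink adapters for fault tolerance:
cfg = VirtualSwitchConfig(vlan_id=10, physical_nics=["vmnic0", "vmnic1"])
```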
As described, the VMs connect to virtual switches. The virtual switches, in turn, connect to physical NICs in the host computers. The physical NICs connect to the physical network. Virtual switches can have many similarities with physical switches. For example, virtual switches include a varying number of ports to connect to VMs, offer support for VLANs, can have varying port speeds, and/or can offer security policies.
In some embodiments, virtual switches perform three different functions for a host computer, including (1) virtual machine connection, (2) VM kernel connection, and (3) service console connection. Each of these functions is considered a different connection type or port.
Virtual machine ports connect the VMs with each other and the outside world. Each VM connects to a port on one or more virtual switches. Any physical NICs that are assigned to the virtual switch provide a bridge to the physical network. VM kernel ports connect the VMs to various services, such as networking services, IP (Internet Protocol) storage services, Internet Small Computer System Interface (iSCSI) services, and disk image migrations. The service console port provides access to host computer management services. A VI client can connect to the service console to configure and manage the host computer.
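The three connection types described above can be modeled as a small enumeration. The identifiers below are illustrative assumptions, not names from the source.

```python
# The three virtual switch connection types described in the text.
from enum import Enum

class PortType(Enum):
    # connects VMs with each other and the outside world
    VIRTUAL_MACHINE = "virtual_machine"
    # networking services, IP storage, iSCSI, and disk image migrations
    VM_KERNEL = "vm_kernel"
    # access to host computer management services (used by a VI client)
    SERVICE_CONSOLE = "service_console"
```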
However, several problems arise when using multiple virtual switches, each configured separately on its own host computer.
Accordingly, embodiments of the invention provide a distributed virtual switch (DVS) 304 that is coupled to multiple host computers.
Additional features provided by the DVS 304 include:
- simplified provisioning and administration of virtual networking across many hosts and clusters through a centralized interface;
- simplified end-to-end physical and virtual network management through third-party virtual switch extensions;
- enhanced provisioning and traffic management capabilities through private VLAN support and bi-directional virtual machine rate-limiting;
- enhanced security and monitoring for virtual machine migrations;
- prioritized controls between different traffic types; and/or
- load-based dynamic adjustment across a team of physical adapters on the distributed virtual switch.
In one embodiment, the one or more physical NICs included in the one or more host computers can be organized into “uplinks,” also referred to as “uplink ports.” An uplink is a set of one or more physical NICs that connect to one or more VMs organized in a virtual network, or VLAN.
Embodiments of the invention provide a user interface for managing the physical NICs included in each of the uplinks.
In yet another embodiment, the DVS architecture can be displayed in a user interface that shows the “status” of the various VMs connected to the DVS.
The user can select a link 804 to upgrade one or more VMs to the DVS. Selecting the link causes a dialog box 808 to be displayed.
On some occasions, migrating a VM to the DVS may cause unexpected errors in the networking environment. Accordingly, the dialog box 808 also provides a mechanism for users to “downgrade” one or more VMs from the DVS back to the individual virtual switches. The user can select one or more of the VMs that are connected to the DVS and then select the “Downgrade” link 816. The selected VMs are then automatically migrated back to the individual virtual switches.
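The upgrade and downgrade operations above amount to rewriting each selected VM's network backing. A minimal sketch, assuming the backing is tracked as a simple map from VM name to switch name (all identifiers hypothetical):

```python
def migrate(vm_backing, selected_vms, target):
    """Point each selected VM's network backing at `target`, which is
    either the DVS or an individual virtual switch name."""
    for vm in selected_vms:
        vm_backing[vm] = target
    return vm_backing

backing = {"vm1": "dvs", "vm2": "dvs", "vm3": "vSwitch0"}
migrate(backing, ["vm3"], "dvs")       # upgrade vm3 to the DVS
migrate(backing, ["vm2"], "vSwitch0")  # downgrade vm2 back to its host switch
```

In a real system each reassignment would be an orchestrated reconfiguration of the VM's virtual adapter, not a dictionary write; the sketch only shows the bookkeeping.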
The user interfaces shown in
As described above, individual port groups, VM networks, or VMs can have associated management policies. Examples of management policies include a load balancing policy (i.e., a policy for managing traffic through a network element), a network failover detection policy, a notification policy (e.g., an Address Resolution Protocol (ARP) notification can be transmitted to the physical NIC to update its MAC address lookup table), a rolling failover policy (i.e., a policy that determines what should occur when a failed adapter comes back online), and/or a failover order policy (i.e., a policy that indicates the order in which network adapters should shut down). The various policies can be set at the DVS level or at the physical NIC level, but can also be overridden at the port group level, or even further down at the port level. In one embodiment, a user interface is provided that displays to the user the level at which the policy was set and the level at which the policy is being overridden.
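The level-based override behavior can be sketched as a lookup from the most general to the most specific level: the most specific level that defines a policy wins, and the interface can report both the effective value and the level at which it was set. The level and policy names below are assumptions.

```python
# Policy resolution across configuration levels, most general first.
LEVELS = ["dvs", "port_group", "port"]

def resolve_policy(policy_name, settings):
    """Return (value, level_set_at) for the most specific level that
    defines the policy. `settings` maps level -> {policy: value}."""
    result = None
    for level in LEVELS:
        if policy_name in settings.get(level, {}):
            result = (settings[level][policy_name], level)
    return result

settings = {
    "dvs": {"load_balancing": "src_port_id", "failover_detection": "link_status"},
    "port_group": {"load_balancing": "src_mac_hash"},  # overrides the DVS setting
}
value, level = resolve_policy("load_balancing", settings)
# value == "src_mac_hash", set (overridden) at the "port_group" level
```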
As shown, the method 1000 begins at step 1002, where a processing unit, such as the processing unit that executes the VI client 106, receives a selection to create a DVS. The selection may be made by a user selecting a link or a button to create a DVS. At step 1004, the processing unit defines a number of uplink port groups to be included in the DVS. In one embodiment, at least one uplink port group is automatically created by default. The user can also make a selection to create additional uplink port groups.
At step 1006, the processor defines which physical adapters included in one or more hosts correspond to the defined uplink port group(s). A graphical user interface can be displayed that allows the user to manually select which physical adapters (i.e., physical NICs) included in the various host computers should be associated with which uplink port groups. In some embodiments, an uplink profile can be established that automatically associates physical adapters to the uplink port groups. For example, assume there are four uplink port groups included in the DVS and four host computers that provide network backing for the DVS. Each host computer may have six physical adapters. An uplink profile can be established that provides that one physical adapter from each of the four host computers is assigned to each of the four uplink port groups. Accordingly, each uplink port group would include four physical adapters, one from each host computer. Also, each host computer would have four of six physical adapters assigned to uplink port groups, with two physical adapters available for other purposes.
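The worked example above (four uplink port groups, four host computers, six physical adapters per host) can be sketched as a round-robin assignment rule. All identifiers are hypothetical; this is one possible uplink profile, not the only one.

```python
def assign_by_uplink_profile(hosts, num_uplink_groups):
    """Uplink profile sketch: assign one physical adapter from each host
    to each uplink port group.

    `hosts` maps a host name to its ordered list of physical adapter
    names (each host must have at least `num_uplink_groups` adapters).
    Returns ({group_index: [(host, adapter), ...]}, {host: leftovers}).
    """
    groups = {g: [] for g in range(num_uplink_groups)}
    leftover = {}
    for host, adapters in hosts.items():
        for g in range(num_uplink_groups):
            groups[g].append((host, adapters[g]))   # one adapter per host per group
        leftover[host] = adapters[num_uplink_groups:]  # kept for other purposes
    return groups, leftover

# The example from the text: four hosts, six adapters each, four groups.
hosts = {f"host{i}": [f"vmnic{j}" for j in range(6)] for i in range(4)}
groups, leftover = assign_by_uplink_profile(hosts, 4)
# Each group ends up with four adapters (one per host); each host keeps two.
```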
At step 1008, the processor establishes a connection between a virtual switch associated with a virtual machine and a physical adapter included in at least one uplink port group. One or more VMs may be included in the virtual computing environment. Step 1008 can be repeated for each VM to establish a connection between the VMs and at least one physical adapter. In some embodiments, a VM can be connected to more than one physical adapter, providing for additional bandwidth. The plurality of physical adapters to which the VM is connected may be included in the same uplink port group or in different uplink port groups. In some embodiments, the user is not required to manually establish the connections between the VMs and the physical adapters. Instead, the processor automatically connects the VMs to the physical adapters.
As shown, the method 1100 begins at step 1102, where a processing unit, such as the processing unit that executes the VI client 106, displays an indication that a portion of the VMs included in a virtual computing environment have been migrated from individual switches to a DVS. In other words, some of the VMs are still using legacy individual virtual switches. In one embodiment, the indication comprises a status bar.
At step 1106, the processor displays a list of VMs that have not been migrated to the DVS. In one embodiment, the list of VMs can be displayed in a separate dialog box.
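The "not yet migrated" list from step 1106 amounts to a filter over the VM-to-switch backing map. A minimal sketch, with all names assumed:

```python
def unmigrated_vms(vm_backing):
    """Return, in sorted order, the VMs whose network backing is still an
    individual virtual switch rather than the DVS."""
    return sorted(vm for vm, switch in vm_backing.items() if switch != "dvs")

vms = unmigrated_vms({"vm1": "dvs", "vm2": "vSwitch0", "vm3": "vSwitch1"})
# vms == ["vm2", "vm3"]
```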
As shown, the method 1200 begins at step 1202, where a processing unit, such as the processing unit that executes the VI client 106, displays a graphical node corresponding to a DVS. The graphical node can be a rectangular box.
At step 1204, the processor displays virtual adapters associated with one or more VMs on one side of the graphical node. For example, the virtual adapters associated with one or more VMs can be displayed on the left side of the graphical node.
At step 1206, the processor displays physical adapters associated with one or more host computers on another side of the graphical node. For example, the physical adapters associated with one or more host computers can be displayed on the right side of the graphical node.
At step 1208, the processor displays one or more paths through the graphical node corresponding to connections between the virtual adapters and the physical adapters. In some embodiments, the user can select various portions of the display interface to visualize, or “highlight,” portions of the virtual networking environment. For example, if the user selects a virtual adapter, then the corresponding physical adapter, as well as the path through the graphical node corresponding to the DVS, can be highlighted. If the user selects a physical adapter, then the corresponding virtual adapters corresponding to one or more VMs, as well as the path through the graphical node corresponding to the DVS, can be highlighted. If the user selects a portion of a path through the DVS, then the corresponding virtual adapters and physical adapters connected to the node can be highlighted.
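The selection-driven highlighting described above can be sketched as a lookup over the set of virtual-to-physical adapter connections that pass through the DVS node. The connection list and adapter identifiers are illustrative assumptions.

```python
# Connections through the DVS node, recorded as
# (virtual adapter, physical adapter) pairs. Names are hypothetical.
connections = [
    ("vm1:vnic0", "host1:vmnic0"),
    ("vm2:vnic0", "host1:vmnic0"),  # two VMs share one physical adapter
    ("vm3:vnic0", "host2:vmnic1"),
]

def highlight(selected):
    """Return the set of elements to highlight when the user clicks
    `selected`, which may be a virtual or a physical adapter. Every
    adapter on a path through `selected` is included."""
    related = {selected}
    for vnic, pnic in connections:
        if selected in (vnic, pnic):
            related.update((vnic, pnic))
    return related

# Selecting the shared physical adapter highlights both attached VMs:
# highlight("host1:vmnic0") == {"host1:vmnic0", "vm1:vnic0", "vm2:vnic0"}
```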
In further embodiments, the graphical view of the DVS can be organized so that the VMs are displayed arranged by status or bandwidth usage.
In sum, one or more embodiments of the invention provide a user interface for managing a distributed virtual switch. Virtual network adapters associated with one or more virtual machines are logically connected to one or more physical network adapters included in one or more host computers. In one embodiment, the physical network adapters can be organized in uplink port groups. The user interface provides a graphical overview of the virtual computing environment that allows the user to visualize the virtual network, including the connections between the virtual network adapters and the uplink port groups. The user interface also provides a technique for the user to quickly and safely modify the virtual network to migrate virtual machines from individual virtual switches to a distributed virtual switch and/or to modify the arrangement of physical network adapters that provide network backing for the virtual machines.
The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. These operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they, or representations of them, are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments of the invention may be useful machine operations. In addition, one or more embodiments of the invention also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Disc), such as a CD-ROM, CD-R, or CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein, but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
In addition, while described virtualization methods have generally assumed that virtual machines present interfaces consistent with a particular hardware system, persons of ordinary skill in the art will recognize that the methods described may be used in conjunction with virtualizations that do not correspond directly to any particular hardware system. Virtualization systems in accordance with the various embodiments, implemented as hosted embodiments, non-hosted embodiments, or as embodiments that tend to blur distinctions between the two, are all envisioned. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that perform virtualization functions. Plural instances may be provided for components, operations, or structures described herein as a single instance. Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s).
Claims
1. A graphical user interface for a virtualized computing environment including a plurality of host computers, each having one or more virtual machines running therein and configured with a distributed virtual switch for managing network resources associated with the one or more virtual machines, said graphical user interface comprising:
- a first section for organizing the one or more virtual machines running on the plurality of host computers;
- a second section for organizing one or more physical network adapters included in the plurality of host computers into one or more uplink port groups that each define a set of physical network adapters that provide physical network resources for a set of virtual machines; and
- a third section corresponding to the distributed virtual switch that illustrates logical connections between the one or more virtual machines and the one or more uplink port groups.
2. The graphical user interface of claim 1, wherein each uplink port group includes at least one physical network adapter from each of the plurality of host computers.
3. The graphical user interface of claim 1, wherein a first physical network adapter is automatically associated with a first uplink port group based on an uplink profile that defines which physical network adapters are assigned to which uplink port group.
4. The graphical user interface of claim 1, further comprising a fourth section that allows a user to modify which physical network adapters are associated with which uplink port groups.
5. The graphical user interface of claim 1, further including a fourth section that includes an indication that one or more virtual machines are not logically connected to the distributed virtual switch.
6. The graphical user interface of claim 5, wherein the indication is a status bar or textual indication.
7. The graphical user interface of claim 5, further including a fifth section that includes a list of the one or more virtual machines that are not logically connected to the distributed virtual switch, wherein a user can select a set of virtual machines from the list of one or more virtual machines and cause the virtual machines included in the set of virtual machines to become logically connected to the distributed virtual switch.
8. The graphical user interface of claim 1, wherein a selection of a first virtual machine from the first section causes one or more physical network adapters included in the second section that are logically connected to the first virtual machine to be displayed with visual distinction.
9. The graphical user interface of claim 1, wherein a selection of a first physical network adapter from the second section causes one or more virtual machines included in the first section that are logically connected to the first physical network adapter to be displayed with visual distinction.
10. A non-transitory computer-readable storage medium comprising instructions that, when executed in a computing device, enable a graphical user interface to be displayed, wherein the graphical user interface is for a virtualized computing environment including a plurality of host computers, each having one or more virtual machines running therein and configured with a distributed virtual switch for managing network resources associated with the one or more virtual machines, said graphical user interface comprising:
- a first section for organizing the one or more virtual machines running on the plurality of host computers;
- a second section for organizing one or more physical network adapters included in the plurality of host computers into one or more uplink port groups that each define a set of physical network adapters that provide physical network resources for a set of virtual machines; and
- a third section corresponding to the distributed virtual switch that illustrates logical connections between the one or more virtual machines and the one or more uplink port groups.
11. The computer-readable storage medium of claim 10, wherein each uplink port group includes at least one physical network adapter from each of the plurality of host computers.
12. The computer-readable storage medium of claim 10, wherein a first physical network adapter is automatically associated with a first uplink port group based on an uplink profile that defines which physical network adapters are assigned to which uplink port group.
13. The computer-readable storage medium of claim 10, wherein the graphical user interface further includes a fourth section that allows a user to modify which physical network adapters are associated with which uplink port groups.
14. The computer-readable storage medium of claim 10, wherein the graphical user interface further includes a fourth section that includes an indication that one or more virtual machines are not logically connected to the distributed virtual switch.
15. The computer-readable storage medium of claim 14, wherein the indication is a status bar or textual indication.
16. The computer-readable storage medium of claim 14, further including a fifth section that includes a list of the one or more virtual machines that are not logically connected to the distributed virtual switch, wherein a user can select a set of virtual machines from the list of one or more virtual machines and cause the virtual machines included in the set of virtual machines to become logically connected to the distributed virtual switch.
17. The computer-readable storage medium of claim 10, wherein a selection of a first virtual machine from the first section causes one or more physical network adapters included in the second section that are logically connected to the first virtual machine to be displayed with visual distinction.
18. The computer-readable storage medium of claim 10, wherein a selection of a first physical network adapter from the second section causes one or more virtual machines included in the first section that are logically connected to the first physical network adapter to be displayed with visual distinction.
19. A method for configuring a virtualized computing environment including a plurality of host computers, each having one or more virtual machines running therein and configured with a distributed virtual switch for managing network resources associated with the one or more virtual machines, said method comprising:
- designating one or more uplink port groups to be associated with the distributed virtual switch, wherein each of the one or more uplink port groups provides physical network resources for a set of virtual machines;
- designating one or more physical network adapters included in the plurality of host computers to be associated with each of the one or more uplink port groups; and
- establishing a logical connection between one or more virtual machines executing on the plurality of host computers and the one or more uplink port groups.
20. The method of claim 19, further comprising:
- modifying a management policy setting of a first physical network adapter included in a first uplink port group; and
- causing an indication that said management policy setting was previously set at an uplink port group level to be displayed.
Type: Application
Filed: Feb 7, 2011
Publication Date: Nov 17, 2011
Applicant: VMWARE, INC. (Palo Alto, CA)
Inventors: Kathryn MURRELL (San Francisco, CA), Karen Natalie WONG (San Carlos, CA)
Application Number: 13/022,100
International Classification: G06F 9/455 (20060101);