Method and system for using boot servers in networks

A method and system for booting a server and/or server blade in a network is provided. The system includes a boot server that is used to store plural WWPNs, an active profile for the server and a boot schedule, wherein an HBA registers a default WWPN and/or HBA profile with the boot server and, if the HBA is configured to boot using a management application, the boot server provides a WWPN to the HBA. The management application includes a graphical user interface for creating a LUN for a storage system and assigning the LUN to be a boot LUN, wherein the graphical user interface can access a boot server for booting a server.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. § 119(e)(1) to the provisional patent application, Ser. No. 60/565,060, filed on Apr. 26, 2004, the disclosure of which is incorporated herein by reference in its entirety.

BACKGROUND

1. Field of the Invention

The present invention relates to storage area networks, and more particularly, to managing boot LUNs in a storage area network.

2. Background of the Invention

Storage area networks (“SANs”) are commonly used to store and access data. A SAN is a high-speed sub-network of shared storage devices, for example, disks and tape drives. A computer system (may also be referred to as a “host”) can access data stored in the SAN.

Typical SAN architecture makes storage devices available to all servers that are connected using a computer network, for example, a local area network or a wide area network. The term server in this context means any computing system or device coupled to a network that manages network resources. For example, a file server is a computer and storage device dedicated to storing files. Any user on the network can store files on the server. A print server is a computer that manages one or more printers, and a network server is a computer that manages network traffic. A database server is a computer system that processes database queries.

Various components and standard interfaces are used to move data from host systems to storage devices in a SAN. Fibre Channel is one such standard. Fibre Channel (incorporated herein by reference in its entirety) is an American National Standards Institute (ANSI) set of standards, which provides a serial transmission protocol for storage and network protocols such as HIPPI, SCSI (small computer system interface), IP, ATM and others. Fibre Channel provides an input/output interface to meet the requirements of both channel and network users.

Host systems often communicate with storage systems via a host bus adapter (“HBA”) using the “PCI” bus interface. PCI stands for Peripheral Component Interconnect, a local bus standard that was developed by Intel Corporation®. The PCI standard is incorporated herein by reference in its entirety. PCI is a 32- or 64-bit bus that can run at clock speeds of 33 or 66 MHz.

PCI-X is a standard bus (incorporated herein by reference in its entirety) that is compatible with existing PCI cards using the PCI bus. PCI-X improves the data transfer rate of PCI from 132 MBps to as much as 1 GBps. The PCI-X standard was developed by IBM®, Hewlett Packard Corporation® and Compaq Corporation® to increase the performance of high-bandwidth devices, such as those based on the Gigabit Ethernet and Fibre Channel standards, and of processors that are part of a cluster.

The iSCSI standard (incorporated herein by reference in its entirety) is based on Small Computer Systems Interface (“SCSI”), which enables host computer systems to perform block data input/output (“I/O”) operations with a variety of peripheral devices including disk and tape devices, optical storage devices, as well as printers and scanners. A traditional SCSI connection between a host system and peripheral device is through parallel cabling and is limited by distance and device support constraints. For storage applications, iSCSI was developed to take advantage of network architectures based on Fibre Channel and Gigabit Ethernet standards. iSCSI leverages the SCSI protocol over established networked infrastructures and defines the means for enabling block storage applications over TCP/IP networks. iSCSI defines mapping of the SCSI protocol with TCP/IP.

The iSCSI architecture is based on a client/server model. Typically, the client is a host system, such as a file server, that issues a read or write command. The server may be a disk array that responds to the client request. Devices that request I/O processes are called initiators. Targets are devices that perform operations requested by initiators. Each target can accommodate a number of devices, known as logical units, and each logical unit is assigned a Logical Unit Number (LUN). LUN(s) throughout this specification means a logical unit number, which is a unique identifier on a parallel SCSI, Fibre Channel or iSCSI target.

Boot LUNs are used to boot servers in a SAN environment. In conventional systems, to boot from a Fibre Channel device, each HBA needs to be configured with the name of the boot device. To configure the HBAs, one has to evaluate each server and store either the port name or the port identifier, along with the LUN number, of the target device.

Conventional systems manually associate an HBA's worldwide port number (“WWPN”), provided by the HBA manufacturer, with a server blade. The WWPN is manually entered for LUN masking. A “CTRL Q” utility is run for each blade to obtain its WWPN and to identify boot LUNs. These conventional systems are manual and tedious, since boot information is not available at a single point.

The problem becomes worse in a cluster environment. In a clustered environment, the system may be required to boot to different operating system partitions on the same blade, i.e., different boot LUNs have to be enabled for the same blade and the boot LUNs need to be protected from each other.

Conventional systems do not provide an efficient methodology to manage boot LUNs and the boot process itself. Therefore, there is a need for a method and system for efficiently managing the boot process in SANs.

SUMMARY OF THE INVENTION

In one aspect of the present invention, a method for booting a server and/or server blade in a network is provided. The method includes registering default world wide port number (“WWPN”) information and/or an HBA profile with a boot server; returning a WWPN and/or HBA profile that the server needs to use for booting, wherein a switch returns the WWPN and/or HBA profile; querying the boot server for a list of boot devices; and returning a list of boot devices to the server, wherein the switch returns the list.

The host bus adapter uses the device list to configure the first available boot device from the list. The boot server includes a boot schedule for booting the server.

In yet another aspect of the present invention, a system having a server and a storage system, with a switch that allows communication between the server and the storage system, is provided. The system includes a boot server that is used to store plural WWPNs, an active profile for the server and a boot schedule, wherein an HBA registers a default WWPN and/or HBA profile with the boot server and, if the HBA is configured to boot using a management application, the boot server provides a WWPN to the HBA.

In yet another aspect of the present invention, a management application for configuring a storage area network is provided. The management application includes a graphical user interface for creating a LUN for a storage system and assigning the LUN to be a boot LUN, wherein the graphical user interface can access a boot server for booting a server.

This brief summary has been provided so that the nature of the invention may be understood quickly. A more complete understanding of the invention can be obtained by reference to the following detailed description of the preferred embodiments thereof concerning the attached drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features and other features of the present invention will now be described with reference to the drawings of a preferred embodiment. In the drawings, the same components have the same reference numerals. The illustrated embodiment is intended to illustrate, but not to limit, the invention. The drawings include the following Figures:

FIGS. 1A and 1B show top-level block diagrams of a SAN;

FIG. 2 shows a block diagram of a management utility used according to one aspect of the present invention;

FIG. 3A shows a process flow diagram for installing a server blade, according to one aspect of the present invention;

FIG. 3B shows a top-level block diagram for a Fibre Channel switch module;

FIG. 4 shows a flow diagram for configuring a SAN, according to one aspect of the present invention;

FIG. 5 shows a flow diagram for using a boot server, according to one aspect of the present invention; and

FIGS. 6A-6E show screen shots for a management utility application for setting up a boot LUN, according to one aspect of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following definitions are provided as they are typically (but not exclusively) used in the Fibre Channel environment, implementing the various adaptive aspects of the present invention.

“Blade”: A module in a Fibre Channel switch.

“Fibre Channel ANSI Standard”: The standard describes the physical interface, transmission and signaling protocol of a high performance serial link for support of other high level protocols associated with IPI, SCSI, IP, ATM and others.

“Fabric”: A system which interconnects various ports attached to it and is capable of routing Fibre Channel frames by using destination identifiers provided in FC-2 frame headers.

“Fabric Topology”: This is a topology where a device is directly attached to a Fibre Channel fabric that uses destination identifiers embedded in frame headers to route frames through a Fibre Channel fabric to a desired destination.

“Port”: A general reference to N_Port or F_Port.

To facilitate an understanding of the preferred embodiment, the general architecture and operation of a SAN using storage devices will be described. The specific architecture and operation of the preferred embodiment will then be described with reference to the general architecture.

SAN:

FIG. 1A shows a top-level block diagram of a SAN 100. SAN 100 includes plural server blades 113 coupled to Fibre Channel switch blades 114 (an individual Fibre Channel switch blade is shown as 103 and may be referred to as “switch blade 103”) that are coupled to a Fibre Channel fabric 104. A single server blade 102A includes a local drive (or storage) 129 and a central processing unit 102B.

Storage 129 may store operating system program files, application program files (management application 203, according to one aspect of the present invention), and other files. Some of these files are stored on storage 129 using an installation program. For example, CPU 102B executes computer-executable process steps of an installation program so that CPU 102B can properly execute an application program.

Storage sub-system (may also be referred to as “FC-Storage”) 108 is also coupled to fabric 104 and accessible to server blades 113. Storage sub-system 108 includes plural storage devices (tape drives, disks or any other storage media) 109-112 with Data LUNs 107. Boot LUNs 105 are also coupled to storage sub-system 108 to store boot information.

FIG. 1B shows a host system 101A (similar to server blade 102A) with system memory 101. Host system 101A is coupled to storage sub-systems 108 and 108A via HBA 106. HBA 106 is operationally coupled to switch blade 103 and fabric 104.

It is noteworthy that host system 101A, as referred to herein, may include a computer, a server or other similar devices, which may be coupled to storage systems. Host system 101A can access random access memory (“RAM”; for example, memory 101) and read only memory (“ROM”) (not shown), and includes other components to communicate with various SAN modules.

When executing stored computer-executable process steps from storage 129, CPU 102B stores and executes the process steps out of RAM. ROM is provided to store invariant instruction sequences such as start-up instruction sequences or basic input/output operating system (BIOS) sequences.

HBA 106 includes various components to facilitate data transfer between server blades and storage sub-systems 108 and 108A. HBA 106 includes processors (which may also be referred to as “sequencers”) on the receive and transmit sides, for processing data received from storage sub-systems 108 and 108A and for transmitting data to storage sub-systems 108 and 108A, respectively. The transmit path in this context means the data path from host memory 101 to the storage systems via HBA 106; the receive path means the data path from the storage sub-systems to host memory 101 via HBA 106.

Besides the dedicated processors on the receive and transmit paths, HBA 106 also includes a central processor, which may be a reduced instruction set computer (“RISC”) processor, for performing various functions.

HBA 106 also includes a Fibre Channel interface (also referred to as a Fibre Channel protocol manager, “FPM”) in the receive and transmit paths to interface with switch blade 103.

HBA 106 is also coupled to an external memory, which is used to move data to and from host 101A. HBA 106 also has non-volatile memory (not shown) to store BIOS information.

Management Utility Application:

FIG. 2 shows a management utility application that is used for managing boot servers, according to one aspect of the present invention. In one aspect of the present invention, the virtual disk service (“VDS”) architecture 200 and the Storage Networking Industry Association (“SNIA”) initiative “SMI-S” are used to provide a graphical user interface for efficiently managing storage area networks via management application 203. The SMI-S specification, incorporated herein by reference in its entirety, provides a common interface for implementing management functionality.

Microsoft Corporation®, which markets Windows Server 2003® and Windows Storage Server 2003®, provides a virtual disk service (“VDS”) program for managing storage configurations under Microsoft server operating systems.

It is noteworthy that the adaptive aspects of the present invention described herein are not limited to VDS architecture 200 or any industry standard.

VDS architecture 200 includes disks 205 and drives 208, which are coupled to software interface 207 and hardware interface layer 210; these interfaces are in turn coupled to VDS 201. VDS architecture 200 allows storage hardware vendors to write hardware-specific code that is then exposed through VDS hardware interface 210. Software interface 207 also provides a vendor-independent interface.

Disk management utility 202, management application 203 and command line interface utility 204 allow a SAN vendor to use application programming interfaces (“APIs”) to build applications/solutions for managing SANs. Management application 203 may be used to build vendor-specific applications.
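For illustration only, the layering described above (vendor-specific code behind a vendor-neutral interface) might be sketched as follows. This is a loose analogy, not the actual Microsoft VDS COM API; all class and method names here are invented.

```python
# Illustrative sketch of the provider layering: a common interface (akin
# to VDS hardware interface 210) with vendor-specific code behind it.
# Not the real Microsoft VDS COM API; every name here is invented.
from abc import ABC, abstractmethod

class LunProvider(ABC):
    """Vendor-neutral interface a management application codes against."""
    @abstractmethod
    def create_lun(self, size_gb: int) -> str:
        """Create a LUN and return its identifier."""

class VendorArrayProvider(LunProvider):
    """Hardware-vendor code plugged in beneath the common interface."""
    def create_lun(self, size_gb: int) -> str:
        # Vendor-specific array commands would be issued here.
        return f"vendor-lun-{size_gb}gb"

# A management application sees only the common interface:
provider: LunProvider = VendorArrayProvider()
lun_id = provider.create_lun(10)
```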

Details of management application 203 for creating and managing LUNs are provided in the provisional patent application Ser. No. 60/565,060. Management application 203 is coupled to a boot server 202A that interfaces with a Fibre Channel switch (for example 103, or switch blades 114). Boot server 202A may be stored in switch blade 103 memory.

Boot server 202A includes a list of WWPNs that an HBA can use; an S_ID; an active profile of the server that is to be re-booted; and a boot schedule indicating when a server (or servers) needs to be re-booted. The WWPN list maps to a list of boot devices and their associated LUNs.

Boot server 202A also includes information that identifies the server blade (in the case of a bladed server) or the server itself. This includes a chassis identifier and server blade location, and/or a system service tag number or a system serial number that uniquely identifies a server. Boot server 202A includes an active boot profile that allows booting a server to a certain system profile.
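For illustration only, the contents of boot server 202A described in the two preceding paragraphs might be modeled as the following record; the patent describes the information, not a data layout, so every name and type below is an assumption.

```python
# Hypothetical model of a boot server record (all names are assumptions;
# the patent describes the contents, not a concrete data layout).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BootDevice:
    target_wwpn: str            # WWPN of the storage port exposing the boot LUN
    lun: int                    # LUN of the boot device

@dataclass
class ServerIdentity:
    chassis_serial: str         # chassis identifier / system serial number
    blade_slot: Optional[int]   # server blade location (None if not bladed)
    service_tag: Optional[str]  # system service tag number, if available

@dataclass
class BootRecord:
    default_wwpn: str               # WWPN the HBA registered with
    assigned_wwpns: List[str]       # WWPNs the HBA may use; map to boot devices
    identity: ServerIdentity        # uniquely identifies the server or blade
    active_profile: str             # system profile the server should boot to
    boot_schedule: Optional[str]    # when the server(s) should be re-booted
    boot_devices: List[BootDevice] = field(default_factory=list)
```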

Management application 203 can query Fibre Channel switch 103 to find out HBA profiles for use in a server boot process. Management application 203 programs the boot device list in the boot server located in Fibre Channel switch 103 for each HBA profile. An HBA's BIOS logs into boot server 202A and requests FC switch 103 to provide a list of boot devices configured for its use. Details of using boot server 202A are provided below.

Fibre Channel Switch Blade 103:

Fibre Channel switches may use multiple modules (also referred to as “blades”) connected by Fibre Channel ports. A multi-module switch is integrated as a single switch and appears to other devices in the Fibre Channel fabric as a single switch.

Fibre Channel Switch blade 103 is a Fibre Channel switch module, which is a multi-port device where each port manages a simple point-to-point connection between itself and its attached system. Each port can be attached to a server, peripheral, I/O subsystem, bridge, hub, router, or even another switch. Messages are received from one port and automatically routed to another port. Multiple calls or data transfers happen concurrently.

Fibre Channel switch blade 103 uses memory buffers to hold frames received and sent across a network. Associated with these buffers are credits, which are the number of frames that a buffer can hold per fabric port.
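As a conceptual sketch of the two mechanisms just described (destination-based routing and buffer credits), a switch port might be modeled as follows; real switches implement both in hardware per the Fibre Channel standards, so this is illustrative only and every name is invented.

```python
# Conceptual sketch only: destination-based routing plus buffer-credit
# bookkeeping for a fabric port. Real switches implement this in hardware
# per the Fibre Channel standards.
class SwitchPort:
    def __init__(self, buffers: int):
        self.credits = buffers        # one credit per frame buffer

    def can_accept_frame(self) -> bool:
        return self.credits > 0       # frames flow only while credit remains

    def accept_frame(self) -> None:
        self.credits -= 1             # a buffer is consumed by the frame

    def buffer_freed(self) -> None:
        self.credits += 1             # buffer drained; credit is returned

def route_frame(d_id: str, routing_table: dict, ports: dict) -> bool:
    """Forward a frame toward the port that owns the destination identifier."""
    port = ports[routing_table[d_id]]
    if port.can_accept_frame():
        port.accept_frame()           # frame is queued for transmission
        return True
    return False                      # no credit: the frame must wait
```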

FIG. 3B shows a top-level block diagram of switch blade 103. Switch blade 103 includes plural external ports (F_Ports operationally coupled to other devices; or E_Ports coupled to other switch modules) 300A through 300D; and internal ports 301A-301D that operate under a multi-blade protocol.

Switch blade 103 includes processor 302 to control certain switch functionality. Processor 302 can access memory 303 via a bus (not shown). In one aspect of the present invention, memory 303 can store Name Server data 304 and boot server information 202A.

It is noteworthy that boot server 202A may be located at any other location. The adaptive aspects of the present invention are not limited to locating boot server 202A in switch blade 103.

The Fibre Channel Generic Services (FC-GS-3) specification describes, in section 5.0, various Fibre Channel services that are provided by Fibre Channel switches, including a “Name Server” used to discover Fibre Channel devices coupled to a fabric.

A Name Server provides a way for N_Ports and NL_Ports to register and discover Fibre Channel attributes. Requests for Name Server commands are carried over the Common Transport protocol, also defined by FC-GS-3. The Name Server information is distributed among fabric elements and is made available to N_Ports and NL_Ports after the ports have logged in.

Various commands are used by the Name Server protocol, as defined by FC-GS-3, for registration, de-registration and queries. The Fibre Channel Switched Fabric (FC-SW-2) specification describes how a fabric consisting of multiple switches implements a distributed Name Server.

Process Flow Diagrams:

FIG. 3A shows a process flow diagram for installing a server in SAN 100. In step S300, a server blade (or the server itself, i.e., host 101A) is installed and the Fibre Channel ports (in HBA 106) are coupled to a Fibre Channel switch (for example, 103).

In step S302, the server (or the server blade, in the case of bladed servers) is booted. In step S304, the server (via HBA 106 BIOS) registers with Name Server 304 and boot server 202A. The name property includes the default WWPN, the port identifier (“Port ID”) and the server property; the server property includes the name, location and serial number of the server blade. In one aspect, HBA 106 BIOS calls into the server system BIOS and retrieves the serial number of the server chassis and the slot location of the server blade (if applicable). HBA 106 may compute a WWPN or use the default WWPN that is provided by the HBA 106 manufacturer and stored in HBA memory (for example, non-volatile random access memory).
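Step S304 might look as follows from the HBA 106 BIOS side, reusing the hypothetical ServerIdentity type from the earlier sketch; the system BIOS, Name Server and boot server interfaces are assumed stand-ins, not the patent's implementation.

```python
# Hypothetical sketch of step S304; every interface below is an assumed
# stand-in (the patent does not define these APIs).
def register_with_fabric(hba, system_bios, name_server, boot_server):
    # HBA BIOS calls into the server system BIOS for the server property.
    identity = ServerIdentity(                     # type from earlier sketch
        chassis_serial=system_bios.chassis_serial(),
        blade_slot=system_bios.blade_slot(),       # None if not a bladed server
        service_tag=system_bios.service_tag(),
    )
    # Use the manufacturer default WWPN stored in HBA non-volatile memory,
    # or compute one.
    wwpn = hba.default_wwpn or hba.compute_wwpn()
    # Register the name property (WWPN, Port ID) and the server property.
    name_server.register(wwpn=wwpn, port_id=hba.port_id)
    boot_server.register(default_wwpn=wwpn, identity=identity)
```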

In step S306, the server boots to an empty shell if there is no operating system.

FIG. 4 shows a flow diagram of process steps for configuring a SAN with the proper boot volumes for the servers that are a part of the SAN. In step S400, the process performs a discovery to identify initiator HBAs and target devices through the switch Name Server 304.

In step S402, an HBA is related to a server (or server blade) based on the name property registered with boot server 202A under the WWPN of the HBA. In step S404, the chassis/server blade/HBA hierarchy is displayed. In step S406, management application 203 creates LUNs. A user can specify a LUN to be a boot LUN (600, FIG. 6C).

In step S408, management application 203 registers the boot LUN with the boot server 202A under the WWPN of the HBA.
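The FIG. 4 flow might be sketched on the management-application side as follows; the name_server, boot_server, storage and ui objects are hypothetical stand-ins for Name Server 304, boot server 202A, the storage sub-system and the graphical user interface of management application 203, and BootDevice is the hypothetical type from the earlier sketch.

```python
# Hypothetical sketch of steps S400-S408 (all object interfaces assumed).
def configure_boot_luns(name_server, boot_server, storage, ui):
    # S400: discover initiator HBAs and target devices via the Name Server.
    initiators = name_server.query(port_type="initiator")

    # S402: relate each HBA to its server via the name property registered
    # with the boot server under the HBA's WWPN.
    hierarchy = {h.wwpn: boot_server.lookup(h.wwpn).identity
                 for h in initiators}

    # S404: display the chassis -> server blade -> HBA hierarchy.
    ui.show_hierarchy(hierarchy)

    # S406: create a LUN; the user may mark it as a boot LUN (600, FIG. 6C).
    lun = storage.create_lun(size_gb=10)
    if ui.marked_as_boot(lun):
        # S408: register the boot LUN with the boot server under the WWPN
        # of the HBA that will boot from it.
        boot_server.add_boot_device(
            ui.selected_hba_wwpn(),
            BootDevice(target_wwpn=lun.target_wwpn, lun=lun.number))
```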

Server Re-Boot:

FIG. 5 shows a flow diagram for re-booting a server using boot server 202A, according to one aspect of the present invention. The re-boot process begins in step S500 by booting the server blade. In step S502, HBA 106 BIOS retrieves the chassis serial number and blade position from the server BIOS. In step S504, HBA 106 BIOS registers with Name Server 304 and boot server 202A with a default WWPN value and the server blade properties.

In step S506, the process determines if HBA 106 is already configured for boot. If yes, then in step S508, switch blade 103 returns a WWPN that HBA 106 needs to use based on a boot schedule. In step S510, using the new WWPN, HBA 106 BIOS registers with Name Server 304 and queries boot server 202A for a list of boot devices. In step S512, switch 103 returns a list of boot devices. In step S514, HBA 106 BIOS uses the device list to configure the first available boot device from the list. In step S516, the boot process continues.

If, in step S506, HBA 106 is not already configured for boot, then in step S518, switch blade 103 returns a status indicating that no boot is configured. In step S520, HBA 106 BIOS does not configure any boot device. However, since HBA 106 registered with the boot server, the profile information is now accessible to management application 203 for future boot operations.
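The two branches of FIG. 5 might look like this from the HBA 106 BIOS side, under the same assumed interfaces as the earlier sketches; this is illustrative only, not firmware code.

```python
# Hypothetical sketch of steps S506-S520 (interfaces assumed as before).
def boot_from_san(hba, name_server, boot_server) -> bool:
    # S506: ask whether this HBA is already configured for boot.
    reply = boot_server.query_boot_config(hba.default_wwpn)
    if not reply.configured:
        # S518/S520: no boot device is configured, but the registration
        # done earlier leaves the profile visible to the management
        # application for future boot operations.
        return False

    # S508: the switch returns the WWPN to use, per the boot schedule.
    wwpn = reply.assigned_wwpn
    # S510: re-register under the new WWPN and query for boot devices.
    name_server.register(wwpn=wwpn, port_id=hba.port_id)
    devices = boot_server.boot_devices(wwpn)       # S512: list returned
    for dev in devices:                            # S514: first available
        if hba.configure_boot_device(dev):         # device is configured
            return True                            # S516: boot continues
    return False
```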

FIGS. 6A-6E show a graphical user interface accessible via management application 203 for creating a boot LUN, according to one aspect of the present invention. FIG. 6A shows a sub-system 600A (IBM FAStT200) for which a LUN is created. FIG. 6B shows a screen shot where the LUN can be configured using the “Configure and Add” option 600B. FIG. 6C shows in window 602 the HBAs that are allowed to access the new LUN. Also, a user can check block 600 to specify the new LUN to be used as a boot LUN.

FIG. 6D shows the LUN listing with associated HBAs in window 603. FIG. 6E shows in window 604 the various LUNs that are created.

In one aspect of the present invention, the boot server is easily accessible using a management application. Tedious manual entries are not required to create boot LUNs.

It is noteworthy that although the foregoing illustrations have been provided with respect to Fibre Channel based SANs, the concept of including the boot server information in a switch may be used with other protocols as well.

Although the present invention has been described with reference to specific embodiments, these embodiments are illustrative only and not limiting. Many other applications and embodiments of the present invention will be apparent in light of this disclosure and the following claims.

Claims

1. A method for booting a server and/or server blade in a network, comprising:

registering a default world wide port number information (“WWPN”) and/or HBA Profile with a boot server;
returning a WWPN and/or HBA profile that the server needs to use for booting, wherein a switch returns the WWPN and/or HBA profile;
querying the boot server for a list of boot devices; and
returning a list of boot devices to the server, wherein the switch returns the list.

2. The method of claim 1, wherein a host bus adapter registers the WWPN returned by the switch.

3. The method of claim 2, wherein the host bus adapter uses the device list to configure a first available boot device from the list.

4. The method of claim 1, wherein the boot server includes a boot schedule for booting the server.

5. A system having a server and a storage system with a switch that allows communication between the server and the storage system, comprising:

a boot server that is used to store plural world wide port numbers, an active profile for the server and a boot schedule, wherein a host bus adapter (“HBA”) registers a default WWPN and/or HBA profile with the boot server and if the HBA is configured to boot using a management application, the boot server provides a WWPN and/or HBA profile to the HBA.

6. The system of claim 5, wherein the HBA uses the WWPN it received from the boot server to register with a name server and query the boot server for a list of boot devices.

7. The system of claim 6, wherein the switch returns a list of boot devices in response to the query.

8. The system of claim 7, wherein the HBA uses the list to configure a boot device.

9. A management application for configuring a storage area network, comprising:

a graphical user interface for creating a LUN for a storage system and assigning the LUN to be a boot LUN, wherein the graphical user interface can access a boot server for booting a server.

10. The management application of claim 9, wherein the boot server is used to store plural world wide port numbers, an active profile for the server and a boot schedule, wherein a host bus adapter (“HBA”) registers a default WWPN and/or HBA profile with the boot server and if the HBA is configured to boot using the management application, the boot server provides a WWPN to the HBA.

11. The management application of claim 10, wherein the HBA uses the WWPN it received from the boot server to register with a name server and query the boot server for a list of boot devices.

12. The management application of claim 11, wherein a switch returns a list of boot devices in response to the query.

Patent History
Publication number: 20060047852
Type: Application
Filed: Oct 1, 2004
Publication Date: Mar 2, 2006
Patent Grant number: 7930377
Inventors: Shishir Shah (Irvine, CA), Edward McGlaughlin (Minneapolis, MN)
Application Number: 10/957,465
Classifications
Current U.S. Class: 709/245.000
International Classification: G06F 15/16 (20060101);