Logical partitioning in redundant systems
A plurality of processing nodes in a storage system are partitioned into a plurality of logical processing units, wherein the plurality of logical processing units can respond to I/O requests from a host coupled to the storage system. At least two logical processing units are grouped, wherein data in a first storage coupled to a first logical processing unit of the at least two logical processing units is mirrored by data in a second storage coupled to the second logical processing unit of the at least two logical processing units. In response to a failure of the first logical processing unit, an I/O request from the host is responded to via the second logical processing unit.
This application is a continuation of application Ser. No. 10/675,323 filed on Sep. 29, 2003, which is incorporated herein by reference in its entirety.
BACKGROUND
1. Field
The present disclosure relates to a method, system, and article of manufacture for logical partitioning in redundant systems.
2. Description of Related Art
Redundant information technology systems, including storage systems, may store the same data in multiple nodes, where a node may be a computational unit, a storage unit, etc. When one node of a redundant system is unavailable, an alternate node of the redundant system may be used to substitute for the unavailable node.
An enterprise storage server (ESS), such as the IBM* TotalStorage Enterprise Storage Server*, may be a disk storage server that includes one or more processors coupled to storage devices, including high capacity scalable storage devices, Redundant Array of Independent Disks (RAID), etc. The ESS may be connected to a network and include features for copying data in storage systems. An ESS that includes a plurality of nodes, where a node may have a plurality of processors, may be used as a redundant information technology system. *IBM, IBM TotalStorage Enterprise Storage Server, Enterprise System Connection (ESCON), OS/390 are trademarks of International Business Machines Corp.
In ESS units that have a plurality of nodes, a pair of nodes may provide redundancy. For example, one node may be referred to as a primary node and another node may be referred to as a secondary node. If the primary node fails, the secondary node takes over and performs the functions of the primary node.
In many redundant systems that use a primary node and a secondary node to provide redundancy, entire nodes may fail. The failure of an entire node, especially in situations where the failed node includes multiple central processing units (CPUs), can cause system performance to degrade.
SUMMARY
Provided are a method, system, and article of manufacture, wherein a plurality of processing nodes in a storage system are partitioned into a plurality of logical processing units, and wherein the plurality of logical processing units can respond to I/O requests from a host coupled to the storage system. At least two logical processing units are grouped, wherein data in a first storage coupled to a first logical processing unit of the at least two logical processing units is mirrored by data in a second storage coupled to the second logical processing unit of the at least two logical processing units. In response to a failure of the first logical processing unit, an I/O request from the host is responded to via the second logical processing unit.
In further embodiments, the storage system has at least two processing nodes, wherein the plurality of logical processing units are distributed across the at least two processing nodes, wherein one processing node includes a plurality of central processing units, and wherein in the event of the failure of the first logical processing unit, the plurality of processing nodes stay operational.
In additional embodiments, an administrative console is coupled to the plurality of processing nodes of the storage system. Information on processing requirements, memory requirements, and host bus adapter requirements for the plurality of logical processing units is processed at the administrative console prior to partitioning.
In yet additional embodiments, one or more partitioning applications are coupled to the plurality of logical processing units. In response to grouping the at least two logical processing units, initial program load of the first logical processing unit is started. The one or more partitioning applications determine an identification of the second logical processing unit grouped with the first logical processing unit. The one or more partitioning applications present common resources to the first and second logical processing units.
In further embodiments, a request for memory access of a logical processing unit is received from the first logical processing unit. One or more partitioning applications coupled to the plurality of logical processing units determine whether the logical processing unit is grouped with the first logical processing unit. If the logical processing unit is grouped with the first logical processing unit, then the memory access of the logical processing unit is allowed to the first logical processing unit. If the logical processing unit is not grouped with the first logical processing unit, then the memory access of the logical processing unit is prevented to the first logical processing unit.
In still further embodiments, a write request is received from the host to the plurality of processing nodes in the storage system. One or more partitioning applications write data corresponding to the write request to the first storage coupled to the first logical processing unit and the second storage coupled to the second logical processing unit.
In yet additional implementations, a read request is received from the host to the plurality of processing nodes in the storage system. One or more partitioning applications read data corresponding to the read request from the first storage coupled to the first logical processing unit.
In further implementations, the partitioning and grouping are performed by one or more partitioning applications coupled to the plurality of processing nodes, wherein the one or more partitioning applications comprise a hypervisor application of a redundant system.
The implementations create a plurality of logical processing units from a plurality of processing nodes in a storage system. A pair of logical processing units form a redundant system, where if one logical processing unit is unavailable, the other logical processing unit may be used to substitute for the unavailable logical processing unit. In certain embodiments, in the event of a failure of a logical processing unit, the processing node that includes the failed logical processing unit continues to operate.
In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several implementations. It is understood that other implementations may be utilized and structural and operational changes may be made without departing from the scope of the present implementations.
The ESS unit 100 may include two nodes 108, 110, where a node may be a processing node, such as a computational unit. A node may have one or more CPUs and may be partitioned into one or more logical processing units, such as virtual machines, by a partitioning application. For example, the node 108 may have one or more CPUs 112, and the node 108 may be partitioned into the virtual machines 114a . . . 114n by a partitioning application 116. Similarly, the node 110 may have one or more CPUs 118, and the node 110 may be partitioned into virtual machines 120a . . . 120n by a partitioning application 122. A virtual machine, such as the virtual machines 114a . . . 114n, 120a . . . 120n, may appear as a computational unit to the host 102.
While the ESS unit 100 is shown as including two nodes 108 and 110, in alternative embodiments the ESS unit 100 may include a fewer or a larger number of nodes. For example, in certain embodiments the ESS unit 100 may comprise only one node partitioned into a plurality of virtual machines, and in certain embodiments the ESS unit 100 may comprise three nodes where the nodes may be partitioned into one or more virtual machines. In alternative embodiments, instead of the ESS unit 100, other computational or storage systems may be used, where the other computational or storage systems are partitioned into a plurality of virtual machines.
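Purely as an illustrative aid, and not as part of the described embodiments, the following Python sketch models a processing node being partitioned into virtual machines by a partitioning application; the names Node, VirtualMachine, and partition_node are assumptions introduced here for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    """A logical processing unit carved out of a processing node."""
    name: str            # e.g. "114a"
    pool_number: int = 0

@dataclass
class Node:
    """A processing node with one or more CPUs, e.g. node 108 or node 110."""
    name: str
    cpu_count: int
    virtual_machines: List[VirtualMachine] = field(default_factory=list)

def partition_node(node: Node, vm_names: List[str]) -> None:
    """Partition a node into virtual machines, as a partitioning application might."""
    node.virtual_machines = [VirtualMachine(name) for name in vm_names]

node_108 = Node("node 108", cpu_count=4)
partition_node(node_108, ["114a", "114b", "114n"])
print([vm.name for vm in node_108.virtual_machines])   # ['114a', '114b', '114n']
```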
In certain embodiments, the host 102 and the nodes 108, 110 may be a device such as a personal computer, a workstation, a server, a mainframe, a hand held computer, a palm top computer, a telephony device, a network appliance, etc. The host 102 may include any operating system (not shown), such as the IBM OS/390* operating system. The host 102 may also include at least one host application 124 that sends Input/Output (I/O) requests to the ESS unit 100.
The host bus adapters 104a . . . 104m operate over Enterprise System Connection (ESCON)* channels or any other data interface mechanism (e.g., fibre channel, Storage Area Network (SAN) interconnections, etc.) and may allow communication between the host 102 and the plurality of virtual machines 114a . . . 114n, 120a . . . 120n. For example, in certain embodiments, the host bus adapter 104a may allow the host 102 to communicate with the virtual machines 114a, 120a, and the host bus adapter 104b may allow the host 102 to communicate with the virtual machines 114b, 120b.
The administrative console 106 may be a device, such as a personal computer, a workstation, a server, a mainframe, a hand held computer, a palm top computer, a telephony device, a network appliance, etc., that is used to administer the ESS unit 100. In certain embodiments, the administrative console 106 may include an administrative application 126 that is used to configure the ESS unit 100.
Therefore, the computing environment includes the host 102, the host bus adapters 104a . . . 104m, the administrative console 106, and the ESS unit 100, whose nodes 108, 110 may be partitioned into a plurality of virtual machines.
A pair of partner virtual machines comprises two virtual machines, where each virtual machine may be referred to as the partner of the other virtual machine. For example, partner virtual machines 200b comprise the virtual machines 114b and 120b. Therefore, virtual machine 114b is a partner virtual machine of virtual machine 120b and vice versa.
Data in a first storage coupled to a first virtual machine of partner virtual machines is mirrored by data in a second storage coupled to a second virtual machine of the partner virtual machines. For example, data in storage of virtual machine 114a may be mirrored by data in storage of virtual machine 120a. The first virtual machine may be referred to as a primary virtual machine and the second virtual machine may be referred to as a secondary virtual machine. The virtual machines can respond to I/O requests from the host 102, sent by the host application 124 via the host bus adapters 104a . . . 104m to the ESS unit 100.
Therefore, the virtual machines in the nodes 108, 110 are grouped into partner virtual machines, and data in the storage coupled to one virtual machine of a pair is mirrored by data in the storage coupled to the other virtual machine of the pair.
Control starts at block 300, where the administrative application 126 on the administrative console 106 processes data on CPU requirements, memory requirements, host bus adapter requirements, etc., for virtual machines in the nodes 108, 110. In certain embodiments, such data on CPU requirements, memory requirements, host bus requirements, etc., may be provided via configuration files created by a user on the administrative console 106 or entered directly by the user.
Based on the data on CPU requirements, memory requirements, host bus requirements, etc., the partitioning applications 116, 122 define (at block 302) the plurality of virtual machines 114a . . . 114n, 120a . . . 120n for the nodes 108, 110. For example, the partitioning application 116 may define the plurality of virtual machines 114a . . . 114n for the node 108 and the partitioning application 122 may define the plurality of virtual machines 120a . . . 120n for the node 110.
The partitioning applications 116, 122 associate (at block 304) a pool number with the virtual machines in a node. For example, the partitioning application 116 may associate pool numbers that numerically range from 1 to n, for virtual machines 114a . . . 114n in node 108. The partitioning application 122 may associate the same pool numbers that numerically range from 1 to n, for virtual machines 120a . . . 120n in node 110.
The partitioning applications 116, 122 assign (at block 306) virtual machines with the same pool number in the nodes 108, 110 to be partner virtual machines. For example, if pool number one has been associated with virtual machines 114a and 120a, then virtual machines 114a and 120a are assigned to be partner virtual machines, such as partner virtual machines 200a.
Therefore, the logic described above partitions the nodes 108, 110 into virtual machines and assigns the virtual machines in different nodes that share a pool number to be partner virtual machines.
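For illustration only, the pool-number pairing of blocks 300-306 may be sketched as follows; the function name assign_partners and the representation of virtual machines as simple strings are assumptions of the sketch, not part of the ESS implementation.

```python
from typing import Dict, List, Tuple

def assign_partners(node_108_vms: List[str],
                    node_110_vms: List[str]) -> Dict[int, Tuple[str, str]]:
    """Associate the same pool numbers 1..n with the virtual machines of each
    node, and pair machines in different nodes that share a pool number."""
    partners: Dict[int, Tuple[str, str]] = {}
    for pool_number, pair in enumerate(zip(node_108_vms, node_110_vms), start=1):
        partners[pool_number] = pair
    return partners

pairs = assign_partners(["114a", "114b"], ["120a", "120b"])
print(pairs[1])   # ('114a', '120a'), i.e. partner virtual machines 200a
```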
Control starts at block 400, where the partitioning applications 116, 122 start the initial program load (IPL) for a virtual machine, such as one of the virtual machines 114a . . . 114n, 120a . . . 120n. The partitioning applications 116, 122 provide (at block 402) the identification and destination of the partner virtual machine to the virtual machine that is undergoing IPL. For example, if the virtual machine 114a is undergoing IPL, then the identification and destination of the partner virtual machine 120a is provided to the virtual machine 114a.
The partitioning applications 116, 122 determine (at block 404) if IPL is to be performed for any more virtual machines. If not, the partitioning applications 116, 122 present (at block 406) common resources, such as shared adapters including the host bus adapters 104a . . . 104m, to the virtual machines in partnership, and the process for IPL stops (at block 408). For example, in certain embodiments, the partitioning applications 116, 122 may present the host bus adapter 104a to be shared between the virtual machines 114a and 120a, and the host bus adapter 104b to be shared between the virtual machines 114b and 120b.
If the partitioning applications 116, 122 determine (at block 404) that IPL has to be performed for more virtual machines, then control returns to block 400 where the partitioning applications 116, 122 initiate IPL for additional virtual machines.
Therefore, the logic described above performs the initial program load of the virtual machines, provides each virtual machine with the identity of its partner, and presents common resources, such as shared host bus adapters, to the partner virtual machines.
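A minimal sketch of the IPL logic of blocks 400-408 is shown below, assuming virtual machines and adapters are represented by plain identifiers; the function run_ipl and the two returned mappings are names introduced here for illustration.

```python
from typing import Dict, Tuple

def run_ipl(pairs: Dict[int, Tuple[str, str]],
            shared_adapters: Dict[int, str]) -> Tuple[Dict[str, str], Dict[str, str]]:
    """Tell each virtual machine who its partner is (block 402), then present
    common resources such as shared host bus adapters (block 406)."""
    partner_of: Dict[str, str] = {}
    resources: Dict[str, str] = {}
    for pool_number, (vm_a, vm_b) in pairs.items():
        partner_of[vm_a] = vm_b
        partner_of[vm_b] = vm_a
        resources[vm_a] = resources[vm_b] = shared_adapters[pool_number]
    return partner_of, resources

partner_of, resources = run_ipl({1: ("114a", "120a")}, {1: "host bus adapter 104a"})
print(partner_of["114a"], "-", resources["120a"])   # 120a - host bus adapter 104a
```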
Control starts at block 500, where a virtual machine, which may be referred to as a requester virtual machine, requests memory access from another virtual machine. For example, the requester virtual machine 114a may request memory access from the virtual machine 120a. The partitioning applications 116, 122 determine (at block 502) if the other virtual machine whose memory access is requested is a partner virtual machine of the requester virtual machine. If so, then the partitioning applications 116, 122 allow (at block 504) memory access of the other virtual machine to the requester virtual machine. For example, if the requester virtual machine 114a requests memory access from the partner virtual machine 120a, then the memory access is allowed.
If the partitioning applications 116, 122 determine (at block 502) that the other virtual machine whose memory access is requested is not a partner virtual machine of the requester virtual machine, then the partitioning applications 116, 122 do not allow the memory access and return (at block 506) an error.
Therefore, memory access across virtual machines is allowed only between partner virtual machines.
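The partner-gated memory access of blocks 500-506 may be sketched as follows, purely for illustration; request_memory_access and the partner_of mapping are assumptions of the sketch rather than names used by the described embodiments.

```python
from typing import Dict

def request_memory_access(requester: str, target: str,
                          partner_of: Dict[str, str]) -> str:
    """Allow memory access only between partner virtual machines."""
    if partner_of.get(requester) == target:          # block 504: partners
        return f"memory access to {target} allowed for {requester}"
    raise PermissionError(                           # block 506: return an error
        f"{requester} is not a partner of {target}")

partner_of = {"114a": "120a", "120a": "114a"}
print(request_memory_access("114a", "120a", partner_of))
```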
Control starts at block 600, where a virtual machine, which may be referred to as a requester virtual machine, generates a request for reinitializing program load for another virtual machine. For example, the requester virtual machine 114a may request a reinitialized program load for the virtual machine 120a. The partitioning applications 116, 122 determine (at block 602) if the other virtual machine whose reinitialized program load is requested is a partner virtual machine of the requester virtual machine. If so, then the partitioning applications 116, 122 allow (at block 604) the reinitialized program load of the other virtual machine to be controlled from the requester virtual machine. For example, if the requester virtual machine 114a requests a reinitialized program load for the partner virtual machine 120a, then the reinitialized program load is allowed.
If the partitioning applications 116, 122 determine (at block 602) that the other virtual machine whose reinitialized program load is requested is not a partner virtual machine of the requester virtual machine, then the partitioning applications 116, 122 do not allow the reinitialized program load and return (at block 606) an error.
Therefore, the reinitialized program load of a virtual machine may be controlled only by its partner virtual machine.
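The check of blocks 600-606 has the same shape as the memory access check; the following short sketch, with the illustrative name request_reinitialized_program_load, is again only an assumption-laden paraphrase.

```python
from typing import Dict

def request_reinitialized_program_load(requester: str, target: str,
                                       partner_of: Dict[str, str]) -> str:
    """Allow a reinitialized program load of `target` only from its partner."""
    if partner_of.get(requester) == target:                          # block 604
        return f"reinitialized program load of {target} started by {requester}"
    raise PermissionError("reinitialized program load denied: not a partner")  # block 606

partner_of = {"114a": "120a", "120a": "114a"}
print(request_reinitialized_program_load("114a", "120a", partner_of))
```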
Control starts at block 700, where the partitioning applications 116, 122 receive a notification of an event. The partitioning applications 116, 122 determine (at block 702) the type of the received event notification.
If the determined event is a write request, then the partitioning applications 116, 122 write (at block 704) data corresponding to the write request to a virtual machine and a partner of the virtual machine. For example, in response to a write request the partitioning applications 116, 122 may write data corresponding to the write request to virtual machines 114a and 120a that are partner virtual machines.
If the determined event is a read request, then the partitioning applications 116, 122 read (at block 706) data corresponding to the read request from the primary virtual machine of the partner virtual machines. For example, if virtual machine 114a is designated as a primary virtual machine and the data in storage of virtual machine 114a is mirrored in secondary virtual machine 120a, then the partitioning applications 116, 122 read data corresponding to the read request from the virtual machine 114a.
If the determined event is a failure event of a primary virtual machine, then the partitioning applications 116, 122 cause (at block 708) the partner of the failed primary virtual machine to take over the task of satisfying requests from the host 102. In certain embodiments, the nodes 108, 110 and the virtual machines, except for the failed virtual machine, keep (at block 710) operating.
Therefore, write requests from the host 102 are mirrored across partner virtual machines, read requests are satisfied from the primary virtual machine, and in the event of a failure of a primary virtual machine the partner virtual machine takes over while the nodes 108, 110 and the remaining virtual machines keep operating.
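For illustration only, the event handling of blocks 700-710 may be sketched as follows; the PartnerPair class and its in-memory dictionaries stand in for the storages coupled to the virtual machines and are assumptions of the sketch, not the described implementation.

```python
from typing import Dict, Set

class PartnerPair:
    """Mirrored writes, primary reads, and failover for partner virtual machines."""
    def __init__(self, primary: str, secondary: str) -> None:
        self.primary, self.secondary = primary, secondary
        self.storage: Dict[str, Dict[str, bytes]] = {primary: {}, secondary: {}}
        self.failed: Set[str] = set()

    def write(self, key: str, value: bytes) -> None:   # block 704: mirror the write
        self.storage[self.primary][key] = value
        self.storage[self.secondary][key] = value

    def read(self, key: str) -> bytes:                 # block 706: read from primary
        vm = self.secondary if self.primary in self.failed else self.primary
        return self.storage[vm][key]

    def fail(self, vm: str) -> None:                   # block 708: partner takes over
        self.failed.add(vm)

pair = PartnerPair("114a", "120a")
pair.write("track-0", b"data")
pair.fail("114a")
print(pair.read("track-0"))   # request still satisfied, now by the partner 120a
```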
The implementations create a plurality of logical processing units, i.e., virtual machines, from a plurality of processing nodes. A pair of logical processing units form a redundant system, where if a logical processing unit is unavailable, the other logical processing unit may be used to substitute for the unavailable logical processing unit. In certain embodiments, the processing node that includes the unavailable logical processing unit continues to operate. Therefore, the unavailability of a logical processing unit does not lead to the shutdown of a processing node. Furthermore, logical processing units that are partners appear identical to the host, so the failure of one logical processing unit may not be apparent to the host, as the system may keep functioning with the partner logical processing unit assuming the functions of the failed logical processing unit.
ADDITIONAL IMPLEMENTATION DETAILS
The described techniques may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The term “article of manufacture” as used herein refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as a magnetic storage medium (e.g., hard disk drives, floppy disks, tape), optical storage (e.g., CD-ROMs, optical disks, etc.), or volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. The code in which implementations are made may further be accessible through a transmission media or from a file server over a network. In such cases, the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the implementations, and that the article of manufacture may comprise any information bearing medium known in the art.
Many of the software and hardware components have been described in separate modules for purposes of illustration. Such components may be integrated into a fewer number of components or divided into a larger number of components. Additionally, certain operations described as performed by a specific component may be performed by other components.
Therefore, the foregoing description of the implementations has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many implementations of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
Claims
1. A method, comprising:
- partitioning a plurality of processing nodes in a storage system into a plurality of logical processing units, wherein the plurality of logical processing units can respond to I/O requests from a host coupled to the storage system;
- grouping at least two logical processing units, wherein data in a first storage coupled to a first logical processing unit of the at least two logical processing units is mirrored by data in a second storage coupled to the second logical processing unit of the at least two logical processing units;
- in response to a failure of the first logical processing unit, responding to an I/O request from the host via the second logical processing unit;
- receiving from the first logical processing unit, a request for memory access of a logical processing unit;
- determining, by one or more partitioning applications coupled to the plurality of logical processing units, whether the logical processing unit is grouped with the first logical processing unit;
- if the logical processing unit is grouped with the first logical processing unit, then allowing the memory access of the logical processing unit to the first logical processing unit; and
- if the logical processing unit is not grouped with the first logical processing unit, then preventing the memory access of the logical processing unit to the first logical processing unit.
2. The method of claim 1, wherein the storage system has at least two processing nodes, wherein the plurality of logical processing units are distributed across the at least two processing nodes, wherein one processing node includes a plurality of central processing units, and wherein in the event of the failure of the first logical processing unit, the plurality of processing nodes stay operational.
3. The method of claim 1, further comprising:
- receiving a write request from the host to the plurality of processing nodes in the storage system; and
- writing, by one or more partitioning applications, data corresponding to the write request to the first storage coupled to the first logical processing unit and the second storage coupled to the second logical processing unit.
4. The method of claim 1, wherein an administrative console is coupled to the plurality of processing nodes of the storage system, and wherein prior to partitioning, processing is performed at the administrative console of information on processing requirements, memory requirements and host bus adapter requirements for the plurality of logical processing units, wherein a plurality of partitioning applications are coupled to the plurality of logical processing units, the method further comprising:
- in response to grouping the at least two logical processing units, starting initial program load of the first logical processing unit;
- determining via the plurality of partitioning applications an identification of the second logical processing unit grouped with the first logical processing unit; and
- presenting, by the plurality of partitioning applications, common resources to the first and second logical processing units, wherein a first partitioning application of the plurality of partitioning applications is present on a first processing node of the plurality of processing nodes and a second partitioning application of the plurality of partitioning applications is present on a second processing node of the plurality of processing nodes.
5. The method of claim 4, wherein:
- the logical processing units are virtual machines;
- defining a plurality of virtual machines for each processing node of the plurality of processing nodes;
- associating a set of pool numbers to a first set of virtual machines in the first processing node of the plurality of processing nodes;
- associating the set of pool numbers to a second set of virtual machines in the second processing node of the plurality of processing nodes;
- assigning those virtual machines that are in different processing nodes but have the same pool number to be partner virtual machines.
6. The method of claim 5, further comprising:
- receiving a read request from the host to the plurality of processing nodes in the storage system; and
- reading, by one or more partitioning applications, data corresponding to the read request from the first storage coupled to the first logical processing unit, wherein the first logical processing unit is a primary virtual machine.
7. The method of claim 1, wherein the partitioning and grouping are performed by one or more partitioning applications coupled to the plurality of processing nodes, wherein the one or more partitioning applications comprise a hypervisor application of a redundant system.
8. A system, comprising:
- a storage system;
- a plurality of processing nodes in the storage system;
- a memory; and
- a processor coupled to the memory, wherein the processor performs operations, the operations comprising:
- partitioning the plurality of processing nodes in the storage system into a plurality of logical processing units, wherein the plurality of logical processing units can respond to I/O requests from a host coupled to the storage system;
- grouping at least two logical processing units, wherein data in a first storage coupled to a first logical processing unit of the at least two logical processing units is mirrored by data in a second storage coupled to the second logical processing unit of the at least two logical processing units;
- in response to a failure of the first logical processing unit, responding to an I/O request from the host via the second logical processing unit;
- receiving from the first logical processing unit, a request for memory access of a logical processing unit;
- determining, by one or more partitioning applications coupled to the plurality of logical processing units, whether the logical processing unit is grouped with the first logical processing unit;
- if the logical processing unit is grouped with the first logical processing unit, then allowing the memory access of the logical processing unit to the first logical processing unit; and
- if the logical processing unit is not grouped with the first logical processing unit, then preventing the memory access of the logical processing unit to the first logical processing unit.
9. The system of claim 8, wherein the storage system has at least two processing nodes, wherein the plurality of logical processing units are distributed across the at least two processing nodes, wherein one processing node includes a plurality of central processing units, and wherein in the event of the failure of the first logical processing unit, the plurality of processing nodes stay operational.
10. The system of claim 8, the operations further comprising:
- receiving a write request from the host to the plurality of processing nodes in the storage system; and
- writing, by one or more partitioning applications, data corresponding to the write request to the first storage coupled to the first logical processing unit and the second storage coupled to the second logical processing unit.
11. The system of claim 8, wherein the partitioning and grouping are performed by one or more partitioning applications coupled to the plurality of processing nodes, wherein the one or more partitioning applications comprise a hypervisor application of a redundant system.
12. The system of claim 8, wherein an administrative console is coupled to the plurality of processing nodes of the storage system, and wherein prior to partitioning, processing is performed at the administrative console of information on processing requirements, memory requirements and host bus adapter requirements for the plurality of logical processing units, wherein a plurality of partitioning applications are coupled to the plurality of logical processing units, the operations further comprising:
- in response to grouping the at least two logical processing units, starting initial program load of the first logical processing unit;
- determining via the plurality of partitioning applications an identification of the second logical processing unit grouped with the first logical processing unit; and
- presenting, by the plurality of partitioning applications, common resources to the first and second logical processing units, wherein a first partitioning application of the plurality of partitioning applications is present on a first processing node of the plurality of processing nodes and a second partitioning application of the plurality of partitioning applications is present on a second processing node of the plurality of processing nodes.
13. The system of claim 12, wherein:
- the logical processing units are virtual machines;
- defining a plurality of virtual machines for each processing node of the plurality of processing nodes;
- associating a set of pool numbers to a first set of virtual machines in the first processing node of the plurality of processing nodes;
- associating the set of pool numbers to a second set of virtual machines in the second processing node of the plurality of processing nodes;
- assigning those virtual machines that are in different processing nodes but have the same pool number to be partner virtual machines.
14. The system of claim 13, the operations further comprising:
- receiving a read request from the host to the plurality of processing nodes in the storage system; and
- reading, by one or more partitioning applications, data corresponding to the read request from the first storage coupled to the first logical processing unit, wherein the first logical processing unit is a primary virtual machine.
15. A computer readable storage medium, wherein code stored in the computer readable storage medium when executed by a processor causes operations, the operations comprising:
- partitioning a plurality of processing nodes in a storage system into a plurality of logical processing units, wherein the plurality of logical processing units can respond to I/O requests from a host coupled to the storage system;
- grouping at least two logical processing units, wherein data in a first storage coupled to a first logical processing unit of the at least two logical processing units is mirrored by data in a second storage coupled to the second logical processing unit of the at least two logical processing units;
- in response to a failure of the first logical processing unit, responding to an I/O request from the host via the second logical processing unit;
- receiving from the first logical processing unit, a request for memory access of a logical processing unit;
- determining, by one or more partitioning applications coupled to the plurality of logical processing units, whether the logical processing unit is grouped with the first logical processing unit;
- if the logical processing unit is grouped with the first logical processing unit, then allowing the memory access of the logical processing unit to the first logical processing unit; and
- if the logical processing unit is not grouped with the first logical processing unit, then preventing the memory access of the logical processing unit to the first logical processing unit.
16. The computer readable storage medium of claim 15, wherein the storage system has at least two processing nodes, wherein the plurality of logical processing units are distributed across the at least two processing nodes, wherein one processing node includes a plurality of central processing units, and wherein in the event of the failure of the first logical processing unit, the plurality of processing nodes stay operational.
17. The computer readable storage medium of claim 15, the operations further comprising:
- receiving a write request from the host to the plurality of processing nodes in the storage system; and
- writing, by one or more partitioning applications, data corresponding to the write request to the first storage coupled to the first logical processing unit and the second storage coupled to the second logical processing unit.
18. The computer readable storage medium of claim 15, wherein the partitioning and grouping are performed by one or more partitioning applications coupled to the plurality of processing nodes, wherein the one or more partitioning applications comprise a hypervisor application of a redundant system.
19. The computer readable storage medium of claim 15, wherein an administrative console is coupled to the plurality of processing nodes of the storage system, and wherein prior to partitioning, processing is performed at the administrative console of information on processing requirements, memory requirements and host bus adapter requirements for the plurality of logical processing units, wherein a plurality of partitioning applications are coupled to the plurality of logical processing units, the operations further comprising:
- in response to grouping the at least two logical processing units, starting initial program load of the first logical processing unit;
- determining via the plurality of partitioning applications an identification of the second logical processing unit grouped with the first logical processing unit; and
- presenting, by the plurality of partitioning applications, common resources to the first and second logical processing units, wherein a first partitioning application of the plurality of partitioning applications is present on a first processing node of the plurality of processing nodes and a second partitioning application of the plurality of partitioning applications is present on a second processing node of the plurality of processing nodes.
20. The computer readable storage medium of claim 19, wherein:
- the logical processing units are virtual machines;
- defining a plurality of virtual machines for each processing node of the plurality of processing nodes;
- associating a set of pool numbers to a first set of virtual machines in the first processing node of the plurality of processing nodes;
- associating the set of pool numbers to a second set of virtual machines in the second processing node of the plurality of processing nodes;
- assigning those virtual machines that are in different processing nodes but have the same pool number to be partner virtual machines.
21. The computer readable storage medium of claim 20, the operations further comprising:
- receiving a read request from the host to the plurality of processing nodes in the storage system; and
- reading, by one or more partitioning applications, data corresponding to the read request from the first storage coupled to the first logical processing unit, wherein the first logical processing unit is a primary virtual machine.
Type: Grant
Filed: Jan 12, 2007
Date of Patent: Jan 26, 2010
Patent Publication Number: 20070180301
Assignee: International Business Machines Corporation (Armonk, NY)
Inventors: Yu-Cheng Hsu (Tucson, AZ), Richard Anthony Ripberger (Tucson, AZ)
Primary Examiner: Robert Beausolel
Assistant Examiner: Amine Riad
Attorney: Konrad Raynes & Victor LLP
Application Number: 11/622,961
International Classification: G06F 11/00 (20060101);