System and method for collaborative caching in a multinode system

- PolyServe, Inc.

A system and method are disclosed for accessing data in a multi-node system comprising providing a first node associated with a first operating system; providing a second node associated with a second operating system, wherein the second operating system is independent of the first operating system; providing a storage, wherein the first node directly accesses the storage and the second node directly accesses the storage; requesting a lock for a block by the first node to the second node; obtaining the lock from the second node; and obtaining the block from the second node.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 60/324,196 (Attorney Docket No. POLYP001+) entitled SHARED STORAGE LOCK: A NEW SOFTWARE SYNCHRONIZATION MECHANISM FOR ENFORCING MUTUAL EXCLUSION AMONG MULTIPLE NEGOTIATORS filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0002] This application claims priority to U.S. Provisional Patent Application No. 60/324,226 (Attorney Docket No. POLYP002+) entitled JOURNALING MECHANISM WITH EFFICIENT, SELECTIVE RECOVERY FOR MULTI-NODE ENVIRONMENTS filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0003] This application claims priority to U.S. Provisional Patent Application No. 60/324,224 (Attorney Docket No. POLYP003+) entitled COLLABORATIVE CACHING IN A MULTI-NODE FILESYSTEM filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0004] This application claims priority to U.S. Provisional Patent Application No. 60/324,242 (Attorney Docket No. POLYP005+) entitled DISTRIBUTED MANAGEMENT OF A STORAGE AREA NETWORK filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0005] This application claims priority to U.S. Provisional Patent Application No. 60/324,195 (Attorney Docket No. POLYP006+) entitled METHOD FOR IMPLEMENTING JOURNALING AND DISTRIBUTED LOCK MANAGEMENT filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0006] This application claims priority to U.S. Provisional Patent Application No. 60/324,243 (Attorney Docket No. POLYP007+) entitled MATRIX SERVER: A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM WITH COHERENT SHARED FILE STORAGE filed Sep. 21, 2001, which is incorporated herein by reference for all purposes.

[0007] This application claims priority to U.S. Provisional Patent Application No. 60/324,787 (Attorney Docket No. POLYP008+) entitled A METHOD FOR EFFICIENT ON-LINE LOCK RECOVERY IN A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM filed Sep. 24, 2001, which is incorporated herein by reference for all purposes.

[0008] This application claims priority to U.S. Provisional Patent Application No. 60/327,191 (Attorney Docket No. POLYP009+) entitled FAST LOCK RECOVERY: A METHOD FOR EFFICIENT ON-LINE LOCK RECOVERY IN A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM filed Oct. 1, 2001, which is incorporated herein by reference for all purposes.

[0009] This application is related to co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP001) entitled A SYSTEM AND METHOD FOR SYNCHRONIZATION FOR ENFORCING MUTUAL EXCLUSION AMONG MULTIPLE NEGOTIATORS filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP002) entitled SYSTEM AND METHOD FOR JOURNAL RECOVERY FOR MULTINODE ENVIRONMENTS filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP005) entitled A SYSTEM AND METHOD FOR MANAGEMENT OF A STORAGE AREA NETWORK filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP006) entitled SYSTEM AND METHOD FOR IMPLEMENTING JOURNALING IN A MULTI-NODE ENVIRONMENT filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP007) entitled A SYSTEM AND METHOD FOR A MULTI-NODE ENVIRONMENT WITH SHARED STORAGE filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. POLYP009) entitled A SYSTEM AND METHOD FOR EFFICIENT LOCK RECOVERY filed concurrently herewith, which is incorporated herein by reference for all purposes.

FIELD OF THE INVENTION

[0010] The present invention relates generally to computer systems. More specifically, a system and method for collaborative caching in a multi-node file system is disclosed.

BACKGROUND OF THE INVENTION

[0011] In today's complex network systems, multiple nodes may be set up to share data storage. Preferably, in order to share storage only one node or application is allowed to alter data at any given time. In order to accomplish this synchronization, a lock may be used.

[0012] Typically, it can be slow for a node to read or write a particular block in a shared storage system, due to the time required to coordinate the locking mechanism and to retrieve the document from shared storage.

[0013] It would be desirable to speed up the time required to obtain access to a shared document. The present invention addresses such a need.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:

[0015] FIG. 1 is a block diagram of a system for accessing data according to an embodiment of the present invention.

[0016] FIG. 2 is another block diagram of a system according to an embodiment of the present invention.

[0017] FIG. 3 is a block diagram of software components inside a node according to an embodiment of the present invention.

[0018] FIGS. 4A-4B show a flow diagram for a method according to an embodiment of the present invention for accessing data.

[0019] FIGS. 5A-5E show another flow diagram of a method according to an embodiment of the present invention for accessing data.

[0020] FIG. 6 is another block diagram of the software components of server 300 according to an embodiment of the present invention.

DETAILED DESCRIPTION

[0021] It should be appreciated that the present invention can be implemented in numerous ways, including as a process, an apparatus, a system, or a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. It should be noted that the order of the steps of disclosed processes may be altered within the scope of the invention.

[0022] A detailed description of one or more preferred embodiments of the invention is provided below along with accompanying figures that illustrate by way of example the principles of the invention. While the invention is described in connection with such embodiments, it should be understood that the invention is not limited to any embodiment. On the contrary, the scope of the invention is limited only by the appended claims and the invention encompasses numerous alternatives, modifications and equivalents. For the purpose of example, numerous specific details are set forth in the following description in order to provide a thorough understanding of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.

[0023] FIG. 1 is a block diagram of a system for accessing data in a multi-node environment according to an embodiment of the present invention. In this example, servers 300A-300D are coupled via network interconnects 302. The network interconnects 302 can represent any network infrastructure such as an Ethernet, InfiniBand network, or Fibre Channel network capable of host-to-host communication. The servers 300A-300D are also coupled to the data storage interconnect 304, which in turn is coupled to shared storage 306A-306D. The data storage interconnect 304 can be any interconnect that allows access to the shared storage 306A-306D by servers 300A-300D. An example of the data storage interconnect 304 is a Fibre Channel switch, such as a Brocade 3200 Fibre Channel switch. Alternately, the data storage network might be an iSCSI or other IP storage network, InfiniBand network, or another kind of host-to-storage network. In addition, the network interconnects 302 and the data storage interconnect 304 may be embodied in a single interconnect.

[0024] Servers 300A-300D can be any computer, preferably an off-the-shelf computer, server, or any equivalent thereof. Servers 300A-300D can each run operating systems that are independent of each other. Accordingly, each server 300A-300D can, but does not need to, run a different operating system. For example, server 300A may run Microsoft Windows, while server 300B runs Linux, and server 300C simultaneously runs a Unix operating system. An advantage of running independent operating systems on the servers 300A-300D is that the entire multi-node system can be dynamic. For example, one of the servers 300A-300D can fail while the other servers 300A-300D continue to operate.

[0025] The shared storage 306A-306D can be any storage device, such as hard drive disks, compact disks, tape, and random access memory. A filesystem is a logical entity built on the shared storage. Although the shared storage 306A-306D is typically considered a physical device while the filesystem is typically considered a logical structure overlaid on part of the storage, the filesystem is sometimes referred to herein as shared storage for simplicity. For example, when it is stated that shared storage fails, it can be a failure of a part of a filesystem, one or more filesystems, or the physical storage device on which the filesystem is overlaid. Accordingly, shared storage, as used herein, can mean the physical storage device, a portion of a filesystem, a filesystem, filesystems, or any combination thereof.

[0026] FIG. 2 is another block diagram of a system according to an embodiment of the present invention. In this example, the system preferably has no single point of failure. Accordingly, servers 300A′-300D′ are coupled with multiple network interconnects 302A-302D. The servers 300A′-300D′ are also shown to be coupled with multiple storage interconnects 304A-304B. The storage interconnects 304A-304B are each coupled to a plurality of data storage 306A′-306D′.

[0027] In this manner, there are redundancies in the system such that if any of the components or connections fail, the entire system can continue to operate.

[0028] In the example shown in FIG. 2, as well as the example shown in FIG. 1, the number of servers 300A′-300D′, the number of storage interconnects 304A-304B, and the number of data storage 306A′-306D′ can be as many as the customer requires and is not physically limited by the system. Likewise, the operating systems used by servers 300A′-300D′ can also be as many independent operating systems as the customer requires.

[0029] FIG. 3 is a block diagram of software components inside a node 300. In this example, node 300 is shown to include a buffer cache 350, processes 352, a distributed lock manager (DLM) 354, and a lock caching layer (LCL) 356. According to an embodiment of the present invention, after node 300 changes a block, the block is kept in the node's cache (in local storage) rather than being written immediately into the shared storage. In this manner, it is faster for node 300 to find the latest copy in its own buffer cache 350 than to take the time to access the shared storage.

[0030] There are various ways to keep a node from using a stale copy of a block. One way is to invalidate the cached copy of the block associated with a lock when the lock is released. Another way is to invalidate or refresh the cached copy of the block associated with a new lock when a new lock is obtained.
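The first strategy described above can be sketched as follows. This is an illustrative sketch only; the `BlockCache` and `Lock` names and their methods are assumptions for exposition, not structures from the disclosure.

```python
class BlockCache:
    """Per-node buffer cache keyed by block number (illustrative)."""
    def __init__(self):
        self._blocks = {}

    def put(self, block_no, data):
        self._blocks[block_no] = data

    def get(self, block_no):
        return self._blocks.get(block_no)

    def invalidate(self, block_no):
        # Drop the cached copy so a stale block can never be reused.
        self._blocks.pop(block_no, None)


class Lock:
    """A lock associated with one cached block (illustrative)."""
    def __init__(self, block_no, cache):
        self.block_no = block_no
        self._cache = cache

    def release(self):
        # Strategy 1: invalidate the associated cached block on release.
        self._cache.invalidate(self.block_no)
```

The second strategy would instead invalidate or refresh the cached copy at lock-acquisition time, which allows the cache entry to survive across release/reacquire cycles on the same node.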

[0031] The distributed lock manager 354 communicates with the DLMs in other nodes and also communicates with the lock caching layer 356. The lock caching layer 356 invokes requested callouts before a lock is downgraded or released.

[0032] A process 352, such as an application or a file system, can obtain a lock on a block via the lock caching layer 356, use it, then eventually relinquish the lock on the block. The block is then stored in buffer cache 350. The next time a process 352 requests that block, a search can be performed in the buffer cache 350 to find that block. If the block is not found in the buffer cache, then it can be retrieved from the shared storage.
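The lookup order in paragraph [0032] can be sketched as a small function. This is a hedged illustration, not the disclosed implementation; the dictionaries standing in for the buffer cache and shared storage are assumptions.

```python
def read_block(block_no, buffer_cache, shared_storage):
    """Return a block, preferring the node's local buffer cache.

    buffer_cache and shared_storage are modeled here as dicts
    mapping block numbers to bytes (an illustrative assumption).
    """
    if block_no in buffer_cache:
        return buffer_cache[block_no]   # fast path: local buffer cache hit
    data = shared_storage[block_no]     # slow path: read shared storage
    buffer_cache[block_no] = data       # cache for subsequent requests
    return data
```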

[0033] FIGS. 4A-4B show a flow diagram for a method according to an embodiment of the present invention for accessing data. In this example, a process within a particular node asks the lock caching layer (LCL) for a write lock for a document (400). The LCL obtains a distributed lock manager (DLM) lock for that document (402). The LCL grants the LCL lock to the process for that document (404). When the process is finished and relinquishes the LCL lock, the LCL caches the DLM lock (406). In this example, 400-406 occur within a single node. Another node then requests a read lock on the document and the request is received by this node's DLM (408). The DLM asks the LCL to downgrade the DLM lock (450 of FIG. 4B). The LCL then determines that there are no local processes using the lock and writes the document to shared storage (452). The LCL informs the DLM that it is downgrading the lock from write to read (454). The DLM then passes the lock as well as the latest version of the document to the requesting node (456).
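The holder-side portion of the sequence (450-456) can be condensed into one sketch. The `HolderNode` class and its method names are illustrative assumptions; the real LCL and DLM are separate kernel- and application-level components.

```python
class HolderNode:
    """Node holding a write lock and servicing a remote read request
    (illustrative sketch of steps 450-456 of FIG. 4B)."""
    def __init__(self, shared_storage):
        self.cache = {}          # buffer cache: block_no -> data
        self.lock_mode = {}      # block_no -> "write" or "read"
        self.storage = shared_storage

    def handle_read_request(self, block_no):
        # 450: the DLM has asked the LCL to downgrade; assume no
        # local processes are using the lock.
        data = self.cache.get(block_no)
        if data is not None:
            self.storage[block_no] = data   # 452: flush to shared storage
        self.lock_mode[block_no] = "read"   # 454: downgrade write -> read
        return ("read", data)               # 456: ship lock + latest copy
```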

[0034] Accordingly, by sending the requested document directly from one node to the other, access to this data is more efficient than having to retrieve it from the shared storage.

[0035] FIGS. 5A-5E show another flow diagram of a method according to an embodiment of the present invention for accessing data. In the example shown in FIGS. 5A-5E, a requesting node requests a shared lock. Variations of this example can be used to accommodate other types of locks, such as an exclusive lock or a lock with a different level of exclusion.

[0036] In this example, the requesting node asks its DLM for a shared lock (500). It is determined whether the requesting node is the home node (502). A lock home node, as used herein, is the server that is responsible for granting or denying lock requests for a given DLM lock when there is no cached lock reference available on the requesting node. In this embodiment, there is one lock home node per lock. The home node does not necessarily hold the lock locked, but if other nodes hold the lock locked or cached, then the home node has a description of the lock, since those nodes communicated with the home node in order to get the lock locked or cached.
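The disclosure specifies one home node per lock but does not state how the home node is chosen. One common approach, offered here only as an assumption for illustration, is to hash the lock identifier across the member nodes so every node computes the same answer:

```python
import zlib

def home_node(lock_id, nodes):
    """Deterministically map a lock identifier to one node in the matrix.

    Uses CRC-32 so every node computes the same home node for a given
    lock ID. The choice of hash is an illustrative assumption.
    """
    return nodes[zlib.crc32(lock_id.encode()) % len(nodes)]
```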

[0037] If the requesting node is not the home node, then the DLM of the requesting node requests a shared lock from the home node (504). It is also determined whether a lock is held by a node other than the requesting node (506). If no lock is held by a node other than the requesting node, the home node gives the requesting node the lock in shared mode (508). The requesting node then reads the content from shared storage (510).

[0038] If the requesting node is the home node (502), then it is determined whether a lock is held by another node (550). If a lock is not held by another node, then the requesting node obtains the lock and reads from shared storage (562). If, however, there is a lock held by another node, then it is also determined whether the other node holds a shared lock (552). If the other node holds a shared lock, then the requesting node grants itself a shared lock (563) and sends a request for content to the owner of the shared lock (564).

[0039] It is then determined whether the owner has the content in its local cache (580). If yes, the owner of the shared lock sends the content to the requesting node (586), otherwise the owner tells the requesting node that it does not have the content (582) and the requesting node reads the content from shared storage (584). If the other node does not hold a shared lock (552), and instead holds an exclusive lock, then the requesting node sends a request for the downgrade of the lock and content to the owner of the exclusive lock (554).

[0040] Then, it is determined whether the owner has the content in the local cache (590 of FIG. 5C). If the owner has the content in the local cache, the owner writes the content to shared storage (558). The owner then sends a message to the home node (the requester) with the content and the downgrade notification (560). The requesting node then grants itself a shared lock (596).

[0041] If the owner does not have the content in the local cache, it sends the downgrade message to the requesting node (592). The requesting node then grants itself a shared lock and reads the content from shared storage (594).

[0042] If it is determined that a lock is held by a node other than the requesting node (506 of FIG. 5A), then it is also determined whether the held lock is a shared lock (600 of FIG. 5D). If it is a shared lock, then it is also determined whether the home node holds the lock (602). If the home node holds the lock, then it sends the lock as well as the content to the requester (608).

[0043] If the home node does not hold the lock (602), it then sends the content request to the lock holder (612). The content is sent from the lock holder to the home node (614). The home node sends the lock as well as the content to the requester (616).

[0044] If the lock held by another node is not a shared lock (600), for example, if it is an exclusive lock, then it is determined whether the home node holds the lock (650 of FIG. 5E). If the home node holds the lock, it then writes the content to the shared storage (654). The home node downgrades the exclusive lock to shared and sends the shared lock to the requester along with the content, if known (656).

[0045] If the home node does not hold the lock (650), it then sends the request for downgrade and content to the owner of the lock (660). The owner of the lock writes the content to shared storage (662). The owner of the lock then sends the content and a message that it is downgrading from an exclusive lock to a shared lock to the home node (664). The home node sends the lock and the content to the requester (666).

[0046] It should be noted that in steps 616, 608, 656 and 666, the home node sends the content to the requester if the home node has the content in its cache. If, however, the home node does not have the content in its cache, it then notifies the requester that it does not have the content in the cache and the requester retrieves the content from the shared storage. In another embodiment of the present invention, the nodes can access information directly amongst each other, without regularly writing to the shared storage. Accordingly, FIGS. 5A-5E still apply to this embodiment, except that they would be modified to delete 558 of FIG. 5C, 654 of FIG. 5E, and 662 of FIG. 5E.
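The requesting-node-is-home-node branch (steps 550-596 of FIGS. 5B-5C) can be condensed into a single sketch. All names and the dict-based modeling of lock holders, caches, and storage are assumptions for illustration, not the disclosed implementation.

```python
def acquire_shared_at_home(block_no, holders, caches, storage):
    """Sketch of the home node granting itself a shared lock.

    holders: block_no -> (owner_node, mode) for locks held by other nodes
    caches:  owner_node -> {block_no: data} (each owner's buffer cache)
    storage: shared storage, modeled as a dict
    """
    held = holders.get(block_no)
    if held is None:
        return storage[block_no]            # 562: no holder; read storage
    owner, mode = held
    data = caches[owner].get(block_no)
    if mode == "shared":
        # 563-564: grant self a shared lock, ask the owner for content.
        if data is not None:
            return data                     # 586: owner ships its copy
        return storage[block_no]            # 582-584: owner had no copy
    # Exclusive holder: 554, request downgrade of the lock and content.
    holders[block_no] = (owner, "shared")   # 560/592: owner downgrades
    if data is not None:
        storage[block_no] = data            # 558: owner flushes first
        return data                         # 596: grant self shared lock
    return storage[block_no]                # 594: read from shared storage
```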

[0047] If the requesting node requests an exclusive lock in 500 of FIG. 5A, rather than a shared lock, then 566 of FIG. 5B would change to “owner of shared lock sends content to requesting node and also gives up the lock to the requesting node”. Likewise, 560 would also change from “downgrading its lock” to “giving up its lock”. 614 of FIG. 5C would add that “the owner of the lock gives up the lock to the requester”. And 664 of FIG. 5D would also change from “downgrade” to “give up its lock”.

FIG. 6 is another block diagram of the software components of server 300 according to an embodiment of the present invention. In an embodiment of the present invention, each server 300A-300D of FIG. 1 includes these software components.

[0048] In this embodiment, the following components are shown:

[0049] The Distributed Lock Manager (DLM) 1500 manages matrix-wide locks for the filesystem image 306a-306d, including the management of lock state during crash recovery. The Matrix Filesystem 1504 uses DLM 1500-managed locks to implement matrix-wide mutual exclusion and matrix-wide filesystem 306a-306d metadata and data cache consistency. The DLM 1500 is a distributed symmetric lock manager. Preferably, there is an instance of the DLM 1500 resident on every server in the matrix. Every instance is a peer to every other instance; there is no master/slave relationship among the instances.

[0050] The lock-caching layer (“LCL”) 1502 is a component internal to the operating system kernel that interfaces between the Matrix Filesystem 1504 and the application-level DLM 1500. The purposes of the LCL 1502 include the following:

[0051] 1. It hides the details of the DLM 1500 from kernel-resident clients that need to obtain distributed locks.

[0052] 2. It caches DLM 1500 locks (that is, it may hold on to DLM 1500 locks after clients have released all references to them), sometimes obviating the need for kernel components to communicate with an application-level process (the DLM 1500) to obtain matrix-wide locks.

[0053] 3. It provides the ability to obtain locks in both process and server scopes (where a process lock ensures that the corresponding DLM (1500) lock is held, and also excludes local processes attempting to obtain the lock in conflicting modes, whereas a server lock only ensures that the DLM (1500) lock is held, without excluding other local processes).

[0054] 4. It allows clients to define callouts for different types of locks when certain events related to locks occur, particularly the acquisition and surrender of DLM 1500-level locks. This ability is a requirement for cache-coherency, which depends on callouts to flush modified cached data to permanent storage when corresponding DLM 1500 write locks are downgraded or released, and to purge cached data when DLM 1500 read locks are released.
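The callout mechanism in item 4 can be sketched as a registration table consulted before a lock is surrendered. The class and method names are assumptions for illustration, not the kernel interfaces of the disclosure.

```python
class LockCachingLayer:
    """Sketch of item 4: clients register callouts that the LCL runs
    when a write lock is about to be downgraded or released."""
    def __init__(self):
        self._callouts = {}     # lock_id -> callable run before surrender

    def register_callout(self, lock_id, fn):
        self._callouts[lock_id] = fn

    def downgrade(self, lock_id):
        fn = self._callouts.get(lock_id)
        if fn is not None:
            fn()   # e.g. flush modified cached data to permanent storage
```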

[0055] The LCL 1502 is the only kernel component that makes lock requests from the user-level DLM 1500. It partitions DLM 1500 locks among kernel clients, so that a single DLM 1500 lock has at most one kernel client on each node, namely, the LCL 1502 itself. Each DLM 1500 lock is the product of an LCL 1502 request, which was induced by a client's request of an LCL 1502 lock, and each LCL 1502 lock is backed by a DLM 1500 lock.
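The process-versus-server scope distinction in item 3 of the LCL's purposes above can be sketched as follows; the `LCLLock` class and its interface are illustrative assumptions.

```python
import threading

class LCLLock:
    """Sketch: a process-scope lock adds local mutual exclusion on top
    of the matrix-wide DLM lock; a server-scope lock only ensures the
    DLM lock is held, without excluding other local processes."""
    def __init__(self, dlm_acquire):
        self._dlm_acquire = dlm_acquire   # obtains the matrix-wide DLM lock
        self._local = threading.Lock()    # excludes conflicting local holders

    def acquire(self, scope):
        self._dlm_acquire()               # both scopes hold the DLM lock
        if scope == "process":
            self._local.acquire()         # process scope also excludes
                                          # conflicting local processes

    def release(self, scope):
        if scope == "process":
            self._local.release()
```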

[0056] The Matrix Filesystem 1504 is the shared filesystem component of The Matrix Server. The Matrix Filesystem 1504 allows multiple servers to simultaneously mount, in read/write mode, filesystems living on physically shared storage devices 306a-306d. The Matrix Filesystem 1504 is a distributed symmetric matrixed filesystem; there is no single server that filesystem activity must pass through to perform filesystem activities. The Matrix Filesystem 1504 provides normal local filesystem semantics and interfaces for clients of the filesystem.

[0057] SAN (Storage Area Network) Membership Service 1506 provides the group membership services infrastructure for the Matrix Filesystem 1504, including managing filesystem membership, health monitoring, coordinating mounts and unmounts of shared filesystems 306a-306d, and coordinating crash recovery.

[0058] The Matrix Membership Service 1508 provides the local, matrix-style membership support, including virtual host management, service monitoring, notification services, data replication, etc. The Matrix Filesystem 1504 does not interface directly with the MMS 1508, but the Matrix Filesystem 1504 does interface with the SAN Membership Service 1506, which interfaces with the MMS 1508 in order to provide the filesystem 1504 with the matrix group services infrastructure.

[0059] The Shared Disk Monitor Probe 1510 maintains and monitors the membership of the various shared storage devices in the matrix. It acquires and maintains leases on the various shared storage devices in the matrix as a protection against rogue server “split-brain” conditions. It communicates with the SMS 1506 to coordinate recovery activities on occurrence of a device membership transition.

[0060] Filesystem monitors 1512 are used by the SAN Membership Service 1506 to initiate Matrix Filesystem 1504 mounts and unmounts, according to the matrix configuration put in place by the Matrix Server user interface.

[0061] The Service Monitor 1514 tracks the state (health & availability) of various services on each server in the matrix so that the matrix server may take automatic remedial action when the state of any monitored service transitions. Services monitored include HTTP, FTP, Telnet, SMTP, etc. The remedial actions include service restart on the same server or service fail-over and restart on another server.

[0062] The Device Monitor 1516 tracks the state (health & availability) of various storage-related devices in the matrix so that the matrix server may take automatic remedial action when the state of any monitored device transitions. Devices monitored may include data storage devices 306a-306d (such as storage device drives, solid state storage devices, RAM storage devices, JBODs, RAID arrays, etc.) and storage network devices 304′ (such as Fibre Channel switches, InfiniBand switches, iSCSI switches, etc.). The remedial actions include initiation of Matrix Filesystem 1504 recovery, storage network path failover, and device reset.

[0063] The Application Monitor 1518 tracks the state (health & availability) of various applications on each server in the matrix so that the matrix server may take automatic remedial action when the state of any monitored application transitions. Applications monitored may include databases, mail routers, CRM apps, etc. The remedial actions include application restart on the same server or application fail-over and restart on another server.

[0064] The Notifier Agent 1520 tracks events associated with specified objects in the matrix and executes supplied scripts of commands on occurrence of any tracked event.

[0065] The Replicator Agent 1522 monitors the content of any filesystem subtree and periodically replicates any data which has not yet been replicated from a source tree to a destination tree.

[0066] The Matrix Communication Service 1524 provides the network communication infrastructure for the DLM 1500, Matrix Membership Service 1508, and SAN Membership Service 1506. The Matrix Filesystem 1504 does not use the MCS 1524 directly, but it does use it indirectly through these other components.

[0067] The Storage Control Layer (SCL) 1526 provides matrix-wide device identification, used to identify the Matrix Filesystems 1504 at mount time. The SCL 1526 also manages storage fabric configuration and low-level I/O device fencing of rogue servers from the shared storage devices 306a-306d containing the Matrix Filesystems 1504. It also provides the ability for a server in the matrix to voluntarily intercede during normal device operations to fence itself when communication with the rest of the matrix has been lost.

[0068] The Storage Control Layer 1526 is the Matrix Server module responsible for managing shared storage devices 306a-306d. Management in this context consists of two primary functions. The first is to enforce I/O fencing at the hardware SAN level by enabling/disabling host access to the set of shared storage devices 306a-306d. And the second is to generate global (matrix-wide) unique device names (or “labels”) for all matrix storage devices 306a-306d and ensure that all hosts in the matrix have access to those global device names. The SCL module also includes utilities and library routines needed to provide device information to the UI.

[0069] The Pseudo Storage Driver 1528 is a layered driver that “hides” a target storage device 306a-306d so that all references to the underlying target device must pass through the PSD layered driver. Thus, the PSD provides the ability to “fence” a device, blocking all I/O from the host server to the underlying target device until it is unfenced again. The PSD also provides an application-level interface to lock a storage partition across the matrix. It also has the ability to provide common matrix-wide ‘handles’, or paths, to devices such that all servers accessing shared storage in the Matrix Server can use the same path to access a given shared device.
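The pass-through fencing idea in paragraph [0069] can be sketched as a thin wrapper that gates every I/O request. The class and method names are illustrative assumptions; the real PSD is a layered kernel driver.

```python
class PseudoStorageDriver:
    """Sketch: all references to the target device pass through this
    layer, so fencing it blocks all I/O until it is unfenced."""
    def __init__(self, target):
        self._target = target    # underlying device, modeled as a dict
        self._fenced = False

    def fence(self):
        self._fenced = True      # block all I/O to the target device

    def unfence(self):
        self._fenced = False     # resume normal pass-through operation

    def read(self, block_no):
        if self._fenced:
            raise IOError("device is fenced")
        return self._target[block_no]
```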

[0070] Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. It should be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims

1. A method of accessing data in a multi-node system comprising:

providing a first node associated with a first operating system;
providing a second node associated with a second operating system, wherein the second operating system is independent of the first operating system;
providing a storage, wherein the first node directly accesses the storage and the second node directly accesses the storage;
requesting a lock for a block by the first node to the second node;
obtaining the lock from the second node; and
obtaining the block from the second node.

2. The method of claim 1, further comprising caching the block.

3. The method of claim 1, wherein the second node is a home node.

4. The method of claim 1, further comprising writing the block to the storage.

5. The method of claim 1, wherein the first node includes a first lock manager and the second node includes a second lock manager.

6. The method of claim 1, wherein the second node is a home node.

7. A method of accessing data in a node configured for a multi-node environment comprising:

providing a first operating system wherein the first operating system is independent of a second operating system, wherein the second operating system is associated with a second node;
providing a lock manager;
requesting a lock for a block from the second node;
obtaining the lock from the second node; and
obtaining the block from the second node.

8. (not entered)

9. (not entered)

10. A method of accessing data by a first node configured for a multi-node environment comprising:

obtaining a lock for a block from a second node, wherein the first node includes a first operating system and the second node includes a second operating system independent of the first operating system;
altering the block;
writing the block to shared storage;
relinquishing the lock; and
caching the block in a local storage.

11. A system of accessing data comprising:

a first node configured to request a lock for a block, wherein the first node includes a first operating system;
a second node configured to receive the request, send the lock and the block to the first node, wherein the second node includes a second operating system independent of the first operating system; and
a storage configured to be accessible by the first and second nodes.

12. A computer program product for accessing data, the computer program product being embodied in a computer readable medium and comprising computer instructions for:

providing a lock manager, wherein the lock manager is configured to work in an environment associated with a first operating system, wherein the first operating system is independent of a second operating system, and wherein the second operating system is associated with a second node;
requesting a lock for a block from the second node;
obtaining the lock from the second node; and
obtaining the block from the second node.
Patent History
Publication number: 20040202013
Type: Application
Filed: Sep 20, 2002
Publication Date: Oct 14, 2004
Applicant: PolyServe, Inc.
Inventors: Kenneth F. Dove (Banks, OR), Brent A. Kingsbury (Beaverton, OR), Sam Revitch (Portland, OR), Terence M. Rokop (Hillsboro, OR)
Application Number: 10251645
Classifications
Current U.S. Class: Flip-flop (electrical) (365/154)
International Classification: G11C011/00;