TRANSMITTING FILESYSTEM CHANGES OVER A NETWORK
Transmitting filesystem changes over a network is disclosed. A hash of data comprising a chunk of directory elements, the chunk comprising one or more consecutive directory elements in a set of elements sorted in a canonical order, is computed at a client system. One or more directory elements comprising the chunk are sent to a remote server in the event it is determined, based at least in part on the computed hash, that corresponding directory elements as stored on the remote server are not identical to the directory elements comprising the chunk as stored on the client system.
This application is a continuation of co-pending U.S. patent application Ser. No. 12/895,827, entitled TRANSMITTING FILESYSTEM CHANGES OVER A NETWORK, filed Sep. 30, 2010, which is incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
Techniques are known to synchronize a remote (“server”) filesystem with a local (“client”) filesystem across a network. The remote filesystem may be a near replica of the local filesystem; for instance, it may represent a recent backup of the local filesystem. To synchronize the remote filesystem with the local filesystem, for example to reflect any changes made to the local filesystem since a last synchronization, it is necessary to update the remote filesystem's structure, namespace, and metadata.
A typical “full” synchronization approach uses maximum network bandwidth but no additional local storage to synchronize the filesystem's structure, namespace, and metadata. The modification time (“mtime”) and size of every file on the local filesystem is compared with the mtime and size of the file on the server. If the file does not exist or its mtime and/or size are different on the server, the client creates the file on the server and synchronizes the file content. The client also updates any other metadata (user ID (“UID”), group ID (“GID”), file permissions, etc.) associated with the file. Any files not specified by the client are deleted by the server. The popular utility “rsync” uses this approach.
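By way of illustration only, the following sketch shows one way such a full-synchronization check could look in Python. The `server` object and its `stat`, `upload`, and `set_metadata` methods are hypothetical placeholders, not part of rsync or of the embodiments described herein:

```python
import os

def full_sync_file(path, server):
    """Full approach: compare local mtime/size against the server's copy for every file."""
    st = os.stat(path)
    remote = server.stat(path)  # hypothetical call; assumed to return None if the file is absent
    if remote is None or remote["mtime"] != int(st.st_mtime) or remote["size"] != st.st_size:
        server.upload(path)  # create the file on the server and synchronize its content
    # other metadata (UID, GID, permissions) is re-sent regardless under the full approach
    server.set_metadata(path, uid=st.st_uid, gid=st.st_gid, mode=st.st_mode)
```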
A typical “incremental” synchronization approach uses less network bandwidth but some local storage. After a full synchronization, the client stores the current filesystem structure, namespace, and metadata in a “catalog” (typically a database). During an incremental synchronization, for every file, the client queries the catalog database first. If the file is not represented in the catalog or its mtime and/or size are different, the client creates the file on the server and synchronizes the file content. If the file is represented in the catalog and its mtime and size are the same, then its content is assumed to be unchanged, and the client just updates any other metadata associated with the file, if different than represented in the catalog. The client deletes any files on the server that are represented in the catalog but no longer exist on the local filesystem.
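A comparable sketch of the catalog-based incremental check follows; for brevity the catalog is modeled as a plain dictionary keyed by path (the text above notes it is typically a database), and `set_metadata_if_changed` is a hypothetical helper:

```python
import os

def incremental_sync_file(path, catalog, server):
    """Incremental approach: consult the local catalog before contacting the server."""
    st = os.stat(path)
    current = (int(st.st_mtime), st.st_size)
    if catalog.get(path) != current:
        server.upload(path)  # file is new or its mtime/size changed: synchronize content
    # if mtime/size match, content is assumed unchanged; only differing metadata is updated
    server.set_metadata_if_changed(path, uid=st.st_uid, gid=st.st_gid, mode=st.st_mode)
    catalog[path] = current
```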
Another incremental approach uses about the same amount of network bandwidth but (usually) less local storage. Every operation on every file since the last backup is recorded in a filesystem “journal”. To synchronize the remote filesystem, the journal is essentially played back like a recording. This eliminates the need to store the metadata for every file (since most files never change), but is more complicated and prone to synchronization error.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Transmitting filesystem changes across a network is disclosed. A local filesystem is replicated across a network, e.g., from a backup client system (which may itself be a production server, such as a file server) to a remote server. Synchronization of filesystem information, e.g., directory structure and other metadata, is maintained. In some embodiments, to synchronize the filesystem information or a portion thereof, the backup or other client creates a canonical representation of metadata for each node (for example, each file in a directory), and the resulting directory (or other) elements are sorted in a canonical order, for example by file ID. The resulting sorted list of directory elements is divided into chunks. Data comprising each chunk is hashed, and the result is compared to a corresponding value determined with respect to the corresponding directory information as stored on the server. If the values do not match, the associated directory elements are sent to the server. If the values match, in some embodiments a hash over a span of chunks is compared to a corresponding value determined based on corresponding filesystem information at the server, to ensure the server does not still store obsolete information that happens to fall between chunks, e.g., for a node or range of nodes that has been deleted at the client. In some embodiments, once a chunk (or, in some embodiments, span) hash has been determined to match, the value is stored in a hash cache at the client to avoid having to communicate with, and use processing cycles at, the server in the event the associated directory elements remain unchanged between synchronizations. In some embodiments, once the chunks comprising an entire directory have been determined to be in sync, a hash of the sorted list of directory elements comprising the entire directory is computed and stored in the hash cache, as a further optimization for the commonly encountered case in which some or even many entire directories remain unchanged between synchronizations.
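For purposes of illustration, the following sketch shows one possible canonical representation of a directory element, the canonical sort by file ID, and a hash over a run of elements. The field set, the `repr`-based serialization, and the use of SHA-256 are assumptions made for this sketch, not the exact format of any embodiment:

```python
import hashlib
from collections import namedtuple

# One directory element ("DIRELEM"): a canonical record of a node's metadata.
DirElem = namedtuple("DirElem", ["file_id", "name", "size", "mtime", "uid", "gid", "mode"])

def canonical_sort(elems):
    """Sort directory elements in a canonical order, here by file ID."""
    return sorted(elems, key=lambda e: e.file_id)

def serialize(elem):
    """Deterministic byte serialization of one DIRELEM, suitable for hashing."""
    return repr(tuple(elem)).encode("utf-8")

def hash_elems(elems):
    """Hash the data comprising a chunk, a span, or an entire directory of elements."""
    h = hashlib.sha256()
    for e in elems:
        h.update(serialize(e))
    return h.digest()
```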
In various embodiments, the approach described herein uses about the same amount of network bandwidth as, but less local storage than, the typical incremental approach, and is not as prone to error as the journaling approach. Instead of storing the full metadata for every file in a catalog, this method stores only a mapping of <file path, mtime, size> to <file ID> in a “file cache”. Like a catalog, the file cache allows the client to detect which files are unchanged since the last backup. If a file is found in the file cache, then the file content is unchanged on the server and only the metadata (e.g., UID, GID, permissions, etc.) may need to be updated. To minimize local storage, however, the metadata is not stored in the file cache. Instead, metadata is simply assumed to change very rarely and is just verified.
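A minimal sketch of such a file cache follows; the dictionary-backed implementation and the `lookup`/`record` helper names are illustrative assumptions:

```python
import os

class FileCache:
    """Maps (path, mtime, size) -> file ID; a hit means the file content already exists on the server."""

    def __init__(self):
        self._entries = {}

    def lookup(self, path):
        st = os.stat(path)
        # returns the file ID on a hit, or None if the file is new or its mtime/size changed
        return self._entries.get((path, int(st.st_mtime), st.st_size))

    def record(self, path, file_id):
        st = os.stat(path)
        self._entries[(path, int(st.st_mtime), st.st_size)] = file_id
```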
Once all of the files in a directory have been either created (if new), updated (if changed), or found in the file cache (if unchanged), their associated metadata is read from the filesystem. A canonical representation of each file (a “DIRELEM”) is formed from its metadata. The list of DIRELEMs in the directory is sorted by file ID and broken up into chunks. Each chunk is then “hashed” (or “fingerprinted”), and a “hash cache” is consulted to determine if the files represented by the chunk exist identically on the server.
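Building on the DirElem and hash_elems helpers sketched above, the chunking step and the hash-cache consultation could look as follows; the fixed chunk size of 64 elements and the use of a set of digests for the hash cache are assumptions:

```python
def chunk_elems(sorted_elems, chunk_size=64):
    """Break the canonically sorted DIRELEM list into chunks of consecutive elements."""
    for offset in range(0, len(sorted_elems), chunk_size):
        yield offset, sorted_elems[offset:offset + chunk_size]

def chunk_in_hash_cache(chunk, hash_cache):
    """A hash-cache hit means the files represented by the chunk exist identically on the server."""
    return hash_elems(chunk) in hash_cache  # hash_cache: e.g., a set of previously verified digests
```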
If the hash of a chunk is not found in the hash cache, then it is unknown whether the files represented by the chunk exist on the server. The hash of the chunk, the offset of the first file represented by the chunk, and the number of files represented by the chunk, are sent to the server for verification. The server forms the same canonical representation of every file in the directory to be synchronized, and also sorts the list of DIRELEMs by file ID. The server calculates a hash over the DIRELEMs at the specified offset and compares it to the hash provided by the client. If the hashes are the same, then the files do not need to be updated. If the hashes are different, the client then sends the DIRELEMs for the specified chunk so that the server can update the metadata for the files, and/or delete files that no longer exist within the span of file IDs represented by the chunk. The client then adds the hash of the chunk to the hash cache.
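The client side of this verification exchange is sketched below, again reusing hash_elems from the earlier sketch; `server.verify_chunk` and `server.put_elems` are hypothetical RPC stand-ins for whatever transport an embodiment would use:

```python
def sync_chunk(server, directory, offset, chunk, hash_cache):
    """Verify one chunk against the server, sending its DIRELEMs only if the hashes differ."""
    digest = hash_elems(chunk)
    # the client sends (hash, offset of the first represented file, number of files represented)
    if not server.verify_chunk(directory, digest, offset, len(chunk)):
        # hashes differ: send the DIRELEMs so the server can update metadata and delete
        # files that no longer exist within the span of file IDs represented by the chunk
        server.put_elems(directory, offset, chunk)
    hash_cache.add(digest)  # the chunk is now known to be in sync on the server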
If the hash of a chunk is found in the hash cache, then it is certain that the files represented by the chunk exist on the server, but the server may still need to delete old files that exist between the files represented by the chunk and those represented by the preceding chunk (if any). Thus when the hash of a chunk is found in the hash cache, the chunk is added to a “span”, or running list of chunks that all exist on the server. If the hash of a subsequent chunk is not found in the hash cache, the span is synchronized with the server before the chunk is. As an optimization, if the end of the directory is reached, and the span covers all files in the directory, the hash of the span may be added to the hash cache (normally only hashes of chunks would be added), so that the server does not need to be consulted at all for the common case of an entire directory remaining unchanged between backups.
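One way to combine the chunk and span handling for a whole directory is sketched below, building on chunk_elems, hash_elems, and sync_chunk above; the control flow is an illustrative reading of the description and claims, not a definitive implementation:

```python
def sync_directory(server, directory, sorted_elems, hash_cache, chunk_size=64):
    """Synchronize one directory, accumulating cache-hit chunks into a running span."""
    span_start, span = 0, []
    for offset, chunk in chunk_elems(sorted_elems, chunk_size):
        if hash_elems(chunk) in hash_cache:
            span.extend(chunk)  # chunk already known to exist on the server; grow the span
            continue
        if span:  # cache miss: synchronize the accumulated span before the chunk itself,
            sync_chunk(server, directory, span_start, span, hash_cache)  # catching stale elements between chunks
        sync_chunk(server, directory, offset, chunk, hash_cache)
        span_start, span = offset + len(chunk), []
    if span:  # a span remains at the end of the directory
        if len(span) == len(sorted_elems):
            # the whole directory was unchanged: cache the span hash so the server
            # need not be consulted at all for this directory on the next backup
            hash_cache.add(hash_elems(span))
        else:
            sync_chunk(server, directory, span_start, span, hash_cache)
```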
At the server, the information corresponding to the directory as stored at the server is sorted in the same canonical order as at the client. The server is configured to receive an offset and hash; compute a hash for corresponding records in the ordered list of directory elements as stored on the server; and return to the client a result indicating whether the computation at the server matched the hash value sent by the client.
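A sketch of the corresponding server-side check follows, assuming the server applies the same canonical serialization as in the client sketch; the function signature and request framing are assumptions:

```python
import hashlib

def verify_chunk_on_server(server_elems_sorted, client_digest, offset, count):
    """Hash the server's own DIRELEMs at the given offset and compare with the client's hash."""
    h = hashlib.sha256()
    for elem in server_elems_sorted[offset:offset + count]:
        h.update(repr(tuple(elem)).encode("utf-8"))  # same canonical serialization as the client
    return h.digest() == client_digest  # True: in sync; False: client must send the elements
```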
At the client, if the response from the server indicates the hashes matched, the client adds the hash to the hash cache (if present) and processes the next chunk. If the response from the server indicates the hashes did not match, the client sends the directory elements comprising the chunk to the server and adds the hash to the hash cache (if present). In some embodiments, a previously generated span of consecutive chunks found in a hash cache, if any, also is synchronized, to detect any case in which an orphaned element or elements deleted from the local filesystem may remain in the filesystem information on the server but not otherwise be detected because they lie between chunks.
In the event a “no match” result is returned, in various embodiments the client may send the associated directory elements. The server is configured to receive the elements and use them to update corresponding information stored at the server.
In some embodiments, in a subsequent synchronization the client, prior to dividing the directory elements of a directory into chunks and proceeding as described above, first computes a hash over the entire ordered list of directory elements and checks the hash cache for the result. If the hash over the whole directory is in the hash cache, the client knows the directory as stored on the server is in sync and moves on to the next directory, if any. Otherwise, the client chunks and processes the sorted list as described above.
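This whole-directory shortcut reduces to a single cache lookup before any chunking, as in the sketch below (reusing hash_elems from above); if the check misses, the client falls back to the chunk-by-chunk protocol already sketched:

```python
def directory_in_sync(sorted_elems, hash_cache):
    """Whole-directory shortcut: a cached hash over the entire sorted element list means the
    directory is already in sync on the server and can be skipped without any network traffic."""
    return hash_elems(sorted_elems) in hash_cache
```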
In some embodiments, neither the file cache nor the hash cache needs to be comprehensive; that is, not all files on the client filesystem nor all chunks of metadata need to be represented if local storage is at a premium. If a file is not found in the file cache, or the hash of a chunk of metadata is not found in the hash cache, because one or both are space limited, then the file or chunk will just be resent. This allows for flexibility in trading off local storage for bandwidth savings while remaining correct. By contrast, the journaling approach requires that all operations since the last backup be saved in order to remain correct.
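By way of example, a space-limited hash cache might be implemented as below; the LRU eviction policy, the capacity figure, and the class name are assumptions for illustration, and an evicted digest simply causes the corresponding chunk to be re-verified (or re-sent) rather than any loss of correctness:

```python
from collections import OrderedDict

class BoundedHashCache:
    """Hash cache with a fixed capacity; evicted entries only cost extra verification traffic."""

    def __init__(self, capacity=100_000):
        self._entries = OrderedDict()
        self._capacity = capacity

    def add(self, digest):
        self._entries[digest] = True
        self._entries.move_to_end(digest)
        if len(self._entries) > self._capacity:
            self._entries.popitem(last=False)  # evict the least recently used digest

    def __contains__(self, digest):
        if digest in self._entries:
            self._entries.move_to_end(digest)  # refresh on hit
            return True
        return False
```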
The file and hash caches essentially implement a very space efficient and flexible catalog system. The techniques and protocol described herein in various embodiments allow the file and hash caches to be used to synchronize a multitude of local filesystem implementations (NTFS, ext3, etc.) with a multitude of remote filesystem implementations (Avamar FS, Data Domain FS, etc.).
While in a number of embodiments described herein files and associated filesystem information are replicated and/or synchronized across a network, in other embodiments techniques described herein may be used in other environments, e.g., backup to a local backup or other storage node. In addition, while files and filesystem metadata are described in various embodiments, techniques described herein may be used to synchronize metadata about other objects stored or represented in a hierarchical or other manner.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
Claims
1. A method of synchronizing filesystem changes, comprising:
- sorting a list of directory elements, wherein each directory element includes a representation of metadata associated with a node in the filesystem;
- determining a chunk of directory elements comprising one or more consecutive directory elements by breaking up the sorted list of directory elements;
- computing at a client system a hash of data comprising the chunk of directory elements;
- determining based at least in part on the computed hash whether the corresponding directory elements as stored on the remote server match the directory elements comprising the chunk as stored on the client system; and
- in the event that the corresponding directory elements as stored on the remote server do not match the directory elements comprising the chunk as stored on the client system, sending, to the remote server, the computed hash and information from which the remote server computes a verification hash of data comprising one or more directory elements as stored at the remote server; determining whether the verification hash matches the computed hash; and in the event that the verification hash does not match the computed hash, synchronizing the one or more directory elements comprising the chunk as stored on the client system with one or more corresponding directory elements as stored at the remote server.
2. The method of claim 1, wherein at least a portion of the one or more directory elements comprising the chunk that are sent to the remote server correspond to at least a subset of changes in data stored at the client system since a previous synchronization between the remote server and the client system.
3. The method of claim 1, further comprising receiving from the remote server an indication whether a corresponding hash corresponding to the computed hash and computed by the server based on corresponding filesystem information stored on the server matched the computed hash.
4. The method of claim 3, further comprising adding the computed hash to a hash cache if the response from the server indicates that the corresponding hash matched the computed hash.
5. The method of claim 4, further comprising adding the computed hash to the hash cache once the one or more directory elements comprising the chunk have been sent to the remote server in response to receiving from the server an indication that the corresponding hash computed at the server did not match the computed hash sent by the client system.
6. The method of claim 1, further comprising adding the computed hash to a span of consecutive chunks, if any, the respective hashes of which have been found in the hash cache, in the event the computed hash is found in a hash cache.
7. The method of claim 6, further comprising synchronizing the span and the chunk if the computed hash is not found in the hash cache.
8. The method of claim 7, wherein synchronizing the span includes sending to the remote server a hash of directory elements comprising chunks included in the span.
9. The method of claim 8, further comprising sending directory elements comprising chunks included in the span to the remote server in the event an indication is received that a corresponding hash computed based on directory elements stored on the server that correspond to elements comprising the chunks included in the span as stored on the client system did not match the hash of directory elements comprising chunks included in the span sent by the client system.
10. The method of claim 6, further comprising:
- determining upon reaching an end of the set of elements that the span covers the entire set of elements; and
- ensuring that a hash of the entire set of elements is stored in the hash cache.
11. The method of claim 10, further comprising checking at the outset of a synchronization of the set of elements whether the hash of the entire set of elements is stored in the hash cache; and concluding without further processing that the set of elements are in sync between the client system and the remote server if the hash of the entire set of elements is found in the hash cache.
12. The method of claim 10, further comprising synchronizing the span if upon reaching the end of the set of elements it is determined that the span does not cover the entire set of elements.
13. The method of claim 1, wherein each of the directory elements comprises a canonical representation of metadata comprising filesystem information associated with a corresponding file in a directory with which the directory elements are associated.
14. The method of claim 13, further comprising generating the respective canonical representations.
15. The method of claim 14, further comprising sorting the directory elements in the canonical order.
16. The method of claim 1, wherein the information from which the server computes the verification hash comprises an offset of a first file represented by the chunk associated with the hash, and a number of files represented by the chunk associated with the hash.
17. The method of claim 1, wherein the verification hash is computed based at least in part on the information received from the client system and corresponding information stored at the remote server.
18. A computer system, comprising:
- a processor configured to: sort a list of directory elements wherein each directory element includes a representation of metadata associated with a node in the filesystem; determine a chunk of directory elements comprising one or more consecutive directory elements by breaking up the sorted list of directory elements; compute a hash of data comprising the chunk of directory elements; determine based at least in part on the computed hash whether the corresponding directory elements as stored on the remote server match the directory elements comprising the chunk as stored on the client system; and in the event that the corresponding directory elements as stored on the remote server do not match the directory elements comprising the chunk as stored on the client system, send, to the remote server, the computed hash and information from which the remote server computes a verification hash of data comprising one or more directory elements as stored at the remote server; determine whether the verification hash matches the computed hash; and in the event that the verification hash does not match the computed hash, synchronize the one or more directory elements comprising the chunk as stored on the client system with one or more corresponding directory elements as stored at the remote server; and
- a storage device coupled to the processor and configured to store data comprising the directory elements.
19. The system of claim 18, wherein at least a portion of the one or more directory elements comprising the chunk that are sent to the remote server correspond to at least a subset of changes in data stored at the client system since a previous synchronization between the remote server and the client system.
20. The system of claim 18, further comprising a communication interface coupled to the processor and configured to be used by the processor to send the one or more directory elements comprising the chunk to the remote server.
21. The system of claim 18, wherein each of the directory elements comprises a canonical representation of metadata comprising filesystem information associated with a corresponding file in a directory with which the directory elements are associated and the processor is configured to generate the canonical representations.
22. The system of claim 18, wherein the processor is configured to sort the directory elements in the canonical order.
23. The system of claim 18, wherein the processor is further configured to add the computed hash to a hash cache if the one or more directory elements comprising the chunk are sent to the remote server.
24. A computer program product for synchronizing filesystem changes, the computer program product being embodied in a non-transitory computer readable storage medium and comprising computer instructions for:
- sorting a list of directory elements wherein each directory element includes a representation of metadata associated with a node in the filesystem;
- determining a chunk of directory elements comprising one or more consecutive directory elements by breaking up the sorted list of directory elements;
- computing at a client system a hash of data comprising a chunk of directory elements comprising one or more consecutive directory elements in a set of elements;
- determining based at least in part on the computed hash whether the corresponding directory elements as stored on the remote server match the directory elements comprising the chunk as stored on the client system; and
- in the event that the corresponding directory elements as stored on the remote server do not match the directory elements comprising the chunk as stored on the client system, sending, to the remote server, the hash and information from which the remote server computes a verification hash; determining whether the verification hash matches the computed hash; and in the event that the verification hash does not match the computed hash, synchronizing the one or more directory elements comprising the chunk as stored on the client system with one or more corresponding directory elements as stored at the remote server.
Type: Application
Filed: Dec 4, 2015
Publication Date: Jun 2, 2016
Patent Grant number: 10417191
Inventors: Mark Huang (Seattle, WA), Curtis Anderson (Saratoga, CA), R. Hugo Patterson (Los Altos, CA)
Application Number: 14/960,244