Abstract: A system and method are provided to manage storage space. The method comprises detecting a free storage space threshold condition for a storage volume and automatically applying a space management technique to satisfy the threshold condition. Space management techniques comprise deleting selected backup data (e.g., persistent consistency point images) and automatically increasing the size of the storage volume.
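The reclamation policy described above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the `volume` dict, its `snapshots` list (holding the space each deletable snapshot would reclaim), and `grow_step` are all invented names for the sketch.

```python
def reclaim_space(volume, threshold, grow_step):
    """Hypothetical sketch: delete oldest snapshots first; if free space
    is still below the threshold, automatically grow the volume."""
    # Delete selected backup data (oldest first) while below the threshold.
    while volume["free"] < threshold and volume["snapshots"]:
        volume["free"] += volume["snapshots"].pop(0)
    # If deleting snapshots was not enough, increase the volume size.
    while volume["free"] < threshold:
        volume["size"] += grow_step
        volume["free"] += grow_step
    return volume
```

Deleting snapshots is tried first because it reclaims space without consuming more of the aggregate; growing the volume is the fallback.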
Abstract: A file system determines the relative vacancy of a collection of storage blocks, i.e., an “allocation area”. This is accomplished by recording an array of numbers, each of which describes the vacancy of a collection of storage blocks. The file system examines these numbers when attempting to record file blocks in relatively contiguous areas on a storage medium, such as a disk. When a request to write to disk occurs, the system determines the average vacancy of all of the allocation areas and queries the allocation areas for individual vacancy rates. The system preferably writes file blocks to the allocation areas that are above a threshold related to the average storage block vacancy of the file system.
Type: Grant
Filed: October 3, 2007
Date of Patent: April 19, 2011
Assignee: Network Appliance, Inc.
Inventors: Douglas P. Doucette, Blake Lewis, John K. Edwards
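The allocation-area selection in the abstract above (write to areas whose vacancy is above the file-system average) can be sketched as follows. This is a minimal illustration assuming the array of vacancy numbers is simply a list of free-block counts, one per allocation area; the function name is invented.

```python
def pick_allocation_area(vacancy):
    """Hypothetical sketch: return the index of the first allocation area
    whose vacancy is at or above the average vacancy (the threshold),
    or None if there are no areas."""
    if not vacancy:
        return None
    avg = sum(vacancy) / len(vacancy)  # average vacancy of all areas
    for i, free_blocks in enumerate(vacancy):
        if free_blocks >= avg:         # above-threshold area found
            return i
    return None
```

Preferring emptier-than-average areas makes it more likely that a file's blocks land in relatively contiguous free space on the disk.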
Abstract: A process such as a snapshot creation application process freezes a file system upon initiating a process to create a snapshot of a file system or a part thereof. Upon freezing the file system, the snapshot application process causes a second process to make a change to the file system. If the change is successfully made before the snapshot creation application process tries to thaw the file system, the second process sends a signal back to the snapshot application. Upon receiving a signal from the second process, the snapshot creation application process outputs a warning to a user that the snapshot is inconsistent. The snapshot application also causes a third process to automatically thaw the file system, if a certain period of time has passed and the third process has not received a signal from the snapshot application process indicating the snapshot has been successfully created.
Abstract: A storage system includes a host computer coupled to a device to transfer a DMA descriptor between the host and the device. An integrity manager manages the integrity of the DMA descriptor between the host computer and the device. The integrity manager embeds a host-side DMA descriptor integrity value in the DMA descriptor and the device transfers the DMA descriptor to a device memory. The device generates a device-side DMA descriptor integrity value and compares it to the host-side DMA descriptor integrity value to determine if the descriptor is corrupted.
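The embed-and-compare flow for the DMA descriptor integrity value can be illustrated with a checksum. The patent does not specify the integrity function; CRC32 is used here purely as a stand-in, and both function names are invented for the sketch.

```python
import zlib

def embed_integrity(descriptor: bytes) -> bytes:
    """Host side (hypothetical sketch): append a CRC32 integrity value
    to the DMA descriptor before transfer to the device."""
    crc = zlib.crc32(descriptor)
    return descriptor + crc.to_bytes(4, "big")

def verify_integrity(framed: bytes) -> bool:
    """Device side (hypothetical sketch): recompute the integrity value
    over the received descriptor and compare it to the host-side value."""
    payload = framed[:-4]
    host_crc = int.from_bytes(framed[-4:], "big")
    return zlib.crc32(payload) == host_crc
```

A mismatch between the device-side and host-side values indicates the descriptor was corrupted in transfer.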
Abstract: A mirror destination storage server receives mirror update data streams from several mirror source storage servers. Data received from each mirror is cached and periodic checkpoints are queued, but the data is not committed to long-term storage at the mirror destination storage server immediately. Instead, the data remains in cache memory until a trigger event causes the cache to be flushed to a mass storage device. The trigger event is asynchronous with respect to packets of at least one of the data streams. In one embodiment, the trigger event is asynchronous with respect to packets of all of the data streams.
Type: Grant
Filed: April 18, 2008
Date of Patent: April 5, 2011
Assignee: Network Appliance, Inc.
Inventors: Shvetima Gulati, Hitesh Sharma, Atul R. Pandit
Abstract: A system and method for fixing data inconsistency between an original dataset stored on a source storage server and a mirror of the original dataset stored on a destination storage server is provided. The method determines whether the mirror is consistent with the original dataset by comparing metadata describing the original dataset with metadata describing the mirror. If the mirror is inconsistent with the original dataset, corresponding block(s) of the original dataset is/are requested and received from the source storage server. The mirror is then fixed according to the received block(s).
Type: Grant
Filed: December 20, 2006
Date of Patent: April 5, 2011
Assignee: Network Appliance, Inc.
Inventors: Vikas Yadav, Raghu Arur, Amol R. Chitre
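The compare-metadata-then-repair flow of the mirror-fixing abstract above can be sketched as follows, assuming per-block metadata such as checksums keyed by block ID. The data structures and the function name are invented for this illustration.

```python
def fix_mirror(source_blocks, mirror_blocks, source_meta, mirror_meta):
    """Hypothetical sketch: compare per-block metadata of the original
    dataset and the mirror; for each mismatch, fetch the corresponding
    block from the source and overwrite the mirror copy."""
    fixed = []
    for blk, meta in source_meta.items():
        if mirror_meta.get(blk) != meta:
            # Request and receive the corresponding block from the source.
            mirror_blocks[blk] = source_blocks[blk]
            mirror_meta[blk] = meta
            fixed.append(blk)
    return fixed
```

Comparing metadata first avoids transferring every block: only inconsistent blocks are requested from the source storage server.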
Abstract: A data storage system filter operates through a filter framework in a file system to detect and provide customized responses to unauthorized access attempts. A security event definition determines when file system access events are classified as unauthorized access attempts. A trap manager manages the security events, and causes traps to be installed to capture file system responses. The trapped responses can be replaced with customized data, such as static artificial data, or artificial data generated based on a context of the request and/or response. The security filter can be loaded or unloaded in the filter framework and operates on a callback mechanism to avoid significant disruption of I/O activity.
Abstract: A storage server obtains metadata to describe a filesystem, then processes the metadata to locate a data block and reads the data block from a remote storage subsystem. Apparatus and software implementing embodiments of the invention are also described and claimed.
Abstract: A system and method are provided to recover lost flexible volumes of an aggregate capable of supporting flexible volumes. The method includes discovering lost flexible volumes of the aggregate and recovering them. Recovering a lost flexible volume includes creating and populating a new label file associated with a container inode.
Abstract: Data with a short useful lifetime are received and cached by a system. The system waits for the first to occur of two events. If the first event is a local cache flush trigger, the data is written to a longer-term storage subsystem. If the first event is a remote cache flush trigger, the data is discarded. Systems and methods to benefit from this procedure are described and claimed.
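The first-trigger-wins behavior described above can be sketched directly. The event names and function signature are invented for the illustration; the patent only specifies that a local trigger persists the cached data while a remote trigger discards it.

```python
def handle_trigger(cache, event, storage):
    """Hypothetical sketch: act on whichever flush trigger fires first.

    'local_flush'  - this system must persist the data, so write it
                     to the longer-term storage subsystem.
    'remote_flush' - the data's short useful lifetime has ended
                     elsewhere, so simply discard it.
    """
    if event == "local_flush":
        storage.extend(cache)  # commit to longer-term storage
    cache.clear()              # either way, the cache entry is released
```

The benefit is that short-lived data that is superseded remotely never costs a write to the storage subsystem at all.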
Abstract: A system is provided to improve performance of a storage system. The system comprises a multi-tier buffer cache. The buffer cache may include a global cache to store resources for servicing requests issued from one or more processes at the same time, a free cache to receive resources from the global cache and to store the received resources as free resources, and a local cache to receive free resources from the free cache, the received free resources to store resources that can be accessed by a single process at one time. The system may further include a buffer cache manager to manage transferring resources from the global cache to the free cache and from the free cache to the local cache.
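The three-tier flow above (global cache to free cache to per-process local cache) can be sketched with plain lists. This is a structural illustration only; the class and method names are invented, and real buffer-cache managers would add locking and reference counting.

```python
class TieredCache:
    """Hypothetical sketch of a multi-tier buffer cache."""

    def __init__(self, resources):
        self.global_cache = list(resources)  # shared; serves concurrent requests
        self.free_cache = []                 # reclaimed, currently unowned resources
        self.local_cache = {}                # pid -> resources owned by one process

    def reclaim(self, n):
        """Manager moves up to n resources from the global to the free cache."""
        for _ in range(min(n, len(self.global_cache))):
            self.free_cache.append(self.global_cache.pop())

    def grant(self, pid):
        """Manager hands a free resource to a single process's local cache."""
        if not self.free_cache:
            return None
        resource = self.free_cache.pop()
        self.local_cache.setdefault(pid, []).append(resource)
        return resource
```

Resources in a local cache are accessed by a single process at a time, so no cross-process synchronization is needed on that tier.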
Abstract: A method and system for injecting a deterministic fault into storage shelves in a storage subsystem. The method comprises injecting a known fault condition on demand into a hardware component in a storage shelf to cause a failure of the storage shelf. The hardware component incorporates a circuit that is configurable to select between a normal operating condition and a faulty condition of the hardware component. The method further comprises verifying that a reported failure is consistent with the known fault condition.
Abstract: An apparatus and a method are provided that prevent a split-brain problem by preventing a cluster partner from accessing and serving data when the cluster partner is taken over by a storage server, while allowing early release of reservations on the cluster partner's storage devices before control is given back to the cluster partner.
Abstract: Various methods and apparatuses for storing a streaming media data play list in a cache memory are described. A streaming media data play list comprises a plurality of streaming media data entries associated with a single data pointer, and the streaming media data entries comprise header data and payload data. In particular embodiments, a method for storing a streaming media data play list in a cache memory comprises receiving the streaming media data play list from a streaming media server, and storing the streaming media data in the cache memory.
Abstract: A method and system for portset data management are provided. The system comprises a mass storage device to store a list of portset records; a network drivers layer to receive a request to add a new portset record to the list of portset records; and a portset update component to process the request. A portset may include a set of ports that provides access to logical unit numbers (LUNs). When the system receives a request to add a new portset, the portset update component may determine an available common index for the new portset record, associate the new portset record with the available common index, and update an in-memory representation of the list of records with the new portset record. The new portset record is then stored at a location on disk associated with the available common index for the new portset record.
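The common-index assignment step can be illustrated as follows, assuming the portset list is modeled as a mapping from index to record; the function name and data shape are invented for the sketch.

```python
def add_portset(portsets, new_record):
    """Hypothetical sketch: find the lowest available common index,
    associate the new portset record with it, and store the record
    at that slot (standing in for the on-disk location)."""
    used = set(portsets)
    idx = 0
    while idx in used:   # scan for the first unused common index
        idx += 1
    portsets[idx] = new_record
    return idx
```

Because the on-disk location is derived from the common index, the record can later be found without scanning the whole list.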
Abstract: Embodiments of this invention disclose a file system having a hybrid file system format. The file system maintains certain data in two formats, thereby defining a hybrid file system format. In one exemplary application, the first format has properties favorable to write operations, e.g. a log-structured file system format, while the second format has properties favorable to read operations, e.g. an extent-based file system format. The data is stored in the first file system format and then asynchronously stored in the second file system format. The data stored in the second file system format are also updated asynchronously.
Abstract: A storage processor is interposed between initiators and storage targets, such as storage appliances or storage devices in a storage network. The storage processor presents a target interface to the initiators and an initiator interface to the targets, and the storage processor transparently intercepts and processes commands, data and/or status information (such as iSCSI R2T and data PDUs) sent between the initiators and the targets. The storage processor presents a virtual device to the initiators and transparently implements the virtual device on the targets, such as a RAID-1. The storage processor negotiates acceptable operational parameters, such as maximum segment size, with the initiators and targets, such that the storage processor can pass data received from the initiators to the targets, without excessive data buffering.
Abstract: A group of data storage units are serially connected in a sequential data communication path to communicate read and write operations to first and second interfaces of each data storage unit in the group. A data management computer device (“filer”) manages read and write operations of the data storage units of the group through an adapter of the filer. Main and redundant primary communication pathway connectors extend from the filer to the interfaces of the data storage unit, thereby establishing redundancy through multiple pathways to communicate the read and write operations to the data storage units of the group. Main and redundant secondary communication pathway connectors extend from partner filers to the groups of data storage units associated with each partner filer, thereby further enhancing redundancy.
Abstract: A dynamic cache system is configured to flexibly respond to changes in operating parameters of a data storage and retrieval system. A cache controller in the system implements a caching policy describing how and what data should be cached. The policy can provide different caching behavior based on exemplary parameters such as a user ID, a specified application or a given workload. The cache controller is coupled to the data path for a data storage system and can be implemented as a filter in a filter framework. The cache memory for storing cached data can be local or remote from the cache controller. The policies implemented in the cache controller permit an application to control caching of data to permit optimization of data flow for the particular application.
Abstract: In at least one embodiment of the invention, a primary storage facility is managed in an HSM system. Data is relocated from the primary storage facility to a secondary storage facility. A request is received from a client for only a portion of the relocated data. In response to the request, the requested portion of the data is obtained from the secondary storage facility and stored in the primary storage facility as a sparse file. The requested portion of the data is then provided to the client from the sparse file.
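The partial-restore-as-sparse-file idea above can be sketched with an in-memory stand-in for a sparse file: only the ranges actually fetched from secondary storage occupy space, and a read that hits a hole triggers a fetch of just that portion. All names here are invented for the illustration.

```python
class SparseFile:
    """Hypothetical sketch: only restored ranges are materialized;
    everything else remains a hole."""

    def __init__(self, size):
        self.size = size
        self.ranges = {}  # offset -> bytes actually present

    def write(self, offset, data):
        self.ranges[offset] = data

    def read(self, offset, length):
        data = self.ranges.get(offset)
        if data is not None and len(data) >= length:
            return data[:length]
        return None  # hole: the data was never restored here

def restore_on_demand(sparse, secondary, offset, length):
    """Serve a client request from the sparse file, fetching only the
    requested portion from secondary storage on a miss."""
    data = sparse.read(offset, length)
    if data is None:
        data = secondary[offset:offset + length]  # fetch only this portion
        sparse.write(offset, data)                # materialize the range
    return data
```

The client never waits for the whole relocated file to come back from secondary storage; only the requested portion is migrated into the primary facility.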