Abstract: Emulating tape data includes providing a first storage device coupled to a host, providing a tape emulation unit coupled to the host, the tape emulation unit including a data mover, and, in response to a command to transfer data between the first storage device and the tape emulation unit, transferring data directly between the first storage device and the data mover using a link therebetween, where data that is transferred bypasses the host. The tape emulation unit may include a front end component coupled to the host and a second storage device, the data mover being interposed between the second storage device and the front end component. The front end component may be coupled to the data mover using a GigE switch. The data mover may use NFS to access data. At least one of the first and second data storage devices may be a data storage array.
Abstract: A method of managing a data storage cache, comprising: providing a redundant cache comprising first and second caches associated with first and second storage volumes. One of the first and second storage volumes is an active volume, and the other is a passive volume. A write request is received at one of the volumes. If the write request is received at the passive volume, it is forwarded to the active volume. It is determined whether the active volume is a low latency volume. If it is a low latency volume, it is determined whether data exists in the cache associated with the active volume which overlaps with data contained in the write request. If no data exists in that cache which overlaps with data contained in the write request, the write request is processed straight down to said active volume.
Type:
Grant
Filed:
September 27, 2016
Date of Patent:
September 18, 2018
Assignee:
International Business Machines Corporation
Inventors:
Ian Boden, Nicolas M. Clayton, Lee J. Sanders, William J. Scales, Barry D. Whyte
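The write-routing decision described in the cache-management abstract above can be sketched in Python. This is a minimal illustrative model, not the patented implementation: the volume dictionary layout, the `low_latency` flag, and the interval-based cache representation are all assumptions made for the example.

```python
def handle_write(volumes, target, write_range):
    """Route a write per the redundant-cache scheme: forward
    passive-volume writes to the active volume, and write straight
    through when the active volume is low latency and its cache
    holds no data overlapping the write."""
    vol = volumes[target]
    # A write arriving at the passive volume is forwarded to the active one.
    if not vol["active"]:
        vol = next(v for v in volumes.values() if v["active"])
    # On a low-latency volume with no overlapping cached data,
    # process the write straight down to the volume.
    if vol["low_latency"]:
        overlaps = any(start < write_range[1] and write_range[0] < end
                       for (start, end) in vol["cache"])
        if not overlaps:
            vol["storage"].append(write_range)
            return "write-through"
    # Otherwise the write is absorbed by the cache as usual.
    vol["cache"].append(write_range)
    return "cached"
```

The overlap test is the usual half-open interval intersection check; the abstract does not say how overlap is detected, so that choice is illustrative.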
Abstract: A system and method for handling requests by virtual machines (VMs) to lock portions of main memory are disclosed. In accordance with one embodiment, a host operating system (OS) of a computer system receives a request by the guest OS of a VM to lock a portion of main memory of the computer system. The host OS determines whether locking the portion of main memory violates any of a set of constraints pertaining to main memory. The host OS locks the portion of main memory when locking does not violate any of the set of constraints. The locking prevents any page of the portion of main memory from being swapped out to a storage device. The host OS can still swap out pages of main memory that are not allocated to this VM and are not locked by any other VM.
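The constraint-checked locking flow above can be sketched as follows. The single constraint modeled (a cap on the fraction of host memory that may be locked) and all names are illustrative; the patent only says the host OS checks "a set of constraints" before pinning the guest's pages.

```python
def lock_guest_memory(host, vm_id, start_page, num_pages,
                      max_locked_fraction=0.5):
    """Grant or deny a guest OS request to lock pages of main memory.
    Locked pages become ineligible for swap-out."""
    requested = set(range(start_page, start_page + num_pages))
    would_be_locked = host["locked"] | requested
    # Constraint check: refuse if locking would exceed the cap.
    if len(would_be_locked) > max_locked_fraction * host["total_pages"]:
        return False
    host["locked"] = would_be_locked
    host["owners"].setdefault(vm_id, set()).update(requested)
    return True

def swappable_pages(host):
    """Pages the host OS may still swap out: anything not locked
    by any VM."""
    return set(range(host["total_pages"])) - host["locked"]
```

Note that after a grant, pages locked by one VM are excluded from the swappable set host-wide, while the remainder of memory stays eligible for swap, matching the last sentence of the abstract.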
Abstract: A storage layer (SL) for a non-volatile storage device presents a logical address space of a non-volatile storage device to storage clients. Storage metadata assigns logical identifiers in the logical address space to physical storage locations on the non-volatile storage device. Data is stored on the non-volatile storage device in a sequential log-based format. Data on the non-volatile storage device comprises an event log of the storage operations performed on the non-volatile storage device. The SL presents an interface for requesting atomic storage operations. Previous versions of data overwritten by the atomic storage operation are maintained until the atomic storage operation is successfully completed. Data pertaining to a failed atomic storage operation may be identified using a persistent metadata flag stored with the data on the non-volatile storage device. Data pertaining to failed or incomplete atomic storage requests may be invalidated and removed from the non-volatile storage device.
Type:
Grant
Filed:
July 28, 2011
Date of Patent:
July 3, 2018
Assignee:
SANDISK TECHNOLOGIES LLC
Inventors:
David Flynn, Stephan Uphoff, Xiangyong Ouyang, David Nellans, Robert Wipfel
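The log-based atomic-write scheme from the abstract above can be sketched in a few lines. This is a simplified model under stated assumptions: the class name, the in-memory list standing in for the sequential log, and the `fail_after` crash simulation are all illustrative, not from the patent.

```python
class AtomicLog:
    """Sketch of log-based atomic writes: entries of an in-flight
    atomic operation carry a persistent in-progress flag, old
    versions remain in the log until commit, and recovery
    invalidates entries whose operation never completed."""

    def __init__(self):
        self.log = []  # entries: [logical_id, data, in_progress_flag]

    def atomic_write(self, updates, fail_after=None):
        """Append every update with the metadata flag set, then clear
        the flags in one commit step. fail_after simulates a crash
        part-way through the operation."""
        start = len(self.log)
        for i, (lid, data) in enumerate(updates):
            if fail_after is not None and i >= fail_after:
                return False  # crash: flags stay set on partial data
            self.log.append([lid, data, True])
        for entry in self.log[start:]:
            entry[2] = False  # commit: operation completed
        return True

    def recover(self):
        """On restart, invalidate and remove data still flagged as
        belonging to an incomplete atomic operation."""
        self.log = [e for e in self.log if not e[2]]

    def read(self, lid):
        """Return the latest committed version of a logical id."""
        for entry_id, data, flag in reversed(self.log):
            if entry_id == lid and not flag:
                return data
        return None
```

A reader can verify the key property: after a simulated mid-operation failure and recovery, reads still return the previous committed versions.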
Abstract: A process for caching data according to one embodiment includes maintaining a random data list and a sequential data list, and dynamically establishing a desired size of the sequential data list. Dynamically establishing the desired size of the sequential data list includes increasing the desired size of the sequential data list in response to at least one of: detecting a hit on a bottom of the sequential data list, and detecting a read miss on sequential tracks. Dynamically establishing the desired size of the sequential data list also includes decreasing the desired size of the sequential data list in response to detecting a hit on a bottom of the random data list.
Type:
Grant
Filed:
September 3, 2015
Date of Patent:
June 12, 2018
Assignee:
International Business Machines Corporation
Inventors:
Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Rose L. Manz
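The adaptive sizing rule in the abstract above reduces to a small adjustment function. The event names, step size, and bounds below are assumptions for illustration; the patent specifies only which events grow or shrink the desired size of the sequential data list.

```python
def adjust_desired_size(desired_seq_size, event, step=1,
                        min_size=0, max_size=1000):
    """Adjust the desired size of the sequential data list:
    grow on a bottom hit of the sequential list or a read miss on
    sequential tracks; shrink on a bottom hit of the random list."""
    if event in ("hit_bottom_sequential", "read_miss_sequential"):
        # Sequential data is being evicted too soon: grow the list.
        return min(max_size, desired_seq_size + step)
    if event == "hit_bottom_random":
        # Random data is being squeezed out: shrink the list.
        return max(min_size, desired_seq_size - step)
    return desired_seq_size
```

A hit at the "bottom" of a list signals that the entry was about to be evicted, so each event is evidence that one list deserves more (or less) of the shared cache.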
Abstract: A system and method for managing data contained by a storage library includes at least one storage library and a library controller configured to generate signals that control operations of the storage library. The system further includes at least one client interface operable with the library controller and being adapted to receive requests from multiple client types and communicate those requests to the library controller. Additionally, the library controller generates signals for the storage library and the storage library performs operations that correspond to the requests from the multiple client types.
Abstract: Techniques for implementing data deduplication in conjunction with thick and thin provisioning of storage objects are provided. In one embodiment, a system can receive a write request directed to a storage object stored by the system and can determine whether the storage object is a thin or thick object. If the storage object is a thin object, the system can calculate a usage value by adding a total amount of physical storage space used in the system to a total amount of storage space reserved for thick storage objects in the system and further subtracting a total amount of reserved storage space for the thick storage objects that are filled with unique data. The system can then reject the write request if the usage value is not less than the total storage capacity of the system.
Type:
Grant
Filed:
January 12, 2016
Date of Patent:
May 22, 2018
Assignee:
VMware, Inc.
Inventors:
Jorge Guerra Delgado, Kiran Joshi, Edward J Goggin, Srinath Premachandran, Sandeep Rangaswamy
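The usage-value formula in the deduplication abstract above is simple arithmetic and can be checked directly. The function name and the assumption that all quantities are in the same unit (e.g. bytes) are illustrative.

```python
def admit_thin_write(total_capacity, physical_used,
                     thick_reserved, thick_reserved_unique_filled):
    """Admission check for a write to a thin-provisioned object:
    usage = physical space used
          + space reserved for thick objects
          - thick reservations already filled with unique data
    (the subtraction avoids double-counting reservations whose
    physical space is already included in the used total).
    The write is rejected unless usage is less than capacity."""
    usage = (physical_used + thick_reserved
             - thick_reserved_unique_filled)
    return usage < total_capacity
```

The boundary case matters: the abstract rejects the write whenever the usage value is "not less than" total capacity, so exact equality is a rejection.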
Abstract: A page compression strategy classifies uncompressed pages selected for compression. Similarly classified pages are compressed and bound into a single logical page. For logical pages containing pages of more than one classification, a weighting factor is determined for the logical page.
Type:
Grant
Filed:
September 11, 2017
Date of Patent:
May 15, 2018
Assignee:
International Business Machines Corporation
Inventors:
Suma M. B. Bhat, Chetan L. Gaonkar, Vamshi K. Thatikonda
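The classify-then-bind strategy above can be sketched as follows. The chunk size, the treatment of leftover pages, and the choice of weighting factor (fraction of the dominant class) are all assumptions for the example; the patent does not define how the weighting factor is computed.

```python
from collections import defaultdict

def bind_pages(pages, per_logical=4):
    """Group similarly classified pages into logical pages of
    per_logical members; pages left over after grouping share
    mixed logical pages, which get a weighting factor."""
    by_class = defaultdict(list)
    for page_id, cls in pages:
        by_class[cls].append(page_id)
    logical_pages, leftovers = [], []
    for cls, ids in sorted(by_class.items()):
        # Bind full groups of a single classification.
        while len(ids) >= per_logical:
            chunk, ids = ids[:per_logical], ids[per_logical:]
            logical_pages.append(
                {"pages": chunk, "classes": {cls}, "weight": 1.0})
        leftovers.extend((pid, cls) for pid in ids)
    # Remaining pages of differing classes share logical pages.
    for i in range(0, len(leftovers), per_logical):
        chunk = leftovers[i:i + per_logical]
        counts = defaultdict(int)
        for _, c in chunk:
            counts[c] += 1
        logical_pages.append(
            {"pages": [p for p, _ in chunk],
             "classes": {c for _, c in chunk},
             "weight": max(counts.values()) / len(chunk)})
    return logical_pages
```

Single-class logical pages carry weight 1.0; a mixed page's weight reflects how dominant its majority classification is.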
Abstract: According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes an interface, a memory block, an address acquisition circuit and a controller. The interface receives a data write/read request or a request based on the key-value store. The memory block has a data area for storing data and a metadata table containing the key-value data. The address acquisition circuit acquires an address in response to input of the key. The controller executes the data write/read request for the memory block, and outputs the acquired address to the memory block to execute the request based on the key-value store. The controller outputs the value corresponding to the key via the interface.
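The dual-interface behavior in the abstract above can be modeled compactly: ordinary address-based read/write requests hit the data area directly, while key-value requests first resolve the key to an address through the metadata table. The class and allocation scheme below are illustrative assumptions, not the hardware described in the patent.

```python
class KVMemory:
    """Sketch of a memory block serving both plain read/write
    requests and key-value requests via a key-to-address table."""

    def __init__(self, size=16):
        self.data = [None] * size  # data area of the memory block
        self.meta = {}             # metadata table: key -> address
        self.next_free = 0         # naive sequential allocator

    def write(self, addr, value):
        """Ordinary data write request at an explicit address."""
        self.data[addr] = value

    def read(self, addr):
        """Ordinary data read request at an explicit address."""
        return self.data[addr]

    def put(self, key, value):
        """Key-value request: acquire an address for the key,
        store the value there, record the pair in the table."""
        addr = self.meta.get(key)
        if addr is None:
            addr = self.next_free
            self.next_free += 1
            self.meta[key] = addr
        self.data[addr] = value

    def get(self, key):
        """Acquire the key's address; return the value stored there."""
        addr = self.meta.get(key)
        return None if addr is None else self.data[addr]
```

In the patent the address acquisition is a dedicated circuit (e.g. a hash-based lookup); the dictionary here merely stands in for that mapping.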
Abstract: Systems, methods, and computer programs are disclosed for providing non-volatile system memory with volatile memory program caching. One such method comprises storing an executable program in a non-volatile random access memory. In response to an initial launch of the executable program, the executable program is loaded from the non-volatile random access memory into a volatile memory cache for execution. In response to an initial suspension of the executable program, cache pages corresponding to the executable program are flushed into the non-volatile random access memory.
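The launch/suspend flow above can be sketched as a small state model. The class, method names, and page representation are illustrative; the essential behavior from the abstract is that programs persist in NVRAM, execute out of a volatile cache, and have their cache pages flushed back to NVRAM on suspension.

```python
class ProgramCache:
    """Sketch of NVRAM-backed program caching: executables live in
    non-volatile RAM, are copied into a volatile cache on initial
    launch, and have their cache pages flushed back on suspension."""

    def __init__(self):
        self.nvram = {}           # program -> pages (persistent)
        self.volatile_cache = {}  # program -> pages (lost on power-off)

    def install(self, program, pages):
        self.nvram[program] = list(pages)

    def launch(self, program):
        # Initial launch: load from NVRAM into the volatile cache.
        if program not in self.volatile_cache:
            self.volatile_cache[program] = list(self.nvram[program])
        return self.volatile_cache[program]  # executes from here

    def suspend(self, program):
        # Initial suspension: flush cache pages back to NVRAM.
        self.nvram[program] = list(self.volatile_cache[program])

    def power_loss(self):
        self.volatile_cache.clear()  # volatile contents are lost
```

The flush on suspension is what makes runtime modifications survive power loss, which is the point of pairing the volatile cache with non-volatile system memory.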
Abstract: Systems, methods, and/or devices are used to manage a storage system. In one aspect, the method includes, during a first time period: maintaining a credit pool for the first time period; limiting bandwidth used for transmitting data between a storage device of the storage system and a host operatively coupled with the storage device according to a status of the credit pool, where the storage device includes one or more memory devices; monitoring a temperature of the storage device; and, in accordance with a determination that a current temperature of the storage device exceeds a predetermined threshold temperature and the current temperature of the storage device satisfies one or more temperature criteria, reducing an initial value of the credit pool for a second time period according to a first adjustment factor corresponding to the predetermined threshold temperature, where the second time period is subsequent to the first time period.
Type:
Grant
Filed:
March 25, 2015
Date of Patent:
March 13, 2018
Assignee:
SanDisk Technologies LLC
Inventors:
Senthil M. Thangaraj, Divya Reddy, Satish Babu Vasudeva, Jimmy Sy, Rodney Brittner, Venkatesh K. Paulsamy
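The credit-based thermal throttling above amounts to two small rules: credits gate transfers within a period, and the next period's starting credits shrink when the device runs hot. The threshold, adjustment factor, and the `sustained` flag standing in for the abstract's "temperature criteria" are illustrative assumptions.

```python
def next_credit_pool(initial_credits, temperature,
                     threshold=70.0, adjustment_factor=0.5,
                     sustained=True):
    """Initial credit value for the next time period: reduced by the
    adjustment factor tied to the threshold when the device
    temperature exceeds it and the temperature criteria are met."""
    if temperature > threshold and sustained:
        return initial_credits * adjustment_factor
    return initial_credits

def transmit(credit_pool, request_bytes):
    """Bandwidth limiting by credit-pool status: a transfer proceeds
    only if enough credits remain, consuming credits equal to its
    size. Returns (remaining_credits, allowed)."""
    if credit_pool >= request_bytes:
        return credit_pool - request_bytes, True
    return credit_pool, False
```

Shrinking the pool between periods lowers the total bytes the host can move per period, which reduces power draw and lets the device cool without halting I/O outright.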