Abstract: A data storage system is disclosed that utilizes a high performance caching architecture. In one embodiment, the caching architecture utilizes a cache table, such as a lookup table, for referencing or storing host data units that are cached or are candidates for being cached in a solid-state memory. Further, the caching architecture maintains a segment control list that specifies associations between particular cache table entries and particular data segments. Such separation of activities related to the implementation of a caching policy from activities related to storing cached data and candidate data provides robustness and scalability while improving performance.
Type:
Grant
Filed:
January 31, 2012
Date of Patent:
November 25, 2014
Assignee:
Western Digital Technologies, Inc.
Inventors:
Chandra M. Guda, Michael Ainsworth, Choo-Bhin Ong, Marc-Angelo P. Carino
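As a rough illustration of the separation this abstract describes, the Python sketch below keeps a lookup table of host data units distinct from the list that binds table entries to data segments. All names (`CacheTable`, `SegmentControlList`) and the dictionary-based layout are illustrative assumptions, not the patented structures.

```python
class CacheTable:
    """Lookup table referencing host data units that are cached or are
    candidates for caching. Maps a host logical address to an entry record."""

    def __init__(self):
        self.entries = {}  # host address -> {"cached": bool}

    def reference(self, host_addr, cached=False):
        self.entries[host_addr] = {"cached": cached}
        return host_addr


class SegmentControlList:
    """Associates cache table entries with the data segments that hold
    their data, keeping caching policy separate from cached-data storage."""

    def __init__(self):
        self.assoc = {}  # cache table entry -> segment id

    def bind(self, entry, segment_id):
        self.assoc[entry] = segment_id

    def segment_for(self, entry):
        return self.assoc.get(entry)


# A candidate is first tracked in the cache table, then bound to a segment
# once the caching policy promotes it into the solid-state memory.
table = CacheTable()
segments = SegmentControlList()
entry = table.reference(0x1000, cached=False)
segments.bind(entry, segment_id=7)
```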
Abstract: A method, a statistics subsystem, and a system use a combination of commercially available high speed memory and high density low speed memory to mitigate cost, space, control, and power issues associated with storing counters for statistics updates, while meeting the growing width and depth needs of multi-hundred gigabit Carrier Class data network devices. The method, statistics subsystem, and system offer a Counter Management Algorithm (CMA) that relies on rollover bits stored within the counter data. An update to the low speed memory is substantially faster than a rollover time for the counter in the high speed memory, thereby allowing statistics to be cached in the high speed memory while updates take place to the low speed memory.
Type:
Grant
Filed:
November 21, 2012
Date of Patent:
November 11, 2014
Assignee:
Ciena Corporation
Inventors:
Kenneth Edward Neudorf, Richard Robb, Kelly Donald Fromm, J. Kevin Seacrist
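A minimal sketch of the counter scheme the CMA abstract outlines: a narrow counter in high-speed memory carries a rollover bit, and a background update folds the partial count into a wide counter in low-speed memory before a second wrap can occur. The 8-bit width and all names are assumptions for illustration.

```python
HS_BITS = 8                 # width of the narrow high-speed counter (assumed)
HS_MAX = 1 << HS_BITS

class StatsCounter:
    """Narrow counter in high-speed memory, widened in low-speed memory.
    A rollover bit stored with the counter records a wrap that happened
    before the background update could flush it."""

    def __init__(self):
        self.hs_value = 0       # partial count in high-speed memory
        self.rollover = False   # rollover bit kept within the counter data
        self.ls_value = 0       # wide total in low-speed memory

    def increment(self):
        self.hs_value += 1
        if self.hs_value == HS_MAX:
            self.hs_value = 0
            self.rollover = True  # must be flushed before a second wrap

    def flush(self):
        """Background update: fold the cached partial count into the wide
        low-speed counter. Per the abstract, this update completes much
        faster than the high-speed counter's rollover time."""
        self.ls_value += self.hs_value + (HS_MAX if self.rollover else 0)
        self.hs_value = 0
        self.rollover = False

c = StatsCounter()
for _ in range(300):   # 300 increments wrap the 8-bit counter once
    c.increment()
c.flush()
```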
Abstract: Decoding content of interest with optimal power usage. In an embodiment, a central processing unit (CPU) retrieves the frames of a data stream of interest from secondary storage and stores them in a random access memory (RAM). The CPU forms an index table indicating the location at which each of the frames is stored. The index table is provided to a decoder, which processes the frames in sequence to recover the original data from the encoded data. By using the index information, power usage is reduced, at least in embodiments in which the decoding is performed by an auxiliary processor.
Type:
Grant
Filed:
December 10, 2008
Date of Patent:
November 4, 2014
Assignee:
Nvidia Corporation
Inventors:
Chandrasekhar Morisetti, Susmitha V P N D Gummalla, Murali Mohan Kakarla, Jim Van Welzen
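The index table idea in this abstract can be sketched as follows: frames are laid out in a RAM buffer while an index of (offset, length) pairs is built, so the decoder can locate any frame directly instead of scanning the stream. The layout and function names are illustrative assumptions.

```python
def build_index(frames):
    """Lay frames out contiguously in a RAM buffer and build an index
    table of (offset, length) for each frame. Returns (buffer, index)."""
    buffer = bytearray()
    index = []
    for frame in frames:
        index.append((len(buffer), len(frame)))
        buffer.extend(frame)
    return bytes(buffer), index

def fetch_frame(buffer, index, n):
    """Decoder-side lookup: use the index table to locate frame n
    directly, avoiding a scan of the preceding frames."""
    offset, length = index[n]
    return buffer[offset:offset + length]

frames = [b"hdr", b"frame-one", b"f2"]
buf, idx = build_index(frames)
```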
Abstract: A storage system in a remote copy configuration includes a redirect mechanism. The redirect mechanism determines whether to redirect read operations to a remote storage system, which is part of the remote copy configuration, based on a power management policy and a redirect policy. The redirect mechanism takes into account response time data, input/output demand, power utilization data, and input/output classes and priorities to determine whether to redirect read access requests to the remote storage system. Redirection of read operations to the remote storage system results in reduced power consumption at the local system.
Type:
Grant
Filed:
July 24, 2012
Date of Patent:
November 4, 2014
Assignee:
International Business Machines Corporation
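A hedged sketch of the redirect decision this abstract describes, combining power utilization, remote response time, IO demand, and IO priority. Every threshold and parameter name below is an invented illustration; the patent does not specify the decision rule this concretely.

```python
def should_redirect(local_power_w, remote_response_ms, io_demand,
                    io_priority, power_cap_w=200.0,
                    max_remote_latency_ms=20.0, demand_ceiling=0.8):
    """Decide whether a read request should go to the remote copy.
    Assumed rule: never redirect high-priority IO; otherwise redirect
    when the local system is near its power-policy cap, the remote
    response time is acceptable, and local demand is high enough that
    shedding reads meaningfully reduces local power consumption."""
    if io_priority == "high":
        return False
    over_power = local_power_w >= power_cap_w
    remote_ok = remote_response_ms <= max_remote_latency_ms
    busy = io_demand >= demand_ceiling
    return over_power and remote_ok and busy
```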
Abstract: A portable device includes n (n≥2) electrical sockets, each of which is configured to accommodate and to electrically engage a removable external memory card; an input device for selecting accommodated and electrically engaged external memory cards for data reading; and an output device for outputting information that is derived from or related to data read from such selected electrically engaged external memory cards. The information may pertain to digital content of the selected external memory card, to the identity of the selected external memory card, or to the storage capacity of the selected external memory card.
Abstract: IO performance acceleration responds to IO requests made by an application to an operating system within a computing device; it interfaces with the logical and physical disk management components of the operating system and, within that pathway, provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache.
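The logical-to-physical translation plus block cache pathway in this abstract can be sketched as below. The class name, the dictionary-backed "disk", and the toy address translation are all assumptions for illustration.

```python
class DiskBlockCache:
    """System-memory cache keyed by physical disk address. A logical
    disk address is first translated to a physical one; the cache then
    fulfils the IO without touching the physical storage resource."""

    def __init__(self, disk, logical_to_physical):
        self.disk = disk                        # physical addr -> block
        self.translate = logical_to_physical    # logical -> physical
        self.cache = {}
        self.hits = 0

    def read(self, logical_addr):
        phys = self.translate(logical_addr)
        if phys in self.cache:
            self.hits += 1
            return self.cache[phys]
        block = self.disk[phys]                 # slow path: physical read
        self.cache[phys] = block
        return block

disk = {100: b"blk-a", 101: b"blk-b"}
cache = DiskBlockCache(disk, lambda lba: lba + 100)  # toy translation
first = cache.read(0)     # miss: fetched from the "disk", then cached
second = cache.read(0)    # hit: served from the disk block cache
```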
Abstract: A semiconductor storage device and a method of throttling performance of the same are provided. The semiconductor storage device includes a non-volatile memory device configured to store data in a non-volatile state, and a controller configured to control the non-volatile memory device. The controller calculates a new performance level, compares the calculated performance level with a predetermined reference, and determines the calculated performance level as an updated performance level according to the comparison result.
Type:
Grant
Filed:
June 22, 2011
Date of Patent:
October 14, 2014
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Han Bin Yoon, Yeong-Jae Woo, Dong Gi Lee, Young Kug Moon, Hyuck-Sun Kwon
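One plausible reading of the comparison step in this abstract (the abstract itself does not pin down the rule): adopt the newly calculated performance level only when it does not exceed the predetermined reference; otherwise retain the current level. The sketch below encodes that assumed rule.

```python
def update_performance_level(current_level, calculated_level, reference):
    """Sketch of the controller's throttling update: compare the newly
    calculated performance level with a predetermined reference and
    decide, from that comparison, whether it becomes the updated level.
    The accept-if-at-most-reference rule is an assumption."""
    if calculated_level <= reference:
        return calculated_level   # comparison permits the update
    return current_level          # otherwise keep the current level
```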
Abstract: A semiconductor storage device (SSD) and a method of throttling performance of the SSD are provided. The method includes gathering at least two workload data items related to a workload of the semiconductor storage device, estimating the workload using the at least two workload data items, and throttling the performance of the semiconductor storage device according to the estimated workload. Accordingly, the workload that the semiconductor storage device will undergo can be estimated.
Type:
Grant
Filed:
June 22, 2011
Date of Patent:
October 14, 2014
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Han Bin Yoon, Yeong-Jae Woo, Dong Gi Lee, Young Kug Moon, Hyuck-Sun Kwon
Abstract: In one embodiment of the invention, a system is disclosed including a master memory controller and a plurality of memory modules coupled to the master memory controller. Each memory module includes a plurality of read-writeable non-volatile memory devices in a plurality of memory slices to form a two-dimensional array of memory. Each memory slice in each memory module includes a slave memory controller coupled to the master memory controller. When the master memory controller issues a memory module request, it is partitioned into a slice request for each memory slice.
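The partitioning step at the end of this abstract can be sketched as a master controller striping one memory-module request into a slice request for each memory slice. The field names and the fixed-width striping scheme are illustrative assumptions.

```python
def partition_request(addr, data, num_slices, slice_width):
    """Partition a master memory-module request into one slice request
    per memory slice, each destined for that slice's slave memory
    controller. Data is striped across the slices in fixed-width chunks
    (an assumed scheme for illustration)."""
    slice_requests = []
    for i in range(num_slices):
        chunk = data[i * slice_width:(i + 1) * slice_width]
        slice_requests.append({"slice": i, "addr": addr, "data": chunk})
    return slice_requests

# A two-dimensional array: 4 slices per module, 2 bytes per slice here.
reqs = partition_request(0x40, b"ABCDEFGH", num_slices=4, slice_width=2)
```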
Abstract: Mixed-granularity higher-level redundancy for NVM provides improved higher-level redundancy operation with better error recovery and/or reduced redundancy information overhead. For example, pages of the NVM that are less reliable, such as relatively more prone to errors, are operated in higher-level redundancy modes having relatively more error protection, at a cost of relatively more redundancy information. Concurrently, blocks of the NVM that are more reliable are operated in higher-level redundancy modes having relatively less error protection, at a cost of relatively less redundancy information. Compared to techniques that operate the entirety of the NVM in the higher-level redundancy modes having relatively less error protection, techniques described herein provide better error recovery. Compared to techniques that operate the entirety of the NVM in the higher-level redundancy modes having relatively more error protection, the techniques described herein provide reduced redundancy information overhead.
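In the spirit of this abstract, the sketch below assigns each NVM region a redundancy mode from its measured reliability: less reliable regions get more error protection at a higher redundancy-information cost. The mode names, the single error-rate threshold, and the overhead figures are all invented for illustration.

```python
def choose_redundancy_mode(error_rate, threshold=1e-4):
    """Per-region mode selection: regions relatively more prone to
    errors operate in a mode with more error protection (and more
    redundancy overhead); more reliable regions use a cheaper mode."""
    if error_rate > threshold:
        return {"mode": "strong", "protected_failures": 2,
                "overhead": 2 / 16}
    return {"mode": "weak", "protected_failures": 1,
            "overhead": 1 / 16}

# Mixed granularity: each block gets its own mode from its raw error rate.
blocks = {"b0": 5e-6, "b1": 3e-4}   # block -> measured raw error rate
plan = {blk: choose_redundancy_mode(err) for blk, err in blocks.items()}
```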
Abstract: A semiconductor storage device and a method of throttling performance of the same are provided. The semiconductor storage device includes a non-volatile memory device, and a controller configured to receive a write command from a host and to program write data received from the host to the non-volatile memory device in response to the write command. The controller inserts idle time after receiving the write data from the host and/or after programming the write data to the non-volatile memory device.
Type:
Grant
Filed:
June 22, 2011
Date of Patent:
October 7, 2014
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Han Bin Yoon, Yeong-Jae Woo, Dong Gi Lee, Kwang Ho Kim, Hyuck-Sun Kwon
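The idle-time insertion described in this abstract can be sketched as a toy write schedule: the controller waits a fixed idle interval after programming each unit of write data, stretching the stream and throttling the host. The timing values and function name are illustrative assumptions.

```python
def throttled_write_schedule(num_writes, program_time_us, idle_time_us):
    """Return the completion time of each write when the controller
    inserts idle time after programming each unit of write data.
    Setting idle_time_us to 0 gives the unthrottled schedule."""
    t = 0.0
    completions = []
    for _ in range(num_writes):
        t += program_time_us        # program the write data
        completions.append(t)
        t += idle_time_us           # inserted idle time throttles the host
    return completions

unthrottled = throttled_write_schedule(3, program_time_us=100, idle_time_us=0)
throttled = throttled_write_schedule(3, program_time_us=100, idle_time_us=50)
```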
Abstract: A memory system includes a management-information restoring unit. The management-information restoring unit determines whether a short break has occurred by referring to a pre-log or a post-log in a NAND memory. The management-information restoring unit determines that a short break has occurred when the pre-log or the post-log is present in the NAND memory. In that case, the management-information restoring unit determines the timing of occurrence of the short break and, after selecting a pre-log or a post-log to be used for restoration, restores the management information by reflecting these logs on a snapshot. Thereafter, the management-information restoring unit applies recovery processing to all write-once blocks in the NAND memory, takes the snapshot again, and releases the old snapshot and logs.
Abstract: The system includes first and second storage systems. The first storage system includes a first control unit managing a plurality of logical units (LUs) and a plurality of first storage devices being controlled to store data by the first control unit, the plurality of LUs including a first type LU and a second type LU, the first type LU corresponding to at least one of the plurality of first storage devices of the first storage system so that data to be stored to the first type LU is stored to the at least one of the plurality of first storage devices of the first storage system, the second type LU mapping to an LU which is managed by a second storage system so that data to be stored to the second type LU is transferred to the LU managed by the second storage system.
Abstract: A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy.
Type:
Grant
Filed:
August 13, 2012
Date of Patent:
September 23, 2014
Assignee:
International Business Machines Corporation
Inventors:
Jian Li, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
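The placement rule in this abstract can be sketched directly: frequently written lines go to a farther (higher-latency) bank, frequently read lines to a closer bank. The bank lists and the simple majority test standing in for "frequently" are assumptions for illustration.

```python
def place_cache_line(reads, writes, closer_banks, farther_banks):
    """Read/write aware NUCA placement: a frequently written cache line
    is placed in one of the farther banks (the write-often region); a
    frequently read line is placed in one of the closer banks (the
    read-often region). Picking the first bank in each region is an
    assumed simplification."""
    if writes > reads:
        return farther_banks[0]    # write-often region
    return closer_banks[0]         # read-often region

# Assumed static partition of an 8-bank cache into the two regions.
closer, farther = ["bank0", "bank1"], ["bank6", "bank7"]
```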
Abstract: A method and apparatus for efficient memory bank utilization in multi-threaded packet processors is presented. A plurality of memory access requests are received and buffered by a plurality of memory First In First Out (FIFO) buffers, each of the memory FIFO buffers in communication with a memory controller. The memory access requests are distributed evenly across the memory banks by way of the memory controller. This reduces and/or eliminates the memory latency that can occur when sequential memory operations are performed on the same memory bank.
Type:
Grant
Filed:
November 24, 2010
Date of Patent:
September 9, 2014
Assignee:
Avaya Inc.
Inventors:
Hamid Assarpour, Mike Craren, Rich Modelski
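The even distribution this abstract describes can be sketched with per-bank FIFOs fed round-robin, so no bank receives back-to-back operations while others sit idle. The bank count and the round-robin policy are assumptions; the abstract leaves the controller's exact distribution scheme open.

```python
from collections import deque

NUM_BANKS = 4   # assumed bank count for illustration

def distribute(requests):
    """Buffer memory access requests in per-bank FIFO buffers,
    distributing them evenly across the banks via round-robin."""
    fifos = [deque() for _ in range(NUM_BANKS)]
    for i, req in enumerate(requests):
        fifos[i % NUM_BANKS].append(req)
    return fifos

fifos = distribute([f"req{i}" for i in range(10)])
```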
Abstract: Embodiments of the present invention provide a system, method, and program product for defragmenting files on a hard disk drive. A computer system identifies a plurality of movable blocks on a hard disk drive. The computer system categorizes each of the movable blocks into a category based on the write count of each movable block, wherein the movable blocks categorized into a first category have higher write counts than the movable blocks categorized into a second category. The computer system relocates the movable blocks of the first category to a first group of one or more adjacent tracks, and the computer system relocates the movable blocks of the second category to a second group of one or more adjacent tracks, wherein the first group of one or more adjacent tracks and the second group of one or more adjacent tracks share, at most, one common track.
Type:
Grant
Filed:
March 21, 2012
Date of Patent:
September 2, 2014
Assignee:
International Business Machines Corporation
Inventors:
Sandeep R. Patil, Sriram Ramanathan, Riyazahamad M. Shiraguppi, Matthew B. Trevathan
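The categorization step of this defragmentation scheme can be sketched as splitting movable blocks by write count. The single fixed threshold below is an assumption; the abstract only requires that the first category's write counts exceed the second's.

```python
def categorize_blocks(write_counts, hot_threshold=100):
    """Split movable blocks into a first (high write count) category and
    a second (low write count) category, the precursor to relocating
    each category onto its own group of adjacent tracks."""
    first, second = [], []
    for block, count in write_counts.items():
        (first if count >= hot_threshold else second).append(block)
    return first, second

counts = {"blk0": 500, "blk1": 3, "blk2": 120, "blk3": 40}
hot, cold = categorize_blocks(counts)
```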
Abstract: Embodiments of the present invention provide a system, method, and program product for allocating a block of physical storage space on a write surface of a hard disk drive. A computer system maintains a write count for each block on the hard disk drive. After receiving an allocation request, the computer system identifies one or more candidate blocks of storage space on the hard disk drive that can be selected to fulfill the allocation request. The computer system determines an estimated write count and identifies one or more allocated blocks whose write counts are within a specified number of write operations of the estimated write count. The computer system selects a candidate block based, at least in part, on physical proximity of the candidate block to one or more of the allocated blocks whose write counts are within a specified number of write operations of the estimated write count.
Type:
Grant
Filed:
March 21, 2012
Date of Patent:
August 26, 2014
Assignee:
International Business Machines Corporation
Inventors:
Sandeep R. Patil, Sriram Ramanathan, Riyazahamad M. Shiraguppi, Matthew B. Trevathan
Abstract: In one embodiment, the present invention includes a translation lookaside buffer (TLB) to store entries each having a translation portion to store a virtual address (VA)-to-physical address (PA) translation and a second portion to store bits for a memory page associated with the VA-to-PA translation, where the bits indicate attributes of information in the memory page. Other embodiments are described and claimed.
Type:
Grant
Filed:
July 17, 2012
Date of Patent:
August 26, 2014
Assignee:
Intel Corporation
Inventors:
David Champagne, Abhishek Tiwari, Wei Wu, Christopher J. Hughes, Sanjeev Kumar, Shih-Lien Lu
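The TLB entry layout this abstract claims (a translation portion plus a second portion of per-page attribute bits) can be sketched as below. The particular bit assignments and names are illustrative assumptions.

```python
class TLBEntry:
    """A TLB entry with a translation portion (the VA-to-PA mapping)
    and a second portion of attribute bits for the associated memory
    page. Bit meanings here (0: dirty, 1: executable) are assumed."""

    def __init__(self, va_page, pa_page, attr_bits=0):
        self.va_page = va_page
        self.pa_page = pa_page
        self.attr_bits = attr_bits

    def has_attr(self, bit):
        return bool(self.attr_bits & (1 << bit))

# A one-entry TLB: virtual page 0x4000 maps to physical page 0x9F000,
# with the "executable" attribute bit set.
tlb = {0x4000: TLBEntry(0x4000, 0x9F000, attr_bits=0b10)}

def translate(va_page):
    entry = tlb.get(va_page)
    return None if entry is None else entry.pa_page
```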
Abstract: Embodiments of the disclosure provide a system and method for dynamically allocating storage capacity in a user equipment buffer. In various embodiments of the invention, a plurality of transport blocks associated with a process are stored in a plurality of subpartitions of a partition of a buffer in a user equipment device.
Type:
Grant
Filed:
April 29, 2013
Date of Patent:
August 19, 2014
Assignee:
Apple Inc.
Inventors:
Jayesh H. Kotecha, Ning Chen, Ian C. Wong
Abstract: The storage of single or multiple references to the same data block in a storage pool is disclosed. Indexing of the data includes storing reference information in the storage pool as a mapping table. The mapping table indexes each data block in the storage pool. On any read or write request, the mapping information is used to retrieve the corresponding data block in the storage pool.
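A minimal sketch of the mapping-table idea in this abstract: identical data blocks share one stored copy with a reference count, and the table maps each logical block number to the shared entry for reads and writes. Using a content hash as the deduplication key is an assumed detail, not stated in the abstract.

```python
class StoragePool:
    """Storage pool holding single instances of data blocks, with a
    mapping table that indexes every block and carries the reference
    information used on each read or write request."""

    def __init__(self):
        self.blocks = {}    # content key -> (data, reference count)
        self.mapping = {}   # logical block number -> content key

    def write(self, lbn, data):
        key = hash(data)
        if key in self.blocks:
            stored, refs = self.blocks[key]
            self.blocks[key] = (stored, refs + 1)   # add a reference
        else:
            self.blocks[key] = (data, 1)            # first instance
        self.mapping[lbn] = key

    def read(self, lbn):
        """Use the mapping information to retrieve the data block."""
        return self.blocks[self.mapping[lbn]][0]

pool = StoragePool()
pool.write(0, b"same-bytes")
pool.write(1, b"same-bytes")   # second reference, no second copy stored
```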