Abstract: A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated to metadata tracks and a smaller portion dedicated to user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCBs).
Type:
Grant
Filed:
September 14, 2012
Date of Patent:
September 9, 2014
Assignee:
International Business Machines Corporation
Inventors:
Kevin John Ash, Michael Thomas Benhase, Lokesh Mohan Gupta, Kenneth Wayne Todd
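The allocation scheme in the abstract above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not IBM's implementation: the 3:1 metadata-to-user split, the function names, and the proportional TCB fan-out rule are all assumptions.

```python
# Hypothetical sketch of a disproportionate Scatter Index Table (SIT)
# split; the 75/25 ratio and all names are assumptions, not from the patent.

def allocate_sit(total_entries, metadata_fraction=0.75):
    """Split SIT entries: a larger portion for metadata tracks,
    a smaller portion for user data tracks."""
    metadata_entries = int(total_entries * metadata_fraction)
    return {"metadata": metadata_entries,
            "user": total_entries - metadata_entries}

def assign_tcbs(sit, tcb_count):
    """Distribute the allocated Task Control Blocks (TCBs) across the
    two SIT partitions in proportion to their size."""
    total = sum(sit.values())
    return {region: max(1, round(tcb_count * entries / total))
            for region, entries in sit.items()}

sit = allocate_sit(1024)
print(sit)                  # {'metadata': 768, 'user': 256}
print(assign_tcbs(sit, 8))  # {'metadata': 6, 'user': 2}
```

A real controller would presumably size the partitions from the observed mix of metadata and user tracks; the proportional TCB split here simply keeps both partitions serviced.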
Abstract: A system, storage medium, and method for structuring data are provided. The system connects to a storage device that stores original data. The method obtains the original data from the storage device and stores it, in the form of character strings, in a buffer memory according to end-of-file (EOF) tags. The method further constructs data arrays to store the character strings and arranges the data arrays into a data matrix. In addition, the method classifies each of the data arrays in the data matrix according to properties of the character strings, arranges the classified data arrays into a data file, and stores the data file in the buffer memory.
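The structuring flow above can be sketched as follows. All names, the newline used as the delimiting tag, and the numeric-versus-text property used for classification are assumptions for illustration.

```python
# Hypothetical sketch of the data-structuring flow: split original data into
# character strings, build one array per string, arrange the arrays as rows of
# a matrix, then classify each row. The "numeric vs. text" property and the
# newline delimiter are assumptions, not from the abstract.

def structure_data(original_data, tag="\n"):
    strings = [s for s in original_data.split(tag) if s]
    data_arrays = [list(s) for s in strings]   # one character array per string
    data_matrix = data_arrays                  # arrays arranged as matrix rows
    classified = {"numeric": [], "text": []}
    for row in data_matrix:
        string = "".join(row)
        key = "numeric" if string.isdigit() else "text"
        classified[key].append(string)
    return classified                          # the "data file" to buffer

buffer_memory = structure_data("42\nhello\n7\nworld\n")
print(buffer_memory)  # {'numeric': ['42', '7'], 'text': ['hello', 'world']}
```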
Abstract: Memory devices, methods for programming sense flags, methods for sensing flags, and memory systems are disclosed. In one such memory device, the odd bit lines of a flag memory cell array are directly connected (short-circuited) to a dynamic data cache, while the even bit lines of the flag memory cell array are disconnected from it. When an even page of a main memory cell array is read, the odd flag memory cells, which hold the flag data, are read at the same time, so it can be determined whether the odd page of the main memory cell array has been programmed. If the flag data indicates that the odd page has not been programmed, threshold voltage windows can be adjusted to determine the states of the sensed even memory cell page.
Type:
Application
Filed:
November 9, 2010
Publication date:
May 10, 2012
Inventors:
Shafqat Ahmed, Khaled Hasnat, Pranav Kalavade, Krishna Parat, Aaron Yip, Mark A. Helm, Andrew Bicksler
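A toy model of the flag-sensing behavior described above: reading an even main-array page also senses the odd flag cells, and the flag decides which set of threshold voltage windows decodes the page. The voltage values and state labels below are assumptions, not from the patent.

```python
# Hypothetical threshold-window adjustment driven by a flag read alongside
# the page. If the odd page was never programmed, the cells carry only the
# lower-page bit, so wider single-bit windows apply; otherwise the full
# multi-level windows apply. All voltages and labels are assumed.

LP_ONLY_WINDOWS = {"erased": (-3.0, 0.0), "programmed": (0.0, 5.0)}
MLC_WINDOWS = {"11": (-3.0, 0.0), "10": (0.0, 1.5),
               "00": (1.5, 3.0), "01": (3.0, 5.0)}

def read_even_page(cell_voltage, odd_flag_programmed):
    """Pick threshold windows based on the flag sensed with the page."""
    windows = MLC_WINDOWS if odd_flag_programmed else LP_ONLY_WINDOWS
    for state, (lo, hi) in windows.items():
        if lo <= cell_voltage < hi:
            return state
    return None

print(read_even_page(0.5, odd_flag_programmed=False))  # 'programmed'
print(read_even_page(0.5, odd_flag_programmed=True))   # '10'
```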
Abstract: Systems and/or methods that facilitate data management on a memory device are presented. A data management component can log and tag data, creating data tags that comprise static metadata, dynamic metadata, or a combination thereof. The data management component can perform file management to allocate placement of data and data tags in the memory, or to erase data from the memory. Allocation and erasure are based in part on the characteristics of the data tags and can follow embedded rules, an intelligent component, or a combination thereof. The data management component can also provide a search activity that utilizes the characteristics of the data tags and an intelligent component. The data management component can thereby optimize useful life, increase operating speed, improve accuracy, precision, and efficiency of non-volatile (e.g., flash) memory, and provide improved functionality to memory devices.
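A minimal sketch of tag-driven data management along the lines described above. The hot/cold placement rule, the update-count threshold, and every class and field name here are assumptions for illustration only.

```python
# Hypothetical tag-driven placement and search: static/dynamic metadata tags
# guide where data lands and let searches avoid a raw-data scan. The
# "hot/cold" embedded rule and its threshold are assumed, not from the patent.

class DataManager:
    def __init__(self):
        self.hot_block, self.cold_block = [], []

    def write(self, data, static_meta, dynamic_meta):
        tag = {"static": static_meta, "dynamic": dynamic_meta}
        # Embedded rule (assumed): frequently updated data goes to a hot block
        # so wear can be concentrated and leveled on purpose-chosen blocks.
        hot = dynamic_meta.get("updates", 0) > 10
        (self.hot_block if hot else self.cold_block).append((data, tag))

    def search(self, key, value):
        """Use tag characteristics to locate data without scanning raw data."""
        for block in (self.hot_block, self.cold_block):
            for data, tag in block:
                if tag["static"].get(key) == value:
                    return data
        return None

dm = DataManager()
dm.write("log-entry", {"type": "log"}, {"updates": 42})
dm.write("photo.jpg", {"type": "image"}, {"updates": 0})
print(dm.search("type", "image"))  # photo.jpg
```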
Abstract: A cache management system provides an improved page-latching methodology. A method for providing access to data in a multi-threaded computing system comprises: providing a cache containing data pages and a mapping to pages in memory of the multi-threaded computing system; associating a latch with each page in the cache to regulate access, the latch allowing multiple threads to share access to the page for reads and a single thread to obtain exclusive access to the page for writes; in response to a request from a first thread to read a particular page, determining whether the particular page is in the cache without acquiring any synchronization object regulating access and without blocking access by other threads; if the particular page is in the cache, reading the particular page unless another thread has exclusively latched it; and otherwise, if the particular page is not in the cache, bringing the page into the cache.
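The latching method above can be sketched with a per-page lock standing in for the shared/exclusive latch. The cache-presence check is a plain dictionary lookup done before any latch is taken, mirroring the "no synchronization object" step; everything else (names, the return-None-when-latched behavior) is an assumption.

```python
# Hypothetical sketch of per-page latching: presence in the cache is checked
# without taking any lock; writers take the page's exclusive latch; a reader
# that finds the page exclusively latched backs off instead of blocking.
# A real latch would support shared reader counts; this is a simplification.
import threading

class Page:
    def __init__(self, data):
        self.data = data
        self.latch = threading.Lock()   # exclusive latch held by writers

class PageCache:
    def __init__(self):
        self.pages = {}                 # mapping: page id -> Page

    def read(self, page_id, load_fn):
        page = self.pages.get(page_id)  # presence check: no latch acquired
        if page is None:                # miss: bring the page into the cache
            page = self.pages.setdefault(page_id, Page(load_fn(page_id)))
        if page.latch.locked():         # another thread holds it exclusively
            return None                 # caller retries rather than blocking
        return page.data

    def write(self, page_id, data):
        page = self.pages.setdefault(page_id, Page(None))
        with page.latch:                # single writer, exclusive access
            page.data = data

cache = PageCache()
cache.write(1, "page-one")
print(cache.read(1, lambda pid: None))  # page-one
```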
Abstract: Each array in a sequence of arrays is reordered. A first port receives the values of each array in the sequence in a first serial order, and a second port transmits the values in a different, second serial order. For each value in each array in the sequence, an address generator generates an address within the range of zero through one less than the number of values in the array. For each address from the generator, a memory performs an access to the location corresponding to that address. The access for each address includes a read from the location before a write to the location. For each array in the sequence, the writes for the addresses serially write the values of the array in the first serial order, and the reads for the addresses serially read the values in the second serial order.
Type:
Grant
Filed:
September 27, 2006
Date of Patent:
October 27, 2009
Assignee:
Xilinx, Inc.
Inventors:
Hemang Maheshkumar Parekh, Jeffrey Allan Graham, Hai-Jo Tarn, Elizabeth R. Cowie, Vanessa Yi-Mei Chou
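The read-before-write access in this last abstract is the classic trick behind in-place stream reorderers: each memory access first reads out a value from the previous array (emitting it in the second serial order) and then writes the incoming value at the same address, so one buffer serves back-to-back arrays. A sketch, assuming a bit-reversal permutation as the second order and ping-pong addressing between successive arrays (both assumptions for illustration):

```python
# Hypothetical read-before-write reordering buffer. Each incoming value
# triggers one access: read the stored value at the generated address
# (output in the second serial order), then write the new value there.
# Addressing alternates natural / bit-reversed per array so the single
# memory is reused without a flush between arrays.

def bit_reverse(i, bits):
    out = 0
    for _ in range(bits):
        out = (out << 1) | (i & 1)
        i >>= 1
    return out

def reorder_stream(arrays, bits):
    n = 1 << bits
    memory = [None] * n                   # single shared reorder memory
    output = []
    for k, array in enumerate(arrays):
        for i, value in enumerate(array):
            addr = i if k % 2 == 0 else bit_reverse(i, bits)
            output.append(memory[addr])   # read before...
            memory[addr] = value          # ...write at the same location
    # Drain the final array with the opposite addressing.
    for i in range(n):
        addr = i if len(arrays) % 2 == 0 else bit_reverse(i, bits)
        output.append(memory[addr])
    # The first array's reads hit an empty memory; drop those placeholders.
    return [v for v in output if v is not None]

print(reorder_stream([[0, 1, 2, 3], [4, 5, 6, 7]], bits=2))
# [0, 2, 1, 3, 4, 6, 5, 7] -- each array emitted in bit-reversed order
```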