Least Recently Used Patents (Class 711/136)
  • Publication number: 20130024624
    Abstract: Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.
    Type: Application
    Filed: July 22, 2011
    Publication date: January 24, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, James L. Hafner
  • Publication number: 20130024625
    Abstract: Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.
    Type: Application
    Filed: May 24, 2012
    Publication date: January 24, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, James L. Hafner
  • Publication number: 20130013866
    Abstract: A method includes updating a first tag access indicator of a storage structure. The tag access indicator indicates a number of accesses by a first thread executing on a processor to a memory resource for a portion of memory associated with a memory tag. The updating is in response to an access to the memory resource for a memory request associated with the first thread to the portion of memory associated with the memory tag. The method may include updating a first sum indicator of the storage structure indicating a sum of numbers of accesses to the memory resource being associated with a first access indicator of the storage structure for the first thread, the updating being in response to the access to the memory resource.
    Type: Application
    Filed: July 8, 2011
    Publication date: January 10, 2013
    Inventors: Lisa Hsu, Shekhar Srikantaiah, Jaewoong Chung
  • Patent number: 8352684
    Abstract: Computer implemented method, system and computer usable program code for cache management. A cache is provided, wherein the cache is viewed as a sorted array of data elements, wherein a top position of the array is a most recently used position of the array and a bottom position of the array is a least recently used position of the array. A memory access sequence is provided, and a training operation is performed with respect to a memory access of the memory access sequence to determine a type of memory access operation to be performed with respect to the memory access. Responsive to a result of the training operation, a cache replacement operation is performed using the determined memory access operation with respect to the memory access.
    Type: Grant
    Filed: September 23, 2008
    Date of Patent: January 8, 2013
    Assignee: International Business Machines Corporation
    Inventors: Roch Georges Archambault, Shimin Cui, Chen Ding, Yaoqing Gao, Xiaoming Gu, Raul Esteban Silvera, Chengliang Zhang
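The sorted-array view of the cache in the entry above (top position = most recently used, bottom position = least recently used) can be sketched minimally in Python; the class and method names are illustrative, not taken from the patent:

```python
class LruArray:
    """Cache viewed as a sorted array: index 0 = MRU, last index = LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = []  # index 0 is the most recently used position

    def access(self, tag):
        if tag in self.lines:
            self.lines.remove(tag)       # hit: pull out of current slot
        elif len(self.lines) >= self.capacity:
            self.lines.pop()             # miss on a full cache: evict the bottom (LRU)
        self.lines.insert(0, tag)        # promote to the top of the array
```

The training operation the abstract describes would decide, per access, which replacement operation to apply against this array; the sketch shows only the underlying array discipline.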
  • Publication number: 20130007373
    Abstract: A method, apparatus, and system for replacing at least one cache region selected from a plurality of cache regions, wherein each of the regions is composed of a plurality of blocks is disclosed. The method includes applying a first algorithm to the plurality of cache regions to limit the number of potential candidate regions to a preset value, wherein the first algorithm assesses the ability of a region to be replaced based on properties of the plurality of blocks associated with that region; and designating at least one of the limited potential candidate regions as a victim based on region-level information associated with each of the limited potential candidate regions.
    Type: Application
    Filed: June 30, 2011
    Publication date: January 3, 2013
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventors: Bradford M. Beckmann, Arkaprava Basu, Steven K. Reinhardt
  • Patent number: 8341350
    Abstract: A method for metadata management in a storage system may include providing a metadata queue of a maximum size; determining whether the metadata for a particular sub-LUN is held in the metadata queue; updating the metadata for the particular sub-LUN when the metadata for the particular sub-LUN is held in the metadata queue; inserting the metadata for the particular sub-LUN at the head of the metadata queue when the metadata queue is not full and the metadata is not held in the metadata queue; replacing an entry in the metadata queue with the metadata for the particular sub-LUN and moving the metadata to the head of the metadata queue when the metadata queue is full and the metadata is not held in the metadata queue; and controlling the number of sub-LUNs in the storage system to manage data accessed with respect to an amount of available data storage.
    Type: Grant
    Filed: February 3, 2011
    Date of Patent: December 25, 2012
    Assignee: LSI Corporation
    Inventors: Martin Jess, Brian McKean
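The queue discipline in the entry above (update in place on a hit; insert at the head when not full; replace an entry and move to the head when full) can be sketched as follows. The choice of the tail as the replaced entry is an assumption; the abstract leaves the replacement victim open:

```python
from collections import OrderedDict

class MetadataQueue:
    """Sketch of the sub-LUN metadata queue: the head is the MRU end."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.entries = OrderedDict()  # first key = head of the queue

    def touch(self, sub_lun, metadata):
        if sub_lun in self.entries:
            self.entries[sub_lun] = metadata               # held: update it
        else:
            if len(self.entries) >= self.max_size:
                self.entries.popitem(last=True)            # full: replace tail entry
            self.entries[sub_lun] = metadata               # insert the new metadata
        self.entries.move_to_end(sub_lun, last=False)      # move/insert at the head
```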
  • Publication number: 20120324171
    Abstract: An apparatus and method for copying data are disclosed. A data track to be replicated using a peer-to-peer remote copy (PPRC) operation is identified. The data track is encoded in a non-transitory computer readable medium disposed in a first data storage system. At a first time, a determination of whether the data track is stored in a data cache is made. At a second time, the data track is replicated to a non-transitory computer readable medium disposed in a second data storage system. The second time is later than the first time. If the data track was stored in the data cache at the first time, a cache manager is instructed to not demote the data track from the data cache. If the data track was not stored in the data cache at the first time, the cache manager is instructed that the data track may be demoted.
    Type: Application
    Filed: June 14, 2011
    Publication date: December 20, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Joseph S. Hyde, II, Warren K. Stanley
  • Publication number: 20120324172
    Abstract: An apparatus for performing data caching comprises at least one cache memory including multiple cache lines arranged into multiple segments, each segment having a subset of the cache lines associated therewith. The apparatus further includes a first plurality of counters, each of the counters being operative to track a number of active cache lines associated with a corresponding one of the segments. At least one controller included in the apparatus is operative to receive information relating to the number of active cache lines associated with a corresponding segment from the first plurality of counters and to implement a cache segment replacement policy for determining which of the segments to replace as a function of at least the information relating to the number of active cache lines associated with a corresponding segment.
    Type: Application
    Filed: June 17, 2011
    Publication date: December 20, 2012
    Applicant: LSI CORPORATION
    Inventors: Alexander Rabinovitch, Leonid Dubrovin
  • Patent number: 8332586
    Abstract: The present invention obtains with high precision, in a storage system, the effect of installing additional cache memory or removing cache memory, that is, the resulting change in the cache hit rate and in the performance of the storage system. To achieve this, when executing normal cache control in the operational environment of the storage system, the cache hit rate for a changed cache memory capacity is also obtained. With reference to the obtained cache hit rate, the peak performance of the storage system is obtained; and with reference to the target performance, the cache memory, the number of disks, and the other resources that are additionally required are obtained.
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: December 11, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Masanori Takada, Shuji Nakamura, Kentaro Shimada
  • Publication number: 20120311248
    Abstract: A system that includes a memory, a cache, a purge mechanism, and a memory interface mechanism. The memory includes a failing memory element at a failing memory location. The cache is configured for storing corrected contents of the failing memory element in a locked state, with the corrected contents stored in a first cache line. The purge mechanism is configured for selecting and removing cache lines that are not in the locked state from the cache to make room for new cache allocations. The memory interface mechanism is configured for receiving a request to access the failing memory location, determining that corrected contents of the failing memory location are stored in first cache line in the cache, and accessing the first cache line in the cache.
    Type: Application
    Filed: June 3, 2011
    Publication date: December 6, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Benjiman L. Goodman
  • Patent number: 8327076
    Abstract: The disclosure is related to data storage systems having multiple cache and to management of cache activity in data storage systems having multiple cache. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache and a controller coupled to the volatile memory and the non-volatile memory. The memory can be configured to selectively transfer read data from the first read cache to the second read cache based on a least recently used indicator of the read data and selectively transfer write data from the first write cache to the second write cache based on a least recently written indicator of the write data.
    Type: Grant
    Filed: May 13, 2009
    Date of Patent: December 4, 2012
    Assignee: Seagate Technology LLC
    Inventors: Robert D. Murphy, Robert W. Dixon, Steven S. Williams
  • Publication number: 20120303904
    Abstract: Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
    Type: Application
    Filed: May 21, 2012
    Publication date: November 29, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Kenneth W. Todd
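The inclusive/exclusive-list check in the entry above can be sketched as below. The promotion rule (promote only when the track is on neither list, i.e. the second cache does not already hold it) is an assumption; the abstract states only that the two lists drive the decision:

```python
class SecondCacheDirector:
    """Track membership lists for a two-level cache of unmodified tracks."""

    def __init__(self):
        self.inclusive = set()  # unmodified tracks held in both caches
        self.exclusive = set()  # unmodified tracks held only in the second cache

    def should_promote(self, track):
        # Promote to the second cache only if it does not already hold the track.
        return track not in self.inclusive and track not in self.exclusive

    def on_demoted_from_first(self, track):
        if self.should_promote(track):
            self.exclusive.add(track)       # now resident only in the second cache
        elif track in self.inclusive:
            # It was in both caches; after demotion it is second-cache only.
            self.inclusive.discard(track)
            self.exclusive.add(track)
```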
  • Publication number: 20120303872
    Abstract: Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device.
    Type: Application
    Filed: April 25, 2012
    Publication date: November 29, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
  • Publication number: 20120303905
    Abstract: In embodiments of the present invention, a file access request sent by an application to a hard disk is obtained, file information of the accessed file is acquired according to the request, the file accessed by the application is fragmented to obtain at least one file fragment, a condition for copying the file fragment from the hard disk to the cache is set, and the file fragment is copied to the cache when the copying condition is met in a storage unit. Compared with prior-art solutions in which the entire file is copied to the cache, the utilization efficiency of the cache is effectively improved.
    Type: Application
    Filed: August 9, 2012
    Publication date: November 29, 2012
    Inventors: Wei ZHANG, Mingchang WEI, Zhixin CHEN
  • Publication number: 20120290795
    Abstract: System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable.
    Type: Application
    Filed: July 9, 2012
    Publication date: November 15, 2012
    Applicant: AT&T MOBILITY II LLC
    Inventor: Sangar Dowlatkhah
  • Publication number: 20120290786
    Abstract: A device, system, and method are disclosed. In one embodiment, a device includes caching logic that is capable of receiving an I/O storage request from an operating system. The I/O storage request includes an input/output (I/O) data type tag that specifies a type of I/O data to be stored or loaded with the I/O storage request. The caching logic is also capable of determining, based at least in part on a priority level associated with the I/O data type, whether to allocate cache to the I/O storage request.
    Type: Application
    Filed: May 11, 2011
    Publication date: November 15, 2012
    Inventor: Michael P. Mesnier
  • Patent number: 8312217
    Abstract: A method for storing data, comprises the steps of: defining one or more intervals for one or more virtual disks, wherein each of the intervals has data; receiving a storage command in a cache, wherein the command has a logical address and a data block; determining a respective interval for the data block corresponding to the logical address of the data block; determining whether the data of the respective interval is to be written to a corresponding storage unit; and receiving a next storage command.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: November 13, 2012
    Assignee: Rasilient Systems, Inc.
    Inventors: Yee-Hsiang Sean Chang, Yiqiang Ding, Bo Leng
  • Patent number: 8301842
    Abstract: An apparatus for allocating entries in a set associative cache memory includes an array that provides a first pseudo-least-recently-used (PLRU) vector in response to a first allocation request from a first functional unit. The first PLRU vector specifies a first entry from a set of the cache memory specified by the first allocation request. The first vector is a tree of bits comprising a plurality of levels. Toggling logic receives the first vector and toggles predetermined bits thereof to generate a second PLRU vector in response to a second allocation request from a second functional unit generated concurrently with the first allocation request and specifying the same set of the cache memory specified by the first allocation request. The second vector specifies a second entry different from the first entry from the same set. The predetermined bits comprise bits of a predetermined one of the levels of the tree.
    Type: Grant
    Filed: July 6, 2010
    Date of Patent: October 30, 2012
    Assignee: VIA Technologies, Inc.
    Inventors: Colin Eddy, Rodney E. Hooker
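The tree-of-bits PLRU vector and the level-toggling trick in the entry above can be sketched for an 8-way set. The walk convention (bit 0 = go left) and the choice of the root level as the toggled level are illustrative; flipping the root bit sends the second walk into the other subtree, so the two concurrently allocated entries always differ:

```python
def plru_victim(bits, ways=8):
    """Walk a tree-PLRU bit vector (bits[0] = root) to the entry it selects."""
    node = 0
    for _ in range(ways.bit_length() - 1):  # 3 levels for 8 ways
        node = 2 * node + 1 + bits[node]    # 0 = left child, 1 = right child
    return node - (ways - 1)                # leaf node index -> way number

def toggled(bits, level):
    """Flip every bit of one tree level to form the second PLRU vector."""
    out = list(bits)
    start, end = 2 ** level - 1, 2 ** (level + 1) - 1
    for i in range(start, end):
        out[i] ^= 1
    return out
```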
  • Publication number: 20120272010
    Abstract: A process for caching data in a cache memory includes upon detecting that a first page is in a first or second list, the first page is moved to a most recently used (MRU) position in the second list. Upon detecting that the first page is in a first history list, a first target size is updated to a second target size for the first and second lists, the first page is moved from the first history list to the MRU position in the second list, and the first page is fetched to the cache memory. Upon detecting that the first page is in a second history list, the second target size is updated to a third target size for the first and second lists, and the first page is moved from the second history list to the MRU position in the second list.
    Type: Application
    Filed: July 3, 2012
    Publication date: October 25, 2012
    Applicant: International Business Machines Corporation
    Inventors: James Allen Larkby-Lahet, Prashant Pandey
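The page moves in the entry above (two resident lists, two history lists, a target size that grows on one history hit and shrinks on the other) follow the adaptive-replacement pattern. A condensed sketch of just those moves, with the eviction and list-sizing machinery of a full implementation omitted:

```python
class AdaptiveLists:
    """t1/t2 are the resident lists, b1/b2 the history lists."""

    def __init__(self, size):
        self.size = size
        self.t1, self.t2, self.b1, self.b2 = [], [], [], []
        self.target = 0  # adaptive target size for the first list

    def access(self, page):
        if page in self.t1 or page in self.t2:       # resident hit
            (self.t1 if page in self.t1 else self.t2).remove(page)
            self.t2.insert(0, page)                  # MRU position of second list
        elif page in self.b1:                        # first-history hit: grow target
            self.target = min(self.size, self.target + 1)
            self.b1.remove(page)
            self.t2.insert(0, page)                  # fetched back into the cache
        elif page in self.b2:                        # second-history hit: shrink target
            self.target = max(0, self.target - 1)
            self.b2.remove(page)
            self.t2.insert(0, page)
        else:                                        # cold miss
            self.t1.insert(0, page)
```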
  • Patent number: 8296523
    Abstract: Embodiments of the present invention provide a method, system and computer program product for dual timer fragment caching. In an embodiment of the invention, a dual timer fragment caching method can include establishing both a soft timeout and also a hard timeout for each fragment in a fragment cache. The method further can include managing the fragment cache by evicting fragments in the fragment cache subsequent to a lapsing of a corresponding hard timeout. The management of the fragment cache also can include responding to multiple requests by multiple requestors for a stale fragment in the fragment cache with a lapsed corresponding soft timeout by returning the stale fragment from the fragment cache to some of the requestors, by retrieving and returning a new form of the stale fragment to others of the requestors, and by replacing the stale fragment in the fragment cache with the new form of the stale fragment with a reset soft timeout and hard timeout.
    Type: Grant
    Filed: December 31, 2009
    Date of Patent: October 23, 2012
    Assignee: International Business Machines Corporation
    Inventors: Rohit D. Kelapure, Gautam Singh, Christian Steege, Filip R. Zawadiak
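The soft/hard-timeout behavior in the entry above can be sketched single-threaded: past the soft timeout the caller is served the stale fragment while a fresh copy replaces it; past the hard timeout the fragment is evicted and refetched. The `fetch` callback stands in for regenerating a fragment and is illustrative, not an API from the patent; the concurrent many-requestor case is simplified to one call:

```python
import time

class DualTimerCache:
    """Fragment cache with a soft (stale) and hard (evict) timeout per entry."""

    def __init__(self, soft, hard):
        self.soft, self.hard = soft, hard
        self.store = {}  # key -> (value, stored_at)

    def put(self, key, value, now=None):
        self.store[key] = (value, now if now is not None else time.time())

    def get(self, key, fetch, now=None):
        now = now if now is not None else time.time()
        value, stored_at = self.store[key]
        age = now - stored_at
        if age >= self.hard:        # hard timeout lapsed: evict and refetch
            value = fetch()
            self.put(key, value, now)
        elif age >= self.soft:      # soft timeout lapsed: serve the stale copy,
            fresh = fetch()         # replacing it with a new form whose
            self.put(key, fresh, now)  # timeouts are effectively reset
            return value
        return value
```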
  • Patent number: 8291169
    Abstract: A method of providing history based done logic includes receiving a cache line in a L2 cache; determining if the cache line has a history of access at least three times on a previous call into the L2 cache; providing the cache line directly to a processor if the history of access was less than the at least three times; and loading the cache line into an L1 cache if the history of access was the at least three times.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventor: David A. Luick
  • Publication number: 20120254549
    Abstract: A non-volatile memory system includes a memory section having a non-volatile cache portion storing data in a binary format, a primary user data storage section that stores user data in multi-state format, and an update memory area where the memory system stores data updating user data previously stored in the primary user data. The memory system allows a maximum number of blocks for use in the update memory area. When the memory system receives updated data corresponding to user data already written into the primary user data storage section, it determines whether a block of memory is available in the update memory area. In response to determining that a block of memory is not available in the update memory area, the system determines a block of the update memory to remove from the update memory; copies the data content of the determined update block into the cache portion of the memory section; and subsequently writes the updated data into the update memory.
    Type: Application
    Filed: March 29, 2011
    Publication date: October 4, 2012
    Inventors: Neil David Hutchison, Robert George Young
  • Patent number: 8281077
    Abstract: An apparatus and method for providing media content to electronic equipment includes transferring media content to the electronic equipment, and using rules to determine how pre-existing media content and the cached media content are stored in memory when free memory in the electronic equipment is insufficient to store the cached media content. At least part of the transferred media content is cached in memory of the electronic equipment for use at a later time.
    Type: Grant
    Filed: December 8, 2006
    Date of Patent: October 2, 2012
    Assignee: Sony Ericsson Mobile Communications AB
    Inventor: Edward C. Hyatt
  • Publication number: 20120246412
    Abstract: According to an embodiment, in a cache system, the sequence storage stores sequence data in association with each piece of data to be stored in the volatile cache memory in accordance with the number of pieces of data stored in the nonvolatile cache memory that have been unused for a longer period of time than the data stored in the volatile cache memory or the number of pieces of data stored in the nonvolatile cache memory that have been unused for a shorter period of time than the data stored in the volatile cache memory. The controller causes the first piece of data to be stored in the nonvolatile cache memory in a case where it can be determined that the first piece of data has been unused for a shorter period of time than any piece of the data stored in the nonvolatile cache memory.
    Type: Application
    Filed: September 16, 2011
    Publication date: September 27, 2012
    Inventors: Kumiko NOMURA, Keiko ABE, Shinobu FUJITA
  • Patent number: 8271750
    Abstract: A data processing system includes a data store having storage locations storing entries which can be used for a variety of purposes, such as operand value prediction, branch prediction, etc. An entry profile store stores profile data for more candidate entries than there are storage locations within the data store. The profile data is used to determine replacement policy for entries within the data store. The profile data can include hash values used to determine whether predictions associated with candidate entries were correct without having to store the full predictions within the profile data.
    Type: Grant
    Filed: January 18, 2008
    Date of Patent: September 18, 2012
    Assignee: ARM Limited
    Inventors: Sami Yehia, Marios Kleanthous
  • Patent number: 8261022
    Abstract: A method and apparatus are disclosed for locking the most recently accessed frames in a cache memory. The most recently accessed frames in a cache memory are likely to be accessed by a task again in the near future. The most recently used frames may be locked at the beginning of a task switch or interrupt to improve the performance of the cache. The list of most recently used frames is updated as a task executes and may be embodied, for example, as a list of frames addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors a number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.
    Type: Grant
    Filed: October 9, 2001
    Date of Patent: September 4, 2012
    Assignee: Agere Systems Inc.
    Inventors: Harry Dwyer, John Susantha Fernando
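The MRU-locking and adaptive-unlock mechanism in the entry above can be sketched as below. The names and the rule attributing misses to individual locked frames are illustrative assumptions; the abstract describes the miss counter and threshold but not their exact bookkeeping:

```python
class LockingCache:
    """Lock the MRU frames at a task switch; unlock a frame that misses too often."""

    def __init__(self, lock_count, miss_threshold):
        self.lock_count = lock_count
        self.miss_threshold = miss_threshold
        self.mru = []           # frame addresses, most recently used first
        self.locked = set()
        self.misses = {}        # locked frame -> miss count

    def access(self, frame):
        if frame in self.mru:
            self.mru.remove(frame)
        self.mru.insert(0, frame)       # keep the MRU list current as the task runs

    def on_task_switch(self):
        self.locked = set(self.mru[: self.lock_count])   # lock the MRU frames
        self.misses = {f: 0 for f in self.locked}

    def record_miss(self, frame):
        if frame in self.locked:
            self.misses[frame] += 1
            if self.misses[frame] > self.miss_threshold:
                self.locked.discard(frame)               # adaptive unlock
```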
  • Patent number: 8255637
    Abstract: A mass storage system and method incorporates a cache memory or a cache management module which handles dirty data using an access-based promotion replacement process through consistency checkpoints. The consistency checkpoints are associated with a global number of snapshots generated in the storage system. The consistency checkpoints are organized within the sequence of dirty data in an invariable order corresponding to storage volumes with the generated snapshots, such that, responsive to destaging a consistency checkpoint the global number of generated snapshots are recorded and then read during recovery of the failed storage system.
    Type: Grant
    Filed: September 27, 2010
    Date of Patent: August 28, 2012
    Assignee: Infinidat Ltd.
    Inventor: Yechiel Yochai
  • Publication number: 20120215986
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least- recently-used queue.
    Type: Application
    Filed: April 30, 2012
    Publication date: August 23, 2012
    Inventor: Ivan Schreter
  • Publication number: 20120210058
    Abstract: Methods, systems, and computer programs for managing storage using a solid state drive (SSD) read cache memory are presented. One method includes an operation for determining whether data corresponding to a read request is available in a SSD memory when the read request causes a miss in a memory cache. The read request is served from the SSD memory when the data is available in the SSD memory, and when the data is not available in the SSD memory, SSD memory tracking logic is invoked and the read request is served from a hard disk drive. Invoking the SSD memory tracking logic includes determining whether a fetch criteria for the data has been met, and loading the data corresponding to the read request in the SSD memory when the fetch criteria has been met. The use of the SSD as a read cache improves memory performance for random data reads.
    Type: Application
    Filed: April 24, 2012
    Publication date: August 16, 2012
    Applicant: Adaptec, Inc.
    Inventors: Steffen Mittendorff, Dieter Massa
  • Patent number: 8244981
    Abstract: In one embodiment, a memory that is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: August 14, 2012
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Patent number: 8239631
    Abstract: A system and method for replacing data in a cache utilizes cache block validity information, which contains information that indicates that data in a cache block is no longer needed for processing, to maintain least recently used information of cache blocks in a cache set of the cache, identifies the least recently used cache block of the cache set using the least recently used information of the cache blocks in the cache set, and replaces data in the least recently used cache block of the cache set with data from main memory.
    Type: Grant
    Filed: April 24, 2009
    Date of Patent: August 7, 2012
    Assignee: Entropic Communications, Inc.
    Inventors: Jan-Willem van de Waerdt, Johan Gerard Willem Maria Janssen, Maurice Penners
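The validity-aware victim choice in the entry above reduces to: a block whose data is known to be no longer needed is replaced before any valid block, and otherwise LRU order decides. A sketch, where the `(tag, valid, last_used)` tuple shape is illustrative rather than the patent's layout:

```python
def pick_victim(blocks):
    """Pick a replacement victim in one cache set.

    blocks: list of (tag, valid, last_used) tuples for the set.
    """
    invalid = [b for b in blocks if not b[1]]
    if invalid:
        return invalid[0][0]                     # reuse a no-longer-needed block first
    return min(blocks, key=lambda b: b[2])[0]    # otherwise evict the true LRU block
```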
  • Patent number: 8239632
    Abstract: System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: August 7, 2012
    Assignee: AT&T Mobility II LLC
    Inventor: Sangar Dowlatkhah
  • Patent number: 8230174
    Abstract: A multi-queue FIFO memory device that uses existing pins of the device to load a desired number of queues (N) into a queue number register is provided. The queue number register is coupled to a queue size look-up table (LUT), which provides a queue size value in response to the contents of the queue number register. The queue size value indicates the amount of memory (e.g., the number of memory blocks) to be included in each of the N queues. The queue size value is provided to a queue start/end address generator, which automatically generates the start and end address associated with each queue in response to the queue size value. These start and end addresses are stored in queue address register files, which enable proper memory read/write and flag counter operations.
    Type: Grant
    Filed: January 21, 2005
    Date of Patent: July 24, 2012
    Assignee: Integrated Device Technology, Inc.
    Inventors: Mario Au, Jason Z. Mo, Xiaoping Fang
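The start/end address generation described above — a queue-size value driving automatic per-queue address ranges — reduces to simple arithmetic. A minimal sketch, with illustrative names (the patent does this in hardware via a LUT and register files):

```python
def queue_addresses(total_blocks, num_queues):
    """Given a total memory size in blocks and a desired queue count N,
    compute the (start, end) block addresses of each of the N queues,
    mirroring the automatic start/end address generator described above."""
    queue_size = total_blocks // num_queues   # the LUT's queue size value
    ranges = []
    for q in range(num_queues):
        start = q * queue_size
        end = start + queue_size - 1          # inclusive end address
        ranges.append((start, end))
    return ranges
```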
  • Patent number: 8219861
    Abstract: Provided is a semiconductor storage device that can efficiently perform a refresh operation, comprising a non-volatile semiconductor memory that stores data in blocks, a block being the unit of data erasing, and a controlling unit that monitors the error count of data stored in a monitored block selected from the blocks and refreshes the data in any monitored block whose error count is equal to or larger than a threshold value.
    Type: Grant
    Filed: October 11, 2011
    Date of Patent: July 10, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Toshikatsu Hida, Shinichi Kanno, Hirokuni Yano, Kazuya Kitsunai, Shigehiro Asano, Junji Yano
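The threshold test at the heart of the refresh scheme above is straightforward; a minimal sketch, with an assumed mapping from block number to observed error count:

```python
def blocks_to_refresh(error_counts, threshold):
    """Return the monitored blocks whose error count is equal to or larger
    than the refresh threshold (error_counts: block number -> error count)."""
    return [blk for blk, errs in error_counts.items() if errs >= threshold]
```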
  • Patent number: 8219758
    Abstract: In an embodiment, a non-transparent memory unit is provided which includes a non-transparent memory and a control circuit. The control circuit may manage the non-transparent memory as a set of non-transparent memory blocks. Software executing on one or more processors may request a non-transparent memory block in which to process data. The control circuit may allocate a first block, and may return an address (or other indication) of the allocated block so that the software can access the block. The control circuit may also provide automatic data movement between the non-transparent memory and a main memory system to which the non-transparent memory unit is coupled. For example, the automatic data movement may include filling data from the main memory system to the allocated block, or flushing the data in the allocated block to the main memory system after the processing of the allocated block is complete.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: July 10, 2012
    Assignee: Apple Inc.
    Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
  • Patent number: 8214602
    Abstract: In one embodiment, a processor comprises a data cache and a load/store unit (LSU). The LSU comprises a queue and a control unit, and each entry in the queue is assigned to a different load that has accessed the data cache but has not retired. The control unit is configured to update the data cache hit status of each load represented in the queue as a content of the data cache changes. The control unit is configured to detect a snoop hit on a first load in a first entry of the queue responsive to: the snoop index matching a load index stored in the first entry, the data cache hit status of the first load indicating hit, the data cache detecting a snoop hit for the snoop operation, and a load way stored in the first entry matching a first way of the data cache in which the snoop operation is a hit.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: July 3, 2012
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Ashutosh S. Dhodapkar, Michael G. Butler
  • Patent number: 8214599
    Abstract: A system analyzes access patterns in a storage system. Logic circuitry in the system identifies different address regions of contiguously accessed memory locations. A statistical record identifies a number of storage accesses to the different address regions and a historical record identifies previous address regions accessed prior to the address regions currently being accessed. The logic circuitry is then used to prefetch data from the different address regions according to the statistical record and the historical record.
    Type: Grant
    Filed: October 23, 2009
    Date of Patent: July 3, 2012
    Assignee: GridIron Systems, Inc.
    Inventors: Erik de la Iglesia, Som Sikdar
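The statistical and historical records described above can be sketched as two simple structures: an access counter per address region, and a region-to-region transition table. The class and the prefetch policy below are illustrative assumptions, not the patented logic circuitry:

```python
from collections import defaultdict

class PrefetchAnalyzer:
    """Illustrative sketch: the statistical record counts accesses per address
    region; the historical record tracks which region followed which; both
    combine to choose a region to prefetch."""
    def __init__(self):
        self.access_counts = defaultdict(int)                      # statistical
        self.transitions = defaultdict(lambda: defaultdict(int))   # historical
        self.prev_region = None

    def record_access(self, region):
        self.access_counts[region] += 1
        if self.prev_region is not None:
            self.transitions[self.prev_region][region] += 1
        self.prev_region = region

    def prefetch_candidate(self, region, min_count=2):
        """Suggest the region most often seen after `region`, provided the
        statistical record shows it is accessed frequently enough."""
        followers = self.transitions.get(region)
        if not followers:
            return None
        best = max(followers, key=followers.get)
        return best if self.access_counts[best] >= min_count else None
```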
  • Publication number: 20120124295
    Abstract: Methods and structure for automated determination and reconfiguration of the size of a cache memory in a storage system. Features and aspects hereof generate historical information regarding frequency of hits on cache lines in the cache memory. The history maintained is then analyzed to determine a desired cache memory size. The historical information regarding cache memory usage may be communicated to a user who may then direct the storage system to reconfigure its cache memory to a desired cache memory size. In other embodiments, the storage system may automatically determine the desired cache memory size and reconfigure its cache memory. The method may be performed automatically periodically, and/or in response to a user's request, and/or in response to detecting thrashing caused by least recently used (LRU) cache replacement algorithms in the storage system.
    Type: Application
    Filed: November 17, 2010
    Publication date: May 17, 2012
    Applicant: LSI CORPORATION
    Inventors: Donald R. Humlicek, Timothy R. Snider, Brian D. McKean
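The sizing analysis above — using per-line hit history to pick a desired cache size — can be sketched with a stack-distance-style histogram: if `lru_hit_counts[i]` is the number of hits observed at LRU depth `i+1`, the smallest depth capturing a target fraction of all hits is a reasonable desired size. The function and its parameters are illustrative assumptions:

```python
def desired_cache_size(lru_hit_counts, target_hit_fraction=0.95):
    """Return the smallest cache size (in lines) whose cumulative hit count
    reaches target_hit_fraction of all observed hits; lru_hit_counts[i] is
    the hit count at LRU stack depth i+1 (illustrative sketch)."""
    total = sum(lru_hit_counts)
    running = 0
    for depth, hits in enumerate(lru_hit_counts, start=1):
        running += hits
        if running >= target_hit_fraction * total:
            return depth
    return len(lru_hit_counts)
```

A long tail in the histogram (many hits only at large depths) is also a signature of the LRU thrashing the abstract mentions as a reconfiguration trigger.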
  • Publication number: 20120124296
    Abstract: A method and apparatus for controlling re-acquiring lines of memory in a cache is provided. The method comprises storing at least one atomic instruction in a queue in response to the atomic instruction being retired, and identifying a target memory location associated with load and store portions of the atomic instruction. A line of memory associated with the target memory location is acquired and stored in a cache. Subsequently, if the line of acquired memory is evicted, then it is re-acquired in response to the atomic instruction becoming the oldest instruction stored in the queue. The apparatus comprises a queue and a cache. The queue is adapted for storing at least one atomic instruction in response to the atomic instruction being retired. A target memory location is associated with load and store portions of the atomic instruction.
    Type: Application
    Filed: November 17, 2010
    Publication date: May 17, 2012
    Inventor: CHRISTOPHER D. BRYANT
  • Patent number: 8180970
    Abstract: A two pipe pass method for least recently used (LRU) compartment capture in a multiprocessor system. The method includes receiving a fetch request via a requesting processor and accessing a cache directory based on the received fetch request, performing a first pipe pass by determining whether a fetch hit or a fetch miss has occurred in the cache directory, and determining an LRU compartment associated with a specified congruence class of the cache directory based on the fetch request received, when it is determined that a fetch miss has occurred, and performing a second pipe pass by using the LRU compartment determined and the specified congruence class to access the cache directory and to select an LRU address to be cast out of the cache directory.
    Type: Grant
    Filed: February 22, 2008
    Date of Patent: May 15, 2012
    Assignee: International Business Machines Corporation
    Inventors: Arthur J. O'Neill, Jr., Michael F. Fee, Pak-kin Mak
  • Patent number: 8180969
    Abstract: A cache stores information in each of a plurality of cache lines. Addressing circuitry receives memory addresses for comparison with multiple ways of stored addresses to determine a hit condition representing a match of a stored address and a received address. A pseudo least recently used (PLRU) tree circuit stores one or more states of a PLRU tree and implements a tree having a plurality of levels beginning with a root and indicates one of a plurality of ways in the cache. Each level has one or more nodes. Multiple nodes within a same level are child nodes to a parent node of an immediately higher level. PLRU update circuitry that is coupled to the addressing circuitry and the PLRU tree circuit receives lock information to lock one or more lines of the cache and prevent a PLRU tree state from selecting a locked line.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: May 15, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
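The tree-PLRU-with-locking behavior above can be sketched for a 4-way set: three tree bits select a victim, MRU updates point the bits away from the accessed way, and victim selection is steered away from locked ways. This is an illustrative model under assumed bit conventions, not the patented circuit:

```python
class TreePLRU4:
    """Illustrative 4-way tree-PLRU with way locking. bits[0] is the root
    (0 = victim on left half, ways 0-1; 1 = right half, ways 2-3);
    bits[1]/bits[2] pick within each half. Locked ways are never victims."""
    def __init__(self):
        self.bits = [0, 0, 0]
        self.locked = [False] * 4

    def access(self, way):
        # MRU update: point every bit on the path away from the accessed way.
        self.bits[0] = 1 if way < 2 else 0
        if way < 2:
            self.bits[1] = 1 - way        # victim becomes the sibling way
        else:
            self.bits[2] = 3 - way

    def victim(self):
        # Follow the tree bits, but never descend into a fully locked half
        # and never return a locked way.
        left_free = not (self.locked[0] and self.locked[1])
        go_left = (self.bits[0] == 0 and left_free) or \
                  (self.locked[2] and self.locked[3])
        if go_left:
            w = self.bits[1]              # 0 -> way 0, 1 -> way 1
            return w if not self.locked[w] else 1 - w
        w = 2 + self.bits[2]              # 0 -> way 2, 1 -> way 3
        return w if not self.locked[w] else 5 - w
```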
  • Publication number: 20120117329
    Abstract: Combination based LRU caching employs a mapping mechanism in an LRU cache separate from a set of LRU caches for storing the values used in the combinations. The mapping mechanism is used to track the valid combinations of the values in the LRU caches storing the values resulting in any given value being stored at most once. Through the addition of a byte pointer significantly more combinations may be tracked in the same amount of cache memory with full LRU semantics on both the values and combinations.
    Type: Application
    Filed: November 9, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Jeffrey Anderson, David Lannoye
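The scheme above — each value stored at most once, with a separate mapping cache tracking combinations of value indices under full LRU semantics — can be sketched as follows. The class and index scheme are illustrative assumptions (the patent uses byte pointers; plain integer indices stand in here):

```python
from collections import OrderedDict

class ComboLRU:
    """Illustrative sketch: distinct values live once in a value cache and are
    referenced by small indices; a mapping cache stores combinations as tuples
    of those indices, with LRU eviction on the combinations."""
    def __init__(self, max_combos=4):
        self.values = OrderedDict()   # value -> index, any value stored once
        self.combos = OrderedDict()   # tuple of indices, LRU-ordered
        self.max_combos = max_combos

    def _index(self, value):
        if value not in self.values:
            self.values[value] = len(self.values)
        self.values.move_to_end(value)        # LRU semantics on values too
        return self.values[value]

    def add_combination(self, parts):
        key = tuple(self._index(p) for p in parts)
        if key in self.combos:
            self.combos.move_to_end(key)      # refresh LRU position
        else:
            self.combos[key] = True
            if len(self.combos) > self.max_combos:
                self.combos.popitem(last=False)   # evict LRU combination
        return key
```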
  • Publication number: 20120117328
    Abstract: A method for caching data in storage media implementing tiered data structures may include storing a first portion of critical data in a first storage medium at the instruction of a storage control module. The first portion of critical data may be separated into data having different priority levels based upon at least one data utilization characteristic associated with a file system implemented by the storage control module. The method may also include storing a second portion of data in a second storage medium at the instruction of the storage control module. The second storage medium may have at least one performance, reliability, or security characteristic different from the first storage medium.
    Type: Application
    Filed: November 4, 2010
    Publication date: May 10, 2012
    Applicant: LSI CORPORATION
    Inventors: Brian McKean, Mark Ish
  • Patent number: 8171228
    Abstract: Garbage collection associated with a cache, with reduced complexity. In an embodiment, a relative rank is computed for each cache item based on its relative frequency of access and its relative non-idle time compared to other entries. Each item having a relative rank less than a threshold is considered a suitable candidate for replacement. Thus, when a new item is to be stored in the cache, an entry corresponding to an identified item is used for storing the new item.
    Type: Grant
    Filed: November 12, 2009
    Date of Patent: May 1, 2012
    Assignee: Oracle International Corporation
    Inventor: Srinivasulu Dharmika Midda
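The relative-rank computation above can be sketched by normalizing each item's access count and idle time against the extremes across the cache; items ranking below a threshold become replacement candidates. The field names, the averaging of the two components, and the threshold are illustrative assumptions:

```python
def replacement_candidates(entries, now, threshold=0.5):
    """Illustrative sketch: rank each cache item by relative access frequency
    and relative recency (non-idle time) versus the other entries; items with
    rank below the threshold are suitable replacement candidates.
    entries: list of {"key", "accesses", "last_access"} dicts."""
    max_freq = max(e["accesses"] for e in entries) or 1
    max_age = max(now - e["last_access"] for e in entries) or 1
    candidates = []
    for e in entries:
        rel_freq = e["accesses"] / max_freq
        rel_recency = 1 - (now - e["last_access"]) / max_age
        rank = (rel_freq + rel_recency) / 2
        if rank < threshold:
            candidates.append(e["key"])
    return candidates
```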
  • Patent number: 8171227
    Abstract: A system and method determines when the entries of a reply cache, organized into microcaches each of which is allocated to a client connection, may be retired or released, thereby freeing up memory structures. A plurality of connection statistics are defined and tracked for each microcache and for the entries of the microcache. The connection statistics indicate the value of the microcache and its entries to the client. The connection statistics include a measure of the time since the last idempotent or non-idempotent request (TOLR) was received, and a count of the number of idempotent requests that have been received since the last non-idempotent request (RISLR). A microcache with a TOLR time and a RISLR count that exceed respective thresholds may be expired and removed from the reply cache.
    Type: Grant
    Filed: March 11, 2009
    Date of Patent: May 1, 2012
    Assignee: NetApp, Inc.
    Inventors: Jason L. Goldschmidt, Peter D. Shah, Thomas M. Talpey
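The expiry condition above combines the two statistics directly: a microcache is retired only when both the TOLR time and the RISLR count exceed their thresholds. A minimal sketch with illustrative parameter names:

```python
def microcache_expired(now, last_request_time, idempotent_since_last_nonidem,
                       tolr_threshold, rislr_threshold):
    """Return True when both the time since the last request (TOLR) and the
    count of idempotent requests since the last non-idempotent request (RISLR)
    exceed their respective thresholds."""
    tolr = now - last_request_time
    return tolr > tolr_threshold and \
        idempotent_since_last_nonidem > rislr_threshold
```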
  • Patent number: 8171229
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
    Type: Grant
    Filed: October 4, 2010
    Date of Patent: May 1, 2012
    Assignee: SAP AG
    Inventor: Ivan Schreter
  • Patent number: 8166249
    Abstract: A method to perform a least recently used (LRU) algorithm for a co-processor is described. In order to directly use instructions of a core processor and to directly access main storage by the virtual addresses of said core processor, the co-processor comprises a TLB for virtual-to-absolute address translations plus a dedicated memory storage also including said TLB, wherein said TLB consists of at least two zones, more than one of which can be assigned at a time in a flexible manner. The method is characterized in that one or more zones are replaced depending on an actual compression service call (CMPSC) instruction.
    Type: Grant
    Filed: March 6, 2009
    Date of Patent: April 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Thomas Koehler, Siegmund Schlechter
  • Patent number: 8166229
    Abstract: In some embodiments, a non-volatile cache memory may include a multi-level non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the multi-level non-volatile cache memory, wherein the controller is configured to control utilization of the multi-level non-volatile cache memory. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: April 24, 2012
    Assignee: Intel Corporation
    Inventors: R. Scott Tetrick, Dale Juenemann, Robert Brennan
  • Patent number: 8166248
    Abstract: A system includes logic to cache at least one block in at least one cache if the block has a popularity that compares favorably to the popularity of other blocks in the cache, where the popularity of the block is determined by reads of the block from persistent storage and reads of the block from the cache.
    Type: Grant
    Filed: December 12, 2007
    Date of Patent: April 24, 2012
    Assignee: ARRIS Group, Inc.
    Inventors: Christopher A. Provenzano, Benedict J. Jackson, Michael N. Galassi, Carl H. Seaton
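The popularity-gated admission above — a block enters the cache only if its popularity compares favorably to blocks already cached — can be sketched as follows. The function, its capacity parameter, and the "evict the least popular resident" policy are illustrative assumptions:

```python
def admit(block, cached, capacity, popularity):
    """Illustrative sketch: admit `block` to the cache set `cached` if there is
    free space, or if its popularity (reads from persistent storage plus reads
    from the cache) beats the least popular resident block."""
    if block in cached:
        return True
    if len(cached) < capacity:
        cached.add(block)
        return True
    coldest = min(cached, key=lambda b: popularity.get(b, 0))
    if popularity.get(block, 0) > popularity.get(coldest, 0):
        cached.remove(coldest)
        cached.add(block)
        return True
    return False
```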
  • Publication number: 20120096226
    Abstract: A two-level replacement scheme is provided for selecting an entry in a cache memory to replace when a cache miss takes place and the memory is full. The scheme divides the tags associated with each memory location of the cache into two or more groups, each group relating to a subset of memory locations of the cache. The scheme uses a first algorithm to select one of the groups and passes the tags for the group through a second algorithm. The second algorithm produces a local index which, when combined with a group index, produces a replacement index that identifies a memory location in the cache to replace.
    Type: Application
    Filed: October 18, 2010
    Publication date: April 19, 2012
    Inventors: Stephen P. Thompson, Robert Krick, Tarun Nakra
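The two-level scheme above composes its replacement index from a group index (first algorithm) and a local index within that group (second algorithm). A minimal sketch, with the two algorithms passed in as stand-in callables since the abstract does not fix them:

```python
def two_level_victim(num_groups, ways_per_group, pick_group, pick_local):
    """Illustrative sketch: a first algorithm selects one of the tag groups,
    a second produces a local index within it; the combined replacement index
    identifies the memory location in the cache to replace."""
    group = pick_group(num_groups)              # first-level selection
    local = pick_local(group, ways_per_group)   # second-level selection
    return group * ways_per_group + local       # combined replacement index
```

For example, with 4 groups of 8 ways, group 2 and local index 3 combine to replacement index 19.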