Least Recently Used Patents (Class 711/136)
-
Publication number: 20110099333
Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
Type: Application
Filed: October 20, 2010
Publication date: April 28, 2011
Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
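The bias described above can be illustrated with a minimal sketch: an LRU cache in which lines marked "streaming data no longer needed" are chosen as victims before falling back to plain LRU order. The class name and API below are illustrative assumptions, not taken from the patent.

```python
from collections import OrderedDict

class SDNBiasedLRU:
    """Sketch of an LRU replacement policy biased toward evicting
    lines in the SDN (streaming data no longer needed) state first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # key -> value, least recently used first
        self.sdn = set()            # keys currently in the SDN state

    def access(self, key, value, last_use=False):
        # A compiler-identified last-use access marks the line SDN.
        if key not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        self.lines[key] = value
        self.lines.move_to_end(key)         # normal LRU recency update
        if last_use:
            self.sdn.add(key)
        else:
            self.sdn.discard(key)

    def _evict(self):
        # Bias: pick an SDN line if one exists; otherwise the true LRU line.
        victim = next((k for k in self.lines if k in self.sdn),
                      next(iter(self.lines)))
        del self.lines[victim]
        self.sdn.discard(victim)
```

With capacity 2, a line touched by a last-use access is replaced before an older non-streaming line, which is the intended bias.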
-
Publication number: 20110093654
Abstract: A data processing apparatus 1 comprises data processing circuitry 2, a memory 8 for storing data and a cache memory 5 for storing cached data from the memory 8. The cache memory 5 is partitioned into cache segments 12 which may be individually placed in a power saving state by power supply circuitry 15 under control of power control circuitry 22. The number of segments which are active at any time may be dynamically adjusted in dependence upon operating requirements of the processor 2. An eviction selection mechanism 35 is provided to select evictable cached data for eviction from the cache. A cache compacting mechanism 40 is provided to evict evictable cached data from the cache and to store non-evictable cached data in fewer cache segments than were used to store the cached data prior to eviction of the evictable cached data.
Type: Application
Filed: October 20, 2009
Publication date: April 21, 2011
Applicant: The Regents of the University of Michigan
Inventors: David Andrew Roberts, Trevor Nigel Mudge, Thomas Friedrich Wenisch
-
Patent number: 7930588
Abstract: A method, system, and computer program product for managing modified metadata in a storage controller cache pursuant to a recovery action by a processor in communication with a memory device is provided. A count of modified metadata tracks for a storage rank is compared against a predetermined criterion. If the predetermined criterion is met, a storage volume having the storage rank is designated with a metadata invalidation flag to defer metadata invalidation of the modified metadata tracks until after the recovery action is performed.
Type: Grant
Filed: January 28, 2009
Date of Patent: April 19, 2011
Assignee: International Business Machines Corporation
Inventors: Lawrence Carter Blount, Lokesh Mohan Gupta, Carol Santich Melgren, Kenneth Wayne Todd
-
Publication number: 20110087845
Abstract: The present disclosure generally relates to cache memory systems and/or techniques to identify dead cache blocks in cache memory systems. Example systems may include a cache memory that is accessible by a cache client. The cache memory may include a plurality of storage locations for a first cache block, with a most recently used position in the cache memory. A cache controller may be configured to predict whether the first cache block stored in the cache memory is identified as a dead cache block based on a cache burst of the first cache block. The cache burst may comprise a first access of the first cache block by a cache client and any subsequent contiguous accesses of the first cache block following the first access by the cache client while the first cache block is in a most recently used position of the cache set.
Type: Application
Filed: October 14, 2009
Publication date: April 14, 2011
Inventors: Doug Burger, Haiming Liu
-
Patent number: 7925849
Abstract: A bus arbiter receives requests of initiators, and internally includes a page hit/miss determining unit with permissible determining function, a bank open/close determining unit with permissible determining function, and an LRU unit with permissible determining function. Regarding the priority of the request arbitration on the requests, the bank priority on the SDRAM is determined in the order of page hit, bank open, and LRU. Furthermore, each determining unit internally includes a permissible time determining unit, and processes, at top priority, the request of the initiator whose corresponding permissible time is below the count threshold value in the priority processing of the determining unit.
Type: Grant
Filed: May 16, 2008
Date of Patent: April 12, 2011
Assignee: Renesas Electronics Corporation
Inventor: Yuji Izumi
-
Patent number: 7925859
Abstract: A three-tiered TLB architecture in a multithreading processor that concurrently executes multiple instruction threads is provided. A macro-TLB caches address translation information for memory pages for all the threads. A micro-TLB caches the translation information for a subset of the memory pages cached in the macro-TLB. A respective nano-TLB for each of the threads caches translation information only for the respective thread. The nano-TLBs also include replacement information to indicate which entries in the nano-TLB/micro-TLB hold recently used translation information for the respective thread. Based on the replacement information, recently used information is copied to the nano-TLB if evicted from the micro-TLB.
Type: Grant
Filed: June 30, 2009
Date of Patent: April 12, 2011
Assignee: MIPS Technologies, Inc.
Inventors: Soumya Banerjee, Michael Gottlieb Jensen, Ryan C. Kinter
-
Patent number: 7925834
Abstract: A method and apparatus for tracking temporal use associated with cache evictions to reduce allocations in a victim cache is disclosed. Access data for a number of sets of instructions in an instruction cache is tracked at least until the data for one or more of the sets reach a predetermined threshold condition. Determinations whether to allocate entry storage in the victim cache may be made responsive in part to the access data for sets reaching the predetermined threshold condition. A micro-operation can be inserted into the execution pipeline in part to synchronize the access data for all the sets. Upon retirement of the micro-operation from the execution pipeline, access data for the sets can be synchronized and/or any previously allocated entry storage in the victim cache can be invalidated.
Type: Grant
Filed: December 29, 2007
Date of Patent: April 12, 2011
Assignee: Intel Corporation
Inventors: Peter J. Smith, Mongkol Ekpanyapong, Harikrishna Baliga, Ilhyun Kim
-
Patent number: 7917700
Abstract: A method and a cache control circuit for replacing a cache line using an alternate pseudo least-recently-used (PLRU) algorithm with a victim cache coherency state, and a design structure on which the subject cache control circuit resides are provided. When a requirement for replacement in a congruence class is identified, a first PLRU cache line for replacement and an alternate PLRU cache line for replacement in the congruence class are calculated. When the first PLRU cache line for replacement is in the victim cache coherency state, the alternate PLRU cache line is picked for use.
Type: Grant
Filed: October 25, 2007
Date of Patent: March 29, 2011
Assignee: International Business Machines Corporation
Inventors: John David Irish, Chad B. McBride, Jack Chris Randolph
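A first-plus-alternate PLRU victim pair like the one described above can be sketched with a classic 4-way tree PLRU: flipping the root bit yields a second candidate from the other subtree. The class and method names are illustrative assumptions; the bit conventions are one common encoding, not necessarily the patent's.

```python
class TreePLRU4:
    """4-way tree pseudo-LRU producing a first and an alternate victim.
    bits[0] is the root; a bit's value indexes the LRU child (0 = left)."""

    def __init__(self):
        self.bits = [0, 0, 0]  # root, left-pair bit, right-pair bit

    def touch(self, way):
        # Point every bit on the accessed way's path away from it.
        if way < 2:
            self.bits[0] = 1          # right half becomes the LRU half
            self.bits[1] = 1 - way
        else:
            self.bits[0] = 0          # left half becomes the LRU half
            self.bits[2] = 3 - way

    def victim(self, root_bit=None):
        b0 = self.bits[0] if root_bit is None else root_bit
        return self.bits[1] if b0 == 0 else 2 + self.bits[2]

    def victims(self):
        # First PLRU victim, plus an alternate from the other subtree
        # (used when the first is in the victim cache coherency state).
        first = self.victim()
        alternate = self.victim(root_bit=1 - self.bits[0])
        return first, alternate
```

The alternate victim is guaranteed to come from the opposite half of the set, so it can never equal the first choice.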
-
Publication number: 20110072218
Abstract: A processor is disclosed. The processor includes an execution core, a cache memory, and a prefetcher coupled to the cache memory. The prefetcher is configured to fetch a first cache line from a lower level memory and to load the cache line into the cache. The cache is further configured to designate the cache line as a most recently used (MRU) cache line responsive to the execution core asserting N demand requests for the cache line, wherein N is an integer greater than 1. The cache is configured to inhibit the cache line from being promoted to the MRU position if it receives fewer than N demand requests.
Type: Application
Filed: September 24, 2009
Publication date: March 24, 2011
Inventors: Srilatha Manne, Steven K. Reinhardt, Lisa Hsu
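The inhibited-promotion idea above can be sketched as an LRU list in which a prefetched line enters near the LRU end and only moves to the MRU position after N demand hits. Names, the insertion position, and the miss handling are illustrative assumptions.

```python
from collections import OrderedDict

class DemandCountLRU:
    """Sketch: prefetched lines are promoted to MRU only after
    n_promote demand requests; until then they stay near the LRU end."""

    def __init__(self, capacity, n_promote=2):
        self.capacity = capacity
        self.n = n_promote
        self.lines = OrderedDict()  # key -> demand-hit count, LRU first

    def prefetch(self, key):
        if key not in self.lines:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)       # evict the LRU line
            self.lines[key] = 0
            self.lines.move_to_end(key, last=False)  # insert at LRU, not MRU

    def demand(self, key):
        if key not in self.lines:
            return False      # miss; the fill path is omitted in this sketch
        self.lines[key] += 1
        if self.lines[key] >= self.n:
            self.lines.move_to_end(key)              # promote to MRU
        return True
```

A prefetched line that never receives demand requests is thus the first to be replaced, while a line with N demand hits is protected like any MRU line.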
-
Publication number: 20110055485
Abstract: An apparatus for allocating entries in a set associative cache memory includes an array that provides a first pseudo-least-recently-used (PLRU) vector in response to a first allocation request from a first functional unit. The first PLRU vector specifies a first entry from a set of the cache memory specified by the first allocation request. The first vector is a tree of bits comprising a plurality of levels. Toggling logic receives the first vector and toggles predetermined bits thereof to generate a second PLRU vector in response to a second allocation request from a second functional unit generated concurrently with the first allocation request and specifying the same set of the cache memory specified by the first allocation request. The second vector specifies a second entry different from the first entry from the same set. The predetermined bits comprise bits of a predetermined one of the levels of the tree.
Type: Application
Filed: July 6, 2010
Publication date: March 3, 2011
Applicant: VIA TECHNOLOGIES, INC.
Inventors: Colin Eddy, Rodney E. Hooker
-
Publication number: 20110055482
Abstract: Various example embodiments are disclosed. According to an example embodiment, a shared cache may be configured to determine whether a word requested by one of the L1 caches is currently stored in the L2 shared cache, read the requested word from the main memory based on determining that the requested word is not currently stored in the L2 shared cache, determine whether at least one line in a way reserved for the requesting L1 cache is unused, store the requested word in the at least one line based on determining that the at least one line in the reserved way is unused, and store the requested word in a line of the L2 shared cache outside the reserved way based on determining that the at least one line in the reserved way is not unused.
Type: Application
Filed: November 25, 2009
Publication date: March 3, 2011
Applicant: Broadcom Corporation
Inventors: Kimming So, Binh Truong
-
Publication number: 20110022805
Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
Type: Application
Filed: October 4, 2010
Publication date: January 27, 2011
Inventor: Ivan Schreter
-
Publication number: 20110010502
Abstract: In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy.
Type: Application
Filed: July 10, 2009
Publication date: January 13, 2011
Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
-
Patent number: 7870341
Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
Type: Grant
Filed: May 30, 2008
Date of Patent: January 11, 2011
Assignee: International Business Machines Corporation
Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
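The LFU/MFU choice above reduces, at selection time, to picking the line with the minimum or maximum frequency count. This minimal sketch shows only that selection step; the per-line state update rules described in the patent are not modeled, and the function name and argument layout are assumptions.

```python
def pick_victim(counts, use_mfu=False):
    """Given a mapping of way -> access count, return the way chosen
    under LFU (default) or MFU replacement criteria."""
    chooser = max if use_mfu else min
    return chooser(counts, key=counts.get)
```

With counts {0: 5, 1: 1, 2: 9}, LFU selects way 1 and MFU selects way 2, illustrating how the two criteria pick opposite ends of the frequency range.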
-
Patent number: 7861041
Abstract: A cache memory system includes a cache memory and a block replacement controller. The cache memory may include a plurality of sets, each set including a plurality of block storage locations. The block replacement controller may maintain a separate count value corresponding to each set of the cache memory. The separate count value points to an eligible block storage location within the given set to store replacement data. The block replacement controller may maintain for each of at least some of the block storage locations, an associated recent access bit indicative of whether the corresponding block storage location was recently accessed. In addition, the block replacement controller may store the replacement data within the eligible block storage location pointed to by the separate count value depending upon whether a particular recent access bit indicates that the eligible block storage location was recently accessed.
Type: Grant
Filed: September 4, 2007
Date of Patent: December 28, 2010
Assignee: Advanced Micro Devices, Inc.
Inventor: James D Williams
-
Publication number: 20100325365
Abstract: An improved sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the improved algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization.
Type: Application
Filed: June 17, 2009
Publication date: December 23, 2010
Applicant: International Business Machines Corporation
Inventor: Daniel J. Colglazier
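The three-stage filter above (skip the MRU sector, prefer least modified data, break ties on least valid data, then randomly) can be sketched directly. The representation of a sector as a `(modified_bytes, valid_bytes)` pair is an illustrative assumption; the patent does not prescribe this layout.

```python
import random

def select_sector(sectors, mru_index):
    """Sketch of the sectored replacement selection described above.
    `sectors` is a list of (modified_bytes, valid_bytes) per sector;
    returns the index of the sector to replace."""
    # Stage 1: the most recently used sector is never a candidate.
    candidates = [i for i in range(len(sectors)) if i != mru_index]
    # Stage 2: keep only sectors with the least modified data.
    least_mod = min(sectors[i][0] for i in candidates)
    candidates = [i for i in candidates if sectors[i][0] == least_mod]
    # Stage 3: tie-break on the least valid data.
    least_valid = min(sectors[i][1] for i in candidates)
    candidates = [i for i in candidates if sectors[i][1] == least_valid]
    # Stage 4: any remaining tie is broken randomly.
    return random.choice(candidates)
```

Favoring clean, sparsely valid sectors is what lets the policy trade off hit rate against bus utilization: replacing them costs the fewest writebacks and refetches.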
-
Patent number: 7856576
Abstract: In one embodiment, a controller for an associative memory having n ways contains circuitry for sending a request to search an indexed location in each of the n ways for a tag, wherein the tag and an index that is used to denote the indexed location form a memory address. The controller also contains circuitry, responsive to the request, for sending a set of n validity values, each validity value indicating, for a respective way, whether the indexed location is a valid location or a defective location. Additionally, the controller contains circuitry for receiving a hit signal that indicates whether a match to the tag was found at any of the indexed locations, wherein no hit is ever received for a defective location.
Type: Grant
Filed: April 25, 2007
Date of Patent: December 21, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Carson D. Henrion, Dan Robinson
-
Patent number: 7855974
Abstract: This invention comprises a method and apparatus for an Infinite Network Packet Capture System (INPCS). The INPCS is a high performance data capture recorder capable of capturing and archiving all network traffic present on a single network or multiple networks. This device can be attached to Ethernet networks via copper or SX fiber via either a SPAN port (101) router configuration or via an optical splitter (102). By this method, multiple sources of network traffic including gigabit Ethernet switches (102) may provide parallelized data feeds to the capture appliance (104), effectively increasing collective data capture capacity. Multiple captured streams are merged into a consolidated time indexed capture stream to support asymmetrically routed network traffic as well as other merged streams for external consumption.
Type: Grant
Filed: December 16, 2005
Date of Patent: December 21, 2010
Assignee: Solera Networks, Inc.
Inventors: Jeffery V. Merkey, Bryan W. Sparks
-
Patent number: 7856633
Abstract: A method of partitioning a memory resource, associated with a multi-threaded processor, includes defining the memory resource to include first and second portions that are dedicated to the first and second threads respectively. A third portion of the memory resource is then designated as being shared between the first and second threads. Upon receipt of an information item, (e.g., a microinstruction associated with the first thread and to be stored in the memory resource), a history of Least Recently Used (LRU) portions is examined to identify a location in either the first or the third portion, but not the second portion, as being a least recently used portion. The second portion is excluded from this examination on account of being dedicated to the second thread. The information item is then stored within a location, within either the first or the third portion, identified as having been least recently used.
Type: Grant
Filed: March 24, 2000
Date of Patent: December 21, 2010
Assignee: Intel Corporation
Inventors: Chan W. Lee, Glenn Hinton, Robert Krick
-
Publication number: 20100318744
Abstract: A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein.
Type: Application
Filed: June 15, 2009
Publication date: December 16, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lawrence Y. Chiu, Lokesh M. Gupta, Yu-Cheng Hsu
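One way to read the eviction policy above is as an ordering: entries backed by faster-responding devices (cheap to refetch) and entries with lower read-hit ratios are demoted first. This sketch expresses that as a sort; the entry layout and the reduction of the policy to a simple lexicographic key are assumptions for illustration.

```python
def eviction_order(entries):
    """Sketch of the media-speed-aware eviction policy above.
    `entries` maps key -> (device_response_time, read_hit_ratio).
    Returns keys in demotion order: fastest-responding device first,
    then lowest read-hit ratio first, all else being equal."""
    return sorted(entries, key=lambda k: (entries[k][0], entries[k][1]))
```

The intuition is that evicting an entry from a fast device wastes little, since a future miss is cheap, so slow-device entries earn longer cache residency.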
-
Publication number: 20100318745
Abstract: This disclosure provides techniques for dynamic content caching and retrieval. For example, a computing device includes cache memory dedicated to temporarily caching data of one or more applications of the computing device. The computing device also includes storage memory to store data in response to requests by the applications. The storage memory may also temporarily cache data. Further, the computing device includes system software to represent to the applications of the computing device that the portions of the storage memory utilized to cache content are available to store data of the applications. In addition, the computing device includes application programming interfaces to provide content to a requesting application from a cache of the computing device and/or from a remote content source.
Type: Application
Filed: June 16, 2009
Publication date: December 16, 2010
Applicant: Microsoft Corporation
Inventors: Graham A. Wheeler, David Abzarian, Todd L. Carpenter, Didier Coussemaeker, Nicolas Mai, Jian Lin, Severan Rault, Danny Lange, Fernando P. Zandona, Joseph Futty
-
Publication number: 20100312970
Abstract: The illustrative embodiments provide a method, apparatus, and computer program product for managing a number of cache lines in a cache. In one illustrative embodiment, it is determined whether activity on a memory bus in communication with the cache exceeds a threshold activity level. A least important cache line is located in the cache responsive to a determination that the threshold activity level is exceeded, wherein the least important cache line is located using a cache replacement scheme. It is determined whether the least important cache line is clean responsive to the determination that the threshold activity level is exceeded. The least important cache line is selected for replacement in the cache responsive to a determination that the least important cache line is clean.
Type: Application
Filed: June 4, 2009
Publication date: December 9, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Gordon B. Bell, Anil Krishna, Brian M. Rogers, Ken V. Vu
-
Patent number: 7844778
Abstract: A method for replacing cached data is disclosed. The method in one aspect associates an importance value with each block of data in the cache. When a new entry needs to be stored in the cache, a cache block for replacement is selected based on the importance values associated with cache blocks. In another aspect, the importance values are set according to the hardware and/or software's knowledge of the memory access patterns. The method in one aspect may also include varying the importance value over time or under different processing requirements.
Type: Grant
Filed: July 11, 2006
Date of Patent: November 30, 2010
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Balaram Sinharoy, Robert W. Wisniewski
-
Patent number: 7844779
Abstract: Determining and applying a cache replacement policy for a computer application running in a computer processing system is accomplished by receiving a processor core data request, adding bits on each cache line of a plurality of cache lines to identify a core ID of an at least one processor core that provides each cache line in a shared cache, allocating a tag table for each processor core, where the tag table keeps track of an index of processor core miss rates, and setting a threshold to define a level of cache usefulness. When the threshold is not exceeded, a standard shared-cache policy for cache replacement is applied. When the threshold is exceeded, the cache line from the processor core running the application is evicted from the shared cache.
Type: Grant
Filed: December 13, 2007
Date of Patent: November 30, 2010
Assignee: International Business Machines Corporation
Inventors: Marcus L. Kornegay, Ngan N. Pham
-
Publication number: 20100293337
Abstract: The disclosure is related to data storage systems having multiple caches and to management of cache activity in data storage systems having multiple caches. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache, and a controller coupled to the volatile memory and the non-volatile memory. The controller can be configured to selectively transfer read data from the first read cache to the second read cache based on a least recently used indicator of the read data and selectively transfer write data from the first write cache to the second write cache based on a least recently written indicator of the write data.
Type: Application
Filed: May 13, 2009
Publication date: November 18, 2010
Applicant: SEAGATE TECHNOLOGY LLC
Inventors: Robert D. Murphy, Robert W. Dixon, Steven S. Williams
-
Publication number: 20100293338
Abstract: A low priority queue can be configured to list low priority removal candidates to be removed from a cache, with the low priority removal candidates being sorted in an order of priority for removal. A high priority queue can be configured to list high priority removal candidates to be removed from the cache. In response to receiving a request for one or more candidates for removal from the cache, one or more high priority removal candidates from the high priority queue can be returned if the high priority queue lists any high priority removal candidates. If no more high priority removal candidates remain in the high priority queue, then one or more low priority removal candidates from the low priority queue can be returned in the order of priority for removal. Write-only latches can also be used during write operations in a cache lookup data structure.
Type: Application
Filed: May 20, 2009
Publication date: November 18, 2010
Applicant: Microsoft Corporation
Inventors: Muralidhar Krishnaprasad, Sudhir Mohan Jorwekar, Sharique Muhammed, Subramanian Muralidhar, Anil K. Nori
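The two-queue drain order described above is straightforward to sketch: requests are satisfied from the high priority queue until it is empty, then from the low priority queue in its stored removal order. The class name and the use of plain deques are illustrative assumptions; the latch mechanism is not modeled.

```python
from collections import deque

class TwoLevelEvictionQueues:
    """Sketch of the high/low priority removal-candidate queues above."""

    def __init__(self):
        self.high = deque()  # high priority removal candidates
        self.low = deque()   # low priority, kept in removal-priority order

    def next_candidates(self, n=1):
        """Return up to n removal candidates, high priority first."""
        out = []
        while len(out) < n and self.high:
            out.append(self.high.popleft())
        while len(out) < n and self.low:
            out.append(self.low.popleft())
        return out
```

A request for two candidates when one high priority and two low priority entries are queued yields the high priority entry followed by the first low priority entry.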
-
Patent number: 7836257
Abstract: A method for managing a cache operates in a data processing system with a system memory and a plurality of processing units (PUs). A first PU determines that one of a plurality of cache lines in a first cache of the first PU must be replaced with a first data block, and determines whether the first data block is a victim cache line from another one of the plurality of PUs. In the event the first data block is not a victim cache line from another one of the plurality of PUs, the first cache does not contain a cache line in coherency state invalid, and the first cache contains a cache line in coherency state moved, the first PU selects a cache line in coherency state moved, stores the first data block in the selected cache line and updates the coherency state of the first data block.
Type: Grant
Filed: December 19, 2007
Date of Patent: November 16, 2010
Assignee: International Business Machines Corporation
Inventors: Robert John Dorsey, Jason Alan Cox, Hien Minh Le, Richard Nicholas, Eric Francis Robinson, Thuong Quang Truong
-
Publication number: 20100287339
Abstract: Associativity of a multi-core processor cache memory to a logical partition is managed and controlled by receiving a plurality of unique logical processing partition identifiers into registration of a multi-core processor, each identifier being associated with a logical processing partition on one or more cores of the multi-core processor; responsive to a shared cache memory miss, identifying a position in a cache directory for data associated with the address, the shared cache memory being multi-way set associative; associating a new cache line entry with the data and one of the registered unique logical processing partition identifiers; modifying the cache directory to reflect the association; and caching the data at the new cache line entry, wherein said shared cache memory is effectively shared on a line-by-line basis among said plurality of logical processing partitions of said multi-core processor.
Type: Application
Filed: May 8, 2009
Publication date: November 11, 2010
Applicant: International Business Machines Corporation
Inventors: Bret Ronald Olszewski, Steven Wayne White
-
Publication number: 20100275044
Abstract: Embodiments that distribute replacement policy bits and operate the bits in cache memories, such as non-uniform cache access (NUCA) caches, are contemplated. An embodiment may comprise a computing device, such as a computer having multiple processors or multiple cores, which has cache memory elements coupled with the multiple processors or cores. The cache memory device may track usage of cache lines by using a number of bits. For example, a controller of the cache memory may manipulate bits as part of a pseudo least recently used (LRU) system. Some of the bits may be in a centralized area of the cache. Other bits of the pseudo LRU system may be distributed across the cache. Distributing the bits across the cache may enable the system to conserve additional power by turning off the distributed bits.
Type: Application
Filed: April 24, 2009
Publication date: October 28, 2010
Applicant: International Business Machines Corporation
Inventors: Ganesh Balakrishnan, Anil Krishna
-
Publication number: 20100274974
Abstract: A system and method for replacing data in a cache utilizes cache block validity information, which contains information that indicates that data in a cache block is no longer needed for processing, to maintain least recently used information of cache blocks in a cache set of the cache, identifies the least recently used cache block of the cache set using the least recently used information of the cache blocks in the cache set, and replaces data in the least recently used cache block of the cache set with data from main memory.
Type: Application
Filed: April 24, 2009
Publication date: October 28, 2010
Applicant: NXP B.V.
Inventors: Jan-Willem van de Waerdt, Johan Gerard Willem Maria Janssen, Maurice Penners
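The policy above folds validity information into LRU selection: a block whose data is marked no longer needed is effectively treated as the least recently used block. A minimal sketch, assuming each way is represented as a `(valid, last_used_tick)` pair (an illustrative layout, not from the publication):

```python
def select_block(blocks):
    """Sketch: return the index of the cache way to replace.
    A block marked invalid (data no longer needed) is replaced first;
    otherwise the block with the oldest last-use tick is chosen."""
    for i, (valid, _) in enumerate(blocks):
        if not valid:
            return i  # no-longer-needed data: treated as LRU
    return min(range(len(blocks)), key=lambda i: blocks[i][1])
```

Replacing invalidated blocks first avoids evicting live data from main memory's working set when a dead block is available.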
-
Publication number: 20100268882
Abstract: A system and method for tracking core load requests and providing arbitration and ordering of requests. When a core interface unit (CIU) receives a load operation from the processor core, a new entry is allocated in a queue of the CIU. In response to allocating the new entry in the queue, the CIU detects contention between the load request and another memory access request. In response to detecting contention, the load request may be suspended until the contention is resolved. Received load requests may be stored in the queue and tracked using a least recently used (LRU) mechanism. The load request may then be processed when the load request resides in a least recently used entry in the load request queue. The CIU may also suspend issuing an instruction unless a read claim (RC) machine is available. In another embodiment, the CIU may issue stored load requests in a specific priority order.
Type: Application
Filed: April 15, 2009
Publication date: October 21, 2010
Applicant: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Stephen James Powell, William John Starke, Jeffrey A. Stuecheli
-
Publication number: 20100257320
Abstract: Techniques for replacing one or more blocks in a cache, the one or more blocks being associated with a plurality of data streams, are provided. The one or more blocks in the cache are grouped into one or more groups, each group corresponding to one of the plurality of data streams. One or more incoming blocks are received. To free space, the one or more blocks of the one or more groups in the cache are invalidated in accordance with at least one of an inactivity of a given data stream corresponding to the one or more groups and a length of the one or more groups. The one or more incoming blocks are stored in the cache. A number of data streams maintained within the cache is maximized.
Type: Application
Filed: April 7, 2009
Publication date: October 7, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian Bass, Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
-
Patent number: 7809891
Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
Type: Grant
Filed: April 9, 2007
Date of Patent: October 5, 2010
Assignee: SAP AG
Inventor: Ivan Schreter
-
Publication number: 20100250858Abstract: A computer-implemented method for controlling initialization of a fingerprint cache for data deduplication associated with a single-instance-storage computing subsystem may comprise: 1) detecting a request to store a data selection to the single-instance-storage computing subsystem, 2) leveraging a client-side fingerprint cache associated with a previous storage of the data selection to the single-instance-storage computing subsystem to initialize a new client-side fingerprint cache, and 3) utilizing the new client-side fingerprint cache for data deduplication associated with the request to store the data selection to the single-instance-storage computing subsystem. Other exemplary methods of controlling initialization of a fingerprint cache for data deduplication, as well as corresponding exemplary systems and computer-readable-storage media, are also disclosed.Type: ApplicationFiled: March 31, 2009Publication date: September 30, 2010Applicant: Symantec CorporationInventors: Nick Cremelie, Bastiaan Stougie
-
Publication number: 20100250833Abstract: A method and system to allow power fail-safe write-back or write-through caching of data in a persistent storage device into one or more cache lines of a caching device. No metadata associated with any of the cache lines is written atomically into the caching device when the data in the storage device is cached. As such, specialized cache hardware to allow atomic writing of metadata during the caching of data is not required.Type: ApplicationFiled: March 30, 2009Publication date: September 30, 2010Inventor: Sanjeev N. Trika
-
Patent number: 7805574Abstract: A caching mechanism implementing a “soft” Instruction-Most Recently Used (I-MRU) protection scheme whereby the selected I-MRU member (cache line) is only protected for a limited number of eviction cycles unless that member is updated/utilized during the period. An update or access to the instruction restarts the countdown that determines when the cache line is no longer protected as the I-MRU. Accordingly, only frequently used Instruction lines are protected, and old I-MRU lines age out of the cache. The old I-MRU members are evicted, such that all the members of a congruence class may be used for data. The I-MRU aging is accomplished through a counter or a linear feedback shift register (LFSR)-based “shootdown” of I-MRU cache lines. The LFSR is tuned such that an I-MRU line will be protected for a pre-established number of evictions.Type: GrantFiled: October 3, 2006Date of Patent: September 28, 2010Assignee: International Business Machines CorporationInventors: Robert H. Bell, Jr., Jeffrey A. Stuecheli
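The "soft" I-MRU aging in this abstract — protection that lapses after a fixed number of eviction cycles unless the line is re-used — can be sketched with a simple counter (the patent also describes an LFSR-based variant; this sketch uses only the counter form, and all names are illustrative):

```python
class SoftMRUProtection:
    """Sketch of 'soft' MRU protection: a protected cache line survives only
    a limited number of eviction cycles unless it is accessed again."""

    def __init__(self, protect_for=4):
        self.protect_for = protect_for   # eviction cycles of protection
        self.protected_line = None
        self.countdown = 0

    def protect(self, line):
        # Marking a line MRU-protected (re)starts the countdown.
        self.protected_line = line
        self.countdown = self.protect_for

    def access(self, line):
        # An update or access to the protected line restarts the countdown.
        if line == self.protected_line:
            self.countdown = self.protect_for

    def on_eviction_cycle(self):
        # Each eviction cycle ages the protected line; once the countdown
        # expires, the line loses protection and may itself be evicted.
        if self.protected_line is not None:
            self.countdown -= 1
            if self.countdown <= 0:
                self.protected_line = None

    def is_protected(self, line):
        return line == self.protected_line
```

Only frequently used lines keep resetting their countdown; stale I-MRU lines age out, freeing all members of the congruence class for data.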
-
Patent number: 7802057Abstract: A method and apparatus are herein described for providing priority-aware and consumption-guided dynamic probabilistic allocation for a cache memory. Utilization of a sample size of a cache memory is measured for each priority level of a computer system. Allocation probabilities for each priority level are updated based on the measured consumption/utilization, i.e. allocation is reduced for priority levels consuming too much of the cache and allocation is increased for priority levels consuming too little of the cache. An allocation request, when received, is assigned a priority level. An allocation probability associated with the priority level is compared with a randomly generated number. If the number is less than the allocation probability, then a fill to the cache is performed normally. In contrast, a spatially or temporally limited fill is performed if the random number is greater than the allocation probability.Type: GrantFiled: December 27, 2007Date of Patent: September 21, 2010Assignee: Intel CorporationInventors: Ravishankar Iyer, Ramesh Milekal, Donald Newell, Li Zhao
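The probabilistic fill decision and the utilization-driven probability update described above can be sketched as two small functions; the step size (0.1) and all names are illustrative assumptions:

```python
import random

def decide_fill(priority, alloc_prob, rng=random.random):
    """Sketch of the fill decision: compare the priority level's allocation
    probability against a randomly generated number."""
    if rng() < alloc_prob[priority]:
        return "normal_fill"     # fill the cache line normally
    return "limited_fill"        # spatially or temporally limited fill

def update_probabilities(alloc_prob, utilization, target):
    """Sketch of consumption-guided tuning: reduce allocation probability for
    levels consuming too much cache, increase it for levels consuming too
    little (fixed step size of 0.1 assumed here)."""
    for level, used in utilization.items():
        if used > target[level]:
            alloc_prob[level] = max(0.0, alloc_prob[level] - 0.1)
        elif used < target[level]:
            alloc_prob[level] = min(1.0, alloc_prob[level] + 0.1)
    return alloc_prob
```

With an allocation probability of 0.5, roughly half of that priority level's requests receive normal fills; the feedback loop then nudges the probability until measured utilization matches the level's target share.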
-
Publication number: 20100235585Abstract: System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable.Type: ApplicationFiled: October 30, 2009Publication date: September 16, 2010Applicant: AT&T MOBILITY II LLCInventor: Sangar Dowlatkhah
-
Publication number: 20100217937Abstract: A data processing apparatus is described which comprises a processor operable to execute a sequence of instructions and a cache memory having a plurality of cache lines operable to store data values for access by the processor when executing the sequence of instructions. A cache controller is also provided which comprises preload circuitry operable in response to a streaming preload instruction received at the processor to store data values from a main memory into one or more cache lines of the cache memory. The cache controller also comprises identification circuitry operable in response to the streaming preload instruction to identify one or more cache lines of the cache memory for preferential reuse.Type: ApplicationFiled: February 20, 2009Publication date: August 26, 2010Applicant: ARM LIMITEDInventors: Dominic Hugo Symes, Jonathan Sean Callan, Hedley James Francis, Paul Gilbert Meyer
-
Publication number: 20100211731Abstract: Methods, systems, and computer programs for managing storage in a computer system using a solid state drive (SSD) read cache memory are presented. The method includes receiving a read request, which causes a miss in a cache memory. After the cache miss, the method determines whether the data to satisfy the read request is available in the SSD memory. If the data is in SSD memory, the read request is served from the SSD memory. Otherwise, SSD memory tracking logic is invoked and the read request is served from a hard disk drive (HDD). Additionally, the SSD memory tracking logic monitors access requests to pages in memory, and if a predefined criteria is met for a certain page in memory, then the page is loaded in the SSD. The use of the SSD as a read cache improves memory performance for random data reads.Type: ApplicationFiled: February 19, 2009Publication date: August 19, 2010Applicant: Adaptec, Inc.Inventors: Steffen Mittendorff, Dieter Massa
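The read path in this abstract — RAM cache miss, then SSD lookup, then HDD with promotion tracking — can be sketched as below; the container names and the access-count promotion criterion are illustrative assumptions (the patent says only "a predefined criteria"):

```python
def serve_read(page, ram_cache, ssd_cache, hdd,
               access_counts, promote_threshold=3):
    """Sketch of an SSD-as-read-cache miss path."""
    if page in ram_cache:
        return ram_cache[page]    # primary cache hit
    if page in ssd_cache:
        return ssd_cache[page]    # cache miss, but data is in the SSD
    # Miss everywhere: serve from the HDD, and let the SSD tracking logic
    # decide whether this page is hot enough to load into the SSD.
    data = hdd[page]
    access_counts[page] = access_counts.get(page, 0) + 1
    if access_counts[page] >= promote_threshold:
        ssd_cache[page] = data    # predefined criterion met: promote to SSD
    return data
```

After a page crosses the threshold, later misses on it are served from the SSD rather than the HDD, which is where the random-read benefit comes from.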
-
Publication number: 20100191916Abstract: A method and a system for utilizing least recently used (LRU) bits and presence bits in selecting cache-lines for eviction from a lower level cache in a processor-memory sub-system. A cache back invalidation (CBI) logic utilizes LRU bits to evict only cache-lines within a LRU group, following a cache miss in the lower level cache. In addition, the CBI logic uses presence bits to (a) indicate whether a cache-line in a lower level cache is also present in a higher level cache and (b) evict only cache-lines in the lower level cache that are not present in a corresponding higher level cache. However, when the lower level cache-line selected for eviction is also present in any higher level cache, CBI logic invalidates the cache-line in the higher level cache. The CBI logic appropriately updates the values of presence bits and LRU bits, following evictions and invalidations.Type: ApplicationFiled: January 23, 2009Publication date: July 29, 2010Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Ganesh Balakrishnan, Anil Krishna
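The victim-selection rule above can be sketched as follows; the field names, the choice of "least recently used half" as the LRU group, and the dict representation are all illustrative assumptions:

```python
def select_victim(lines):
    """Sketch of presence-bit-guided victim selection in a lower-level cache.

    Each line is a dict with 'lru_rank' (higher = less recently used) and
    'present_in_l1' (the presence bit). Returns (victim_line, needs_back_inval).
    """
    # Restrict candidates to the LRU group (least recently used half here).
    ranked = sorted(lines, key=lambda l: l["lru_rank"], reverse=True)
    lru_group = ranked[: max(1, len(ranked) // 2)]
    # Prefer lines whose presence bit says they are NOT in the higher-level
    # cache, so useful L1-resident lines are kept.
    for line in lru_group:
        if not line["present_in_l1"]:
            return line, False     # no back-invalidation needed
    # Every LRU-group candidate is also in the higher-level cache: evict the
    # LRU line and back-invalidate its copy there.
    return lru_group[0], True
```

The second return value models the CBI step: when the chosen victim is also present in a higher-level cache, that copy must be invalidated as well.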
-
Publication number: 20100185816Abstract: A mechanism which allows pages of flash memory to be read directly into cache. The mechanism enables different cache line sizes for different cache levels in a cache hierarchy, and optionally, multiple line size support, simultaneously or as an initialization option, in the highest level (largest/slowest) cache. Such a mechanism improves performance and reduces cost for some applications.Type: ApplicationFiled: January 21, 2009Publication date: July 22, 2010Inventors: William F. Sauber, Mitchell Markow
-
Patent number: 7761665Abstract: The present invention provides a data processing apparatus and method for handling cache accesses. The data processing apparatus comprises a processing unit operable to issue a series of access requests, each access request having associated therewith an address of a data value to be accessed. Further, the data processing apparatus has an n-way set associative cache memory operable to store data values for access by the processing unit, each way of the cache memory comprising a plurality of cache lines, and each cache line being operable to store a plurality of data values. The cache memory further comprises for each way a TAG storage for storing, for each cache line of that way, a corresponding TAG value.Type: GrantFiled: May 23, 2005Date of Patent: July 20, 2010Assignee: ARM LimitedInventors: Gilles Eric Grandou, Philippe Jean-Pierre Raphalen
-
Patent number: 7761664Abstract: Systems and methods for multi-level exclusive caching using hints. Exemplary embodiments include a method for multi-level exclusive caching, the method including identifying a cache management protocol within a multi-level cache hierarchy having a plurality of caches, defining a hint protocol within the multi-level cache hierarchy, identifying deciding caches and non-deciding caches within the multi-level cache hierarchy and implementing the hint protocol in conjunction with the cache management protocol to decide which pages within the multi-level cache to retain and where to store the pages.Type: GrantFiled: April 13, 2007Date of Patent: July 20, 2010Assignee: International Business Machines CorporationInventor: Binny S. Gill
-
Patent number: 7747821Abstract: A compression device recognizes patterns of data, compresses the data, and sends the compressed data to a decompression device that identifies a cached version of the data to decompress the data. In this way, the compression device need not resend high bandwidth traffic over the network. Both the compression device and the decompression device cache the data in packets they receive. Each device has a disk, on which each device writes the data in the same order. The compression device looks for repetitions of any block of data between multiple packets or datagrams that are transmitted across the network. The compression device encodes the repeated blocks of data by replacing them with a pointer to a location on disk. The decompression device receives the pointer and replaces the pointer with the contents of the data block that it reads from its disk.Type: GrantFiled: April 17, 2009Date of Patent: June 29, 2010Assignee: Juniper Networks, Inc.Inventors: Amit P. Singh, Balraj Singh, Vanco Burzevski
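Because both devices write blocks to disk in the same order, a pointer on the sender's side resolves to the same block on the receiver's side. A minimal in-memory sketch of that pointer-substitution scheme (dicts and lists stand in for the on-disk stores; all names are illustrative):

```python
def compress(blocks, seen):
    """Sketch: replace repeated blocks with a pointer to where the block was
    first stored. 'seen' maps block contents to a store offset."""
    out = []
    for block in blocks:
        if block in seen:
            out.append(("ptr", seen[block]))   # repeat: send only a pointer
        else:
            seen[block] = len(seen)            # record offset of the new block
            out.append(("raw", block))         # first occurrence: send the data
    return out

def decompress(stream, store):
    """The receiver appends raw blocks to an identical store (same order as
    the sender) and resolves pointers against it."""
    out = []
    for kind, value in stream:
        if kind == "raw":
            store.append(value)
            out.append(value)
        else:
            out.append(store[value])
    return out
```

A repeated block thus crosses the network as a small `("ptr", offset)` tuple instead of the full payload, which is how resending high-bandwidth traffic is avoided.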
-
Publication number: 20100153652Abstract: Embodiments disclosed herein provide a cache management system comprising a cache and a cache manager that can poll cached assets at different frequencies based on their relative activity status and independent of other applications. In one embodiment, the cache manager may maintain one or more lists, each corresponding to a polling layer associated with a particular polling schedule or frequency. Cached assets may be added to or removed from a list or they may be promoted or demoted to a different list, thereby changing their polling frequency. By polling less active files at a lower frequency than more active files, significant system resources can be saved, thereby increasing overall system speed and performance. Additionally, because a cache manager according to embodiments disclosed herein does not require detailed contextual information about the files that it is managing, such a cache manager can be easily implemented with any cache.Type: ApplicationFiled: December 9, 2009Publication date: June 17, 2010Applicant: Vignette CorporationInventors: David Thomas, Scott Wells
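The layered polling idea above — lists of cached assets, each list polled on its own schedule, with assets promoted or demoted between lists by activity — can be sketched as follows; the tick-based intervals and one-layer-per-event promotion rule are illustrative assumptions:

```python
class LayeredPoller:
    """Sketch of activity-based polling layers: active assets are polled at a
    higher frequency than idle ones."""

    def __init__(self, intervals=(1, 4, 16)):    # ticks between polls per layer
        self.intervals = intervals
        self.layer = {}                          # asset -> layer index

    def add(self, asset):
        self.layer[asset] = 0                    # new assets start in the hot layer

    def record_activity(self, asset, active):
        # Active assets are promoted toward layer 0; idle ones are demoted
        # toward the coldest (least frequently polled) layer.
        if active:
            self.layer[asset] = max(0, self.layer[asset] - 1)
        else:
            self.layer[asset] = min(len(self.intervals) - 1,
                                    self.layer[asset] + 1)

    def due(self, tick):
        # An asset is polled whenever the tick hits its layer's interval.
        return [a for a, l in self.layer.items()
                if tick % self.intervals[l] == 0]
```

With intervals (1, 4, 16), an asset demoted twice is polled 16x less often than a hot asset, which is where the resource savings come from.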
-
Publication number: 20100146213Abstract: A data cache processing method and system, and a data cache apparatus, are provided. The method includes: configuring a node and a memory chunk corresponding to the node in a cache, the node storing a key of data, length of the data and a pointer pointing to the memory chunk, the memory chunk storing data; and performing cache processing for the data according to the node and the memory chunk corresponding to the node.Type: ApplicationFiled: February 18, 2010Publication date: June 10, 2010Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITEDInventors: Xing Yao, Jian Mao, Ming Xie
-
Publication number: 20100146214Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.Type: ApplicationFiled: February 18, 2010Publication date: June 10, 2010Inventors: Takeki Osanai, Kimberly Fernsler
-
Patent number: 7725658Abstract: A system and appertaining method provide for pre-fetching records from a central data base to a local storage area in order to reduce delays associated with the data transfers. User patterns for requesting data records are analyzed and rules/strategies are generated that permit an optimal pre-fetching of the records based on the user patterns. The rules/strategies are implemented by a routine that pre-fetches the data records so that users have the records available to them when needed.Type: GrantFiled: November 29, 2005Date of Patent: May 25, 2010Assignee: Siemens AktiengesellschaftInventors: Martin Lang, Ernst Bartsch
-
Patent number: 7725661Abstract: Management of a Cache is provided by differentiating data based on attributes associated with the data and reducing storage bottlenecks. The Cache differentiates and manages data using a state machine with a plurality of states. The Cache may use data patterns and statistics to retain frequently used data in the cache longer. The Cache uses content or attributes to differentiate and retain data longer. Further, the Cache may provide status and statistics to a data flow manager that determines which data to cache and which data to pipe directly through, or to switch cache policies dynamically, thus avoiding some of the cache overhead. The Cache may also place clean and dirty data in separate states to enable more efficient Cache mirroring and flush.Type: GrantFiled: March 25, 2008Date of Patent: May 25, 2010Assignee: Plurata Technologies, Inc.Inventors: Wei Liu, Steven H. Kahle
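The clean/dirty state separation mentioned at the end of this abstract can be sketched with two stores, so that mirroring and flushing only ever touch the dirty set; this is a minimal sketch with illustrative names, not the patent's full state machine:

```python
class DifferentiatingCache:
    """Sketch: clean and dirty data are kept in separate states so a flush
    only needs to walk the dirty entries."""

    def __init__(self):
        self.clean = {}    # entries identical to the backing store
        self.dirty = {}    # entries modified since they were cached

    def read_fill(self, key, value):
        # Data loaded from the backing store enters the clean state.
        self.clean[key] = value

    def write(self, key, value):
        # Modified data moves to the dirty state.
        self.clean.pop(key, None)
        self.dirty[key] = value

    def flush(self, backing_store):
        # Only dirty entries are written back; they then become clean.
        for key, value in self.dirty.items():
            backing_store[key] = value
            self.clean[key] = value
        self.dirty.clear()
```

Because the flush loop never visits clean entries, its cost scales with the amount of modified data rather than the size of the whole cache.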