Least Recently Used Patents (Class 711/136)
  • Patent number: 7502889
    Abstract: A home node aware replacement policy for a cache chooses to evict lines which belong to local memory over lines which belong to remote memory, reducing the average transaction cost of incorrect cache line replacements. With each entry, the cache stores a t-bit cost metric (t ≥ 1) representing a relative distance between said cache and an originating memory for the respective cache entry. Responsive to determining that no cache entry corresponds to an access request, the replacement policy selects a cache entry for eviction from the cache based at least in part on the t-bit cost metric. The selected cache entry is then evicted from the cache.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: March 10, 2009
    Assignee: Intel Corporation
    Inventor: Krishnakanth V. Sistla
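    The home-node-aware idea above can be sketched in a few lines. This is a minimal software model under assumed semantics (the class name, the `cost` parameter, and the LRU tie-break are illustrative, not the patent's hardware design): on a miss with a full cache, the line with the lowest cost metric is evicted first, with LRU order breaking ties.

    ```python
    from collections import OrderedDict

    class CostAwareCache:
        """Toy model: evict the cheapest-to-refetch (most local) line first."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = OrderedDict()  # addr -> cost; iteration order is LRU -> MRU

        def access(self, addr, cost=0):
            """cost models distance to the line's home memory (0 = local)."""
            if addr in self.lines:
                self.lines.move_to_end(addr)      # hit: mark most recently used
                return True
            if len(self.lines) >= self.capacity:  # miss + full: pick a victim
                # min() returns the first minimal-cost line in iteration
                # order, so ties are broken in favor of the least recent.
                victim = min(self.lines, key=self.lines.get)
                del self.lines[victim]
            self.lines[addr] = cost
            return False
    ```

    Evicting the local line rather than the plain LRU line trades a slightly worse hit rate for a much cheaper refill on a wrong guess, which is the abstract's stated goal.
    
    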
  • Publication number: 20090063776
    Abstract: A cache memory system includes a cache memory and a block replacement controller. The cache memory may include a plurality of sets, each set including a plurality of block storage locations. The block replacement controller may maintain a separate count value corresponding to each set of the cache memory. The separate count value points to an eligible block storage location within the given set to store replacement data. The block replacement controller may maintain for each of at least some of the block storage locations, an associated recent access bit indicative of whether the corresponding block storage location was recently accessed. In addition, the block replacement controller may store the replacement data within the eligible block storage location pointed to by the separate count value depending upon whether a particular recent access bit indicates that the eligible block storage location was recently accessed.
    Type: Application
    Filed: September 4, 2007
    Publication date: March 5, 2009
    Applicant: ADVANCED MICRO DEVICES, INC.
    Inventor: James D. Williams
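    The per-set count value plus recent-access bits described above behave much like the classic CLOCK approximation of LRU. A minimal sketch, assuming a single set and treating the count value as a clock hand (names and structure are illustrative):

    ```python
    class ClockSet:
        """One cache set: a counter points at the next eligible block,
        and a recent-access bit per block defers its replacement."""

        def __init__(self, num_blocks):
            self.blocks = [None] * num_blocks   # stored tags
            self.recent = [False] * num_blocks  # recent-access bits
            self.count = 0                      # points at the eligible block

        def access(self, tag):
            if tag in self.blocks:              # hit: set the recent bit
                self.recent[self.blocks.index(tag)] = True
                return True
            # miss: skip past recently accessed blocks, clearing their bits
            while self.recent[self.count]:
                self.recent[self.count] = False
                self.count = (self.count + 1) % len(self.blocks)
            self.blocks[self.count] = tag       # replace the eligible block
            self.recent[self.count] = True
            self.count = (self.count + 1) % len(self.blocks)
            return False
    ```
    
    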
  • Publication number: 20090055594
    Abstract: A system for, method of and computer program product captures performance-characteristic data from the execution of a program and models system performance based on that data. The approach targets performance-characterization data based on easily captured reuse distance metrics, where reuse distance is defined as the total number of memory references between two accesses to the same piece of data. Methods for efficiently capturing this kind of metric are described. These data can be refined into easily interpreted performance metrics, such as performance data related to caches with LRU replacement and random replacement strategies in combination with fully associative as well as limited-associativity cache organizations.
    Type: Application
    Filed: June 5, 2007
    Publication date: February 26, 2009
    Inventors: Erik Berg, Erik Hagersten, Mats Nilsson, Mikael Petterson, Magnus Vesterlund, Hakan Zeffer
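    Reuse distance as defined in this abstract (the total number of memory references between two accesses to the same piece of data) can be computed naively from an access trace; the patent is about capturing it efficiently via sampling, which this sketch does not attempt:

    ```python
    def reuse_distances(trace):
        """Return the reuse distance of every re-access in the trace:
        the count of references strictly between the two accesses."""
        last_seen = {}
        out = []
        for i, addr in enumerate(trace):
            if addr in last_seen:
                out.append(i - last_seen[addr] - 1)
            last_seen[addr] = i
        return out
    ```

    For the trace `a b c a`, the second access to `a` has reuse distance 2 (the references to `b` and `c` in between).
    
    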
  • Publication number: 20090043967
    Abstract: A system includes logic to cache at least one block in at least one cache if the block has a popularity that compares favorably to the popularity of other blocks in the cache, where the popularity of the block is determined by reads of the block from persistent storage and reads of the block from the cache.
    Type: Application
    Filed: December 12, 2007
    Publication date: February 12, 2009
    Applicant: Broadband Royalty Corporation
    Inventors: Christopher A. Provenzano, Benedict J. Jackson, Michael N. Galassi, Carl H. Seaton
  • Publication number: 20090037662
    Abstract: A mechanism for selectively disabling and enabling read caching based on past performance of the cache and current read/write requests. The system improves overall performance by using an autonomic algorithm to disable read caching for regions of backend disk storage (i.e., the backstore) that have had historically low cache hit ratios. The result is that more cache becomes available for workloads with larger hit ratios, and less time and machine cycles are spent searching the cache for data that is unlikely to be there.
    Type: Application
    Filed: July 30, 2007
    Publication date: February 5, 2009
    Inventors: Lee Charles La Frese, Joshua Douglas Martin, Justin Thomas Miller, Vernon Walter Miller, James Russell Thompson, Yan Xu, Olga Yiparaki
  • Publication number: 20090031084
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Application
    Filed: May 30, 2008
    Publication date: January 29, 2009
    Applicant: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
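    The selectable LFU-versus-MFU victim choice can be illustrated with explicit per-line access counters (an assumption; the patent derives new state information from prior state in hardware rather than keeping full counters):

    ```python
    def pick_victim(access_counts, use_mfu=False):
        """access_counts: {line: count}. Return the line to replace
        under LFU (default) or MFU criteria."""
        chooser = max if use_mfu else min
        return chooser(access_counts, key=access_counts.get)
    ```

    MFU replacement can win when a burst of accesses is known to be over (the hot line will not be reused), while LFU protects long-lived hot lines; the patent's point is letting the policy choose between the two.
    
    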
  • Patent number: 7484044
    Abstract: A method and apparatus for cache coherency states is disclosed. In one embodiment, a cache accessible across two interfaces, an inner interface and an outer interface, may have a joint cache coherency state. The joint cache coherency state may have a first state for the inner interface and a second state for the outer interface, where the second state has higher privilege than the first state. In one embodiment this may promote speculative invalidation. In other embodiments this may reduce snoop transactions on the inner interface.
    Type: Grant
    Filed: September 12, 2003
    Date of Patent: January 27, 2009
    Assignee: Intel Corporation
    Inventors: Jeffrey D. Gilbert, Kai Cheng
  • Patent number: 7464246
    Abstract: A self-tuning, low overhead, simple to implement, locally adaptive, novel cache management policy that dynamically and adaptively partitions the cache space amongst sequential and random streams so as to reduce read misses.
    Type: Grant
    Filed: September 30, 2004
    Date of Patent: December 9, 2008
    Assignee: International Business Machines Corporation
    Inventors: Binny Sher Gill, Dharmendra Shantilal Modha
  • Patent number: 7457920
    Abstract: The proposed system and associated algorithm, when implemented, improve the processor cache miss rates and overall cache efficiency in multi-core environments in which multiple CPUs share a single cache structure (as an example). The cache efficiency will be improved by tracking CPU core loading patterns such as miss rate and minimum cache line load threshold levels. Using this information along with an existing cache eviction method such as LRU results in determining which cache line from which CPU is evicted from the shared cache when a capacity conflict arises. This methodology allows one to dynamically allocate shared cache entries to each core within the socket based on the particular core's frequency of shared cache usage.
    Type: Grant
    Filed: January 26, 2008
    Date of Patent: November 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
  • Patent number: 7454573
    Abstract: A hardware based method for determining when to migrate cache lines to the cache bank closest to the requesting processor to avoid remote access penalty for future requests. In a preferred embodiment, decay counters are enhanced and used in determining the cost of retaining a line as opposed to replacing it while not losing the data. In one embodiment, a minimization of off-chip communication is sought; this may be particularly useful in a CMP environment.
    Type: Grant
    Filed: January 13, 2005
    Date of Patent: November 18, 2008
    Assignee: International Business Machines Corporation
    Inventors: Alper Buyuktosunoglu, Zhigang Hu, Jude A. Rivers, John T. Robinson, Xiaowei Shen, Vijayalakshmi Srinivasan
  • Patent number: 7437516
    Abstract: Methods for a treatment of cached objects are described. In one embodiment, management of a region of a cache is configured with an eviction policy plug-in. The eviction policy plug-in includes an eviction timing component and a sorting component, with the eviction timing component including code to implement an eviction timing method, and the eviction timing method to trigger eviction of an object from the region of cache. The sorting component includes code to implement a sorting method to identify an object that is eligible for eviction from said region of cache.
    Type: Grant
    Filed: December 28, 2004
    Date of Patent: October 14, 2008
    Assignee: SAP AG
    Inventors: Michael Wintergerst, Petio G. Petev
  • Patent number: 7437513
    Abstract: An improvement in performance and a reduction of power consumption in a cache memory can both be effectively realized by increasing or decreasing the number of operated ways in accordance with access patterns. A hit determination unit determines the hit way when a cache access hit occurs. A way number increase/decrease determination unit manages, for each of the ways that are in operation, the order from the way for which the time of use is most recent to the way for which the time of use is oldest. The way number increase/decrease determination unit then finds the rank of the hit ways that have been obtained in the hit determination unit and counts the number of hits for each rank in the order. The way number increase/decrease determination unit further determines increase or decrease of the number of operated ways based on the access pattern that is indicated by the relation of the number of hits to each rank in the order.
    Type: Grant
    Filed: April 28, 2005
    Date of Patent: October 14, 2008
    Assignee: NEC Corporation
    Inventors: Yasumasa Saida, Hiroaki Kobayashi
  • Publication number: 20080250200
    Abstract: Provided are a method, system, and program for destaging a track from cache to a storage device. The destaged track is retained in the cache. Verification is made of whether the storage device successfully completed writing data. Indication is made of destaged tracks eligible for removal from the cache that were destaged before the storage device is verified in response to verifying that the storage device is successfully completing the writing of data.
    Type: Application
    Filed: June 18, 2008
    Publication date: October 9, 2008
    Applicant: International Business Machines Corporation
    Inventors: Thomas Charles Jarvis, Michael Howard Hartung, Karl Allen Nielsen, Jeremy Michael Pinson, Steven Robert Lowe
  • Patent number: 7434247
    Abstract: The desirability of programming events may be determined using metadata for programming events that includes goodness-of-fit scores associated with categories of a classification hierarchy and one or more of descriptive data and keyword data. The programming events are ranked in accordance with the viewing preferences of viewers as expressed in one or more viewer profiles. The viewer profiles may each include preference scores associated with categories of the classification hierarchy and may also include one or more keywords. Ranking is performed through category matching and keyword matching using the contents of the metadata and the viewer profiles. The viewer profile keywords may be qualified keywords that are associated with specific categories of the classification hierarchy. The ranking may be performed such that qualified keyword matches generally rank higher than keyword matches, and keyword matches generally rank higher than category matches.
    Type: Grant
    Filed: March 28, 2005
    Date of Patent: October 7, 2008
    Assignee: Meevee, Inc.
    Inventors: Gil Gavriel Dudkiewicz, Dale Kittrick Hitt, Jonathan Percy Barker
  • Publication number: 20080244187
    Abstract: A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. A location of a deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure, so that the make MRU operation only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members.
    Type: Application
    Filed: May 9, 2008
    Publication date: October 2, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: ROBERT H. BELL, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Publication number: 20080189489
    Abstract: In one embodiment, a processor regularly writes one or more cache entries back to memory to reduce the likelihood of cache soft errors. The regularly occurring write backs operate independently of Least Recently Used (LRU) status of the entries so that all entries are flushed.
    Type: Application
    Filed: February 1, 2007
    Publication date: August 7, 2008
    Applicant: CISCO TECHNOLOGY, INC.
    Inventor: Somnath Mitra
  • Publication number: 20080183969
    Abstract: A self-tuning, low overhead, simple to implement, locally adaptive, novel cache management policy that dynamically and adaptively partitions the cache space amongst sequential and random streams so as to reduce read misses.
    Type: Application
    Filed: April 2, 2008
    Publication date: July 31, 2008
    Inventors: Binny Sher Gill, Dharmendra Shantilal Modha
  • Patent number: 7406568
    Abstract: A technique to store a plurality of addresses and data to address and data buffers, respectively, in an ordered manner. More particularly, one embodiment of the invention stores a plurality of addresses to a plurality of address buffer entries and a plurality of data to a plurality of data buffer entries according to a true least-recently-used (LRU) allocation algorithm.
    Type: Grant
    Filed: June 20, 2005
    Date of Patent: July 29, 2008
    Assignee: Intel Corporation
    Inventor: Benjamin Tsien
  • Patent number: 7406512
    Abstract: A method and apparatus for the automatic migration of data via a distributed computer network allows a customer to select content files that are to be transferred to a group of edge servers. Origin sites store all of a customer's available content files. An edge server maintains a dynamic number of popular files in its memory for the customer. The files are ranked from most popular to least popular and when a file has been requested from an edge server a sufficient number of times to become more popular than the lowest popular stored file, the file is obtained from an origin site. The edge servers are grouped into two service levels: regional and global. The customer is charged a higher fee to store its popular files on the global edge servers compared to a regional set of edge servers because of greater coverage.
    Type: Grant
    Filed: November 22, 2006
    Date of Patent: July 29, 2008
    Assignee: Akamai Technologies, Inc.
    Inventors: Eric Sven-Johan Swildens, Maurice Cinquini, Amol Chavarkar, Anshu Agarwal
  • Publication number: 20080177953
    Abstract: A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. LRU state array includes additional “protection” bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is Invalid. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure and thus only negates the selection of the protected member, without affecting LRU victim selection of the other members.
    Type: Application
    Filed: December 6, 2007
    Publication date: July 24, 2008
    Inventors: ROBERT H. BELL, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 7401189
    Abstract: A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. A location of a deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure, so that the make MRU operation only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: July 15, 2008
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Bell, Jr., Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Publication number: 20080168236
    Abstract: A method and system for improving the performance of a cache. The cache may include an array of tag entries where each tag entry includes an additional bit (“reused bit”) used to indicate whether its associated cache line has been reused, i.e., has been requested or referenced by the processor. By tracking whether a cache line has been reused, data (cache line) that may not be reused may be replaced with the new incoming cache line prior to replacing data (cache line) that may be reused. By replacing data in the cache memory that might not be reused prior to replacing data that might be reused, the cache hit rate may be improved, thereby improving performance.
    Type: Application
    Filed: March 19, 2008
    Publication date: July 10, 2008
    Applicant: International Business Machines Corporation
    Inventors: Gordon T. Davis, Santiago A. Leon, Hans-Werner Tast
  • Patent number: 7398357
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: July 8, 2008
    Assignee: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
  • Patent number: 7395373
    Abstract: Embodiments of a method for reducing conflict misses in a set-associative cache by mapping each memory address to a primary set and at least one overflow set are described. If a conflict miss occurs within the primary set, a cache line from the primary set is selected for replacement. However, rather than removing the selected cache line from the cache completely, the selected cache line may instead be relocated to the overflow set. The selected cache line replaces a cache line in the overflow set, if it is determined that the selected cache line from the primary set has an estimated age that is more recent than an estimated age for any cache line in the overflow set. Embodiments of the method incorporate various techniques for estimating the age of cache lines, and, particularly, for estimating the relative time since any given cache line was last accessed.
    Type: Grant
    Filed: September 20, 2005
    Date of Patent: July 1, 2008
    Assignee: International Business Machines Corporation
    Inventor: John T. Robinson
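    The primary/overflow relocation step can be sketched as follows, with explicit last-access timestamps standing in for the patent's estimated ages (function and variable names are illustrative, and the overflow set is assumed non-empty):

    ```python
    def handle_conflict_miss(primary, overflow, new_line, now):
        """primary/overflow: {line: last_access_time}. On a conflict miss,
        insert new_line into the primary set and demote its victim to the
        overflow set if the victim is younger than some overflow line."""
        victim = min(primary, key=primary.get)       # oldest line in primary
        victim_age = primary.pop(victim)
        primary[new_line] = now
        oldest_ovf = min(overflow, key=overflow.get)
        if victim_age > overflow[oldest_ovf]:        # victim is more recent
            del overflow[oldest_ovf]                 # demote instead of discard
            overflow[victim] = victim_age
    ```

    The design choice here is that a line squeezed out of a hot primary set gets a second chance in the overflow set, so conflict misses in one set do not immediately discard recently used data.
    
    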
  • Publication number: 20080140939
    Abstract: A self-tuning, low overhead, simple to implement, locally adaptive, novel cache management policy that dynamically and adaptively partitions the cache space amongst sequential and random streams so as to reduce read misses.
    Type: Application
    Filed: February 18, 2008
    Publication date: June 12, 2008
    Inventors: Binny Sher Gill, Dharmendra Shantilal Modha
  • Publication number: 20080140940
    Abstract: A self-tuning, low overhead, simple to implement, locally adaptive, novel cache management policy that dynamically and adaptively partitions the cache space amongst sequential and random streams so as to reduce read misses.
    Type: Application
    Filed: February 19, 2008
    Publication date: June 12, 2008
    Inventors: Binny Sher Gill, Dharmendra Shantilal Modha
  • Patent number: 7386673
    Abstract: Embodiments of the present invention provide methods and systems for efficiently tracking evicted or non-resident pages. For each non-resident page, a first hash value is generated from the page's metadata, such as the page's mapping and offset parameters. This first hash value is then used as an index to point one of a plurality of circular buffers. Each circular buffer comprises an entry for a clock pointer and entries that uniquely represent non-resident pages. The clock pointer points to the next page that is suitable for replacement and moves through the circular buffer as pages are evicted. In some embodiments, the entries that uniquely represent non-resident pages are a hash value that is generated from the page's inode data.
    Type: Grant
    Filed: November 30, 2005
    Date of Patent: June 10, 2008
    Assignee: Red Hat, Inc.
    Inventor: Henri Han van Riel
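    The hashed circular buffers for non-resident pages might look like this in miniature (buffer counts, buffer sizes, hash choices, and function names are illustrative assumptions, not the patent's parameters):

    ```python
    # A small table of ring buffers; one hash of the page's metadata picks
    # the buffer, a second hash gives a compact identifier stored in it.
    NUM_BUFFERS = 8
    BUFFER_SIZE = 4
    buffers = [{'clock': 0, 'slots': [None] * BUFFER_SIZE}
               for _ in range(NUM_BUFFERS)]

    def record_evicted_page(mapping, offset):
        buf = buffers[hash((mapping, offset)) % NUM_BUFFERS]
        buf['slots'][buf['clock']] = hash((mapping, offset, 'id')) & 0xFFFF
        buf['clock'] = (buf['clock'] + 1) % BUFFER_SIZE  # advance clock hand

    def was_recently_evicted(mapping, offset):
        buf = buffers[hash((mapping, offset)) % NUM_BUFFERS]
        return (hash((mapping, offset, 'id')) & 0xFFFF) in buf['slots']
    ```

    Storing only small hash values, rather than full page records, is what keeps tracking evicted pages cheap: old entries simply get overwritten as the clock hand wraps around.
    
    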
  • Patent number: 7380047
    Abstract: A memory system and method includes a cache having a filtered portion and an unfiltered portion. The unfiltered portion is divided into block sized components, and the filtered portion is divided into sub-block sized components. Blocks evicted from the unfiltered portion have selected sub-blocks thereof cached in the filtered portion for servicing requests.
    Type: Grant
    Filed: September 30, 2004
    Date of Patent: May 27, 2008
    Assignee: International Business Machines Corporation
    Inventors: Philip George Emma, Allan Mark Hartstein, Thomas Roberts Puzak, Moinuddin Khalil Ahmed Qureshi
  • Publication number: 20080120471
    Abstract: A method and apparatus for replacement in least-recently-used strategies is disclosed. An exemplary embodiment of the replacement strategy presented herein is a replacement strategy for set associative caches. The method and apparatus stores a priority level to determine which block frame is to be selected for replacement. Due to its simplicity, the disclosed approach and apparatus enable small implementations and are easily scalable. Consequently, the present method and apparatus are highly desirable for implementations of area critical applications.
    Type: Application
    Filed: November 6, 2007
    Publication date: May 22, 2008
    Applicant: ON DEMAND MICROELECTRONICS
    Inventor: Florian Blaschegg
  • Patent number: 7363430
    Abstract: A system may include M cache entries, each of the M cache entries to transmit a signal indicating a read from or a write to the cache entry and comprising a data register and a memory address register, and K layers of decision cells, where K = log2 M. The K layers comprise M/2 decision cells of a first layer to indicate the other one of the respective two of the M cache entries and to transmit a hit signal in response to the signal, a second layer of M/4 decision cells to enable the other one of the respective two of the M/2 decision cells of the first layer and transmit a second hit signal in response to the signal, a (K-1)th layer of two decision cells to enable the other one of the respective two decision cells of the (K-2)th layer and transmit a third hit signal in response to the second hit signal, and a Kth layer of a root decision cell to enable the other one of the respective two decision cells of the (K-1)th layer in response to the third hit signal.
    Type: Grant
    Filed: April 6, 2005
    Date of Patent: April 22, 2008
    Assignee: Intel Corporation
    Inventors: Samie B. Samaan, Avinash Sodani
  • Patent number: 7360042
    Abstract: Items that are in use are maintained in a used item store. Items that are no longer in use are placed in an unused items store. When an item that is not currently in use is requested again, an attempt is made to retrieve the item from the unused item store. Retrieving the item from the unused items store can save a tremendous amount of time since the object does not need to be recalculated again when it is requested. Items may be evicted from the unused item store based on the system resources available. When it has been determined that an item(s) should be evicted, an eviction score is calculated for each unused item. Items are then evicted from the unused item store based on their eviction score. Generally items that are larger in size, took less time to calculate, have not been accessed as frequently, and have not been referenced recently, are the first ones to be evicted from unused items store.
    Type: Grant
    Filed: December 20, 2004
    Date of Patent: April 15, 2008
    Assignee: Microsoft Corporation
    Inventors: Boaz Chen, Liviu Asnash, Shahar Prish, Silvio Susskind
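    An eviction score with the stated tendencies (larger items, cheaper-to-recompute items, and rarely or long-ago accessed items are evicted first) could look like this; the formula and weights are invented for illustration and are not taken from the patent:

    ```python
    def eviction_score(size_bytes, compute_cost, access_count, age_seconds):
        """Higher score = evict sooner. Size and staleness push the score
        up; recompute cost and access frequency push it down."""
        return (size_bytes * age_seconds) / ((compute_cost + 1) * (access_count + 1))
    ```

    With a score like this, the unused-item store preferentially keeps small, expensive-to-recompute, frequently revisited objects, which is exactly where retrieving from the store saves the most recalculation time.
    
    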
  • Patent number: 7360043
    Abstract: One embodiment of the present invention provides a system that manages an LRU list such that the rank, or position, of data records in the sequence can be determined efficiently. The system initializes an index field in each record to the record's initial rank. When a record is accessed, the system moves it to the beginning of the LRU list and appends the value of the record's index field to a “change list.” The system then sets the record's index field to zero. The change list effectively tracks the records accessed since initialization, and combined with the records' index fields can be used to efficiently compute the rank of any record in the list. This ability to efficiently compute the rank of the data record in the LRU list reduces the frequency with which the computationally-expensive initialization operation must be executed on the LRU list.
    Type: Grant
    Filed: August 17, 2005
    Date of Patent: April 15, 2008
    Assignee: Sun Microsystems, Inc.
    Inventor: Jan L. Bonebakker
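    The change-list ranking trick can be demonstrated concretely. In this sketch (illustrative names; the rank formula for already-accessed records is omitted), a never-accessed record's rank is its initial index plus the number of change-list entries larger than that index, i.e. the number of originally-later records that have jumped in front of it:

    ```python
    class ChangeListLRU:
        def __init__(self, records):
            self.order = list(records)                 # front = MRU
            self.index = {r: i + 1 for i, r in enumerate(records)}
            self.changes = []

        def access(self, r):
            self.order.remove(r)
            self.order.insert(0, r)                    # move to front
            self.changes.append(self.index[r])         # log the old index
            self.index[r] = 0

        def rank(self, r):
            """Rank of a record not accessed since initialization."""
            v = self.index[r]
            assert v > 0, "accessed records need the recency order instead"
            # each originally-later record that moved to the front pushes r back
            return v + sum(1 for c in self.changes if c > v)
    ```

    Because `rank` never walks `self.order`, the full list only has to be re-initialized (indices reset, change list cleared) occasionally, which is the cost the abstract says this scheme amortizes.
    
    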
  • Publication number: 20080086596
    Abstract: A single unified level one instruction cache in which some lines may contain traces and other lines in the same congruence class may contain blocks of instructions consistent with conventional cache lines. A mechanism is described for indexing into the cache, and selecting the desired line. Control is exercised over which lines are contained within the cache. Provision is made for selection between a trace line and a conventional line when both match during a tag compare step.
    Type: Application
    Filed: October 4, 2006
    Publication date: April 10, 2008
    Inventors: Gordon T. Davis, Richard W. Doing, John D. Jabusch, M. V. V. Anil Krishna, Brett Olsson, Eric F. Robinson, Sumedh W. Sathaye, Jeffrey R. Summers
  • Publication number: 20080086597
    Abstract: A single unified level one instruction(s) cache in which some lines may contain traces and other lines in the same congruence class may contain blocks of instruction(s) consistent with conventional cache lines. Formation of trace lines in the cache is delayed on initial operation of the system to assure quality of the trace lines stored.
    Type: Application
    Filed: October 5, 2006
    Publication date: April 10, 2008
    Inventors: Gordon T. Davis, Richard W. Doing, John D. Jabusch, M. V. V. Anil Krishna, Brett Olsson, Eric F. Robinson, Sumedh W. Sathaye, Jeffrey R. Summers
  • Patent number: 7356651
    Abstract: A method and system directed to improve effectiveness and efficiency of cache and data management by differentiating data based on certain attributes associated with the data and reducing the bottleneck to storage. The data-aware cache differentiates and manages data using a state machine having certain states. The data-aware cache may use data pattern and traffic statistics to retain frequently used data in cache longer by transitioning it into Sticky or StickyDirty states. The data-aware cache may also use content or application related attributes to differentiate and retain certain data in cache longer. Further, the data-aware cache may provide cache status and statistics information to a data-aware data flow manager, thus assisting data-aware data flow manager to determine which data to cache and which data to pipe directly through, or to switch cache policies dynamically, thus avoiding some of the overhead associated with caches.
    Type: Grant
    Filed: January 31, 2005
    Date of Patent: April 8, 2008
    Assignee: Piurata Technologies, LLC
    Inventors: Wei Liu, Steven H. Kahle
  • Patent number: 7356650
    Abstract: Systems and methods are provided for a data processing system and a cache arrangement. The data processing system includes at least one processor, a first-level cache, a second-level cache, and a memory arrangement. The first-level cache bypasses storing data for a memory request when a do-not-cache attribute is associated with the memory request. The second-level cache stores the data for the memory request. The second-level cache also bypasses updating of least-recently-used indicators of the second-level cache when the do-not-cache attribute is associated with the memory request.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: April 8, 2008
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, James A. Williams
  • Publication number: 20080082754
    Abstract: A caching mechanism implementing a “soft” Instruction-Most Recently Used (I-MRU) protection scheme whereby the selected I-MRU member (cache line) is only protected for a limited number of eviction cycles unless that member is updated/utilized during the period. An update or access to the instruction restarts the countdown that determines when the cache line is no longer protected as the I-MRU. Accordingly, only frequently used Instruction lines are protected, and old I-MRU lines age out of the cache. The old I-MRU members are evicted, such that all the members of a congruence class may be used for data. The I-MRU aging is accomplished through a counter or a linear feedback shift register (LFSR)-based “shootdown” of I-MRU cache lines. The LFSR is tuned such that an I-MRU line will be protected for a pre-established number of evictions.
    Type: Application
    Filed: October 3, 2006
    Publication date: April 3, 2008
    Inventors: Robert H. Bell, Jeffrey A. Stuecheli
  • Publication number: 20080077742
    Abstract: A deterministic flushing of one or more storage data objects buffered within a storage data buffer to a storage medium involves processing a host data object, which includes writing a corresponding storage data object to the storage data buffer, and flushing the buffered storage data object(s) to the storage medium either prior to or subsequent to that write, as a function of a determination that a storage data buffer flushing event has occurred. The deterministic flushing further involves queuing a host data buffer meta-data update request for later processing.
    Type: Application
    Filed: September 22, 2006
    Publication date: March 27, 2008
    Applicant: International Business Machines Corporation
    Inventors: Lyn L. Ashton, Edward A. Baker, Stanley M. Kissinger, William McEwen, Sean P. McMillen, Michael R. Noel, Glenn R. Wilcock
  • Publication number: 20080065834
    Abstract: A computer system with the means to identify, based on the instruction being decoded, that the operand data the instruction will access will by its nature not have locality of access. Such data should be installed in the cache so that each successive line brought into the data cache that hits the same congruence class is placed in the same set, so as not to disturb the locality of the data that resided in the cache prior to execution of the instruction accessing the non-local data.
    Type: Application
    Filed: September 13, 2006
    Publication date: March 13, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mark A. Check, Jennifer A. Navarro, Charles F. Webb
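The placement idea above can be sketched for one congruence class: fills flagged as having no locality of access always land in one designated way, so streaming lines overwrite each other instead of displacing resident lines. Reserving that way entirely for streaming fills, and the names used, are simplifying assumptions.

```python
class SetFillPolicy:
    """One congruence class where non-temporal fills share a single
    designated way (illustrative sketch)."""

    def __init__(self, num_ways, streaming_way=0):
        self.ways = [None] * num_ways
        self.streaming_way = streaming_way
        # Normal fills rotate LRU-style over the remaining ways.
        self.lru = [w for w in range(num_ways) if w != streaming_way]

    def fill(self, tag, non_temporal=False):
        if non_temporal:
            way = self.streaming_way   # successive non-local lines share a way
        else:
            way = self.lru.pop(0)      # least recently filled normal way
            self.lru.append(way)
        self.ways[way] = tag
        return way
```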
  • Patent number: 7343455
    Abstract: A method, apparatus, and computer for identifying selection of a bad victim during victim selection at a cache and recovering from such a bad victim selection without causing the system to crash or suspend forward progress of the victim selection process. Among the bad victim selections addressed are recovery from selection of a deleted member and recovery from use of LRU state bits that do not map to a member within the congruence class. When the LRU victim selection logic generates an output vector identifying a victim, the output vector is checked to ensure that it is valid (non-null) and that it is not pointing to a deleted member. When the output vector is not valid or points to a deleted member, the LRU victim selection logic is triggered to restart the victim selection process.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: March 11, 2008
    Assignee: International Business Machines Corporation
    Inventors: Robert H. Bell, Jr., Guy Lynn Guthrie, William John Starke
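The validate-and-retry loop above can be sketched as a wrapper around any victim selector: null or deleted-member results restart selection. The `select_fn` callback shape and the retry cap are illustrative assumptions.

```python
def select_valid_victim(select_fn, deleted, max_retries=8):
    """Retry victim selection until the result is non-null and does
    not point at a deleted member (illustrative sketch)."""
    for _ in range(max_retries):
        victim = select_fn()            # raw output vector, as an index or None
        if victim is not None and not deleted[victim]:
            return victim               # valid victim: proceed with replacement
    raise RuntimeError("victim selection failed to converge")
```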
  • Patent number: 7337200
    Abstract: A storage sub-system employs a staging control information table by which staging of data to be read and redundant data thereof can be executed together to reduce response time in the event of a data read failure. The staging control information table also permits pre-read staging to be executed in the forward, backward or both the forward and backward directions, to reduce response time.
    Type: Grant
    Filed: March 21, 2005
    Date of Patent: February 26, 2008
    Assignee: Hitachi, Ltd.
    Inventors: Atsushi Ishikawa, Yoshiko Matsumoto, Kenichi Takamoto
  • Patent number: 7330938
    Abstract: System and method for a hybrid cache. Data received from a data source is cached within a static cache as stable data. The static cache has a fixed size. Portions of the stable data within the static cache are evicted to a dynamic cache when the static cache becomes full. The dynamic cache has a dynamic size. The evicted portions of the stable data are enrolled into the dynamic cache as soft data.
    Type: Grant
    Filed: May 18, 2004
    Date of Patent: February 12, 2008
    Assignee: SAP AG
    Inventors: Iliyan N. Nenov, Panayot M. Dobrikov
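The static/dynamic split above can be sketched as a fixed-size LRU cache that evicts into an unbounded "soft" cache droppable under memory pressure. Python lacks Java-style soft references, so pressure is modeled here by an explicit method; all names are illustrative assumptions.

```python
from collections import OrderedDict

class HybridCache:
    """Fixed-size static cache backed by a droppable dynamic cache
    of 'soft' data (illustrative sketch)."""

    def __init__(self, static_size):
        self.static_size = static_size
        self.static = OrderedDict()  # fixed-size, LRU-ordered stable data
        self.dynamic = {}            # soft data, may vanish under pressure

    def put(self, key, value):
        if key in self.static:
            self.static.move_to_end(key)
        elif len(self.static) >= self.static_size:
            old_key, old_val = self.static.popitem(last=False)
            self.dynamic[old_key] = old_val      # enroll eviction as soft data
        self.static[key] = value

    def get(self, key):
        if key in self.static:
            self.static.move_to_end(key)
            return self.static[key]
        return self.dynamic.get(key)             # may miss after pressure

    def on_memory_pressure(self):
        self.dynamic.clear()                     # soft entries are reclaimable
```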
  • Patent number: 7330935
    Abstract: A cache system comprises i (e.g., 2) groups of m (e.g., 2) ways and n (e.g., 2) sets of cache arrays, a set address decoder, a comparator, a cache address and cache management information. The set address decoder selects all or one of the i groups of cache arrays based on the cache address and cache management information, and selects a j-th set in the selected cache memories according to the cache address.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: February 12, 2008
    Assignee: NEC Corporation
    Inventor: Shinya Yamazaki
  • Patent number: 7321954
    Abstract: An LRU array and method for tracking the accessing of lines of an associative cache. The most recently accessed lines of the cache are identified in the array, and cache lines can be blocked from being replaced. The LRU array contains a data array having a row of data representing each line of the associative cache having a common address portion. A first set of data for the cache line identifies the relative age of the cache line for each way with respect to every other way. A second set of data identifies whether a line of one of the ways is not to be replaced. For cache line replacement, the cache controller selects the least recently accessed line using the contents of the LRU array, considering the value of the first set of data as well as the value of the second set of data, which indicates whether or not a way is locked. Updates to the LRU occur after each pre-fetch or fetch of a line or when it replaces another line in the cache memory.
    Type: Grant
    Filed: August 11, 2004
    Date of Patent: January 22, 2008
    Assignee: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Richard W. Doing, Brian E. Frankel, Kenichi Tsuchiya
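The two data sets above map naturally onto a per-set age vector and a lock bitmap: the victim is the oldest unlocked way, and an access resets the touched way's age. This is a software sketch of the idea, not the patent's hardware encoding; the helper names are assumptions.

```python
def select_victim(ages, locked):
    """Pick the victim way: the oldest (largest age) among unlocked
    ways. ages[w] is higher for less recently used ways; locked[w]
    marks ways that must not be replaced (illustrative sketch)."""
    candidates = [w for w in range(len(ages)) if not locked[w]]
    if not candidates:
        raise RuntimeError("all ways locked; no victim available")
    return max(candidates, key=lambda w: ages[w])

def touch(ages, way):
    """Update relative ages after an access: the accessed way becomes
    most recently used (age 0); ways younger than it age by one."""
    old = ages[way]
    for w in range(len(ages)):
        if ages[w] < old:
            ages[w] += 1
    ages[way] = 0
```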
  • Patent number: 7321955
    Abstract: The storage control device of the present invention controls a plurality of storage devices. The storage control device comprises an LRU write-back unit that writes data stored in the cache memory of the storage control device back to the plurality of storage devices by the LRU method, and a write-back schedule processing unit that selects a storage device for which the LRU write-back unit has executed a small number of write-backs and writes data back to the selected storage device.
    Type: Grant
    Filed: September 8, 2004
    Date of Patent: January 22, 2008
    Assignee: Fujitsu Limited
    Inventor: Hideaki Ohmura
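The scheduling decision above reduces to picking the device with the fewest write-backs executed so far. The count bookkeeping below is an illustrative assumption; the patent describes the policy at the storage-controller level.

```python
def pick_writeback_device(writeback_counts):
    """Index of the device with the fewest write-backs so far
    (ties broken by lowest index; illustrative sketch)."""
    return min(range(len(writeback_counts)), key=writeback_counts.__getitem__)

def schedule_writeback(writeback_counts):
    """Select the least-loaded device, record the write-back against
    it, and return its index."""
    device = pick_writeback_device(writeback_counts)
    writeback_counts[device] += 1   # model the write-back being issued
    return device
```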
  • Publication number: 20080010415
    Abstract: Exemplary embodiments include a method for updating a cache LRU tree including: receiving a new cache line; traversing the cache LRU tree, the cache LRU tree including a plurality of nodes; biasing selection of a victim line toward lines with relatively low priorities among the plurality of lines; and replacing a cache line having a relatively low priority with the new cache line.
    Type: Application
    Filed: July 5, 2006
    Publication date: January 10, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Aaron C. Sawdey, Steven P. VanderWiel
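The biased tree traversal above can be sketched for a 4-way set using a binary pseudo-LRU tree: at each node, descend into the subtree holding lower-priority lines, and fall back to the PLRU bit on ties. The bit encoding and tie-break rule are illustrative assumptions, not the patent's scheme.

```python
def choose_victim(plru, prio):
    """Victim selection over a 4-way set: priority-biased tree PLRU.
    plru is (root, left_node, right_node), where a set bit points at
    the right (less recently used) subtree; prio[w] is the priority of
    way w, lower meaning more evictable (illustrative sketch)."""
    root, left, right = plru
    lmax = max(prio[0], prio[1])    # ways 0,1 sit under the left node
    rmax = max(prio[2], prio[3])    # ways 2,3 sit under the right node
    if lmax < rmax:
        go_right = False            # bias toward the lower-priority subtree
    elif rmax < lmax:
        go_right = True
    else:
        go_right = bool(root)       # tie: follow plain PLRU
    a, b, bit = (2, 3, right) if go_right else (0, 1, left)
    if prio[a] < prio[b]:
        return a
    if prio[b] < prio[a]:
        return b
    return b if bit else a          # tie at the leaf: follow the PLRU bit
```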
  • Patent number: 7318123
    Abstract: A memory controller controls a buffer which stores the most recently used addresses and associated data, but the data stored in the buffer is only a portion of a row of data (termed row head data) stored in main memory. In a memory access initiated by the CPU, both the buffer and main memory are accessed simultaneously. If the buffer contains the address requested, the buffer immediately begins to provide the associated row head data in a burst to the cache memory. Meanwhile, the same row address is activated in the main memory bank corresponding to the requested address found in the buffer. After the buffer provides the row head data, the remainder of the burst of requested data is provided by the main memory to the CPU.
    Type: Grant
    Filed: April 18, 2005
    Date of Patent: January 8, 2008
    Assignee: Mosaid Technologies Incorporated
    Inventor: Nagi Nassief Mekhiel
  • Patent number: 7315873
    Abstract: A technique for improving the efficiency of a loop detecting, reference counting storage reclamation program in a computer system. A depth value is maintained for data objects in a memory resource to indicate a distance from a global, live data object. A reference count is also maintained based on a number of objects pointing to each object. A particular object is processed by the storage reclamation program when another object that previously pointed to the particular object no longer points to it, e.g., because the object was deleted or reset to point to another object, and when the depth value of the another object is one less than the depth value of the particular object. If the particular object is determined to be live, its depth value, and the depth values of other objects it points to or “roots” are reset. If the particular object is dead, it is cleaned up.
    Type: Grant
    Filed: July 15, 2003
    Date of Patent: January 1, 2008
    Assignee: International Business Machines Corporation
    Inventor: Russell L. Lewis
  • Patent number: 7290081
    Abstract: A ROM patching apparatus for use in a data processing system that executes instruction code stored in the ROM. The ROM patching apparatus comprises: 1) a patch buffer for storing a first replacement cache line containing a first new instruction suitable for replacing at least a portion of the code in the ROM; 2) a lockable cache; 3) core processor logic operable to read from an associated memory a patch table containing a first table entry, the first table entry containing (a) the first new instruction and (b) a first patch address identifying a first patched ROM address of the at least a portion of the code in the ROM. The core processor logic loads the first new instruction from the patch table into the patch buffer, stores the first replacement cache line from the patch buffer into the lockable cache, and locks the first replacement cache line into the lockable cache.
    Type: Grant
    Filed: May 14, 2002
    Date of Patent: October 30, 2007
    Assignee: STMicroelectronics, Inc.
    Inventors: Sivagnanam Parthasarathy, Alessandro Risso
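The fetch-time effect of the scheme above can be sketched simply: patched addresses hit the locked cache and return the replacement instruction, while everything else falls through to the ROM. The class name and the dict-based patch table are illustrative assumptions.

```python
class PatchedRom:
    """Instruction fetch that checks a locked patch cache before the
    ROM (illustrative sketch of ROM patching via a lockable cache)."""

    def __init__(self, rom, patch_table):
        self.rom = rom
        self.locked_cache = dict(patch_table)  # addr -> replacement instruction

    def fetch(self, addr):
        if addr in self.locked_cache:   # locked lines are never evicted
            return self.locked_cache[addr]
        return self.rom[addr]           # unpatched code comes from ROM
```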
  • Patent number: 7284096
    Abstract: Systems and methods are provided for data caching. An exemplary method for data caching may include establishing a FIFO queue and an LRU queue in a cache memory. The method may further include establishing an auxiliary FIFO queue for addresses of cache lines that have been swapped out to an external memory. The method may further include determining, if there is a cache miss for the requested data, whether there is a hit for the requested data in the auxiliary FIFO queue and, if so, swapping the requested data into the LRU queue, otherwise swapping it into the FIFO queue.
    Type: Grant
    Filed: August 5, 2004
    Date of Patent: October 16, 2007
    Assignee: SAP AG
    Inventor: Ivan Schreter
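The scheme above (which resembles the classic 2Q policy) can be sketched with three structures: new lines enter a FIFO queue, an auxiliary FIFO remembers the addresses of lines evicted from it, and a line whose address hits the auxiliary FIFO is swapped into the LRU queue instead. Sizes and names are illustrative assumptions.

```python
from collections import OrderedDict, deque

class TwoQueueCache:
    """FIFO queue + LRU queue + auxiliary address FIFO
    (illustrative sketch of the abstract above)."""

    def __init__(self, fifo_size, lru_size, aux_size):
        self.fifo = OrderedDict()
        self.lru = OrderedDict()
        self.aux = deque(maxlen=aux_size)  # evicted addresses only, no data
        self.fifo_size, self.lru_size = fifo_size, lru_size

    def get(self, key):
        if key in self.lru:
            self.lru.move_to_end(key)   # LRU hits refresh recency
            return self.lru[key]
        return self.fifo.get(key)       # FIFO hits do not reorder

    def put(self, key, value):
        if key in self.lru:
            self.lru[key] = value
            self.lru.move_to_end(key)
        elif key in self.aux:           # seen recently: swap into the LRU queue
            if len(self.lru) >= self.lru_size:
                self.lru.popitem(last=False)
            self.lru[key] = value
        else:                           # cold data goes through the FIFO
            if key not in self.fifo and len(self.fifo) >= self.fifo_size:
                old_key, _ = self.fifo.popitem(last=False)
                self.aux.append(old_key)   # remember the evicted address
            self.fifo[key] = value
```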