Of The Least Frequently Used Type, E.g., With Individual Count Value, Etc. (EPO) Patents (Class 711/E12.071)
  • Patent number: 8291169
    Abstract: A method of providing history-based done logic includes receiving a cache line in an L2 cache; determining whether the cache line has a history of access at least three times on a previous call into the L2 cache; providing the cache line directly to a processor if the history of access was less than the at least three times; and loading the cache line into an L1 cache if the history of access was the at least three times.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventor: David A. Luick
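A minimal Python sketch of the promotion rule above (illustrative names, not the patented hardware): a line whose previous L2 residency showed at least three accesses is loaded into L1 as well; colder lines go straight to the processor.

```python
# A minimal sketch (illustrative names, not the patented hardware): promote a
# line into L1 only when a previous L2 residency showed at least three
# accesses; otherwise hand the line straight to the processor.
PROMOTE_THRESHOLD = 3

class HistoryL2:
    def __init__(self):
        self.lines = {}      # address -> cached data
        self.history = {}    # address -> access count from prior residency
        self.l1 = {}         # stand-in for the L1 cache

    def access(self, addr):
        data = self.lines.get(addr)
        if data is None:
            return None      # L2 miss; fill path omitted for brevity
        if self.history.get(addr, 0) >= PROMOTE_THRESHOLD:
            self.l1[addr] = data                 # hot history: load into L1 too
        self.history[addr] = self.history.get(addr, 0) + 1
        return data          # either way the processor receives the line
```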
  • Publication number: 20120254564
    Abstract: A mechanism is provided for managing a high speed memory. An index entry indicates a storage unit in the high speed memory. A corresponding non-free index is set for a different type of low speed memory. The indicated storage unit in the high speed memory is assigned to a corresponding low speed memory by including the index entry in the non-free index. The storage unit in the high speed memory is recovered by demoting the index entry from the non-free index. The mechanism acquires a margin performance loss corresponding to a respective non-free index in response to receipt of a demotion request. The margin performance loss represents a change in a processor read operation time caused by performing a demotion operation in a corresponding non-free index. The mechanism compares the margin performance losses of the respective non-free indexes and selects a non-free index whose margin performance loss satisfies a demotion condition as the demotion index.
    Type: Application
    Filed: March 27, 2012
    Publication date: October 4, 2012
    Applicant: International Business Machines Corporation
    Inventors: Xue D. Gao, Chao Guang Li, Yang Liu, Yi Yang
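A minimal sketch of the demotion-index selection above, assuming a simple illustrative cost model (the publication does not specify one): margin performance loss is modeled as the hit-rate drop per demoted unit times the miss penalty, and the index with the smallest loss satisfies the demotion condition.

```python
# A minimal sketch under an assumed cost model: the margin performance loss of
# a non-free index is the expected increase in processor read time if one
# storage unit is demoted from it. Names and numbers are illustrative.
def pick_demotion_index(non_free_indexes):
    """non_free_indexes: dict mapping index name -> dict with keys
    'hit_rate_delta' (hit-rate drop per demoted unit) and
    'miss_penalty' (extra read latency on a miss, in ns)."""
    def margin_loss(stats):
        return stats['hit_rate_delta'] * stats['miss_penalty']
    # Demotion condition: the smallest margin performance loss wins.
    return min(non_free_indexes, key=lambda k: margin_loss(non_free_indexes[k]))

indexes = {
    'ssd_pool':  {'hit_rate_delta': 0.002, 'miss_penalty': 100_000},
    'disk_pool': {'hit_rate_delta': 0.001, 'miss_penalty': 8_000_000},
}
print(pick_demotion_index(indexes))   # -> 'ssd_pool' under this model
```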
  • Publication number: 20120254520
    Abstract: A swapping method performed using a data processing device, which includes a processor including a plurality of cores, the swapping method including searching for an empty page of a swap memory in response to the swap memory being connected to the data processing device, the search being performed by using at least one core of the plurality of cores, selecting a page to be swapped from a main memory of the data processing device, the selection being performed by using the at least one core by accessing a corresponding main memory list among a plurality of main memory lists, and swapping data of the page selected to be swapped to the empty page, the swapping being performed by using the at least one core.
    Type: Application
    Filed: April 4, 2012
    Publication date: October 4, 2012
    Inventors: Yang Woo Roh, Min Chan Kim, Joo Young Hwang
  • Patent number: 8239631
    Abstract: A system and method for replacing data in a cache utilizes cache block validity information, which contains information that indicates that data in a cache block is no longer needed for processing, to maintain least recently used information of cache blocks in a cache set of the cache, identifies the least recently used cache block of the cache set using the least recently used information of the cache blocks in the cache set, and replaces data in the least recently used cache block of the cache set with data from main memory.
    Type: Grant
    Filed: April 24, 2009
    Date of Patent: August 7, 2012
    Assignee: Entropic Communications, Inc.
    Inventors: Jan-Willem van de Waerdt, Johan Gerard Willem Maria Janssen, Maurice Penners
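A minimal sketch of using validity information to steer LRU replacement, under the assumption that a block marked "no longer needed" is simply demoted to the LRU position so it becomes the next victim; the OrderedDict stands in for one cache set.

```python
from collections import OrderedDict

# A minimal sketch: an OrderedDict models one cache set in LRU order
# (front = least recently used). Marking a block "no longer needed" demotes
# it to the LRU position so it is replaced first. Illustrative only.
class SetWithValidity:
    def __init__(self, ways=4):
        self.ways = ways
        self.blocks = OrderedDict()   # tag -> data, LRU first

    def touch(self, tag, data):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)          # now most recently used
        else:
            if len(self.blocks) >= self.ways:
                self.blocks.popitem(last=False)   # evict the LRU block
            self.blocks[tag] = data

    def mark_done(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag, last=False)  # next eviction victim
```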
  • Publication number: 20120198187
    Abstract: Techniques for preserving memory affinity in a computer system are disclosed. In response to a request for memory access to a page within a memory affinity domain, a determination is made whether the request is initiated by a processor associated with the memory affinity domain. If the request is not initiated by a processor associated with the memory affinity domain, a determination is made whether there is a page ID match with an entry within a page migration tracking module associated with the memory affinity domain. If there is no page ID match, an entry is selected within the page migration tracking module to be updated with a new page ID and a new memory affinity ID. If there is a page ID match, then another determination is made whether or not there is a memory affinity ID match with the entry having the matching page ID field. If there is no memory affinity ID match, the entry is updated with a new memory affinity ID; and if there is a memory affinity ID match, an access counter of the entry is incremented.
    Type: Application
    Filed: January 28, 2011
    Publication date: August 2, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mathew Accapadi, Robert H. Bell, JR., Men-Chow Chiang, Hong L. Hua
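A minimal sketch of the decision tree in the abstract above. The eviction policy for a full tracking table (least-accessed entry) and the counter reset on an affinity change are assumptions; the publication does not commit to either.

```python
# A minimal sketch of the abstract's decision tree. The victim policy for a
# full table and the counter reset on an affinity change are assumptions.
class MigrationTracker:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}   # page_id -> [affinity_id, access_count]

    def record_remote_access(self, page_id, affinity_id):
        entry = self.entries.get(page_id)
        if entry is None:                        # no page ID match
            if len(self.entries) >= self.capacity:
                victim = min(self.entries, key=lambda p: self.entries[p][1])
                del self.entries[victim]         # assumed victim policy
            self.entries[page_id] = [affinity_id, 1]
        elif entry[0] != affinity_id:            # page matches, affinity differs
            entry[0], entry[1] = affinity_id, 1  # update affinity (reset assumed)
        else:                                    # both match
            entry[1] += 1                        # increment the access counter
```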
  • Publication number: 20120117299
    Abstract: Miss-rate curves are constructed in a resource-efficient manner so that they can be built, and memory management decisions made, while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which it is retraced.
    Type: Application
    Filed: November 9, 2010
    Publication date: May 10, 2012
    Applicant: VMWARE, INC.
    Inventors: Carl A. WALDSPURGER, Rajesh VENKATASUBRAMANIAN, Alexander Thomas GARTHWAITE, Yury BASKAKOV, Puneet ZAROO
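A minimal sketch of the resource-efficient idea: track only a sampled subset of pages in an LRU stack, histogram their stack distances, and derive a miss-ratio curve. The sampling rule and the histogram-to-curve conversion are illustrative, not the patented method.

```python
# A minimal sketch of building a miss-ratio curve from LRU stack distances
# over a *sampled* subset of pages. The sampling rule (page number modulo a
# constant) and the curve construction are illustrative.
def miss_rate_curve(trace, sample_mod=4):
    stack, hist, accesses = [], {}, 0
    for page in trace:                  # trace: iterable of page numbers
        if page % sample_mod:
            continue                    # only a subset of pages is traced
        accesses += 1
        if page in stack:
            depth = stack.index(page)   # LRU stack distance (0 = MRU)
            hist[depth] = hist.get(depth, 0) + 1
            stack.remove(page)
        stack.insert(0, page)           # move/insert at the MRU position
    # An access misses in a cache of size c iff its stack distance >= c.
    curve = {}
    for size in range(1, len(stack) + 1):
        hits = sum(n for d, n in hist.items() if d < size)
        curve[size] = 1 - hits / max(accesses, 1)
    return curve

print(miss_rate_curve([0, 4, 8, 0, 4, 8, 0, 12, 4]))
```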
  • Publication number: 20120117329
    Abstract: Combination-based LRU caching employs a mapping mechanism in an LRU cache, separate from a set of LRU caches that store the values used in the combinations. The mapping mechanism is used to track the valid combinations of the values in the LRU caches storing the values, resulting in any given value being stored at most once. Through the addition of a byte pointer, significantly more combinations may be tracked in the same amount of cache memory, with full LRU semantics on both the values and the combinations.
    Type: Application
    Filed: November 9, 2010
    Publication date: May 10, 2012
    Applicant: Microsoft Corporation
    Inventors: Jeffrey Anderson, David Lannoye
  • Patent number: 8171229
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
    Type: Grant
    Filed: October 4, 2010
    Date of Patent: May 1, 2012
    Assignee: SAP AG
    Inventor: Ivan Schreter
  • Publication number: 20120084515
    Abstract: The present disclosure relates to a cache memory controller for controlling a set-associative cache memory in which two or more blocks are arranged in the same set, the cache memory controller including a content modification status monitoring unit for monitoring whether some of the blocks arranged in the same set of the cache memory have had their contents modified, and a cache block replacing unit for replacing a block whose contents have not been modified if some of the blocks arranged in the same set have had their contents modified.
    Type: Application
    Filed: September 2, 2009
    Publication date: April 5, 2012
    Inventor: Gi Ho Park
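A minimal sketch of the replacement preference above, assuming plain LRU as the fallback when every block in the set has been modified; field names are illustrative.

```python
# A minimal sketch: prefer evicting a block whose contents have not been
# modified (clean), since it can be dropped without a write-back. Falls back
# to plain LRU when every block in the set is dirty. Illustrative only.
def pick_victim(blocks):
    """blocks: list of dicts with 'tag', 'dirty', and 'lru_age'
    (higher age = older). Returns the tag to evict."""
    clean = [b for b in blocks if not b['dirty']]
    candidates = clean if clean else blocks
    return max(candidates, key=lambda b: b['lru_age'])['tag']

blocks = [{'tag': 'A', 'dirty': True,  'lru_age': 5},
          {'tag': 'B', 'dirty': False, 'lru_age': 2},
          {'tag': 'C', 'dirty': True,  'lru_age': 7}]
print(pick_victim(blocks))   # -> 'B': oldest *clean* block, although C is older
```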
  • Patent number: 8140782
    Abstract: Embodiments in accordance with the invention permit a virtualization application to interact with a SuperFetch feature of an operating system so that on creation of a virtualization layer the SuperFetch feature is provided the opportunity to act on the newly available file system objects of the virtualization layer. Further, when the virtualization layer is removed, embodiments in accordance with the invention remove the file system objects associated with the virtualization layer from utilization by the SuperFetch feature.
    Type: Grant
    Filed: April 2, 2008
    Date of Patent: March 20, 2012
    Assignee: Symantec Corporation
    Inventors: William E. Sobel, Randall Richards Cook
  • Patent number: 8108614
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: January 31, 2012
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
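A minimal sketch of the SDN-biased replacement shared by this patent and its related entries below (patent 8065488 and publication 20110099333), with illustrative field names: a compiler-identified last-use access moves a line to the SDN state, and replacement drains SDN lines before consulting LRU age.

```python
# A minimal sketch of LRU biased toward "streaming data no longer needed"
# (SDN) lines. State names follow the abstract; the structure is illustrative.
def on_access(line, is_last_use):
    line['lru_age'] = 0                      # now most recently used
    if is_last_use:
        line['state'] = 'SDN'                # eligible for early eviction

def choose_victim(lines):
    """lines: list of dicts with 'tag', 'state', 'lru_age' (higher = older)."""
    sdn = [l for l in lines if l['state'] == 'SDN']
    pool = sdn if sdn else lines             # SDN lines are replaced first
    return max(pool, key=lambda l: l['lru_age'])['tag']
```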
  • Patent number: 8082396
    Abstract: A method, apparatus, system, and signal-bearing medium that, in an embodiment, select a command to send to memory. In an embodiment, the oldest command in a write queue that does not collide with a conflict queue is sent to memory and added to the conflict queue if some or all of the following are true: all of the commands in the read queue collide with the conflict queue, any read command incoming from the processor does not collide with the write queue, the number of commands in the write queue is greater than a first threshold, and all commands in the conflict queue have been present for less than a second threshold. In an embodiment, a command does not collide with a queue if the command does not access the same cache line in memory as the commands in the queue. In this way, in an embodiment, write commands are sent to the memory at a time that reduces the impact on the performance of read commands.
    Type: Grant
    Filed: April 28, 2005
    Date of Patent: December 20, 2011
    Assignee: International Business Machines Corporation
    Inventors: Herman Lee Blackmon, Philip Rogers Hillier, III, Joseph Allen Kirscht, Brian T. Vanderpool
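A minimal sketch of the drain decision above, assuming "collides" means addressing the same cache line as a queued command and that the write queue is ordered oldest first; both thresholds are illustrative.

```python
import time

# A minimal sketch of the abstract's four conditions for sending a write.
# "Collides" is modeled as sharing a cache line with a queued command.
def pick_write(read_q, write_q, conflict_q, incoming_read,
               write_threshold=8, max_conflict_age=0.001):
    lines_in = lambda q: {cmd['line'] for cmd in q}
    now = time.monotonic()
    ok = (
        all(cmd['line'] in lines_in(conflict_q) for cmd in read_q) and
        (incoming_read is None or
         incoming_read['line'] not in lines_in(write_q)) and
        len(write_q) > write_threshold and
        all(now - cmd['t'] < max_conflict_age for cmd in conflict_q)
    )
    if not ok:
        return None
    for cmd in write_q:                      # oldest first (assumed ordering)
        if cmd['line'] not in lines_in(conflict_q):
            write_q.remove(cmd)              # send to memory...
            conflict_q.append({'line': cmd['line'], 't': now})  # ...and track it
            return cmd
    return None
```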
  • Publication number: 20110289277
    Abstract: The present invention obtains, with high precision, the effect of adding or removing cache memory in a storage system, that is, the resulting change in the cache hit rate and in the performance of the storage system. To achieve this, while executing normal cache control in the operational environment of the storage system, the cache hit rate that would result from a changed cache memory capacity is also obtained. Furthermore, with reference to the obtained cache hit rate, the peak performance of the storage system is obtained. Furthermore, with reference to the target performance, the cache memory, the number of disks, and the other resources that are additionally required are obtained.
    Type: Application
    Filed: March 30, 2009
    Publication date: November 24, 2011
    Inventors: Masanori Takada, Shuji Nakamura, Kentaro Shimada
  • Patent number: 8065488
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Grant
    Filed: October 20, 2010
    Date of Patent: November 22, 2011
    Assignee: Intel Corporation
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
  • Patent number: 8060718
    Abstract: A memory leveling system updates physical memory blocks, or blocks, to maintain generally even wear. The system maintains an update count for each block, incrementing a wear level count when the update count reaches a wear level threshold. The system compares a wear level of blocks to determine whether to update a block in place or move data on the block to a less-worn physical block. The system groups the blocks into wear level groups identified by a common wear level to identify blocks that are being worn at a faster or slower than average rate. If an empty block count of a least worn group drops below a threshold, the system moves data from one of the blocks in the least worn group to an empty block in a most worn group.
    Type: Grant
    Filed: June 20, 2006
    Date of Patent: November 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Richard Francis Freitas, Michael Anthony Ko, Norman Ken Ouchi
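A minimal sketch of the wear-level grouping above, with illustrative thresholds: per-block update counts roll over into a wear-level count, and when the least-worn group runs low on empty blocks, cold data is migrated onto an empty block of the most-worn group so future updates land on less-worn storage.

```python
# A minimal sketch with illustrative thresholds (the patent fixes neither).
WEAR_LEVEL_THRESHOLD = 1000   # updates per wear-level increment
MIN_EMPTY = 2                 # refill trigger for the least-worn group

def on_update(block):
    block['updates'] += 1
    if block['updates'] >= WEAR_LEVEL_THRESHOLD:
        block['updates'] = 0
        block['wear'] += 1    # block moves into the next wear-level group

def rebalance(groups):
    """groups: dict wear_level -> list of blocks ({'empty': bool, ...})."""
    least, most = min(groups), max(groups)
    if least == most:
        return
    if sum(b['empty'] for b in groups[least]) >= MIN_EMPTY:
        return                # least-worn group still has spare empty blocks
    src = next((b for b in groups[least] if not b['empty']), None)
    dst = next((b for b in groups[most] if b['empty']), None)
    if src is not None and dst is not None:
        dst['empty'], src['empty'] = False, True   # cold data to a worn block
```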
  • Publication number: 20110252201
    Abstract: A storage system, including: (a) a primary storage entity utilized for storing a data-set of the storage system; (b) a secondary storage entity utilized for backing-up the data within the primary storage entity; (c) a flushing management module adapted to identify within the primary storage entity two groups of dirty data blocks, each group is comprised of dirty data blocks which are arranged within the secondary storage entity in a successive sequence, and to further identify within the primary storage entity a further group of backed-up data blocks which are arranged within the secondary storage entity in a successive sequence intermediately in-between the two identified groups of dirty data blocks; and (d) said flushing management module is adapted to combine the group of backed-up data blocks together with the two identified groups of dirty data blocks to form a successive extended flush sequence and to destage it to the secondary storage entity.
    Type: Application
    Filed: March 29, 2011
    Publication date: October 13, 2011
    Applicant: KAMINARIO TECHNOLOGIES LTD.
    Inventors: Benny KOREN, Erez ZILBER, Avi KAPLAN, Shachar FIENBLIT, Guy KEREN, Eyal GORDON
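A minimal sketch of the flush-combining idea above, assuming blocks are ordered by their address on the secondary storage entity and that a run of clean (already backed-up) blocks is worth rewriting only up to an illustrative length.

```python
# A minimal sketch: merge two dirty runs across a short clean gap into one
# contiguous extended flush sequence, trading a little redundant writing for
# a single sequential destage. MAX_GAP is an illustrative limit.
def extended_flush(blocks, max_gap=8):
    """blocks: list of dicts with 'addr' and 'dirty', ordered by address on
    the secondary storage. Returns the combined sequence, or None."""
    runs, cur, flag = [], [], None
    for b in blocks:                      # split into alternating runs
        if b['dirty'] != flag and cur:
            runs.append((flag, cur))
            cur = []
        flag = b['dirty']
        cur.append(b)
    if cur:
        runs.append((flag, cur))
    for i in range(len(runs) - 2):        # dirty run, short clean gap, dirty run
        (d1, r1), (c, gap), (d2, r2) = runs[i:i + 3]
        if d1 and d2 and not c and len(gap) <= max_gap:
            return r1 + gap + r2          # the extended flush sequence
    return None
```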
  • Publication number: 20110238920
    Abstract: A microprocessor includes a cache memory and a data prefetcher. The data prefetcher detects a pattern of memory accesses within a first memory block and prefetches into the cache memory cache lines from the first memory block based on the pattern. The data prefetcher also observes a new memory access request to a second memory block. The data prefetcher also determines that the first memory block is virtually adjacent to the second memory block and that the pattern, when continued from the first memory block to the second memory block, predicts an access to a cache line implicated by the new request within the second memory block. The data prefetcher also responsively prefetches into the cache memory cache lines from the second memory block based on the pattern.
    Type: Application
    Filed: February 24, 2011
    Publication date: September 29, 2011
    Applicant: VIA Technologies, Inc.
    Inventors: Rodney E. Hooker, John Michael Greer
  • Publication number: 20110107041
    Abstract: A method is provided for executing n data updates in an IC Card which has memory pages supporting m erase operations per page, with m < n. The method includes the step of allocating a cyclic elementary file including N records, each record associated with a memory page of the IC Card, the cyclic elementary file indexing a less recently updated record which is erased before writing the data to be updated.
    Type: Application
    Filed: October 29, 2010
    Publication date: May 5, 2011
    Applicant: INCARD S.A.
    Inventors: Saverio DONATIELLO, Corrado Guidobaldi, Mariangela Rauccio
  • Publication number: 20110099333
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Application
    Filed: October 20, 2010
    Publication date: April 28, 2011
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
  • Publication number: 20110022805
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
    Type: Application
    Filed: October 4, 2010
    Publication date: January 27, 2011
    Inventor: Ivan Schreter
  • Publication number: 20100293337
    Abstract: The disclosure is related to data storage systems having multiple caches and to management of cache activity in data storage systems having multiple caches. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache, and a controller coupled to the volatile memory and the non-volatile memory. The controller can be configured to selectively transfer read data from the first read cache to the second read cache based on a least recently used indicator of the read data and selectively transfer write data from the first write cache to the second write cache based on a least recently written indicator of the write data.
    Type: Application
    Filed: May 13, 2009
    Publication date: November 18, 2010
    Applicant: SEAGATE TECHNOLOGY LLC
    Inventors: Robert D. Murphy, Robert W. Dixon, Steven S. Williams
  • Publication number: 20100268882
    Abstract: A system and method for tracking core load requests and providing arbitration and ordering of requests. When a core interface unit (CIU) receives a load operation from the processor core, a new entry is allocated in a queue of the CIU. In response to allocating the new entry in the queue, the CIU detects contention between the load request and another memory access request. In response to detecting contention, the load request may be suspended until the contention is resolved. Received load requests may be stored in the queue and tracked using a least recently used (LRU) mechanism. The load request may then be processed when the load request resides in a least recently used entry in the load request queue. The CIU may also suspend issuing an instruction unless a read claim (RC) machine is available. In another embodiment, the CIU may issue stored load requests in a specific priority order.
    Type: Application
    Filed: April 15, 2009
    Publication date: October 21, 2010
    Applicant: International Business Machines Corporation
    Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Stephen James Powell, William John Starke, Jeffrey A. Stuecheli
  • Publication number: 20100257320
    Abstract: Techniques for replacing one or more blocks in a cache, the one or more blocks being associated with a plurality of data streams, are provided. The one or more blocks in the cache are grouped into one or more groups, each group corresponding to one of the plurality of data streams. One or more incoming blocks are received. To free space, the one or more blocks of the one or more groups in the cache are invalidated in accordance with at least one of an inactivity of a given data stream corresponding to the one or more groups and a length of the one or more groups. The one or more incoming blocks are stored in the cache. A number of data streams maintained within the cache is maximized.
    Type: Application
    Filed: April 7, 2009
    Publication date: October 7, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian Bass, Giora Biran, Hubertus Franke, Amit Golander, Hao Yu
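A minimal sketch of the stream-aware invalidation above, assuming a scoring rule (idle time first, group length as tie-breaker) that the abstract lists without fixing an order; one whole group is invalidated to free space.

```python
import time

# A minimal sketch: each data stream owns a group of cached blocks, and the
# group of the longest-idle stream (ties broken by group length) is
# invalidated first. The scoring rule is an assumption.
def free_space(groups, now=None):
    """groups: dict stream_id -> {'blocks': list, 'last_access': float}.
    Invalidates one whole group and returns its stream id."""
    now = time.monotonic() if now is None else now
    victim = max(groups, key=lambda s: (now - groups[s]['last_access'],
                                        len(groups[s]['blocks'])))
    groups[victim]['blocks'].clear()      # invalidate the whole group
    return victim
```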
  • Publication number: 20100217937
    Abstract: A data processing apparatus is described which comprises a processor operable to execute a sequence of instructions and a cache memory having a plurality of cache lines operable to store data values for access by the processor when executing the sequence of instructions. A cache controller is also provided which comprises preload circuitry operable in response to a streaming preload instruction received at the processor to store data values from a main memory into one or more cache lines of the cache memory. The cache controller also comprises identification circuitry operable in response to the streaming preload instruction to identify one or more cache lines of the cache memory for preferential reuse.
    Type: Application
    Filed: February 20, 2009
    Publication date: August 26, 2010
    Applicant: ARM LIMITED
    Inventors: Dominic Hugo Symes, Jonathan Sean Callan, Hedley James Francis, Paul Gilbert Meyer
  • Patent number: 7747821
    Abstract: A compression device recognizes patterns of data and compresses the data, then sends the compressed data to a decompression device that identifies a cached version of the data to decompress the data. In this way, the compression device need not resend high bandwidth traffic over the network. Both the compression device and the decompression device cache the data in packets they receive. Each device has a disk, on which each device writes the data in the same order. The compression device looks for repetitions of any block of data between multiple packets or datagrams that are transmitted across the network. The compression device encodes the repeated blocks of data by replacing them with a pointer to a location on disk. The decompression device receives the pointer and replaces the pointer with the contents of the data block that it reads from its disk.
    Type: Grant
    Filed: April 17, 2009
    Date of Patent: June 29, 2010
    Assignee: Juniper Networks, Inc.
    Inventors: Amit P. Singh, Balraj Singh, Vanco Burzevski
  • Publication number: 20100115183
    Abstract: Disclosed is a storage apparatus that extends endurance and reduces bit cost. The storage apparatus includes a controller and a semiconductor storage medium that has a plurality of storage devices. The plurality of storage devices include a first storage device and a second storage device whose upper limit on the data erase count is smaller than that of the first storage device. Area conversion information includes the correspondence between a first address to be specified as a data storage destination and a second address of an area in which the data is to be stored. A rewrite frequency of stored data is recorded for each area.
    Type: Application
    Filed: December 18, 2008
    Publication date: May 6, 2010
    Inventors: Akihiko ARAKI, Yoshiki Kano, Sadahiro Sugimoto, Yusuke Nonaka
  • Publication number: 20090276588
    Abstract: Embodiments of the invention include first storage mediums having first storage characteristics for making up a first pool of capacity of a first tier of storage, and second storage mediums having second storage characteristics for making up a second pool of capacity of a second tier of storage. Free capacity of the first and second pools is shared between the first and second tiers of storage. When the first pool has an amount of free capacity available over a reserved amount of free capacity reserved for first tier data, a first quantity of second tier data is moved from the second tier to the first tier. In exemplary embodiments of the invention, the first and second storage mediums are contained within one or more thin provisioning storage systems, and data is moved between the first and second tiers by allocating thin provisioning chunks to the data being moved.
    Type: Application
    Filed: April 30, 2008
    Publication date: November 5, 2009
    Inventor: Atsushi Murase
  • Publication number: 20090182953
    Abstract: This invention comprises a method and apparatus for an Infinite Network Packet Capture System (INPCS). The INPCS is a high-performance data capture recorder capable of capturing and archiving all network traffic present on a single network or multiple networks. The device can be attached to Ethernet networks via copper or SX fiber, through either a SPAN port (101) router configuration or an optical splitter (102). By this method, multiple sources of network traffic, including gigabit Ethernet switches (102), may provide parallelized data feeds to the capture appliance (104), effectively increasing collective data capture capacity. Multiple captured streams are merged into a consolidated, time-indexed capture stream to support asymmetrically routed network traffic as well as other merged streams for external consumption.
    Type: Application
    Filed: April 1, 2009
    Publication date: July 16, 2009
    Applicant: SOLERA NETWORKS, INC.
    Inventors: JEFFREY V. MERKEY, BRYAN W. SPARKS
  • Publication number: 20090106497
    Abstract: An apparatus includes a processor which issues a plurality of commands including an identifier for classifying each of the commands, and a cache memory which includes a plurality of ways to store data corresponding to a command. The cache memory includes a register to store the identifier, the register corresponding to at least one of the ways being fixed, the fixed way exclusively storing the data corresponding to the identifier while the register stores the identifier. A replacement controller selects a replacement way based on a predetermined replacement algorithm in case of a cache miss, and excludes the fixed way from the candidates for the replacement way when the register corresponding to the fixed way stores the identifier.
    Type: Application
    Filed: September 8, 2008
    Publication date: April 23, 2009
    Applicant: NEC CORPORATION
    Inventor: Koji Kobayashi
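A minimal sketch of excluding a fixed way from victim selection, as in the abstract above; random replacement stands in for the unspecified "predetermined replacement algorithm", and the structure is illustrative.

```python
import random

# A minimal sketch: a way whose identifier register is set is excluded from
# the replacement candidates, so its data is never victimized while pinned.
def pick_way(ways):
    """ways: list of dicts with 'index' and 'fixed_id' (None if unpinned)."""
    candidates = [w for w in ways if w['fixed_id'] is None]
    if not candidates:
        raise RuntimeError('all ways are pinned; no replacement possible')
    return random.choice(candidates)['index']
```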
  • Publication number: 20090089509
    Abstract: Systems and methods for cache replacement monitoring (CRM) are provided. The system includes a monitored cache comprising a monitored cache line set, the monitored cache line set comprising at least one cache line capable of holding data of a monitored address; and a CRM mechanism operatively associated with the monitored cache. The CRM mechanism collects CRM information for the monitored address. The method includes the steps of collecting CRM information for a monitored address in a monitored cache; and recording the CRM information for the monitored address, when at least one of (1) the monitored address is cached in the monitored cache, (2) the monitored address is replaced in the monitored cache, (3) any cache line in a cache line set corresponding to the monitored address is cached in the monitored cache, and (4) any cache line in a cache line set corresponding to the monitored address is replaced in the monitored cache.
    Type: Application
    Filed: June 9, 2008
    Publication date: April 2, 2009
    Inventors: XIAOWEI SHEN, Yefim Shuf, Peter F. Sweeney
  • Publication number: 20090055594
    Abstract: A system, method, and computer program product capture performance-characteristic data from the execution of a program and model system performance based on that data. The targeted performance-characterization data is based on easily captured reuse-distance metrics, where reuse distance is defined as the total number of memory references between two accesses to the same piece of data. Methods for efficiently capturing this kind of metric are described. These data can be refined into easily interpreted performance metrics, such as performance data related to caches with LRU replacement and random replacement strategies, in combination with fully associative as well as limited-associativity cache organizations.
    Type: Application
    Filed: June 5, 2007
    Publication date: February 26, 2009
    Inventors: Erik Berg, Erik Hagersten, Mats Nilsson, Mikael Petterson, Magnus Vesterlund, Hakan Zeffer
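A minimal sketch of capturing the reuse-distance metric exactly as the abstract defines it: the total number of memory references between two accesses to the same piece of data. A histogram of these distances is the raw material for the cache models the abstract mentions.

```python
# A minimal sketch: reuse distance here counts *all* references between two
# accesses to the same datum, matching the abstract's definition.
def reuse_distances(trace):
    last_seen, distances = {}, []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            distances.append(i - last_seen[addr] - 1)  # references in between
        last_seen[addr] = i
    return distances

print(reuse_distances(['a', 'b', 'c', 'a', 'b', 'a']))  # -> [2, 2, 1]
```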
  • Publication number: 20090031084
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Application
    Filed: May 30, 2008
    Publication date: January 29, 2009
    Applicant: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
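Since this class covers count-value-based (least frequently used) replacement, a minimal sketch of a per-line access counter with a selectable LFU or MFU criterion may help; the state encoding of the actual filing is not modeled.

```python
# A minimal sketch: each line keeps an access count, and on a miss either the
# least or most frequently used line is replaced, per the selected policy.
class CountingSet:
    def __init__(self, ways=4, policy='LFU'):
        self.ways, self.policy = ways, policy
        self.counts = {}   # tag -> access count

    def access(self, tag):
        if tag in self.counts:
            self.counts[tag] += 1
            return True                       # hit
        if len(self.counts) >= self.ways:     # miss: replace per policy
            pick = min if self.policy == 'LFU' else max
            victim = pick(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[tag] = 1
        return False
```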
  • Publication number: 20080270715
    Abstract: A secure memory device and method for obtaining and securely storing information relating to a life moment is disclosed. In the method, a parameter is received and inputted in a search heuristic. A search is made for the information according to the search heuristic and, upon finding the information, metadata is appended to the information. The information and metadata is then stored in a secure memory location. The secure memory location has a housing fabricated to withstand a predetermined stress, a detachable connection to a computer and a memory that stores the information and protects it from unauthorized deletion. In some embodiments, the stored information may be selectively deleted in a safe and controlled manner.
    Type: Application
    Filed: June 30, 2008
    Publication date: October 30, 2008
    Applicant: MICROSOFT CORPORATION
    Inventors: Aditha M. Adams, Adrian Mark Chandley, Carl J. Ledbetter, Dale Clark Crosier, Pasquale DeMaio, Steven T. Kaneko, Taryn K. Beck
  • Publication number: 20080177953
    Abstract: A method and apparatus for enabling protection of a particular member of a cache during LRU victim selection. The LRU state array includes additional “protection” bits in addition to the state bits. The protection bits serve as a pointer to identify the location of the member of the congruence class that is to be protected. A protected member is not removed from the cache during standard LRU victim selection, unless that member is invalid. The protection bits are pipelined to MRU update logic, where they are used to generate an MRU vector. The particular member identified by the MRU vector (and pointer) is protected from selection as the next LRU victim, unless the member is invalid. The make-MRU operation affects only the lower-level LRU state bits, which are arranged in a tree-based structure, and thus only negates the selection of the protected member without affecting LRU victim selection of the other members.
    Type: Application
    Filed: December 6, 2007
    Publication date: July 24, 2008
    Inventors: ROBERT H. BELL, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
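A minimal sketch of protection during victim selection, modeling the LRU state as an ordered list rather than the patent's tree-based state bits: the protected member is skipped unless it is invalid.

```python
# A minimal sketch: invalid members are victimized first; otherwise the
# oldest member that is not the protected one is chosen.
def select_victim(members, protected_index):
    """members: list of dicts with 'index' and 'valid', in LRU order
    (oldest first). Returns the index of the victim."""
    for m in members:
        if not m['valid']:
            return m['index']       # an invalid member is always fair game
    for m in members:
        if m['index'] != protected_index:
            return m['index']       # oldest valid, unprotected member
    return members[0]['index']      # degenerate case: everything is protected
```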