Combined Replacement Modes Patents (Class 711/134)
  • Patent number: 6671766
    Abstract: Each time a track is referenced, a value representing the last referenced age is entered for a track entry in a last referenced age table (LRAT). The last referenced age table is indexed by track. A second table, an age frequency table (AFT), counts all segments in use in each reference age. The AFT is indexed by the reference age of the tracks. When a track is referenced, the number of segments used for the track is added to a segment count associated with the last referenced age of the track. The segment count tallies the total number of segments in use for the reference age for all tracks referenced to that age. The number of segments used for the previous last referenced age of the track is subtracted from the segment count associated with the previous last referenced age in the AFT. When free space is needed, tracks are discarded from the LRAT by reference age, the oldest first.
    Type: Grant
    Filed: January 7, 2000
    Date of Patent: December 30, 2003
    Assignee: Storage Technology Corporation
    Inventors: Henk Vandenbergh, Michael Steven Milillo, Gregory William Peterson
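A minimal Python sketch of the two-table bookkeeping described in the abstract above; this is not the patented implementation, and all names (`AgeTables`, `reference`, `free_space`) are invented for illustration:

```python
# Hypothetical sketch of the LRAT/AFT bookkeeping: the LRAT maps each track
# to its last referenced age, and the AFT tallies segments in use per age.

class AgeTables:
    def __init__(self):
        self.age = 0
        self.lrat = {}   # track -> (last referenced age, segments used)
        self.aft = {}    # reference age -> total segments in use at that age

    def reference(self, track, segments):
        """Record a reference to `track`, which now occupies `segments` segments."""
        self.age += 1
        # Subtract the track's segments from the count of its previous age.
        if track in self.lrat:
            prev_age, prev_segs = self.lrat[track]
            self.aft[prev_age] -= prev_segs
            if self.aft[prev_age] == 0:
                del self.aft[prev_age]
        # Add the segments to the count for the current age.
        self.lrat[track] = (self.age, segments)
        self.aft[self.age] = self.aft.get(self.age, 0) + segments

    def free_space(self, needed):
        """Discard tracks by reference age, oldest first, until `needed` segments are freed."""
        freed = []
        for age in sorted(self.aft):
            for track, (t_age, segs) in list(self.lrat.items()):
                if t_age == age:
                    del self.lrat[track]
                    self.aft[age] -= segs
                    needed -= segs
                    freed.append(track)
            if self.aft.get(age) == 0:
                self.aft.pop(age, None)
            if needed <= 0:
                break
        return freed
```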
  • Patent number: 6671780
    Abstract: A modified least recently allocated cache enables a computer to use a modified least recently allocated cache block replacement policy. In a first embodiment, an indicator of the least recently allocated cache block is tracked. When a cache block is referenced, the referenced cache block is compared with the least recently allocated cache block indicator. If the two identify the same cache block, the least recently allocated cache block indicator is adjusted to identify a different cache block. This adjustment prevents the most recently referenced cache block from being replaced. In an alternative embodiment, the most recently referenced cache block is similarly tracked, but the least recently allocated cache block is not immediately adjusted. Only when a new cache block is to be allocated are the least recently allocated cache block indicator and the most recently referenced cache block indicator compared.
    Type: Grant
    Filed: May 31, 2000
    Date of Patent: December 30, 2003
    Assignee: Intel Corporation
    Inventors: Shih-Lien L. Lu, Konrad Lai
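An illustrative Python sketch of the first embodiment (not Intel's implementation): round-robin replacement by allocation order, with the indicator nudged past a just-referenced block so that block is not the next victim. All names are invented:

```python
# Sketch of a modified least-recently-allocated (round-robin) policy.

class ModifiedLRACache:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks
        self.lra = 0  # indicator of the least recently allocated block

    def reference(self, index):
        """Called on a hit to block `index`."""
        if index == self.lra:
            # The just-referenced block was next in line for replacement;
            # advance the indicator so it survives one more round.
            self.lra = (self.lra + 1) % len(self.blocks)

    def allocate(self, tag):
        """Place `tag` in the least recently allocated block and advance."""
        victim = self.lra
        self.blocks[victim] = tag
        self.lra = (self.lra + 1) % len(self.blocks)
        return victim
```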
  • Publication number: 20030236948
    Abstract: A cache way replacement technique in which a least-recently used cache way is identified and replaced, such that the replacement of cache ways over time is substantially evenly distributed among a set of cache ways in a cache memory. The least-recently used cache way is identified in a cache memory having a non-binary number of cache ways.
    Type: Application
    Filed: June 25, 2002
    Publication date: December 25, 2003
    Inventors: Todd D. Erdner, Bradley G. Burgess, Heather L. Hanson
  • Publication number: 20030229761
    Abstract: A computer system is provided including a processor, a persistent storage device, and a main memory connected to the processor and the persistent storage device. The main memory includes a compressed cache for storing data retrieved from the persistent storage device after compression and an operating system. The operating system includes a plurality of interconnected software modules for accessing the persistent storage device and a filter driver interconnected between two of the plurality of software modules for managing memory capacity of the compressed cache and the buffer cache.
    Type: Application
    Filed: June 10, 2002
    Publication date: December 11, 2003
    Inventors: Sujoy Basu, Sumit Roy, Rajendra Kumar
  • Publication number: 20030229759
    Abstract: A method and system for processing Service Level Agreement (SLA) terms in a caching component in a storage system. The method can include monitoring cache performance for groups of data in the cache, each group having a corresponding SLA. Overfunded SLAs can be identified according to the monitored cache performance. In consequence, an entry can be evicted from among one of the groups which correspond to an identified one of the overfunded SLAs. In one aspect of the present invention, the most overfunded SLA can be identified, and an entry can be evicted from among the group which corresponds to the most overfunded SLA.
    Type: Application
    Filed: June 5, 2002
    Publication date: December 11, 2003
    Applicant: International Business Machines Corporation
    Inventors: Ronald P. Doyle, David L. Kaminsky, David M. Ogle
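A loose Python sketch of the eviction idea in the abstract above: measure each group's cache performance against its SLA and evict from the most overfunded group. Using hit rate as the measure of "funding" is an assumption for illustration, as are all names:

```python
# Sketch of SLA-aware eviction: the group delivering the most performance
# beyond its promised level gives up an entry first.

def most_overfunded(groups):
    """groups: {name: (measured_hit_rate, sla_hit_rate)} -> most overfunded group."""
    surplus = {g: measured - promised
               for g, (measured, promised) in groups.items()}
    best = max(surplus, key=surplus.get)
    return best if surplus[best] > 0 else None   # None: no SLA is overfunded

def evict_one(groups, entries):
    """entries: {group: [cache entries]}. Evict and return one entry from the
    most overfunded group, if any."""
    g = most_overfunded(groups)
    if g is None or not entries.get(g):
        return None
    return entries[g].pop(0)
```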
  • Publication number: 20030229760
    Abstract: Storage-Assisted QoS. To provide storage-assisted QoS, a discriminatory storage system able to enforce a service discrimination policy within the storage system can include re-writable media; a storage system controller; a cache; and, a QoS enforcement processor configured to selectively evict entries in the cache according to QoS terms propagated into the storage system through the storage system controller.
    Type: Application
    Filed: June 5, 2002
    Publication date: December 11, 2003
    Applicant: International Business Machines Corporation
    Inventors: Ronald P. Doyle, David L. Kaminsky, David M. Ogle
  • Publication number: 20030221069
    Abstract: A method and apparatus for increasing the processing speed of processors and increasing the data hit ratio is disclosed herein. The method increases the processing speed by providing non-L1 instruction caching that uses prefetching to increase the hit ratio. Cache lines in a cache set are buffered, wherein the cache lines have a parameter indicating data selection characteristics associated with each buffered cache line. Which buffered cache lines to cast out and/or invalidate is then determined based upon that parameter.
    Type: Application
    Filed: May 22, 2002
    Publication date: November 27, 2003
    Applicant: International Business Machines Corporation
    Inventors: Michael Joseph Azevedo, Carol Spanel, Andrew Dale Walls
  • Patent number: 6654855
    Abstract: A time-weighted metric is associated with each line of data that is being held in a data cache. The value of the metric is recomputed as the lines are accessed and the metric value is used to group cache lines for paging purposes. The metrics are computed and stored and the stored metrics are maintained by linking the storage locations together in several linked lists that allow the metrics to be easily manipulated for updating purposes and for determining which metrics represent the most active cache lines. In particular, indices are maintained which identify linked lists of metrics with similar values. At regular predetermined time intervals, these indices are then used to assemble an ordered linked list of metrics corresponding to cache lines with similar metric values. This ordered list can be traversed in order to select cache lines for removal.
    Type: Grant
    Filed: March 12, 2001
    Date of Patent: November 25, 2003
    Assignee: EMC Corporation
    Inventors: Raju C. Bopardikar, Jack J. Stiffler
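A loose Python sketch of the idea in the abstract above: keep a time-weighted activity metric per cache line, decaying as time passes and boosted on access, then evict the lines with the lowest metric. The indexed linked lists of the patent are replaced by a simple sort, and the half-life formula and all names are invented:

```python
# Sketch of time-weighted eviction: each line's metric decays exponentially
# between accesses, so recently and frequently accessed lines score highest.
import math

class TimeWeightedCache:
    def __init__(self, half_life=8.0):
        self.decay = math.log(2) / half_life
        self.metric = {}   # line -> (metric value, time of last update)
        self.now = 0

    def tick(self, dt=1):
        self.now += dt

    def access(self, line):
        old, t = self.metric.get(line, (0.0, self.now))
        # Decay the stored metric to the present, then add this access.
        new = old * math.exp(-self.decay * (self.now - t)) + 1.0
        self.metric[line] = (new, self.now)

    def victims(self, k):
        """Return the k least active lines (lowest decayed metric)."""
        def current(item):
            line, (m, t) = item
            return m * math.exp(-self.decay * (self.now - t))
        ranked = sorted(self.metric.items(), key=current)
        return [line for line, _ in ranked[:k]]
```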
  • Patent number: 6654856
    Abstract: A system and method for managing a cache space employs a space allocation and recycling scheme that has very low complexity for each data caching transaction regardless of the size of the data set, is virtually fragmentation free, and does not depend on garbage collection. The cache space is treated as a linear space with its two ends connected in the manner of a cyclic queue. The reclaiming and allocation of cache space for writing new objects proceeds as an “allocation wave” that sweeps in a pre-selected direction over the “circular” cache space. As the allocation wave moves along the circular space, the space used by existing objects is reclaimed for writing new objects, except for those existing objects that for some reason are not to be written over. Those existing objects to be passed over by the allocation wave are viewed as “interruptions” to the generally first-in-first-out (FIFO) allocation scheme for writing new objects into the circular cache space.
    Type: Grant
    Filed: May 15, 2001
    Date of Patent: November 25, 2003
    Assignee: Microsoft Corporation
    Inventor: Alexander Frank
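A minimal Python sketch of the "allocation wave" described above: the cache is a circular buffer reclaimed FIFO, except that pinned ("not to be written over") objects are passed over. All names are invented for illustration:

```python
# Sketch of a circular cache swept by an allocation wave that skips pinned slots.

class CircularCache:
    def __init__(self, nslots):
        self.slots = [None] * nslots
        self.pinned = set()
        self.wave = 0  # current position of the allocation wave

    def write(self, obj):
        """Advance the wave to the next unpinned slot and write `obj` there."""
        n = len(self.slots)
        for _ in range(n):
            slot = self.wave
            self.wave = (self.wave + 1) % n
            if slot not in self.pinned:
                self.slots[slot] = obj   # reclaim and reuse the slot
                return slot
        raise MemoryError("every slot is pinned")
```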
  • Patent number: 6651143
    Abstract: An invalidation buffer is associated with each cache wherein either multiple processors and/or multiple caches maintain cache coherency. Rather than decoding the addresses and interrogating the cache directory to determine whether data requested by an incoming command is in a cache, the invalidation buffer is quickly checked to determine if the data associated with the requested data has been recently invalidated. If so, and if the command is not intended to replace the recently invalidated data, then the tag and data array of the cache are immediately bypassed to save precious processor time. If lower level caches maintain the same cache coherency and are accessed only through an adjacent cache, then those lower level caches may also be bypassed and a cache miss can be directed immediately to memory. In a multiprocessor system, such as NUMA, COMA, or SMP, where other processors may access different cache levels independent of the adjacent cache level, each invalidation buffer is checked.
    Type: Grant
    Filed: December 21, 2000
    Date of Patent: November 18, 2003
    Assignee: International Business Machines Corporation
    Inventor: Farnaz Mounes-Toussi
  • Patent number: 6643743
    Abstract: An apparatus and method for prefetching cache data in response to data requests. The prefetching uses the memory addresses of requested data to search for other data, from a related address, in a cache. This, or other data, may then be prefetched based on the result of the search.
    Type: Grant
    Filed: March 31, 2000
    Date of Patent: November 4, 2003
    Assignee: Intel Corporation
    Inventors: Herbert Hing-Jing Hum, Zohar Bogin
  • Publication number: 20030191898
    Abstract: A cache includes an error circuit for detecting errors in the replacement data. If an error is detected, the cache may update the replacement data to eliminate the error. For example, a predetermined, fixed value may be used for the update of the replacement data. Each of the cache entries corresponding to the replacement data may be represented in the fixed value. In one embodiment, the error circuit may detect errors in the replacement data using only the replacement data (e.g. no parity or ECC information may be used). In this manner, errors may be detected even in the presence of multiple bit errors which may not be detectable using parity/ECC checking.
    Type: Application
    Filed: April 10, 2003
    Publication date: October 9, 2003
    Applicant: Broadcom Corporation
    Inventor: Erik P. Supnet
  • Publication number: 20030191899
    Abstract: A method determines ion beam emittance, i.e., the beam current density based on position and angle, in a charged particle transport system. The emittance is determined from variations in the current measured in a slot Faraday or sample cup as a straight-edged mechanism traverses the beam upstream of the sample cup in a direction perpendicular to the orientation of the slot Faraday and the straight-edged mechanism, which also can be the direction in which the emittance is determined. An expression in terms of the beam current density can be determined for the derivative of the sample current with respect to position of the mechanism. Depending on the angular spread of the beam reaching the sample cup, the density can be determined directly from the derivative, or can be determined using a least squares analysis of the derivative over a range of mechanism positions.
    Type: Application
    Filed: March 21, 2002
    Publication date: October 9, 2003
    Inventor: Louis Edward Evans
  • Patent number: 6631446
    Abstract: Techniques for managing memory buffers include maintaining a pool of buffers and assigning the buffers to buffer classes based on the frequency with which information stored in the buffers is accessed. Different algorithms can be used to manage buffers assigned to the different classes. A determination can be made as to whether a particular buffer qualifies for entry into a particular one of the buffer classes based on a comparison between a threshold value and the frequency with which information stored in the particular buffer was accessed during a specified time interval. Additionally, the threshold value can be adjusted dynamically to take account, for example, of the current load on the system.
    Type: Grant
    Filed: October 26, 2000
    Date of Patent: October 7, 2003
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Cherkauer, Roger C. Raphael
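A hedged Python sketch of the buffer-class scheme in the abstract above: classify buffers by access frequency against a dynamically adjustable threshold. The class names ("hot"/"cold") and the load-based adjustment rule are invented for illustration:

```python
# Sketch of frequency-based buffer classification with a dynamic threshold.

class BufferPool:
    def __init__(self, threshold=5):
        self.threshold = threshold   # accesses per interval to qualify as "hot"
        self.counts = {}             # buffer -> accesses this interval

    def access(self, buf):
        self.counts[buf] = self.counts.get(buf, 0) + 1

    def classify(self):
        """Assign each buffer to a class at the end of the interval."""
        classes = {buf: ("hot" if n >= self.threshold else "cold")
                   for buf, n in self.counts.items()}
        self.counts = {}             # start a new measurement interval
        return classes

    def adjust_threshold(self, load, high_load=0.8):
        # Under heavy load, demand more accesses before keeping a buffer hot.
        self.threshold += 1 if load > high_load else -1
        self.threshold = max(1, self.threshold)
```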
  • Patent number: 6625695
    Abstract: A method for a cache line replacement policy enhancement to avoid memory page thrashing. The method of one embodiment comprises comparing a memory request address with cache tags to determine if any cache entry in set ‘n’ can match the address. The address is masked to determine if a thrash condition exists. Allocation to set ‘n’ is discouraged if a thrash condition is present.
    Type: Grant
    Filed: December 13, 2002
    Date of Patent: September 23, 2003
    Assignee: Intel Corporation
    Inventor: Blaise B. Fanning
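An illustrative Python sketch of the masking idea in the abstract above: if a request address maps to the same set as a recent conflicting address but carries a different tag, a page-thrash condition may be forming, so allocation into that set is discouraged. The mask widths and the single recorded conflict address are assumptions, not values from the patent:

```python
# Sketch of thrash detection by address masking (assumed field layout).

SET_MASK = 0x0FC0    # bits selecting the cache set (assumed layout)
TAG_MASK = ~0x0FFF   # bits above the set/offset fields (assumed layout)

def thrash_condition(addr, recent_miss_addr):
    """Same set, different tag: the two addresses contend for one set."""
    same_set = (addr & SET_MASK) == (recent_miss_addr & SET_MASK)
    diff_tag = (addr & TAG_MASK) != (recent_miss_addr & TAG_MASK)
    return same_set and diff_tag
```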
  • Patent number: 6615363
    Abstract: An optical disk, and a method of recording information on the same, are provided in which music and images such as motion pictures can be recorded without dropout of information, because the time needed for replacement is shorter than in the prior art. A plurality of management areas are provided on this optical disk, of which one or more located between the innermost and outermost peripheries are used for storing replacement information. Therefore, unlike in the prior art, the optical head need not be moved to the innermost or outermost periphery at the time of replacement.
    Type: Grant
    Filed: March 17, 2000
    Date of Patent: September 2, 2003
    Assignee: Hitachi Maxell, Ltd.
    Inventor: Minoru Fukasawa
  • Publication number: 20030163644
    Abstract: A sectioned ordered queue in an information handling system comprises a plurality of queue sections arranged in order from a first queue section to a last queue section. Each queue section contains one or more queue entries that correspond to available ranges of real storage locations and are arranged in order from a first queue entry to a last queue entry. Each queue section and each queue entry in the queue sections has a weight factor defined for it. Each queue entry has an effective weight factor formed by combining the weight factor defined for the queue section with the weight factor defined for the queue entry. A new entry is added to the last queue section to indicate a newly available corresponding storage location, and one or more queue entries are deleted from the first section of the queue to indicate that the corresponding storage locations are no longer available.
    Type: Application
    Filed: February 27, 2002
    Publication date: August 28, 2003
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tri M. Hoang, Tracy D. Butler, Danny R. Sutherland, David B. Emmes, Mariama Ndoye, Elpida Tzortzatos
  • Patent number: 6601143
    Abstract: A self-adapting method and apparatus for determining an efficient cache line replacement algorithm for selecting which objects (or lines) are to be evicted from the cache. Objects are prioritized based upon weights which are determined dynamically for each object. The object weights depend on a first attribute L1 for each cache object and a first control parameter P1 which determines the influence of the first attribute L1 on the weights. The hit rate of the cache memory is observed during a first interval of time while the control parameter is set to a first value. The control parameter is adjusted and the hit rate is observed during a second interval of time. The control parameter is then adjusted by an incremental amount having a magnitude and direction determined based on whether the hit rate improved or was reduced.
    Type: Grant
    Filed: July 20, 2000
    Date of Patent: July 29, 2003
    Assignee: International Business Machines Corporation
    Inventor: Bernd Lamparter
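A Python sketch of the self-adapting loop described above: observe the hit rate over an interval, then step the control parameter P1 further in the same direction if the hit rate improved, or back the other way if it dropped. The step size and the linear weight formula are invented for illustration:

```python
# Sketch of hit-rate feedback on a replacement-weight control parameter.

class AdaptiveWeigher:
    def __init__(self, p1=1.0, step=0.1):
        self.p1 = p1
        self.step = step
        self.prev_hit_rate = None

    def weight(self, l1):
        """Eviction priority of an object with attribute L1 (higher = keep)."""
        return self.p1 * l1

    def end_of_interval(self, hit_rate):
        if self.prev_hit_rate is not None:
            if hit_rate < self.prev_hit_rate:
                self.step = -self.step   # last adjustment hurt; reverse course
            self.p1 += self.step
        self.prev_hit_rate = hit_rate
```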
  • Patent number: 6598121
    Abstract: A system and method for hierarchically caching objects includes one or more level 1 nodes, each including at least one level 1 cache; one or more level 2 nodes within which the objects are permanently stored or generated upon request, each level 2 node coupled to at least one of the one or more level 1 nodes and including one or more level 2 caches; and means for storing, in a coordinated manner, one or more objects in at least one level 1 cache and/or at least one level 2 cache, based on a set of one or more criteria.
    Type: Grant
    Filed: November 6, 2001
    Date of Patent: July 22, 2003
    Assignee: International Business Machines, Corp.
    Inventors: James R. H. Challenger, Paul Michael Dantzig, Daniel Manuel Dias, Arun Kwangil Iyengar, Eric M. Levy-Abegnoli
  • Patent number: 6594742
    Abstract: The invention features a method and a system for selecting a slot within a memory unit, e.g., a cache, for removal. The memory unit is accessible to a plurality of processors, and each slot in the memory unit has a corresponding entry in an age table. Each time a processor examines one of the entries, an age value of the entry is increased. When the age value rises above a maturity age, the corresponding slot becomes a removable slot. Each processor also maintains statistics to estimate the number of removable slots in the memory unit. According to the statistics, each processor dynamically and independently adjusts a maturity age associated with it to control the number of removable slots. Accordingly, the number of removable slots can be maintained at a pre-determined percentage relative to the total number of slots in the memory unit.
    Type: Grant
    Filed: May 7, 2001
    Date of Patent: July 15, 2003
    Assignee: EMC Corporation
    Inventor: Josef Ezra
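A rough Python sketch of the age-table mechanism above: examining an entry increases its age, slots past a maturity age become removable, and the maturity age is adjusted to keep the removable fraction near a target. The per-processor statistics of the patent are collapsed into one controller here, and all names are invented:

```python
# Sketch of maturity-age-based slot removal with feedback on the threshold.

class AgeTable:
    def __init__(self, nslots, maturity=3, target=0.25):
        self.ages = [0] * nslots
        self.maturity = maturity
        self.target = target   # desired fraction of removable slots

    def examine(self, slot):
        self.ages[slot] += 1

    def removable(self):
        return [i for i, a in enumerate(self.ages) if a > self.maturity]

    def adjust(self):
        """Tune the maturity age toward the target removable fraction."""
        frac = len(self.removable()) / len(self.ages)
        if frac > self.target:
            self.maturity += 1      # too many removable slots: raise the bar
        elif frac < self.target and self.maturity > 0:
            self.maturity -= 1
```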
  • Patent number: 6591347
    Abstract: A dynamically configurable replacement technique in a unified or shared cache reduces domination by a particular functional unit or an application such as unified instruction/data caching by limiting the eviction ability to selected cache regions based on over utilization of the cache by a particular functional unit or application. A specific application includes a highly integrated multimedia processor employing a tightly coupled shared cache between central processing and graphics units wherein the eviction ability of the graphics unit is limited to selected cache regions when the graphics unit over utilizes the cache. Dynamic configurability can take the form of a programmable register that enables either one of a plurality of replacement modes based on captured statistics such as measurement of cache misses by a particular functional unit or application.
    Type: Grant
    Filed: October 9, 1998
    Date of Patent: July 8, 2003
    Assignee: National Semiconductor Corporation
    Inventors: Brett A. Tischler, Rajeev Jayavant
  • Patent number: 6584548
    Abstract: A data processing system comprising a cache memory in which cache entries containing data are stored, and a cache coordinator that invalidates one or more cache entries in response to a signal. In an ID-based invalidation process, a cache entry is associated with an ID that uniquely identifies it and can optionally be associated with one or more data IDs that represent the underlying data contained in the cache entry; the process sends a signal to the cache coordinator to invalidate all cache entries that either have that cache entry ID or are associated with a data ID whose underlying data has changed. In a time-limit-based invalidation process, a cache entry can be associated with a time limit, and the process sends a signal to the cache coordinator to invalidate a cache entry whose time limit has expired.
    Type: Grant
    Filed: July 22, 1999
    Date of Patent: June 24, 2003
    Assignee: International Business Machines Corporation
    Inventors: Donald A. Bourne, Christopher Shane Claussen, George Prentice Copeland, Matthew Dale McClain
  • Patent number: 6571317
    Abstract: A cache includes an error circuit for detecting errors in the replacement data. If an error is detected, the cache may update the replacement data to eliminate the error. For example, a predetermined, fixed value may be used for the update of the replacement data. Each of the cache entries corresponding to the replacement data may be represented in the fixed value. In one embodiment, the error circuit may detect errors in the replacement data using only the replacement data (e.g. no parity or ECC information may be used). In this manner, errors may be detected even in the presence of multiple bit errors which may not be detectable using parity/ECC checking.
    Type: Grant
    Filed: May 1, 2001
    Date of Patent: May 27, 2003
    Assignee: Broadcom Corporation
    Inventor: Erik P. Supnet
  • Patent number: 6560677
    Abstract: Ways of a cache memory system are designated as being in one of three subsets: a normal subset, a transient subset, and a locked subset. The designation of the respective subsets is provided by a normal subset floor index, a transient subset floor index, and a transient subset ceiling index. The respective indexes are used to select the subset into which new entries are copied from main memory as a result of a cache miss. If the new entry is designated as being characterized by normal program behavior, it is copied into the normal subset in the cache. If the new entry is designated as being characterized by transient program behavior, it is copied into the transient subset in the cache. The relationship between the normal subset and the transient subset is programmable. For example, the normal and the transient subsets may include at least one common way of the cache memory or the transient subset may be completely included in the normal subset or completely separate therefrom.
    Type: Grant
    Filed: May 4, 1999
    Date of Patent: May 6, 2003
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Todd Bridges, Thomas Andrew Sartorius
  • Publication number: 20030079087
    Abstract: A cache memory control unit and a cache memory control method according to the present invention avoid the problem in which, when the access frequency of one host is low and the access frequency of another host is high, frequently accessed data pages out less frequently accessed data. A controller includes a function to allocate, in the cache memory, individual cache pages to each access type and to allocate common cache pages regardless of the access type, a function to execute LRU control for each of the individual cache pages and the common cache pages, and a function to load data, which is paged out from the individual cache pages, into the common cache pages. The access type is classified according to the port via which access is made.
    Type: Application
    Filed: October 15, 2002
    Publication date: April 24, 2003
    Applicant: NEC CORPORATION
    Inventor: Atsushi Kuwata
  • Patent number: 6546473
    Abstract: A method for determining the priority of documents in a web cache. The present invention incorporates the document size and the frequency of file access in determining which documents to keep in the cache and which documents to replace. The priority of a document is determined using a ratio of the frequency of access of the document raised to a first value to the size of the document raised to a second value, wherein the first value and the second value are rational numbers other than one. In one embodiment, the first value is greater than one and the second value is less than one. In another embodiment, the age of the document is also considered in determining the priority of a document. In the present embodiment, the ratio is added to a clock value, wherein the clock value is a running counter associated with the document starting at the time the document was first stored in the web cache.
    Type: Grant
    Filed: May 29, 2001
    Date of Patent: April 8, 2003
    Assignee: Hewlett-Packard Company
    Inventors: Ludmila Cherkasova, Gianfranco Ciardo
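A direct Python sketch of the priority formula in the abstract above: a document's priority is its clock value plus frequency raised to a first value, divided by size raised to a second value, with the first value greater than one and the second less than one. The exponents chosen here are examples, not values from the patent:

```python
# Sketch of clock-plus-frequency/size web cache priority.

A, B = 1.5, 0.5   # illustrative exponents: A > 1 rewards frequency,
                  # B < 1 softens the penalty on large documents

def priority(clock, frequency, size):
    return clock + (frequency ** A) / (size ** B)

def choose_victim(docs):
    """docs: {name: (clock, frequency, size)} -> name with the lowest priority."""
    return min(docs, key=lambda d: priority(*docs[d]))
```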
  • Patent number: 6542967
    Abstract: A cache object store is organized to provide fast and efficient storage of data as cache objects organized into cache object groups. The cache object store preferably embodies a multi-level hierarchical storage architecture comprising a primary memory-level cache store and, optionally, a secondary disk-level cache store, each of which is configured to optimize access to the cache object groups. These levels of the cache object store further exploit persistent and non-persistent storage characteristics of the inventive architecture.
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: April 1, 2003
    Assignee: Novell, Inc.
    Inventor: Robert Drew Major
  • Patent number: 6532513
    Abstract: An information recording and reproduction apparatus includes a data transfer controller for receiving data to be written transferred from a host computer; a cache data memory divided into a plurality of segments for temporarily storing the data to be written received by the data transfer controller; a segment connection information memory for storing segment connection information representing a logical connection state of the plurality of segments; a buffer memory controller for managing the data to be written temporarily stored in the cache data memory; and a recording and reproduction controller for writing the data to be written temporarily stored in the cache data memory into a recording medium. The buffer memory controller updates the segment connection information so as to change the logical connection state of the plurality of segments.
    Type: Grant
    Filed: November 16, 2000
    Date of Patent: March 11, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yoshikazu Yamamoto, Hiroyuki Yabuno, Kenji Takauchi
  • Patent number: 6526479
    Abstract: Various methods of caching web resources include caching in accordance with a number of times accessed, a frequency of access, or a duration of access. One method of caching web resources includes the step of accessing a first web resource. The first web resource is cached, if no other web resource is accessed after a pre-determined period of time. Another method of caching web resources includes the step of accessing a first web resource. The first web resource is cached, if the first web resource is subsequently accessed more than a pre-determined number of times. Another method of caching web resources includes the step of accessing a plurality of web resources. The accessed web resources are cached as cached web resources in accordance with at least one of a number of times accessed, a frequency of access, or a duration of access. An apparatus comprises storage media containing caching logic for caching web resources.
    Type: Grant
    Filed: May 18, 2001
    Date of Patent: February 25, 2003
    Assignee: Intel Corporation
    Inventor: Michael D. Rosenzweig
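A Python sketch of the second method in the abstract above: cache a resource only after it has been accessed more than a pre-determined number of times. All names, and the in-memory counter representation, are invented for illustration:

```python
# Sketch of access-count-gated web caching.

class CountingWebCache:
    def __init__(self, min_accesses=2):
        self.min_accesses = min_accesses
        self.hits = {}     # url -> times accessed so far
        self.cache = {}    # url -> cached body

    def fetch(self, url, origin):
        if url in self.cache:
            return self.cache[url]
        self.hits[url] = self.hits.get(url, 0) + 1
        body = origin(url)                     # go to the origin server
        if self.hits[url] > self.min_accesses:
            self.cache[url] = body             # popular enough: cache it
        return body
```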
  • Patent number: 6523092
    Abstract: A method for a cache line replacement policy enhancement to avoid memory page thrashing. The method of one embodiment comprises comparing a memory request address with cache tags to determine if any cache entry in set ‘n’ can match the address. The address is masked to determine if a thrash condition exists. Allocation to set ‘n’ is discouraged if a thrash condition is present.
    Type: Grant
    Filed: September 29, 2000
    Date of Patent: February 18, 2003
    Assignee: Intel Corporation
    Inventor: Blaise B. Fanning
  • Patent number: 6523091
    Abstract: A method for selecting a candidate to mark as overwritable in the event of a cache miss while attempting to avoid a write back operation. The method includes associating a set of data with the cache access request, each datum of the set being associated with a way, and then choosing an invalid way among the set. Where no invalid ways exist among the set, the next step is determining a way that is not most recently used among the set. Next, the method determines whether a shared resource is crowded. When the shared resource is not crowded, the not most recently used way is chosen as the candidate. Where the shared resource is crowded, the next step is to determine whether the not most recently used way differs from an associated source in the memory; where the not most recently used way is the same as an associated source in the memory, it is chosen as the candidate.
    Type: Grant
    Filed: August 16, 2001
    Date of Patent: February 18, 2003
    Assignee: Sun Microsystems, Inc.
    Inventors: Anup S. Tirumala, Marc Tremblay
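A hedged Python sketch of the selection order above: prefer an invalid way; otherwise take a not-most-recently-used (NMRU) way, but when the shared write-back resource is crowded, only accept a way that matches memory (is clean) so no write back is needed. The way records and the crowding flag are invented for illustration:

```python
# Sketch of write-back-avoiding NMRU victim selection.

def choose_victim(ways, mru_index, resource_crowded):
    """ways: list of dicts with 'valid' and 'dirty'. Returns an index or None."""
    # 1. Any invalid way is free for the taking.
    for i, w in enumerate(ways):
        if not w["valid"]:
            return i
    # 2. Otherwise consider not-most-recently-used ways.
    for i, w in enumerate(ways):
        if i == mru_index:
            continue
        if not resource_crowded:
            return i          # crowding is no concern: take any NMRU way
        if not w["dirty"]:
            return i          # crowded: only evict if no write back is needed
    return None               # crowded and every NMRU way is dirty
```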
  • Patent number: 6519684
    Abstract: A cache memory system (e.g., a translation-lookaside buffer 100) utilizing a reduced overhead entry selection process for overwriting and updating entries. The disclosed embodiment of the present invention uses a match bit, a detection operation (such as a status probe operation), and an efficient control mechanism to identify a particular translation in a translation-lookaside buffer 100 to be updated or overwritten. Based on the results of the probe operation, the match bit is selectively set or cleared. Next, a control mechanism selects one of two possible indices 110 and 114 (locations) in the translation-lookaside buffer 100 to perform a write operation. The first index 110 corresponds to an existing entry, while the second index 114 corresponds to a random entry to be overwritten. The selection process is essentially completed in a single step via dedicated logic. In this manner, overhead associated with selecting an entry to be updated is minimized.
    Type: Grant
    Filed: November 23, 1999
    Date of Patent: February 11, 2003
    Assignee: Motorola, Inc.
    Inventor: William C. Moyer
  • Publication number: 20030023815
    Abstract: Methods for controlling and storing data in a cache buffer in a storage apparatus having a nonvolatile memory medium are disclosed. Memory cells are logically divided into a plurality of pages. An open status is registered in a counter for each page that has at least some (and usually all) memory cells available to store new data. A full status is registered in the counter for each page that does not have memory cells that are available to store new data. New data is stored in pages having the open status in the counter. The pages can be weighted according to the read command rate and prioritized for reading and writing purposes.
    Type: Application
    Filed: July 11, 2002
    Publication date: January 30, 2003
    Applicant: FUJITSU LIMITED
    Inventors: Koji Yoneyama, Yuichi Hirao, Shigeru Hatakeyama, Aaron Olbrich, Douglas Prins
  • Publication number: 20030009630
    Abstract: In a DSM-CC receiver (12), a signal comprising a periodically repeated plurality of data sections is received. Storage means (14) are provided for caching the data sections included in the signal (13); the act of accessing a data section creates a reference, which is removed when the section is no longer being accessed. A reference count is kept for each data section such that a data section is marked for deletion if its reference count falls to zero. In a further aspect, the storage means (14) are defragmented by noting the data sections that are being referenced and then, in any order, compacting these referenced data sections by relocating them together in one part of the storage means (14) and updating the values of pointers that referred to the moved sections.
    Type: Application
    Filed: July 1, 2002
    Publication date: January 9, 2003
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS
    Inventors: Steven Morris, Octavius J. Morris
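A rough sketch of the reference-counting and compaction behavior described above; `SectionStore` and its method names are hypothetical, and the pointer-update detail is reduced to returning the surviving sections.

```python
class SectionStore:
    """Reference-counted store of DSM-CC data sections (illustrative only)."""

    def __init__(self):
        self.refcount = {}   # section id -> number of live references
        self.marked = set()  # sections marked for deletion

    def cache(self, section_id):
        self.refcount.setdefault(section_id, 0)

    def acquire(self, section_id):
        # accessing a section creates a reference
        self.refcount[section_id] += 1
        self.marked.discard(section_id)

    def release(self, section_id):
        # a section whose reference count falls to zero is marked for deletion
        self.refcount[section_id] -= 1
        if self.refcount[section_id] == 0:
            self.marked.add(section_id)

    def compact(self):
        # defragment: keep only sections still referenced, relocated together
        self.refcount = {s: c for s, c in self.refcount.items() if c > 0}
        self.marked = set()
        return sorted(self.refcount)
```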
  • Patent number: 6496901
    Abstract: A virtual tape system and method for mapping variable sized data blocks from a host to a fixed sized data block structure of a direct access storage device (DASD) utilizes a buffer between the cache and the host. The control logic operates to access the storage device by transferring data between the cache and the buffer in fixed chunk sizes, and in parallel, transferring data between the host and the buffer in data-chained blocks.
    Type: Grant
    Filed: September 22, 1999
    Date of Patent: December 17, 2002
    Assignee: Storage Technology Corporation
    Inventors: Patrick Albert Lloyd De Martine, Scott Cary Hammett, Stephen Samuel Zanowick
  • Patent number: 6493801
    Abstract: An adaptive cache coherent purging protocol includes recognizing that system performance, especially latency, is affected by when a cache is purged. The occurrences of performance-enhancing and performance-degrading events regarding a cache are counted and compared to a threshold. When the threshold is triggered, the cache becomes a candidate for purging. In an embodiment, a time out delay is implemented before actual purging occurs. When the threshold is not triggered but a cache event occurs, a fake time out delay is triggered and the count is adaptively either raised, lowered, or set to zero in response to performance-enhancing and/or performance-degrading events. The effect is to make the actual purging more likely if the history of cache events indicates that performance would be enhanced thereby, or less likely if the history indicates that performance would be degraded thereby.
    Type: Grant
    Filed: January 26, 2001
    Date of Patent: December 10, 2002
    Assignee: Compaq Computer Corporation
    Inventors: Simon C. Steely, Jr., Nikolaos Hardavellas
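The event counting against a threshold can be sketched as follows; the class name, the unit-weight events, and the zero floor are assumptions for illustration, and the time-out machinery is omitted.

```python
class PurgeCandidate:
    """Counts cache events; crossing the threshold makes the cache a
    candidate for purging (a simplified, illustrative model)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def record(self, degrading):
        # degrading events raise the count, enhancing events lower it;
        # flooring at zero is one way of "setting the count to zero"
        self.count = max(0, self.count + (1 if degrading else -1))
        return self.count >= self.threshold  # True -> candidate for purging
```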
  • Patent number: 6490654
    Abstract: A cache memory replacement algorithm replaces cache lines based on the likelihood that cache lines will not be needed soon. A cache memory in accordance with the present invention includes a plurality of cache lines that are accessed associatively, with a count entry associated with each cache line storing a count value that defines a replacement class. The count entry is typically loaded with a count value when the cache line is accessed, with the count value indicating the likelihood that the contents of cache lines will be needed soon. In other words, data which is likely to be needed soon is assigned a higher replacement class, while data that is more speculative and less likely to be needed soon is assigned a lower replacement class. When the cache memory becomes full, the replacement algorithm selects for replacement those cache lines having the lowest replacement class.
    Type: Grant
    Filed: July 31, 1998
    Date of Patent: December 3, 2002
    Assignee: Hewlett-Packard Company
    Inventors: John A. Wickeraad, Stephen B. Lyle, Brendan A. Voge
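Victim selection by replacement class reduces to picking a line from the lowest class; the dictionary representation below is an assumption, since the patent stores the count value in a per-line entry.

```python
def select_victim(replacement_class):
    """replacement_class: cache line id -> count value, where a higher
    value means the line's contents are more likely to be needed soon.
    When the cache is full, evict from the lowest replacement class."""
    lowest = min(replacement_class.values())
    # any line in the lowest class is a valid victim; take the first found
    return next(k for k, v in replacement_class.items() if v == lowest)
```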
  • Patent number: 6470425
    Abstract: A cache memory having a plurality of entries includes a hit/miss counter that checks for a cache hit or a cache miss on each of the plurality of entries, and a write controller that controls inhibition of replacement of each of the plurality of entries based on the result of the check made by the hit/miss counter.
    Type: Grant
    Filed: April 17, 2000
    Date of Patent: October 22, 2002
    Assignee: NEC Corporation
    Inventor: Atsushi Yamashiroya
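One plausible reading of this mechanism: per-entry hit counts inhibit replacement of frequently hit entries. The threshold, eviction order, and class name below are guesses for illustration only.

```python
class ProtectedCache:
    """Entries whose hit count reaches a threshold are inhibited from
    replacement (an illustrative model of the abstract's mechanism)."""

    def __init__(self, size, protect_after=2):
        self.size = size
        self.protect_after = protect_after
        self.hits = {}  # entry key -> hit count

    def access(self, key):
        if key in self.hits:
            self.hits[key] += 1
            return True  # hit
        if len(self.hits) >= self.size:
            # the write controller may replace only non-inhibited entries
            victims = [k for k, h in self.hits.items() if h < self.protect_after]
            if not victims:
                return False  # every entry is inhibited; allocation skipped
            del self.hits[victims[0]]
        self.hits[key] = 0
        return False  # miss
```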
  • Patent number: 6449695
    Abstract: A cache system controls the insertion and deletion of data items using a plurality of utilization lists. When a data item is stored within the data cache, a corresponding data pointer, or other indicator, is stored within the utilization list in a manner indicative of the sequence in which data items were stored in the data cache. When a data item is subsequently retrieved from the data cache, the corresponding data pointer may be altered or moved to indicate that the data item has recently been retrieved. The data pointers corresponding to data items that have never been retrieved will indicate the sequence with which the data items were stored in the cache such that data items may be identified as least recently used (LRU) data items. The data pointers corresponding to data items that have been retrieved provide an indication of the sequence with which the data items have been retrieved such that the most recently retrieved data item is considered the most recently used (MRU) data item.
    Type: Grant
    Filed: May 27, 1999
    Date of Patent: September 10, 2002
    Assignee: Microsoft Corporation
    Inventors: Alexandre Bereznyi, Sanjeev Katariya
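The dual-list scheme can be sketched with two ordered maps, one keeping insertion order for never-retrieved items and one keeping retrieval order; the eviction preference (oldest never-retrieved item first) and the class name are assumptions.

```python
from collections import OrderedDict

class DualListCache:
    """Sketch of a cache with separate utilization lists for inserted-only
    (LRU by insertion) and retrieved (MRU by retrieval) items."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.inserted = OrderedDict()   # never retrieved, oldest first
        self.retrieved = OrderedDict()  # retrieved, least recently used first

    def put(self, key, value):
        if len(self.inserted) + len(self.retrieved) >= self.capacity:
            self._evict()
        self.inserted[key] = value

    def get(self, key):
        if key in self.inserted:
            # first retrieval moves the pointer to the retrieved list
            value = self.inserted.pop(key)
            self.retrieved[key] = value
        elif key in self.retrieved:
            value = self.retrieved.pop(key)
            self.retrieved[key] = value  # re-append at the MRU end
        else:
            return None
        return value

    def _evict(self):
        # prefer the oldest never-retrieved item; otherwise the LRU retrieved one
        victims = self.inserted or self.retrieved
        victims.popitem(last=False)
```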
  • Publication number: 20020120817
    Abstract: A method for determining which way of an N-way set associative cache should be filled with replacement data upon generation of a cache miss when all of the ways contain valid data. A first choice for way selection and at least one additional choice for way selection are generated. If the status of the way corresponding to the first choice differs from a bias status, a way corresponding to one of the additional choices is designated as the way to be filled with replacement data. Otherwise, the way corresponding to the first choice is designated as the way to be filled with replacement data. Status information for a given way may include any data which is maintained on a cache line by cache line basis, but is preferably data which is maintained for purposes other than way selection. For example, status information might include indications as to whether a cache line is shared or private, clean or dirty.
    Type: Application
    Filed: April 19, 2002
    Publication date: August 29, 2002
    Inventor: Gregg B. Lesartre
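The selection rule can be sketched as a pure function; the two-choice form and bias status come from the abstract, while the function signature and the clean/dirty status values are illustrative.

```python
def choose_way(way_status, first_choice, alternate_choice, bias="clean"):
    """Designate the way to fill with replacement data: keep the first
    choice if its status matches the bias status; otherwise fall back to
    the alternate choice (e.g. biasing toward clean ways avoids the
    writeback a dirty victim would require)."""
    if way_status[first_choice] != bias:
        return alternate_choice
    return first_choice
```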
  • Patent number: 6425057
    Abstract: A method and system for caching objects and replacing cached objects in an object-transfer environment maintain a dynamic indicator (Pr(f)) for each cached object, with the dynamic indicator being responsive to the frequency of requests for the object and being indicative of the time of storing the cached object relative to storing other cached objects. In a preferred embodiment, the size of the object is also a factor in determining the dynamic indicator of the object. In the most preferred embodiment, the cost of obtaining the object is also a factor. A count of the frequency of requests and the use of the relative time of storage counterbalance each other with respect to maintaining a cached object in local cache. That is, a high frequency of requests favors maintaining the object in cache, but a long period of cache favors evicting the object. Thus, cache pollution is less likely to occur.
    Type: Grant
    Filed: August 27, 1998
    Date of Patent: July 23, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Ludmila Cherkasova, Martin F. Arlitt, Richard J. Friedrich, Tai Jin
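The dynamic indicator Pr(f) resembles a GreedyDual-Size-Frequency style priority. The sketch below uses the common form clock + cost × frequency / size, where an inflating clock encodes time of storage; this exact formula is an assumption, not a quote from the patent.

```python
class GDSFCache:
    """Illustrative frequency/size/cost-aware cache with an aging clock."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0.0   # rises on eviction, so older entries lose priority
        self.used = 0
        self.items = {}    # key -> [priority, size, freq, cost]

    def _priority(self, freq, size, cost):
        # frequent, cheap-to-keep, costly-to-refetch objects rank higher
        return self.clock + cost * freq / size

    def access(self, key, size=1, cost=1.0):
        if key in self.items:
            entry = self.items[key]
            entry[2] += 1
            entry[0] = self._priority(entry[2], entry[1], entry[3])
            return True  # hit
        while self.used + size > self.capacity and self.items:
            victim = min(self.items, key=lambda k: self.items[k][0])
            self.clock = self.items[victim][0]  # inflate clock on eviction
            self.used -= self.items[victim][1]
            del self.items[victim]
        self.items[key] = [self._priority(1, size, cost), size, 1, cost]
        self.used += size
        return False  # miss
```

Frequent requests raise an object's priority, while each eviction raises the clock and so erodes the advantage of long-cached objects, which is the counterbalance the abstract describes.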
  • Patent number: 6425058
    Abstract: A set associative cache includes a cache controller, a directory, and an array including at least one congruence class containing a plurality of sets. The plurality of sets are partitioned into multiple groups according to which of a plurality of information types each set can store. The sets are partitioned so that at least two of the groups include the same set and at least one of the sets can store fewer than all of the information types. The cache controller then implements different cache policies for at least two of the plurality of groups, thus permitting the operation of the cache to be individually optimized for different information types.
    Type: Grant
    Filed: September 7, 1999
    Date of Patent: July 23, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Lakshminarayana Baba Arimilli, James Stephen Fields, Jr.
  • Patent number: 6415358
    Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of processors that are each associated with a respective one of a plurality of caches. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the data item. A coherency indicator in the first cache is set to a first state that indicates that the data item is valid. In response to another of the caches indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator in the first cache is updated to a second state that indicates that the address tag is valid and that the first data item in the first cache is invalid.
    Type: Grant
    Filed: February 17, 1998
    Date of Patent: July 2, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis
  • Publication number: 20020078304
    Abstract: An algorithm for selecting a directory entry in a multiprocessor-node system. In response to a memory request from a processor in a processor node, the algorithm finds an available entry to store information about the requested memory line. If at least one entry is available, then the algorithm uses one of the available entries. Otherwise, the algorithm searches for a “shared” entry. If at least one shared entry is available, then the algorithm uses one of the shared entries. Otherwise, the algorithm searches for a “dirty” entry. If at least one dirty entry is available, then the algorithm uses one of the dirty entries. In selecting a directory entry, the algorithm uses a “least-recently-used” (LRU) algorithm because an entry that was not recently used is more likely to be stale. Further, to improve system performance, the algorithm preferably uses a shared entry before using a dirty entry.
    Type: Application
    Filed: May 3, 1999
    Publication date: June 20, 2002
    Inventors: Nabil N. Masri, Wolf-Dietrich Weber
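The free → shared → dirty preference with LRU tie-breaking can be sketched directly; the entry representation (state string plus last-used timestamp) is an assumption.

```python
def select_entry(entries):
    """entries: list of dicts with 'state' in {'free', 'shared', 'dirty'}
    and a 'last_used' timestamp. Prefer an available (free) entry, then the
    least-recently-used shared entry, then the least-recently-used dirty
    entry, since a stale shared entry is cheaper to reclaim than a dirty one."""
    for wanted in ("free", "shared", "dirty"):
        candidates = [i for i, e in enumerate(entries) if e["state"] == wanted]
        if candidates:
            return min(candidates, key=lambda i: entries[i]["last_used"])
    return None
```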
  • Publication number: 20020073283
    Abstract: The present invention uses feedback to determine the size of an object cache. The size of the cache (i.e., its budget) varies and is determined based on feedback from the persistent object system. Persistent objects are evicted from the cache if the storage for persistent objects exceeds the budget. If the storage is less than the budget, then persistent objects in the heap are retained while new persistent objects are added to the cache.
    Type: Application
    Filed: December 13, 2000
    Publication date: June 13, 2002
    Inventors: Brian T. Lewis, Bernd J.W. Mathiske, Neal M. Gafter, Michael J. Jordan
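A minimal sketch of budget-driven eviction; the FIFO eviction order and the class name are assumptions, since the abstract only specifies that eviction occurs while storage for persistent objects exceeds the budget.

```python
from collections import OrderedDict

class BudgetedObjectCache:
    """Object cache whose size is bounded by an externally adjusted budget."""

    def __init__(self, budget):
        self.budget = budget
        self.objects = OrderedDict()  # name -> size; oldest evicted first
        self.used = 0

    def set_budget(self, budget):
        # feedback from the persistent object system resizes the cache
        self.budget = budget
        self._shrink()

    def add(self, name, size):
        self.objects[name] = size
        self.used += size
        self._shrink()

    def _shrink(self):
        # evict only while storage for persistent objects exceeds the budget
        while self.used > self.budget and self.objects:
            _, size = self.objects.popitem(last=False)
            self.used -= size
```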
  • Patent number: 6397302
    Abstract: A multiprocessor system includes a plurality of processors, each processor having one or more caches local to the processor, and a memory controller connectable to the plurality of processors and a main memory. The memory controller manages the caches and the main memory of the multiprocessor system. A processor of the multiprocessor system is configurable to evict from its cache a block of data. The selected block may have a clean coherence state or a dirty coherence state. The processor communicates a notify signal indicating eviction of the selected block to the memory controller. In addition to sending a write victim notify signal if the selected block has a dirty coherence state, the processor sends a clean victim notify signal if the selected block has a clean coherence state.
    Type: Grant
    Filed: June 18, 1998
    Date of Patent: May 28, 2002
    Assignee: Compaq Information Technologies Group, L.P.
    Inventors: Rahul Razdan, James B. Keller, Richard E. Kessler
  • Patent number: 6385699
    Abstract: A computerized method, system and computer program product for managing an object store is disclosed. An exemplary method includes the steps of: collecting performance statistics about storage repositories from which an object(s) can be retrieved; retrieving an object from a storage repository, in response to an object reference; determining a reference probability (RFP) for the object; determining and associating a replacement penalty (RPP) with the object wherein the RPP is based on the one or more performance statistics and the RFP; and storing the object and an associated RPP for the object. The storage repositories could be locally attached devices, network sites, and/or remotely attached devices. If there is insufficient space in the object store for a new object, an object(s) can be replaced with the new object based on the associated RPP of the cached objects. Alternatively, the resolution of one or more objects in the object store can be reduced until sufficient space is available.
    Type: Grant
    Filed: April 10, 1998
    Date of Patent: May 7, 2002
    Assignee: International Business Machines Corporation
    Inventors: Gerald Parks Bozman, John Timothy Robinson, William Harold Tetzlaff
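Replacement-penalty eviction can be sketched as below; the penalty formula (reference probability times fetch cost, standing in for the repository performance statistics) is a plausible guess rather than the patent's exact combination, and the resolution-reduction alternative is omitted.

```python
class ObjectStore:
    """Object store that evicts the cached object with the lowest
    replacement penalty when space is needed (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.objects = {}  # name -> (size, replacement penalty)
        self.used = 0

    def penalty(self, ref_prob, fetch_cost):
        # penalty grows with how likely a re-reference is (RFP) and how
        # expensive the object's repository is to fetch from
        return ref_prob * fetch_cost

    def store(self, name, size, ref_prob, fetch_cost):
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects, key=lambda n: self.objects[n][1])
            self.used -= self.objects[victim][0]
            del self.objects[victim]
        self.objects[name] = (size, self.penalty(ref_prob, fetch_cost))
        self.used += size
```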
  • Publication number: 20020049889
    Abstract: A data processing apparatus has a main memory that contains memory locations with mutually different access latencies. Information from the main memory is cached in a cache memory. When cache replacement is needed selection of a cache replacement location depends on differences in the access latencies of the main memory locations for which replaceable cache locations are in use. When an access latency of a main memory location cached in the replaceable cache memory location is relatively smaller than an access latency of other main memory locations cached in other replaceable cache memory locations, the cached data for that main memory location is replaced by preference over data for the other main memory locations, because of its smaller latency.
    Type: Application
    Filed: June 28, 2001
    Publication date: April 25, 2002
    Inventors: Jan Hoogerbrugge, Paul Stravers
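The latency preference reduces to choosing the replaceable line whose backing main-memory location is cheapest to refetch; the function signature below is illustrative.

```python
def select_replacement(cached_addresses, latency_of):
    """cached_addresses: main-memory address cached in each replaceable
    cache location; latency_of: address -> access latency of that
    location's backing memory. Replace the line whose data is fastest
    to refetch, keeping high-latency data resident."""
    return min(range(len(cached_addresses)),
               key=lambda i: latency_of(cached_addresses[i]))
```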
  • Patent number: 6378042
    Abstract: A system and method for operating an associative memory cache device in a computer system. The system comprises a search client configured to search for data in a caching associative memory such as a content addressable memory (CAM); a caching associative memory element coupled to the search client for generating a matching signal; and an associative memory element, coupled to the caching associative memory element, configured to search for data not stored in the caching associative memory element. The search client issues a search request for data to the associative cache element. If the matching data is found there, then the matching data is returned to the search client. Alternatively, if the data is not found, then the search request is issued to the main associative memory. The least frequently used or least recently used data in the associative memory cache is replaced with the matching data and the higher priority data.
    Type: Grant
    Filed: August 10, 2000
    Date of Patent: April 23, 2002
    Assignee: Fast-Chip, Inc.
    Inventors: Alex E. Henderson, Walter E. Croft
  • Patent number: 6378046
    Abstract: A processor is programmed for accessing data-items from a matrix of rows and columns, access being constrained to a moving window. A cache memory caches data for the window. The cache memory makes a location used for a first data-item from an earliest row available for reuse when the window moves along the row direction, and retrieves a second data item for a latest row of the window into the cache memory. Data for the latest row may be written into the location just made available for reuse. The position of the first data-item along the row direction of the matrix trails the position of the second data-item along the row direction of the matrix at least by the width of the window.
    Type: Grant
    Filed: December 21, 1999
    Date of Patent: April 23, 2002
    Assignee: U.S. Philips Corporation
    Inventors: Erwin B. Bellers, Alphonsius A. J. De Lange