Combined Replacement Modes Patents (Class 711/134)
  • Patent number: 6910152
    Abstract: A block repair device is disclosed for use in a semiconductor memory having an array including a defective cell and a redundant row. The block repair device includes a set of fuses, antifuses, or flash EEPROM cells to store a block repair configuration that determines the dimensions (e.g., the number of rows and columns spanned) of a repair block used to repair the defective cell. Routing circuitry, such as multiplexer circuitry, in the block repair device is directed by the stored block repair configuration to output selected row and column address bits from received row and column addresses in a selected ratio. Comparison circuitry in the block repair device then compares the row and column address bits output by the routing circuitry with a stored portion of the address of the defective cell that defines the repair block. When a match occurs, the comparison circuitry implements a block repair by activating the redundant row and by causing data to be written to or read from the activated redundant row.
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: June 21, 2005
    Assignee: Micron Technology, Inc.
    Inventor: Greg A. Blodgett
  • Patent number: 6910106
    Abstract: A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner.
    Type: Grant
    Filed: December 20, 2002
    Date of Patent: June 21, 2005
    Assignee: Microsoft Corporation
    Inventors: Stuart Sechrest, Michael R. Fortin, Mehmet Iyigun, Cenk Ergan
  • Patent number: 6904501
    Abstract: A cache memory includes a plurality of data memory blocks and a code memory block. Each data memory block has a plurality of storage locations and has a particular storage location identified by a same index value. The code memory block has a plurality of code values with a particular code value being associated with the same index value. The particular code value is operable to identify which ones of the particular storage locations associated with the same index value are locked to prevent alteration of contents therein. The particular code value is also operable to identify which particular storage location has been most recently used and which particular storage location has been least recently used of the particular storage locations associated with the same index value.
    Type: Grant
    Filed: June 17, 2002
    Date of Patent: June 7, 2005
    Assignee: Silicon Graphics, Inc.
    Inventors: David X. Zhang, Kenneth C. Yeager
  • Patent number: 6901484
    Abstract: Storage-assisted QoS is provided. A discriminatory storage system able to enforce a service discrimination policy within the storage system can include re-writable media; a storage system controller; a cache; and a QoS enforcement processor configured to selectively evict entries in the cache according to QoS terms propagated into the storage system through the storage system controller.
    Type: Grant
    Filed: June 5, 2002
    Date of Patent: May 31, 2005
    Assignee: International Business Machines Corporation
    Inventors: Ronald P. Doyle, David L. Kaminsky, David M. Ogle
  • Patent number: 6901483
    Abstract: A method for selecting a line to replace in an inclusive set-associative cache memory system which is based on a least recently used replacement policy but is enhanced to detect and give special treatment to the reloading of a line that has been recently cast out. A line which has been reloaded after having been recently cast out is assigned a special encoding which temporarily gives priority to the line in the cache so that it will not be selected for replacement in the usual least recently used replacement process. This method of line selection for replacement improves system performance by providing better hit rates in the cache hierarchy levels above, by ensuring that heavily used lines in the levels above are not aged out of the levels below due to lack of use.
    Type: Grant
    Filed: October 24, 2002
    Date of Patent: May 31, 2005
    Assignee: International Business Machines Corporation
    Inventors: John T. Robinson, Robert B. Tremaine, Michael E. Wazlowski
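    A minimal Python sketch of the selection scheme described in the abstract above for patent 6901483, assuming a software LRU list per cache set and a bounded list of recently cast-out tags; the class name, the one-shot protection flag, and the cast-out window are illustrative stand-ins, not the patented hardware encoding.

      from collections import OrderedDict

      class ReloadAwareLRUSet:
          """One cache set: LRU replacement, except that a line reloaded soon
          after being cast out is temporarily protected from selection."""

          def __init__(self, num_ways, recent_castout_window=8):
              self.ways = OrderedDict()      # tag -> protected flag, ordered LRU..MRU
              self.num_ways = num_ways
              self.recent_castouts = []      # tags cast out recently (bounded list)
              self.window = recent_castout_window

          def access(self, tag):
              if tag in self.ways:
                  self.ways.move_to_end(tag) # hit: make most recently used
                  return
              if len(self.ways) >= self.num_ways:
                  self._evict()
              # A line reloaded after a recent cast-out gets the "special
              # encoding": it is protected from the next replacement pass.
              self.ways[tag] = tag in self.recent_castouts

          def _evict(self):
              # Walk from LRU toward MRU; skip protected lines once, clearing
              # the protection so they rejoin normal LRU aging afterwards.
              for tag in list(self.ways):
                  if self.ways[tag]:
                      self.ways[tag] = False # protection is consumed, not permanent
                      continue
                  victim = tag
                  break
              else:
                  victim = next(iter(self.ways))   # everything protected: fall back to LRU
              del self.ways[victim]
              self.recent_castouts.append(victim)
              if len(self.recent_castouts) > self.window:
                  self.recent_castouts.pop(0)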
  • Patent number: 6895473
    Abstract: A data control device capable of high-quality, high-efficiency control for speeding up data processing, thus permitting improvement of the throughput of a system. An attribute analyzing unit analyzes an attribute of data, and a main memory stores setting information of the data in a region corresponding to the attribute. A highway cache memory stores the data, and also receives and transmits the data on a highway. A processor performs an operation on the data in accordance with the setting information. A data cache memory is interposed between the processor and the main memory and stores the setting information.
    Type: Grant
    Filed: November 12, 2002
    Date of Patent: May 17, 2005
    Assignee: Fujitsu Limited
    Inventors: Masao Nakano, Takeshi Toyoyama, Yasuhiro Ooba
  • Patent number: 6895466
    Abstract: A method to assign a premigration pseudotime attribute and a stubbing pseudotime attribute to a logical volume. The method defines a plurality of host requests, and associates with each host request a pseudotime range. The method further maintains a logical volume in a first information storage medium at a first time, and determines if a user provides a host request for that logical volume. If a user provides a host request for that logical volume, then the method assigns to the logical volume a premigration pseudotime attribute, and a stubbing pseudotime attribute, comprising a time within the pseudotime range associated with the host request. If, on the other hand, a user does not provide a host request for the logical volume, then the method assigns to that logical volume the first time as said premigration pseudotime attribute, and said first time as said stubbing pseudotime attribute.
    Type: Grant
    Filed: August 29, 2002
    Date of Patent: May 17, 2005
    Assignee: International Business Machines Corporation
    Inventors: Kevin L. Gibble, Gregory T. Kishi, Jonathan W. Peak
  • Patent number: 6883066
    Abstract: In a data storage device, a system and method of optimizing cache management. A method includes selecting a set of cache management algorithms associated with a predetermined pattern in a sequence of commands. Statistics based on a sequence of commands are gathered and a pattern is detected from the statistics. The pattern is associated with predetermined known patterns to identify a set of cache management algorithms that are optimized for the known pattern. A system includes usage statistics that are correlated among a set of known usage patterns. A switch chooses the set of cache management algorithms associated with the known pattern that most closely matches the usage statistics.
    Type: Grant
    Filed: December 11, 2001
    Date of Patent: April 19, 2005
    Assignee: Seagate Technology LLC
    Inventors: James Arthur Herbst, Carol Michiko Baum, Robert William Dixon
  • Patent number: 6883068
    Abstract: Methods and systems are provided for processing a cache. A candidate object is identified for updating. A fresh object corresponding to the candidate object is obtained if it is determined that a newer version of the candidate object is available. A destination buffer is selected from a group of primary and non-primary buffers based on an amount of available space in a primary buffer. The fresh object is stored in the destination buffer.
    Type: Grant
    Filed: December 17, 2001
    Date of Patent: April 19, 2005
    Assignee: Sun Microsystems, Inc.
    Inventors: Panagiotis Tsirigotis, Rajeev Chawla, Sanjay R. Radia
  • Patent number: 6877067
    Abstract: In a multiprocessor system in which a plurality of processors share an n-way set-associative cache memory, a plurality of ways of the cache memory are divided into groups, one group for each processor. When a miss-hit occurs in the cache memory, one way is selected for replacement from the ways belonging to the group corresponding to the processor whose memory access caused the miss-hit. When there is an off-line processor, the ways belonging to that processor are re-distributed to the group corresponding to an on-line processor to allow the on-line processor to use those ways.
    Type: Grant
    Filed: June 12, 2002
    Date of Patent: April 5, 2005
    Assignee: NEC Corporation
    Inventor: Shinya Yamazaki
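    A short Python sketch of the way-group bookkeeping described for patent 6877067 above. The abstract does not say how a way is chosen inside a processor's group, so the random choice here is an assumed stand-in, and the class and method names are illustrative.

      import random

      class WayGroupAllocator:
          """Tracks which ways of an N-way set-associative cache belong to which
          processor, and picks a replacement way only from the requester's group."""

          def __init__(self, num_ways, processor_ids):
              self.groups = {}
              # Divide the ways evenly among processors (remainder goes to the last).
              per_cpu = num_ways // len(processor_ids)
              ways = list(range(num_ways))
              for i, cpu in enumerate(processor_ids):
                  start = i * per_cpu
                  end = start + per_cpu if i < len(processor_ids) - 1 else num_ways
                  self.groups[cpu] = set(ways[start:end])

          def victim_way(self, cpu):
              # On a miss by `cpu`, replacement is confined to that processor's ways.
              return random.choice(sorted(self.groups[cpu]))

          def take_offline(self, offline_cpu, online_cpu):
              # Re-distribute an off-line processor's ways to an on-line processor
              # so the remaining processor can use them.
              self.groups[online_cpu] |= self.groups.pop(offline_cpu)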
  • Patent number: 6868484
    Abstract: A cache includes an error circuit for detecting errors in the replacement data. If an error is detected, the cache may update the replacement data to eliminate the error. For example, a predetermined, fixed value may be used for the update of the replacement data. Each of the cache entries corresponding to the replacement data may be represented in the fixed value. In one embodiment, the error circuit may detect errors in the replacement data using only the replacement data (e.g. no parity or ECC information may be used). In this manner, errors may be detected even in the presence of multiple bit errors which may not be detectable using parity/ECC checking.
    Type: Grant
    Filed: April 10, 2003
    Date of Patent: March 15, 2005
    Assignee: Broadcom Corporation
    Inventor: Erik P. Supnet
  • Patent number: 6865648
    Abstract: Destaging activities in a data storage system are controlled by providing a write pending list of elements, where each element is defined to store information related to a cache memory data element for which a write to storage is pending, and maintaining the write pending list so that destaging of a data element can be based on the maturity of the pending write.
    Type: Grant
    Filed: June 24, 2002
    Date of Patent: March 8, 2005
    Assignee: EMC Corporation
    Inventors: Amnon Naamad, Yechiel Yochai, Sachin More
  • Patent number: 6857045
    Abstract: In a first aspect, a method is provided for updating a compressed cache. The method includes the steps of (1) initiating an update routine for replacing first data stored within the cache with second data, wherein a first section of a compressed data band stored in the cache includes the first data and a second section of the compressed data band includes third data; and (2) in response to initiating the update routine, replacing the first data within the compressed data band with the second data without decompressing the third data. Numerous other aspects are provided.
    Type: Grant
    Filed: January 25, 2002
    Date of Patent: February 15, 2005
    Assignee: International Business Machines Corporation
    Inventors: Robert Edward Galbraith, Adrian Cuenin Gerhard, Brian James King, William Joseph Maitland, Jr., Timothy Jerry Schimke
  • Patent number: 6839809
    Abstract: Methods and apparatus are described for caching objects in a network cache. At least two memory queues are provided for storing the objects. Newly cached objects are stored in a first memory queue. Only selected objects are stored in a second memory queue, the selected objects having been accessed at least once while in the first memory queue.
    Type: Grant
    Filed: May 31, 2000
    Date of Patent: January 4, 2005
    Assignee: Cisco Technology, Inc.
    Inventors: Stewart Forster, Martin Kagan, James A. Aviani, Jr.
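    A minimal sketch of the two-queue policy described for patent 6839809 above: new objects enter a first queue and are promoted to a second queue only when accessed again while still in the first. The class name, the queue sizes, and the FIFO/LRU eviction within each queue are assumptions the abstract does not spell out.

      from collections import OrderedDict

      class TwoQueueCache:
          """Newly cached objects enter q1; an object moves to q2 only if it is
          accessed at least once while still in q1."""

          def __init__(self, q1_size, q2_size):
              self.q1 = OrderedDict()   # probationary queue for newly cached objects
              self.q2 = OrderedDict()   # queue for objects re-accessed while in q1
              self.q1_size, self.q2_size = q1_size, q2_size

          def get(self, key):
              if key in self.q2:
                  self.q2.move_to_end(key)        # refresh position in second queue
                  return self.q2[key]
              if key in self.q1:
                  value = self.q1.pop(key)        # second touch: promote to q2
                  self._put(self.q2, key, value, self.q2_size)
                  return value
              return None                         # miss

          def put(self, key, value):
              if key in self.q2:
                  self.q2[key] = value
                  self.q2.move_to_end(key)
              else:
                  self._put(self.q1, key, value, self.q1_size)

          @staticmethod
          def _put(queue, key, value, limit):
              queue[key] = value
              queue.move_to_end(key)
              if len(queue) > limit:
                  queue.popitem(last=False)       # evict the oldest entry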
  • Patent number: 6836825
    Abstract: One embodiment of the present invention provides a system for synchronizing a cache in a computer system through a peer-to-peer refreshing operation. During operation, the system determines the age of an entry in the cache. If the age of the entry exceeds a life span for the entry, the system invalidates the entry in the cache. The system subsequently refreshes the entry by retrieving an updated version of the entry from a peer of the computer system, if possible, instead of from a centralized source for the entry.
    Type: Grant
    Filed: July 1, 2002
    Date of Patent: December 28, 2004
    Assignee: Sun Microsystems, Inc.
    Inventor: Max K. Goff
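    A small Python sketch of the age-based, peer-refreshed cache described for patent 6836825 above. The `fetch_from_peer` and `fetch_from_origin` callables are hypothetical stand-ins for the peer-to-peer and centralized retrieval paths.

      import time

      class PeerRefreshedCache:
          """Entries carry an insertion time and a life span; stale entries are
          invalidated and then refreshed from a peer if one can supply the value,
          falling back to the centralized source otherwise."""

          def __init__(self, life_span_seconds, fetch_from_peer, fetch_from_origin):
              self.life_span = life_span_seconds
              self.fetch_from_peer = fetch_from_peer       # callable(key) -> value or None
              self.fetch_from_origin = fetch_from_origin   # callable(key) -> value
              self.entries = {}                            # key -> (value, stored_at)

          def get(self, key):
              entry = self.entries.get(key)
              if entry is not None:
                  value, stored_at = entry
                  if time.time() - stored_at <= self.life_span:
                      return value
                  del self.entries[key]                    # age exceeded: invalidate
              # Refresh from a peer if possible, otherwise from the central source.
              value = self.fetch_from_peer(key)
              if value is None:
                  value = self.fetch_from_origin(key)
              self.entries[key] = (value, time.time())
              return value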
  • Patent number: 6834329
    Abstract: A data grouping means divides data items stored in a cache memory section into groups of data having different access patterns. The priority assigning means assigns an order of priorities to data items in each group that the priority assigning means manages according to an individual caching algorithm. The lowest priority determining means determines the lowest priority group when there is not enough unused memory space in the cache memory section and it is necessary to purge a data item. The data operating means purges the lowest priority data in the lowest priority group. Thus the groups of data having different access patterns can be cached effectively.
    Type: Grant
    Filed: July 9, 2002
    Date of Patent: December 21, 2004
    Assignee: NEC Corporation
    Inventors: Shigero Sasaki, Atsuhiro Tanaka, Kosuke Tatsukawa
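    A sketch of the grouped purging described for patent 6834329 above. The per-group caching algorithms are represented here by caller-supplied priority functions, which is an assumption; the abstract leaves the individual algorithms unspecified.

      class GroupedCache:
          """Data items are divided into groups with different access patterns;
          each group ranks its items by its own rule, and when space runs out
          the lowest-priority item of the lowest-priority group is purged."""

          def __init__(self, capacity, group_priority_fn, item_priority_fns):
              self.capacity = capacity
              self.group_priority_fn = group_priority_fn    # group name -> priority
              self.item_priority_fns = item_priority_fns    # group name -> (item -> priority)
              self.groups = {}                              # group name -> {key: item}

          def put(self, group, key, item):
              if sum(len(g) for g in self.groups.values()) >= self.capacity:
                  self._purge_one()
              self.groups.setdefault(group, {})[key] = item

          def _purge_one(self):
              populated = [g for g, items in self.groups.items() if items]
              if not populated:
                  return
              # Lowest-priority non-empty group loses its lowest-priority item.
              victim_group = min(populated, key=self.group_priority_fn)
              items = self.groups[victim_group]
              rank = self.item_priority_fns[victim_group]
              victim_key = min(items, key=lambda k: rank(items[k]))
              del items[victim_key]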
  • Patent number: 6829682
    Abstract: A method for controlling the operation of a dynamic random access memory (DRAM) system, the DRAM system having a plurality of memory cells organized into rows and columns, is disclosed. In an exemplary embodiment of the invention, the method includes enabling a destructive read mode, the destructive read mode for destructively reading a bit of information stored within an addressed DRAM memory cell. The destructively read bit of information is temporarily stored into a temporary storage device. A delayed write back mode is enabled, the delayed write back mode for restoring the bit of information back to the addressed DRAM memory cell at a later time. The execution of the delayed write back mode is then scheduled, depending upon the availability of space within the temporary storage device.
    Type: Grant
    Filed: April 26, 2001
    Date of Patent: December 7, 2004
    Assignee: International Business Machines Corporation
    Inventors: Toshiaki Kirihata, Sang Hoo Dhong, Hwa-Joon Oh, Matthew Wordeman
  • Patent number: 6829679
    Abstract: Caching memory contents differently based on the region to which the memory has been partitioned or allocated is disclosed. A first region of a first line of memory to be cached is determined. The memory has a number of regions, including the first region, over which the lines of memory, including the first line, are partitioned. Each region has a first variable having a corresponding second variable. If the first variable for any region is greater than its corresponding second variable, one such region is selected as a second region. A line from the lines of the memory currently stored in the cache and partitioned to the second region is selected as the second line. The second line is replaced with the first line in the cache, the first variable for the second region is decremented, and the first variable for the first region is incremented.
    Type: Grant
    Filed: November 9, 2001
    Date of Patent: December 7, 2004
    Assignee: International Business Machines Corporation
    Inventors: Donald R. DeSota, Thomas D. Lovett
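    A sketch of the region-based replacement described for patent 6829679 above, reading the "first variable" as a region's current occupancy and the "second variable" as its target quota, which is one plausible interpretation. The abstract only covers the case where some region is over its target, and it does not say which line inside that region is chosen; both choices below are assumptions.

      class RegionQuotaCache:
          """Each memory region has a current occupancy count and a target quota;
          a newly cached line displaces a line from some over-quota region."""

          def __init__(self, quotas):
              self.quota = dict(quotas)            # region -> target line count
              self.count = {r: 0 for r in quotas}  # region -> lines currently cached
              self.lines = {r: set() for r in quotas}

          def insert(self, region, line):
              over = [r for r in self.quota if self.count[r] > self.quota[r]]
              if over:
                  victim_region = over[0]                        # any over-quota region qualifies
                  victim_line = next(iter(self.lines[victim_region]))
                  self.lines[victim_region].discard(victim_line)
                  self.count[victim_region] -= 1                 # decrement the evicted region's count
              self.lines[region].add(line)
              self.count[region] += 1                            # increment the incoming region's count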
  • Patent number: 6823426
    Abstract: Disclosed are a system and method of replacing data in cache ways of a cache memory array. If one or more cache ways are locked from replacement, a cache way may be selected from among the unlocked cache ways based upon a pseudo random selection scheme.
    Type: Grant
    Filed: December 20, 2001
    Date of Patent: November 23, 2004
    Assignee: Intel Corporation
    Inventors: Marc A. Goldschmidt, Roger W. Luce
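    A tiny Python illustration of the selection rule described for patent 6823426 above: the pseudo-random draw is taken over the unlocked ways only, so a locked way can never be chosen. The function name and the error handling are illustrative.

      import random

      def select_replacement_way(num_ways, locked_ways, rng=random):
          """Pick a victim way, never choosing a locked way."""
          unlocked = [w for w in range(num_ways) if w not in locked_ways]
          if not unlocked:
              raise RuntimeError("all ways are locked; nothing can be replaced")
          return unlocked[rng.randrange(len(unlocked))]

      # Example: an 8-way set with ways 0 and 1 locked never yields 0 or 1.
      victim = select_replacement_way(8, locked_ways={0, 1})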
  • Publication number: 20040221110
    Abstract: A cache is configured to receive direct access transactions. Each direct access transaction explicitly specifies a way of the cache. The cache may alter the state of its replacement policy in response to a direct access transaction explicitly specifying a particular way of the cache. The state may be altered such that a succeeding cache miss causes an eviction of the particular way. Thus, a direct access transaction may be used to provide a deterministic setting to the replacement policy, providing predictability to the entry selected to store a subsequent cache miss. In one embodiment, the replacement policy may be a pseudo-random replacement policy. In one embodiment, a direct access transaction also explicitly specifies a cache storage entry to be accessed in response to the transaction.
    Type: Application
    Filed: June 4, 2004
    Publication date: November 4, 2004
    Inventors: Joseph B. Rowlands, Michael P. Dickman
  • Patent number: 6813684
    Abstract: Disclosed is a disk system for controlling divided areas of a cache memory. Identification information that denotes whether data to be accessed is user data or meta data is added to each I/O command issued from a CPU. A disk controller, when receiving such an I/O command, selects a target virtual area from among a plurality of virtual areas set in the cache memory according to the identification information. When new data is to be stored in the cache memory upon the execution of the I/O command, the disk controller records the number of the selected virtual area in the cache memory in correspondence with the new data. A cache data replacement is executed independently for each cache area, thereby a predetermined upper limit size of each cache memory area can be kept.
    Type: Grant
    Filed: August 19, 2002
    Date of Patent: November 2, 2004
    Assignee: Hitachi, Ltd.
    Inventors: Akihiko Sakaguchi, Shinji Fujiwara
  • Patent number: 6813692
    Abstract: In a DSM-CC receiver (12), a signal comprising a periodically repeated plurality of data sections is received. Storage means (14) are provided for caching the data sections included in the signal (13), where the act of accessing a data section results in a reference being created, this reference being removed when the data section is no longer being accessed. A reference count is kept for each data section such that a data section is marked for deletion if its reference count falls to zero. There is a further aspect where the storage means (14) are defragmented by noting the data sections that are being referenced and then, in any order, compacting these referenced data sections by relocating them together in one part of the storage means (14) and updating the values of pointers that referred to the moved cells.
    Type: Grant
    Filed: July 1, 2002
    Date of Patent: November 2, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Steven Morris, Octavius J. Morris
  • Publication number: 20040215889
    Abstract: A method and apparatus in a data processing system for protecting against displacement of two types of cache lines using a least recently used cache management process. A first member in a class of cache lines is selected as a first substitute victim. The first substitute victim is unselectable by the least recently used cache management process, and the first substitute victim is associated with a selected member in the class of cache lines. A second member in the class of cache lines is selected as a second substitute victim. The second substitute victim is likewise unselectable by the least recently used cache management process and is associated with the selected member in the class of cache lines. One of the first or second substitute victims is replaced in response to a selection of the selected member as a victim when a cache miss occurs, wherein the selected member remains in the class of cache lines.
    Type: Application
    Filed: April 28, 2003
    Publication date: October 28, 2004
    Applicant: International Business Machines Corporation
    Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, William John Starke
  • Publication number: 20040215885
    Abstract: A method of reducing errors in a cache memory of a computer system (e.g., an L2 cache) by periodically issuing a series of purge commands to the L2 cache, sequentially flushing cache lines from the L2 cache to an L3 cache in response to the purge commands, and correcting single-bit errors in the cache lines as they are flushed to the L3 cache. Purge commands are issued only when the processor cores associated with the L2 cache have an idle cycle available in a store pipe to the cache. The flush rate of the purge commands can be programmably set, and the purge mechanism can be implemented either in software running on the computer system, or in hardware integrated with the L2 cache. In the case of software, the purge mechanism can be incorporated into the operating system. In the case of hardware, a purge engine can be provided which advantageously utilizes the store pipe that is provided between the L1 and L2 caches.
    Type: Application
    Filed: April 25, 2003
    Publication date: October 28, 2004
    Applicant: International Business Machines Corporation
    Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Kevin Franklin Reick, Derek Edward Williams
  • Patent number: 6785771
    Abstract: Provided is a method, system, and program for destaging data from a first computer readable medium to a second computer readable medium. A list of entries indicating data blocks in the first computer readable medium is scanned. For each entry scanned, a determination is made as to whether the data block indicated in the scanned entry satisfies a criteria. If the data block indicated in the scanned entry satisfies the criteria, then a destage operation is called to destage the data block in the scanned entry from the first computer readable medium to the second computer readable medium. If the called destage operation is not initiated, then the scanned entry is removed from the cache list. The removed scanned entry is added to one destage wait list. During one destage operation, data blocks indicated in entries in the destage wait list are destaged.
    Type: Grant
    Filed: December 4, 2001
    Date of Patent: August 31, 2004
    Assignee: International Business Machines Corporation
    Inventors: Kevin John Ash, Brent Cameron Beardsley, Michael Thomas Benhase, Joseph Smith Hyde, II, Thomas Charles Jarvis, Steven Robert Lowe, David Frank Mannenbach
  • Patent number: 6785770
    Abstract: A data processing apparatus has a main memory that contains memory locations with mutually different access latencies. Information from the main memory is cached in a cache memory. When cache replacement is needed selection of a cache replacement location depends on differences in the access latencies of the main memory locations for which replaceable cache locations are in use. When an access latency of a main memory location cached in the replaceable cache memory location is relatively smaller than an access latency of other main memory locations cached in other replaceable cache memory locations, the cached data for that main memory location is replaced by preference over data for the other main memory locations, because of its smaller latency.
    Type: Grant
    Filed: June 28, 2001
    Date of Patent: August 31, 2004
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Jan Hoogerbrugge, Paul Stravers
  • Patent number: 6775745
    Abstract: Methods and an apparatus for a caching mechanism which improves system performance are provided. One exemplary method includes reading files in response to a request from an operating system. Then, copies of the read files are stored in a cache where the cache is located within a random access memory of the computer. Next, frequency factors are assigned to each of the files stored in the cache, where the frequency factors indicate how often each of the corresponding files has been accessed by the operating system. Then, the frequency factors are scanned in response to a capacity of the cache being attained. Next, a least frequently and least recently used file is identified. Then, the least frequently and least recently used file is eliminated to liberate capacity of the cache.
    Type: Grant
    Filed: September 7, 2001
    Date of Patent: August 10, 2004
    Assignee: Roxio, Inc.
    Inventors: Gregory P. Fry, Carl P. Fry
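    A Python sketch of the frequency-and-recency eviction described for patent 6775745 above, assuming frequency is the primary criterion and recency the tie-breaker; the abstract does not define how the two factors are combined, so that ordering, the class name, and the byte-based capacity are assumptions.

      import time

      class FrequencyAwareFileCache:
          """RAM file cache that tracks how often and how recently each cached
          file was read; when capacity is reached, the least frequently used
          file is eliminated, with least recently used breaking ties."""

          def __init__(self, capacity_bytes):
              self.capacity = capacity_bytes
              self.used = 0
              self.files = {}   # path -> (data, frequency_factor, last_access_time)

          def read(self, path, read_from_disk):
              if path in self.files:
                  data, freq, _ = self.files[path]
                  self.files[path] = (data, freq + 1, time.time())
                  return data
              data = read_from_disk(path)
              while self.used + len(data) > self.capacity and self.files:
                  self._evict_one()
              self.files[path] = (data, 1, time.time())
              self.used += len(data)
              return data

          def _evict_one(self):
              # Scan the frequency factors; evict the least frequently used file,
              # breaking ties by least recent access.
              victim = min(self.files, key=lambda p: (self.files[p][1], self.files[p][2]))
              data, _, _ = self.files.pop(victim)
              self.used -= len(data)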
  • Patent number: 6772296
    Abstract: One embodiment of the present invention provides a system that facilitates storage of objects in a persistent memory with asymmetric access characteristics. The system operates by receiving an access to an object. If the access is a read access, the system looks up the object through an indirectory. This indirectory includes an entry that points to a location of the object within the persistent memory if updates to the object have been recorded in the persistent memory. Otherwise, the indirectory entry points to a location of the object within a volatile memory. If the object is located in the volatile memory, the system reads the object from the volatile memory. Otherwise, if the object is located in the persistent memory, the system reads the object from the persistent memory directly without first copying the object into the volatile memory. In one embodiment of the present invention, if the access is a write access, the system looks up the object through the indirectory.
    Type: Grant
    Filed: August 10, 2000
    Date of Patent: August 3, 2004
    Assignee: Sun Microsystems, Inc.
    Inventor: Bernd J. W. Mathiske
  • Patent number: 6763420
    Abstract: A plurality of cache addressing functions are stored in main memory. A processor which executes a program selects one of the stored cache addressing functions for use in a caching operation during execution of a program by the processor.
    Type: Grant
    Filed: July 13, 2001
    Date of Patent: July 13, 2004
    Assignee: Micron Technology, Inc.
    Inventor: Ole Bentz
  • Patent number: 6757841
    Abstract: Dynamic switching between mirrored and non-mirrored implementations is provided by formatting a system to operate in a simulated mirrored mode. A null location is used as a placeholder for a storage location that can be subsequently added. The null location is replaced in whole or in part by the storage location when actual mirrored mode is desired, thereby making the switch to actual mirrored mode appear dynamic and transparent to a user. A first location is designated as a first part of a mirror, and the null location is designated to simulate a second part of the mirror. Data and/or operating systems may be mirrored.
    Type: Grant
    Filed: September 14, 2000
    Date of Patent: June 29, 2004
    Assignee: Intel Corporation
    Inventors: Jonathan Gitlin, Kevin W. Bross
  • Patent number: 6748495
    Abstract: A random number generator circuit includes a primary circuit configured to generate a value within a first range and a secondary circuit configured to generate a value within a second range. A detector circuit detects whether or not the value from the primary circuit is within the desired output range for the random number generator circuit, and selects either the value from the primary circuit or the value from the secondary circuit in response. The second range is the desired output range and the first range encompasses the second range. In one embodiment, the primary circuit has complex harmonics but may generate values outside the desired range. The secondary circuit may have less complex harmonics, but may generate values only within the desired range. In one implementation, the random number generator circuit is used to generate a replacement way for a cache.
    Type: Grant
    Filed: May 15, 2001
    Date of Patent: June 8, 2004
    Assignee: Broadcom Corporation
    Inventors: Joseph B. Rowlands, Chun H. Ning
  • Patent number: 6748494
    Abstract: A file control device having physical storage devices and logical storage devices, which prevents competition for access to the physical storage device and avoids a decline in performance. When adding a new block to the cache memory or when ejecting a block from cache memory, a block with the lowest access frequency out of data retained in a physical storage device having the lowest access frequency is determined for ejection. The file control device concurrently monitors storage device priority information in addition to data priority information to control transfer of data between the storage device and the cache memory.
    Type: Grant
    Filed: March 17, 2000
    Date of Patent: June 8, 2004
    Assignee: Fujitsu Limited
    Inventor: Mitsuhiko Yashiro
  • Patent number: 6748487
    Abstract: A disk cache controlling method and a disk array system which includes a plurality of disk devices and a disk cache. Data is divided and stored into the disk devices and a plurality of volumes are assigned to the disk devices. A disk array controller controls the disk devices. Assignment of new disk cache areas includes dividing each of the volumes into areas with an arbitrary fixed length, determining an access frequency for each of the divided areas of each of the volumes, and changing assignment of the disk cache areas to the divided areas according to the access frequency for each divided area. Disk cache areas are reassigned by taking them from the divided area having the lowest access frequency.
    Type: Grant
    Filed: August 3, 2000
    Date of Patent: June 8, 2004
    Assignee: Hitachi, Ltd.
    Inventors: Yoshifumi Takamoto, Kiyohiro Obara
  • Patent number: 6745212
    Abstract: Disclosed is a system, method, and an article of manufacture for preferentially keeping an uncopied data set in one of two storage devices in a peer-to-peer environment when data needs to be removed from the storage devices. Each time a data set is modified or newly created, flags are used to denote whether the data set needs to be copied from one storage device to the other. The preferred embodiments modify the timestamp for each uncopied data set by adding a period of time, and thus give preference to the uncopied data set when data is removed from the storage device on a least-recently-used basis as denoted by the timestamp of each data set. Once the data set is copied, the timestamp is set back to normal by subtracting the same period of time that was added when the data set was flagged as needing to be copied.
    Type: Grant
    Filed: June 27, 2001
    Date of Patent: June 1, 2004
    Assignee: International Business Machines Corporation
    Inventors: Gregory Tad Kishi, Mark Allan Norman, Jonathan Wayne Peake, William Henry Travis
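    A minimal sketch of the timestamp bias described for patent 6745212 above: uncopied data sets get a fixed period added to their timestamps so that least-recently-used removal reaches them last, and the bias is subtracted once the copy completes. The bias value and names are illustrative.

      PREFERENCE_BIAS = 3600.0   # seconds added to protect uncopied data sets (illustrative)

      class PeerCopyAwareCache:
          """Least-recently-used removal by timestamp, except that data sets not
          yet copied to the peer storage device are temporarily made to look newer."""

          def __init__(self):
              self.timestamps = {}   # data set name -> effective timestamp
              self.uncopied = set()

          def touch(self, name, now, copied_to_peer):
              self.timestamps[name] = now
              if not copied_to_peer:
                  self.timestamps[name] += PREFERENCE_BIAS   # flag as needing a copy
                  self.uncopied.add(name)

          def mark_copied(self, name):
              if name in self.uncopied:
                  self.timestamps[name] -= PREFERENCE_BIAS   # restore the normal timestamp
                  self.uncopied.discard(name)

          def remove_least_recently_used(self):
              victim = min(self.timestamps, key=self.timestamps.get)
              del self.timestamps[victim]
              self.uncopied.discard(victim)
              return victim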
  • Patent number: 6745291
    Abstract: An N-way set associative data cache system comprises a cache controller adapted to receive requests for data and instructions. The cache controller includes a cache buffer register for storing the requests for a line of information in the form of a page tag address and line address. The line address is stored in the buffer register as a pointer into a directory associated with each of the N ways for determining where the line being accessed resides. If the page tag address matches one of the page entry addresses in one of the directories, there is a hit; if not, the line of data must be fetched by a cache fill request. The line of data is retrieved from an L2 cache or main memory and written into the line of one of the ways at the line address being accessed. A novel LRU ordering tree or look-up table is provided for concurrently determining which one of the N lines in the cache is to be replaced with the new line of data in the event of a miss.
    Type: Grant
    Filed: August 8, 2000
    Date of Patent: June 1, 2004
    Assignee: Unisys Corporation
    Inventor: Kenneth Lindsay York
  • Patent number: 6742084
    Abstract: A caching method for selecting variable-size data blocks for replacement or removal from a cache includes determining the size and the unreferenced time interval of each block in the cache. The size of a block is the amount of cache space taken up by the block. The unreferenced time interval of a block is the time that has elapsed since the block was last accessed, and may be determined using a least recently used (LRU) algorithm. The recall probability of each block in the cache is then determined. The recall probability of a block is a function of its unreferenced time interval and possibly size and other auxiliary parameters. The caching method then determines a quality factor (q) for each block. The quality factor (q) of a block is a function of its recall probability and size. The caching method concludes with removing from the cache the block with the lowest (q).
    Type: Grant
    Filed: May 4, 2000
    Date of Patent: May 25, 2004
    Assignee: Storage Technology Corporation
    Inventors: Richard J. Defouw, Alan Sutton, Ronald W. Korngiebel
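    A worked sketch of the quality-factor selection described for patent 6742084 above. The abstract says only that the recall probability depends on the unreferenced time interval and that q depends on recall probability and size; the exponential decay and the ratio q = recall probability / size used below are assumed stand-ins, not the patented functions.

      def recall_probability(unreferenced_seconds, half_life=3600.0):
          # Illustrative stand-in: probability of re-access decays with idle time.
          return 0.5 ** (unreferenced_seconds / half_life)

      def quality_factor(block_size_bytes, unreferenced_seconds):
          # One plausible instantiation: expected value recovered per byte of cache.
          return recall_probability(unreferenced_seconds) / block_size_bytes

      def select_block_to_remove(blocks):
          """blocks: dict name -> (size_bytes, unreferenced_seconds).
          Returns the block with the lowest quality factor q."""
          return min(blocks, key=lambda b: quality_factor(*blocks[b]))

      # Example: a large, long-idle block is removed before a small, recently used one.
      candidates = {"a": (64 * 1024, 30.0), "b": (4 * 1024 * 1024, 7200.0)}
      assert select_block_to_remove(candidates) == "b"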
  • Patent number: 6742148
    Abstract: A system for testing a memory page of a computer while an operating system is active. The system includes a hook function and a pattern generator. The hook function has software instructions that takes the place of a memory allocation/release scheme of the operating system. The system stores a test pattern generated by the pattern generator in the memory page upon receiving a request to release the memory page. Upon receiving a request to allocate the memory page, the system verifies the test pattern is correct to ensure the memory page is not defective. If the test pattern is incorrect, the defective memory page is removed from service.
    Type: Grant
    Filed: March 5, 2001
    Date of Patent: May 25, 2004
    Assignee: PC-Doctor Inc.
    Inventor: Aki Korhonen
  • Patent number: 6738866
    Abstract: A data buffer memory management method and system is provided for increasing the effectiveness and efficiency of buffer replacement selection. Hierarchical Victim Selection (HVS) identifies hot buffer pages, warm buffer pages and cold buffer pages through weights, reference counts, reassignment of levels and ageing of levels, and then explicitly avoids victimizing hot pages while favoring cold pages in the hierarchy. Unlike LRU, pages in the system are identified in both a static manner (through weights) and a dynamic manner (through reference counts, reassignment of levels, and ageing of levels). HVS provides higher concurrency by allowing pages to be victimized from different levels simultaneously. Unlike other approaches, Hierarchical Victim Selection provides the infrastructure for page cleaners to ensure that the next candidate victims will be clean pages by segregating dirty pages in hierarchical levels having multiple separate lists so that the dirty pages may be cleaned asynchronously.
    Type: Grant
    Filed: May 8, 2001
    Date of Patent: May 18, 2004
    Assignee: International Business Machines Corporation
    Inventor: Edison L. Ting
  • Publication number: 20040078526
    Abstract: A system for approximating a least recently used (LRU) algorithm for memory replacement in a cache memory. In one system example, the cache memory comprises memory blocks allocated into sets of N memory blocks. The N memory blocks are allocated as M super-ways of N/M memory blocks where N is greater than M. An index identifies the set of N memory blocks. A super-way hit/replacement tracking state machine tracks hits and replacements to each super-way and maintains state corresponding to an order of hits and replacements for each super-way where the super-ways are ordered from the MRU to the LRU. Storage for the state bits is associated with each index entry where the state bits include code bits associated with a memory block to be replaced within a LRU super-way. LRU logic is coupled to the super-way hit/replacement tracking state machine to select an LRU super-way as a function of the super-way hit and replacement history.
    Type: Application
    Filed: October 21, 2002
    Publication date: April 22, 2004
    Applicant: Silicon Graphics, Inc.
    Inventor: David X. Zhang
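    A software sketch of the super-way LRU approximation described for publication 20040078526 above: exact recency is tracked only over the M super-ways, and the victim comes from the least recently used super-way. The random choice of a block within that super-way stands in for the code bits the abstract mentions, and the list-based ordering stands in for the state machine.

      import random

      class SuperWayLRU:
          """Approximates LRU over N ways by ordering M super-ways of N/M ways
          each from least to most recently hit or replaced."""

          def __init__(self, num_ways, num_super_ways):
              assert num_ways % num_super_ways == 0 and num_ways > num_super_ways
              self.ways_per_super = num_ways // num_super_ways
              # Front of the list is the LRU super-way, back is the MRU super-way.
              self.order = list(range(num_super_ways))

          def _touch(self, super_way):
              self.order.remove(super_way)
              self.order.append(super_way)       # most recently used goes to the back

          def record_hit(self, way):
              self._touch(way // self.ways_per_super)

          def choose_victim(self):
              lru_super = self.order[0]
              way_in_super = random.randrange(self.ways_per_super)
              victim = lru_super * self.ways_per_super + way_in_super
              self._touch(lru_super)             # a replacement also counts as a use
              return victim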
  • Patent number: 6715039
    Abstract: Techniques and criteria are used in connection with promoting a slot within a cache in the form of a replacement queue. A cache slot may be promoted based on an inequality that considers the following criteria: probability of losing a cache hit, gaining a cache hit, and the price or cost associated with promoting a slot. The foregoing criteria may be used in accordance with a predetermined promotion policy when the replacement queue is in a locked state and an unlocked state, or only when the replacement queue is in a locked state. Different costs may be associated with the state of the replacement queue as locked or unlocked as the replacement queue may be locked in connection with operations that are performed on the replacement queue. The cost associated with a locked replacement queue may be different than the cost associated with an unlocked replacement queue. Different thresholds and values associated with the foregoing criteria may be specified as dynamic system parameters.
    Type: Grant
    Filed: September 12, 2001
    Date of Patent: March 30, 2004
    Assignee: EMC Corporation
    Inventors: Orit Levin Michael, Ron Arnan, Amnon Naamad, Sachin More
  • Publication number: 20040044861
    Abstract: In one embodiment, a method is provided. The method of this embodiment may include determining whether requested data is stored in a memory. If the requested data is not stored in the memory, the method may include determining whether a plurality of requests to access the requested data have occurred during a predetermined number of most recent data accesses. If the plurality of requests to access the requested data have occurred during the predetermined number of most recent data accesses, the method may also include storing the requested data in the memory. Of course, many variations, modifications, and alternatives are possible without departing from this embodiment.
    Type: Application
    Filed: August 30, 2002
    Publication date: March 4, 2004
    Inventors: Joseph S. Cavallo, Stephen J. Ippolito
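    A small sketch of the admission test described for publication 20040044861 above: data is stored in the memory only if it has already been requested within the window of most recent accesses. The window length, the deque-based history, and the `fetch` callable are illustrative assumptions.

      from collections import deque

      class RecencyFilteredCache:
          """Admits data only when a plurality of requests for it have occurred
          during the predetermined number of most recent data accesses."""

          def __init__(self, window=1024):
              self.recent = deque(maxlen=window)   # keys of the most recent accesses
              self.store = {}                      # the cached data itself

          def get(self, key, fetch):
              if key in self.store:
                  self.recent.append(key)
                  return self.store[key]
              data = fetch(key)
              # Admit only if this key already appeared in the recent-access window.
              if key in self.recent:
                  self.store[key] = data
              self.recent.append(key)
              return data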
  • Patent number: 6694393
    Abstract: A program file or other type of information file for use in an embedded system is partially compressed in a host device and subsequently transferred to a non-volatile memory of the embedded system. The compressed portion of the file may include non-relocation data such as data sections, text sections, symbol tables, etc. The uncompressed portion includes relocation data such as section headers or a file header which identify one or more destination locations for corresponding parts of the file in a random access memory of the embedded system. A loading program running on a processor of the embedded system determines a destination location for at least part of the file within the embedded system without decompressing the compressed portion of the file. The invention advantageously eliminates the need for multiple file copy operations in transferring data between non-volatile memory and random access memory in an embedded system.
    Type: Grant
    Filed: June 30, 2000
    Date of Patent: February 17, 2004
    Assignee: Lucent Technologies Inc.
    Inventor: Edward L. Sutter, Jr.
  • Patent number: 6694408
    Abstract: The invention provides a system and method for executing a replacement selection algorithm embedded in each associativity of a cache memory architecture. Each associativity in a cache has an internal control logic that governs the process for replacing a cache line when a certain condition occurs, such as a presence of a TagHit. A designated set of control signals is used in an associativity control logic for corresponding with an external control logic. An associativity control logic within an associativity provides an internal capability to determine whether a TagHit condition occurs and to volunteer the associativity for replacement. The preferred replacement algorithm is implemented using an approximation to Not the Most Recently Used Associativity (NMRU).
    Type: Grant
    Filed: May 1, 2000
    Date of Patent: February 17, 2004
    Inventors: Javier Villagomez, Mayank Gupta, Edward T. Pak
  • Publication number: 20040015660
    Abstract: A method and structure is disclosed for constraining cache line replacement that processes a cache miss in a computer system. The invention contains a K-way set associative cache that selects lines in the cache for replacement. The invention constrains the selecting process so that only a predetermined subset of each set of cache lines is selected for replacement. The subset has at least a single cache line and the set size is at least two cache lines. The invention may further select between at least two cache lines based upon which of the cache lines was accessed least recently. A selective enablement of the constraining process is based on a free space memory condition of a memory associated with the cache memory. The invention may further constrain cache line replacement based upon whether the cache miss is from a non-local node in a nonuniform-memory-access system. The invention may also process cache writes so that a predetermined subset of each set is known to be in an unmodified state.
    Type: Application
    Filed: July 22, 2002
    Publication date: January 22, 2004
    Inventors: Caroline Benveniste, Peter Franaszek, John T. Robinson, Charles Schulz
  • Patent number: 6681391
    Abstract: A method and system for installing software on a computer generates an installation order that ensures that a component required for the functioning of another component is already installed. Furthermore, it makes possible generating good installation orders to allow related components, e.g., in a software suite, to be installed close together, thus reducing disk swapping. The method and system take into account the existing configuration on a computer and allow removal of components along with dynamic reconfiguration of a computing system in response to a user's choice of an application program to launch. In accordance with the invention, preferably a developer includes information about the component's relationship with other components, e.g., a specific requirement for a preinstalled component or a requirement that a particular component not be present, thus requiring its removal.
    Type: Grant
    Filed: June 21, 2000
    Date of Patent: January 20, 2004
    Assignee: Microsoft Corporation
    Inventors: Phillip J. Marino, David V. Winkler, Crista Johnson, William M. Nelson
  • Patent number: 6681295
    Abstract: A computer system has a set-associative, multi-way cache system, in which at least one way is designated as a fast lane, and remaining way(s) are designated slow lanes. Any data that needs to be loaded into cache, but is not likely to be needed again in the future, preferably is loaded into the fast lane. Data loaded into the fast lane is earmarked for immediate replacement. Data loaded into the slow lanes preferably is data that may be needed again in the near future. Slow data is kept in cache to permit it to be reused if necessary. The high-performance mechanism of data access in a modern microprocessor is the prefetch; data is moved, with a special prefetch instruction, into cache prior to its intended use. The prefetch instruction requires fewer machine resources than carrying out the same intent with an ordinary load instruction. So, the slow-lane, fast-lane decision is accomplished by having a multiplicity of prefetch instructions.
    Type: Grant
    Filed: August 31, 2000
    Date of Patent: January 20, 2004
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen C. Root, Richard E. Kessler, David H. Asher, Brian Lilly
  • Patent number: 6681298
    Abstract: The present invention is directed towards a cache management system for a set top box that improves the loading speed of hypertext markup language (HTML) documents that are provided by web servers. The cache management system includes a set top box with a processor and memory that includes cache, where a plurality of HTML documents is stored in the cache. A cache manager manages the cache and calculates a removal factor for each of the HTML documents. The cache manager removes at least one of the HTML documents based on its removal factor until sufficient room is available for an additional HTML document. Additionally, the cache manager keeps the maximum number of relevant web pages in cache to maximize loading speed.
    Type: Grant
    Filed: July 12, 2000
    Date of Patent: January 20, 2004
    Assignee: PowerTV, Inc.
    Inventors: Victor Tso, Brian Knittel
  • Patent number: 6681297
    Abstract: A digital system is provided with several processors (1302), a shared level two (L2) cache (1300) having several segments per entry with associated tags, and a level three (L3) physical memory. Each tag entry includes a task-ID qualifier field and a resource ID qualifier field. Data is loaded into various lines in the cache in response to cache access requests when a given cache access request misses. After loading data into the cache in response to a miss, a tag associated with the data line is set to a valid state. In addition to setting a tag to a valid state, qualifier values are stored in qualifier fields in the tag. Each qualifier value specifies a usage characteristic of data stored in an associated data line of the cache, such as a task ID. A miss counter (532) counts each miss and a monitoring task (1311) determines a miss rate for memory requests. If a selected miss rate threshold value is exceeded, the digital system is reconfigured in order to reduce the miss rate.
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: January 20, 2004
    Assignee: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Dominique D'Inverno, Serge Lasserre
  • Patent number: 6675262
    Abstract: A cache coherent distributed shared memory multi-processor computer system is provided with a memory controller which includes a recall unit. The recall unit allows selective forced write-backs of dirty cache lines to the home memory. After a request is posted in the recall unit, a recall (“flush”) command is issued which forces the owner cache to write-back the dirty cache line to be flushed. The memory controller will inform the recall unit as each recall operation is completed. The recall unit operation will be interrupted when all flush requests are completed.
    Type: Grant
    Filed: June 8, 2001
    Date of Patent: January 6, 2004
    Assignee: Hewlett-Packard Company, L.P.
    Inventors: Kenneth Mark Wilson, Fong Pong, Lance Russell, Tung Nguyen, Lu Xu
  • Publication number: 20040003177
    Abstract: One embodiment of the present invention provides a system for synchronizing a cache in a computer system through a peer-to-peer refreshing operation. During operation, the system determines the age of an entry in the cache. If the age of the entry exceeds a life span for the entry, the system invalidates the entry in the cache. The system subsequently refreshes the entry by retrieving an updated version of the entry from a peer of the computer system, if possible, instead of from a centralized source for the entry.
    Type: Application
    Filed: July 1, 2002
    Publication date: January 1, 2004
    Inventor: Max K. Goff