Combined Replacement Modes Patents (Class 711/134)
-
Patent number: 7222220
Abstract: A multiprocessor computer system is configured to selectively transmit address transactions through an address network using either a broadcast mode or a point-to-point mode transparent to the active devices that initiate the transactions. Depending on the mode of transmission selected, either a directory-based coherency protocol or a broadcast snooping coherency protocol is implemented to maintain coherency within the system. A computing node is formed by a group of clients which share a common address and data network. The address network is configured to determine whether a particular transaction is to be conveyed in broadcast mode or point-to-point mode. In one embodiment, the address network includes a mode table with entries which are configurable to indicate transmission modes corresponding to different regions of the address space within the node.
Type: Grant
Filed: June 23, 2003
Date of Patent: May 22, 2007
Assignee: Sun Microsystems, Inc.
Inventors: Robert E. Cypher, Ashok Singhal
-
Patent number: 7194587
Abstract: A microprocessor and a related compiler support a local cache block flush instruction in which an execution unit of a processor determines an effective address. The processor forces all pending references to a cache block corresponding to the determined effective address to commit to the cache subsystem. If the referenced cache line is modified in the local cache (the cache subsystem corresponding to the processor executing the instruction), it is then written back to main memory. If the referenced block is valid in the local cache it is invalidated, but only in the local cache. If the referenced block is not valid in the local cache, there is no invalidation. Remote processors receiving a local cache block flush instruction from another processor via the system ignore the instruction.
Type: Grant
Filed: April 24, 2003
Date of Patent: March 20, 2007
Assignee: International Business Machines Corp.
Inventors: John David McCalpin, Balaram Sinharoy, Dereck Edward Williams, Kenneth Lee Wright
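The local-flush semantics described above can be sketched in a few lines. This is a hedged illustration only: the function name `local_flush`, the dict-based cache, and the `(data, dirty)` tuple layout are my own choices, not the patent's implementation.

```python
def local_flush(local_cache, addr, memory):
    """Write back the block if modified, then invalidate it, but only
    in the issuing processor's own cache (remote caches are untouched)."""
    if addr not in local_cache:
        return False                       # not valid locally: no action
    data, dirty = local_cache.pop(addr)    # invalidate in the local cache only
    if dirty:
        memory[addr] = data                # modified: write back to memory first
    return True
```

A remote processor's handler for the same instruction would simply be a no-op, matching the last sentence of the abstract.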
-
Patent number: 7177986
Abstract: A cache is configured to receive direct access transactions. Each direct access transaction explicitly specifies a cache storage entry to be accessed in response to the transaction. The cache may access the cache storage entry (bypassing the normal tag comparisons and hit determination used for memory transactions) and either read the data from the cache storage entry (for read transactions) or write data from the transaction to the cache storage entry (for write transactions). The direct access transactions may, for example, be used to perform testing of the cache memory. As another example, direct access transactions may be used to perform a reset of the cache (by writing known data to each cache entry). In embodiments employing error checking and correction (ECC) mechanisms, direct access write transactions could also be used to recover from uncorrectable ECC errors, by overwriting the failing data to eliminate the errant data.
Type: Grant
Filed: December 30, 2003
Date of Patent: February 13, 2007
Assignee: Broadcom Corporation
Inventors: Joseph B. Rowlands, Michael P. Dickman
-
Patent number: 7177987
Abstract: Systems and methods are disclosed for providing responses for different cache coherency protocols. One embodiment may comprise a system that includes a first node employing a first cache coherency protocol. A detector associated with the first node detects a condition based on responses provided by the first node to requests provided according to a second cache coherency protocol, the second cache coherency protocol being different from the first cache coherency protocol. The first node's response to a given one of the requests varies based on the condition detected by the detector.
Type: Grant
Filed: January 20, 2004
Date of Patent: February 13, 2007
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen R. Van Doren, Gregory Edward Tierney, Simon C. Steely, Jr.
-
Patent number: 7167952
Abstract: A method of writing to cache includes initiating a write operation to a cache. In a first operational mode, the presence or absence of a write miss is detected; if a write miss is absent, data is written to the cache, and if a write miss is present, the data is retrieved from a further memory and written to the cache based on least recently used logic. In a second operational mode, the cache is placed in a memory mode and the data is written to the cache based on an address regardless of whether a write miss is present or absent.
Type: Grant
Filed: September 17, 2003
Date of Patent: January 23, 2007
Assignee: International Business Machines Corporation
Inventors: Krishna M. Desai, Anil S. Keste, Tin-chee Lo, Thomas D. Needham, Yuk-Ming Ng, Jeffrey M. Turner
-
Patent number: 7155645
Abstract: A system for testing a memory page of a computer while an operating system is active. The system includes a hook function and a pattern generator. The hook function has software instructions that take the place of a memory allocation/release scheme of the operating system. The system stores a test pattern generated by the pattern generator in the memory page upon receiving a request to release the memory page. Upon receiving a request to allocate the memory page, the system verifies the test pattern is correct to ensure the memory page is not defective. If the test pattern is incorrect, the defective memory page is removed from service.
Type: Grant
Filed: May 24, 2004
Date of Patent: December 26, 2006
Assignee: PC-Doctor, Inc.
Inventor: Aki Korhonen
-
Patent number: 7133971
Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
Type: Grant
Filed: November 21, 2003
Date of Patent: November 7, 2006
Assignee: International Business Machines Corporation
Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
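The selectable LFU/MFU idea can be modeled for a single cache set with per-line access counts as the "state information". This is a minimal sketch under my own assumptions (class name `FreqSet`, dict-of-counts state); the patent's actual state encoding is not specified here.

```python
class FreqSet:
    """One cache set with per-line frequency state and a selectable
    LFU-or-MFU victim criterion on a miss."""

    def __init__(self, ways):
        self.lines = {}          # tag -> access count (at most `ways` entries)
        self.ways = ways

    def access(self, tag, use_mfu=False):
        if tag in self.lines:                # hit: update the line's state
            self.lines[tag] += 1
            return True
        if len(self.lines) >= self.ways:     # miss in a full set: pick a victim
            pick = max if use_mfu else min   # MFU or LFU replacement criterion
            victim = pick(self.lines, key=self.lines.get)
            del self.lines[victim]
        self.lines[tag] = 1                  # install the new line
        return False
```

With `use_mfu=True` the same state drives the opposite choice, which is the point of the claimed selectability.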
-
Patent number: 7120751
Abstract: A streaming media cache comprises a mass storage device configured to store streaming media data, a cache memory coupled to the mass storage device, the cache memory configured to store a subset of the streaming media data in a plurality of locations, and configured to provide the subset of the streaming media data to the processor, and a processor coupled to the mass storage device and to the cache memory, the processor configured to use a first retirement algorithm to determine a first location within the cache memory that is to be retired, configured to copy data from the mass storage device to the first location within the cache memory, configured to monitor a cache memory age, wherein the cache memory age is determined in response to an age of data in at least a second location within the cache memory, configured to use a second retirement algorithm to determine a third location within the cache memory that is to be retired when the cache memory age falls below a threshold age, and configured to copy dat…
Type: Grant
Filed: August 9, 2002
Date of Patent: October 10, 2006
Assignee: Networks Appliance, Inc.
Inventors: Yasahiro Endo, Konstantinos Roussos
-
Patent number: 7117322
Abstract: Provided are a method, system, and program for managing retention of stored objects. A modification request is received with respect to a stored object. A determination is made as to whether a retention protection mechanism is set and a storage policy associated with the stored object is processed to determine whether the stored object has expired according to the storage policy in response to determining that the retention protection mechanism is set. The modification request is allowed to proceed in response to determining that the stored object has expired.
Type: Grant
Filed: September 8, 2003
Date of Patent: October 3, 2006
Assignee: International Business Machines Corporation
Inventors: Avishai Haim Hochberg, Toby Lyn Marek, David Maxwell Cannon, Howard Newton Martin, Donald Paul Warren, Jr., Mark Alan Haye
-
Patent number: 7107416
Abstract: Provided are a method, system, and program for receiving a request to remove a record. A determination is made as to whether a state associated with the record includes at least one hold state and whether the state associated with the record includes at least a retention period that has not expired. The request to remove the record is denied in response to determining that the state associated with the record includes at least one of at least one hold state and one retention period that has not expired.
Type: Grant
Filed: December 15, 2003
Date of Patent: September 12, 2006
Assignee: International Business Machines Corporation
Inventors: Alan L. Stuart, Toby Lyn Marek
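The removal check described in the abstract reduces to a short predicate: deny removal when any hold is set or a retention period is still running. A hedged sketch; the field names `holds` and `retention_expires` are my own, not the patent's.

```python
import datetime

def may_remove(record, now):
    """Return True only when no hold state is set and any retention
    period associated with the record has already expired."""
    if record.get('holds'):                      # any hold state blocks removal
        return False
    expires = record.get('retention_expires')
    if expires is not None and expires > now:    # retention not yet expired
        return False
    return True
```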
-
Patent number: 7103723
Abstract: An arrangement is provided for improving the performance of a computing system, specifically for improving the efficiency of code cache management for a system running platform-independent programs with a small memory footprint. The code cache of such a system is continuously monitored during runtime. When a condition warrants performing code cache management, the priority-based code cache management is performed based on selective code garbage collection. The code garbage collection is conducted selectively for dead methods in the code cache based on probabilities of the dead methods being reused.
Type: Grant
Filed: February 25, 2003
Date of Patent: September 5, 2006
Assignee: Intel Corporation
Inventor: Michal Cierniak
-
Patent number: 7103722
Abstract: A method and structure are disclosed for constraining cache line replacement that processes a cache miss in a computer system. The invention contains a K-way set associative cache that selects lines in the cache for replacement. The invention constrains the selecting process so that only a predetermined subset of each set of cache lines is selected for replacement. The subset has at least a single cache line and the set size is at least two cache lines. The invention may further select between at least two cache lines based upon which of the cache lines was accessed least recently. A selective enablement of the constraining process is based on a free space memory condition of a memory associated with the cache memory. The invention may further constrain cache line replacement based upon whether the cache miss is from a non-local node in a nonuniform-memory-access system. The invention may also process cache writes so that a predetermined subset of each set is known to be in an unmodified state.
Type: Grant
Filed: July 22, 2002
Date of Patent: September 5, 2006
Assignee: International Business Machines Corporation
Inventors: Caroline Benveniste, Peter Franaszek, John T. Robinson, Charles Schulz
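A toy model of the constraint: in one K-way set, victims are drawn only from a fixed subset of ways, least recently used first. Names (`ConstrainedSet`, `victim_ways`) and the timestamp-based LRU are my own illustrative choices; in real hardware the ineligible ways would hold previously installed lines that this sketch does not model.

```python
class ConstrainedSet:
    """K-way set where only `victim_ways` are eligible for replacement."""

    def __init__(self, k, victim_ways):
        self.tags = [None] * k            # tag stored in each way
        self.stamp = [0] * k              # last-use time per way
        self.victim_ways = victim_ways    # predetermined replaceable subset
        self.t = 0

    def access(self, tag):
        self.t += 1
        if tag in self.tags:              # hit: refresh that way's timestamp
            self.stamp[self.tags.index(tag)] = self.t
            return True
        # miss: replace the least recently used way among the eligible subset
        way = min(self.victim_ways, key=lambda w: self.stamp[w])
        self.tags[way] = tag
        self.stamp[way] = self.t
        return False
```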
-
Patent number: 7103721
Abstract: An improved method and apparatus for selecting invalid members as victims in a least recently used cache system. An invalid cache line selection unit has an input connected to a cache directory and an output connected to a most recently used update logic. In response to a miss in the cache, an invalid cache line is identified from information in the cache directory by the invalid cache line selection unit. This invalid cache line is updated to be the next victim by the most recently used update logic, rather than attempting to override the current victim selection by a least recently used victim selection logic. The next victim also may be selected in response to a cache hit in which information from the cache directory also is read.
Type: Grant
Filed: April 28, 2003
Date of Patent: September 5, 2006
Assignee: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Patent number: 7099998
Abstract: A method for reducing an importance level of a line in a memory of a cache. An instruction is provided to the cache, the instruction indicating that the line is a candidate for replacement. The importance level of the line may then be reduced based on the instruction. The method may increase cache hit rate and, hence, microprocessor performance.
Type: Grant
Filed: March 31, 2000
Date of Patent: August 29, 2006
Assignee: Intel Corporation
Inventor: Ariel Berkovits
-
Patent number: 7096321
Abstract: A method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging a cache memory included within a system into a circular buffer; maintaining a pointer that rotates around the circular buffer; maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since a last time that the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over the page; and dynamically controlling a distribution of a number of pages in the cache memory that are marked with bit 0 in response to a variable workload in order to increase a hit ratio of the cache memory.
Type: Grant
Filed: October 21, 2003
Date of Patent: August 22, 2006
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
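The circular buffer, rotating pointer, and per-page bit described here are the classic CLOCK structure. The sketch below shows that base mechanism only; the patent's distinctive contribution, dynamically controlling the distribution of bit-0 pages, is omitted, and all names are my own.

```python
class Clock:
    """CLOCK-style page cache: pages with bit 1 get a second chance as the
    hand sweeps; the first bit-0 page found is replaced."""

    def __init__(self, size):
        self.pages = [None] * size
        self.bit = [0] * size
        self.hand = 0

    def access(self, page):
        if page in self.pages:                 # hit: mark page recently used
            self.bit[self.pages.index(page)] = 1
            return True
        while self.bit[self.hand] == 1:        # sweep: clear bits until a 0
            self.bit[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page           # replace the bit-0 page
        self.bit[self.hand] = 0
        self.hand = (self.hand + 1) % len(self.pages)
        return False
```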
-
Patent number: 7085896
Abstract: An apparatus for implementing a least-recently used (LRU) mechanism in a multi-port cache memory includes an LRU array and a shift decoder. The LRU array has multiple entries. The shift decoder includes a shifting means for shifting the entries within the LRU array. The shifting means shifts a current one of the entries and adjacent entries once, and loads a new address, in response to a single cache hit in the current one of the entries. The shifting means shifts a current one of the entries and adjacent entries once, and loads an address of only one of multiple requesters into the most-recently used (MRU) entry, in response to multiple cache hits in the current one of the entries. The shifting means shifts all subsequent entries, including the current entries, n times, and loads addresses of all requesters that contributed to the multiple cache hits in consecutive entries into the MRU entry and subsequent entries, in response to multiple cache hits in consecutive entries.
Type: Grant
Filed: April 30, 2003
Date of Patent: August 1, 2006
Assignee: International Business Machines Corporation
Inventors: Andrew James Bianchi, Jose Angel Paredes
-
Patent number: 7085888
Abstract: A cache class in a software-administered cache of a multiprocessor is assigned cache space that is localized to a single region of a memory and is contiguous. Synchronization and LRU operations can step sequentially through the given region, removing the need for SLB searches or the penalty for a miss, while other threads remain random access. The threads that manage each virtual memory area can then be attached to specific processors, maintaining physical locality as well.
Type: Grant
Filed: October 9, 2003
Date of Patent: August 1, 2006
Assignee: International Business Machines Corporation
Inventor: Zachary Merlynn Loafman
-
Patent number: 7073027
Abstract: Controlling a cache of distributed data is provided by dynamically determining whether and/or where to cache the distributed data based on characteristics of the data, characteristics of the source of the data and characteristics of the cache so as to provide an indication of whether to cache the data. The data may be selectively cached based on the indication.
Type: Grant
Filed: July 11, 2003
Date of Patent: July 4, 2006
Assignee: International Business Machines Corporation
Inventors: Gennaro A. Cuomo, Brian K. Martin
-
Patent number: 7073030
Abstract: A method and apparatus for increasing the processing speed of processors and increasing the data hit ratio is disclosed herein. The method increases the processing speed by providing a non-L1 instruction caching that uses prefetch to increase the hit ratio. Cache lines in a cache set are buffered, wherein the cache lines have a parameter indicating data selection characteristics associated with each buffered cache line. Which buffered cache lines to cast out and/or invalidate is then determined based upon the parameter indicating data selection characteristics.
Type: Grant
Filed: May 22, 2002
Date of Patent: July 4, 2006
Assignee: International Business Machines Corporation
Inventors: Michael Joseph Azevedo, Carol Spanel, Andrew Dale Walls
-
Patent number: 7069390
Abstract: The present invention provides for a plurality of partitioned ways of an associative cache. A pseudo-least recently used binary tree is provided, as is a way partition binary tree, and signals are derived from the way partition binary tree as a function of a mapped partition. Signals from the way partition binary tree and the pseudo-least recently used binary tree are combined. A cache line replacement signal is employable to select one way of a partition as a function of the pseudo-least recently used binary tree and the signals derived from the way partition binary tree.
Type: Grant
Filed: September 4, 2003
Date of Patent: June 27, 2006
Assignee: International Business Machines Corporation
Inventors: Wen-Tzer Thomas Chen, Peichun Peter Liu, Kevin C. Stelzer
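For background, the pseudo-LRU binary tree being combined here works as follows for a 4-way set: three decision bits form a tree, each bit pointing toward the less recently used side. This sketch shows only the base tree (my own class and bit conventions); the patent's partition-tree combination, which masks victims down to one partition's ways, is not modeled.

```python
class PLRU4:
    """4-way tree pseudo-LRU. Bit convention: 0 = victim on the left side.
    b0 is the root; b1 covers ways 0/1; b2 covers ways 2/3."""

    def __init__(self):
        self.b0 = self.b1 = self.b2 = 0

    def touch(self, way):
        # point each bit on the accessed way's path away from it
        if way < 2:
            self.b0, self.b1 = 1, 1 - way
        else:
            self.b0, self.b2 = 0, 3 - way

    def victim(self):
        if self.b0 == 0:
            return self.b1        # left pair: way 0 or 1
        return 2 + self.b2        # right pair: way 2 or 3
```

Only three bits per set are needed, versus the full ordering true LRU would require, which is why the tree form is common in hardware.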
-
Patent number: 7062607
Abstract: Power conservation may be achieved in a front end system by disabling a segment builder unless program flow indicates a sufficient likelihood of segment reuse. Power normally spent in collecting decoded instructions, detecting segment beginning and end conditions and storing instruction segments is conserved by disabling those circuits that perform these functions. An access filter may maintain a running count of the number of times instructions are read from an instruction cache and may enable the segment construction and storage circuits if the running count meets or exceeds a predetermined threshold.
Type: Grant
Filed: September 24, 2001
Date of Patent: June 13, 2006
Assignee: Intel Corporation
Inventors: Baruch Solomon, Ronny Ronen
-
Patent number: 7058766
Abstract: A method for adaptively managing pages in a cache memory with a variable workload comprises defining a cache memory; organizing the cache into disjoint lists of pages, wherein the lists comprise lists T1, T2, B1, and B2; maintaining a bit that is set to either “S” or “L” for every page in the cache, which indicates whether the page has short-term utility or long-term utility; ensuring that each member page of T1 is marked either as “S” or “L”, wherein each member page of T1 and B1 is marked as “S” and each member page of T2 and B2 is marked as “L”; and maintaining a temporal locality window parameter such that pages that are re-requested within a window are of short-term utility and pages that are re-requested outside the window are of long-term utility, wherein the cache comprises pages that are members of any of lists T1 and T2.
Type: Grant
Filed: October 21, 2003
Date of Patent: June 6, 2006
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
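A highly simplified sketch of this list structure: T1 holds "S" (short-term) pages, T2 holds "L" (long-term) pages promoted on re-request, and a ghost list remembers pages evicted from T1. The temporal-locality window tuning and the second ghost list (B2) are omitted, and all names below are my own, so treat this as illustration only.

```python
from collections import OrderedDict

class TwoListCache:
    """Pages enter as short-term ("S", list t1); a re-request while cached
    promotes them to long-term ("L", list t2). Eviction prefers "S" pages."""

    def __init__(self, size):
        self.size = size
        self.t1 = OrderedDict()   # "S": cached, seen once
        self.t2 = OrderedDict()   # "L": cached, re-requested
        self.b1 = OrderedDict()   # ghosts: recently evicted from t1

    def access(self, page):
        if page in self.t2:                       # long-term hit: refresh
            self.t2.move_to_end(page)
            return True
        if page in self.t1:                       # re-request: promote to "L"
            del self.t1[page]
            self.t2[page] = True
            return True
        if len(self.t1) + len(self.t2) >= self.size:
            if self.t1:                           # evict an "S" page first
                old, _ = self.t1.popitem(last=False)
                self.b1[old] = True               # remember it as a ghost
            else:
                self.t2.popitem(last=False)
        self.t1[page] = True                      # new page starts as "S"
        return False
```

In the full scheme the ghost lists feed back into sizing the window between short-term and long-term pages; this sketch keeps the partition fixed.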
-
Patent number: 7055002
Abstract: A method of reducing errors in a cache memory of a computer system (e.g., an L2 cache) by periodically issuing a series of purge commands to the L2 cache, sequentially flushing cache lines from the L2 cache to an L3 cache in response to the purge commands, and correcting errors (single-bit) in the cache lines as they are flushed to the L3 cache. Purge commands are issued only when the processor cores associated with the L2 cache have an idle cycle available in a store pipe to the cache. The flush rate of the purge commands can be programmably set, and the purge mechanism can be implemented either in software running on the computer system, or in hardware integrated with the L2 cache. In the case of the software, the purge mechanism can be incorporated into the operating system. In the case of hardware, a purge engine can be provided which advantageously utilizes the store pipe that is provided between the L1 and L2 caches.
Type: Grant
Filed: April 25, 2003
Date of Patent: May 30, 2006
Assignee: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Kevin Franklin Reick, Derek Edward Williams
-
Patent number: 7055003
Abstract: A method of reducing errors in a cache memory of a computer system (e.g., an L2 cache) by periodically issuing a series of purge commands to the L2 cache, sequentially flushing cache lines from the L2 cache to an L3 cache in response to the purge commands, and correcting errors (single-bit) in the cache lines as they are flushed to the L3 cache. Purge commands are issued only when the processor cores associated with the L2 cache have an idle cycle available in a store pipe to the cache. The flush rate of the purge commands can be programmably set, and the purge mechanism can be implemented either in software running on the computer system, or in hardware integrated with the L2 cache. In the case of the software, the purge mechanism can be incorporated into the operating system. In the case of hardware, a purge engine can be provided which advantageously utilizes the store pipe that is provided between the L1 and L2 caches.
Type: Grant
Filed: April 25, 2003
Date of Patent: May 30, 2006
Assignee: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Harmony Lynn Helterhoff, Kevin Franklin Reick
-
Patent number: 7055004
Abstract: The present invention provides for a cache-accessing system employing a binary tree with decision nodes. A cache comprising a plurality of sets is provided. A locking or streaming replacement strategy is employed for individual sets of the cache. A replacement management table is also provided. The replacement management table is employable for managing a replacement policy of information associated with the plurality of sets. A pseudo least recently used function is employed to determine the least recently used set of the cache, for such reasons as set replacement. An override signal line is also provided. The override signal is employable to enable an overwrite of a decision node of the binary tree. A value signal is also provided. The value signal is employable to overwrite the decision node of the binary tree.
Type: Grant
Filed: September 4, 2003
Date of Patent: May 30, 2006
Assignee: International Business Machines Corporation
Inventors: Jonathan James DeMent, Ronald Hall, Peichun Peter Liu, Thuong Quang Truong
-
Patent number: 7051161
Abstract: Admission of new objects into a memory such as a web cache is selectively controlled. If an object is not in the cache, but has been requested a specified number of prior occasions (e.g., if the object has been requested at least once before), it is admitted into the cache regardless of size. If the object has not previously been requested the specified number of times, the object is admitted into the cache if the object satisfies a specified size criterion (e.g., if it is smaller than the average size of objects currently stored in the cache). To make room for new objects, other objects are evicted from the cache on, e.g., a Least Recently Used (LRU) basis. The invention could be implemented on existing web caches, on distributed web caches, in client-side web caching, and in contexts unrelated to web object caching.
Type: Grant
Filed: September 17, 2002
Date of Patent: May 23, 2006
Assignee: Nokia Corporation
Inventors: Sudhir Dixit, Tau Wu
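The admission rule sketches neatly: repeat-requested objects are always admitted, while first-timers must be at most the average size of the objects already cached. This is a hedged sketch with my own names (`AdmissionCache`, `seen`); eviction to make room (LRU in the abstract's example) is left out.

```python
class AdmissionCache:
    """Admission control only: decide whether a requested object may
    enter the cache at all."""

    def __init__(self):
        self.seen = set()        # objects requested at least once before
        self.cache = {}          # object -> size

    def request(self, obj, size):
        if obj in self.cache:
            return True                       # already cached
        admit = obj in self.seen              # repeat request: admit any size
        if not admit and self.cache:
            avg = sum(self.cache.values()) / len(self.cache)
            admit = size <= avg               # size criterion for first-timers
        if not admit and not self.cache:
            admit = True                      # empty cache: accept anything
        self.seen.add(obj)
        if admit:
            self.cache[obj] = size
        return admit
```

The effect is that one-hit-wonder large objects cannot flush the cache, but anything with demonstrated reuse gets in.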
-
Patent number: 7039751
Abstract: A plurality of cache addressing functions are stored in main memory. A processor which executes a program selects one of the stored cache addressing functions for use in a caching operation during execution of a program by the processor.
Type: Grant
Filed: June 4, 2004
Date of Patent: May 2, 2006
Assignee: Micron Technology, Inc.
Inventor: Ole Bentz
-
Patent number: 7032078
Abstract: A multiprocessor computer system to selectively transmit address transactions using a broadcast mode or a point-to-point mode. Either a directory-based coherency protocol or a broadcast snooping coherency protocol is implemented to maintain coherency. A node is formed by a group of clients which share a common address and data network. The address network determines whether a transaction is conveyed in broadcast mode or point-to-point mode. The address network includes a table with entries which indicate transmission modes corresponding to different regions of the address space within the node. Upon receiving a coherence request transaction, the address network may access the table to determine the transmission mode which corresponds to the received transaction. Network congestion may be monitored and transmission modes adjusted accordingly. When network utilization is high, the number of transactions which are broadcast may be reduced.
Type: Grant
Filed: May 1, 2002
Date of Patent: April 18, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Robert Cypher, Ashok Singhal
-
Patent number: 7032075
Abstract: In a central processing unit, caching is carried out for an instruction cache tag memory, so that, without making modifications to a conventional instruction cache controller, the number of times of access to the instruction cache tag memory, which consumes a large amount of electric power, is reduced, and low electric power consumption is attained.
Type: Grant
Filed: February 25, 2003
Date of Patent: April 18, 2006
Assignee: Kabushiki Kaisha Toshiba
Inventor: Isao Katayama
-
Patent number: 7028144
Abstract: A method and apparatus for a microprocessor with a cache that has the advantages given by a victim cache without physically having a victim cache is disclosed. In one embodiment, a victim flag may be associated with each way in a set. At eviction time, the way whose victim flag is true may be evicted. However, the victim flag may be reset to false if a superseding request arrives for the cache line in that way. Another cache line in another way may then have its victim flag made true.
Type: Grant
Filed: October 28, 2003
Date of Patent: April 11, 2006
Assignee: Intel Corporation
Inventors: William G. Auld, Zhong-Ning Cai
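A toy model of the victim-flag idea: one way per set is pre-marked as the next victim, and a request that hits the marked way moves the flag to another way instead of losing the line. The class name, the single-flag simplification, and the round-robin re-pointing below are my own assumptions, not the patent's mechanism.

```python
class VictimFlagSet:
    """One set where `victim` records which way's victim flag is true."""

    def __init__(self, ways):
        self.tags = [None] * ways
        self.victim = 0                    # way currently marked as next victim

    def access(self, tag):
        if tag in self.tags:
            way = self.tags.index(tag)
            if way == self.victim:         # superseding request: re-point flag
                self.victim = (way + 1) % len(self.tags)
            return True
        self.tags[self.victim] = tag       # evict the pre-marked way
        self.victim = (self.victim + 1) % len(self.tags)
        return False
```

The line that would otherwise have gone to a physical victim cache simply stays in place with its flag cleared, which is the claimed advantage.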
-
Patent number: 7024521
Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster that are cached in remote clusters. Techniques are provided for managing eviction of entries in the cache coherence directory by locking memory lines in a home cluster without causing a memory controller to generate probes to processors in the home cluster.
Type: Grant
Filed: April 24, 2003
Date of Patent: April 4, 2006
Assignee: Newisys, Inc.
Inventor: David B. Glasco
-
Patent number: 7020750
Abstract: A hybrid system for updating cache including a first computer system coupled to a database accessible by a second computer system, said second computer system including a cache, a cache update controller for concurrently implementing a user defined cache update policy, including both notification based cache updates and periodic based cache updates, wherein said cache updates enforce data coherency between said database and said cache, and a graphical user interface for selecting between said notification based cache updates and said periodic based cache updates.
Type: Grant
Filed: September 17, 2002
Date of Patent: March 28, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Pirasenna Thiyagaranjan, Krishnendu Chakraborty, Peter D. Stout, Xuesi Dong
-
Patent number: 7000076
Abstract: A random number generator circuit includes a primary circuit configured to generate a value within a first range and a secondary circuit configured to generate a value within a second range. A detector circuit detects whether or not the value from the primary circuit is within the desired output range for the random number generator circuit, and selects either the value from the primary circuit or the value from the secondary circuit in response. The second range is the desired output range and the first range encompasses the second range. In one embodiment, the primary circuit has complex harmonics but may generate values outside the desired range. The secondary circuit may have less complex harmonics, but may generate values only within the desired range. In one implementation, the random number generator circuit is used to generate a replacement way for a cache.
Type: Grant
Filed: June 4, 2004
Date of Patent: February 14, 2006
Assignee: Broadcom Corporation
Inventors: Joseph B. Rowlands, Chun H. Ning
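The primary/secondary selection can be illustrated in software: take the primary value when the detector finds it inside the desired range, otherwise fall back to the secondary source. This is only a behavioral sketch; the function name and the use of a software PRNG for both "circuits" are my own stand-ins for the hardware described.

```python
import random

def replacement_way(primary_bits, n_ways, rng=random):
    """Pick a cache replacement way in [0, n_ways): prefer the wide-range
    primary value, fall back to the in-range secondary value."""
    primary = rng.randrange(1 << primary_bits)   # wider range than needed
    if primary < n_ways:                         # detector: in desired range?
        return primary
    return rng.randrange(n_ways)                 # secondary: always in range
```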
-
Patent number: 6996679
Abstract: A method and apparatus in a data processing system for protecting against displacement of two types of cache lines using a least recently used cache management process. A first member in a class of cache lines is selected as a first substitute victim. The first substitute victim is unselectable by the least recently used cache management process, and the first substitute victim is associated with a selected member in the class of cache lines. A second member in the class of cache lines is selected as a second substitute victim. The second substitute victim is unselectable by the least recently used cache management process, and the second substitute victim is associated with the selected member in the class of cache lines. One of the first or second substitute victims is replaced in response to a selection of the selected member as a victim when a cache miss occurs, wherein the selected member remains in the class of cache lines.
Type: Grant
Filed: April 28, 2003
Date of Patent: February 7, 2006
Assignee: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, William John Starke
-
Patent number: 6993628
Abstract: A method and apparatus in a data processing system for protecting against a displacement of one type of cache line using a least recently used cache management process. A first member in a class of cache lines is selected as a substitute victim. The substitute victim is unselectable by the least-recently-used cache management process, and the substitute victim is associated with a second member in the class of cache lines. The substitute victim is replaced in response to a selection of the second member as a victim in response to a cache miss in the data processing system, wherein the second member remains in the class of cache lines.
Type: Grant
Filed: April 28, 2003
Date of Patent: January 31, 2006
Assignee: International Business Machines Corporation
Inventor: William John Starke
-
Patent number: 6990557
Abstract: A cache memory for use in a multithreaded processor includes a number of set-associative thread caches, with one or more of the thread caches each implementing a thread-based eviction process that reduces the amount of replacement policy storage required in the cache memory. At least a given one of the thread caches in an illustrative embodiment includes a memory array having multiple sets of memory locations, and a directory for storing tags each corresponding to at least a portion of a particular address of one of the memory locations. The directory has multiple entries each storing multiple ones of the tags, such that if there are n sets of memory locations in the memory array, there are n tags associated with each directory entry. The directory is utilized in implementing a set-associative address mapping between access requests and memory locations of the memory array.
Type: Grant
Filed: June 4, 2002
Date of Patent: January 24, 2006
Assignee: Sandbridge Technologies, Inc.
Inventors: Erdem Hokenek, C. John Glossner, Arthur Joseph Hoane, Mayan Moudgill, Shenghong Wang
-
Patent number: 6986001Abstract: A system for approximating a least recently used (LRU) algorithm for memory replacement in a cache memory. In one system example, the cache memory comprises memory blocks allocated into sets of N memory blocks. The N memory blocks are allocated as M super-ways of N/M memory blocks where N is greater than M. An index identifies the set of N memory blocks. A super-way hit/replacement tracking state machine tracks hits and replacements to each super-way and maintains state corresponding to an order of hits and replacements for each super-way where the super-ways are ordered from the MRU to the LRU. Storage for the state bits is associated with each index entry where the state bits include code bits associated with a memory block to be replaced within a LRU super-way. LRU logic is coupled to the super-way hit/replacement tracking state machine to select an LRU super-way as a function of the super-way hit and replacement history.Type: GrantFiled: October 21, 2002Date of Patent: January 10, 2006Assignee: Silicon Graphics, Inc.Inventor: David X. Zhang
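The two-level approximation in the abstract above can be illustrated with a small sketch: N ways are grouped into M super-ways, exact recency order is tracked only among the M super-ways, and a victim is chosen inside the LRU super-way. The round-robin choice within a super-way and all names are assumptions for illustration, not details from the patent.

```python
# Sketch of super-way pseudo-LRU: order super-ways MRU..LRU, pick the
# LRU super-way on replacement, round-robin within it (assumed policy).

class SuperWayLRU:
    def __init__(self, n_ways, m_super):
        per = n_ways // m_super
        self.groups = [list(range(i * per, (i + 1) * per))
                       for i in range(m_super)]
        self.order = list(range(m_super))   # index 0 = LRU super-way
        self.next_in_group = [0] * m_super  # round-robin pointer

    def _touch(self, g):
        self.order.remove(g)
        self.order.append(g)                # super-way becomes MRU

    def hit(self, way):
        per = len(self.groups[0])
        self._touch(way // per)             # hits update super-way order

    def replace(self):
        g = self.order[0]                   # least recently used super-way
        way = self.groups[g][self.next_in_group[g]]
        self.next_in_group[g] = (self.next_in_group[g] + 1) % len(self.groups[g])
        self._touch(g)                      # replacements also update order
        return way
```

This keeps M! orderings of state instead of N!, which is the storage reduction the abstract is after.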
-
Patent number: 6981119Abstract: A memory system may use the storage space freed by compressing a unit of data to store performance-enhancing data associated with that unit of data. For example, a memory controller may be configured to allocate several of storage locations within a memory to store a unit of data. If the unit of data is compressed, the unit of data may not occupy a portion of the storage locations allocated to it. The memory controller may store performance-enhancing data associated with the unit of data in the portion of the storage locations allocated to but not occupied by the first unit of data.Type: GrantFiled: August 29, 2002Date of Patent: December 27, 2005Assignee: Advanced Micro Devices, Inc.Inventors: Kevin Michael Lepak, Benjamin Thomas Sander
-
Patent number: 6976127Abstract: A memory system includes a memory cache responsive to a single processing unit. The memory cache is arrangeable to include a first independently cached area assigned to store a first number of data packets based on a first processing unit context, and a second independently cached area assigned to store a second number of data packets based on a second processing unit context. A memory control system is coupled to the memory cache, and is configured to arrange the first independently cached area and the second independently cached area in such a manner that the first number of data packets and the second number of data packets coexist in the memory cache and are available for transfer between the memory cache and the single processing unit.Type: GrantFiled: July 14, 2003Date of Patent: December 13, 2005Assignees: Sony Corporation, Sony Electronics Inc.Inventor: Thomas Patrick Dawson
-
Patent number: 6973540Abstract: In a multi-way cache, a method for selecting N ways available for replacement includes providing a plurality of rulesets where each one of the plurality of rulesets specifies N ways in the cache that are available for replacement (where N is equal to or greater than zero). The method further includes receiving an access address, and using at least a portion of the access address to select one of the plurality of rulesets. The selected one of the plurality of rulesets may then be used to select N ways in that cache that are available for replacement. One embodiment uses the high order bits of the access address to select a ruleset. An alternate embodiment uses at least a portion of the access address and a ruleset selector control register to select the ruleset. Yet another embodiment uses the access address and address range comparators to select the ruleset.Type: GrantFiled: July 25, 2003Date of Patent: December 6, 2005Assignee: Freescale Semiconductor, Inc.Inventors: William C. Moyer, John J. Vaglica
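The high-order-bits embodiment mentioned in the abstract above can be sketched as a small ruleset table: each ruleset is a bitmask of ways available for replacement, indexed by the top bits of the access address. The mask values, bit widths, and four-way geometry below are made up for illustration.

```python
# Sketch of address-selected replacement rulesets for a 4-way cache.
# Each entry masks which ways may be replaced (a zero bit "locks" a way).

RULESETS = [
    0b1111,  # all four ways replaceable
    0b0011,  # only ways 0-1 replaceable (ways 2-3 locked)
    0b1100,  # only ways 2-3 replaceable
    0b0001,  # only way 0 replaceable
]

def ways_available(address, addr_bits=32, sel_bits=2):
    # Use the top sel_bits of the access address to pick a ruleset,
    # as in the high-order-bits embodiment.
    index = address >> (addr_bits - sel_bits)
    mask = RULESETS[index]
    return [w for w in range(4) if mask & (1 << w)]
```

The alternate embodiments (a ruleset selector control register, or address range comparators) would simply change how `index` is computed.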
-
Patent number: 6973665Abstract: The desirability of programming events may be determined using metadata for programming events that includes goodness of fit scores associated with categories of a classification hierarchy, along with one or more of descriptive data and keyword data. The programming events are ranked in accordance with the viewing preferences of viewers as expressed in one or more viewer profiles. The viewer profiles may each include preference scores associated with categories of the classification hierarchy and may also include one or more keywords. Ranking is performed through category matching and keyword matching using the contents of the metadata and the viewer profiles. The viewer profile keywords may be qualified keywords that are associated with specific categories of the classification hierarchy. The ranking may be performed such that qualified keyword matches generally rank higher than keyword matches, and keyword matches generally rank higher than category matches.Type: GrantFiled: November 16, 2001Date of Patent: December 6, 2005Assignee: MYDTV, Inc.Inventors: Gil Gavriel Dudkiewicz, Dale Kittrick Hitt, Jonathan Percy Barker
-
Patent number: 6961827Abstract: The present invention provides a method and apparatus for invalidating a victimized entry. The apparatus comprises a directory cache adapted to store one or more cache entries, and a control unit. The control unit is adapted to determine whether it is desirable to remove a shared cache entry from the directory cache, and invalidate the shared cache entry in response to determining that it is desirable to remove the shared cache entry from the directory cache.Type: GrantFiled: November 13, 2001Date of Patent: November 1, 2005Assignee: Sun Microsystems, Inc.Inventors: Patricia Shanahan, Andrew E. Phelps, Nicholas E. Aneshansley
-
Patent number: 6961823Abstract: An apparatus and method for prefetching cache data in response to data requests. The prefetching uses the memory addresses of requested data to search for other data, from a related address, in a cache. This, or other data, may then be prefetched based on the result of the search.Type: GrantFiled: July 29, 2003Date of Patent: November 1, 2005Assignee: Intel CorporationInventors: Herbert Hing-Jing Hum, Zohar Bogin
-
Patent number: 6958757Abstract: The method of one embodiment for the invention is for the CPU to read a subset of consecutive pixels from RAM and cache each such pixel in the WC Cache (and load corresponding blocks into the L2 Cache). These reads and loads continue until the capacity of the L2 Cache is reached, and then these blocks (a “band”) are iteratively processed until the entire band in the L2 Cache has been written to the frame buffer via the WC Cache. Once this is complete, the process then “dumps” the L2 Cache (that is, it ignores the existing blocks and allows them to be naturally pushed out with subsequent loads) and the next band of consecutive pixels is read (and their blocks loaded). This process continues until the portrait-oriented graphic is entirely loaded.Type: GrantFiled: July 18, 2003Date of Patent: October 25, 2005Assignee: Microsoft CorporationInventor: Donald David Karlov
-
Patent number: 6959363Abstract: A cache memory comprises a fetch engine arranged to issue fetch requests for accessing data items from locations in a main memory identified by access addresses in a program being executed, a pre-fetch engine controlled to issue pre-fetch requests for speculatively accessing pre-fetch data items from locations in said main memory identified by addresses which are determined as being a number of locations from respective ones of said access addresses, and a calibrator arranged to selectively vary said number of locations.Type: GrantFiled: October 22, 2002Date of Patent: October 25, 2005Assignee: STMicroelectronics LimitedInventors: Trefor Southwell, Peter Hedinger
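The calibrator described in the abstract above varies how far ahead of the demand stream the pre-fetch engine reaches. A minimal sketch: prefetch from `access_address + distance` and adjust `distance` based on whether prefetched lines arrive in time. The specific feedback rule below is an assumption; the patent only says the number of locations is selectively varied.

```python
# Sketch of a prefetcher with a calibrated prefetch distance.
# Assumed feedback rule: late prefetches push the distance out,
# timely ones pull it back toward the minimum.

class CalibratedPrefetcher:
    def __init__(self, distance=4, lo=1, hi=64):
        self.distance = distance
        self.lo, self.hi = lo, hi

    def prefetch_address(self, access_address):
        # Speculatively fetch `distance` locations ahead of the access.
        return access_address + self.distance

    def feedback(self, arrived_in_time):
        if arrived_in_time:
            self.distance = max(self.lo, self.distance - 1)
        else:
            self.distance = min(self.hi, self.distance + 1)
```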
-
Patent number: 6950904Abstract: A cache way replacement technique in which a least-recently used cache way is identified and replaced, such that the replacement of cache ways over time is substantially evenly distributed among a set of cache ways in a cache memory. The least-recently used cache way is identified in a cache memory having a non-binary number of cache ways.Type: GrantFiled: June 25, 2002Date of Patent: September 27, 2005Assignee: Intel CorporationInventors: Todd D. Erdner, Bradley G. Burgess, Heather L. Hanson
-
Patent number: 6922754Abstract: A method and system directed to reducing the bottleneck to storage. In one aspect of the invention, a data-aware data flow manager is inserted between storage and a process or device requesting access to the storage. The data-aware data flow manager determines which data to cache and which data to pipe directly through. Through intelligent management and caching of data flow, the data-aware data flow manager is able to avoid some of the latencies associated with caches that front storage devices. The data-aware data flow manager may determine whether to cache data or pipe it directly through based on many factors including type of data requested, state of cache, and user or system policies.Type: GrantFiled: December 8, 2003Date of Patent: July 26, 2005Assignee: inFabric Technologies, Inc.Inventors: Wei Liu, Steven H. Kahle
-
Patent number: 6918020Abstract: In one embodiment, a method is provided. The method of this embodiment may include determining whether requested data is stored in a memory. If the requested data is not stored in the memory, the method may include determining whether a plurality of requests to access the requested data have occurred during a predetermined number of most recent data accesses. If the plurality of requests to access the requested data have occurred during the predetermined number of most recent data accesses, the method may also include storing the requested data in the memory. Of course, many variations, modifications, and alternatives are possible without departing from this embodiment.Type: GrantFiled: August 30, 2002Date of Patent: July 12, 2005Assignee: Intel CorporationInventors: Joseph S. Cavallo, Stephen J. Ippolito
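The admission policy in the abstract above (cache data on a miss only if it has recurred within the last K accesses) can be sketched with a bounded history. The deque-based window and all names are illustrative, not from the patent.

```python
# Sketch of frequency-gated admission: on a miss, store the data only
# if the key appears at least min_requests times in the recent-access
# window (including the current request).

from collections import deque

class FrequencyAdmitCache:
    def __init__(self, history_len=8, min_requests=2):
        self.store = {}
        self.history = deque(maxlen=history_len)  # most recent accesses
        self.min_requests = min_requests

    def access(self, key, load_fn):
        self.history.append(key)
        if key in self.store:
            return self.store[key]                # hit
        value = load_fn(key)
        # Admit only if the key recurs within the recent-access window.
        if list(self.history).count(key) >= self.min_requests:
            self.store[key] = value
        return value
```

A one-shot access thus never pollutes the cache; only keys that repeat within the window are admitted.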
-
Patent number: 6915386Abstract: A method and system for processing Service Level Agreement (SLA) terms in a caching component in a storage system. The method can include monitoring cache performance for groups of data in the cache, each group having a corresponding SLA. Overfunded SLAs can be identified according to the monitored cache performance. In consequence, an entry can be evicted from among one of the groups which correspond to an identified one of the overfunded SLAs. In one aspect of the present invention, the most overfunded SLA can be identified, and an entry can be evicted from among the group which corresponds to the most overfunded SLA.Type: GrantFiled: June 5, 2002Date of Patent: July 5, 2005Assignee: International Business Machines CorporationInventors: Ronald P. Doyle, David L. Kaminsky, David M. Ogle
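The most-overfunded-SLA aspect of the abstract above can be sketched in a few lines: each group carries a measured hit rate and an SLA target, the group with the largest surplus is deemed most overfunded, and an entry is evicted from it. The surplus metric, names, and oldest-first choice within a group are all assumptions for illustration.

```python
# Sketch of SLA-driven eviction: evict from the group whose measured
# performance most exceeds its SLA target.

def most_overfunded(groups):
    """groups maps name -> (measured_hit_rate, sla_target_hit_rate)."""
    return max(groups, key=lambda g: groups[g][0] - groups[g][1])

def evict_one(cache, groups):
    # cache maps key -> group name; evict the oldest entry (insertion
    # order) from the most overfunded group.
    g = most_overfunded(groups)
    for key, group in list(cache.items()):
        if group == g:
            del cache[key]
            return key
    return None
```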
-
Patent number: 6912623Abstract: A cache memory for use in a multithreaded processor includes a number of set-associative thread caches, with one or more of the thread caches each implementing an eviction process based on access request address that reduces the amount of replacement policy storage required in the cache memory. At least a given one of the thread caches in an illustrative embodiment includes a memory array having multiple sets of memory locations, and a directory for storing tags each corresponding to at least a portion of a particular address of one of the memory locations. The directory has multiple entries each storing multiple ones of the tags, such that if there are n sets of memory locations in the memory array, there are n tags associated with each directory entry. The directory is utilized in implementing a set-associative address mapping between access requests and memory locations of the memory array.Type: GrantFiled: June 4, 2002Date of Patent: June 28, 2005Assignee: Sandbridge Technologies, Inc.Inventors: Erdem Hokenek, C. John Glossner, Arthur Joseph Hoane, Mayan Moudgill, Shenghong Wang