Combined Replacement Modes Patents (Class 711/134)
  • Patent number: 7480767
    Abstract: Methods and apparatus, including computer program products, for purging an item from a cache based on the expiration of a period of time and having an associated process to generate an item purged from the cache. A program stores a first item in a cache with an indication of a process to generate the first item, schedules a validity period for the first item, and purges the first item from the cache when the validity period has expired. The validity period may be optimized to be less than a period of time after which the first item would be promoted from a first generation of the cache to a second generation of the cache and invalid objects in the first generation of the cache are freed from memory more frequently than invalid objects in the second generation of the cache.
    Type: Grant
    Filed: June 15, 2006
    Date of Patent: January 20, 2009
    Assignee: SAP AG
    Inventor: Martin Moser
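    Illustrative sketch (editor's note, not part of the patent): a minimal Python model of the purge-and-regenerate behaviour described in the abstract above. The PROMOTION_AGE constant, the generator callback, and the clamping factor are assumptions; garbage-collector generations themselves are not modelled.
      import time

      class GenerationalCache:
          PROMOTION_AGE = 60.0  # assumed age at which an item would reach the second generation

          def __init__(self):
              self._items = {}  # key -> (value, generator, expires_at)

          def put(self, key, value, generator, validity=None):
              # Clamp the validity period below the promotion age, so expired items
              # are still sitting in the cheaply collected first generation.
              validity = min(validity or self.PROMOTION_AGE / 2, self.PROMOTION_AGE * 0.9)
              self._items[key] = (value, generator, time.time() + validity)

          def get(self, key):
              value, generator, expires_at = self._items[key]
              if time.time() >= expires_at:      # validity period expired: purge the item...
                  del self._items[key]
                  value = generator()            # ...and rebuild it with its associated process
                  self.put(key, value, generator)
              return value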
  • Patent number: 7478200
    Abstract: A fractional caching method and an adaptive contents transmitting method using the same are provided. The fractional caching method includes the steps of setting up a divided location for dividing a certain object into two parts, receiving an evict request for acquiring a space in the inside of the cache, when the evict request is transmitted, dividing a plurality of objects stored in the cache into a prefix-Object located in the head of the object and a suffix-Object located in the tail of the object from the divided location, and removing only the suffix-Object of each object, wherein the divided location is set up at a size rate that a size of the prefix-Object is in inverse proportion to the number of the destination types.
    Type: Grant
    Filed: September 28, 2006
    Date of Patent: January 13, 2009
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Yong Ju Lee, Ok Gee Min, Jung Keun Kim, Jin Hwan Jeong, Choon Seo Park, Hag Young Kim, Myung Joon Kim
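    Illustrative sketch (editor's note, not part of the patent): a small Python model of the prefix/suffix split described above. The inverse-proportionality rule for the division point follows the abstract; the concrete formula and class name are assumptions.
      class FractionalCache:
          def __init__(self, num_destination_types):
              self.num_destination_types = num_destination_types
              self._objects = {}  # key -> bytes (full object, or prefix only after eviction)

          def _split_point(self, obj):
              # Prefix size taken as inversely proportional to the number of destination types.
              return max(1, len(obj) // self.num_destination_types)

          def put(self, key, obj):
              self._objects[key] = obj

          def handle_evict_request(self):
              # Acquire space by dropping only the suffix of each cached object.
              for key, obj in self._objects.items():
                  self._objects[key] = obj[: self._split_point(obj)]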
  • Patent number: 7478218
    Abstract: A runtime code manipulation system is provided that supports code transformations on a program while it executes. The runtime code manipulation system uses code caching technology to provide efficient and comprehensive manipulation of an application running on an operating system and hardware. The code cache includes a system for automatically keeping the code cache at an appropriate size for the current working set of an application running.
    Type: Grant
    Filed: February 17, 2006
    Date of Patent: January 13, 2009
    Assignee: VMware, Inc.
    Inventors: Derek L. Bruening, Saman P. Amarasinghe
  • Patent number: 7467260
    Abstract: An apparatus and method is disclosed for flushing a cache in a computing system. In a multinode computing system a cache in a first node may contain modified data in an address space of a second node. The cache in the first node must be purged prior to shutting down the first node. The computing system uses a random class replacement scheme for the cache. A cache flush routine sets a cache flush mode in a class replace select mechanism, overriding the random class replacement scheme. With the random class replacement scheme overridden, a minimum number of fetches will flush all the cache lines in the cache, each fetch loading the cache with a cache line not already in the cache. No additional delay penalty is incurred in a critical path through which fetches and stores to the cache must pass.
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: December 16, 2008
    Assignee: International Business Machines Corporation
    Inventors: Duane Arlyn Averill, John Michael Borkenhagen, Philip Rogers Hillier, III
  • Patent number: 7463562
    Abstract: A method of recording a temporary defect list on a write-once recording medium, a method of reproducing the temporary defect list, an apparatus for recording and/or reproducing the temporary defect list, and the write-once recording medium. The method of recording a temporary defect list for defect management on a write-once recording medium includes recording the temporary defect list, which is created while data is recorded on the write-once recording medium, in at least one cluster of the write-once recording medium, and verifying if a defect is generated in the at least one cluster. Then, the method includes re-recording data originally recorded in a defective cluster in another cluster, and recording pointer information, which indicates a location of the at least one cluster where the temporary defect list is recorded, on the write-once recording medium.
    Type: Grant
    Filed: April 26, 2004
    Date of Patent: December 9, 2008
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sung-hee Hwang, Jung-wan Ko
  • Patent number: 7457921
    Abstract: A system that facilitates the storage of data using a write barrier. The system interfaces to a hardware component that stores data, and includes a write barrier component that dynamically employs instructions compatible with the hardware component to ensure data integrity during storage of the data. The write barrier component is independent of at least an operating system and an application and can operate in a least one of a user mode and a kernel mode. The write barrier component includes at least one of software instructions, routines, and methods, the selection of one or more of which is based on hardware data extracted from the hardware component. A selection component interrogates the hardware component for hardware data to facilitate selection of one or more instructions most suitable for interfacing to the hardware component. A coalescing component combines cache synchronization requests into a single set of instructions, which set is processed to flush a disk cache in one process.
    Type: Grant
    Filed: February 23, 2005
    Date of Patent: November 25, 2008
    Assignee: Microsoft Corporation
    Inventors: Henry P Gabryjelski, Krishnan Varadarajan, Peter W Wieland, Raju Ramanathan
  • Patent number: 7457918
    Abstract: Methods for a treatment of cached objects are described. In one embodiment, management of a region of a cache is configured with an eviction policy plug-in and a storage plug-in. The eviction policy plug-in includes code to evict an object that is cached in the region of cache. The storage plug-in includes code to execute a function involving a group manipulation function that gets each object within a pre-defined group of objects located within the region of cache, each object associated with a key, each key registered with the storage plug-in.
    Type: Grant
    Filed: December 28, 2004
    Date of Patent: November 25, 2008
    Assignee: SAP AG
    Inventors: Dirk Marwinski, Petio G. Petev
  • Patent number: 7457920
    Abstract: The proposed system and associated algorithm when implemented improves the processor cache miss rates and overall cache efficiency in multi-core environments in which multiple CPU's share a single cache structure (as an example). The cache efficiency will be improved by tracking CPU core loading patterns such as miss rate and minimum cache line load threshold levels. Using this information along with existing cache eviction method such as LRU, results in determining which cache line from which CPU is evicted from the shared cache when a capacity conflict arises. This methodology allows one to dynamically allocate shared cache entries to each core within the socket based on the particular core's frequency of shared cache usage.
    Type: Grant
    Filed: January 26, 2008
    Date of Patent: November 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
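    Illustrative sketch (editor's note, not part of the patent): an LRU-ordered shared cache in which the victim on a capacity conflict is taken from a core holding more lines than its miss-rate share, subject to a minimum line threshold. The allocation formula and threshold values are assumptions.
      from collections import OrderedDict

      class SharedCache:
          def __init__(self, capacity, num_cores, min_lines_per_core=2):
              self.capacity = capacity
              self.min_lines = min_lines_per_core
              self.lines = OrderedDict()          # addr -> owning core, kept in LRU order
              self.misses = [0] * num_cores

          def access(self, core, addr):
              if addr in self.lines:
                  self.lines.move_to_end(addr)    # hit: refresh LRU position
                  return
              self.misses[core] += 1
              if len(self.lines) >= self.capacity:
                  self._evict()
              self.lines[addr] = core

          def _evict(self):
              counts = {}
              for owner in self.lines.values():
                  counts[owner] = counts.get(owner, 0) + 1
              total = sum(self.misses) or 1
              # Cores holding more of the cache than their share of misses, and above the
              # minimum line threshold, give up a line first.
              over = [c for c in counts
                      if counts[c] > self.min_lines
                      and counts[c] / self.capacity > self.misses[c] / total]
              victim_cores = set(over) if over else set(counts)
              victim = next(a for a, owner in self.lines.items() if owner in victim_cores)
              del self.lines[victim]              # oldest qualifying line (LRU within the choice)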
  • Patent number: 7454573
    Abstract: A hardware based method for determining when to migrate cache lines to the cache bank closest to the requesting processor to avoid remote access penalty for future requests. In a preferred embodiment, decay counters are enhanced and used in determining the cost of retaining a line as opposed to replacing it while not losing the data. In one embodiment, a minimization of off-chip communication is sought; this may be particularly useful in a CMP environment.
    Type: Grant
    Filed: January 13, 2005
    Date of Patent: November 18, 2008
    Assignee: International Business Machines Corporation
    Inventors: Alper Buyuktosunoglu, Zhigang Hu, Jude A. Rivers, John T. Robinson, Xiaowei Shen, Vijayalakshmi Srinivasan
  • Publication number: 20080282038
    Abstract: An array of data values, such as an image of pixel values, is stored in a main memory (12). A processing operation is performed using the pixel values. The processing operation defines time points of movement of a multidimensional region (20, 22) of locations in the image. Pixel values from inside and around the region are cached for processing. At least when a cache miss occurs for a pixel value from outside the region, cache replacement of data in cache locations (142) is performed. Locations that store pixel data for locations in the image outside the region (20, 22) are selected for replacement, selectively exempting from replacement cache locations (142) that store pixel data for locations in the image inside the region. In embodiments, different types of cache structure are used for caching data values inside and outside the region. In an embodiment the cache locations for pixel data inside the regions support a higher level of output parallelism than the cache locations for pixel data around the region.
    Type: Application
    Filed: April 21, 2005
    Publication date: November 13, 2008
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS, N.V.
    Inventors: Ramanathan Sethuraman, Aleksandar Beric, Carlos Antonio Alba Pinto, Harm Johannes Antonius Maria Peters, Patrick Peter Elizabeth Meuwissen, Srinivasan Balakrishnan, Gerard Veldman
  • Patent number: 7451275
    Abstract: Methods for a treatment of cached objects are described. In one embodiment, management of a region of a cache is configured with an eviction policy plug-in and a storage plug-in. The eviction policy plug-in includes code to evict an object that is cached in the region of cache. The storage plug-in includes code to execute a function involving an object manipulation function that gets a first object located within the region of cache, and an attribute manipulation function that gets a specific attribute for the first object from within the region of cache.
    Type: Grant
    Filed: December 28, 2004
    Date of Patent: November 11, 2008
    Assignee: SAP AG
    Inventors: Petio G. Petev, Michael Wintergerst
  • Patent number: 7441086
    Abstract: A data caching method and a computer-readable medium storing a program executing the method used in a cache system where a data replacing parameter is used for a data replacement rule, are provided to assist determining whether a user data has to be replaced or not. The data caching method and the program rely on the cache system to determine whether to replace the user data when the resource consumed in fetching the user data is lower than a predetermined level. However, when the resource consumed in fetching the user data is higher or equal to the predetermined level, the value of the data replacing parameter mentioned above is replaced within a predetermined period, such that the user data could be maintained in the cache system.
    Type: Grant
    Filed: September 7, 2005
    Date of Patent: October 21, 2008
    Assignee: Industrial Technology Research Institute
    Inventors: Kuang-Hui Chi, Pai-Feng Tsai, Kwo-Shine Liaw
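    Illustrative sketch (editor's note, not part of the patent): a Python model in which the replacement parameter (the last-use timestamp of an LRU policy) is periodically renewed for data that was expensive to fetch, so that data tends to stay cached. COST_THRESHOLD and REFRESH_PERIOD are assumptions.
      import time

      class CostAwareCache:
          COST_THRESHOLD = 0.1   # assumed resource level (seconds spent fetching)
          REFRESH_PERIOD = 5.0   # assumed period for renewing the data replacing parameter

          def __init__(self, capacity):
              self.capacity = capacity
              self.items = {}    # key -> [value, fetch_cost, last_use, last_refresh]

          def put(self, key, value, fetch_cost):
              if len(self.items) >= self.capacity:
                  victim = min(self.items, key=lambda k: self.items[k][2])  # plain LRU victim
                  del self.items[victim]
              now = time.time()
              self.items[key] = [value, fetch_cost, now, now]

          def tick(self):
              # Within the predetermined period, renew the replacement parameter of items whose
              # fetch cost met or exceeded the threshold, keeping that user data resident.
              now = time.time()
              for entry in self.items.values():
                  if entry[1] >= self.COST_THRESHOLD and now - entry[3] >= self.REFRESH_PERIOD:
                      entry[2] = entry[3] = now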
  • Patent number: 7437516
    Abstract: Methods for a treatment of cached objects are described. In one embodiment, management of a region of a cache is configured with an eviction policy plug-in. The eviction policy plug-in includes an eviction timing component and a sorting component, with the eviction timing component including code to implement an eviction timing method, and the eviction timing method to trigger eviction of an object from the region of cache. The sorting component includes code to implement a sorting method to identify an object that is eligible for eviction from said region of cache.
    Type: Grant
    Filed: December 28, 2004
    Date of Patent: October 14, 2008
    Assignee: SAP AG
    Inventors: Michael Wintergerst, Petio G. Petev
  • Patent number: 7434247
    Abstract: The desirability of programming events may be determined using metadata for programming events that includes goodness of fit scores associated with categories of a classification hierarchy one or more of descriptive data and keyword data. The programming events are ranked in accordance with the viewing preferences of viewers as expressed in one or more viewer profiles. The viewer profiles may each include preference scores associated with categories of the classification hierarchy and may also include one or more keywords. Ranking is performed through category matching and keyword matching using the contents of the metadata and the viewer profiles. The viewer profile keywords may be qualified keywords that are associated with specific categories of the classification hierarchy. The ranking may be performed such that qualified keyword matches generally rank higher than keyword matches, and keyword matches generally rank higher than category matches.
    Type: Grant
    Filed: March 28, 2005
    Date of Patent: October 7, 2008
    Assignee: Meevee, Inc.
    Inventors: Gil Gavriel Dudkiewicz, Dale Kittrick Hitt, Jonathan Percy Barker
  • Patent number: 7430639
    Abstract: The present invention includes storing in a main memory data block tags corresponding to blocks of data previously inserted into a buffer cache memory and then evicted from the buffer cache memory or written over in the buffer cache memory. Counters associated with the tags are updated when look-up requests to look up data block tags are received from a cache look-up algorithm.
    Type: Grant
    Filed: August 26, 2005
    Date of Patent: September 30, 2008
    Assignee: Network Appliance, Inc.
    Inventors: Naveen Bali, Naresh Patel
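    Illustrative sketch (editor's note, not part of the patent): a minimal main-memory tag store for evicted or overwritten blocks, with a counter per tag bumped whenever the cache look-up algorithm asks about that tag. Method names are assumptions.
      class GhostTagStore:
          def __init__(self):
              self.counters = {}          # tag -> look-ups seen since the block left the cache

          def record_eviction(self, tag):
              self.counters.setdefault(tag, 0)

          def on_lookup(self, tag):
              # Called from the cache look-up path; a rising count hints that the block
              # was evicted (or written over) too eagerly.
              if tag in self.counters:
                  self.counters[tag] += 1
                  return True             # the block was recently resident in the buffer cache
              return False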
  • Publication number: 20080235458
    Abstract: Embodiments of the present invention provide methods and systems for efficiently tracking evicted or non-resident pages. For each non-resident page, a first hash value is generated from the page's metadata, such as the page's mapping and offset parameters. This first hash value is then used as an index to point to one of a plurality of circular buffers. Each circular buffer comprises an entry for a clock pointer and entries that uniquely represent non-resident pages. The clock pointer points to the next page that is suitable for replacement and moves through the circular buffer as pages are evicted. In some embodiments, the entries that uniquely represent non-resident pages are a hash value that is generated from the page's inode data.
    Type: Application
    Filed: April 29, 2008
    Publication date: September 25, 2008
    Inventor: Henri Han van RIEL
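    Illustrative sketch (editor's note, not part of the patent): hashed circular buffers for tracking non-resident pages, each with its own clock pointer that advances as pages are evicted. The hash function, buffer sizes, and the token stored per page (the patent mentions a hash of inode data) are assumptions.
      import hashlib

      class NonResidentTracker:
          def __init__(self, num_buffers=64, buffer_size=32):
              self.buffers = [[None] * buffer_size for _ in range(num_buffers)]
              self.hands = [0] * num_buffers              # per-buffer clock pointer

          @staticmethod
          def _hash(*parts):
              digest = hashlib.sha1("/".join(map(str, parts)).encode()).digest()
              return int.from_bytes(digest[:8], "little")

          def record_eviction(self, mapping, offset):
              idx = self._hash(mapping, offset) % len(self.buffers)   # first hash picks the buffer
              buf, hand = self.buffers[idx], self.hands[idx]
              buf[hand] = self._hash(offset, mapping)                 # compact token for the page
              self.hands[idx] = (hand + 1) % len(buf)                 # clock hand moves on eviction

          def was_recently_evicted(self, mapping, offset):
              idx = self._hash(mapping, offset) % len(self.buffers)
              return self._hash(offset, mapping) in self.buffers[idx]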
  • Patent number: 7424577
    Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look-up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
    Type: Grant
    Filed: August 26, 2005
    Date of Patent: September 9, 2008
    Assignee: Network Appliance, Inc.
    Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
  • Patent number: 7418553
    Abstract: The present invention is intended to reduce unnecessary power consumption by controlling disconnection of entries unused in a translation lookaside buffer (TLB) for a long time. In an aspect of the present invention, there is provided a method of controlling electric power consumed for a translation lookaside buffer (TLB) within a central processing device having the TLB and an entry replacement mechanism wherein the TLB includes a plurality of entries and performs translation from a logical address to a physical address and the entry replacement mechanism replaces the entries of the TLB, the method including the steps of: selecting one or more entries among the plurality of entries of the TLB in accordance with one or more predefined criteria based on an output from the entry replacement mechanism, and controlling electric power supplied to the selected entries.
    Type: Grant
    Filed: March 11, 2005
    Date of Patent: August 26, 2008
    Assignee: Fujitsu Limited
    Inventor: Koichi Yoshimi
  • Patent number: 7409502
    Abstract: A processing system and method performs allocation of memory cache lines in response to a cache write miss. A processor receives a plurality of data processing instructions. A first store instruction for storing data in a system memory at a predetermined address is decoded by decoding a first specifier within the first store instruction. The first specifier determines an allocation policy for the first store instruction wherein the allocation policy determines whether to store data within the cache when the predetermined address is not within the cache. Additional store instructions are decoded. For example, a second specifier determines an allocation policy for a second store instruction. The specifier in each of the store instructions may be implemented in various forms to provide a policy indicator for each store instruction. No allocation policy may also be established on a per-access basis.
    Type: Grant
    Filed: May 11, 2006
    Date of Patent: August 5, 2008
    Assignee: Freescale Semiconductor, Inc.
    Inventors: William C. Moyer, Jeffrey W. Scott
  • Patent number: 7401186
    Abstract: Method, system and computer program product for tracking changes in an L1 data cache directory. A method for tracking changes in an L1 data cache directory determines if data to be written to the L1 data cache is to be written to an address to be changed from an old address to a new address. If it is determined that the data to be written is to be written to an address to be changed, a determination is made if the data to be written is associated with the old address or the new address. If it is determined that the data is to be written to the new address, the data is allowed to be written to the new address following a prescribed delay after the address to be changed is changed. The method is preferably implemented in a system that provides a Store Queue (STQU) design that includes a Content Addressable Memory (CAM)-based store address tracking mechanism that includes early and late write CAM ports. The method eliminates time windows and the need for an extra copy of the L1 data cache directory.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: July 15, 2008
    Assignee: International Business Machines Corporation
    Inventors: Sheldon B. Levenstein, Anthony Saporito
  • Publication number: 20080168234
    Abstract: Provided are a method, system, and article of manufacture for managing write requests in cache directed to different storage groups. A determination is made of a high and low thresholds for a plurality of storage groups configured in a storage, wherein the high and low thresholds for one storage group indicate a high and low percentage of a cache that may be used to store write requests to the storage group. A determination is made of a number of tasks to assign to the storage groups based on the determined high and low thresholds for the storage groups, wherein each task assigned to one storage group destages write requests from the cache to the storage group.
    Type: Application
    Filed: January 8, 2007
    Publication date: July 10, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Binny Sher Gill, Michael Thomas Benhase, Joseph Smith Hyde, Thomas Charles Jarvis, Bruce McNutt, Dharmendra Shantilal Modha
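    Illustrative sketch (editor's note, not part of the patent): one way the number of destage tasks for a storage group could follow its high and low cache-occupancy thresholds. The linear ramp and the max_tasks value are assumptions.
      def destage_tasks(cache_used_pct, low_pct, high_pct, max_tasks=4):
          if cache_used_pct <= low_pct:
              return 0                     # below the low threshold: no destaging needed
          if cache_used_pct >= high_pct:
              return max_tasks             # at or above the high threshold: full effort
          span = high_pct - low_pct
          return max(1, round(max_tasks * (cache_used_pct - low_pct) / span))

      # Example: a group allowed 10-40% of the cache, currently occupying 25% of it.
      print(destage_tasks(25, 10, 40))     # -> 2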
  • Patent number: 7398357
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Grant
    Filed: September 19, 2006
    Date of Patent: July 8, 2008
    Assignee: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
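    Illustrative sketch (editor's note, not part of the patent): a single cache set whose per-line state is an access count; on a miss the victim is chosen by either LFU or MFU criteria. Passing the policy flag in from outside is an assumption.
      class LfuMfuSet:
          def __init__(self, tags):
              self.counts = {tag: 0 for tag in tags}   # prior state: access frequency per line

          def access(self, tag, policy="LFU"):
              if tag in self.counts:
                  self.counts[tag] += 1                # hit: update the state information
                  return None
              # Access miss: replace one line under the selected replacement criteria.
              pick = min if policy == "LFU" else max
              victim = pick(self.counts, key=self.counts.get)
              del self.counts[victim]
              self.counts[tag] = 1
              return victim

      s = LfuMfuSet(["A", "B"])
      s.access("A"); s.access("A"); s.access("B")
      print(s.access("C", policy="MFU"))               # evicts "A", the most frequently used line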
  • Publication number: 20080162822
    Abstract: A memory apparatus including: a cache control section to control a cache memory for an auxiliary storage apparatus; a volatile memory; and a nonvolatile memory, wherein the cache memory for the auxiliary storage apparatus is configured to have a volatile cache memory provided in the volatile memory and a nonvolatile cache memory provided in the nonvolatile memory, and wherein the cache control section accesses the nonvolatile cache memory using a write back method.
    Type: Application
    Filed: November 15, 2007
    Publication date: July 3, 2008
    Inventors: Kenji Okuyama, Tomohiro Suzuki, Yuji Tamura, Tetsuya Ishikawa, Hiroyasu Nishimura, Tomoya Ogawa, Fumikage Uchida, Nao Moromizato, Munetoshi Eguchi
  • Patent number: 7395373
    Abstract: Embodiments of a method for reducing conflict misses in a set-associative cache by mapping each memory address to a primary set and at least one overflow set are described. If a conflict miss occurs within the primary set, a cache line from the primary set is selected for replacement. However, rather than removing the selected cache line from the cache completely, the selected cache line may instead be relocated to the overflow set. The selected cache line replaces a cache line in the overflow set, if it is determined that the selected cache line from the primary set has an estimated age that is more recent than an estimated age for any cache line in the overflow set. Embodiments of the method incorporate various techniques for estimating the age of cache lines, and, particularly, for estimating the relative time since any given cache line was last accessed.
    Type: Grant
    Filed: September 20, 2005
    Date of Patent: July 1, 2008
    Assignee: International Business Machines Corporation
    Inventor: John T. Robinson
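    Illustrative sketch (editor's note, not part of the patent): a primary set whose conflict-miss victim is offered to an overflow set, displacing that set's oldest line only if the victim was used more recently. Modelling ages with a global access counter is an assumed choice among the estimation techniques mentioned.
      class OverflowSetCache:
          def __init__(self, ways=2):
              self.ways = ways
              self.primary, self.overflow = {}, {}     # tag -> estimated age (last access time)
              self.clock = 0

          def access(self, tag):
              self.clock += 1
              if tag in self.primary or tag in self.overflow:
                  (self.primary if tag in self.primary else self.overflow)[tag] = self.clock
                  return
              if len(self.primary) >= self.ways:       # conflict miss in the primary set
                  victim = min(self.primary, key=self.primary.get)
                  victim_age = self.primary.pop(victim)
                  if len(self.overflow) < self.ways:
                      self.overflow[victim] = victim_age
                  else:
                      oldest = min(self.overflow, key=self.overflow.get)
                      if victim_age > self.overflow[oldest]:   # more recent than an overflow line
                          del self.overflow[oldest]
                          self.overflow[victim] = victim_age   # relocate instead of discarding
              self.primary[tag] = self.clock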
  • Publication number: 20080147982
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Application
    Filed: September 19, 2006
    Publication date: June 19, 2008
    Applicant: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
  • Publication number: 20080147983
    Abstract: A data processing system is provided comprising at least one processing unit (10) for processing data; a memory means (40) for storing data; and a cache memory means (20) for caching data stored in the memory means (40). Said cache memory means (20) is associated to at least one processing unit (10). An interconnect means (30) is provided for connecting the memory means (40) and the cache memory means (20). The cache memory means (20) is adapted for performing a cache replacement based on reduced logic level changes of the interconnect means (30) as introduced by a data transfer (DO-Dm) between the memory means (40) and the cache memory means (20).
    Type: Application
    Filed: January 27, 2006
    Publication date: June 19, 2008
    Applicant: NXP B.V.
    Inventors: Bijo Thomas, Sainath Karlapalem
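    Illustrative sketch (editor's note, not part of the patent): one reading of replacement based on reduced logic-level changes, choosing the victim whose stored data differs from the incoming data in the fewest bit positions, so the refill transfer toggles the fewest interconnect lines. Ignoring recency entirely is an assumption.
      def bit_toggles(old: int, new: int) -> int:
          # Number of interconnect lines that change level between two transfers.
          return bin(old ^ new).count("1")

      def choose_victim(cache_lines, incoming):
          return min(cache_lines, key=lambda tag: bit_toggles(cache_lines[tag], incoming))

      lines = {"A": 0b1111_0000, "B": 0b1010_1010, "C": 0b0000_0001}
      print(choose_victim(lines, 0b0000_0011))   # "C": only one data line toggles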
  • Patent number: 7386673
    Abstract: Embodiments of the present invention provide methods and systems for efficiently tracking evicted or non-resident pages. For each non-resident page, a first hash value is generated from the page's metadata, such as the page's mapping and offset parameters. This first hash value is then used as an index to point to one of a plurality of circular buffers. Each circular buffer comprises an entry for a clock pointer and entries that uniquely represent non-resident pages. The clock pointer points to the next page that is suitable for replacement and moves through the circular buffer as pages are evicted. In some embodiments, the entries that uniquely represent non-resident pages are a hash value that is generated from the page's inode data.
    Type: Grant
    Filed: November 30, 2005
    Date of Patent: June 10, 2008
    Assignee: Red Hat, Inc.
    Inventor: Henri Han van Riel
  • Patent number: 7380065
    Abstract: A method and system for improving the performance of a cache. The cache may include an array of tag entries where each tag entry includes an additional bit (“reused bit”) used to indicate whether its associated cache line has been reused, i.e., has been requested or referenced by the processor. By tracking whether a cache line has been reused, data (cache line) that may not be reused may be replaced with the new incoming cache line prior to replacing data (cache line) that may be reused. By replacing data in the cache memory that might not be reused prior to replacing data that might be reused, the cache hit may be improved thereby improving performance.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: May 27, 2008
    Assignee: International Business Machines Corporation
    Inventors: Gordon T. Davis, Santiago A. Leon, Hans-Werner Tast
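    Illustrative sketch (editor's note, not part of the patent): one cache set whose tags carry a reused bit; lines never referenced again after their fill are replaced before lines that were. The way count and the fallback choice are assumptions.
      class ReusedBitSet:
          def __init__(self, ways=4):
              self.ways = ways
              self.lines = {}                       # tag -> reused bit

          def access(self, tag):
              if tag in self.lines:
                  self.lines[tag] = True            # referenced again: mark the line as reused
                  return None
              victim = None
              if len(self.lines) >= self.ways:
                  not_reused = [t for t, reused in self.lines.items() if not reused]
                  victim = (not_reused or list(self.lines))[0]   # prefer a line never reused
                  del self.lines[victim]
              self.lines[tag] = False               # a fresh fill starts with the bit clear
              return victim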
  • Patent number: 7376792
    Abstract: A customizable cache discard policy is provided which reduces adverse consequences of conventional discard policies. In a data processing system, a cache controller invokes a cache data discard policy as the cache approaches its capacity. Using one possible policy, data having the shortest retrieval (fetch) time is discarded before data having longer retrieval times. In an alternative policy, data may be discarded based upon its source. Weightings may be applied based upon the distance from each source to the cache, may be based upon priorities assigned to each source, or may be based upon the type of each source.
    Type: Grant
    Filed: August 17, 2005
    Date of Patent: May 20, 2008
    Assignee: International Business Machines Corporation
    Inventor: Matthew G Borlick
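    Illustrative sketch (editor's note, not part of the patent): discarding the entry that is cheapest to re-fetch first, optionally scaling the fetch time by a per-source weight. The multiplicative weighting is an assumption.
      def pick_discard(entries, source_weight=None):
          # entries: key -> (fetch_seconds, source)
          def cost(key):
              fetch_seconds, source = entries[key]
              weight = source_weight.get(source, 1.0) if source_weight else 1.0
              return fetch_seconds * weight
          return min(entries, key=cost)

      entries = {"a": (0.002, "local-disk"), "b": (0.300, "remote-db")}
      print(pick_discard(entries))     # "a": the shortest retrieval time is discarded first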
  • Patent number: 7373459
    Abstract: A congestion control and avoidance method including a method check step of determining whether the request contents is cacheable or uncacheable on the basis of the request inputted from the client terminal, a first Uniform Resource Identifier (URI) check step of, when it is determined that the request contents is cacheable in the method check step, checking a URI included in the request from the client terminal to determine whether the request contents is cacheable or uncacheable, a first URI hash search step of, when it is determined that the request contents is cacheable based on determination of the first URI check step, searching a URI hash to determine to execute any of regular caching, priority caching and access limitation operation, and a step of executing any of the regular caching, priority caching and access limitation operation according to determination in the first URI hash search step.
    Type: Grant
    Filed: July 27, 2005
    Date of Patent: May 13, 2008
    Assignee: Hitachi, Ltd.
    Inventors: Hideo Aoki, Takashi Nishikado, Daisuke Yokota, Yasuhiro Takahashi, Fumio Noda, Yoshiteru Takeshima
  • Patent number: 7360031
    Abstract: Method and apparatus to enable I/O agents to perform atomic operations in shared, coherent memory spaces. The apparatus includes an arbitration unit, a host interface unit, and a memory interface unit. The arbitration unit provides an interface to one or more I/O agents that issue atomic transactions to access and/or modify data stored in a shared memory space accessed via the memory interface unit. The host interface unit interfaces to a front-side bus (FSB) to which one or more processors may be coupled. In response to an atomic transaction issued by an I/O agent, the transaction is forked into two interdependent processes. Under one process, an inbound write transaction is injected into the host interface unit, which then drives the FSB to cause the processor(s) to perform a cache snoop. At the same time, an inbound read transaction is injected into the memory interface unit, which retrieves a copy of the data from the shared memory space.
    Type: Grant
    Filed: June 29, 2005
    Date of Patent: April 15, 2008
    Assignee: Intel Corporation
    Inventors: Sridhar Lakshmanamurthy, Mason B. Cabot, Sameer Nanavati, Mark Rosenbluth
  • Patent number: 7356650
    Abstract: Systems and methods are provided for a data processing system and a cache arrangement. The data processing system includes at least one processor, a first-level cache, a second-level cache, and a memory arrangement. The first-level cache bypasses storing data for a memory request when a do-not-cache attribute is associated with the memory request. The second-level cache stores the data for the memory request. The second-level cache also bypasses updating of least-recently-used indicators of the second-level cache when the do-not-cache attribute is associated with the memory request.
    Type: Grant
    Filed: June 17, 2005
    Date of Patent: April 8, 2008
    Assignee: Unisys Corporation
    Inventors: Donald C. Englin, James A. Williams
  • Patent number: 7353341
    Abstract: A cache write back operation, write back modified data to memory from cache data array to fix inconsistency between them can be cancelled by the results of a comparison of the progress between a write back and snoop push or snoop kill operation. Write back is intended to make an empty slot to accommodate a reload data due to a cache miss and since a snoop push or snoop kill operation creates an invalid entry in the cache, write back is not needed. If simultaneous push or kill with write back operation exist, then write back machine is late cancelled. System performance improves due to preserving more cache lines in cache data array for possible future reuse.
    Type: Grant
    Filed: June 3, 2004
    Date of Patent: April 1, 2008
    Assignee: International Business Machines Corporation
    Inventors: Roy Moonseuk Kim, Yasukichi Okawa, Thuong Quang Truong
  • Patent number: 7346736
    Abstract: One embodiment of the present invention provides a system that selects bases to form a regression model for cache performance. During operation, the system receives empirical data for a cache rate. The system also receives derivative constraints for the cache rate. Next, the system obtains candidate bases that satisfy the derivative constraints. For each of these candidate bases, the system: (1) computes an aggregate error E incurred using the candidate basis over the empirical data; (2) computes an instability measure I of an extrapolation fit for using the candidate basis over an extrapolation region; and then (3) computes a selection criterion F for the candidate basis, wherein F is a function of E and I. Finally, the system minimizes the selection criterion F across the candidate bases to select the basis used for the regression model.
    Type: Grant
    Filed: October 3, 2005
    Date of Patent: March 18, 2008
    Assignee: Sun Microsystems, Inc.
    Inventors: Ilya Gluhovsky, David Vengerov, John R. Busch
  • Patent number: 7343471
    Abstract: Instructions of a program are stored in compressed form in a program memory (12). In a processor which executes the instructions, a program counter (50) identifies a position in the program memory. An instruction cache (40) has cache blocks, each for storing one or more instructions of the program in decompressed form. A cache loading unit (42) includes a decompression section (44) and performs a cache loading operation in which one or more compressed-form instructions are read from the position in the program memory identified by the program counter and are decompressed and stored in one of the said cache blocks of the instruction cache. A cache pointer (52) identifies a position in the instruction cache of an instruction to be fetched for execution. An instruction fetching unit (46) fetches an instruction to be executed from the position identified by the cache pointer.
    Type: Grant
    Filed: January 12, 2005
    Date of Patent: March 11, 2008
    Assignee: PTS Corporation
    Inventor: Nigel Peter Topham
  • Patent number: 7330938
    Abstract: System and method for a hybrid-cache. Data received from a data source is cached within a static cache as stable data. The static cache is a cache having a fixed size. Portions of the stable data within the static cache are evicted to a dynamic cache when the static cache becomes full. The dynamic cache is a cache having a dynamic size. The evicted portions of the stable cache are enrolled into the dynamic cache as soft data.
    Type: Grant
    Filed: May 18, 2004
    Date of Patent: February 12, 2008
    Assignee: SAP AG
    Inventors: Iliyan N. Nenov, Panayot M. Dobrikov
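    Illustrative sketch (editor's note, not part of the patent): a fixed-size static cache whose overflow is enrolled into a dynamic cache as soft data. Real soft references are replaced here by an explicit reclaim hook, and the FIFO-style choice of which stable entry to move is an assumption.
      class HybridCache:
          def __init__(self, static_size):
              self.static_size = static_size
              self.static = {}      # stable data, never reclaimed by memory pressure
              self.dynamic = {}     # soft data, may be dropped at any time

          def put(self, key, value):
              if len(self.static) >= self.static_size:
                  old_key = next(iter(self.static))                 # oldest stable entry
                  self.dynamic[old_key] = self.static.pop(old_key)  # enrolled as soft data
              self.static[key] = value

          def get(self, key):
              return self.static.get(key, self.dynamic.get(key))

          def reclaim_soft_data(self):
              # Stand-in for the runtime clearing soft references when memory runs low.
              self.dynamic.clear()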
  • Patent number: 7321955
    Abstract: The storage control device of the present invention controls a plurality of storage devices. The storage control device comprises an LRU write-back unit writing back data stored in the cache memory of the storage control device into the plurality of storage devices by the LRU method, and a write-back schedule processing unit selecting a storage device with a small number of write-backs executed by the LRU write-back unit and writing back data into the selected storage device.
    Type: Grant
    Filed: September 8, 2004
    Date of Patent: January 22, 2008
    Assignee: Fujitsu Limited
    Inventor: Hideaki Ohmura
  • Patent number: 7321954
    Abstract: An LRU array and method for tracking the accessing of lines of an associative cache. The most recently accessed lines of the cache are identified in the table, and cache lines can be blocked from being replaced. The LRU array contains a data array having a row of data representing each line of the associative cache, having a common address portion. A first set of data for the cache line identifies the relative age of the cache line for each way with respect to every other way. A second set of data identifies whether a line of one of the ways is not to be replaced. For cache line replacement, the cache controller will select the least recently accessed line using contents of the LRU array, considering the value of the first set of data, as well as the value of the second set of data indicating whether or not a way is locked. Updates to the LRU occur after each pre-fetch or fetch of a line or when it replaces another line in the cache memory.
    Type: Grant
    Filed: August 11, 2004
    Date of Patent: January 22, 2008
    Assignee: International Business Machines Corporation
    Inventors: James N. Dieffenderfer, Richard W. Doing, Brian E. Frankel, Kenichi Tsuchiya
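    Illustrative sketch (editor's note, not part of the patent): one congruence class with a relative-age value per way and a lock bit per way; the victim is the oldest unlocked way. The 4-way size is an assumption, and at least one unlocked way is assumed to exist.
      class LockableLruSet:
          def __init__(self, ways=4):
              self.age = list(range(ways))          # first data set: 0 = most recently used
              self.locked = [False] * ways          # second data set: do-not-replace flags

          def touch(self, way):
              # Make 'way' the most recently used, aging every way that was younger.
              old = self.age[way]
              for w, a in enumerate(self.age):
                  if a < old:
                      self.age[w] = a + 1
              self.age[way] = 0

          def lock(self, way, value=True):
              self.locked[way] = value

          def victim(self):
              candidates = [w for w in range(len(self.age)) if not self.locked[w]]
              return max(candidates, key=lambda w: self.age[w])   # least recently used unlocked way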
  • Patent number: 7315873
    Abstract: A technique for improving the efficiency of a loop detecting, reference counting storage reclamation program in a computer system. A depth value is maintained for data objects in a memory resource to indicate a distance from a global, live data object. A reference count is also maintained based on a number of objects pointing to each object. A particular object is processed by the storage reclamation program when another object that previously pointed to the particular object no longer points to it, e.g., because the object was deleted or reset to point to another object, and when the depth value of the another object is one less than the depth value of the particular object. If the particular object is determined to be live, its depth value, and the depth values of other objects it points to or “roots” are reset. If the particular object is dead, it is cleaned up.
    Type: Grant
    Filed: July 15, 2003
    Date of Patent: January 1, 2008
    Assignee: International Business Machines Corporation
    Inventor: Russell L. Lewis
  • Patent number: 7313654
    Abstract: As part of some embodiments of the present invention, there is provided a method, a circuit and a system for managing data in a cache memory of a mass data storage device and/or system. In accordance with some embodiments of the present invention, a data portion's priority in the cache may be altered. The priority of the data portion may be altered as a function of an access parameter associated with the data portion and a fetch parameter associated with the data portion.
    Type: Grant
    Filed: October 27, 2004
    Date of Patent: December 25, 2007
    Assignee: XIV Ltd
    Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen, Shemer Schwartz
  • Patent number: 7302524
    Abstract: An apparatus and method for inhibiting data cache thrashing in a multi-threading execution mode through simulating a higher level of associativity in a data cache. The apparatus temporarily splits a data cache into multiple regions and each region is selected according to a thread ID indicator in an instruction register. The data cache is split when the apparatus is in the multi-threading execution mode indicated by an enable cache split bit.
    Type: Grant
    Filed: September 25, 2003
    Date of Patent: November 27, 2007
    Assignee: International Business Machines Corporation
    Inventor: David A. Luick
  • Patent number: 7290081
    Abstract: A ROM patching apparatus for use in a data processing system that executes instruction code stored in the ROM. The ROM patching apparatus comprises: 1) a patch buffer for storing a first replacement cache line containing a first new instruction suitable for replacing at least a portion of the code in the ROM; 2) a lockable cache; 3) core processor logic operable to read from an associated memory a patch table containing a first table entry, the first table entry containing 1) the first new instruction and 2) a first patch address identifying a first patched ROM address of the at least a portion of the code in the ROM. The core processor logic loads the first new instruction from the patch table into the patch buffer, stores the first replacement cache line from the patch buffer into the lockable cache, and locks the first replacement cache line into the lockable cache.
    Type: Grant
    Filed: May 14, 2002
    Date of Patent: October 30, 2007
    Assignee: STMicroelectronics, Inc.
    Inventors: Sivagnanam Parthasarathy, Alessandro Risso
  • Patent number: 7284096
    Abstract: Systems and methods are provided for data caching. An exemplary method for data caching may include establishing a FIFO queue and a LRU queue in a cache memory. The method may further include establishing an auxiliary FIFO queue for addresses of cache lines that have been swapped-out to an external memory. The method may further include determining, if there is a cache miss for the requested data, if there is a hit for requested data in the auxiliary FIFO queue and, if so, swapping-in the requested data into the LRU queue, otherwise swapping-in the requested data into the FIFO queue.
    Type: Grant
    Filed: August 5, 2004
    Date of Patent: October 16, 2007
    Assignee: SAP AG
    Inventor: Ivan Schreter
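    Illustrative sketch (editor's note, not part of the patent): the FIFO queue, LRU queue, and auxiliary FIFO of swapped-out addresses working together. Queue sizes and the load callback are assumptions.
      from collections import OrderedDict, deque

      class TwoQueueCache:
          def __init__(self, fifo_size=4, lru_size=4, aux_size=8):
              self.fifo = OrderedDict()              # addr -> data, insertion order
              self.lru = OrderedDict()               # addr -> data, recency order
              self.aux = deque(maxlen=aux_size)      # addresses of swapped-out lines only
              self.fifo_size, self.lru_size = fifo_size, lru_size

          def access(self, addr, load):
              if addr in self.lru:
                  self.lru.move_to_end(addr)         # LRU hit: refresh recency
                  return self.lru[addr]
              if addr in self.fifo:
                  return self.fifo[addr]             # FIFO hit: order unchanged
              data = load(addr)                      # cache miss: fetch from external memory
              if addr in self.aux:                   # hit in the auxiliary FIFO -> LRU queue
                  if len(self.lru) >= self.lru_size:
                      self.lru.popitem(last=False)
                  self.lru[addr] = data
              else:                                  # otherwise -> FIFO queue
                  if len(self.fifo) >= self.fifo_size:
                      old_addr, _ = self.fifo.popitem(last=False)
                      self.aux.append(old_addr)      # remember only the swapped-out address
                  self.fifo[addr] = data
              return data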
  • Patent number: 7277992
    Abstract: A technique for intelligently evicting cache lines within an inclusive cache architecture. More particularly, embodiments of the invention relate to a technique to evict cache lines within an inclusive cache hierarchy based on the cache coherency traffic generated between an upper level cache and lower level caches.
    Type: Grant
    Filed: March 22, 2005
    Date of Patent: October 2, 2007
    Assignee: Intel Corporation
    Inventors: Christopher J. Shannon, Mark Rowland, Ganapati Srinivasa
  • Patent number: 7260684
    Abstract: A cache management logic controls a transfer of a trace. A first cache couples to the cache management logic to evict the trace based on a replacement mechanism. A second cache couples to the cache management logic to receive the trace based on a number of accesses to the trace.
    Type: Grant
    Filed: January 16, 2001
    Date of Patent: August 21, 2007
    Assignee: Intel Corporation
    Inventors: Abraham Mendelson, Roni Rosner, Ronny Ronen
  • Patent number: 7260679
    Abstract: A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if its size exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
    Type: Grant
    Filed: October 12, 2004
    Date of Patent: August 21, 2007
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Binny S. Gill, Thomas C. Jarvis, Dharmendra S. Modha
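    Illustrative sketch (editor's note, not part of the patent): two LRU lists with an adaptive target size for the low-reuse list; a hit in it grows the target, a hit in the high-reuse list shrinks it, and victims come from whichever list the target indicates. Starting new tracks in the low-reuse list is an assumption.
      from collections import OrderedDict

      class AdaptiveTwoListCache:
          def __init__(self, capacity):
              self.capacity = capacity
              self.low = OrderedDict()      # first LRU list: tracks with low reuse potential
              self.high = OrderedDict()     # second LRU list: tracks with high reuse potential
              self.target_low = capacity // 2

          def access(self, track):
              if track in self.low:
                  self.target_low = min(self.capacity, self.target_low + 1)   # grow the target
                  self.low.move_to_end(track)
              elif track in self.high:
                  self.target_low = max(0, self.target_low - 1)               # shrink the target
                  self.high.move_to_end(track)
              else:
                  if len(self.low) + len(self.high) >= self.capacity:
                      if len(self.low) > self.target_low or not self.high:
                          self.low.popitem(last=False)    # demote from the first list
                      else:
                          self.high.popitem(last=False)   # evict from the second list
                  self.low[track] = True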
  • Patent number: 7254673
    Abstract: Apparatus, methods, and program products for storing data address a first cache and a second cache. The second cache is capable of operating in a first mode wherein data read for storage in the first cache is also stored in the second cache, and is capable of operating in a second mode wherein the data stored in the second cache does not include at least some of the data read for storage in the first cache. The data stored in the second cache includes data that has been removed from the first cache. Thus the contents of the second cache are at least partially exclusive of the contents of the first cache. The described apparatus, methods, and program products are advantageously employed in multi-level caching systems wherein the caches may be approximately the same size.
    Type: Grant
    Filed: December 30, 2004
    Date of Patent: August 7, 2007
    Assignee: EMC Corporation
    Inventor: Douglas Sullivan
  • Patent number: 7246203
    Abstract: A cache for storing data elements is disclosed. The cache includes a cache memory having one or more lines and one or more cache line counters, each associated with a line of the cache memory. In operation, a cache line counter of the one or more of cache line counters is incremented when a request is received to prefetch a data element into the cache memory and is decremented when the data element is consumed. Optionally, one or more reference queues may be used to store the locations of data elements in the cache memory. In one embodiment, data cannot be evicted from cache lines unless the associated cache line counters indicate that the prefetched data has been consumed.
    Type: Grant
    Filed: November 19, 2004
    Date of Patent: July 17, 2007
    Assignee: Motorola, Inc.
    Inventors: Kent D. Moat, Raymond B. Essick, IV, Philip E. May, James M. Norris
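    Illustrative sketch (editor's note, not part of the patent): per-line counters incremented on prefetch and decremented on consumption, with eviction restricted to lines whose counter has returned to zero. The flat line array stands in for the optional reference queues.
      class PrefetchGuardedCache:
          def __init__(self, num_lines):
              self.data = [None] * num_lines
              self.pending = [0] * num_lines          # per-line prefetch counters

          def prefetch(self, line, value):
              self.data[line] = value
              self.pending[line] += 1                 # prefetched, not yet consumed

          def consume(self, line):
              value = self.data[line]
              self.pending[line] -= 1                 # the data element has been consumed
              return value

          def evictable_lines(self):
              # Only lines with no unconsumed prefetched data may be replaced.
              return [i for i, count in enumerate(self.pending) if count == 0]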
  • Patent number: 7237067
    Abstract: Methods for storing replacement data in a multi-way associative cache are disclosed. One method comprises logically dividing the cache's cache sets into segments of at least one cache way; searching a cache set in accordance with a segment search sequence for a segment currently comprising a way which has not yet been accessed during a current cycle of the segment search sequence; searching the current segment in accordance with a way search sequence for a way which has not yet been accessed during a current way search cycle; and storing the replacement data in a first way which has not yet been accessed during a current cycle of the way search sequence. A cache controller that performs such methods is also disclosed.
    Type: Grant
    Filed: April 22, 2004
    Date of Patent: June 26, 2007
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Simon C. Steely, Jr.
  • Patent number: 7228386
    Abstract: A cache may be programmed to disable one or more entries from allocation for storing memory data (e.g. in response to a memory transaction which misses the cache). Furthermore, the cache may be programmed to select which entries of the cache are disabled from allocation. Since the disabled entries are not allocated to store memory data, the data stored in the entries at the time the cache is programmed to disable the entries may remain in the cache. In one specific implementation, the cache also provides for direct access to entries in response to direct access transactions.
    Type: Grant
    Filed: September 24, 2004
    Date of Patent: June 5, 2007
    Assignee: Broadcom Corporation
    Inventors: Joseph B. Rowlands, James B. Keller