Partitioned Cache Patents (Class 711/129)
  • Patent number: 6516387
    Abstract: A set-associative cache having a selectively configurable split/unified mode. The cache may comprise a memory and control logic. The memory may be configured for storing data buffered by the cache. The control logic may be configured for controlling the writing and reading of data to and from the memory. The control logic may organise the memory as a plurality of storage sets, each set being mapped to a respective plurality of external addresses such that data from any of said respective external addresses maps to that set. The control logic may comprise allocation logic for associating a plurality of ways uniquely with each set, the plurality of ways representing respective plural locations for storing data mapped to that set. In the unified mode, the control logic may assign a first plurality of ways to each set to define a single cache region.
    Type: Grant
    Filed: July 30, 2001
    Date of Patent: February 4, 2003
    Assignee: LSI Logic Corporation
    Inventor: Stefan Auracher
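The split/unified organisation described in this abstract, where addresses map to sets and ways are assigned per region, can be sketched in a few lines. This is a minimal illustration under assumed parameters (the class name, modulo mapping, and two-region split are inventions for the example, not the patented implementation):

```python
class SplitUnifiedCache:
    """Sketch of a set-associative cache whose ways can be assigned
    per region (unified mode gives every way to a single region)."""

    def __init__(self, num_sets, num_ways, split=False):
        self.num_sets = num_sets
        if split:
            # Split mode: divide the ways between two cache regions.
            half = num_ways // 2
            self.region_ways = {0: range(0, half), 1: range(half, num_ways)}
        else:
            # Unified mode: all ways belong to one region.
            self.region_ways = {0: range(0, num_ways)}

    def set_index(self, address):
        # Every external address maps to exactly one set.
        return address % self.num_sets

    def candidate_slots(self, address, region=0):
        # The (set, way) locations where data for this address may live.
        s = self.set_index(address)
        return [(s, w) for w in self.region_ways[region]]
```

In unified mode all four ways of a set are candidates; in split mode each region sees only its own half.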
  • Patent number: 6510490
    Abstract: User data transmitted from the host side is first stored in write cache regions of an SDRAM 12 on the basis of an error correction process. When executing an ECC•EDC encode process of adding redundancy data such as an error correction code to the stored user data on the basis of the error correction processing, an encode region of the SDRAM 12 is used. The data subjected to the ECC•EDC encode process is sequentially read out from the encode region to be modulated and then written onto a disk.
    Type: Grant
    Filed: March 15, 2001
    Date of Patent: January 21, 2003
    Assignee: Sanyo Electric Co., Ltd.
    Inventors: Masato Fuma, Miyuki Okamoto
  • Patent number: 6499085
    Abstract: A system is described for servicing a full cache line in response to a partial cache line request. The system includes a storage to store at least one cache line, a hit/miss detector, and a data mover. The hit/miss detector receives a partial cache line read request from a requesting agent and dispatches a fetch request to a memory device to fetch a full cache line data that contains data requested in the partial cache line read request from the requesting agent. The data mover loads the storage with the full cache line data returned from the memory device and forwards a portion of the full cache line data requested by the requesting agent. If data specified in a subsequent partial cache line request from the requesting agent is contained within the full cache line data specified in the previously dispatched fetch request, the hit/miss detector will send a command to the data mover to forward another portion of the full cache line data stored in the storage to the requesting agent.
    Type: Grant
    Filed: December 29, 2000
    Date of Patent: December 24, 2002
    Assignee: Intel Corporation
    Inventors: Zohar Bogin, David J. Harriman, Zdzislaw A. Wirkus, Satish Acharya
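The hit/miss behavior this abstract describes, fetching a full line on a partial-line miss so later partial requests within the same line are served from storage, can be sketched as follows (line size, names, and the dict-based backing store are assumptions for illustration):

```python
class LineBuffer:
    """Sketch: satisfy partial cache line reads by fetching whole lines,
    so later partial requests within the same line need no new fetch."""

    LINE = 64  # assumed cache line size in bytes

    def __init__(self, memory):
        self.memory = memory   # backing store: dict of address -> byte value
        self.lines = {}        # line base address -> list of byte values
        self.fetches = 0       # number of full-line fetches dispatched

    def read(self, addr, size):
        base = addr - addr % self.LINE
        if base not in self.lines:
            # Miss: dispatch one fetch for the full line containing addr.
            self.fetches += 1
            self.lines[base] = [self.memory.get(base + i, 0)
                                for i in range(self.LINE)]
        off = addr - base
        # Forward only the requested portion of the buffered line.
        return self.lines[base][off:off + size]
```

Two partial reads that fall in the same 64-byte line cost a single fetch.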
  • Patent number: 6493800
    Abstract: A cache memory shared among a plurality of separate, disjoint entities each having a disjoint address space, includes a cache segregator for dynamically segregating a storage space allocated to each entity of the entities such that no interference occurs with respective ones of the entities. A multiprocessor system including the cache memory, a method and a signal bearing medium for storing a program embodying the method also are provided.
    Type: Grant
    Filed: March 31, 1999
    Date of Patent: December 10, 2002
    Assignee: International Business Machines Corporation
    Inventor: Matthias Augustin Blumrich
  • Publication number: 20020174301
    Abstract: A system and method of logically partitioning shared memory structures between computer domains is disclosed. In one embodiment, each domain is assigned a unique address space identifier. The unique address space identifier preferably has tag extension and index extension bits. This permits the tag and index bits of a conventional local domain address to be extended with tag extension and index extension bits. Data entries in the shared memory structure may be accessed using an extended index value. Hits may be determined using an extended tag value.
    Type: Application
    Filed: May 17, 2001
    Publication date: November 21, 2002
    Inventors: Patrick N. Conway, Kazunori Masuyama, Takeshi Shimizu, Toshio Ogawa, Martin Sodos, Sudheer Kumar Rao Miryala, Jeremy Farrell
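The tag/index extension scheme in this abstract can be made concrete with a short sketch. The bit widths and the split of the address space identifier (ASID) into index-extension and tag-extension bits are assumed values, not taken from the application:

```python
def extended_index(asid, address, index_bits=8, ext_bits=2):
    """Index extended with the low ASID bits, so equal local addresses
    from different domains select different entries in the shared structure."""
    index = address & ((1 << index_bits) - 1)
    return ((asid & ((1 << ext_bits) - 1)) << index_bits) | index

def extended_tag(asid, address, index_bits=8, ext_bits=2):
    """Tag extended with the remaining ASID bits; a hit requires the
    full (tag extension, tag) pair to match."""
    return (asid >> ext_bits, address >> index_bits)
```

Two domains presenting the same local address thus never collide on an entry or alias on a tag.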
  • Patent number: 6480936
    Abstract: When a write access is received from an upper apparatus, a cache control unit develops write data into a data buffer area in a memory, notifies the upper apparatus of a normal end, and thereafter writes the write data developed into the data buffer area onto a storing medium. An access kind discriminating unit analyzes whether the write access from the host is a sequential access or a random access. A buffer construction control unit selects the data buffer construction with the optimum number of sections in accordance with the access kind and executes the caching operation.
    Type: Grant
    Filed: November 20, 1998
    Date of Patent: November 12, 2002
    Assignee: Fujitsu Limited
    Inventors: Akira Ban, Hiroshi Ichii
  • Publication number: 20020156979
    Abstract: A system, computer program product and method for reallocating memory space for storing a partitioned cache. A server may be configured to receive requests to access a particular logical drive. One or more logical drives may be coupled to an adapter. A plurality of adapters may be coupled to the server. Each logical drive may be associated with one or more stacks where each stack may comprise one or more cache entries for storing information. The one or more stacks associated with a logical drive may be logically grouped into a logically grouped stack associated with that logical drive. Each of the logically grouped stacks of the one or more logical drives coupled to an adapter may be logically grouped into a logically grouped stack associated with that adapter. By logically grouping stacks, memory supporting a partitioned cache may adaptively be reallocated in response to multiple criteria thereby improving the performance of the cache.
    Type: Application
    Filed: November 7, 2001
    Publication date: October 24, 2002
    Applicant: International Business Machines Corporation
    Inventor: Jorge R. Rodriguez
  • Patent number: 6470414
    Abstract: A bank selector circuit for a simultaneous operation flash memory device with a flexible bank partition architecture comprises a memory boundary option, a bank selector encoder coupled to receive a memory partition indicator signal from the memory boundary option, and a bank selector decoder coupled to receive a bank selector code from the bank selector encoder. The decoder, upon receiving a memory address, outputs a bank selector output signal to point the memory address to either a lower memory bank or an upper memory bank in the simultaneous operation flash memory device, in dependence upon the selected memory partition boundary.
    Type: Grant
    Filed: June 26, 2001
    Date of Patent: October 22, 2002
    Assignees: Advanced Micro Devices, Inc., Fujitsu Limited
    Inventors: Tiao-Hua Kuo, Yasushi Kasa, Nancy Leong, Johnny Chen, Michael Van Buskirk
  • Patent number: 6470360
    Abstract: A database system providing a methodology for optimized page allocation is described. During page allocation in the system, once an allocation page with free space has been located in the system's global allocation map or GAM (i.e., using routine page allocation steps), the page identifier for that allocation page is stored in a hint array, as part of that object's (i.e., table's) object descriptor or des. For a table undergoing a lot of splits (i.e., insert-intensive object), the system may store an array of allocation page “hints” (allocation page identifiers) in the des for that object (e.g., table). The array itself comprises a cache of slots (e.g., eight slots), each of which stores an allocation page identifier (“hint”) obtained from the GAM (from a GAM traversal occurring during the page allocation process) or is empty (i.e., has not been filled from the GAM and is therefore set to the initial value of null).
    Type: Grant
    Filed: December 20, 1999
    Date of Patent: October 22, 2002
    Assignee: Sybase, Inc.
    Inventor: Girish Vaitheeswaran
  • Patent number: 6470422
    Abstract: A system includes multiple program execution entities (e.g., tasks, processes, threads, and the like) and a cache memory having multiple sections. An identifier is assigned to each execution entity. An instruction of one of the execution entities is retrieved and an associated identifier is decoded. Information associated with the instruction is stored in one of the cache sections based on the identifier.
    Type: Grant
    Filed: November 9, 2001
    Date of Patent: October 22, 2002
    Assignee: Intel Corporation
    Inventors: Zhong-ning Cai, Tosaku Nakanishi
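The identifier-to-section routing this abstract describes can be sketched briefly. The modulo decode and the dict-per-section representation are assumptions for the example:

```python
class SectionedCache:
    """Sketch: an identifier assigned to each execution entity selects
    the cache section that stores that entity's information."""

    def __init__(self, num_sections):
        self.sections = [dict() for _ in range(num_sections)]

    def section_for(self, entity_id):
        # Decode the identifier into a section (modulo is an assumption).
        return self.sections[entity_id % len(self.sections)]

    def store(self, entity_id, addr, data):
        self.section_for(entity_id)[addr] = data

    def load(self, entity_id, addr):
        return self.section_for(entity_id).get(addr)
```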
  • Patent number: 6470442
    Abstract: A processor includes execution resources, data storage, and an instruction sequencing unit, coupled to the execution resources and the data storage, that supplies instructions within the data storage to the execution resources. At least one of the execution resources, the data storage, and the instruction sequencing unit is implemented with a plurality of hardware partitions of like function for processing data. The data processed by each hardware partition is assigned according to a selectable hash of addresses associated with the data. In a preferred embodiment, the selectable hash can be altered dynamically during the operation of the processor, for example, in response to detection of an error or a load imbalance between the hardware partitions.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: October 22, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steve Dodson, Guy Lynn Guthrie, Jerry Don Lewis
  • Patent number: 6470423
    Abstract: Described herein are approaches for partitioning a buffer cache for dynamically selecting buffers in the buffer cache to store data items, such as data blocks in a DBMS. The selection is based on data access and/or usage patterns. A buffer cache includes multiple buffer pools. A buffer pool is selected from among the multiple buffer pools to store a data item. The selection of a buffer pool is based on various factors, including the likelihood that storing the data item will produce future cache hits, and properties of buffer pools that vary between the buffer pools. Properties of a buffer pool include not only how the buffer pools are organized, both logically and physically, but also how the buffer pool is managed. Examples of a buffer pool property include buffer pool size, size of a buffer in the buffer pool, and the replacement strategy used for a buffer pool (e.g. LRU).
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: October 22, 2002
    Assignee: Oracle Corporation
    Inventors: Alexander C. Ho, Ashok Joshi, Gianfranco Putzolu, Juan R. Loaiza, Graham Wood, William H. Bridge, Jr.
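The pool-selection idea, matching a data item's expected reuse against the replacement strategy of each buffer pool, can be sketched as follows. The strategy names and the selection rule are illustrative assumptions, not the patent's actual criteria:

```python
def choose_pool(pools, reuse_likely):
    """Sketch: prefer an LRU-managed pool for data likely to produce
    future cache hits, and a recycle-style pool for one-pass scans."""
    wanted = "LRU" if reuse_likely else "RECYCLE"
    for pool in pools:
        if pool["strategy"] == wanted:
            return pool
    return pools[0]  # fall back to the default pool
```

For example, a large sequential scan would be steered to the recycle pool so it cannot flush hot data out of the LRU pool.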
  • Patent number: 6467028
    Abstract: The present invention discloses a method and apparatus for viewing and modifying the cache when accessing and processing audio file data from a server. By modifying the cache during transmission of the audio file data such that the cache is never completely depleted of the data, superior sound quality is achieved without significant gaps in transmission.
    Type: Grant
    Filed: September 7, 1999
    Date of Patent: October 15, 2002
    Assignee: International Business Machines Corporation
    Inventor: Edward E. Kelley
  • Patent number: 6457100
    Abstract: A novel structure for a highly scalable, high-performance shared-memory computer system having simplified manufacturability. The computer system contains a repetition of system cells, in which each cell is comprised of a processor chip and a memory subset (having memory chips such as DRAMs or SRAMs) connected to the processor chip by a local memory bus. A unique type of intra-nodal busing connects each system cell in each node to each other cell in the same node. The memory subsets in the different cells need not have equal sizes, and the different nodes need not have the same number of cells. Each node has a nodal cache, a nodal directory and nodal electronic switches to manage all transfers and data coherence among all cells in the same node and in different nodes. The collection of all memory subsets in the computer system comprises the system shared memory, in which data stored in any memory subset is accessible to the processors on the other processor chips in the system.
    Type: Grant
    Filed: September 15, 1999
    Date of Patent: September 24, 2002
    Assignee: International Business Machines Corporation
    Inventors: Michael Ignatowski, Thomas James Heller, Jr., Gottfried Andreas Goldiran
  • Patent number: 6457102
    Abstract: Storing data in a cache memory includes providing a first mechanism for allowing exclusive access to a first portion of the cache memory and providing a second mechanism for allowing exclusive access to a second portion of the cache memory, where exclusive access to the first portion is independent of exclusive access to the second portion. The first and second mechanisms may be software locks. Allowing exclusive access may also include providing a first data structure in the first portion of the cache memory and providing a second data structure in the second portion of the cache memory, where accessing the first portion includes accessing the first data structure and accessing the second portion includes accessing the second data structure. The data structures may be doubly linked ring lists of blocks of data and the blocks may correspond to a track on a disk drive. The technique described herein may be generalized to any number of portions.
    Type: Grant
    Filed: November 5, 1999
    Date of Patent: September 24, 2002
    Assignee: EMC Corporation
    Inventors: Daniel Lambright, Adi Ofer, Natan Vishlitzky, Yuval Ofek
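The two-lock arrangement this abstract describes, with exclusive access to one portion independent of the other, can be sketched with ordinary software locks (the hash-based portion assignment and dict data structures stand in for the patent's linked ring lists):

```python
import threading

class PartitionedCache:
    """Sketch: one software lock per cache portion, so exclusive access
    to one portion is independent of exclusive access to another."""

    def __init__(self, num_portions=2):
        self.locks = [threading.Lock() for _ in range(num_portions)]
        self.portions = [dict() for _ in range(num_portions)]

    def _index(self, key):
        return hash(key) % len(self.portions)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:   # lock only this portion's data structure
            self.portions[i][key] = value

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.portions[i].get(key)
```

Writers contending on different portions never serialize, which is the point of splitting the lock.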
  • Patent number: 6453385
    Abstract: A cache system and method of operating are described in which a cache is connected between a processor and a main memory of a computer. The cache system includes a cache memory having a set of cache partitions. Each cache partition has a plurality of addressable storage locations for holding items fetched from said main memory for use by the processor. The cache system also includes a cache refill mechanism arranged to fetch an item from the main memory and to load said item into the cache memory at one of said addressable storage locations in a cache partition which depends on the address of said item in the main memory. This is achieved by a cache partition access table holding, in association with addresses of items to be cached, respective multi-bit partition indications identifying one or more cache partitions into which the item is to be loaded.
    Type: Grant
    Filed: January 27, 1998
    Date of Patent: September 17, 2002
    Assignee: SGS-Thomson Microelectronics Limited
    Inventors: Andrew Craig Sturges, David May, Glenn Farrall, Bruno Fel, Catherine Barnaby
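The cache partition access table, address ranges associated with multi-bit indications of the permitted partitions, can be sketched directly. The specific ranges and masks below are made up for illustration:

```python
# Each row: (low address, high address, bit mask of permitted partitions).
PARTITION_TABLE = [
    (0x0000, 0x7FFF, 0b0001),   # low range  -> partition 0 only
    (0x8000, 0xFFFF, 0b0110),   # high range -> partition 1 or 2
]

def allowed_partitions(address, num_partitions=4):
    """Return the cache partitions a refill for this address may use."""
    for lo, hi, mask in PARTITION_TABLE:
        if lo <= address <= hi:
            return [p for p in range(num_partitions) if mask & (1 << p)]
    return []   # no table entry: no partition accepts this address
```

On a refill, the mechanism consults the table with the item's main-memory address and loads the item only into a permitted partition.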
  • Patent number: 6449692
    Abstract: A computer system (8) comprising a central processing unit (12) and a memory hierarchy. The memory hierarchy comprises a first cache memory (16) and a second cache memory (26). The first cache memory is operable to store non-pixel-information, wherein the non-pixel information is accessible for processing by the central processing unit. The second cache memory is higher in the memory hierarchy than the first cache memory, and has a number of storage locations operable to store non-pixel information (26b) and pixel data (26a). Lastly, the computer system comprises cache control circuitry (24) for dynamically apportioning the number of storage locations such that a first group of the storage locations are for storing non-pixel information and such that a second group of the storage locations are for storing pixel data.
    Type: Grant
    Filed: December 15, 1998
    Date of Patent: September 10, 2002
    Assignee: Texas Instruments Incorporated
    Inventors: Steven D. Krueger, Jonathan H. Shiell, Ian Chen
  • Patent number: 6449691
    Abstract: A processor includes at least one execution unit, an instruction sequencing unit coupled to the execution unit, and a plurality of caches at a same level. The caches, which store data utilized by the execution unit, have diverse cache hardware and each preferably store only data having associated addresses within a respective one of a plurality of subsets of an address space. The diverse cache hardware can include, for example, differing cache sizes, differing associativities, differing sectoring, and differing inclusivities.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: September 10, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steve Dodson, Guy Lynn Guthrie, Jerry Don Lewis
  • Patent number: 6446165
    Abstract: A processor includes at least one execution unit, an instruction sequencing unit coupled to the execution unit, and a plurality of caches at a same level. The caches, which store data utilized by the execution unit, each store only data having associated addresses within a respective one of a plurality of subsets of an address space and implement diverse caching behaviors. The diverse caching behaviors can include differing memory update policies, differing coherence protocols, differing prefetch behaviors, and differing cache line replacement policies.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: September 3, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steve Dodson, Guy Lynn Guthrie, Jerry Don Lewis
  • Publication number: 20020116576
    Abstract: A system and method for cache sharing. The system is a microprocessor comprising a processor core and a graphics engine, each coupled to a cache memory. The microprocessor also includes a driver to direct how the cache memory is shared by the processor core and the graphics engine. The method comprises receiving a memory request from a graphics application program and determining whether a cache memory that may be shared between a processor core and a graphics engine is available to be shared. If the cache memory is available to be shared, a first portion of the cache memory is allocated to the processor core and a second portion of the cache memory is allocated to the graphics engine. The method and microprocessor may be included in a computing device.
    Type: Application
    Filed: December 27, 2000
    Publication date: August 22, 2002
    Inventors: Jagannath Keshava, Vladimir Pentkovski, Subramaniam Maiyuran, Salvador Palanca, Hsin-Chu Tsai
  • Patent number: 6434671
    Abstract: A method and apparatus for controlling compartmentalization of a cache memory. A cache memory including a plurality of storage components receives one or more externally generated cache compartment signals. Based on the one or more cache compartment signals, cache compartment logic in the cache memory selects one of the plurality of storage compartments to store data after a cache miss.
    Type: Grant
    Filed: September 30, 1997
    Date of Patent: August 13, 2002
    Assignee: Intel Corporation
    Inventor: Shine Chung
  • Patent number: 6434670
    Abstract: A method and apparatus for efficiently managing caches with non-power-of-two congruence classes allows for increasing the number of congruence classes in a cache when not enough area is available to double the cache size. One or more congruence classes within the cache have their associative sets split so that a number of congruence classes are created with reduced associativity. The management method and apparatus allow access to the congruence classes without introducing any additional cycles of delay or complex logic.
    Type: Grant
    Filed: November 9, 1999
    Date of Patent: August 13, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steven Dodson, Guy Lynn Guthrie
  • Patent number: 6434669
    Abstract: A set associative cache includes a cache controller, a directory, and an array including at least one congruence class containing a plurality of sets. The plurality of sets are partitioned into multiple groups according to which of a plurality of information types each set can store. The sets are partitioned so that at least two of the groups include the same set and at least one of the sets can store fewer than all of the information types. To optimize cache operation, the cache controller dynamically modifies a cache policy of a first group while retaining a cache policy of a second group, thus permitting the operation of the cache to be individually optimized for different information types. The dynamic modification of cache policy can be performed in response to either a hardware-generated or software-generated input.
    Type: Grant
    Filed: September 7, 1999
    Date of Patent: August 13, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Lakshminarayana Baba Arimilli, James Stephen Fields, Jr.
  • Patent number: 6434668
    Abstract: A set associative cache includes a number of congruence classes that each contain a plurality of sets, a directory, and a cache controller. The directory indicates, for each congruence class, which of a plurality of information types each of the plurality of sets can store. At least one set in at least one of the congruence classes is restricted to storing fewer than all of the information types and at least one set can store multiple information types. When the cache receives information to be stored of a particular information type, the cache controller stores the information into one of the plurality of sets indicated by the directory as capable of storing that particular information type. By managing the sets in which information is stored according to information type, an awareness of the characteristics of the various information types can easily be incorporated into the cache's allocation and victim selection policies.
    Type: Grant
    Filed: September 7, 1999
    Date of Patent: August 13, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Lakshminarayana Baba Arimilli, James Stephen Fields, Jr.
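The directory-driven allocation in this abstract, where each set in a congruence class is marked with the information types it may hold, can be sketched as follows (the four-set layout and type names are illustrative assumptions):

```python
TYPE_INSTR, TYPE_DATA = "instruction", "data"

# Directory entry for one congruence class: which information types each
# of its four sets may store.  Set 1 may hold either type; set 0 is
# restricted to fewer than all of the information types.
SET_TYPES = {
    0: {TYPE_INSTR},
    1: {TYPE_INSTR, TYPE_DATA},
    2: {TYPE_DATA},
    3: {TYPE_DATA},
}

def allocatable_sets(info_type):
    """Sets the cache controller may allocate for this information type."""
    return [s for s, allowed in SET_TYPES.items() if info_type in allowed]
```

Victim selection then draws only from the sets the directory permits for the incoming information type.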
  • Patent number: 6430656
    Abstract: A cache memory provides a mechanism for storing and retrieving values wherein a hardware mechanism such as a partial address field selector is combined with a software-generated selector in order to access specific congruence classes within a cache. Assignment of software-generated selectors to specific types of data can be made in order to allow an operating system or application to efficiently manage cache usage.
    Type: Grant
    Filed: November 9, 1999
    Date of Patent: August 6, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Bryan Ronald Hunt, William John Starke
  • Patent number: 6430660
    Abstract: A disk controller system includes a microprocessor, a hard disk controller, a disk channel path, a host communications path, and an interface coupled to each of the microprocessor, hard disk controller, disk channel path and host communications path. A unified non-volatile memory is coupled to the interface that has a plurality of memory spaces. A memory space is allocated for each of the microprocessor, hard disk controller, disk channel path and host communications path. Each memory space is separated from another memory space by a programmable memory space boundary. The microprocessor, hard disk controller and the unified memory are all fabricated on a single substrate.
    Type: Grant
    Filed: May 21, 1999
    Date of Patent: August 6, 2002
    Assignee: International Business Machines Corporation
    Inventors: Timothy Michael Kemp, John Davis Palmer, Roy Edwin Scheuerlein
  • Patent number: 6427190
    Abstract: A virtual memory system including a local-to-global virtual address translator for translating local virtual addresses having associated task specific address spaces into global virtual addresses corresponding to an address space associated with multiple tasks, and a global virtual-to-physical address translator for translating global virtual addresses to physical addresses. Protection information is provided by each of the local virtual-to-global virtual address translator, the global virtual-to-physical address translator, the cache tag storage, or a protection information buffer depending on whether a cache hit or miss occurs during a given data or instruction access. The cache is configurable such that it can be configured into a buffer portion or a cache portion for faster cache accesses.
    Type: Grant
    Filed: May 12, 2000
    Date of Patent: July 30, 2002
    Assignee: MicroUnity Systems Engineering, Inc.
    Inventor: Craig C. Hansen
  • Patent number: 6421761
    Abstract: A partitioned cache and management method for selectively caching data by type improves the efficiency of a cache memory by partitioning congruence class sets for storage of particular data types such as operating system routines and data used by those routines. By placing values for associated applications into different partitions in the cache, values can be kept simultaneously available in cache with no interference that would cause deallocation of some values in favor of newly loaded values. Additionally, placing data from unrelated applications in the same partition can be performed to allow the cache to rollover values that are not needed simultaneously.
    Type: Grant
    Filed: November 9, 1999
    Date of Patent: July 16, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Bryan Ronald Hunt, William John Starke
  • Publication number: 20020083269
    Abstract: A cache system and method of operating are described in which a cache is connected between a processor and a main memory of a computer. The cache system includes a cache memory having a set of cache partitions. Each cache partition has a plurality of addressable storage locations for holding items fetched from said main memory for use by the processor. The cache system also includes a cache refill mechanism arranged to fetch an item from the main memory and to load said item into the cache memory at one of said addressable storage locations in a cache partition which depends on the address of said item in the main memory. This is achieved by a cache partition access table holding, in association with addresses of items to be cached, respective multi-bit partition indications identifying one or more cache partitions into which the item is to be loaded.
    Type: Application
    Filed: January 27, 1998
    Publication date: June 27, 2002
    Inventors: Andrew Craig Sturges, David May
  • Patent number: 6408357
    Abstract: A disk drive having a cache memory and a method of operating same, in which the disk drive is connectable to a host computer for reading and writing data on a disk. The method defines a length parameter specifying the length of a data segment to be written to the disk. A first portion of the cache memory stores data segments whose length is equal to the length parameter and a second portion of the cache memory stores data segments whose length is not equal to the length parameter. When a host write command including a write data segment having a write command length is received, the write data segment is stored in the first portion if the write command length is equal to the length parameter and stored in the second portion if the write command length is not equal to the length parameter. Write data segments stored in the first portion may be overwritten and the writing thereof to disk may be delayed according to a predetermined delayed writing policy.
    Type: Grant
    Filed: January 15, 1999
    Date of Patent: June 18, 2002
    Assignee: Western Digital Technologies, Inc.
    Inventors: Jonathan Lee Hanmann, Marcus C. Kellerman
  • Patent number: 6408361
    Abstract: The present invention provides a method and apparatus for allowing autonomous, way-specific tag updates. More specifically, the invention provides way-specific tag and status updates while concurrently allowing reads of the ways not currently being updated. If a read hit is determined, then the read is processed in a typical fashion. However, if the read is a read miss and one of the ways is flagged as being updated, then all ways are read again once the specific way has completed its update.
    Type: Grant
    Filed: September 2, 1999
    Date of Patent: June 18, 2002
    Assignees: International Business Machines Corporation, Motorola, Inc.
    Inventors: Thomas Albert Petersen, James Nolan Hardage, Jr., Scott Ives Remington
  • Patent number: 6401168
    Abstract: A mass data storage device (10) and method for operating it are disclosed. The mass data storage device has a rotating disk memory (14) which has a number of sectors for containing data. A FIFO memory (30) has three memory sections (40-42), each for containing an entire sector of data associated with respective sectors of the rotating disk memory. An ECC unit (34) has random access to any data contained in the FIFO memory (30). The ECC unit (34) is operated to perform error correction on data while the data is contained in the FIFO memory (30).
    Type: Grant
    Filed: January 4, 1999
    Date of Patent: June 4, 2002
    Assignee: Texas Instruments Incorporated
    Inventors: John W. Williams, Michael James
  • Publication number: 20020062424
    Abstract: A microprocessor including a control unit and a cache connected with the control unit for storing data to be used by the control unit, wherein the cache is selectively configurable as either a single cache or as a partitioned cache having a locked cache portion and a normal cache portion. The normal cache portion is controlled by a hardware-implemented automatic replacement process. The locked cache portion is locked so that the automatic replacement process cannot modify the contents of the locked cache. An instruction is provided in the instruction set that enables software to selectively allocate lines in the locked cache portion to correspond to locations in an external memory, thereby enabling the locked cache portion to be completely managed by software.
    Type: Application
    Filed: August 1, 2001
    Publication date: May 23, 2002
    Applicant: Nintendo Co., Ltd.
    Inventors: Yu-Chung C. Liao, Peter A. Sandon, Howard Cheng, Peter Hsu
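The locked/normal split this application describes, software pinning lines that the hardware replacement process may never evict, can be sketched as follows (the capacity model and victim choice are simplifications invented for the example):

```python
class LockableCache:
    """Sketch: a cache whose locked lines are managed by software and are
    invisible to the automatic replacement process."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # address -> (data, locked flag)

    def allocate_locked(self, addr, data):
        # Software explicitly allocates and pins this line (the role of
        # the dedicated instruction in the abstract).
        self.entries[addr] = (data, True)

    def fill(self, addr, data):
        # Hardware refill path: may evict only unlocked lines.
        if len(self.entries) >= self.capacity:
            victims = [a for a, (_, locked) in self.entries.items()
                       if not locked]
            if victims:
                del self.entries[victims[0]]
        self.entries[addr] = (data, False)
```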
  • Patent number: 6393525
    Abstract: An LRU with protection method is provided that offers substantial performance benefits over traditional LRU replacement methods by providing solutions to common problems with traditional LRU replacement. By dividing a cache entry list into a filter sublist and a reuse list, population and protection processes can be implemented to reduce associativity and capacity displacement. New cache entries are initially stored in the filter list, and the reuse list is populated with entries promoted from the cache list. Eviction from the filter list and reuse list is done by a protection process that evicts a data entry from the filter, reuse, or global cache list. Many variations of protection and eviction processes are discussed herein, along with the benefits each provides in reducing the effect of unwanted displacement problems present in traditional LRU replacement.
    Type: Grant
    Filed: May 18, 1999
    Date of Patent: May 21, 2002
    Assignee: Intel Corporation
    Inventors: Christopher B. Wilkerson, Nicholas D. Wade
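The filter/reuse division in the abstract above can be sketched concretely. This is an assumed minimal variant, not the patented policy: the abstract describes many protection and eviction variations, and this sketch picks one simple rule (promote on re-reference while in the filter list).

```python
# Sketch of LRU-with-protection: new entries enter a "filter" list and are
# promoted to a protected "reuse" list only if re-referenced. Scan-like
# traffic then displaces only filter entries, not the protected working set.
class FilteredLRU:
    def __init__(self, filter_size, reuse_size):
        self.filter, self.reuse = [], []        # LRU order, most recently used last
        self.filter_size, self.reuse_size = filter_size, reuse_size

    def access(self, key):
        if key in self.reuse:                   # hit in the protected reuse list
            self.reuse.remove(key)
            self.reuse.append(key)
            return "reuse-hit"
        if key in self.filter:                  # re-reference while filtered: promote
            self.filter.remove(key)
            if len(self.reuse) >= self.reuse_size:
                self.reuse.pop(0)               # evict LRU entry of the reuse list
            self.reuse.append(key)
            return "promoted"
        if len(self.filter) >= self.filter_size:
            self.filter.pop(0)                  # eviction confined to the filter list
        self.filter.append(key)
        return "miss"
```

A one-pass scan over many keys churns only the filter list here, which is the displacement-reduction effect the abstract claims over plain LRU.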
  • Patent number: 6389513
    Abstract: A buffer cache management structure, or metadata, for a computer system such as a NUMA (non-uniform memory access) machine, wherein physical main memory is distributed and shared among separate memories. The memories reside on separate nodes that are connected by a system interconnect. The buffer cache metadata is partitioned into portions that each include a set of one or more management data structures such as hash queues that keep track of disk blocks cached in the buffer cache. Each set of management data structures is stored entirely within one memory. A first process performs operations on the buffer cache metadata by determining, from an attribute of a data block requested by the process, in which memory a portion of the metadata associated with the data block is stored. The process then determines if the memory containing the metadata portion is local to the process. If so, the first process performs the operation.
    Type: Grant
    Filed: May 13, 1998
    Date of Patent: May 14, 2002
    Assignee: International Business Machines Corporation
    Inventor: Kevin A. Closson
  • Patent number: 6381676
    Abstract: A method and apparatus which provides a cache management policy for use with a cache memory for a multi-threaded processor. The cache memory is partitioned among a set of threads of the multi-threaded processor. When a cache miss occurs, a replacement line is selected in a partition of the cache memory which is allocated to the particular thread from which the access causing the cache miss originated, thereby preventing pollution to partitions belonging to other threads.
    Type: Grant
    Filed: December 7, 2000
    Date of Patent: April 30, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Robert Aglietti, Rajiv Gupta
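The per-thread partitioning policy above is easy to model in miniature. This is a hypothetical sketch under assumptions (fully associative partitions, LRU victims), not the patented apparatus; its point is only that a victim line is always selected inside the faulting thread's own partition.

```python
# Sketch: cache lines are partitioned among threads, and on a miss the
# replacement victim is chosen only from the partition of the thread that
# caused the miss, so other threads' partitions are never polluted.
class ThreadPartitionedCache:
    def __init__(self, lines_per_thread):
        # lines_per_thread: {thread_id: number of lines owned by that thread}
        self.partitions = {t: [] for t in lines_per_thread}
        self.capacity = lines_per_thread

    def access(self, thread_id, addr):
        part = self.partitions[thread_id]       # only this thread's partition
        if addr in part:
            part.remove(addr)
            part.append(addr)                   # LRU update
            return "hit"
        if len(part) >= self.capacity[thread_id]:
            part.pop(0)                         # victim from this partition only
        part.append(addr)
        return "miss"
```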
  • Patent number: 6380873
    Abstract: A method for reducing radio frequency interference from a high frequency serial bus by scrambling data signals and reducing the repetition of control signals. Beginning and ending control signals are provided, with meaningless signals inserted therebetween.
    Type: Grant
    Filed: June 30, 2000
    Date of Patent: April 30, 2002
    Assignee: Quantum Corporation
    Inventors: Anthony L. Priborsky, Knut S. Grimsrud, John Brooks
  • Patent number: 6370620
    Abstract: A plurality of web objects are cached. A first object is within an assigned web partition. A second object is outside of the assigned web partition. The first object is placed in a first amount of space within the cache. A copy of the second object is placed in a second amount of space within the cache. The first amount of space includes and is larger than the second amount of space.
    Type: Grant
    Filed: January 28, 1999
    Date of Patent: April 9, 2002
    Assignee: International Business Machines Corporation
    Inventors: Kun-Lung Wu, Philip Shi-Lung Yu
  • Patent number: 6370622
    Abstract: Curious caching improves upon cache snooping by allowing a snooping cache to insert data from snooped bus operations that is not currently in the cache and independent of any prior accesses to the associated memory location. In addition, curious caching allows software to specify which data producing bus operations, e.g., reads and writes, result in data being inserted into the cache. This is implemented by specifying “memory regions of curiosity” and insertion and replacement policy actions for those regions. In column caching, the replacement of data can be restricted to particular regions of the cache. By also making the replacement address-dependent, column caching allows different regions of memory to be mapped to different regions of the cache. In a set-associative cache, a replacement policy specifies the particular column(s) of the set-associative cache in which a page of data can be stored.
    Type: Grant
    Filed: November 19, 1999
    Date of Patent: April 9, 2002
    Assignee: Massachusetts Institute of Technology
    Inventors: Derek Chiou, Boon S. Ang
  • Patent number: 6370619
    Abstract: The present invention provides a method and apparatus for partitioning a buffer cache for dynamically mapping data blocks with a particular replacement strategy based on the associated table's access and/or usage patterns. According to the method, a buffer cache in a computer system is managed by dividing the buffer cache into multiple buffer pools. In managing the buffer cache, when a data item is requested, it is first determined whether the requested data item is stored in a buffer within the buffer cache. If the requested data item is not stored in a buffer in the buffer cache, then a particular buffer pool in the buffer cache is dynamically selected for storing the requested data item. Once the particular buffer pool is selected, the requested data item is stored into a buffer in the particular buffer pool.
    Type: Grant
    Filed: June 22, 1998
    Date of Patent: April 9, 2002
    Assignee: Oracle Corporation
    Inventors: Alexander C. Ho, Ashok Joshi, Gianfranco Putzolu, Juan R. Loaiza, Graham Wood, William H. Bridge, Jr.
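The multiple-buffer-pool management described in the abstract above can be sketched as follows. Pool names and the selection rule are illustrative assumptions (loosely echoing the keep/recycle/default convention in database buffer caches), not the patented selection logic.

```python
# Sketch: a buffer cache divided into pools, with the pool for a missing
# block chosen dynamically from the access pattern of its table.
class PooledBufferCache:
    def __init__(self, pool_sizes):
        # e.g. {"keep": 2, "recycle": 1, "default": 4}
        self.pools = {name: [] for name in pool_sizes}
        self.sizes = pool_sizes

    def select_pool(self, table_pattern):
        # Replacement strategy keyed to usage: small hot tables are retained,
        # scan-once tables are recycled quickly (assumed rule).
        if table_pattern == "frequent":
            return "keep"
        if table_pattern == "scan":
            return "recycle"
        return "default"

    def get(self, block, table_pattern):
        for pool in self.pools.values():        # first, check every pool for a hit
            if block in pool:
                return "hit"
        name = self.select_pool(table_pattern)  # miss: pick a pool dynamically
        pool = self.pools[name]
        if len(pool) >= self.sizes[name]:
            pool.pop(0)                         # evict within the chosen pool only
        pool.append(block)
        return f"miss -> {name}"
```

Full-table scans then cycle through the small recycle pool without displacing the buffers of frequently accessed tables.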
  • Patent number: 6366994
    Abstract: An apparatus and method for allocating a memory in a cache aware manner are provided. An operating system can be configured to partition a system memory into regions. The operating system can then allocate corresponding portions within each region to various programs that include the operating system and applications. The portions within each region of the system memory can map into designated portions of a cache. The size of a portion of memory allocated for a program can be determined according to the needs of the program.
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: April 2, 2002
    Assignee: Sun Microsystems, Inc.
    Inventor: Sesha Kalyur
  • Patent number: 6367002
    Abstract: An apparatus and a method are distinguished in that an instruction queue is provided which is configured such that, when instruction data are written into and/or read out of the instruction queue, the operation can selectively start at any of a plurality of defined points within the instruction queue. As a result, the incidence of pauses in program execution can be reduced to a minimum.
    Type: Grant
    Filed: February 12, 1999
    Date of Patent: April 2, 2002
    Assignee: Siemens Aktiengesellschaft
    Inventor: Jürgen Birkhäuser
  • Patent number: 6363468
    Abstract: Systems and methods consistent with the present invention allocate memory of a memory array by partitioning the memory array into subheaps dedicated to frequently used memory blocks. To this end, the system collects memory statistics on memory usage patterns to determine memory block sizes most often used in the memory array. The system uses these statistics to partition the memory array into a main heap and at least one memory subheap. The system then allocates or deallocates memory of the memory array using the memory subheap. Furthermore, the system allocates memory of the memory subheap only for memory blocks having one of the memory block sizes most often used in the memory array.
    Type: Grant
    Filed: June 22, 1999
    Date of Patent: March 26, 2002
    Assignee: Sun Microsystems, Inc.
    Inventor: David Allison
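The statistics-driven subheap partitioning described above can be sketched briefly. This is an assumed illustration, not the patented allocator: the class, its constructor signature, and the use of a simple frequency count over a request history are all invented for the example.

```python
from collections import Counter

# Sketch: usage statistics select the most frequently requested block sizes,
# each of which gets a dedicated subheap (a per-size free list); all other
# request sizes fall through to the main heap.
class SubheapAllocator:
    def __init__(self, request_history, num_subheaps=2):
        common = Counter(request_history).most_common(num_subheaps)
        self.subheaps = {size: [] for size, _ in common}   # free list per hot size
        self.main_heap = []

    def allocate(self, size):
        if size in self.subheaps:               # hot size: serve from its subheap
            free = self.subheaps[size]
            return ("subheap", free.pop() if free else object())
        return ("main", object())               # everything else: main heap

    def deallocate(self, size, block):
        if size in self.subheaps:
            self.subheaps[size].append(block)   # recycle within the subheap
        else:
            self.main_heap.append(block)
```

Because each subheap holds blocks of exactly one size, freed blocks can be reused without splitting or coalescing, which is the practical payoff of dedicating subheaps to the common sizes.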
  • Publication number: 20020035673
    Abstract: A data caching technique is provided that is highly scalable while being synchronous with an underlying persistent data source, such as a database management system. Consistent with the present invention, data is partitioned along appropriate lines, such as by account, so that a data cache stores mostly unique information and receives only the invalidation messages necessary to maintain that data cache.
    Type: Application
    Filed: July 6, 2001
    Publication date: March 21, 2002
    Inventors: James Brian Roseborough, Venkateswarlu Kothapalli, Toshiyuki Matsushima
  • Patent number: 6356996
    Abstract: An apparatus and method for cache fencing allows programmatic control of the access and duration of stay of selected executables within processor cache. In one example, an instruction set implementing a virtual machine may store each instruction in a single cache line as a compiled, linked loaded image. After loading, cache fencing is conducted to prevent the cache from flushing the contents or replacing the contents of any cache line. Typically, in so doing, attributes associated with pages in physical memory are employed. The attributes include an “uncacheable” attribute flag, which is set for the entire contents of physical memory except that containing the selected executables which are intended to remain within cache memory. The attributes may also include page sizing attributes which are utilized to define pages that contain interpreter instructions and pages that do not contain interpreter instructions.
    Type: Grant
    Filed: July 17, 1998
    Date of Patent: March 12, 2002
    Assignee: Novell, Inc.
    Inventor: Phillip M. Adams
  • Patent number: 6351802
    Abstract: A method of scheduling instructions in a computer processor. The method comprises fetching instructions to create an in-order instruction buffer, and scheduling instruction from the instruction buffer into instruction slots within instruction vectors in an instruction vector table. Instruction vectors are then dispatched from the instruction vector table to a prescheduled instruction cache, and, in parallel, to an instruction issue unit.
    Type: Grant
    Filed: December 3, 1999
    Date of Patent: February 26, 2002
    Assignee: Intel Corporation
    Inventor: Gad S. Sheaffer
  • Patent number: 6349363
    Abstract: A system includes multiple program execution entities (e.g., tasks, processes, threads, and the like) and a cache memory having multiple sections. An identifier is assigned to each execution entity. An instruction of one of the execution entities is retrieved and an associated identifier is decoded. Information associated with the instruction is stored in one of the cache sections based on the identifier.
    Type: Grant
    Filed: December 8, 1998
    Date of Patent: February 19, 2002
    Assignee: Intel Corporation
    Inventors: Zhong-ning Cai, Tosaku Nakanishi
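The identifier-to-section mapping described in the abstract above can be sketched in a few lines. This is a hypothetical model, not the patented mechanism: the modulo mapping from identifier to section is an invented stand-in for whatever decoding the patent specifies.

```python
# Sketch: each execution entity (task, process, thread) carries an
# identifier; the decoded identifier selects which cache section stores
# the information associated with that entity's instructions.
class SectionedCache:
    def __init__(self, num_sections, lines_per_section):
        self.sections = [[] for _ in range(num_sections)]
        self.lines_per_section = lines_per_section

    def store(self, entity_id, data):
        index = entity_id % len(self.sections)  # decode identifier -> section
        section = self.sections[index]
        if len(section) >= self.lines_per_section:
            section.pop(0)                      # replacement confined to one section
        section.append(data)
        return index
```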
  • Patent number: 6349364
    Abstract: The present invention provides for setting the block size suitably for each address space in order to accommodate differences in the extent of spatial locality between address spaces, and to suppress the generation of unnecessary replacements.
    Type: Grant
    Filed: March 15, 1999
    Date of Patent: February 19, 2002
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Koji Kai, Koji Inoue, Kazuaki Murakami
  • Publication number: 20020019913
    Abstract: A computer system having a memory system where at least some of the memory is designated as shared memory. A transaction-based bus mechanism couples to the memory system and includes a cache coherency transaction defined within its transaction set. A processor having a cache memory is coupled to the memory system through the transaction based bus mechanism. A system component coupled to the bus mechanism includes logic for specifying cache coherency policy. Logic within the system component initiates a cache transaction according to the specified cache policy on the bus mechanism. Logic within the processor responds to the initiated cache transaction by executing a cache operation specified by the cache transaction.
    Type: Application
    Filed: October 1, 1999
    Publication date: February 14, 2002
    Inventors: D. Shimizu, Andrew Jones
  • Patent number: 6347358
    Abstract: The present invention discloses a disk control unit which improves the use of a cache in a disk unit to increase concurrent access speeds. The disk control unit comprises a plurality of directors each independently controlling an I/O operation between a plurality of hosts and a disk unit, a cache memory connected to the directors and having a plurality of cache areas provided according to the configuration of the disk unit, and a plurality of cache management areas each provided for each of the cache areas for keeping track of whether or not the cache area is used by any of the directors. In addition, the disk control unit has an exclusive control unit which allows each director to reference the cache management area to place the cache area under exclusive control.
    Type: Grant
    Filed: December 15, 1999
    Date of Patent: February 12, 2002
    Assignee: NEC Corporation
    Inventor: Atsushi Kuwata