Least Recently Used (LRU) Patents (Class 711/160)
-
Patent number: 6393525
Abstract: An LRU with protection method is provided that offers substantial performance benefits over traditional LRU replacement methods by providing solutions to common problems with traditional LRU replacement. By dividing a cache entry list into a filter sublist and a reuse list, population and protection processes can be implemented to reduce associativity and capacity displacement. New cache entries are initially stored in the filter list, and the reuse list is populated with entries promoted from the filter list. Eviction from the filter list and reuse list is done by a protection process that evicts a data entry from the filter, reuse, or global cache list. Many variations of protection and eviction processes are discussed herein, along with the benefits each provides in reducing the effect of unwanted displacement problems present in traditional LRU replacement.
Type: Grant
Filed: May 18, 1999
Date of Patent: May 21, 2002
Assignee: Intel Corporation
Inventors: Christopher B. Wilkerson, Nicholas D. Wade
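The filter/reuse split described above maps naturally onto a segmented LRU cache. A minimal Python sketch, assuming a simple promote-on-reuse policy and eviction from each list's LRU end (the class and method names are illustrative, not taken from the patent):

```python
from collections import OrderedDict

class SegmentedLRUCache:
    """Sketch: new entries start in a probationary filter list; an
    entry referenced again while in the filter list is promoted to a
    protected reuse list. Each list evicts from its own LRU end."""

    def __init__(self, filter_size, reuse_size):
        self.filter_size = filter_size
        self.reuse_size = reuse_size
        self.filter = OrderedDict()   # probationary entries, LRU first
        self.reuse = OrderedDict()    # protected entries, LRU first

    def get(self, key):
        if key in self.reuse:
            self.reuse.move_to_end(key)           # refresh recency
            return self.reuse[key]
        if key in self.filter:
            value = self.filter.pop(key)          # reused: promote
            if len(self.reuse) >= self.reuse_size:
                self.reuse.popitem(last=False)    # evict reuse-list LRU
            self.reuse[key] = value
            return value
        return None

    def put(self, key, value):
        if key in self.reuse:
            self.reuse[key] = value
            self.reuse.move_to_end(key)
            return
        if key in self.filter:
            self.filter[key] = value
            self.filter.move_to_end(key)
            return
        if len(self.filter) >= self.filter_size:
            self.filter.popitem(last=False)       # evict filter-list LRU
        self.filter[key] = value
```

The point of the split is that a burst of single-use entries can only displace other probationary entries in the filter list; entries that have demonstrated reuse sit in the protected list and survive the burst.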
-
Patent number: 6389514
Abstract: The present invention provides a method and an apparatus for addressing a main memory unit in a computer system which results in improved page hit rate and reduced memory latency by only keeping open some recently used pages and speculatively closing the rest of the pages in the main memory unit. In a computer system with 64 banks and 2 CPUs, only 8 banks may be kept open and the remaining 56 banks are kept closed. Keeping only 8 banks open will not reduce the page hit frequency significantly, but will allow most accesses that are not page hits to access banks that are already closed, so that they are not slowed down by open banks. Thus, the page hit rate is increased and the miss rate is reduced.
Type: Grant
Filed: March 25, 1999
Date of Patent: May 14, 2002
Assignee: Hewlett-Packard Company
Inventor: Tomas G. Rokicki
-
Patent number: 6381677
Abstract: Disclosed is a system for caching data. After determining a sequential access of a first memory area, such as a direct access storage device (DASD), a processing unit stages a group of data sets from the first memory area to a second memory, such as cache. The processing unit processes a data access request (DAR) for data sets in the first memory area that are included in the sequential access and reads the requested data sets from the second memory area. The processing unit determines a trigger data set from a plurality of trigger data sets based on a trigger data set criteria. The processing unit then stages a next group of data sets from the first memory area to the second memory area in response to reading the determined trigger data set.
Type: Grant
Filed: August 19, 1998
Date of Patent: April 30, 2002
Assignee: International Business Machines Corporation
Inventors: Brent Cameron Beardsley, Michael Thomas Benhase, Joseph Smith Hyde, Thomas Charles Jarvis, Douglas A. Martin, Robert Louis Morton
-
Patent number: 6378056
Abstract: A method and apparatus for configuring memory devices. A disclosed bus controller includes a storage location and a control circuit. The control circuit is coupled to perform an initialization operation when a value indicating that initialization operation is stored in the storage location. The initialization operation is selected from one of a set of initialization operations that the control circuit is capable of performing.
Type: Grant
Filed: November 3, 1998
Date of Patent: April 23, 2002
Assignee: Intel Corporation
Inventors: Puthiya K. Nizar, William A. Stevens
-
Patent number: 6374258
Abstract: In a data recording and reproducing apparatus (10), according to a release instruction from a server controller (30) added to a PLAY_OPEN command for requesting preparation of reproduction of arbitrary data, management information is changed so that an area occupied by reproduced data of the data in the on-reproducing file becomes an area where new data can be recorded anytime while the file is reproduced. As a result, for example, in the case where plural data are recorded, before the reproduction of the on-reproducing file is completed, the data recording area occupied by the reproduced data is changed into an area where new data can be recorded, by efficiently utilizing recording areas in a data accumulation section (13) if an on-reproducing file exists.
Type: Grant
Filed: November 13, 1998
Date of Patent: April 16, 2002
Assignee: Sony Corporation
Inventors: Hiroyuki Fujita, Norikazu Ito, Satoshi Yoneya, Masakazu Yoshimoto, Satoshi Katsuo, Satoshi Yutani, Tomohisa Shiga, Jun Yoshikawa, Koichi Sato
-
Patent number: 6370631
Abstract: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably sits on the main CPU bus or a high speed system peripheral bus such as the PCI bus and couples to system memory. The IMC preferably uses a lossless data compression and decompression scheme. Data transfers to and from the integrated memory controller of the present invention can thus be in either of two formats, these being compressed or normal (non-compressed). The IMC also preferably includes microcode for specific decompression of particular data formats such as digital video and digital audio. Compressed data from system I/O peripherals such as the hard drive, floppy drive, or local area network (LAN) are decompressed in the IMC and stored into system memory or saved in the system memory in compressed format.
Type: Grant
Filed: February 1, 1999
Date of Patent: April 9, 2002
Assignee: Interactive Silicon, Inc.
Inventor: Thomas A. Dye
-
Patent number: 6363455
Abstract: To be able to maintain compatibility of a plurality of data even when a hazard is caused in the midst of writing the plurality of related data, a first region stores physical block numbers #00H through #03H of physical blocks constituting a data region respectively in correspondence with logical block numbers %00H through %03H. For example, when a request of writing data is made to the logical blocks %00H and %02H, the data are respectively written to, for example, physical blocks #04H and #05H which are updating blocks for updating data in physical blocks constituting the data region, and block numbers #04H and #05H of the physical blocks are stored to a second region respectively in correspondence with the logical blocks %00H and %02H.
Type: Grant
Filed: February 23, 1998
Date of Patent: March 26, 2002
Assignee: Sony Corporation
Inventors: Susumu Kusakabe, Masayuki Takada
-
Patent number: 6360296
Abstract: File control apparatus adapted to be operatively connected to a host device for controlling a plurality of memory devices includes a cache memory for storing data blocks sent to or retrieved from the memory devices by the host. A control unit controls data transfer between the memory devices and the cache memory. A continuous data information generating device is included for generating continuous data information indicating whether the data blocks from the same memory device are updated continuously. The control unit stores the continuously updated data blocks from the same memory device back to that same memory device as a single data block in accordance with a least recently used (LRU) scheme.
Type: Grant
Filed: March 31, 2000
Date of Patent: March 19, 2002
Assignee: Fujitsu Limited
Inventors: Hiromi Kubota, Hideaki Omura
-
Patent number: 6360300
Abstract: A system and method for organizing compressed data and uncompressed data in a storage system. The method and system include a compressor for compressing a data block into a compressed data block, wherein N represents a compression ratio. A storage disk includes a first disk partition having N slots for storing compressed data, and a second disk partition for storing uncompressed data. A portion of the N slots in the first partition include address pointers for pointing to locations in the second disk partition containing the uncompressed data.
Type: Grant
Filed: August 31, 1999
Date of Patent: March 19, 2002
Assignee: International Business Machines Corporation
Inventors: Brian Jeffrey Corcoran, Shanker Singh
-
Patent number: 6356914
Abstract: The set up information associated with at least some of a DVD disc's titles is stored in a DVD player's local memory. Items are chosen for storage based upon the likelihood that a title will be played. The likelihood that a title will be played is balanced against the availability of local memory for storing this information. Titles are ranked according to the likelihood they might be played, and titles of lower rank may be purged from the local memory, or title cache, set aside for this task. Six basic criteria are used to rank a title as extremely likely, highly likely, likely, or not likely to be played. A title ranked extremely likely to be played has top caching priority, one that is highly likely to be played has the second highest caching priority, and so on. Each time a title's set up information is read, the title is ranked for caching. Additionally, the state of the title cache is stored every time a user plays a DVD.
Type: Grant
Filed: May 8, 2000
Date of Patent: March 12, 2002
Assignee: Oak Technology, Inc.
Inventors: Linden A. deCarmo, Amir M. Mobini
-
Patent number: 6349372
Abstract: System and method for reducing data access latency for cache miss operations in a computer system implementing main memory compression in which the unit of compression is a memory segment. The method includes steps of providing a common memory area in main memory for storing compressed and uncompressed data segments; accessing a directory structure formed in the main memory having entries for locating both uncompressed data segments and compressed data segments for cache miss operations, each directory entry including an index for locating data segments in the main memory and further indicating status of the data segment; and checking a status indication of a data segment to be accessed for a cache miss operation, and processing either a compressed or uncompressed data segment from the common memory area according to the status.
Type: Grant
Filed: May 19, 1999
Date of Patent: February 19, 2002
Assignee: International Business Machines Corporation
Inventors: Caroline D. Benveniste, Peter A. Franaszek, John T. Robinson, Charles O. Schulz
-
Publication number: 20020013887
Abstract: A data buffer memory management method and system is provided for increasing the effectiveness and efficiency of buffer replacement selection. Hierarchical Victim Selection (HVS) identifies hot buffer pages, warm buffer pages and cold buffer pages through weights, reference counts, reassignment of levels and ageing of levels, and then explicitly avoids victimizing hot pages while favoring cold pages in the hierarchy. Unlike LRU, pages in the system are identified both in a static manner (through weights) and in a dynamic manner (through reference counts, reassignment of levels and ageing of levels). HVS provides higher concurrency by allowing pages to be victimized from different levels simultaneously. Unlike other approaches, Hierarchical Victim Selection provides the infrastructure for page cleaners to ensure that the next candidate victims will be clean pages by segregating dirty pages in hierarchical levels having multiple separate lists so that the dirty pages may be cleaned asynchronously.
Type: Application
Filed: May 8, 2001
Publication date: January 31, 2002
Applicant: International Business Machines Corporation
Inventor: Edison L. Ting
-
Publication number: 20020007433
Abstract: To be able to maintain compatibility of a plurality of data even when a hazard is caused in the midst of writing the plurality of related data, a first region stores physical block numbers #00H through #03H of physical blocks constituting a data region respectively in correspondence with logical block numbers %00H through %03H. For example, when a request of writing data is made to the logical blocks %00H and %02H, the data are respectively written to, for example, physical blocks #04H and #05H which are updating blocks for updating data in physical blocks constituting the data region, and block numbers #04H and #05H of the physical blocks are stored to a second region respectively in correspondence with the logical blocks %00H and %02H.
Type: Application
Filed: February 23, 1998
Publication date: January 17, 2002
Inventors: Susumu Kusakabe, Masayuki Takada
-
Patent number: 6338115
Abstract: A low complexity approach to DASD cache management. Large, fixed-size bands of data from the DASD, rather than variable size records or tracks, are managed, resulting in reduced memory consumption. Statistics are collected for bands of data, as well as conventional LRU information, in order to improve upon the performance of a simple LRU replacement scheme. The statistics take the form of a single counter which is credited (increased) for each read to a band and penalized (reduced) for each write to a band. Statistics and LRU information are also collected for at least half as many nonresident bands as resident bands. In an emulation mode, control information (e.g., statistics and LRU information) regarding potentially cacheable DASD data, is collected even though there is no cache memory installed. When in this mode, the control information permits a real time emulation of performance enhancements that would be achieved were cache memory added to the computer system.
Type: Grant
Filed: February 16, 1999
Date of Patent: January 8, 2002
Assignee: International Business Machines Corporation
Inventors: Robert Edward Galbraith, Carl E. Forhan, Jessica M. Gisi
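The per-band read-credit/write-penalty counter described above is straightforward to sketch in Python. The credit and penalty amounts and the saturation bounds are illustrative assumptions; the abstract states only that the counter is credited on reads and penalized on writes:

```python
class BandStats:
    """Sketch: a single counter per band, credited on reads and
    penalized on writes. Among otherwise-equal LRU candidates, the
    band with the lowest score is the preferred replacement victim."""

    def __init__(self, read_credit=1, write_penalty=1, floor=-64, ceiling=64):
        self.read_credit = read_credit
        self.write_penalty = write_penalty
        self.floor = floor              # saturation bounds (assumed)
        self.ceiling = ceiling
        self.scores = {}                # band id -> counter

    def read(self, band):
        s = self.scores.get(band, 0) + self.read_credit
        self.scores[band] = min(s, self.ceiling)

    def write(self, band):
        s = self.scores.get(band, 0) - self.write_penalty
        self.scores[band] = max(s, self.floor)

    def victim(self, candidates):
        # pick the candidate band with the lowest read/write score
        return min(candidates, key=lambda b: self.scores.get(b, 0))
```

Keeping scores for nonresident bands as well (as the abstract notes) lets the cache recognize a frequently read band and re-admit it quickly after an eviction.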
-
Patent number: 6338120
Abstract: An apparatus for encoding/decoding an associative cache set use history, and method therefor, is implemented. A five-bit signal is used to fully encode a four-way cache. A least recently used (LRU) set is encoded using a first bit pair, and a second bit pair encodes a most recently used (MRU) set. The sets having intermediate usage are encoded by a remaining single bit. The single bit has a first predetermined value when the sets having intermediate usage have an in-order relationship in accordance with a predetermined ordering of the cache sets. The single bit has a second predetermined value when the sets having intermediate usage have an out-of-order relationship.
Type: Grant
Filed: February 26, 1999
Date of Patent: January 8, 2002
Assignee: International Business Machines Corporation
Inventor: Brian Patrick Hanley
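The five-bit encoding works because a full ordering of four ways has 4! = 24 states, which fits in five bits: two bits name the LRU set, two bits name the MRU set, and the remaining two intermediate sets are determined up to order, which one bit resolves. A Python sketch (the bit layout and function names are assumptions; the abstract does not fix field positions):

```python
def encode_order(order):
    """order: list of 4 distinct set numbers, LRU first, MRU last.
    Code layout (assumed): bits 4-3 = LRU set, bits 2-1 = MRU set,
    bit 0 = 1 if the two intermediate sets appear in ascending
    (in-order) position relative to the fixed set numbering."""
    lru, mid_a, mid_b, mru = order
    in_order = 1 if mid_a < mid_b else 0
    return (lru << 3) | (mru << 1) | in_order

def decode_order(code):
    """Recover the full LRU-to-MRU ordering from a 5-bit code."""
    lru = (code >> 3) & 0b11
    mru = (code >> 1) & 0b11
    in_order = code & 1
    middles = sorted(set(range(4)) - {lru, mru})
    if not in_order:
        middles.reverse()
    return [lru, middles[0], middles[1], mru]
```

A naive encoding of four 2-bit positions would need eight bits, so this halves the per-set LRU state while still distinguishing every ordering.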
-
Patent number: 6330633
Abstract: This invention relates to an information processing method and apparatus. A memory for storing information in block units comprises a data region for storing data in block units and a first and a second region for storing plural block numbers which are numbers assigned to blocks in a data region. Data is written to a block of the data region corresponding to a block number stored in one of the first and second regions, the block number of the block to which data was written is stored in the other of the first and second regions, and the data in the one of the first and second regions is erased. In this way, there is less risk of memory corruption, and data can be read stably.
Type: Grant
Filed: July 8, 1998
Date of Patent: December 11, 2001
Assignee: Sony Corporation
Inventors: Susumu Kusakabe, Masayuki Takada
-
Patent number: 6327643
Abstract: A cache replacement algorithm improves upon a least recently used algorithm by differentiating between cache lines that have been written with those that have not been written. The replacement algorithm attempts to replace cache lines that have been previously written back to memory, and if there are no written cache lines available, then the algorithm attempts to replace cache lines that are currently on page and on bank.
Type: Grant
Filed: September 30, 1998
Date of Patent: December 4, 2001
Assignee: International Business Machines Corp.
Inventor: Kenneth William Egan
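The clean-first victim choice above can be sketched in a few lines; replacing a line that has already been written back avoids a writeback on the critical path of the miss. This sketch covers only the clean-preferred rule (the on-page/on-bank fallback is omitted, and the field names are illustrative):

```python
def choose_victim(lines):
    """lines: list of dicts with an 'age' field (higher = less
    recently used) and a 'dirty' flag. Prefer the least recently
    used clean line, i.e. one already written back to memory;
    fall back to the overall LRU line if every line is dirty."""
    clean = [l for l in lines if not l["dirty"]]
    pool = clean if clean else lines
    return max(pool, key=lambda l: l["age"])
```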
-
Patent number: 6324549
Abstract: A remote access managing means of a module manages each of an object that references an outside object and an object that is referenced from the outside by adding a reference weight to each object. In other words, the remote access managing means stores a reference weight according to the type of communication message in the object information of communication messages for dealing with outside modules. For example, an additional reference weight that is set by a reference weight managing means is stored in an execution request message to an outside object. A heap memory managing means reclaims memory regions of unnecessary objects in the heap memory regions in accordance with the reference weight that is set through the exchange of this type of messages.
Type: Grant
Filed: March 3, 1999
Date of Patent: November 27, 2001
Assignee: NEC Corporation
Inventors: Hidehito Gomi, Satoru Fujita
-
Patent number: 6321300
Abstract: A write buffer unit operates in a cached memory microprocessor system by dynamically reconfigurable timed flushing of a queue of coalescing write buffers in the unit. Each time an additional one of the coalescing write buffers is allocated, a time-out period is generated which is inversely related to the number of allocated write buffers. After one of the allocated write buffers times out by exceeding the time-out period with no write activity to the coalescing write buffer, a controller in the unit determines the least recently written to allocated write buffer, and generates control signals to flush that write buffer.
Type: Grant
Filed: May 14, 1999
Date of Patent: November 20, 2001
Assignee: Rise Technology Company
Inventors: Matthew D. Ornes, James Y. Cho
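The shrinking time-out can be sketched as follows. It assumes the time-out is simply a base period divided by the number of allocated buffers; the abstract only states that the relation is inverse, so that formula (and the names) are illustrative:

```python
class WriteBufferQueue:
    """Sketch: coalescing write buffers whose idle time-out shrinks
    as more buffers are allocated. Buffers idle past the time-out
    are flushed, least recently written first."""

    def __init__(self, base_timeout=8.0):
        self.base_timeout = base_timeout
        self.buffers = {}               # addr -> last-write timestamp

    def timeout(self):
        # inversely related to the number of allocated buffers
        return self.base_timeout / max(1, len(self.buffers))

    def write(self, addr, now):
        self.buffers[addr] = now        # allocate, or coalesce a hit

    def flush_expired(self, now):
        limit = self.timeout()
        expired = [a for a, t in self.buffers.items() if now - t > limit]
        expired.sort(key=lambda a: self.buffers[a])   # least recently written first
        for a in expired:
            del self.buffers[a]
        return expired
```

The effect is the one the patent targets: a lightly loaded unit lets buffers sit and coalesce more writes, while a heavily loaded unit drains quickly to free buffers.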
-
Publication number: 20010039605
Abstract: Disclosed is a virtual channel memory access controlling circuit for controlling accesses from a plurality of memory masters to a virtual channel memory having a plurality of channels, comprising: a channel information storing portion having a plurality of storage areas, each of the storage areas being assigned to any of the memory masters, each of the storage areas corresponding to each of the channels, each of the storage areas having a channel number and a memory address, the channel number identifying a channel, and the memory address being sent to the virtual channel memory; a detector for detecting necessity of a change of assignment of storage area between memory masters; and a changer for dynamically changing the assignment of the storage area between memory masters.
Type: Application
Filed: December 12, 2000
Publication date: November 8, 2001
Inventor: Takeshi Uematsu
-
Publication number: 20010010066
Abstract: A computer system includes an adaptive memory arbiter for prioritizing memory access requests, including a self-adjusting, programmable request-priority ranking system. The memory arbiter adapts during every arbitration cycle, reducing the priority of any request which wins memory arbitration. Thus, a memory request initially holding a low priority ranking may gradually advance in priority until that request wins memory arbitration. Such a scheme prevents lower-priority devices from becoming “memory-starved.” Because some types of memory requests (such as refresh requests and memory reads) inherently require faster memory access than other requests (such as memory writes), the adaptive memory arbiter additionally integrates a nonadjustable priority structure into the adaptive ranking system which guarantees faster service to the most urgent requests.
Type: Application
Filed: February 15, 2001
Publication date: July 26, 2001
Inventors: Kenneth T. Chin, C. Kevin Coffee, Michael J. Collins, Jerome J. Johnson, Phillip M. Jones, Robert A. Lester, Gary J. Piccirillo, Jeffrey C. Stevens
-
Patent number: 6256644
Abstract: A system is provided for controlling storing of databases in nonvolatile storages by a program. The database is composed as a set of records. In the system, a record storing reference table and data storing areas are provided. There are provided in the reference table record identification data for identifying records and storage area designation data for designating a storage area in the nonvolatile storages. The data storing area is provided in one of the nonvolatile storages for each record. In the area, a storing logic record is stored. A storing control means is provided for newly storing a record in a storage area designated by the storage area designation data in the record storing reference table. Storing address information of the record in the storage area is stored in one of the data storing areas. A record stored in a storage area is read based on the address information.
Type: Grant
Filed: May 28, 1998
Date of Patent: July 3, 2001
Inventor: Koichi Shibayama
-
Patent number: 6237060
Abstract: In general, a method and apparatus for managing available cache memory in a browser are disclosed. Any document stored in a cache memory not having associated with it a strong reference is subject to being reclaimed by a garbage collector. The most recently requested documents, however, are stored in the cache memory with strong references associated therewith, thereby precluding them from being reclaimed until such time as the strong reference is abolished. The strong reference is abolished when the document identifier associated with the document stored in the cache memory is not present in the document stack. Therefore, only the most recently requested documents remain stored in the cache memory, depending upon the depth of the document stack.
Type: Grant
Filed: April 23, 1999
Date of Patent: May 22, 2001
Assignee: Sun Microsystems, Inc.
Inventors: Matthew F. Shilts, Michael R. Allen
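Python's weakref module supports the same strong/weak split the abstract describes: the cache proper holds only weak references, so the garbage collector may reclaim any document, while a bounded stack of strong references protects the most recently requested ones. A sketch under those assumptions (the class and parameter names are illustrative):

```python
import weakref
from collections import OrderedDict

class Document:
    def __init__(self, body):
        self.body = body

class BrowserCache:
    """Sketch: weakly referenced cache plus a fixed-depth stack of
    strong references protecting the most recent documents."""

    def __init__(self, stack_depth=2):
        self.stack_depth = stack_depth
        self.cache = weakref.WeakValueDictionary()  # reclaimable entries
        self.stack = OrderedDict()                  # strong refs, MRU last

    def request(self, url, loader):
        doc = self.cache.get(url)                   # hit unless reclaimed
        if doc is None:
            doc = loader(url)
        self.cache[url] = doc
        self.stack[url] = doc                       # (re)establish strong ref
        self.stack.move_to_end(url)
        while len(self.stack) > self.stack_depth:
            self.stack.popitem(last=False)          # abolish oldest strong ref
        return doc
```

Once a document's strong reference is popped off the stack, the document lingers in the weak dictionary only until the collector reclaims it, which is exactly the behavior the patent relies on.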
-
Patent number: 6223256
Abstract: A cache memory system for a computer. Target entries for the cache memory include a class attribute. The cache may use a different replacement algorithm for each possible class attribute value. The cache may be partitioned into sections based on class attributes. Class attributes may indicate a relative likelihood of future use. Alternatively, class attributes may be used for locking. In one embodiment, each cache section is dedicated to one corresponding class. In alternative embodiments, cache classes are ranked in a hierarchy, and target entries having higher ranked attributes may be entered into cache sections corresponding to lower ranked attributes. With each of the embodiments, entries with a low likelihood of future use or low temporal locality are less likely to flush entries from the cache that have a higher likelihood of future use.
Type: Grant
Filed: July 22, 1997
Date of Patent: April 24, 2001
Assignee: Hewlett-Packard Company
Inventor: Blaine D. Gaither
-
Patent number: 6205520
Abstract: A processor is disclosed. The processor includes a decoder to decode instructions and a circuit that, in response to a decoded instruction, detects an incoming write back or write through streaming store instruction that misses a cache and allocates a buffer in write combining mode. The circuit, in response to a second decoded instruction, detects either an uncacheable speculative write combining store instruction or a second write back streaming store or write through streaming store instruction that hits the buffer and merges the second decoded instruction with the buffer.
Type: Grant
Filed: March 31, 1998
Date of Patent: March 20, 2001
Assignee: Intel Corporation
Inventors: Salvador Palanca, Vladimir Pentkovski, Steve Tsai, Subramaniam Maiyuran
-
Patent number: 6202129
Abstract: A method and system for providing cache memory management. The system comprises a main memory, a processor coupled to the main memory, and at least one cache memory coupled to the processor for caching of data. The at least one cache memory has at least two cache ways, each comprising a plurality of sets. Each of the plurality of sets has a bit which indicates whether one of the at least two cache ways contains non-temporal data. The processor accesses data from one of the main memory or the at least one cache memory.
Type: Grant
Filed: March 31, 1998
Date of Patent: March 13, 2001
Assignee: Intel Corporation
Inventors: Salvador Palanca, Niranjan L. Cooray, Angad Narang, Vladimir Pentkovski, Steve Tsai
-
Patent number: 6192450
Abstract: Data in a write cache is coalesced together prior to each destage operation. This results in higher performance by destaging a large quantity of data from the cache with each destage operation. A root item of data is located, and then a working set of data is collected by identifying additional data in the cache that will be destaged to locations in the storage device adjacent to the root item of data. The root item of data may be identified by starting at the location of the least recently accessed data in the cache, and then selecting a root item of data at a lower storage device address than the least recently accessed data, or may be chosen from a larger than average group of data items that were stored together into the cache. To speed execution, data items are added to a working set by, where possible, scanning a queue of data items kept in access order to locate data items at adjacent storage locations.
Type: Grant
Filed: February 3, 1998
Date of Patent: February 20, 2001
Assignee: International Business Machines Corporation
Inventors: Ellen Marie Bauman, Robert Edward Galbraith, Mark A. Johnson
-
Patent number: 6173381
Abstract: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably sits on the main CPU bus or a high speed system peripheral bus such as the PCI bus and couples to system memory. The IMC preferably uses a lossless data compression and decompression scheme. Data transfers to and from the integrated memory controller of the present invention can thus be in either of two formats, these being compressed or normal (non-compressed). The IMC also preferably includes microcode for specific decompression of particular data formats such as digital video and digital audio. Compressed data from system I/O peripherals such as the hard drive, floppy drive, or local area network (LAN) are decompressed in the IMC and stored into system memory or saved in the system memory in compressed format.
Type: Grant
Filed: August 8, 1997
Date of Patent: January 9, 2001
Assignee: Interactive Silicon, Inc.
Inventor: Thomas A. Dye
-
Patent number: 6170047
Abstract: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably sits on the main CPU bus or a high-speed system peripheral bus such as the PCI bus and couples to system memory. The IMC preferably uses a lossless data compression and decompression scheme. Data transfers to and from the integrated memory controller of the present invention can thus be in either of two formats, these being compressed or normal (non-compressed). The IMC also preferably includes microcode for specific decompression of particular data formats such as digital video and digital audio. Compressed data from system I/O peripherals such as the hard drive, floppy drive, or local area network (LAN) are decompressed in the IMC and stored into system memory or saved in the system memory in compressed format.
Type: Grant
Filed: December 14, 1999
Date of Patent: January 2, 2001
Assignee: Interactive Silicon, Inc.
Inventor: Thomas A. Dye
-
Patent number: 6148374
Abstract: An expandable-set tag cache circuit for use with a data cache memory comprises a tag memory divided into a first set and a second set for storing, under a single address location, first and second tag fields representative of first and second data, respectively. The tag memory also stores first and second signals representative of which of the sets is the least recently used. A comparator is responsive to a tag field of an address representative of requested data as well as to a first tag field output from the tag memory for producing an output signal indicative of a match therebetween. A second comparator is responsive to the same tag field of the address and to a second tag field output from the tag memory for producing an output signal indicative of a match therebetween. A first logic gate is responsive to the first and second comparators for producing an output signal indicative of the availability of the requested data in the data cache memory.
Type: Grant
Filed: July 2, 1998
Date of Patent: November 14, 2000
Assignee: Micron Technology, Inc.
Inventor: J. Thomas Pawlowski
-
Patent number: 6145057
Abstract: A method and system for managing a cache including a plurality of entries are described. According to the method, first and second cache operation requests are received. In response to receipt of the second cache operation request, an entry among the plurality of entries is identified for replacement. In response to a conflict between the first and the second cache operation requests, an entry among the plurality of entries other than the identified entry is replaced. In one embodiment, the alternative entry is replaced if the first cache operation request specifies the entry identified for replacement in response to the second cache operation request.
Type: Grant
Filed: April 14, 1997
Date of Patent: November 7, 2000
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, John Steven Dodson
-
Patent number: 6141731
Abstract: Disclosed is a cache management scheme using multiple data structures. First and second data structures, such as linked lists, indicate data entries in a cache. Each data structure has a most recently used (MRU) entry, a least recently used (LRU) entry, and a time value associated with each data entry indicating a time the data entry was indicated as added to the MRU entry of the data structure. A processing unit receives a new data entry. In response, the processing unit processes the first and second data structures to determine a LRU data entry in each data structure and selects from the determined LRU data entries the LRU data entry that is the least recently used. The processing unit then demotes the selected LRU data entry from the cache and the data structure including the selected data entry. The processing unit adds the new data entry to the cache and indicates the new data entry as located at the MRU entry of one of the first and second data structures.
Type: Grant
Filed: August 19, 1998
Date of Patent: October 31, 2000
Assignee: International Business Machines Corporation
Inventors: Brent Cameron Beardsley, Michael Thomas Benhase, Douglas A. Martin, Robert Louis Morton, Mark A. Reid
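The two-list scheme with per-entry timestamps can be sketched as follows. The policy for choosing which list receives a new entry is left to the caller, since the abstract does not specify it, and the names are illustrative:

```python
import itertools
from collections import OrderedDict

class DualListCache:
    """Sketch: entries live in one of two LRU-ordered lists; each
    entry carries the logical time it was made MRU. On insertion at
    capacity, the LRU entry of each list is examined and the older
    of the two is demoted from the cache."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = itertools.count()               # logical timestamps
        self.lists = [OrderedDict(), OrderedDict()]  # key -> timestamp

    def _demote_oldest(self):
        # compare the LRU (first) entry of each non-empty list
        lrus = [(next(iter(l.values())), i, next(iter(l)))
                for i, l in enumerate(self.lists) if l]
        _, i, key = min(lrus)       # least recently used across both lists
        del self.lists[i][key]
        return key

    def add(self, key, list_index):
        total = sum(len(l) for l in self.lists)
        demoted = self._demote_oldest() if total >= self.capacity else None
        self.lists[list_index][key] = next(self.clock)   # MRU position
        return demoted
```

The timestamps are what make the scheme work: a plain per-list LRU position cannot be compared across lists, but the recorded MRU times give a single global recency order.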
-
Patent number: 6138211
Abstract: In a high performance microprocessor adopting a superscalar technique, necessarily using a cache memory, TLB, BTB, etc., and being implemented as 4-way set associative, there is provided an LRU memory capable of performing a pseudo replacement policy and supporting the multiple ports required for operating various blocks included in the microprocessor. The LRU memory comprises an address decoding block for decoding an INDEX_ADDRESS to produce a READ_WORD and a WRITE_WORD in response to a first phase and a second phase of the CLOCK signal, respectively; an LRU storing block; a way hit decoding block for decoding a WAY_HIT to produce a MODIFY CONTROL signal in response to the second phase of the CLOCK signal; and a data modifying block for latching a READ_DATA from the LRU storing block to produce a DETECTED DATA and modifying it in response to the MODIFY CONTROL signal so as to produce a WRITE_…
Type: Grant
Filed: June 30, 1998
Date of Patent: October 24, 2000
Assignee: Hyundai Electronics Industries Co., Ltd.
Inventors: Mun Weon Ahn, Hoai Sig Kang
-
Patent number: 6128713
Abstract: An application programming interface (API) enables application programs in a multitasking operating environment to control the allocation of physical memory in a virtual memory system. One API function enables applications to designate a soft page lock for code and data. The operating system ensures that the designated code and data is in physical memory when the application has the focus. When the application loses the focus, the pages associated with the code or data are released. When the application regains the focus, the operating system re-loads the pages into physical memory before the application begins to execute. The operating system is allowed to override the soft page lock where necessary. Another API enables applications to designate code or data that should have high priority access to physical memory, without using a lock.
Type: Grant
Filed: September 24, 1997
Date of Patent: October 3, 2000
Assignee: Microsoft Corporation
Inventors: Craig G. Eisler, G. Eric Engstrom
-
Patent number: 6125433Abstract: An optimized translation lookaside buffer (TLB) utilizes a least-recently-used algorithm for determining the replacement of virtual-to-physical memory translation entries. The TLB is faster and requires less chip area for fabrication. In addition to speed and size, the TLB is also optimized since many characteristics of the TLB may be changed without significantly changing the overall layout of the TLB. A TLB generating program may thus be used as a design aid. The translation lookaside buffer includes a level decoding circuit which allows masking of a variable number of the bits of a virtual address when it is compared to values stored within the TLB. The masking technique may be used for indicating a TLB hit or miss of a virtual address to be translated, and may also be used for invalidating selected entries within the TLB.Type: GrantFiled: May 17, 1995Date of Patent: September 26, 2000Assignee: LSI Logic CorporationInventors: Jens Horstmann, Yoon Kim
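The masking idea in this abstract, comparing a virtual address against stored entries while ignoring a variable number of low-order bits, can be sketched as a plain lookup function; the tuple layout and field names below are illustrative, not taken from the patent:

```python
def tlb_lookup(entries, vaddr):
    """Each entry is (tag, level_mask, pframe).  level_mask covers the
    low-order virtual-address bits that are ignored during comparison,
    so a single compare path supports multiple page sizes."""
    for tag, level_mask, pframe in entries:
        # Masked compare: only the bits outside level_mask must match.
        if (vaddr & ~level_mask) == (tag & ~level_mask):
            return pframe
    return None   # TLB miss
```

A small mask (e.g. 0xFFF) behaves like a 4 KB page entry, while a larger mask makes the same entry cover a bigger region, which is the variable-level decoding the abstract describes.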
-
Patent number: 6119121Abstract: A method for maintaining login service parameters includes a step of allocating space for and storing a login service parameter portion of a logged in port. A login service parameter of a logged in port is then compared with stored login service parameter structures. If the login service parameter of the logged in port, except for a login service parameter portion thereof, is identical with one of the stored login service parameters, a step of adding a first pointer to that stored login service parameters structure into the stored login service parameter portion structure is carried out. A new login service parameter portion structure is allocated and the process repeated, thereby creating a linked list of login service parameter portion structures, each login service parameter portion structure pointing to both the stored login service parameter structure and to a next login service parameter portion structure.Type: GrantFiled: November 20, 1998Date of Patent: September 12, 2000Assignee: LSI Logic CorporationInventor: Jieming Zhu
-
Patent number: 6105115Abstract: An NRU algorithm is used to track lines in each region of a memory array such that the corresponding NRU bits are reset on a region-by-region basis. That is, the NRU bits of one region are reset when all of the bits in that region indicate that their corresponding lines have recently been used. Similarly, the NRU bits of another region are reset when all of the bits in that region indicate that their corresponding lines have recently been used. Resetting the NRU bits in one region, however, does not affect the NRU bits in another region. An LRU algorithm is used to track the regions of the array such that each region has a single corresponding entry in an LRU table. That is, all the lines in a single region collectively correspond to a single LRU entry. A region is elevated to most-recently-used status in the LRU table once the NRU bits of the region are reset.Type: GrantFiled: December 31, 1997Date of Patent: August 15, 2000Assignee: Intel CorporationInventors: Gregory S. Mathews, Dean A. Mulla
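The hybrid scheme in this abstract, per-line NRU bits reset region-locally plus an LRU ordering over whole regions, can be sketched directly; the class and method names are illustrative:

```python
class RegionTracker:
    """Sketch of the abstract's scheme: NRU bits per line within each
    region, reset on a region-by-region basis, plus an LRU list whose
    entries are whole regions."""
    def __init__(self, n_regions, lines_per_region):
        self.nru = [[0] * lines_per_region for _ in range(n_regions)]
        self.lru = list(range(n_regions))   # front = least recently used region

    def touch(self, region, line):
        bits = self.nru[region]
        bits[line] = 1                          # mark line recently used
        if all(bits):                           # every line in region used:
            self.nru[region] = [0] * len(bits)  # region-local reset only
            self.lru.remove(region)             # region becomes most recently
            self.lru.append(region)             # used in the LRU table

    def victim(self):
        region = self.lru[0]                    # least recently used region
        line = self.nru[region].index(0)        # any not-recently-used line
        return region, line
```

Note that resetting one region's bits leaves every other region's bits untouched, matching the abstract's key property.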
-
Patent number: 6078998Abstract: A single queue is utilized for scheduling of prioritized requests having specific deadlines in which to be serviced. New requests are initially inserted into the single queue based upon optimal SCAN order. Once the new request is inserted, the deadlines of all the requests in the queue are checked in order to ensure each request's deadline is met. In the event a deadline violation is identified, the queue is reorganized by identifying the lowest-priority request currently to be processed prior to the request with the deadline violation. If more than one request with the lowest priority exists, the lowest-priority request with the greatest deadline slack is selected. Ultimately, the selected request is moved to the tail of the queue, or removed from the queue and considered lost if its deadline is violated with a queue tail placement. This process is repeated until the queue is in a state with no deadline violations.Type: GrantFiled: February 11, 1997Date of Patent: June 20, 2000Assignee: Matsushita Electric Industrial Co., Ltd.Inventors: Ibrahim Mostafa Kamel, Thirumale Niranjan, Shahram Ghandeharizadeh
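The insert-then-repair loop described above can be sketched under simplifying assumptions: each request is a `(track, priority, deadline)` tuple, a lower number means lower priority, and every request takes one `service_time` unit to serve from the head. These representations, and the function name, are illustrative rather than the patent's:

```python
def insert_with_deadlines(queue, new_req, lost, service_time=1):
    """Insert new_req in SCAN (ascending track) order, then repair
    deadline violations per the scheme described in the abstract."""
    pos = len(queue)                       # find SCAN-order insertion point
    for i, r in enumerate(queue):
        if new_req[0] < r[0]:
            pos = i
            break
    queue.insert(pos, new_req)

    while True:                            # repair until no violation
        t, violator = 0, None
        for i, (_, _, deadline) in enumerate(queue):
            t += service_time
            if t > deadline:
                violator = i               # first deadline violation
                break
        if violator is None:
            return queue
        # Lowest-priority request at or before the violation;
        # ties broken by greatest deadline slack.
        victim_i = min(range(violator + 1),
                       key=lambda i: (queue[i][1], -queue[i][2]))
        victim = queue.pop(victim_i)
        if (len(queue) + 1) * service_time > victim[2]:
            lost.append(victim)            # misses even at the tail: lost
        else:
            queue.append(victim)           # otherwise retry at the tail
```

Each repair iteration either fixes the first violation or removes a request, so the loop terminates.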
-
Patent number: 6078995Abstract: Two techniques are provided for implementing a least recently used (LRU) replacement algorithm for multi-way associative caches. A first method uses a special encoding of the LRU list to allow write only update of the list. The LRU list need only be read when a miss occurs and a replacement is needed. In a second method, the LRU list is integrated into the tags for each "way" of the multi-way associative cache. Updating of the list is done by writing only the "way" of the cache that hits.Type: GrantFiled: December 26, 1996Date of Patent: June 20, 2000Assignee: Micro Magic, Inc.Inventors: Gary Bewick, John M. Golenbieski
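The abstract's exact encoding is not given, but a well-known scheme with the same write-only-update property is the reference-matrix (upper-triangular bit) LRU: a hit only writes one row and one column, and the matrix is read only when a victim is needed. The sketch below illustrates that standard technique, not necessarily the patent's encoding:

```python
class MatrixLRU:
    """Bit-matrix LRU for an n-way set.  bit[i][j] == 1 means way i
    was used more recently than way j.  A hit on way k only WRITES
    (row k set, column k cleared); no read-modify-write of the list
    is needed until a miss requires a replacement."""
    def __init__(self, n):
        self.n = n
        self.bit = [[0] * n for _ in range(n)]

    def touch(self, k):
        for j in range(self.n):
            self.bit[k][j] = 1   # k is now more recent than every way...
        for i in range(self.n):
            self.bit[i][k] = 0   # ...and no way is more recent than k
        # (the diagonal entry ends up 0 and is ignored)

    def victim(self):
        # The LRU way is the one whose off-diagonal row is all zero.
        for i in range(self.n):
            if all(self.bit[i][j] == 0 for j in range(self.n) if j != i):
                return i
```

For an n-way set this costs n(n-1)/2 meaningful bits but avoids the read-update-write cycle of a conventional LRU stack on every hit, which is the benefit the abstract claims.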
-
Patent number: 6070225Abstract: Access can be optimized to data strings hierarchically organized on a single disk drive where the data strings are address map defined and recorded in bands of contiguous tracks on a frequency usage basis. The bands are arranged such that each of the most frequently used strings are out-of-phase recorded several times on a counterpart track where the band is located toward the outer disk diameter. The least frequently used strings and sequential data strings are stored elsewhere on the same or other surfaces. A group of three or more bands provides for a more refined partitioning of the data strings on a frequency of usage or a recency of usage basis. The read/write transducer has its idle position over the outer diameter.Type: GrantFiled: June 1, 1998Date of Patent: May 30, 2000Assignee: International Business Machines CorporationInventors: Wayne Cheung, Mohammed Amine Hajji
-
Patent number: 6067608Abstract: The main storage of a system includes a virtual memory space containing a plurality of virtual frame buffers for storing information transferred from disk storage shared by a number of virtual processes being executed by the system. An associated buffer table and aging mechanism includes a buffer table storing a plurality of buffer table entries associated with the corresponding number of virtual buffers used for controlling access thereto and an age table containing entries associated with the buffer table entries containing forward and backward age pointers linked together defining the relative aging of the virtual frame buffers from the most recently used to least recently used. Each buffer table entry has a frequency reference counter which maintains a reference count defining the number of times that its associated virtual buffer has been uniquely accessed by the virtual processes.Type: GrantFiled: March 27, 1998Date of Patent: May 23, 2000Assignee: Bull HN Information Systems Inc.Inventor: Ron B. Perry
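The age table described above, forward/backward pointers ordering buffers from most to least recently used, plus a per-buffer reference counter, can be sketched compactly; here Python's `OrderedDict` stands in for the pointer chain, and the class name is illustrative:

```python
from collections import OrderedDict

class BufferAging:
    """Sketch of the buffer table and aging mechanism: an MRU-to-LRU
    ordering (the OrderedDict stands in for the forward/backward age
    pointers) plus a per-buffer reference count."""
    def __init__(self):
        self.age = OrderedDict()   # first key = most recently used

    def access(self, buf):
        # Count each access and promote the buffer to MRU position.
        self.age[buf] = self.age.get(buf, 0) + 1
        self.age.move_to_end(buf, last=False)

    def least_recently_used(self):
        return next(reversed(self.age))   # last key = LRU buffer
```

Keeping both recency (the ordering) and frequency (the counter) lets a replacement policy weigh how often a buffer was uniquely accessed, not just how recently.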
-
Patent number: 6065006Abstract: The setup information associated with at least some of a DVD disc's titles is stored in a DVD player's local memory. Items are chosen for storage based upon the likelihood that a title will be played. The likelihood that a title will be played is balanced against the availability of local memory for storing this information. Titles are ranked according to the likelihood they might be played, and titles of lower rank may be purged from the local memory, or title cache, set aside for this task. Six basic criteria are used to rank a title as extremely likely, highly likely, likely, or not likely to be played. A title ranked extremely likely to be played has top caching priority, one that is highly likely to be played has the second highest caching priority, and so on. Each time a title's setup information is read, the title is ranked for caching. Additionally, the state of the title cache is stored every time a user plays a DVD.Type: GrantFiled: February 5, 1998Date of Patent: May 16, 2000Assignee: Oak Technology, Inc.Inventors: Linden A. deCarmo, Amir M. Mobini
-
Patent number: 6055612Abstract: An incremental garbage collector which permits a memory allocator's decommit mechanism to operate while the garbage collector is detecting memory that a program being executed is certainly not using. The garbage collector includes a decommit barrier which prevents the garbage collector from referencing memory that the allocator has decommitted from the address space of the process on which the program is executing. In mark-sweep incremental garbage collectors, the decommit barrier may be implemented in two ways: by means of a table which the allocator marks whenever it determines that a portion of memory is subject to being decommitted from the process's address space and which the garbage collector examines before scanning the portion and by means of a table which the garbage collector marks when it finds that a portion of memory must be scanned and which the allocator examines before decommitting the portion.Type: GrantFiled: July 11, 1997Date of Patent: April 25, 2000Assignee: Geodesic Systems, Inc.Inventors: Michael Spertus, Gustavo Rodriguez-Rivera, Charles Fitterman
-
Patent number: 6032233Abstract: A set of storage devices together with a method for storing data to the storage devices and retrieving data from the storage devices is presented. The set of storage devices provides the function of a multi-writeport cell through the use of a set of single-writeport cells. The storage devices allow for multiple write accesses. Information contained in the set of storage devices is represented by all of the devices together. The stored information may be retrieved via a read operation which accesses a subset of the set of storage devices. A write operation is a staged operation: First, the contents of all of the storage devices which are not to be modified are read. Next, the values that are to be written to a subset B of the set of storage devices are calculated in a way that the contents and the values of subset B together represent the desired result.Type: GrantFiled: July 1, 1997Date of Patent: February 29, 2000Assignee: International Business Machines CorporationInventors: Peter Loffler, Erwin Pfeffer, Thomas Pfluger, Hans-Werner Tast
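The abstract does not name the combining function, but XOR is the standard choice with exactly this property: the stored word is the XOR of all cells, and a staged write reads the unmodified cells and computes the one value that makes the combination equal the desired result. The sketch below illustrates that technique under the XOR assumption:

```python
def read_value(cells):
    """The stored word is represented by all cells together:
    here, as their XOR."""
    v = 0
    for c in cells:
        v ^= c
    return v

def staged_write(cells, b_index, desired):
    """Staged write modifying only the cell at b_index: first read
    the contents of every cell NOT being modified, then compute the
    value that makes the XOR of all cells equal `desired`."""
    others = 0
    for i, c in enumerate(cells):
        if i != b_index:
            others ^= c
    cells[b_index] = desired ^ others
    return cells
```

Because each write touches only its own cell, several writers can update disjoint cells of single-writeport storage while the read path still recovers a consistent word.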
-
Patent number: 6026471Abstract: According to the present invention, an anticipating cache memory loader is provided to "pre-load" the cache with the data and instructions most likely to be needed by the CPU once the currently executing task is completed or interrupted. The data and instructions most likely to be needed after the currently executing task is completed or interrupted are the same data and instructions that were loaded into the cache at the time the next scheduled task was last preempted or interrupted. By creating and storing an index to the contents of the cache for various tasks at the point in time the tasks are interrupted, the data and instructions previously swapped out of the cache can be retrieved from main memory and restored to the cache when needed. By using available bandwidth to pre-load the cache for the next scheduled task, the CPU can begin processing the next scheduled task more quickly and efficiently than if the present invention were not utilized.Type: GrantFiled: November 19, 1996Date of Patent: February 15, 2000Assignee: International Business Machines CorporationInventors: Kenneth Joseph Goodnow, Clarence Rosser Ogilvie, Wilbur David Pricer, Sebastian Theodore Ventrone
-
Patent number: 6023747Abstract: A method and system for managing a cache including a plurality of entries are described. According to the method, first and second cache operation requests are received at the cache. In response to receipt of the first cache operation request, which specifies a particular entry among the plurality of entries, a single access of a coherency state associated with the particular entry is performed. Thereafter, in response to receipt of the second cache operation request, a determination is made whether servicing the second cache operation request requires replacement of one of the plurality of entries. In response to a determination that servicing of the second cache operation request requires replacement of one of the plurality of entries, an entry is identified for replacement. If the identified entry is the same as the particular entry specified by the first cache operation request, the identified entry is replaced only after servicing the first operation request.Type: GrantFiled: December 17, 1997Date of Patent: February 8, 2000Assignee: International Business Machines CorporationInventor: John Steven Dodson
-
Patent number: 6021470Abstract: A method for selectively caching data in a computer network. Initially, data objects which are anticipated as being accessed only once or seldom accessed are designated as being exempt from being cached. When a read request is generated, the cache controller reads the requested data object from the cache memory if it currently resides in the cache memory. However, if the requested data object cannot be found in the cache memory, it is read from a mass storage device. Thereupon, the cache controller determines whether the requested data object is to be cached or is exempt from being cached. If the data object is exempt from being cached, it is loaded directly into a local memory and is not stored in the cache. This provides improved cache utilization because only objects that are used multiple times are entered in the cache. Furthermore, processing overhead is minimized by reducing unnecessary cache insertion and purging operations.Type: GrantFiled: March 17, 1997Date of Patent: February 1, 2000Assignee: Oracle CorporationInventors: Richard Frank, Gopalan Arun, Richard Anderson, Rabah Mediouni, Stephen Klein
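The exemption logic above reduces to a small read path: hit the cache if possible, otherwise read backing storage and insert only non-exempt objects. The sketch below is illustrative (a dict stands in for the mass storage device, and the class name is an assumption):

```python
class SelectiveCache:
    """Read path where objects designated exempt bypass cache
    insertion entirely, avoiding insert/purge overhead for data
    expected to be accessed only once."""
    def __init__(self, backing, exempt):
        self.backing = backing      # stands in for the mass storage device
        self.exempt = set(exempt)   # object keys designated cache-exempt
        self.cache = {}

    def read(self, key):
        if key in self.cache:
            return self.cache[key]      # cache hit
        value = self.backing[key]       # read from mass storage
        if key not in self.exempt:
            self.cache[key] = value     # only cacheable objects enter
        return value                    # exempt objects go straight to the caller
```

The benefit is exactly the abstract's claim: single-use objects never displace entries that will be reused.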
-
Patent number: 5983313Abstract: The method and apparatus of the current invention relates to an intelligent cache management system for servicing a main memory and a cache. The cache resources are allocated to segments of main memory rows based on a simple or complex allocation process. The complex allocation performs a predictive function, allocating scarce resources based on the probability of future use. The apparatus comprises a main memory coupled by a steering unit to a cache. The steering unit controls where in cache a given main memory row segment will be placed. The operation of the steering unit is controlled by an intelligent cache allocation unit. The unit allocates to new memory access requests the cache locations that are least frequently utilized. Since a given row segment may be placed anywhere in a cache row, the allocation unit performs the additional function of adjusting the column portion of a memory access request to compensate for the placement of the requested segment in the cache.Type: GrantFiled: April 10, 1996Date of Patent: November 9, 1999Assignee: Ramtron International CorporationInventors: Doyle James Heisler, James Dean Joseph, Dion Nickolas Heisler
-
Patent number: 5974512Abstract: A system for saving contents of a plurality of registers into a memory. The system has a bit sequence, wherein the value of each individual bit of the bit sequence is set to indicate a modification status of a corresponding register; and control means for saving contents of each of the registers indicated to have been modified at a predetermined address of the memory and for revising the predetermined address. In a preferred implementation, the system saves contents of a plurality of registers into a first area of a memory and restores contents of the plurality of registers with contents previously saved in a second area of the memory.Type: GrantFiled: February 7, 1997Date of Patent: October 26, 1999Assignee: NEC CorporationInventor: Masakazu Chiba
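The save path described above, walking a modification bitmap and storing only the dirty registers while advancing the address, can be sketched in a few lines; the function name and dict-as-memory representation are illustrative:

```python
def save_modified(registers, modified_bits, memory, base):
    """Store only the registers whose modification bit is set, at
    consecutive memory addresses starting from `base`; the address
    is revised after each save, and the final address is returned."""
    addr = base
    for i, value in enumerate(registers):
        if modified_bits & (1 << i):   # bit i: register i was modified
            memory[addr] = value
            addr += 1                  # revise the predetermined address
    return addr
```

Skipping unmodified registers is the whole point: a context save touches memory only in proportion to how many registers actually changed.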
-
Patent number: 5956744Abstract: A multilevel hierarchical least recently used cache replacement priority in a digital data processing system including plural memories, each memory connected to said system bus for memory access, a memory address generator generating addresses for read access to a corresponding one of the memories, and a memory cache having a plurality of cache entries, each cache entry including a range of addresses and a predetermined set of cache words. During each memory read the comparator compares the generated address with the address range of each cache entry. If there is a match, then the cache supplies a cache word corresponding to the least significant bits of the generated address from the matching cache entry. If there is no such match, the generated address is supplied to the memories and a set of words is recalled corresponding to the generated address. This set of words replaces a least recently used prior stored memory cache entry having the lowest priority level.Type: GrantFiled: September 6, 1996Date of Patent: September 21, 1999Assignee: Texas Instruments IncorporatedInventors: Iain Robertson, Karl M. Guttag, Eric R. Hansen