Interruptible cache loading to assure immediate data access from cache

- IBM

A system for interrupting loading of data into a high speed memory device from main storage when a processor requests cache access. A high speed cache is connected to main storage for storing at least a subset of the data residing therein, and the cache can be directly accessed by a processor. In a preferred embodiment, a buffering device is connected to main storage and to the cache for buffering data to be loaded therein. The data buffer is adapted to receive data from main storage continuously and is adapted to transfer the data to the cache continuously unless the cache is being accessed by the processor.

Description
BACKGROUND OF THE INVENTION

The present invention relates to a data processing system having a main memory and a high speed memory and, more particularly, to an improved memory access management mechanism for controlling data transfer therebetween.

In the field of data processing, sophisticated high speed computers often incorporate large memories or data storage devices. While the speed of the engines or processors within such computer systems has consistently increased over the years, so too have computer applications continued to demand ever greater speeds.

Among the many variables to be considered in an attempt to increase the performance of data processing systems, two considerations are the speed of the system processor and the speed with which data can be transferred between the main storage and the processor. In general, when two or more logic devices are incorporated in a computer system, one of the devices operates at a slower rate of speed than do the remaining devices. Overall system performance is of course dependent upon the speed of the slowest logical device.

The speed of a memory device is inversely proportional to the time required to access data stored therein. As sophisticated computer systems develop, memory storage capacity often increases. Although the operating speed of the smallest components may increase, overall system performance may, in fact, degrade when memory capacity is extremely large.

Historically, it was common for a processor to communicate with main storage by means of individual connections thereto. The great increase in processing power provided by modern processors, however, resulted in a prodigious amount of data constantly being requested by the processor, exceeding the capacity of the main storage to transfer data to the processor at optimal rates. The size of the memory required also increased at a faster rate than processor performance improved. It would have been uneconomical to continue building nonvolatile memories of ever increasing size and speed.

An approach to maximizing performance of a computer system was to develop a temporary memory storage mechanism called a cache. The cache is a relatively high speed memory that tends to be more expensive than conventional data storage devices.

The cache is a limited storage capacity memory that is usually local to the processor and that contains a time-varying subset of the contents of main storage. This subset of data stored in the cache is that data that was recently used by the processor.

The purpose of a cache memory is to reduce the cost of a system while minimally affecting the average effective access time for a memory reference. A very high proportion of memory reads can be satisfied out of the high speed cache.

The cache contains a relatively small high speed buffer and control logic situated between two logical devices, such as a processor and main storage. The cache matches the high speed of one of the devices (the processor) to the relatively low speed of the other device (the main storage).

The data most often used is temporarily stored in the high speed buffer. The most recent information requested by one logical device from another logical device is stored in the cache memory simultaneously with its transfer to the first device. Subsequent requests for such information result in the transfer of data directly from the cache to the first device without need for accessing the second device.

When a processor, for example, requests data, a cache first searches its buffer. If the data is stored in the cache, a so-called hit occurs. The data is returned in one or two cycles. Often, of course, the data sought is not stored in the cache. Consequently, a so-called miss occurs and the cache must retrieve the data from main storage.
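For illustration, the hit/miss search just described can be sketched in C. This is a minimal sketch assuming a direct-mapped 16 K-byte cache with 128-byte lines; the cache disclosed below is partitioned into associativity classes, so the sizes, structure, and names here are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_BYTES 128            /* assumed bytes per cache line     */
#define NUM_LINES  128            /* 16 K-byte cache / 128-byte lines */

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct cache_line cache[NUM_LINES];

static uint32_t addr_tag(uint32_t a)    { return a / (LINE_BYTES * NUM_LINES); }
static uint32_t addr_index(uint32_t a)  { return (a / LINE_BYTES) % NUM_LINES; }
static uint32_t addr_offset(uint32_t a) { return a % LINE_BYTES; }

/* Returns true on a hit and copies the requested byte; on a miss the
 * caller would start a line fetch (inpage) from main storage. */
bool cache_read(uint32_t addr, uint8_t *out)
{
    struct cache_line *line = &cache[addr_index(addr)];
    if (line->valid && line->tag == addr_tag(addr)) {  /* directory compare */
        *out = line->data[addr_offset(addr)];          /* hit: one or two cycles */
        return true;
    }
    return false;                                      /* miss: go to main storage */
}
```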

Caches derive their performance from the principle of locality. According to this principle, over short periods of time processor memory references tend to be clustered in both time and space. Data currently in use is likely to be in use again in the near future. Similarly, data that will be in use in the near future tends to be located near data currently in use. The degree to which systems exhibit locality determines the benefits of the cache. A cache can contain a small fraction of the data stored in main storage, yet still have hit rates that are extremely high under normal system loads.

A main storage line fetch occurs when the cache accesses data from main storage. A line castout occurs when a convenient block of data, called a line or cache line, is returned to main storage from the cache after modification to make room for a new line of data. The line of data is the unit which is moved between the cache and main storage and is typically 4 to 16 times longer than the width of the bus between the cache and main storage. This disparity in size results in multiple transfers of data between memory and cache to complete a single line transfer operation.
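To make the arithmetic concrete, assume an 8-byte memory bus and a 64-byte cache line (8 times the bus width, within the 4-to-16 range noted above); a single line transfer then occupies eight bus cycles. A minimal C sketch under those assumed sizes:

```c
#include <stdint.h>
#include <string.h>

enum { BUS_BYTES = 8, LINE_BYTES = 64 };   /* assumed: line = 8x bus width */

/* A line fetch moves LINE_BYTES / BUS_BYTES = 8 bus-wide chunks,
 * one chunk per memory-bus cycle. */
void line_fetch(uint8_t dst_line[LINE_BYTES],
                const uint8_t *main_storage, uint32_t line_addr)
{
    for (int xfer = 0; xfer < LINE_BYTES / BUS_BYTES; xfer++)
        memcpy(dst_line + xfer * BUS_BYTES,
               main_storage + line_addr + xfer * BUS_BYTES,
               BUS_BYTES);
}
```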

The loading of the cache device is sometimes called inpaging. Inpaging of data from main storage to the cache may take an appreciable amount of time, depending upon the amount of data that is transferred. Often particular data within a line which the processor requests is fetched from main storage first and passed to the processor so that it can resume instruction processing. The remainder of the line is inpaged into the cache immediately afterward. If, during the inpaging operation, a processor requires access to data in the cache, conventionally the processor has been required to wait until the line transfer operation from main storage to the cache was completed.
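The "requested data first" ordering described above is commonly called critical-word-first. The sketch below assumes the eight-transfer line of the previous example and a wrap-around order for the remaining transfers; the wrap-around order is an illustrative assumption, not a detail specified herein.

```c
enum { TRANSFERS_PER_LINE = 8 };   /* assumed, as in the previous sketch */

/* Fill 'order' with the bus-transfer sequence for a line fetch that
 * returns the requested chunk first, then wraps around the line;
 * e.g. requested_xfer = 3 yields 3,4,5,6,7,0,1,2. */
void inpage_order(int requested_xfer, int order[TRANSFERS_PER_LINE])
{
    for (int i = 0; i < TRANSFERS_PER_LINE; i++)
        order[i] = (requested_xfer + i) % TRANSFERS_PER_LINE;
}
```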

U.S. Pat. No. 4,317,168 issued to Messina et al discloses a cache organization that enables cache functions to overlap. The main storage has two lines: data bus-out and data bus-in, each transferring a double word in one cycle. Both buses may transfer respective double words in opposite directions in the same cycle. Moreover, the cache has a quadword write register and a quadword read register, a quadword meaning two double words on a quadword address boundary. During a line fetch of sixteen double words, the first double word or the pair of double words is loaded into the quadword write register. Thereafter, during the line fetch the even and odd double words are formed into quadwords as received from the bus-out and the quadwords are written into the cache on alternate cycles. If a line castout is required from the same or a different location in the cache, the castout can proceed during the alternate non-write cycles of any line fetch. Any cache bypass to the processor during the line fetch can overlap the line fetch and the line castout. Although processor accesses are permitted in the aforementioned system, they receive lower priority than do memory transfers.

U.S. Pat. No. 4,169,284 issued to Hogan et al teaches concurrent access to a cache by main storage and a processor by means of a cache control which provides two cache access timing cycles during each processor storage request cycle. No alternatively accessible modules, buffering, delay or interruption is provided for main storage line transfers to the cache.

U.S. Pat. No. 4,371,929 issued to Brann et al teaches a controllable cache store interface to a shared disk memory employing a plurality of storage partitions whose access is interleaved in a time domain multiplexed manner. A common bus is provided with the shared disk to enable high speed sharing of the disk storage by all processors in a multiprocessor system. The communication between each processor and its corresponding cache memory partition can be overlapped with one another and with accesses between the cache memory and the commonly shared disk memory. Interleaving of transfers within full disk block transfers, however, is not permissible. Thus, processor access to the cache memory is halted until data is transferred from the cache to the disk drives.

It would be advantageous to provide a system for improving performance of a high speed computer.

It would also be advantageous to provide a system for efficient data management between a cache and a main storage in such a high speed computer system.

It would further be advantageous to provide a system for interrupting cache operations when a processor requests access to the data stored therein.

Moreover, it would be advantageous to allow a processor to have highest-priority access to the cache, even when data transfer operations are occurring between the cache and main storage.

It would also be advantageous to provide a system for buffering data from main storage to the cache so that data can be loaded continuously into the cache unless the processor requests access to the data.

SUMMARY OF THE INVENTION

In accordance with the present invention, there is provided a system for interrupting loading of data from main storage into a high speed memory device when a processor requests cache access. A high speed cache is connected to main storage for storing at least a subset of the data residing therein and can be accessed directly by a processor. A buffering device is connected to main storage and to the cache for buffering data to be loaded therein. The data buffer is adapted to receive data from main storage continuously and is adapted to transfer the data to the cache continuously unless the cache is being accessed by the processor.

BRIEF DESCRIPTION OF THE DRAWINGS

A complete understanding of the present invention may be obtained by reference to the accompanying drawings, when taken in conjunction with the detailed description thereof and in which:

FIG. 1 is a block diagram of a data processing system environment which provides interruptible cache loading in accordance with the present invention;

FIGS. 2A-2C, taken together, represent a flowchart of a preferred embodiment which includes an inpage buffer operation;

FIGS. 3A-3C, taken together, represent a flowchart showing interrupted data transfer from memory in an alternate embodiment having no inpage buffer;

FIG. 4 is a timing diagram representing the occurrence of events during an inpage operation of the preferred embodiment of FIG. 2; and

FIG. 5 is a timing diagram representing the occurrence of events in the alternate embodiment of FIG. 3 during which transfer of data from memory is suspended.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT AND ALTERNATE EMBODIMENTS

Referring now to FIG. 1, there is shown a block diagram of a high speed buffer subsystem, shown generally at reference numeral 10, of the data processing system of the present invention. The main storage 11 has capacity for storing a plurality of data bits and words. In the preferred embodiment, the main storage 11 has a capacity on the order of 8 M bytes to 256 M bytes of data and can be accessed randomly. It should be understood, however, that any reasonable capacity of memory can be used within the scope of the present invention.

Connected to main storage 11 by means of a bidirectional data bus or memory bus 12 is an 8-byte wide data register 16. An inpage buffer 18 is connected to data register 16 by means of bus 19. The inpage buffer 18 is a component of the preferred embodiment, but need not be used in alternate embodiments. Its function is described in further detail hereinbelow. An 8-byte wide multiplexer 20 is connected to the inpage buffer 18 by means of a bus 21.

A set of cache arrays is shown at reference numeral 22. The cache arrays 22 in the preferred embodiment comprise four partitions each 8 bytes wide, not shown, the total memory being 16 K bytes. It should be understood, however, that a cache array having six partitions and 24 K bytes could also be used. In fact, any size cache array and any number of partitions is possible within the scope of the present invention. Similarly, for purposes of description, one set of cache arrays 22 is herein disclosed, but alternative embodiments having a plurality of caches would also be within the scope of the present invention and could easily be implemented by those skilled in the art once the present invention is understood.

The cache arrays 22 are connected to the multiplexer 20 by means of a bus 23. Another data register 24 is connected to the cache arrays 22 by means of a bus 25.

Another multiplexer 26 is connected to the data register 24 by means of a bus 27. Other inputs to the multiplexer 26 are data from the inpage buffer 18 by means of a bus 29 and data from data register 16 by means of a bus 19.

The output of multiplexer 26 is applied to a shifter 28 by means of a bus 31. The output of multiplexer 26 can also be applied, by means of bus 31, to an outpage data register 32, the output of which is applied to the bidirectional data bus 12.

The shifter 28 generates data signals and transmits them over a bus 14 to a processor 13. Data from the processor 13 is applied to a store data register 30 by means of a bus 33 and thence, by means of a bus 33a, to the multiplexer 20 during store operations.

The cache directory and controls are shown generally at reference numeral 40. A directory 42, having a least recently used (LRU) mechanism 43, provides a portion of addresses to a compare circuit 44 by means of lines 45. The output of the compare circuit 44 is applied to inpage controls 46 over HIT/MISS lines 47. The inpage controls 46 determine whether valid data is in the cache arrays 22 or in the inpage buffer 18. Priority controls 48 signal main storage 11 to discontinue and restart data transfers if the inpage buffer 18 is not present, as can be the case in an alternate embodiment of the present invention.

The inpage controls 46 generate a signal applied to priority controls 48 over INPAGE REQUEST line 49. The priority controls 48 determine whether the processor 13 or the inpage operation from main storage 11 will access the cache arrays 22. The output of priority controls 48 is applied to a multiplexer 50 by means of a CACHE ADDRESS SELECT line 51. The output of the multiplexer 50 is applied to an address register 52 by means of a bus 53. The address register 52 generates ADDRESS signals on bus 55 which is applied to the cache arrays 22.

Another multiplexer 70 receives ADDRESS signals from the processor 13 by means of a PROCESSOR ADDRESS bus 56. Also applied to multiplexer 70 is an INPAGE ADDRESS bus 60 and the CACHE ADDRESS SELECT line 51. Connected to multiplexer 70 by means of bus 71 is an address register 54.

The priority controls 48 receive a PROCESSOR REQUEST signal from the processor 13 by means of line 58.

Also input to the multiplexer 50 are the PROCESSOR ADDRESS signals 56 from the processor 13 and the INPAGE ADDRESS signals 60 generated by the inpage controls 46. The inpage controls 46 also generate three groups of SELECT lines: SELECT/CONTROL 62, which lines are input to inpage buffer 18; CACHE DATA INPUT SELECT 64, which line is input to multiplexer 20; and CACHE DATA OUTPUT SELECT 66, which lines are input to multiplexer 26.

The directory 42 receives an address from address register 54 by means of bus 68.

In operation, the data from main storage 11 is latched in data register 16, from which it is routed to the cache arrays 22 or to the inpage buffer 18, if present, for an inpage operation. The inpage buffer 18 allows a complete line of data to be transferred from main storage 11 without interruption, as far as the main storage 11 is concerned. This frees the memory bus 12 for other operations, such as accesses by other processors, not shown, or store operations. The inpage buffer 18 also allows an inpaging operation to restart more quickly after an interruption in most cases, since the buffer 18 is integrated into the cache structure in the preferred embodiment.

From the inpage buffer 18, data is written into the cache arrays 22 during those cycles in which the processor 13 does not request cache accesses. Data can be written into the cache arrays 22 in any convenient size block, depending on cache array organization. For example, the cache array 22 may be organized to write 16 bytes of data in a single access, even though the memory bus 12 is only 8 bytes wide.

Data from the inpage buffer 18 can also be gated through multiplexer 26 via bus 29 to the processor 13 to allow early access to data not yet written into the cache arrays 22. In practice, all data from the cache line being inpaged are obtained from the inpage buffer 18, except for the first access which is bypassed directly from data register 16 to the processor 13, until the complete cache line is written into the cache arrays 22 and the directory 42 is marked valid.
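The selection among these three data sources can be summarized in a short C sketch. The decision rule follows the description above; the flag and function names are illustrative assumptions, not signals of the actual hardware.

```c
#include <stdbool.h>

enum data_source { FROM_DATA_REGISTER, FROM_INPAGE_BUFFER, FROM_CACHE_ARRAY };

/* Source selection corresponding to multiplexer 26: the first access of an
 * inpage bypasses directly from data register 16; later accesses to the
 * line still being inpaged come from inpage buffer 18 until the directory
 * entry is marked valid; all other accesses read the cache arrays 22. */
enum data_source select_source(bool first_inpage_access,
                               bool line_being_inpaged,
                               bool portion_valid_in_buffer)
{
    if (first_inpage_access)
        return FROM_DATA_REGISTER;
    if (line_being_inpaged && portion_valid_in_buffer)
        return FROM_INPAGE_BUFFER;
    return FROM_CACHE_ARRAY;
}
```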

The multiplexer 26, in addition to selecting data from the inpage buffer 18 and data register 16, selects data from one of the associativity classes in the cache arrays 22. Thus, the inclusion of data from the inpage buffer 18 does not necessarily add a stage of logic to the cache data path that terminates at the processor 13. It does, however, increase the number of inputs to the multiplexer 26. Control of the multiplexer 26 is handled by logic in the inpage controls 46, which verifies that valid data is in the inpage buffer 18 and checks the output of directory compare logic 44 for those accesses to other cache lines.

Priority controls 48 determine whether the processor 13 is requesting the next cache access and, if so, gate the PROCESSOR ADDRESS signal 56 instead of the inpage address to the directory 42 and cache arrays 22. If no PROCESSOR REQUEST signal 58 is present, the inpage buffer 18 loads data to the cache arrays 22 by means of the multiplexer 20 and buses 21 and 23 connected thereto. A more thorough understanding of the specific operations that occur during cache data transfer activity in the preferred embodiment using the inpage buffer 18 can be obtained by referring to FIGS. 2A-2C, which disclose a self-explanatory flowchart thereof. Reference numerals included in the blocks of FIGS. 2A-2C refer to the circuit elements shown in FIG. 1 and are numbered identically to the reference numerals identifying the elements therein.
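The cycle-by-cycle arbitration rule, in which the processor always wins and the inpage write proceeds only on free cycles, can be sketched as follows; only the priority rule is taken from the description above, and the names are illustrative.

```c
#include <stdbool.h>

enum grant { GRANT_PROCESSOR, GRANT_INPAGE, GRANT_NONE };

/* One arbitration decision per clock cycle. 'processor_request' models the
 * PROCESSOR REQUEST signal 58; 'inpage_pending' models the INPAGE REQUEST
 * signal 49. The processor is never delayed by an inpage in progress. */
enum grant arbitrate(bool processor_request, bool inpage_pending)
{
    if (processor_request)
        return GRANT_PROCESSOR;  /* gate PROCESSOR ADDRESS 56 to the arrays  */
    if (inpage_pending)
        return GRANT_INPAGE;     /* write next buffered portion into arrays  */
    return GRANT_NONE;
}
```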

In an alternate embodiment where no inpage buffer 18 exists, a RESEND signal 72 is sent to main storage 11 requesting that the data be retransmitted, and data from register 16 is applied directly to multiplexers 20 and 26, the latter over bus 19. A more thorough understanding of the specific operations that occur during transfer activity in such an alternate embodiment, which relies on retransmission of data from main storage 11 to cache arrays 22 when interrupted by the processor 13, can be obtained by referring to FIGS. 3A-3C, which disclose a self-explanatory flowchart thereof. Reference numerals included in the blocks of FIGS. 3A-3C refer to the circuit elements shown in FIG. 1 and are numbered identically to the reference numerals identifying the elements therein.

Data validity for the cache line being inpaged is determined by a series of latches, not shown, in the inpage controls 46. The number of latches is determined by the number of transfers from main storage 11 to the cache arrays 22 for a complete cache line transfer. Each latch corresponds to a portion of the cache line and is set valid as the portion is loaded into the cache arrays 22 or into the inpage buffer 18. Another latch, not shown, is set when the inpage operation begins and is reset when the operation is completed. This latch indicates whether the inpage controls 46 are in use.

Prior to loading the complete cache line into the cache arrays 22, the processor 13 may attempt to read from or write to any storage location. If a fetch is attempted for data which is not yet in the inpage buffer 18, the processor 13 is signalled to wait until valid data is available. If the processor 13 attempts to access another line in the cache arrays 22 and a miss occurs, a BUSY condition is sent over line 74, which causes the processor 13 to wait and then to resend the request when the BUSY condition is reset. Thus, the access to main storage 11 does not begin until the first inpage operation is complete. Should a write operation be required to the cache line being inpaged, data from the processor 13 is not written into the inpage buffer 18 or into the cache arrays 22 for that line. Instead, the BUSY signal is sent to the processor 13 over line 74. The processor 13 waits until the inpage operation is complete, at which time the data from the processor 13 is written over the inpaged data. The latter restriction can be removed in alternate embodiments in which the controls are more complex but still within the abilities of one skilled in the art.
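Taken together, the validity latches and the access rules above amount to the following decision procedure, sketched here in C with illustrative names; in the embodiments described, this logic resides in the inpage controls 46 rather than in software.

```c
#include <stdbool.h>

enum { PORTIONS_PER_LINE = 8 };   /* one latch per main-storage transfer (assumed) */

struct inpage_state {
    bool in_progress;                       /* set at inpage start, reset at end */
    bool portion_valid[PORTIONS_PER_LINE];  /* set as each portion arrives       */
};

enum response { SERVE_FROM_BUFFER, WAIT_FOR_DATA, SIGNAL_BUSY,
                SERVE_FROM_CACHE, START_INPAGE };

/* Decide how to answer a processor request: fetches to the line being
 * inpaged wait until their portion is valid; writes to that line, and
 * misses to other lines, raise BUSY until the inpage completes. */
enum response handle_request(const struct inpage_state *s,
                             bool addr_in_inpaging_line, bool is_write,
                             int portion, bool cache_hit)
{
    if (s->in_progress && addr_in_inpaging_line) {
        if (is_write)
            return SIGNAL_BUSY;     /* write deferred until inpage completes */
        return s->portion_valid[portion] ? SERVE_FROM_BUFFER : WAIT_FOR_DATA;
    }
    if (!cache_hit)
        return s->in_progress ? SIGNAL_BUSY    /* second miss waits for first */
                              : START_INPAGE;  /* begin a new line fetch      */
    return SERVE_FROM_CACHE;
}
```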

Referring now also to the timing chart of FIG. 4, data lines labelled "data 0" through "data 7" are transferred sequentially as follows, pursuant to a series of fetch requests identified as `A`, `B`, and so on. A PROCESSOR REQUEST signal 58 (FIG. 1) is sent to the cache arrays 22 with the address for the first fetch (fetch `A`), followed immediately by fetch `B`, which is held, pending data returning to the processor 13 for fetch `A`. The first access to the cache arrays 22 and directory 42 occurs in cycle 2. The directory 42 indicates that an inpage operation is required. The data fetched from the cache arrays 22 on cycle 2 is discarded. On cycle 3 the address is presented to main storage 11 and the access begins. Several cycles later, "data 0" (the part of the cache line initially requested by the processor 13 in fetch `A`) is placed on the bus 12 to the data register 16. On the following cycle, "data 0" is transferred to the processor 13 through multiplexer 26 and concurrently fetch `B` is allowed. In this example, since the fetch is for data in another cache line present in the cache arrays 22, the appropriate cache output is selected by the multiplexer 26 in the next cycle.

Two cycles later, fetch `C` of the cache line being inpaged occurs. A directory miss occurs because the cache line is not yet completely stored in the cache arrays 22 and marked valid. The inpage controls 46 determine that the requested data is in the inpage buffer 18 and select the correct buffer location using SELECT/CONTROL lines 62, routing the data through the multiplexer 26 to the processor 13. When "data 7" is finally written into the cache arrays 22, the entry in the directory 42 is marked valid. Subsequent accesses use cache array data.

Referring now also to FIG. 5, when the inpage buffer 18 does not exist, the main storage 11 can resend the last data transferred to the cache arrays 22. Operations then proceed as hereinabove described, but the second transfer ("data 1") must be resent because "data 0" is loaded in data register 16 prior to being transferred to the cache arrays 22. The cache arrays 22 are currently executing fetch `B`. The priority controls 48 detect this situation and activate the RESEND line 72 to the main storage 11. RESEND is also activated later when fetch `C` accesses the cache arrays 22. The effect of this mode of operation is similar to the preferred embodiment in which the inpage buffer 18 exists, except that the memory bus 12 is in use for a greater period of time in order to resend the interrupted transfers. Moreover, the memory system 11 must be capable of resending data upon receiving such requests from the priority controls 48.
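In this buffer-less embodiment the retransmission decision reduces to a simple rule: whenever the processor takes the cache cycle in which a received word would have been written, that word and the transfers behind it must be sent again. A minimal sketch, with names assumed for illustration:

```c
#include <stdbool.h>

/* Models the condition under which priority controls 48 raise RESEND 72:
 * a word sitting in data register 16 could not be written to the cache
 * arrays this cycle because the processor took the cycle, so main storage
 * must retransmit it (and the transfers that follow). */
bool resend_needed(bool processor_took_cache_cycle,
                   bool transfer_waiting_in_data_register)
{
    return processor_took_cache_cycle && transfer_waiting_in_data_register;
}
```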

Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure, and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention.

Claims

1. A system which operates on clock cycles and which provides immediate access to data which is already located in a high speed memory device including:

processor means for processing data;
storage means for storing data and which is operatively connected to said data processing means;
a high speed cache memory operatively connected through a first bus to said storage means for storing at least a subset of data from said storage means, with each unit of said data requiring multiple clock cycles to be loaded through said first bus into said cache memory, with said cache memory accessible to said processor means through a second bus; and
priority control means comprising logic circuits connected with both said processor means and said cache memory for monitoring data requests received from said processor means and for immediately interrupting in a given clock cycle any further loading of a unit of said data from said storage means to said cache memory only whenever such a data request is for data located in said cache memory, and for preventing any delay after said given clock cycle by allowing the data request for data located in said cache memory to be carried out concurrently with said interrupting and by allowing resumption of loading of said unit of data during successive clock cycles following said given clock cycle.

2. A system in accordance with claim 1 which further includes buffer means operatively connected between said storage means and said cache memory for temporarily holding a subset of data received from said storage means prior to transferring the subset of data to said cache memory, wherein said data buffering means is adapted to continuously receive data from said data storing means and is further adapted to transfer such data to said cache memory only when said cache memory is not being accessed by said processor means.

3. The system in accordance with claim 2 further comprising means for allowing said processor means to access data directly from said buffer means when said data has not yet been completely transferred therefrom.

4. The system in accordance with claim 2 further comprising controlling means operatively connected to said buffer means to control the operation thereof.

5. The system in accordance with claim 4 wherein said controlling means includes a directory and comparator means operatively connected to said directory for verifying addresses of data requested by said processor means.

6. The system in accordance with claim 2 which further includes multiplexer means with inputs operatively connected to said storage means, and also to said cache memory, and also to said buffer means, said multiplexer means having an output operatively connected to said processor means for the purpose of selecting the appropriate source of data to be transferred to said processor means.

7. The system in accordance with claim 1 further comprising multiplexer means with inputs operatively connected to said storage means, and also to said cache memory, said multiplexer means having an output operatively connected to said processor means for the purpose of selecting the appropriate source of data to be transferred to said processor means.

8. The system in accordance with claim 7 further comprising controlling means operatively connected to said multiplexer for controlling the operation thereof to allow a data request for data located in said cache memory to be carried out during the same given clock cycle in which the interrupting of further loading of said data from said storage means to said cache memory occurs.

9. The system in accordance with claim 1 which further includes register means connected with said first bus for retransmitting data from said storage means to said cache memory in the next successive clock cycle after said processor means has stopped directly accessing data located in said cache memory.

10. A method of using logic circuits for coordinating the transfer of lines of data from main storage into a high speed cache memory with the accessing by a processor of data which is already located in the cache memory, including the steps of:

loading data from the main storage into a buffer for temporary retention;
transferring the loaded data from the buffer into the cache memory during all periods of time when the processor is not requesting access to data which is already located in the cache memory;
monitoring data requests from the processor to identify any requests for data already located in cache memory;
interrupting immediately said transferring step, without interrupting said loading step, whenever said monitoring step identifies requests by the processor for data in the cache memory; and
resuming said transferring step immediately after the accessing by the processor of data in the cache memory has been completed.

11. The method of claim 10 wherein said loading step includes inpaging a line of data which requires a plurality of cycles to complete, and which further includes maintaining address registers and directories to identify which lines of data have been completely loaded into the cache memory.

Referenced Cited
U.S. Patent Documents
3938097 February 10, 1976 Niguette, III
4298929 November 3, 1981 Capozzi
4354232 October 12, 1982 Ryan
4370710 January 25, 1983 Kroft
4439829 March 27, 1984 Tsiang
4460959 July 17, 1984 Lemay et al.
4604691 August 5, 1986 Akagi
4631660 December 23, 1986 Woffinden et al.
4685082 August 4, 1987 Cheung et al.
4740889 April 26, 1988 Motersole et al.
4819203 April 4, 1989 Shiroyanagi et al.
Patent History
Patent number: 4964041
Type: Grant
Filed: Aug 24, 1987
Date of Patent: Oct 16, 1990
Assignee: IBM Corporation (Armonk, NY)
Inventors: Thomas L. Jeremiah (Endwell, NY), Albert J. Ruane (Endwell, NY), Frank A. Zurla (Binghamton, NY)
Primary Examiner: Gareth D. Shaw
Assistant Examiner: Rebecca L. Rudolph
Attorneys: Mark Levy, David S. Romney
Application Number: 7/88,753
Classifications
Current U.S. Class: 364/200; 364/243.4; 364/239; 364/239.1; 364/241.2; 364/242.1; 364/242.6; 364/243.41
International Classification: G06F 1/00;