Abstract: In allocating areas of a cache memory to storage units, the cache memory is properly apportioned among the storage units. If the amount of write-after data becomes equal to or greater than a threshold value, an allocation limit is set for each disk unit. If the CPU issues a data write request requiring an amount of data equal to or greater than the allocation limit, the request is held in a wait state until the amount of write-after data falls below the allocation limit. The allocation to each disk unit therefore becomes neither too large nor too small. In this manner, proper allocation of the cache memory to each disk unit can be realized.
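The gating behavior described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the class name, the ratio parameters, and the polling-style `can_accept_write` check are all assumptions.

```python
# Hypothetical sketch of the allocation-limit check: once write-after (dirty)
# data reaches a threshold, large writes wait until dirty data drains below
# the per-disk allocation limit. All names and ratios are illustrative.

class CacheAllocator:
    def __init__(self, capacity, threshold_ratio=0.5, limit_ratio=0.25):
        self.capacity = capacity
        self.write_after_bytes = 0        # dirty data not yet destaged to disk
        self.threshold = int(capacity * threshold_ratio)
        self.limit_ratio = limit_ratio
        self.limit = None                 # per-disk allocation limit, unset by default

    def update_limit(self):
        # Impose the per-disk limit only while write-after data is at/above threshold.
        if self.write_after_bytes >= self.threshold:
            self.limit = int(self.capacity * self.limit_ratio)
        else:
            self.limit = None

    def can_accept_write(self, request_bytes):
        # A request at or above the limit waits until write-after data
        # becomes less than the allocation limit; smaller requests proceed.
        self.update_limit()
        if self.limit is not None and request_bytes >= self.limit:
            return self.write_after_bytes < self.limit
        return True
```

A caller would retry (or block) while `can_accept_write` returns `False`, matching the "wait state" in the abstract.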
Abstract: A memory controller (42) supports a mode known as continuous page mode. The memory controller (42) is coupled to a pipelined internal bus and provides control signals to an external bus to control a memory such as a dynamic random access memory (DRAM) (43). The memory controller (42) compares the page portion of a next address to the page portion of the current address. If the addresses match, the memory controller (42) keeps the page open for the next cycle. However, if the addresses do not match, or if the next address is not yet valid at that point in time during the first access, the memory controller (42) closes the page during the first cycle. The memory controller (42) thus performs continuous page mode without incurring a penalty when the page is closed.
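The keep-open/close decision above reduces to a page-field comparison. A minimal sketch, assuming a 10-bit column field within the address (the actual field widths are not given in the abstract):

```python
# Illustrative model of the continuous-page-mode decision.
PAGE_SHIFT = 10   # assumed: low 10 bits select a column within a DRAM page

def page_of(addr):
    return addr >> PAGE_SHIFT

def keep_page_open(current_addr, next_addr, next_valid):
    # Keep the page open only when the next address is known in time and
    # falls in the same page; otherwise close the page during this cycle.
    return next_valid and page_of(next_addr) == page_of(current_addr)
```

Closing early when the next address is unknown is what avoids a penalty later: the precharge overlaps the current access instead of delaying the next one.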
Abstract: Even in a full hang state, where software reset commands are entirely ineffective, a memory dump can be stored without an in-circuit emulator. The HDD itself monitors the command execution time, that is, the time between reception of a command from a host computer and completion of the processing of that command. If the HDD judges that processing has taken an abnormally long time, a memory dump is automatically stored in a reserved area on the disk. The memory dump can be read out from the disk at any time so that an analysis can be made.
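The watchdog logic described above can be sketched as a timeout check. The timeout value, function names, and callback interface are all assumptions for illustration; real firmware would run this from a hardware timer.

```python
# Hedged sketch: monitor a command's execution time and trigger a memory
# dump to the reserved disk area when it runs abnormally long.

HANG_TIMEOUT_S = 30.0   # assumed threshold for "abnormally long"

def check_command(start_time, now, dump_fn):
    """Return True (and trigger the dump) if the command appears hung."""
    if now - start_time >= HANG_TIMEOUT_S:
        dump_fn()       # write the firmware RAM image to the reserved area
        return True
    return False
```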
Abstract: Data transferred from a host computer to a memory device is written into sectors whose addresses in a memory area are decoded by a decode table. Old data superseded by this data is erased or marked with erase flags. At a predetermined point in time, in order to create free areas, necessary data is evacuated to a primary memory medium and unnecessary data indicated by erase flags is erased in units of a predetermined memory size. A part of the memory medium that has become defective is marked with a defect flag and is replaced by an alternate area. In doing so, the decode table is rewritten to rearrange the memory area.
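The decode-table bookkeeping above (out-of-place writes, erase flags, deferred reclamation, defect replacement) can be modeled compactly. All class and method names are illustrative, and the evacuation step is omitted for brevity:

```python
# Minimal sketch of the decode-table management described in the abstract.

class MemoryArea:
    def __init__(self, n_sectors):
        self.decode = {}          # logical address -> physical sector
        self.erase_flag = set()   # physical sectors holding stale data
        self.defective = set()
        self.free = list(range(n_sectors))

    def write(self, logical, _data):
        # Updating data marks the old sector with an erase flag rather
        # than erasing it immediately; the new data goes to a free sector.
        if logical in self.decode:
            self.erase_flag.add(self.decode[logical])
        self.decode[logical] = self.free.pop(0)

    def reclaim(self):
        # At a predetermined time, flagged (non-defective) sectors are
        # erased in bulk to create free areas.
        self.free.extend(s for s in self.erase_flag if s not in self.defective)
        self.erase_flag.clear()

    def mark_defective(self, sector):
        # A defective sector is replaced by an alternate area; the decode
        # table is rewritten so the logical address maps to the new sector.
        self.defective.add(sector)
        for logical, phys in self.decode.items():
            if phys == sector:
                self.write(logical, None)
                break
```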
Abstract: A multi-port RAM (MPRAM) having an SRAM and a DRAM. A global bus is arranged between the DRAM and the SRAM to provide bi-directional transfer of 256-bit data blocks between the SRAM and the DRAM. Two independent input/output ports are coupled to the SRAM to enable a user to write or read data to or from the SRAM and the DRAM. Byte masking is provided for each of the ports to mask bytes of data supplied to the MPRAM. A write-per-bit (WPB) mask register is arranged between the ports and the SRAM to prevent unnecessary bits of input data from being written into the SRAM. A byte write enable (BWE) mask register is arranged between the SRAM and the DRAM to prevent unnecessary bytes of data from being transferred from the SRAM to the DRAM. Each of the mask registers may be loaded with mask data from both of the ports concurrently, or from either one of them.
Type:
Grant
Filed:
January 13, 1998
Date of Patent:
August 8, 2000
Assignee:
Mitsubishi Semiconductor America, Inc.
Inventors:
William L. Randolph, Stephen Camacho, Rhonda Cassada
Abstract: A method is presented for determining whether to prefetch and cache documents on a computer. In one embodiment, documents are prefetched and cached on a client computer from servers located on the Internet in accordance with their computed need probability. Documents with higher need probabilities are prefetched and cached before documents with lower need probabilities. The need probability for a document is computed using both a document context factor and a document history factor. The context factor of the need probability of a document is determined by computing the correlation between words in the document and a context Q of the operating environment. The history factor of the need probability of a document is determined by integrating both the recency of document use and the frequency of document use.
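A toy version of the two-factor scoring might look like the following. The abstract does not specify the correlation measure, the recency/frequency integration, or how the two factors combine, so the overlap ratio, decay form, and product combination here are all assumptions:

```python
import math

# Illustrative need-probability computation; the exact formulas are assumed.

def context_factor(doc_words, context_words):
    # Correlation with context Q, approximated here as word-overlap ratio.
    doc, ctx = set(doc_words), set(context_words)
    union = doc | ctx
    return len(doc & ctx) / len(union) if union else 0.0

def history_factor(seconds_since_last_use, use_count):
    # Integrates recency (decaying with time) and frequency of use.
    recency = 1.0 / (1.0 + seconds_since_last_use)
    frequency = math.log1p(use_count)
    return recency * frequency

def need_probability(doc_words, context_words, seconds_since_last_use, use_count):
    return context_factor(doc_words, context_words) * history_factor(
        seconds_since_last_use, use_count)
```

A prefetcher would then rank candidate documents by `need_probability` in descending order and fetch from the top.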
Abstract: A DRAM system is described that can prevent a substantial reduction in bandwidth with respect to the clock pulse frequency even when banks are accessed in no specific order. The result is a DRAM-based memory system in which seamless operation is assured not only for reading but also for writing.
Type:
Grant
Filed:
September 19, 1997
Date of Patent:
July 4, 2000
Assignee:
International Business Machines Corporation
Abstract: In an information processing system having a data processing apparatus, a control unit for a cache memory, and a storage unit for storing records, all interconnected, when the control unit receives from the data processing apparatus a write request for a record that is not stored in the cache memory, the control unit receives the data to be written to the object record from the data processing apparatus and stores it in the cache memory. After notifying the data processing apparatus that the data write process is complete, the control unit checks whether the object record for the data stored in the cache memory exists in the storage unit. If it does, the data in the cache memory is written to the storage unit; if not, the data is not written, and the data processing apparatus is notified to that effect.
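The fast-write path above (acknowledge first, destage later) can be sketched in a few lines. The function signature, the dict-based cache/storage stand-ins, and the notification strings are illustrative assumptions:

```python
# Hedged sketch of the control unit's write path described in the abstract.

def handle_write(record_id, data, cache, storage, notify):
    # Store the data in the cache and report completion immediately.
    cache[record_id] = data
    notify("write complete")
    # Later, check whether the object record exists in the storage unit.
    if record_id in storage:
        storage[record_id] = data     # record exists: destage normally
        return True
    notify("record not found in storage unit")
    return False                      # record absent: do not write, notify
```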
Abstract: A computer system comprises a main memory and a battery-backed memory (BRAM) for storing configuration information for use in initial program load (IPL). A read-only memory stores a master version of the configuration information. When the system is powered on, an IPL sequence is performed which checks the BRAM and, if the contents of the BRAM are valid, copies the configuration information from the BRAM into the main memory. If the contents of the BRAM are invalid, the IPL sequence instead copies the master version of the configuration information from the read-only memory into the BRAM and also into the main memory.
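The IPL decision above is a simple validity-gated copy. A minimal sketch, assuming the validity check is a checksum over the stored data (the abstract does not say how validity is determined):

```python
# Sketch of the IPL configuration-load sequence; the checksum-based
# validity test and dict representation are assumptions.

def checksum_ok(bram):
    return bram is not None and bram.get("checksum") == sum(bram.get("data", []))

def ipl_load(bram, rom_master):
    if checksum_ok(bram):
        return dict(bram)          # valid: copy BRAM contents into main memory
    # Invalid: restore the master version into the BRAM, then into main memory.
    restored = dict(rom_master)
    bram.clear()
    bram.update(restored)
    return restored
```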
Abstract: Data read out from a disk storage device by a read thread is stored into a ring buffer in predetermined units. This process is controlled independently for every element. The data is read out of the buffer by an output thread in a predetermined order and output as a stream. Each element comprises the disk storage device, the read thread, and the ring buffer; each element is periodically checked by its thread, and a flag is set for an element whose buffer usage ratio is equal to or less than a predetermined value. When the flag has been set for the other elements and the buffer usage ratio of the initially set element is equal to or greater than a predetermined value, each thread pauses reading data from the disk. The buffer and the flag thus absorb fluctuations in the disk and distribute capacity according to load, so that the moving-image bit stream can be output stably.
Abstract: A computer system includes a host computer, a mass storage subsystem, and a backup subsystem for backing up information stored on the mass storage subsystem. The mass storage subsystem stores information on a series of tracks. A backup bit map includes a plurality of bits, each associated with a respective one of the tracks and indicating the backup status of that track during a backup operation. Initially, during a backup operation, the bits associated with the tracks to be backed up are set. Generally, the mass storage subsystem transfers information from the tracks to be backed up in the order of the bits in the bit map, and after each track is backed up, it clears the track's bit. However, when the host is to store information in the mass storage subsystem, it determines whether the bit associated with the track in which the information is to be stored is set and, if so, enables the mass storage subsystem to back up the track out of turn and to reset the track's bit.
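The bit-map protocol above can be modeled directly. The class and callback names are illustrative, and the out-of-turn path here clears the bit after backing the track up, consistent with the in-order path:

```python
# Sketch of the backup bit map protocol described in the abstract.

class BackupSession:
    def __init__(self, tracks_to_back_up, n_tracks):
        self.bits = [False] * n_tracks
        for t in tracks_to_back_up:
            self.bits[t] = True             # set bits for tracks awaiting backup

    def backup_next(self, read_track, send):
        # Transfer tracks in bit-map order, clearing each bit afterwards.
        for t, pending in enumerate(self.bits):
            if pending:
                send(read_track(t))
                self.bits[t] = False
                return t
        return None                         # backup complete

    def host_write(self, track, read_track, send, write_track, data):
        # A host write to a track still awaiting backup forces that track
        # to be backed up out of turn before the new data is accepted.
        if self.bits[track]:
            send(read_track(track))
            self.bits[track] = False
        write_track(track, data)
```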
Abstract: An apparatus and method for optimizing a non-inclusive hierarchical cache memory system that includes first and second caches for storing information. The first and second caches are arranged in a hierarchical manner, such as the level two and level three caches in a cache system having three levels of cache. The level two and level three caches hold information non-inclusively, while a dual directory holds tags and states that are duplicates of the tags and states held for the level two cache. All snoop requests (snoops) are passed to the dual directory by a snoop queue. The dual directory is used to determine whether a snoop request sent by the snoop queue is relevant to the contents of the level two cache, avoiding the need to send the snoop request to the level two cache if there is a "miss" in the dual directory.
Type:
Grant
Filed:
September 30, 1997
Date of Patent:
June 6, 2000
Assignee:
Sun Microsystems, Inc.
Inventors:
Norman M. Hayes, Belliappa M. Kuttanna, Krishna M. Thatipelli, Ricky C. Hetherington, Fong Pong
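The dual-directory filtering in the Sun abstract above amounts to a shadow tag lookup that gates snoop forwarding. A minimal sketch (class and method names are assumptions):

```python
# Sketch of a dual-directory snoop filter: a duplicate of the L2 tags/states
# is consulted instead of the L2 cache itself.

class DualDirectory:
    def __init__(self):
        self.tags = {}                    # address tag -> coherence state

    def install(self, tag, state):
        # Kept in lockstep with L2 fills/evictions (update path omitted here).
        self.tags[tag] = state

    def filter_snoop(self, tag, forward_to_l2):
        # Forward a snoop to the L2 cache only on a dual-directory hit;
        # a miss proves the L2 cannot hold the line, so no L2 lookup is needed.
        if tag in self.tags:
            forward_to_l2(tag)
            return True
        return False
```

The payoff is that filtered snoops never contend with the processor for L2 tag bandwidth.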
Abstract: A data control system for a computer's main memory efficiently realizes virtualization of list-structure data spanning a real memory space and a virtual memory space. The system includes a real memory space having nodes linked by pointers, the pointers being represented by addresses in the real memory space, and a virtual memory space having nodes linked by pointers, the pointers being represented by addresses in the virtual memory space and addresses into the real memory space. The nodes in the virtual memory space reference the nodes in the real memory space through indirect pointers, and the list data are shifted between the real memory space and the virtual memory space in the form of list-structure units.
Abstract: A data stream accessed in a sequential manner is stored in a plurality of pages in a main memory of a computer system. The pages are contiguous in virtual memory but not in physical memory. The page address translation entry needed to translate the virtual address of the next page into a physical address is embedded in the current page of the data stream. A peripheral processor coupled to the main memory by a bus accesses the data stream by reading the page address translation entry of the first page of the data stream, reading the page addressed by the physical address resulting from that entry, obtaining the next page address translation entry by extracting it from the current page without performing a read operation on the bus, and reading the next page addressed by the physical address resulting from the extracted entry.
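The traversal above behaves like a linked list whose "next" pointers are physical page addresses embedded in the pages themselves. A minimal model, assuming each page carries its data together with the next page's translation entry:

```python
# Model of walking a data stream whose pages embed the next page's
# translation entry, so each page costs exactly one bus read.

def read_stream(physical_memory, first_entry):
    """physical_memory maps physical page number -> (data, next_entry or None)."""
    pages = []
    entry = first_entry
    while entry is not None:
        data, next_entry = physical_memory[entry]   # one bus read per page
        pages.append(data)
        entry = next_entry    # extracted from the page itself: no extra bus read
    return pages
```

Without the embedded entries, the peripheral would need a separate page-table read over the bus before each page access.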
Abstract: A system incorporating the invention stores data on communicating disk drives in such a manner as to enable recovery of the data in the event of a failure of one of the disk drives. The system includes a first disk drive and a second disk drive, both for storing compressed data records in compressed track formats. Each compressed data record includes a CRC value. A cache stores compressed tracks of data records that are read from the first disk drive, and a CRC value is calculated for each stored track of compressed data records. That CRC value is appended to the compressed track. A switch coupled between the cache, a host processor, and the second disk drive dispatches and receives tracks of compressed data records between the first disk drive and the second disk drive.
Type:
Grant
Filed:
January 23, 1998
Date of Patent:
May 2, 2000
Assignee:
International Business Machines Corporation
Inventors:
Christopher James West, David Glenn Hostetter, Michael Richard Crater, Steven Christopher Fraioli
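The track-level CRC step in the IBM abstract above can be illustrated with a standard CRC-32. The patent does not specify the CRC polynomial or width; `zlib.crc32` and the 4-byte big-endian trailer are assumptions:

```python
import zlib

# Sketch of appending and verifying a track-level CRC for a compressed
# track held in cache (CRC width and placement assumed).

def cache_track(records):
    track = b"".join(records)             # compressed data records for one track
    crc = zlib.crc32(track).to_bytes(4, "big")
    return track + crc                    # CRC appended to the compressed track

def verify_track(stored):
    track, crc = stored[:-4], stored[-4:]
    return zlib.crc32(track).to_bytes(4, "big") == crc
```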
Abstract: A memory unit, and a method for using it in a tightly coupled multiprocessor system having a split-model bus, is configured to perform atomic transactions carried out in a synchronous mode on the basis of semaphore variables or lock variables. A decoder included in the memory unit generates an atomic address space and a conventional address space within the address space of a RAM portion of the memory unit. An identifier unit identifies whether a memory access request from a bus master is for the atomic address space or for the conventional address space. Based on this determination, the identifier unit controls an atomic transaction mode-shifting unit to shift between an atomic transaction mode of operation and a normal mode of operation.
Abstract: Dynamic random access memory page management is implemented by managing access to row address strobe pages, looking ahead to the next task when determining whether to open or close a memory page after an initial access.
Abstract: Disclosed is a system for isolating errors in a remote copy system. A first controller writes data to a volume in a first direct access storage device (DASD) and maintains a copy of the data in a cache. The first controller transfers the data in the cache to a host system via a first communication line. The host system then transfers the data received from the first controller to a second controller via a second communication line. The second controller writes the data transferred from the host system to a volume in a second DASD. A volume pair is comprised of a volume in the first DASD and a volume in the second DASD, wherein for each volume pair, the second DASD volume backs up data stored in the first DASD volume. If an error related to a volume pair is detected, then the operation of transferring the cached data for that volume pair to the second controller via the host system is suspended. Information on the detected error is written to a first data set.
Type:
Grant
Filed:
December 22, 1997
Date of Patent:
April 18, 2000
Assignee:
International Business Machines Corporation
Inventors:
Robert Nelson Crockett, Ronald Maynard Kern, Gregory Edward McBride
Abstract: A symmetric multiprocessing system with a unified environment and distributed system functions provides unified address space for all functional units in the system while distributing the execution of various system functions over the functional units of the system whereby each functional unit assumes responsibility for its own aspects of these operations. In addition, the system provides improved system bus operation for transfer of data from memory.
Type:
Grant
Filed:
March 10, 1997
Date of Patent:
April 4, 2000
Assignee:
Intel Corporation
Inventors:
William S. Wu, Norman J. Rasmussen, Suresh K. Marisetty, Puthiya K. Nizar
Abstract: A system and method of transferring data in multi-cache systems. The method includes transmitting a first segment of a data stream from a first cache to a second cache. The method also includes retransmitting the first segment of the data stream from the second cache to a main memory. The method further includes generating a second segment of the data stream and completing a transfer of the second segment to the first cache before completing the retransmission of the first segment from the second cache to the main memory.