Abstract: A system and method of storing data using write once read many (WORM) protection, including using a hardware storage device to write data to a medium, are provided. The method further includes establishing a write once read many (WORM) module external to the hardware storage device. Data blocks are received at the module, block numbers are specified with the module, and data is output from the module for writing to the storage medium at the specified block numbers. Depending on the type of media access, the last specified block number or all specified block numbers are stored, so that the external WORM module prevents future writing of data to these specified or already used block numbers.
Type:
Grant
Filed:
December 27, 2001
Date of Patent:
September 2, 2003
Assignee:
Storage Technology Corporation
Inventors:
Jacques Debiez, James P. Hughes, Axelle Apvrille
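The protection scheme in this abstract reduces to a gatekeeper that records every block number it has written and refuses repeat writes. A minimal sketch, assuming a simple callback interface into the hardware device; all names are illustrative, not from the patent:

```python
class ExternalWormModule:
    """Illustrative sketch of the external WORM module: it tracks block
    numbers already written and blocks any future write to them."""

    def __init__(self, write_fn):
        self._write_fn = write_fn   # callback into the hardware storage device
        self._used_blocks = set()   # all block numbers already written

    def write(self, block_number, data):
        if block_number in self._used_blocks:
            raise PermissionError(
                f"WORM violation: block {block_number} already written")
        self._write_fn(block_number, data)
        self._used_blocks.add(block_number)
```

Because the module sits outside the storage device, enforcement does not depend on the medium itself being write-once.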
Abstract: A method of handling a write operation in a multiprocessor computer system wherein each processing unit has a respective cache, by determining that a new value for a store instruction is the same as a current value already contained in the memory hierarchy, and discarding the store instruction without issuing any associated cache operation in response to this determination. When a store hit occurs, the current value is retrieved from the local cache. When a store miss occurs, the current value is retrieved from a remote cache by issuing a read request. The comparison may be performed using a portion of the cache line which is less than a granule size of the cache line. A store gathering queue can be used to collect pending store instructions that are directed to different portions of the same cache line.
Type:
Grant
Filed:
February 12, 2001
Date of Patent:
September 2, 2003
Assignee:
International Business Machines Corporation
Inventors:
Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie
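The core of this abstract is "silent store" elimination: compare the value to be stored with the value already present, and discard the store (issuing no cache operation) when they match. A minimal sketch under assumed names, with the comparison granule as a parameter as the abstract allows:

```python
def handle_store(cache_line, offset, new_value, granule=1):
    """Illustrative silent-store check: return True if the store was
    performed, False if it was discarded as redundant."""
    current = cache_line[offset:offset + granule]
    if bytes(new_value) == bytes(current):
        return False                              # discard: no cache operation
    cache_line[offset:offset + granule] = new_value
    return True                                   # store performed
```

In hardware the payoff is fewer invalidation and ownership transactions on the bus, not the byte comparison itself.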
Abstract: An archival system of the present invention includes a controller and multiple storage mediums that are used for long-term storage of vast amounts of digital data. The archival system verifies that the original digital data remains intact and error-free, byte-by-byte, through time. The archival system makes it possible to migrate the digital data files onto new storage media, byte-by-byte correct with respect to the original files, as new storage media and machines are developed and proven. The system also allows then-currently needed data to be accessed while the storage of the data continues on in time, undisturbed and uncorrupted. The archival system enhances the security of the archived data through physical movement of duplicated archival data storage mediums to remote locations. This invention for long-term, error-free storage of digital files solves the problems of backward-read compatibility and the uncertainty of storage media failure.
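The byte-by-byte verification this abstract calls for can be sketched as a streaming comparison of a migrated copy against the original. An illustrative stand-in only, operating on readable binary streams rather than the patented controller and mediums:

```python
def verify_migration(original, migrated, chunk=1 << 16):
    """Illustrative byte-by-byte verification of a migrated copy against
    the original; both arguments are readable binary streams."""
    while True:
        a, b = original.read(chunk), migrated.read(chunk)
        if a != b:
            return False        # mismatch, or one stream ended early
        if not a:
            return True         # both streams ended together: identical
```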
Abstract: A data processing apparatus (10, FIG. 1) has a direct access non-volatile storage device (103) on which log records are stored in one or more log files. The processor (101) allocates storage for the log based on possible future requirements. The processor sets the maximum amount of new data that can be written to the log before a key-point operation is performed. When the maximum is reached, a key-point is performed. As a result the maximum possible size of the active data written as part of the next key-point can be calculated, and storage is allocated accordingly. Should storage become so restricted that the required storage cannot be allocated, the data processing apparatus runs in a restricted mode during which the records written to the log are concerned with reducing the size of the active data, and therefore the size of the next key-point.
Type:
Grant
Filed:
June 27, 2001
Date of Patent:
August 5, 2003
Assignee:
International Business Machines Corporation
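The control flow in this abstract (count new log data, key-point at a threshold, fall into a restricted mode when storage for the next key-point cannot be allocated) can be sketched as follows. All names and the allocation callback are illustrative assumptions:

```python
class KeyPointedLog:
    """Illustrative sketch: at most `max_new` bytes of new log data may be
    written before a key-point; if storage for the next key-point cannot
    be allocated, the log enters a restricted mode."""

    def __init__(self, max_new, allocate):
        self.max_new = max_new
        self.allocate = allocate    # returns True if storage was allocated
        self.new_bytes = 0
        self.restricted = False

    def write(self, record):
        self.new_bytes += len(record)
        if self.new_bytes >= self.max_new:
            self._key_point()

    def _key_point(self):
        self.new_bytes = 0
        # reserve storage for the maximum possible active data of the
        # next key-point; failure triggers the restricted mode
        if not self.allocate():
            self.restricted = True
```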
Abstract: A method of operating a storage system comprising a main memory and a cache memory structured in address-related lines, in which the cache memory can be loaded with data from the main memory and read out by a processor as required. During the processor's access to data of a certain address in the cache memory, at which address certain data from the main memory having a corresponding address is stored, a test is made to determine whether sequential data is stored at the next address in the cache memory; this sequential data, if unavailable, can be loaded from the main memory into the cache memory via a prefetch, which takes place only when the processor accesses a predefined line section lying within a line.
Type:
Grant
Filed:
August 17, 2000
Date of Patent:
July 15, 2003
Assignee:
Koninklijke Philips Electronics N.V.
Inventors:
Axel Hertwig, Harald Bauer, Urs Fawer, Paul Lippens
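The conditional prefetch above fires only when the accessed offset falls inside a predefined section of the line, so sequential streams trigger it near the end of each line while scattered accesses do not. A minimal sketch with illustrative line size, trigger section, and dict-based cache:

```python
def maybe_prefetch(cache, main_memory, addr, line_size=64,
                   trigger_section=(48, 64)):
    """Illustrative sketch: prefetch the next sequential line only when
    the access lands in a predefined section of the current line.
    Returns True if a prefetch was issued."""
    offset = addr % line_size
    next_line = (addr // line_size + 1) * line_size
    if trigger_section[0] <= offset < trigger_section[1] \
            and next_line not in cache:
        cache[next_line] = main_memory[next_line]   # prefetch next line
        return True
    return False
```

Restricting the trigger to a tail section avoids issuing a prefetch on every access while still hiding the load latency for sequential reads.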
Abstract: Data is stored in a memory in a manner which eliminates the dead time that occurs when the number of words read out of a page is insufficient to provide enough time for simultaneously opening the next page. If the length of a frame being stored in memory is not an exact integral multiple of the words in a page, a penultimate (or earlier) page is written with fewer words than the page can hold. This allows additional words to be placed into the last page, sufficient to give every page used for storing a frame at least as many words as the number of clock cycles needed for opening the next page.
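The word-distribution rule in this abstract can be illustrated as simple arithmetic: fill pages normally, but shorten a penultimate page whenever the leftover for the last page would drop below the minimum needed to cover the page-open latency. Parameters and the balancing rule are illustrative assumptions:

```python
def plan_pages(frame_words, page_words, min_words):
    """Illustrative sketch: return the word count written to each page so
    that every page holds at least `min_words` words (enough clock cycles
    to open the next page while this one is read out)."""
    pages = []
    remaining = frame_words
    while remaining > page_words:
        take = page_words
        # shorten this (penultimate or earlier) page if the leftover for
        # the last page would otherwise be too small
        if 0 < remaining - take < min_words:
            take = remaining - min_words
        pages.append(take)
        remaining -= take
    pages.append(remaining)
    return pages
```

For example, a 130-word frame with 64-word pages and an 8-word minimum is laid out so the last page still receives 8 words rather than the 2 left over by naive filling.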
Abstract: A memory access circuit includes a memory and a slot for receiving therein a memory card having a controller. Address, CS (chip select) and OE (output enable) signals differing in active period from one another are supplied to the controller. As a result, ID data signals are read from the memory. The CPU determines whether the read-out ID data signals are proper. Specifically, when the common data contained in an ID data signal exhibits a predetermined value, the ID data signal is determined to be proper; when the common data does not exhibit the predetermined value, the ID data signal is determined to be improper. The CPU selects, as the optimal active period, the shortest active period among those for which proper ID data signals have been read out.
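The selection step in this abstract is a search over candidate active periods: keep those whose ID data carries the predetermined common value, then take the shortest. A minimal sketch; the expected value, the `read_id` callback standing in for the controller access, and all names are assumptions:

```python
EXPECTED_COMMON = 0xA5   # illustrative predetermined value for the common data

def optimal_active_period(read_id, candidate_periods):
    """Illustrative sketch: try each candidate active period, keep those
    yielding the predetermined common value, and return the shortest
    proper one (None if no period yields proper ID data)."""
    proper = [p for p in candidate_periods
              if read_id(p) == EXPECTED_COMMON]
    return min(proper) if proper else None
```

Picking the shortest proper period maximizes access speed while staying within what the inserted card can reliably support.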
Abstract: A method of maintaining coherency in a cache hierarchy of a processing unit of a computer system, wherein the upper level (L1) cache includes a split instruction/data cache. In one implementation, the L1 data cache is store-through, and each processing unit has a lower level (L2) cache. When the lower level cache receives a cache operation requiring invalidation of a program instruction in the L1 instruction cache (i.e., a store operation or a snooped kill), the L2 cache sends an invalidation transaction (e.g., icbi) to the instruction cache. The L2 cache is fully inclusive of both instructions and data. In another implementation, the L1 data cache is write-back, and a store address queue in the processor core is used to continually propagate pipelined address sequences to the lower levels of the memory hierarchy, i.e., to an L2 cache or, if there is no L2 cache, then to the system bus. If there is no L2 cache, then the cache operations may be snooped directly against the L1 instruction cache.
Type:
Grant
Filed:
February 12, 2001
Date of Patent:
June 3, 2003
Assignee:
International Business Machines Corporation
Inventors:
Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie
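The store-through variant of this abstract hinges on the fully inclusive L2: because the L2 sees every store and snooped kill, it can forward an invalidation (icbi-like) transaction to the L1 instruction cache whenever the affected line is held there. A structural sketch only, with the I-cache modeled as a set of line addresses and all names assumed:

```python
class InclusiveL2:
    """Illustrative sketch: on a store (or snooped kill) to a line that
    the L1 instruction cache holds, send an invalidation to it."""

    def __init__(self, l1_icache_lines):
        self.l1_icache = l1_icache_lines   # set of L1 I-cache line addresses
        self.lines = {}

    def store(self, line_addr, data):
        self.lines[line_addr] = data
        if line_addr in self.l1_icache:
            self.l1_icache.discard(line_addr)  # icbi-like invalidation to L1
```

Without an L2, the abstract notes, the same operations would instead be snooped directly against the L1 instruction cache.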
Abstract: A digital data processing system is provided which includes a digital data processor, a cache memory having a tag RAM and a data RAM, and a controller for controlling accesses to the cache memory. The controller stores state information on the access type, operation mode and cache hit/miss associated with the most recent access to the tag RAM, and controls the current access to the tag RAM, immediately after the preceding access, based on the state information and a portion of the set field of the main memory address for the current access. The controller determines whether the current access is applied to the same cache line that was accessed in the preceding access, based on the state information and that portion of the set field, and allows the tag RAM access to be skipped when the current access is applied to the same cache line that was accessed in the preceding access.
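The optimization in this abstract amounts to memoizing the result of the last tag-RAM lookup: if the next access falls on the same cache line, the saved hit/miss state is reused and the tag RAM is not accessed. A minimal sketch under assumed names, tracking only the set field and hit result:

```python
class TagRamController:
    """Illustrative sketch: skip the tag-RAM lookup when the current
    access targets the same cache line as the immediately preceding one."""

    def __init__(self, lookup):
        self.lookup = lookup     # real tag-RAM lookup: set_field -> hit?
        self.last = None         # (set_field, hit) of the preceding access
        self.lookups = 0         # count of actual tag-RAM accesses

    def access(self, set_field):
        if self.last is not None and self.last[0] == set_field:
            return self.last[1]  # same line: reuse state, skip the tag RAM
        self.lookups += 1
        hit = self.lookup(set_field)
        self.last = (set_field, hit)
        return hit
```

Skipping the lookup saves both the tag-RAM access latency and its power cost on line-sequential access patterns.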
Abstract: A data storage system is constructed to rapidly respond to a backup request by streaming backup data from primary storage to tape. It is desirable to permit the data to be removed from the primary storage at a faster rate than it can be written to tape. The backup data is buffered in a memory buffer, and when the memory buffer becomes substantially full, a portion of the backup data is buffered in disk storage. When the memory buffer becomes substantially empty, the portion of the backup data in the disk storage is written to tape. In a preferred embodiment, the memory buffer is in random access memory of a data mover computer that transfers the backup data from primary storage to a tape library unit. When the memory buffer becomes full, the data mover stores the overflow in a cached disk storage subsystem. When the memory buffer becomes empty, the data mover retrieves the overflow from the cached disk storage subsystem and transmits the overflow to the tape library unit.
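The two-tier staging in this abstract (memory buffer first, disk overflow when it fills, replay to tape when it drains) can be sketched with two queues. Capacities, the drain trigger, and all names are illustrative assumptions, not the described data mover:

```python
from collections import deque

class BackupBuffer:
    """Illustrative sketch: backup data streams into a bounded memory
    buffer; overflow goes to disk, and on drain the disk overflow is
    replayed to tape after the memory buffer empties."""

    def __init__(self, mem_capacity):
        self.mem_capacity = mem_capacity
        self.memory = deque()
        self.disk = deque()          # stand-in for the cached disk subsystem

    def put(self, chunk):
        if len(self.memory) >= self.mem_capacity:
            self.disk.append(chunk)  # memory buffer full: overflow to disk
        else:
            self.memory.append(chunk)

    def drain_to_tape(self, tape):
        while self.memory:
            tape.append(self.memory.popleft())
        while self.disk:             # memory empty: replay the overflow
            tape.append(self.disk.popleft())
```

This lets the primary storage release backup data faster than the tape can absorb it, which is the stated goal of the design.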
Abstract: An apparatus for associating cache memories with processors within a multiprocessor data processing system is disclosed. The multiprocessor data processing system includes multiple processing units and multiple cache memories. Each of the cache memories includes a cache memory controller, and each cache memory controller includes a mode register. Each mode register has multiple processing unit fields, and each of the processing unit fields is associated with one of the processing units for indicating whether or not data from the associated processing unit should be cached by the cache memory associated with the corresponding cache memory controller.
Type:
Grant
Filed:
December 19, 2000
Date of Patent:
March 11, 2003
Assignee:
International Business Machines Corporation
Inventors:
Ravi Kumar Arimilli, James Stephen Fields, Jr., Sanjeev Ghai, Jody Bern Joyner
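The mode register described above reduces to one field per processing unit; a single-bit-per-unit encoding is the simplest reading. A minimal sketch, assuming that encoding (the patent's fields may be wider):

```python
def should_cache(mode_register, processing_unit):
    """Illustrative sketch: bit i of the mode register indicates whether
    data from processing unit i should be cached by this cache memory."""
    return bool(mode_register & (1 << processing_unit))
```

For example, a register value of 0b0101 would dedicate this cache to processing units 0 and 2 while bypassing it for units 1 and 3.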