Abstract: A system and method for providing applications with the ability to access an increased amount of memory. An application maps a specified address range in its (small) virtual memory space to a corresponding number of pages allocated thereto in (relatively large) physical memory. When the application accesses an address in that range in virtual memory, e.g., via a thirty-two-bit address, the mapping information is used to access the corresponding page currently pointed to in the physical memory, allowing access to significantly greater amounts of memory. Fine granularity of access (e.g., one page) is provided, along with fast remapping, cross-process security and coherency across multiple processors in a multiprocessor system. To this end, a memory manager maintains information related to the mapping of virtual addresses to physical pages, in order to verify remap requests and invalidate existing mappings from a virtual address to a previously mapped physical page.
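The remapping scheme this abstract describes (a small virtual window repointed at pages drawn from a much larger physical pool) can be sketched as follows. This is a minimal illustrative model, not the patented implementation; all names (`PhysicalPool`, `VirtualWindow`) and the page size are assumptions.

```python
# Illustrative sketch: a small virtual address window whose slots can be
# remapped onto pages of a much larger physical memory, as the abstract
# describes. Names and sizes are assumptions, not taken from the patent.

PAGE_SIZE = 4096

class PhysicalPool:
    """Models a large physical memory as a list of page-sized buffers."""
    def __init__(self, num_pages):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]

class VirtualWindow:
    """A small virtual range whose page slots can be remapped to
    arbitrary pages of the pool."""
    def __init__(self, pool, num_slots):
        self.pool = pool
        self.map = [None] * num_slots   # slot -> physical page index

    def remap(self, slot, phys_page):
        # A real memory manager would verify the request and invalidate
        # the old translation here, for security and coherency.
        self.map[slot] = phys_page

    def _translate(self, vaddr):
        slot, offset = divmod(vaddr, PAGE_SIZE)
        phys = self.map[slot]
        if phys is None:
            raise MemoryError("unmapped virtual page")
        return phys, offset

    def read(self, vaddr):
        phys, offset = self._translate(vaddr)
        return self.pool.pages[phys][offset]

    def write(self, vaddr, value):
        phys, offset = self._translate(vaddr)
        self.pool.pages[phys][offset] = value
```

After `remap`, the same virtual address reaches a different physical page, which is the mechanism that lets a 32-bit address range cover far more physical memory than it can name at once.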
Abstract: Systems and methods for recovering memory for computer systems are disclosed. A method for recovering physical memory within a computer system having a memory device is disclosed. The method includes detecting an event associated with the system, allocating memory based upon the detected event, and accessing at least one portion of physical memory in association with the allocated memory. Upon accessing physical memory, the method further includes deallocating at least one portion of physical memory, wherein the deallocated physical memory is operable to be used in association with the event.
Type:
Grant
Filed:
June 23, 2000
Date of Patent:
October 8, 2002
Assignee:
Dell Products L.P.
Inventors:
Roy W. Stedman, Gary D. Huber, Thomas Vrhel, Jr.
Abstract: An interface device provided on a motherboard, or with a memory control chip set, translates between a controller, intended to communicate with a packet based memory system, and a non-packet based memory system. Communications from a memory controller, intended to directly communicate with a RAMBUS RDRAM memory system, are translated for a memory system which does not comprise RAMBUS RDRAM. The interface device, or integrated circuit, is not located with the memory system. That is, the memory modules do not include the interface circuit. Instead, the interface device is located with the processor motherboard, or with the controller/bridge integrated circuit chip set, such that it is electrically located between a controller and main memory sockets.
Abstract: A method of dynamically switching mapping schemes for a cache includes a microprocessor, a first mapping scheme, a second mapping scheme and switching circuitry for switching between the first mapping scheme and the second mapping scheme. The microprocessor is in communication with the cache through the switching circuitry and stores information within the cache using one of the first mapping scheme and second mapping scheme. Also included is monitoring circuitry for determining whether instructions or load/store operations are using the cache. Further, the switching circuitry switches between the first mapping scheme and the second mapping scheme based on which of instructions and load/store operations is using the cache.
Abstract: In an information processing apparatus involving a cache accessed by INDEX and TAG addresses, accesses to the main memory include many accesses attributable to the local character of referencing, as well as write-back accesses attributable to the replacement of cache contents. Accordingly, high-speed accessing requires efficient assignment of the two kinds of accesses to banks of the DRAM. In assigning request addresses from the CPU to different banks of the DRAM, bank addresses of the DRAM are generated by an operation on the INDEX field and the TAG field, so that local accesses, whose INDEX varies, and write-back accesses, whose INDEX remains the same but whose TAG differs, can be assigned to different banks. High-speed accessing is made possible because accesses to the main memory can be assigned to separate banks.
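One common way to combine the INDEX and TAG fields into a bank address, consistent with what this abstract describes, is to XOR low TAG bits into the INDEX. The sketch below is illustrative; the field widths and the choice of XOR are assumptions, not details from the patent.

```python
# Illustrative bank-address generation in the spirit of the abstract:
# XOR-fold TAG bits into the INDEX so that a cache line and the line it
# evicts (same INDEX, different TAG) land in different DRAM banks,
# while consecutive lines (varying INDEX) also spread across banks.
# Field widths are assumptions.

OFFSET_BITS = 6     # 64-byte cache lines
INDEX_BITS = 10     # 1024 cache sets
BANK_BITS = 2       # 4 DRAM banks

def fields(addr):
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return index, tag

def bank(addr):
    index, tag = fields(addr)
    mask = (1 << BANK_BITS) - 1
    # Combining both fields spreads out both access patterns.
    return (index ^ tag) & mask
```

With this scheme, sequential local accesses rotate through the banks, and a write-back (same INDEX, TAG differing in its low bits) targets a different bank from the access that displaced it.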
Abstract: The present invention provides a method and apparatus for providing a battery-backed immediate write-back cache for an array of disk drives in a computer system. Cooperation between a new replacement controller and a survivor controller is enabled so that write-back cache operation can start immediately, rather than being dependent on the battery condition in the replacement controller. Protection of the data against a single point of failure is maintained. When a controller fails, a replacement controller is installed and battery state information is exchanged. If any battery backup meets a predetermined threshold, all of the controllers run in the write-back cache mode. However, if none of the battery backups meets the predetermined threshold, all of the controllers run in the write-through cache mode. Thus, the system does not need to wait for a replacement controller's battery backup to be reconditioned before the higher-speed write-back cache is used.
Type:
Grant
Filed:
June 23, 2000
Date of Patent:
August 20, 2002
Assignee:
International Business Machines Corporation
Inventors:
Michael E. Nielson, Thomas E. Richardson
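The mode decision in the abstract above reduces to a simple rule: run write-back if at least one controller's battery meets the threshold, otherwise fall back to write-through. A minimal sketch, where the threshold value and data representation are assumptions:

```python
# Sketch of the cache-mode policy described in the abstract: write-back
# is enabled if any battery backup meets the threshold; otherwise every
# controller runs write-through. The threshold value is an assumption.

THRESHOLD = 0.8  # assumed minimum acceptable charge fraction

def choose_cache_mode(battery_levels):
    """battery_levels: charge fraction of each controller's battery.
    Returns the mode that ALL controllers will run in."""
    if any(level >= THRESHOLD for level in battery_levels):
        return "write-back"
    return "write-through"
```

The point of the `any` (rather than `all`) is exactly the abstract's claim: a freshly installed replacement controller with a depleted battery does not force the array out of write-back mode, because the survivor's battery still protects the cached data.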
Abstract: A low-latency scratchpad RAM memory system is disclosed. The scratchpad RAM memory system can be accessed in parallel to a primary cache. Parallel access to the scratchpad RAM memory can be designed to be independent of a corresponding cache tag RAM, thereby enabling the scratchpad RAM memory to be sized to any specification, independent of the size of the primary cache data RAMs.
Abstract: A system and method are disclosed which determine in parallel for multiple levels of a multi-level cache whether any one of such multiple levels is capable of satisfying a memory access request. Tags for multiple levels of a multi-level cache are accessed in parallel to determine whether the address for a memory access request is contained within any of the multiple levels. For instance, in a preferred embodiment, the tags for the first level of cache and the tags for the second level of cache are accessed in parallel. Also, additional levels of cache tags up to N levels may be accessed in parallel with the first-level cache tags. Thus, by the end of the access of the first-level cache tags it is known whether a memory access request can be satisfied by the first-level, second-level, or any additional N-levels of cache that are accessed in parallel.
Type:
Grant
Filed:
February 9, 2000
Date of Patent:
July 30, 2002
Assignee:
Hewlett-Packard Company
Inventors:
Terry L Lyon, Eric R DeLano, Dean A. Mulla
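The parallel multi-level tag lookup described two entries above can be modelled as probing every cache level's tags for the same address and reporting the lowest level that hits. The sketch below models the levels as direct-mapped and the "parallel" probe sequentially; both are simplifying assumptions.

```python
# Sketch of parallel tag lookup across N cache levels (assumed
# direct-mapped for simplicity): all levels' tags are probed for the
# same address, so by the end of the probe it is known which level,
# if any, can satisfy the request.

class CacheLevel:
    def __init__(self, num_sets, offset_bits=6):
        self.num_sets = num_sets
        self.offset_bits = offset_bits
        self.tags = [None] * num_sets

    def probe(self, addr):
        line = addr >> self.offset_bits
        return self.tags[line % self.num_sets] == line // self.num_sets

    def fill(self, addr):
        line = addr >> self.offset_bits
        self.tags[line % self.num_sets] = line // self.num_sets

def lookup(levels, addr):
    """Probe every level 'in parallel' (modelled sequentially here) and
    return the index of the lowest hitting level, or None on a miss."""
    hits = [lvl.probe(addr) for lvl in levels]   # conceptually parallel
    for i, hit in enumerate(hits):
        if hit:
            return i
    return None
```

In hardware the benefit is latency: the outcome for every level is available as soon as the first-level tag access completes, instead of walking the hierarchy level by level.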
Abstract: A processor that accesses a plurality of regions allocated to memory includes: a judging unit for judging which region is accessed based on an access address; an assuming unit for assuming which region is accessed based on the access address, the assuming unit producing an assumption result faster than the judging unit produces a judgement result; an accessing unit for starting access based on the assumption result; a detecting unit for detecting a disagreement between the judgement result and the assumption result; and a control unit for stopping the access that has been started if the detecting unit has detected the disagreement, and controlling the accessing unit to perform another access based on the judgement result.
Type:
Grant
Filed:
June 22, 2000
Date of Patent:
July 23, 2002
Assignee:
Matsushita Electric Industrial Co., Ltd.
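The assume-then-verify flow in the entry above (a fast guess starts the access, a slower exact decode confirms or cancels it) can be sketched as below. The region map, the single-bit predictor, and the logging are all illustrative assumptions; the predictor is deliberately wrong for part of the address space so the cancel path is exercised.

```python
# Sketch of the assume/judge scheme: the fast "assuming unit" tests one
# high address bit and the access starts on that guess; the slower
# "judging unit" decodes the exact region map and, on disagreement,
# stops the speculative access and restarts it. Regions are assumed.

REGIONS = [(0x0000, 0x9FFF, "RAM"), (0xA000, 0xFFFF, "MMIO")]

def judge(addr):
    """Slow, exact region decode."""
    for lo, hi, name in REGIONS:
        if lo <= addr <= hi:
            return name
    raise ValueError("unmapped address")

def assume(addr):
    """Fast guess from one high bit; wrong for 0x8000-0x9FFF."""
    return "MMIO" if addr >= 0x8000 else "RAM"

def access(addr, log):
    guess = assume(addr)
    log.append(f"start {guess}")        # access begins on the guess
    actual = judge(addr)
    if actual != guess:                 # disagreement detected
        log.append(f"cancel, restart {actual}")
    return actual
```

The win is the common case: when the guess is right, the access has already begun before the exact decode finishes; only the rare disagreement pays the cancel-and-restart cost.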
Abstract: A method for protecting data recorded on an original data storage medium against unauthorized copying includes receiving data encoded in accordance with an applicable recording standard, and altering a portion of the encoded data such that the altered data are identified as erroneous according to the standard. The data are recorded on the medium, together with ancillary data which are used by a processor in an intended application of the medium to operate upon the altered portion of the data such that the application runs in a manner substantially unaffected by the alteration of the data. Upon unauthorized copying of the data, however, the ancillary data are ineffective in correcting the altered portion of the encoded data, so that the alteration causes a substantially unrecoverable error in an unauthorized copy of the original medium.
Type:
Grant
Filed:
August 9, 1999
Date of Patent:
July 23, 2002
Assignee:
Midbar Tech (1998) Ltd.
Inventors:
Patrice Sinquin, Philippe Selve, Ran Alcalay
Abstract: The basic idea of the present invention is to provide a translation lookaside buffer (TLB) arrangement which advantageously uses two buffers, a small first-level TLB1 and a larger second-level TLB2. The second-level TLB feeds address information to the first-level TLB when the desired virtual address is not contained in the first-level TLB. According to the invention, the second-level TLB advantageously comprises two n-way set-associative sub-units, of which one, a higher-level unit, covers some higher address translation levels, and the other, a lower-level unit, covers some lower address translation levels. According to the present invention, some address information holds a number of middle-level virtual address (MLVA) bits, e.g., 8 bits, which can serve as an index address covering the address range of the higher-level sub-unit.
Type:
Grant
Filed:
February 11, 2000
Date of Patent:
July 9, 2002
Assignee:
International Business Machines Corporation
Inventors:
Ute Gaertner, John MacDougall, Erwin Pfeffer, Kerstin Schelm
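The TLB1/TLB2 refill relationship in the entry above can be modelled with a toy two-level TLB: a small LRU first level backed by a larger second level that feeds it on a miss. Sizes, the LRU policy, and the flat (non-set-associative) second level are simplifying assumptions, not the patent's structure.

```python
# Toy two-level TLB in the spirit of the abstract: a small TLB1 backed
# by a larger TLB2; on a TLB1 miss, TLB2 feeds the translation into
# TLB1. Sizes and the LRU eviction policy are assumptions.

from collections import OrderedDict

class TwoLevelTLB:
    def __init__(self, l1_size=4, l2_size=64):
        self.l1 = OrderedDict()   # virtual page -> physical frame (LRU)
        self.l2 = {}
        self.l1_size = l1_size
        self.l2_size = l2_size

    def install(self, vpage, pframe):
        self.l2[vpage] = pframe   # e.g., a page-walk result lands in TLB2

    def translate(self, vpage):
        if vpage in self.l1:               # TLB1 hit
            self.l1.move_to_end(vpage)
            return self.l1[vpage], "l1"
        if vpage in self.l2:               # TLB1 miss: TLB2 feeds TLB1
            if len(self.l1) >= self.l1_size:
                self.l1.popitem(last=False)   # evict LRU entry
            self.l1[vpage] = self.l2[vpage]
            return self.l2[vpage], "l2"
        return None, "miss"                # full miss -> page table walk
```

A repeated translation of the same page thus hits in the small, fast first level; only the first touch after eviction pays the second-level latency.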
Abstract: An information storage controller comprises two memory groups. One memory group is used for external communication, while the other memory group communicates with a plurality of magnetic disks, so that work in each memory group is performed in parallel. Information is transmitted between the external equipment and the magnetic disk by a switch means. Further, the information storage controller comprises a redundant information generation circuit for reading information from a plurality of memories with an exclusive circuit which simultaneously generates redundant information and verifies and recovers data. Thus, a high speed information storage controller connected to a plurality of magnetic disks can be provided.
Type:
Grant
Filed:
February 22, 1996
Date of Patent:
July 9, 2002
Assignee:
Matsushita Electric Industrial Co., Ltd.
Abstract: This invention is a data processing system including a central processing unit executing program instructions to manipulate data, at least one level one cache, a level two unified cache, a directly addressable memory and a direct memory access unit adapted for connection to an external memory. A superscalar memory transfer controller schedules plural non-interfering memory movements to and from the level two unified cache and the directly addressable memory each memory cycle in accordance with a predetermined priority of operation. The level one cache preferably includes a level one instruction cache and a level one data cache. The superscalar memory transfer controller is capable of scheduling plural cache tag memory read accesses and one cache tag memory write access in a single memory cycle. The superscalar memory transfer controller is capable of scheduling plural cache access state machines in a single memory cycle.
Type:
Grant
Filed:
June 26, 2000
Date of Patent:
June 18, 2002
Assignee:
Texas Instruments Incorporated
Inventors:
Charles L. Fuoco, Sanjive Agarwala, David A. Comisky, Christopher L. Mobley
Abstract: A configuration managing part stores the number of storage devices and respective identifying information. An empty area managing part stores empty-block addresses for each storage device. A destination determining part recognizes the number of storage devices with the information stored by the configuration managing part, and fetches one or more empty-block addresses for specifying empty storage areas included in each storage device from the empty area managing part. The destination determining part then repeatedly rearranges the empty-block addresses at random to determine destination of data.
Type:
Grant
Filed:
November 8, 1999
Date of Patent:
June 18, 2002
Assignee:
Matsushita Electric Industrial Co., Ltd.
Inventors:
Yukiko Ito, Tsutomu Tanaka, Masaaki Tamai, Shinzo Doi
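The destination-determining step in the entry above (fetch empty-block addresses per device, rearrange them at random, assign data to the result) can be sketched as follows. The data structures, the one-block-per-device simplification, and the seeded RNG are all assumptions for illustration.

```python
# Sketch of the destination-determining idea: gather one empty-block
# address per storage device, shuffle them at random, and assign data
# chunks to the shuffled destinations. Data layout is an assumption.

import random

def determine_destinations(empty_blocks_per_device, num_chunks, seed=None):
    """empty_blocks_per_device: {device_id: [free block addresses]}.
    Returns a (device_id, block_addr) destination for each chunk."""
    rng = random.Random(seed)
    candidates = [(dev, blocks.pop(0))
                  for dev, blocks in empty_blocks_per_device.items()
                  if blocks]
    rng.shuffle(candidates)   # rearrange the empty blocks at random
    # Cycle through the shuffled candidates for each chunk of data.
    return [candidates[i % len(candidates)] for i in range(num_chunks)]
```

Randomizing the order spreads successive chunks across devices instead of filling one device first, which is the load-distribution effect the abstract is after.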
Abstract: A method and apparatus for managing cache memory is described. The invention improves the efficiency of cache usage by monitoring parameters of multiple caches, for example, empty space in each cache or the number of cache misses of each cache, and selectively assigns elements of data or results to a particular cache based on the monitored parameters. Embodiments of the invention can track absolute values of the monitored parameters or can track values of the monitored parameters of one cache relative to one or more other caches. Embodiments of the invention may be scaled to accommodate larger numbers of caches at a particular cache level and may be implemented among multiple cache levels.
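One concrete instance of the monitored-parameter assignment just described is to place each new element in whichever cache currently has the most empty space. The sketch below uses that single parameter; the class and function names are illustrative, and a real implementation could equally monitor miss counts or relative values, as the abstract notes.

```python
# Sketch of parameter-driven cache assignment: each new element goes to
# the cache with the most free space, one of the monitored parameters
# the abstract mentions. Names and the policy choice are assumptions.

class MonitoredCache:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.entries = {}

    def free_space(self):
        return self.capacity - len(self.entries)

def assign(caches, key, value):
    """Place (key, value) in the cache with the most empty space and
    return that cache's name."""
    target = max(caches, key=lambda c: c.free_space())
    target.entries[key] = value
    return target.name
```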
Abstract: A memory controller detects an approaching end of a currently open page during an access operation for a particular data stream. In response to detecting the approaching end of the currently open page, and if the particular data stream is of a predetermined type, such as an isochronous data stream, the memory controller speculatively opens a next page in the memory.
Type:
Grant
Filed:
May 11, 2000
Date of Patent:
April 30, 2002
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Geoffrey S. S. Strongin, Qadeer Ahmad Qureshi
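The speculative page-open behaviour in the entry above can be sketched as a controller that, for an isochronous stream, opens the next DRAM page once the access nears the end of the current one. The page size, the trigger threshold, and the open-page bookkeeping are assumptions.

```python
# Sketch of speculative page opening for an isochronous stream: when an
# access lands near the end of the open DRAM page, the next page is
# opened ahead of time. Page size and threshold are assumptions.

PAGE_SIZE = 2048
THRESHOLD = 64    # bytes left in the page that trigger speculation

class StreamAwareController:
    def __init__(self):
        self.open_pages = set()

    def access(self, addr, isochronous=False):
        page = addr // PAGE_SIZE
        self.open_pages.add(page)
        remaining = PAGE_SIZE - (addr % PAGE_SIZE)
        if isochronous and remaining <= THRESHOLD:
            self.open_pages.add(page + 1)   # speculatively open next page
            return "speculative-open"
        return "normal"
```

Because an isochronous stream advances through memory predictably, the next page is very likely to be needed, so opening it early hides the page-activation latency at the page boundary.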
Abstract: The present invention provides a shared instruction cache for multiple processors. In one embodiment, an apparatus for a microprocessor includes a shared instruction cache for a first processor and a second processor, and a first register index base for the first processor and a second register index base for the second processor. The apparatus also includes a first memory address base for the first processor and a second memory address base for the second processor. This embodiment allows for segmentation of register files and main memory based on which processor is executing a particular instruction (e.g., an instruction that involves a register access and a memory access).
Abstract: In the data processing apparatus and file management method therefor of the present invention, when each piece of data in one of plural blocks is recorded as a data portion on a disk medium, continuous ID numbers are assigned to the individual continuous blocks, each of the assigned ID numbers is stored in the ID portion of sub-code portion, address information indicating the recording position of the head of at least the next block is stored in the link portion of the sub-code portion, and the sub-code portion is recorded together with the data portion of the block on the disk medium.
Type:
Grant
Filed:
June 22, 1999
Date of Patent:
April 23, 2002
Assignee:
Matsushita Electric Industrial Co., Ltd.
Abstract: A system and process for invalidating addresses within a cache memory system is described. The system and process allow a set-associative cache memory system in a computer to simultaneously analyze all sets corresponding to a particular address to determine whether the data at a particular address needs to be written back to the computer's main memory. A set of flags is stored with each address in the cache memory so that the flags can be scrutinized to determine whether the data stored in that set is valid, but not modified.
Abstract: A method of managing memory includes providing a memory (190) partitioned into a memory tree having N memory levels and different numbers of memory nodes (100, 110, 111, 120, 121, 122, 123, 130, 131, 132, 133, 134, 135, 136, 137) at each of the memory levels, providing a memory request, determining a memory request size, recursively searching partially-full subtrees within the memory tree to identify an empty one of the memory nodes minimizing fragmentation within the memory tree, and allocating the memory request to the empty one of the memory nodes.
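The recursive search just described (prefer partially-full subtrees so large empty subtrees stay intact) can be sketched as a simplified buddy-style binary tree. This is an illustrative model under assumed structure, not the patent's exact algorithm, and the node numbering from the abstract is not reproduced.

```python
# Sketch of the tree search described in the abstract: memory is a
# binary tree of nodes; a request at a given level is placed by
# recursively preferring partially-full subtrees, so whole empty
# subtrees remain available for larger requests. Simplified model.

class Node:
    def __init__(self, level):
        self.level = level          # 0 = leaf (smallest block)
        self.allocated = False
        self.children = ([] if level == 0
                         else [Node(level - 1), Node(level - 1)])

    def is_empty(self):
        return not self.allocated and all(c.is_empty() for c in self.children)

    def is_full(self):
        if self.allocated:
            return True
        return bool(self.children) and all(c.is_full() for c in self.children)

def allocate(node, level):
    """Place a request of the given level; return the node used, or None."""
    if node.level == level:
        if node.is_empty():
            node.allocated = True
            return node
        return None
    if node.allocated or node.level < level:
        return None
    # Visit partially-full children before empty ones (False < True),
    # which is the fragmentation-minimizing preference.
    for child in sorted(node.children, key=lambda c: c.is_empty()):
        if not child.is_full():
            result = allocate(child, level)
            if result:
                return result
    return None
```

With this preference, two small requests land in the same subtree, leaving the sibling subtree wholly empty and still usable for a request one level larger.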