No-cache Flags Patents (Class 711/139)
  • Patent number: 6934812
    Abstract: A media player and a method for operating a media player are disclosed. A media program is able to begin playing substantially immediately after a media play selection has been made. Through intelligent operation, the media program is able to start playing even before it has been substantially or completely loaded from disk storage into semiconductor memory (i.e., cache memory). Additionally, the media program can be loaded into semiconductor memory through use of a background process without disturbing the playing of the media program. Further, if desired, the disk storage is able to be aggressively “powered off” when not being accessed, thereby enhancing battery life when the player is battery-powered.
    Type: Grant
    Filed: April 5, 2002
    Date of Patent: August 23, 2005
    Assignee: Apple Computer, Inc.
    Inventors: Jeffrey L. Robbin, Ned K. Holbrook, Steven Bollinger
  • Patent number: 6865650
    Abstract: A system and method for storing data, the system having one or more storage devices, caches data from a sender into a first random-access structure located in a first cache level, caches data from the first cache level into a log structure located in a second cache level, and stores data from CL into a second random-access structure located in a storage level, wherein CL is the first cache level or the second cache level. In further embodiments of the invention, the second cache level caches in the log structure parity data for the data cached in the log structure. In a still further embodiment of the invention, the storage level stores in the second random-access structure parity data for the data stored in the second random-access structure.
    Type: Grant
    Filed: September 29, 2000
    Date of Patent: March 8, 2005
    Assignee: EMC Corporation
    Inventors: Steve Morley, Robert C. Solomon, David DesRoches, John Percy
  • Publication number: 20040186961
    Abstract: The present invention provides a technique of controlling cache operation on a node device in a computer system that enables transmission and receipt of data between clients and a storage device via the node device. In accordance with a first control method, the data stored in the storage device includes attribute data indicating whether or not the data is cacheable, which enables the node device to relay non-cacheable data without caching it. In accordance with a second control method, the node device encrypts the data when caching it on the disk. In accordance with a third control method, non-cacheable data is transmitted and received directly, without going through the node device. These methods restrict what the node device may cache and thereby ensure security.
    Type: Application
    Filed: September 17, 2003
    Publication date: September 23, 2004
    Inventors: Shinji Kimura, Satoshi Oshima, Hisashi Hashimoto
  • Publication number: 20040039886
    Abstract: An apparatus, program product and method utilize a cache payback parameter for selectively and dynamically disabling caching for potentially cacheable operations performed in connection with a memory. The cache payback parameter is tracked concurrently with the performance of a plurality of cacheable operations on a memory, and is used to determine the effectiveness, or potential payback, of caching in a particular implementation or environment. The selective disabling of caching, in turn, is applied at least to future cacheable operations based upon a determination that the cache payback parameter meets a caching disable threshold.
    Type: Application
    Filed: August 26, 2002
    Publication date: February 26, 2004
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Armin Harold Christofferson, Leon Edward Gregg, James Lawrence Tilbury
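    A minimal sketch of this mechanism in C, with hypothetical names (cache_stats, PAYBACK_DISABLE_THRESHOLD, and MIN_SAMPLES are illustrative, not from the application): the payback parameter is modeled as a running hit ratio tracked concurrently with normal operations, and caching is disabled for future cacheable operations once it meets the disable threshold.

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical payback tracker: the "cache payback parameter" is
       * modeled as a hit ratio over the cacheable operations seen so far. */
      struct cache_stats {
          uint64_t hits;
          uint64_t misses;
          bool     caching_enabled;
      };

      #define PAYBACK_DISABLE_THRESHOLD 0.05  /* assumed: disable below 5% hits */
      #define MIN_SAMPLES 1024                /* don't judge on too few samples */

      /* Record the outcome of one cacheable operation and re-evaluate. */
      void record_access(struct cache_stats *s, bool hit)
      {
          if (hit) s->hits++; else s->misses++;

          uint64_t total = s->hits + s->misses;
          if (total < MIN_SAMPLES)
              return;
          /* Selectively disable caching for future cacheable operations
           * once the measured payback meets the disable threshold. */
          if ((double)s->hits / (double)total < PAYBACK_DISABLE_THRESHOLD)
              s->caching_enabled = false;
      }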
  • Patent number: 6681297
    Abstract: A digital system is provided with several processors (1302), a shared level two (L2) cache (1300) having several segments per entry with associated tags, and a level three (L3) physical memory. Each tag entry includes a task-ID qualifier field and a resource-ID qualifier field. Data is loaded into various lines in the cache when a given cache access request misses. After loading data into the cache in response to a miss, the tag associated with the data line is set to a valid state, and qualifier values are stored in the qualifier fields of the tag. Each qualifier value specifies a usage characteristic of the data stored in the associated data line of the cache, such as a task ID. A miss counter (532) counts each miss, and a monitoring task (1311) determines a miss rate for memory requests. If a selected miss-rate threshold value is exceeded, the digital system is reconfigured in order to reduce the miss rate.
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: January 20, 2004
    Assignee: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Dominique D'Inverno, Serge Lasserre
  • Patent number: 6574708
    Abstract: A cache is coupled to receive an access which includes a cache allocate indication. If the access is a miss in the cache, the cache either allocates a cache block storage location to store the cache block addressed by the access or does not allocate a cache block storage location in response to the cache allocate indication. In one implementation, the cache is coupled to an interconnect with one or more agents. In such an implementation, the cache accesses may be performed in response to transactions on the interconnect, and the transactions include the cache allocate indication. Thus, the source of a cache access specifies whether or not to allocate a cache block storage location in response to a miss by the cache access. The source may use a variety of mechanisms for generating the cache allocate indication.
    Type: Grant
    Filed: May 18, 2001
    Date of Patent: June 3, 2003
    Assignee: Broadcom Corporation
    Inventors: Mark D. Hayter, Joseph B. Rowlands
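    A sketch of the allocate-indication idea from the abstract above, in C, under stated assumptions (a direct-mapped software model; fetch_from_memory is a hypothetical backing-store read; out holds a full line): on a miss, a line is allocated only if the access carried the allocate indication, so the source of the access decides.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      #define NUM_LINES 256
      #define LINE_SIZE 64

      struct cache_line {
          bool     valid;
          uint64_t tag;
          uint8_t  data[LINE_SIZE];
      };

      static struct cache_line cache[NUM_LINES];

      /* Hypothetical backing-store fetch: fills buf with one full line. */
      extern void fetch_from_memory(uint64_t line_addr, uint8_t *buf);

      /* The access carries a cache-allocate indication: on a miss, the
       * cache allocates a block storage location only if asked to. */
      void cache_access(uint64_t addr, uint8_t out[LINE_SIZE], bool allocate)
      {
          uint64_t tag   = addr / LINE_SIZE;
          size_t   index = tag % NUM_LINES;
          struct cache_line *line = &cache[index];

          if (line->valid && line->tag == tag) {         /* hit */
              memcpy(out, line->data, LINE_SIZE);
              return;
          }
          fetch_from_memory(addr & ~(uint64_t)(LINE_SIZE - 1), out);
          if (allocate) {                    /* honor the indication on miss */
              line->valid = true;
              line->tag   = tag;
              memcpy(line->data, out, LINE_SIZE);
          }
      }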
  • Publication number: 20030097394
    Abstract: A digital system and method of operation is provided in which several processors (440, 450) are connected to a shared memory resource (460). Translation lookaside buffers (TLB) (400, 402) are connected to receive a request address (404a-n) from each respective processor. Each TLB has a set of entries that correspond to pages of address space. Each entry provides a set of task memory attributes (TMA) (412a-n) for the associated page of address space. Task memory attributes are defined by a task control block associated with a currently executing task. For each memory transfer request, the TLB accesses an entry corresponding to the request address and provides a translated physical memory address and a task memory attribute value associated with that requested address space page. Functional circuitry (470) performs pre/post-processing on data that is being transferred between a processor and the memory in accordance with the task memory attribute value provided by the TLB with each memory transfer request.
    Type: Application
    Filed: May 29, 2002
    Publication date: May 22, 2003
    Inventors: Gerard Chauvel, Serge Lasserre, Edward E. Ferguson
  • Patent number: 6415362
    Abstract: A method and system for performing write-through store operations of valid data of varying sizes in a data processing system, where the data processing system includes multiple processors coupled to an interconnect through a memory hierarchy, where the memory hierarchy includes multiple levels of cache, and where at least one lower level of the multiple levels of cache requires store operations of all valid data of at least a predetermined size. First, it is determined whether or not a write-through store operation is a cache hit in a higher level of the multiple levels of cache. In response to a determination that a cache hit has occurred in the higher level of cache, the write-through store operation is merged with data read from the higher level of cache to provide a merged write-through operation of all valid data of at least the predetermined size to a lower level of cache.
    Type: Grant
    Filed: April 29, 1999
    Date of Patent: July 2, 2002
    Assignees: International Business Machines Corporation, Motorola, Inc.
    Inventors: James Nolan Hardage, Alexander Edward Okpisz, Thomas Albert Petersen
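    A sketch of the merge step in C (the names and the 32-byte minimum store size are assumptions, not from the patent): when the narrow write-through store hits in the higher-level cache, its line is read, the store bytes are overlaid, and the full-width result is what gets forwarded down.

      #include <stdint.h>
      #include <string.h>

      #define MIN_STORE 32  /* assumed size the lower-level cache requires */

      /* Merge a narrow write-through store (size bytes at offset) into the
       * line read from the higher-level cache, producing a store of all
       * valid data of at least the predetermined size for the lower level.
       * Caller guarantees offset + size <= MIN_STORE. */
      void merge_write_through(const uint8_t line_from_upper[MIN_STORE],
                               const uint8_t *store_data,
                               unsigned offset, unsigned size,
                               uint8_t merged_out[MIN_STORE])
      {
          memcpy(merged_out, line_from_upper, MIN_STORE); /* cached data   */
          memcpy(merged_out + offset, store_data, size);  /* new bytes win */
      }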
  • Patent number: 6415360
    Abstract: A processor employs an SMC check apparatus. The SMC check apparatus may minimize the number of explicit SMC checks performed for non-cacheable stores. Cacheable stores may be handled using any suitable mechanism. For non-cacheable stores, the processor tracks whether or not the in-flight instructions are cached. Upon encountering a non-cacheable store, the processor inhibits an SMC check if the in-flight instructions are cached. Since, for performance reasons, the code stream is often cached, non-cacheable stores may frequently be able to skip an explicit, complex, and time consuming SMC check. Performance of non-cacheable stores (and memory throughput overall) may be increased. The handling of non-cacheable stores as described herein may be particularly beneficial to video data manipulations, which may frequently be of a non-cacheable memory type and which may be important to the overall performance of a computer system.
    Type: Grant
    Filed: May 18, 1999
    Date of Patent: July 2, 2002
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William Alexander Hughes, William Kurt Lewchuk, Gerald D. Zuraski, Jr.
  • Patent number: 6397382
    Abstract: A method and system of monitoring code as it is executed by a target processor is provided for debugging, etc. Standardized software code function preamble and postamble instructions are dynamically replaced with instructions that will generate a predetermined exception. The exception generates a branch to a conventional exception vector table. An exception routine is inserted into the vector table, and includes instruction(s) to disable the data and/or address caches. Subsequent instructions in the vector table execute the replaced preamble instruction and, with or without re-enabling the cache, branch back to the address of the program code immediately following the faulted preamble address. Instructions of the function executed while cache is disabled are executed on the bus where they are visible, as opposed to within cache.
    Type: Grant
    Filed: May 12, 1999
    Date of Patent: May 28, 2002
    Assignee: Wind River Systems, Inc.
    Inventor: Peter S. Dawson
  • Patent number: 6389527
    Abstract: The present invention comprises an LSU which executes load/store instructions. The LSU includes a DCACHE, which temporarily stores data read from and written to external memory; an SPRAM, used for specific purposes other than caching; and an address generator, which generates virtual addresses for access to the DCACHE and the SPRAM. Because the SPRAM can load and store data through the LSU pipeline and exchanges data with external memory through DMA transfers, the present invention is especially suited to high-speed processing of large amounts of data, such as image data. Because the LSU can access the SPRAM with the same latency as the DCACHE, data stored in external memory can be transferred to the SPRAM and processed there, handling large amounts of data in less time than directly accessing the external memory would require.
    Type: Grant
    Filed: February 8, 1999
    Date of Patent: May 14, 2002
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Michael Raam, Toru Utsumi, Takeki Osanai, Kamran Malik
  • Patent number: 6360300
    Abstract: A system and method for organizing compressed data and uncompressed data in a storage system. The method and system include a compressor for compressing a data block into a compressed data block, wherein N represents a compression ratio. The storage disk includes a first disk partition having N slots for storing compressed data, and a second disk partition for storing uncompressed data. A portion of the N slots in the first partition include address pointers for pointing to locations in the second disk partition containing the uncompressed data.
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: March 19, 2002
    Assignee: International Business Machines Corporation
    Inventors: Brian Jeffrey Corcoran, Shanker Singh
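    A sketch of the slot layout described above, in C (the field names and the 512-byte slot size are illustrative assumptions): each of the N slots in the compressed partition holds either a block that compressed within the ratio or an address pointer to the block's location in the uncompressed partition.

      #include <stdbool.h>
      #include <stdint.h>

      struct slot {
          bool is_pointer;                /* block didn't compress well     */
          union {
              uint8_t  compressed[512];   /* compressed data, ratio N       */
              uint64_t uncompressed_lba;  /* location in second partition   */
          } u;
      };

      /* Hypothetical helpers for the two partitions. */
      extern void read_compressed(const uint8_t *src, uint8_t *dst);
      extern void read_uncompressed(uint64_t lba, uint8_t *dst);

      void read_block(const struct slot *s, uint8_t *out)
      {
          if (s->is_pointer)
              read_uncompressed(s->u.uncompressed_lba, out); /* follow ptr */
          else
              read_compressed(s->u.compressed, out);         /* decompress */
      }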
  • Patent number: 6345320
    Abstract: A system includes a main-memory unit, an input/output-control unit which performs a write operation with respect to the main-memory unit by way of direct memory access, and a central-control unit which operates based on information stored in the main-memory unit, the central-control unit including a cache memory which temporarily stores some of the information, and a DMA buffer which temporarily stores a DMA address indicated by the direct memory access.
    Type: Grant
    Filed: October 1, 1998
    Date of Patent: February 5, 2002
    Assignee: Fujitsu Limited
    Inventors: Shigeaki Kawamata, Atsushi Yoshioka
  • Patent number: 6343345
    Abstract: A cache blocking mechanism ensures that transient data is not stored in a secondary cache of a router by managing a designated set of buffers in a main memory of the router. The mechanism defines a window of virtual addresses that map to predetermined physical memory addresses associated with the set of buffers; in the illustrative embodiment, only transient data may be stored in these buffers. The mechanism further blocks write requests directed to these predetermined memory buffers from propagating to the secondary cache, thereby precluding storage of transient data in the cache.
    Type: Grant
    Filed: July 13, 2000
    Date of Patent: January 29, 2002
    Assignee: Cisco Technology, Inc.
    Inventors: Stephen C. Hilla, Jonathan Rosen
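    The blocking test amounts to an address-window compare; a sketch in C with an assumed window placement and hypothetical helper names:

      #include <stdbool.h>
      #include <stdint.h>

      /* Assumed window of addresses mapping the transient-data buffers. */
      #define TRANSIENT_BASE 0x08000000UL
      #define TRANSIENT_SIZE 0x00100000UL

      static bool in_transient_window(uintptr_t addr)
      {
          return addr >= TRANSIENT_BASE &&
                 addr <  TRANSIENT_BASE + TRANSIENT_SIZE;
      }

      extern void write_bypassing_l2(uintptr_t addr, uint32_t val);
      extern void write_cached(uintptr_t addr, uint32_t val);

      /* Writes to the buffer window never propagate to the secondary
       * cache, so transient data is precluded from being stored there. */
      void store(uintptr_t addr, uint32_t val)
      {
          if (in_transient_window(addr))
              write_bypassing_l2(addr, val);
          else
              write_cached(addr, val);
      }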
  • Patent number: 6338119
    Abstract: A method and apparatus for improving direct memory access and cache performance utilizing a special Input/Output or “I/O” page, defined as having a large size (e.g., 4 kilobytes or 4 KB) but with distinctive cache line characteristics. For Direct Memory Access (DMA) reads, the first cache line in the I/O page may be accessed, by a Peripheral Component Interconnect (PCI) Host Bridge, as a cacheable read, and all other lines are non-cacheable accesses (DMA reads with no intent to cache). For DMA writes, the PCI Host Bridge accesses all cache lines as cacheable. The PCI Host Bridge maintains a cache snoop granularity of the I/O page size for data, meaning that if the Host Bridge detects a store (invalidate) type system bus operation on any cache line within an I/O page, cached data within that page is invalidated; the Level 1 and Level 2 (L1/L2) caches continue to treat all cache lines in the page as cacheable.
    Type: Grant
    Filed: March 31, 1999
    Date of Patent: January 8, 2002
    Assignee: International Business Machines Corporation
    Inventors: Gary Dean Anderson, Ronald Xavier Arroyo, Bradly George Frey, Guy Lynn Guthrie
  • Patent number: 6304944
    Abstract: A method and apparatus for improving the efficiency of the cacheability (and other attribute) determination by making the information from the region register available during linear to physical address translation, rather than serially upon completion of the address translation. Address range comparisons are made when the TLB is loaded. That is, attribute information stored in a region register or registers is compared with physical addresses corresponding to translations loaded in a translation lookaside buffer reload operation. The present invention thus advantageously removes the region register compare operation from the path to memory.
    Type: Grant
    Filed: November 27, 2000
    Date of Patent: October 16, 2001
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Michael Pedneau
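    A sketch of the reload-time compare in C (the region count, page size, and names are assumptions): the region-register compare happens once, when the TLB entry is filled, and the resulting attribute is stored in the entry, taking the compare off the path to memory.

      #include <stdbool.h>
      #include <stdint.h>

      struct region_reg {
          uint64_t base, limit;  /* physical range covered by the region */
          bool     cacheable;
      };

      struct tlb_entry {
          uint64_t vpn, pfn;
          bool     valid;
          bool     cacheable;    /* resolved once, at reload time */
      };

      #define NUM_REGIONS 4
      static struct region_reg regions[NUM_REGIONS];

      void tlb_fill(struct tlb_entry *e, uint64_t vpn, uint64_t pfn)
      {
          e->vpn = vpn;
          e->pfn = pfn;
          e->valid = true;
          e->cacheable = true;            /* assumed default attribute */
          uint64_t paddr = pfn << 12;     /* 4 KB pages assumed */
          for (int i = 0; i < NUM_REGIONS; i++)
              if (paddr >= regions[i].base && paddr < regions[i].limit)
                  e->cacheable = regions[i].cacheable;
          /* Later lookups read e->cacheable directly; no serial compare. */
      }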
  • Patent number: 6272599
    Abstract: An apparatus and method using selectively controlled cache/no-cache bits to improve the real-time performance of applications run on computers having cache memory. By controlling the caching of different regions of address space, thrashing and other forms of interference in the cache memory are reduced, improving the worst-case execution time (WCET) performance of such applications.
    Type: Grant
    Filed: October 30, 1998
    Date of Patent: August 7, 2001
    Assignee: Lucent Technologies Inc.
    Inventor: G. N. Prasanna
  • Patent number: 6256710
    Abstract: Cache memory is managed to update the data stored in the cache regardless of whether the address being operated upon is designated as cache inhibited. In this way, the contents of the cache are coherent with main memory so that when the processor redesignates a noncacheable range of addresses to be cacheable, the cache does not need to be flushed. Read operations follow cache inhibit faithfully.
    Type: Grant
    Filed: April 28, 1995
    Date of Patent: July 3, 2001
    Assignee: Apple Computer, Inc.
    Inventors: Farid A. Yazdy, Michael Dhuey
  • Patent number: 6189074
    Abstract: A method and apparatus for improving the efficiency of the cacheability (and other attribute) determination by making the information from the region register available during linear to physical address translation, rather than serially upon completion of the address translation. Address range comparisons are made when the TLB is loaded. That is, attribute information stored in a region register or registers is compared with physical addresses corresponding to translations loaded in a translation lookaside buffer reload operation. The present invention thus advantageously removes the region register compare operation from the path to memory.
    Type: Grant
    Filed: March 19, 1997
    Date of Patent: February 13, 2001
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Michael Pedneau
  • Patent number: 6170030
    Abstract: An apparatus and method for restreaming data that has been queued in a bus bridging device. Data received via a first bus is stored in a first queue. A first portion of the data is output from the first queue onto a second bus while a second portion of the data remains in the first queue. In response to another data value being transferred from the first bus to the second bus before the second portion of the data is output to the second bus, the second portion of the data in the first queue is invalidated.
    Type: Grant
    Filed: January 23, 1998
    Date of Patent: January 2, 2001
    Assignee: Intel Corporation
    Inventor: Michael D. Bell
  • Patent number: 6167488
    Abstract: The present invention provides a stack management unit including a stack cache to accelerate data retrieval from a stack and data storage into the stack. In one embodiment, the stack management unit includes a stack cache, a dribble manager unit, and a stack control unit. The dribble manager unit maintains a cached stack portion, typically a top portion of the stack, in the stack cache. The stack cache includes a stack cache memory circuit, one or more read ports, and one or more write ports. The stack management unit also includes an overflow/underflow unit, which detects and resolves overflow conditions and underflow conditions. If an overflow occurs, the overflow/underflow unit suspends operation of the stack cache and causes the spill control unit to store the valid data words in the slow memory unit or data cache unit. After the valid data in the stack cache are saved, the overflow/underflow unit equates the cache bottom pointer to the optop pointer.
    Type: Grant
    Filed: March 31, 1997
    Date of Patent: December 26, 2000
    Assignee: Sun Microsystems, Inc.
    Inventor: Sailendra Koppala
  • Patent number: 6138216
    Abstract: A method is described of managing memory in a microprocessor system comprising two or more processors (40, 42). Each processor (40, 42) has a cache memory (44, 46), and the system has a system memory (48) divided into pages subdivided into blocks. The method is concerned with managing the system memory (48) by identifying areas thereof as being "cacheable", "non-cacheable" or "free". Safeguards are provided to ensure that blocks of system memory (48) cannot be cached by two different processors (40, 42) simultaneously.
    Type: Grant
    Filed: January 21, 1998
    Date of Patent: October 24, 2000
    Assignee: nCipher Corporation Limited
    Inventor: Ian Nigel Harvey
  • Patent number: 6105112
    Abstract: A method is disclosed of managing architectural operations in a computer system whose architecture includes components having varying coherency granule sizes. A queue is provided for receiving as entries a plurality of the architectural operations, and the entries of the queue are compared with a new architectural operation to determine if the new architectural operation is redundant with any of the entries. If the new architectural operation is not redundant with any of the entries, it is loaded into the queue. The computer system may include a cache having a processor granularity size and a system bus granularity size which is larger than the processor granularity size, and the architectural operations are cache instructions. The comparison may be performed in an associative manner based on the varying coherency granule sizes.
    Type: Grant
    Filed: April 14, 1997
    Date of Patent: August 15, 2000
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis, Derek Edward Williams
  • Patent number: 6021470
    Abstract: A method for selectively caching data in a computer network. Initially, data objects which are anticipated as being accessed only once or seldom accessed are designated as being exempt from being cached. When a read request is generated, the cache controller reads the requested data object from the cache memory if it currently resides in the cache memory. However, if the requested data object cannot be found in the cache memory, it is read from a mass storage device. Thereupon, the cache controller determines whether the requested data object is to be cached or is exempt from being cached. If the data object is exempt from being cached, it is loaded directly into a local memory and is not stored in the cache. This provides improved cache utilization because only objects that are used multiple times are entered in the cache. Furthermore, processing overhead is minimized by reducing unnecessary cache insertion and purging operations.
    Type: Grant
    Filed: March 17, 1997
    Date of Patent: February 1, 2000
    Assignee: Oracle Corporation
    Inventors: Richard Frank, Gopalan Arun, Richard Anderson, Rabah Mediouni, Stephen Klein
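    A sketch of the read path from the abstract above, in C (the descriptor layout and helper names are hypothetical): objects designated exempt bypass the cache entirely and go straight to local memory, so only multi-use objects incur insertion and purging costs.

      #include <stdbool.h>

      struct data_object {
          bool cache_exempt;  /* set when the object is anticipated to be
                               * accessed only once or seldom */
      };

      extern void *cache_lookup(int object_id);
      extern void *read_from_mass_storage(int object_id,
                                          struct data_object *meta);
      extern void  cache_insert(int object_id, void *data);

      void *read_object(int object_id, struct data_object *meta)
      {
          void *data = cache_lookup(object_id);
          if (data)
              return data;                   /* found in cache memory */
          data = read_from_mass_storage(object_id, meta);
          if (!meta->cache_exempt)
              cache_insert(object_id, data); /* cache multi-use data only */
          return data;                       /* exempt: local memory only */
      }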
  • Patent number: 5991819
    Abstract: A symmetric multiprocessor system constructed from industry standard commodity components together with an advanced dual-ported memory controller. The multiprocessor system comprises a processor bus; up to four Intel Pentium® Pro processors connected to the processor bus; an I/O bus; a system memory; and a dual-ported memory controller connected to the system memory, the dual-ported memory controller having a first port connected to the processor bus to manage processor to system memory transactions and a second port connected to the I/O bus to manage I/O transactions. Furthermore, two such systems can be connected together through a common I/O bus, thereby creating an eight-processor Pentium® Pro SMP system.
    Type: Grant
    Filed: December 3, 1996
    Date of Patent: November 23, 1999
    Assignee: Intel Corporation
    Inventor: Gene F. Young
  • Patent number: 5940858
    Abstract: For use in an x86-compatible processor having a cache, a circuit and method for setting a size of the cache and a computer system employing the circuit or the method. In one embodiment, the circuit includes: (1) multiple access circuitry dividing the cache into separate physically-addressable sectors and (2) sector disabling circuitry, coupled to the cache, that selectively allows at least one of the sectors to be disabled to decrease the size of the cache.
    Type: Grant
    Filed: May 30, 1997
    Date of Patent: August 17, 1999
    Assignee: National Semiconductor Corporation
    Inventor: Daniel W. Green
  • Patent number: 5933844
    Abstract: In a computer system comprising a CPU, a cache memory and a main memory, wherein the cache memory is virtually addressed and some of the virtual addresses are alias addresses of each other, a cache memory controller comprising cache control logic, a cache tag array, a memory management unit, and alias detection logic is provided. The cache control logic skips flushing of a cache line if the cache line corresponds to a memory block in a non-cacheable physical memory page, thereby avoiding unnecessary flushes and allowing the CPU to update the cache memory and the main memory using an improved write-through, no-write-allocate approach that reduces cache flushes.
    Type: Grant
    Filed: April 25, 1994
    Date of Patent: August 3, 1999
    Assignee: Sun Microsystems, Inc.
    Inventor: Mark Young
  • Patent number: 5893150
    Abstract: An efficient cache allocation scheme is provided for both uniprocessor and multiprocessor computer systems having at least one cache. In one embodiment, upon the detection of a cache miss, a determination of whether the cache miss is "avoidable" is made: in other words, whether the present cache miss would have occurred if the data had been cached previously and had remained in the cache. One example of an avoidable cache miss in a multiprocessor system having a distributed memory architecture is an excess cache miss. An excess cache miss is either a capacity miss or a conflict miss. A capacity miss is caused by the insufficient size of the cache. A conflict miss is caused by insufficient depth in the associativity of the cache. The determination of the excess cache miss involves tracking read and write requests for data by the various processors and storing some record of the read/write request history in a table or linked list. Data is cached only after an avoidable cache miss has occurred.
    Type: Grant
    Filed: July 1, 1996
    Date of Patent: April 6, 1999
    Assignee: Sun Microsystems, Inc.
    Inventors: Erik E. Hagersten, Mark D. Hill
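    The patent tracks read/write request history to classify misses; a rough sketch of the allocate-on-avoidable-miss policy in C (the single hashed history table is a simplification of the patent's table or linked-list record): a first-touch (cold) miss does not allocate, while a repeat miss on a block seen before, which would have hit had the block stayed cached, does.

      #include <stdbool.h>
      #include <stdint.h>

      #define HISTORY_SIZE 4096
      static uint64_t history[HISTORY_SIZE];
      static bool     seen[HISTORY_SIZE];

      /* Returns true if this block was requested before, so the present
       * miss is "avoidable": a capacity or conflict miss, not a cold one. */
      static bool seen_before(uint64_t block)
      {
          unsigned i = (unsigned)(block % HISTORY_SIZE);
          bool hit = seen[i] && history[i] == block;
          history[i] = block;
          seen[i] = true;
          return hit;
      }

      extern bool cache_probe(uint64_t block);  /* hypothetical */
      extern void cache_fill(uint64_t block);   /* hypothetical */

      void access_block(uint64_t block)
      {
          if (cache_probe(block))
              return;                 /* hit */
          if (seen_before(block))
              cache_fill(block);      /* avoidable miss: worth caching now */
      }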
  • Patent number: 5890216
    Abstract: In a computer system, a multi-port bus controller interposed between a CPU, system memory, and an expansion bus detects when a CPU access is to non-cacheable address space and begins a bus cycle to access the data before receiving a "miss" from a cache coupled to the CPU. By detecting non-cacheable address space independently and in parallel with the cache miss determination, the multi-port bus controller saves from one to three clock cycles in each bus cycle that accesses non-cacheable address space.
    Type: Grant
    Filed: August 28, 1997
    Date of Patent: March 30, 1999
    Assignee: International Business Machines Corporation
    Inventors: John E. Derrick, Christopher M. Herring
  • Patent number: 5835942
    Abstract: A cache is distributed among processors in a multiple processor system with no shared memory to maintain the cached data. Each processor maintains a cache which identifies the opened files being cached, the blocks of each file which are cached and the state of caching for each file. The state of each opened file is one of "no-caching", "read-caching" and "read/write caching". So long as only one processor opens a file, and opens it for read/write access, that processor is allowed to do read/write caching on the file. When a processor opens a file for read access, that processor is allowed to do read caching, unless another processor has the file open for read/write access. After the last processor having read/write access to a file closes the file, the disk system upgrades the cache state for the file.
    Type: Grant
    Filed: January 21, 1997
    Date of Patent: November 10, 1998
    Assignee: Tandem Computers Incorporated
    Inventor: Franco Putzolu
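    The per-file caching state reduces to a small decision over the current openers; a sketch in C (counting readers and writers is an illustrative simplification of the per-processor open tracking the abstract describes):

      enum cache_state { NO_CACHING, READ_CACHING, READ_WRITE_CACHING };

      /* readers: processors with the file open for read access;
       * writers: processors with it open for read/write access. */
      enum cache_state file_cache_state(int readers, int writers)
      {
          if (writers == 1 && readers == 0)
              return READ_WRITE_CACHING; /* sole opener has r/w access */
          if (writers == 0)
              return READ_CACHING;       /* readers only may read-cache */
          return NO_CACHING;             /* a writer plus other openers */
      }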
  • Patent number: 5829025
    Abstract: A computer system and method in which allocation of a cache memory is managed by utilizing a locality hint value included within an instruction. When a processor accesses a memory for transfer of data between the processor and the memory, that access can be allocated or not allocated in the cache memory. The locality hint included within the instruction controls whether the cache allocation is to be made. When a plurality of cache memories are present, they are arranged into a cache hierarchy, and a locality value is assigned to each level of the cache hierarchy where allocation control is desired. The locality hint may be used to identify a lowest level where management of cache allocation is desired, and cache memory is allocated at that level and any higher level(s). The locality hint value is based on spatial and/or temporal locality for the data associated with the access. Data is recognized at each cache hierarchy level depending on the attributes associated with the data at a particular level.
    Type: Grant
    Filed: December 17, 1996
    Date of Patent: October 27, 1998
    Assignee: Intel Corporation
    Inventor: Millind Mittal
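    A sketch of interpreting the hint in C (the level numbering and names are assumptions; level 0 is the L1, the highest level): the hint identifies the lowest level where allocation is desired, and the line is allocated at that level and every higher level, per the abstract.

      #include <stdbool.h>

      #define NUM_CACHE_LEVELS 3  /* assumed hierarchy: L1, L2, L3 */

      /* lowest_level_hint: deepest level to allocate at (0 = L1);
       * a negative hint means allocate nowhere (non-temporal data). */
      void allocate_with_hint(bool allocate_at[NUM_CACHE_LEVELS],
                              int lowest_level_hint)
      {
          for (int lvl = 0; lvl < NUM_CACHE_LEVELS; lvl++)
              allocate_at[lvl] = (lvl <= lowest_level_hint);
      }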
  • Patent number: 5822762
    Abstract: An information processing device includes a central processing unit, a cache memory unit, and first and second decision circuits. The first decision circuit identifies which one of the partitioned address areas is to be accessed before the central processing unit accesses the cache memory unit. The second decision circuit determines whether that partitioned address area is a cachable area or a non-cachable area before the address tag data is referred to in the cache memory unit.
    Type: Grant
    Filed: September 26, 1995
    Date of Patent: October 13, 1998
    Assignee: Fujitsu Limited
    Inventors: Syuji Nishida, Seiji Suetake, Shunsuke Kamijo, Kenji Furuya
  • Patent number: 5819079
    Abstract: A computer system includes an instruction prefetching mechanism that detects whether an instruction to be prefetched is located in a region of memory that is uncacheable. To perform an instruction prefetch, an instruction fetch unit (IFU) receives an instruction pointer indicating a memory location containing an instruction to be prefetched. The instruction pointer may be provided by a branch target buffer (BTB) as a result of a branch prediction, by auxiliary branch prediction mechanisms, or by actual execution. The IFU accesses an instruction translation look-aside buffer (ITLB) to determine both the physical address corresponding to the linear address of the instruction pointer and an associated memory type stored therein. If the memory type indicates an uncacheable memory location, the IFU waits until all previously executed instructions have completed. The IFU does this by inserting a "permission-to-fetch" instruction and then stalling.
    Type: Grant
    Filed: September 11, 1995
    Date of Patent: October 6, 1998
    Assignee: Intel Corporation
    Inventors: Andrew F. Glew, Ashwani Gupta
  • Patent number: 5765194
    Abstract: A dynamic tag match circuit (10) has exclusive-OR gates (18), each of which receives one bit of an address signal (A) from the cache tag RAM, an inverted bit of the address signal (NA) from the cache tag RAM, and one bit of an address signal (B) from an address translator. The exclusive-OR gates (18) are in parallel with each other and output a hit signal which is low only when a match occurs between the two address signals. Additionally, the hit signal is low only when the results of a force miss circuit (14) indicate that a force miss should not occur. The dynamic tag match circuit (10) further has a pull-up circuit (16) for precharging the output of the circuit (16) and for holding the output of the circuit (16) at one of the two logical levels. The force miss circuit (14) advantageously incorporates logic which coordinates the timing of the force miss evaluation with the arrival of the address (A) from the cache tag RAM.
    Type: Grant
    Filed: May 1, 1996
    Date of Patent: June 9, 1998
    Assignee: Hewlett-Packard Company
    Inventor: John G. McBride
  • Patent number: 5761719
    Abstract: A computer processor architecture which employs an on-chip cache macro and an on-chip memory map is described. The memory map contains indicia of the cachability of different segments of off-chip memory, preferably along with an indication of the read/write status of each off-chip memory segment. A processor generated address signal is then compared on-chip with the memory map to ascertain whether the generated signal falls within a segment which is cachable or uncachable and which is read-only or read/write.
    Type: Grant
    Filed: June 6, 1995
    Date of Patent: June 2, 1998
    Assignee: International Business Machines Corporation
    Inventors: Stephen William Mahin, Kevin William McCullen, Sebastian Theodore Ventrone, Daniel Mathew Wronski
  • Patent number: 5749093
    Abstract: An information processing system includes a central processing unit, a main storage, a main storage controller for controlling the main storage, a cache memory holding the contents of at least a part of the addresses stored in the main storage, at least one DMA controller capable of referring to the main storage, and a DMA address translation unit for translating a logical address output from the DMA controller into a physical address for referring to the main storage. The DMA address translation unit has a flag representing whether or not the cache memory is to be referred to on DMA. Based upon the flag, the main storage controller performs either a reference to the cache memory or a direct reference to the main storage on DMA.
    Type: Grant
    Filed: February 14, 1995
    Date of Patent: May 5, 1998
    Assignees: Hitachi, Ltd., Hitachi Computer Electronics Co., Ltd.
    Inventors: Kazushi Kobayashi, Takeshi Aoki, Koichi Okazawa, Ichiharu Aburano
  • Patent number: 5749094
    Abstract: A data processing system is described having a central processing unit (CPU) 4, a memory management unit (MMU) 6 and a cache memory 8. The CPU 4 makes cache writes in the same clock cycle that the data is output from the CPU 4. In a following clock cycle, the MMU 6 produces a signal IC indicating whether that storage operation was invalid. If the storage operation was invalid, then a flag associated with a cache storage line storing a plurality of output data words is set to indicate such invalid storage.
    Type: Grant
    Filed: November 12, 1996
    Date of Patent: May 5, 1998
    Assignee: Advanced Risc Machines Limited
    Inventor: David Vivian Jaggar