Cache Bypassing Patents (Class 711/138)
-
Patent number: 6681293
Abstract: A method and apparatus for purging data from a middle cache level without purging the corresponding data from a lower cache level (i.e., a cache level closer to the processor using the data), and replacing the purged first data with other data of a different memory address than the purged first data, while leaving the data of the first cache line in the lower cache level. In some embodiments, in order to allow such mid-level purging, the first cache line must be in the “shared state” that allows reading of the data, but does not permit modifications to the data (i.e., modifications that would have to be written back to memory). If it is desired to modify the data, a directory facility will issue a purge to all caches of the shared-state data for that cache line, and then the processor that wants to modify the data will request an exclusive-state copy to be fetched to its lower-level cache and to all intervening levels of cache.
Type: Grant
Filed: August 25, 2000
Date of Patent: January 20, 2004
Assignee: Silicon Graphics, Inc.
Inventors: Doug Solomon, Asgeir T. Eiriksson, Yuval Koren, Givargis G. Kaldani
-
Patent number: 6681297
Abstract: A digital system is provided with several processors (1302), a shared level two (L2) cache (1300) having several segments per entry with associated tags, and a level three (L3) physical memory. Each tag entry includes a task-ID qualifier field and a resource ID qualifier field. Data is loaded into various lines in the cache in response to cache access requests when a given cache access request misses. After loading data into the cache in response to a miss, a tag associated with the data line is set to a valid state. In addition to setting a tag to a valid state, qualifier values are stored in qualifier fields in the tag. Each qualifier value specifies a usage characteristic of data stored in an associated data line of the cache, such as a task ID. A miss counter (532) counts each miss and a monitoring task (1311) determines a miss rate for memory requests. If a selected miss rate threshold value is exceeded, the digital system is reconfigured in order to reduce the miss rate.
Type: Grant
Filed: August 17, 2001
Date of Patent: January 20, 2004
Assignee: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Dominique D'Inverno, Serge Lasserre
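The miss-counting mechanism in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: the window size, threshold handling, and all names are assumptions, and the actual reconfiguration step is left to whatever polls `should_reconfigure()`.

```python
class MissRateMonitor:
    """Counts misses per window of memory requests and signals when a
    selected miss-rate threshold is exceeded, in the spirit of the
    monitoring task (1311) and miss counter (532) described above."""

    def __init__(self, threshold, window=1000):
        self.threshold = threshold  # e.g. 0.10 for a 10% miss rate
        self.window = window        # requests per measurement window
        self.requests = 0
        self.misses = 0

    def record(self, hit):
        # Called once per memory request; a miss bumps the miss counter.
        self.requests += 1
        if not hit:
            self.misses += 1

    def should_reconfigure(self):
        # Evaluate the miss rate only once a full window has elapsed,
        # then start a fresh window.
        if self.requests < self.window:
            return False
        rate = self.misses / self.requests
        self.requests = self.misses = 0
        return rate > self.threshold
```

A caller would invoke `record()` from the cache-miss path and periodically check `should_reconfigure()` to trigger whatever reconfiguration the system supports.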
-
Patent number: 6671781
Abstract: A circuit comprising a cache memory, a memory management unit and a logic circuit. The cache memory may be configured as a plurality of associative sets. The memory management unit may be configured to determine a data tag from an address of a data item. The logic circuit may be configured to (i) determine a selected set from the plurality of associative sets that produces a cache-hit for the data tag, (ii) buffer the address and the data item during a cycle, and (iii) present the data item to the cache memory for storing in the selected set during a subsequent cycle.
Type: Grant
Filed: December 8, 2000
Date of Patent: December 30, 2003
Assignee: LSI Logic Corporation
Inventor: Frank Worrell
-
Patent number: 6671780
Abstract: A modified least recently allocated cache enables a computer to use a modified least recently allocated cache block replacement policy. In a first embodiment, an indicator of the least recently allocated cache block is tracked. When a cache block is referenced, the referenced cache block is compared with the least recently allocated cache block indicator. If the two identify the same cache block, the least recently allocated cache block indicator is adjusted to identify a different cache block. This adjustment prevents the most recently referenced cache block from being replaced. In an alternative embodiment, the most recently referenced cache block is similarly tracked, but the least recently allocated cache block is not immediately adjusted. Only when a new cache block is to be allocated are the least recently allocated cache block indicator and the most recently referenced cache block indicator compared.
Type: Grant
Filed: May 31, 2000
Date of Patent: December 30, 2003
Assignee: Intel Corporation
Inventors: Shih-Lien L. Lu, Konrad Lai
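The first embodiment above can be sketched as a small policy object; a circular advance of the indicator is an assumption for illustration (the patent only requires that the indicator move to a different block), and the class and method names are invented here.

```python
class ModifiedLRACache:
    """Sketch of the first embodiment: track the least recently
    allocated (LRA) block indicator, and move it off any block that
    was just referenced so a freshly hit block is not the next victim."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.lra = 0  # indicator of the least recently allocated block

    def reference(self, block):
        # If the referenced block is also the LRA candidate, adjust the
        # indicator to identify a different block (here: the next one).
        if block == self.lra:
            self.lra = (self.lra + 1) % self.num_blocks

    def allocate(self):
        # Replace the block named by the indicator, then advance it.
        victim = self.lra
        self.lra = (self.lra + 1) % self.num_blocks
        return victim
```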
-
Publication number: 20030236961
Abstract: Memory management systems and methods that may be employed, for example, to provide efficient management of memory for network systems. The disclosed systems and methods may consider the cost-benefit trade-off between the cache value of a particular memory unit versus the cost of caching the memory unit and may utilize a multi-layer queue management structure to manage buffer/cache memory in an integrated fashion. The disclosed systems and methods may be implemented as part of an information management system, such as a network processing system that is operable to process over-size data objects communicated via a network environment, and that may include a network processor operable to process network-communicated information and a memory management system operable to manage disposition of individual memory units of over-size data objects based upon one or more parameters, such as one or more parameters reflecting the cost and value associated with maintaining the information in integrated buffer/cache memory.
Type: Application
Filed: March 1, 2001
Publication date: December 25, 2003
Inventors: Chaoxin C. Qiu, Mark J. Conrad, Scott C. Johnson, Theodore S. Webb
-
Patent number: 6665747
Abstract: One embodiment of the present invention provides a system for processing a request directed to a secondary storage system. The system operates by receiving the request at an interface of the secondary storage system. This request specifies an operation to be performed on the secondary storage system, a location within the secondary storage system to which the request is directed, and an address of a target buffer located outside of the secondary storage system for holding data involved in the request. Next, the system processes the request by transferring data between the location within the secondary storage system and the target buffer located outside of the secondary storage system. If the target buffer is located within a page cache, processing the request involves communicating with the target buffer located within the page cache.
Type: Grant
Filed: October 10, 2000
Date of Patent: December 16, 2003
Assignee: Sun Microsystems, Inc.
Inventor: Siamak Nazari
-
Patent number: 6654860
Abstract: A memory controller generates speculative and non-speculative memory access requests. Several approaches are used to prevent speculative memory access requests from interfering with non-speculative memory access requests. When a request queue is full and contains at least one speculative request, that request is replaced in the memory access request queue with a non-speculative request. A counter associated with a speculative memory access request counts memory access requests. When a predetermined count value is reached, the speculative memory access request is assumed to be stale and retired from the request queue, thereby reducing possible interference by speculative accesses with non-speculative accesses and/or avoiding wasted bandwidth utilization by stale speculative access requests.
Type: Grant
Filed: July 27, 2000
Date of Patent: November 25, 2003
Assignee: Advanced Micro Devices, Inc.
Inventors: Geoffrey S. S. Strongin, Qadeer Ahmad Qureshi
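Both approaches in the abstract above — evicting a speculative entry when a full queue receives a non-speculative request, and retiring speculative requests whose counters reach a limit — can be sketched together. The capacity, stale limit, and aging-on-enqueue policy are illustrative assumptions, not details from the patent.

```python
class RequestQueue:
    """Sketch: speculative requests carry an age counter bumped on
    each new request; a request whose counter reaches the limit is
    considered stale and retired. A full queue gives up one
    speculative entry to admit a non-speculative request."""

    def __init__(self, capacity, stale_limit):
        self.capacity = capacity
        self.stale_limit = stale_limit
        self.queue = []  # entries are [addr, speculative, age]

    def enqueue(self, addr, speculative):
        # Age pending speculative requests and retire any stale ones.
        for req in self.queue:
            if req[1]:
                req[2] += 1
        self.queue = [r for r in self.queue
                      if not (r[1] and r[2] >= self.stale_limit)]
        if len(self.queue) == self.capacity:
            if speculative:
                return False  # full queue: drop the speculative request
            # Replace one speculative entry with the non-speculative one.
            for i, req in enumerate(self.queue):
                if req[1]:
                    del self.queue[i]
                    break
            else:
                return False  # full of non-speculative requests
        self.queue.append([addr, speculative, 0])
        return True
```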
-
Patent number: 6651136
Abstract: The cache keeps regularly accessed disk I/O data within RAM that forms part of a computer system's main memory. The cache operates across a network of computer systems, maintaining cache coherency for the disk I/O devices that are shared by the multiple computer systems within that network. Read access for disk I/O data that is contained within the RAM is returned much faster than would occur if the disk I/O device was accessed directly. The data is held in one of three areas of the RAM for the cache, dependent on the size of the I/O access. The total RAM containing the three areas for the cache does not occupy a fixed amount of a computer's main memory. The RAM for the cache grows to contain more disk I/O data on demand and shrinks when more of the main memory is required by the computer system for other uses.
Type: Grant
Filed: January 16, 2002
Date of Patent: November 18, 2003
Assignee: SuperSpeed Software, Inc.
Inventor: James I Percival
-
Patent number: 6651143
Abstract: An invalidation buffer is associated with each cache wherein either multiple processors and/or multiple caches maintain cache coherency. Rather than decoding the addresses and interrogating the cache directory to determine if data requested by an incoming command is in a cache, the invalidation buffer is quickly checked to determine if the data associated with the request has been recently invalidated. If so and if the command is not intended to replace the recently invalidated data, then the tag and data array of the cache are immediately bypassed to save precious processor time. If lower level caches maintain the same cache coherency and are accessed only through an adjacent cache, then those lower level caches may also be bypassed and a cache miss can be directed immediately to memory. In a multiprocessor system (such as NUMA, COMA, or SMP), where other processors may access different cache levels independent of the adjacent cache level, each invalidation buffer is checked.
Type: Grant
Filed: December 21, 2000
Date of Patent: November 18, 2003
Assignee: International Business Machines Corporation
Inventor: Farnaz Mounes-Toussi
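The fast-path check described above might look like the following sketch. The buffer size and FIFO eviction order are assumptions for illustration; the patent itself does not dictate them.

```python
class InvalidationBuffer:
    """Sketch: a small buffer of recently invalidated addresses is
    consulted before the (slower) tag/directory lookup. A hit here,
    for a command that does not refill the line, lets the cache's
    tag and data arrays be bypassed entirely."""

    def __init__(self, size):
        self.size = size
        self.recent = []  # recently invalidated addresses, oldest first

    def invalidate(self, addr):
        # Record an invalidation, keeping at most `size` entries.
        if addr in self.recent:
            self.recent.remove(addr)
        self.recent.append(addr)
        if len(self.recent) > self.size:
            self.recent.pop(0)

    def should_bypass(self, addr, replaces_data=False):
        # Bypass only when the address was recently invalidated and the
        # command is not intended to replace the invalidated data.
        return addr in self.recent and not replaces_data
```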
-
Patent number: 6647433
Abstract: A method for securing port bypass circuit settings is presented comprising issuing one or more command(s) to one or more inputs of a general purpose input/output (GPIO) system, wherein the command(s) cause a first output of the GPIO system associated with a first input of the multiple inputs to issue a control signal to a latch associated with a port bypass circuit (PBC) addressed in the received command(s), and a second output of the GPIO system associated with a second of the multiple inputs of the GPIO system to issue a clock signal to a latch associated with a PBC addressed in the received command(s). If command(s) received at the first and second inputs are consistent with changing the state of a common PBC, the control signal and the clock signal are sent to a single latching device, which latches the control signal to the addressed PBC changing the state of the PBC. If the command(s) are not consistent, the control and clock signal(s) are not received by a common latch, and the PBC states remain unchanged.
Type: Grant
Filed: August 14, 2000
Date of Patent: November 11, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Douglas Todd Hayden, Jay D Reeves
-
Patent number: 6647466
Abstract: A system for adaptively bypassing one or more higher cache levels following a miss in a lower level of a cache hierarchy is described. Each cache level preferably includes a tag store containing address and state information for each cache line resident in the respective cache. When an invalidate request is received at a given cache hierarchy, each cache level is searched for the address specified by the invalidate request. When an address match is detected, the state of the respective cache line is changed to the invalid state, although the address of the cache line is left in the tag store. Thereafter, if the processor or entity associated with this cache hierarchy issues its own request for this same cache line, the cache hierarchy begins searching the tag store of each level starting with the lowest cache level.
Type: Grant
Filed: January 25, 2001
Date of Patent: November 11, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Simon C. Steely, Jr.
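The invalidate-but-keep-the-tag behavior above can be sketched as follows. The state names and dictionary-based tag store are illustrative simplifications of real set-associative hardware.

```python
INVALID, SHARED, EXCLUSIVE = "I", "S", "E"

class CacheLevel:
    def __init__(self):
        self.tags = {}  # address -> coherence state

    def invalidate(self, addr):
        # Change the line's state to invalid, but leave its address
        # in the tag store, as the abstract describes.
        if addr in self.tags:
            self.tags[addr] = INVALID

def lookup(hierarchy, addr):
    """Search the tag store of each level starting with the lowest
    cache level (closest to the processor); return the level of the
    first valid hit, or None on a full-hierarchy miss."""
    for level, cache in enumerate(hierarchy, start=1):
        state = cache.tags.get(addr)
        if state is not None and state != INVALID:
            return level
    return None
```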
-
Patent number: 6643745
Abstract: A computer system is disclosed. The computer system includes a higher level cache, a lower level cache, a decoder to decode instructions, and a circuit coupled to the decoder. In one embodiment, the circuit, in response to a single decoded instruction, retrieves data from external memory and bypasses the lower level cache upon a higher level cache miss. In another embodiment, the circuit, in response to a first decoded instruction, issues a request to retrieve data at an address from external memory to place said data only in the lower level cache, detects a second cacheable decoded instruction to said address, and places said data in the higher level cache.
Type: Grant
Filed: March 31, 1998
Date of Patent: November 4, 2003
Assignee: Intel Corporation
Inventors: Salvador Palanca, Niranjan L. Cooray, Angad Narang, Vladimir Pentkovski, Steve Tsai, Subramaniam Maiyuran, Jagannath Keshava, Hsien-Hsin Lee, Steve Spangler, Suresh Kuttuva, Praveen Mosur
-
Publication number: 20030200397
Abstract: A memory controller provides memory line caching and memory transaction coherency by using at least one memory controller agent. A memory controller in accordance with the present invention includes at least one memory controller agent, an incoming memory transaction dispatch unit, and an outgoing memory transaction completion unit. Each memory controller agent has a memory line memory controller and a memory line coherency controller, along with a cache memory capable of caching the contents of a memory line along with coherency information for the memory line. Memory transactions are received from cacheable entities of a computer system at the incoming memory transaction dispatch unit, and are then presented to one or more agents. For each incoming transaction, one of the agents will accept the transaction. If multiple memory read transactions are received for a single memory line, the agents will configure themselves into a linked list to queue up the requests.
Type: Application
Filed: May 14, 2003
Publication date: October 23, 2003
Inventors: Curtis R. McAllister, Robert C. Douglas
-
Patent number: 6636946
Abstract: In a computer or microprocessor system having a plurality of resources making memory requests, a caching system includes a source tag generator which, depending on the embodiment, could reside in the requesting system resource, in a bus arbiter, or in a combination of a bus arbiter and a switch arbiter, or elsewhere. The system also includes cache control circuitry capable of using the source tag to make cacheability decisions. The cache control circuitry, and therefore the cacheability decisions, could be fixed (e.g., by a user) or could be alterable based on a suitable algorithm (similar, e.g., to a least-recently-used algorithm) that monitors cache usage and memory requests. The caching system is particularly useful where the cache being controlled is large enough to cache the results of I/O and similar requests and the requesting resources are I/O or similar resources outside the core logic chipset of the computer system.
Type: Grant
Filed: March 13, 2001
Date of Patent: October 21, 2003
Assignee: Micron Technology, Inc.
Inventor: Joseph Jeddeloh
-
Publication number: 20030172235
Abstract: In accordance with an embodiment of the present invention, a system for returning data comprises a storage array operable to store data received from at least one data source, a bypass circuit communicatively coupled with the storage array and operable to simultaneously stage data received from the at least one data source, and a read data storage controller communicatively coupled with the storage array and the bypass circuit and operable to select a data return path of minimum latency from a plurality of data return paths for returning data selected from one of the storage array and the bypass circuit, based at least in part on at least one tag associated with each of the at least one data source, to a requesting device.
Type: Application
Filed: February 27, 2003
Publication date: September 11, 2003
Inventors: George Thomas Letey, Jeffrey G. Hargis, Michael Kennard Tayler, Erin Antony Handgen
-
Publication number: 20030163647
Abstract: A host coupled to a switched fabric including one or more fabric-attached I/O controllers.
Type: Application
Filed: December 17, 1999
Publication date: August 28, 2003
Inventors: Donald F. Cameron, Frank L. Berry
-
Publication number: 20030149837
Abstract: Method and apparatus for transferring data between a host device and a data storage device having a first memory space (such as a buffer) and a second memory space (such as magnetic discs). Data are stored on the discs in host-addressable data sectors. The data storage device is configured to operate in a local mode of operation and a nonlocal mode of operation. During the local mode, nonrequested user data are retrieved from the discs and placed into the buffer in anticipation of a future request for the nonrequested user data. During nonlocal mode, such nonrequested user data are not retrieved. An interface circuit monitors host data access patterns and dynamically switches between the nonlocal and local modes in relation to proximity of a data sector address of each most recently received read command to data sector addresses associated with previously received read commands.
Type: Application
Filed: February 22, 2002
Publication date: August 7, 2003
Applicant: Seagate Technology LLC
Inventors: Kenny T. Coker, Edwin S. Olds, Jack A. Mobley
-
Patent number: 6598121
Abstract: A system and method for hierarchically caching objects includes one or more level 1 nodes, each including at least one level 1 cache; one or more level 2 nodes within which the objects are permanently stored or generated upon request, each level 2 node coupled to at least one of the one or more level 1 nodes and including one or more level 2 caches; and means for storing, in a coordinated manner, one or more objects in at least one level 1 cache and/or at least one level 2 cache, based on a set of one or more criteria.
Type: Grant
Filed: November 6, 2001
Date of Patent: July 22, 2003
Assignee: International Business Machines Corp.
Inventors: James R. H. Challenger, Paul Michael Dantzig, Daniel Manuel Dias, Arun Kwangil Iyengar, Eric M. Levy-Abegnoli
-
Publication number: 20030126372
Abstract: A cache coherency arrangement with support for pre-fetching ownership, to enhance inbound bandwidth for single leaf and multiple leaf, input-output interfaces, with shared memory space is disclosed. Embodiments comprise ownership stealing and address matching to reduce transaction latency and prevent deadlock and/or starvation. Several embodiments may also comprise cache to reduce transaction latency and logic to populate the cache.
Type: Application
Filed: January 2, 2002
Publication date: July 3, 2003
Inventor: Tony S. Rand
-
Patent number: 6587928
Abstract: Requests are identified as being for a cacheable object or a non-cacheable object according to information included in a Uniform Resource Locator (URL) associated with the object. For example, the URL may include a port designation for requests for cacheable objects (e.g., images and the like). Thus, a request may be recognized as being for a cacheable or non-cacheable object according to the port on which the request is made. In some cases, requests for non-cacheable objects may be made on port 80. A router may be thus configured to recognize a request as being for a cacheable object or a non-cacheable object according to a port on which the request is received and redirect it to a cache as appropriate.
Type: Grant
Filed: February 28, 2000
Date of Patent: July 1, 2003
Assignee: Blue Coat Systems, Inc.
Inventors: Alagu S. Periyannan, Michael D. Kellner
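The port-based classification above is simple enough to sketch directly. The dedicated cacheable port number (8080) is a hypothetical choice for illustration; the patent specifies only that port 80 may carry non-cacheable requests.

```python
from urllib.parse import urlparse

# Hypothetical convention: requests for cacheable objects (images and
# the like) arrive on a dedicated port, while ordinary port-80 traffic
# is treated as non-cacheable, as the abstract suggests.
CACHEABLE_PORT = 8080

def classify(url):
    """Return 'cache' when the URL's port marks the object as
    cacheable, else 'origin' (i.e., bypass the cache)."""
    port = urlparse(url).port or 80  # default HTTP port
    return "cache" if port == CACHEABLE_PORT else "origin"
```

A router implementing this scheme never inspects the object itself; the decision is made entirely from the port on which the request arrives.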
-
Patent number: 6574708
Abstract: A cache is coupled to receive an access which includes a cache allocate indication. If the access is a miss in the cache, the cache either allocates a cache block storage location to store the cache block addressed by the access or does not allocate a cache block storage location in response to the cache allocate indication. In one implementation, the cache is coupled to an interconnect with one or more agents. In such an implementation, the cache accesses may be performed in response to transactions on the interconnect, and the transactions include the cache allocate indication. Thus, the source of a cache access specifies whether or not to allocate a cache block storage location in response to a miss by the cache access. The source may use a variety of mechanisms for generating the cache allocate indication.
Type: Grant
Filed: May 18, 2001
Date of Patent: June 3, 2003
Assignee: Broadcom Corporation
Inventors: Mark D. Hayter, Joseph B. Rowlands
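The allocate-on-miss decision above can be sketched as follows; the dictionary-backed cache and memory are illustrative stand-ins for the hardware structures.

```python
class Cache:
    """Sketch: on a miss, a block storage location is allocated only
    if the access's cache-allocate indication says so; otherwise the
    access is satisfied from memory without displacing any resident
    block."""

    def __init__(self):
        self.blocks = {}  # address -> cached data

    def access(self, addr, memory, allocate):
        if addr in self.blocks:
            return self.blocks[addr]   # hit: allocate flag is moot
        data = memory[addr]            # miss: fetch from memory
        if allocate:
            self.blocks[addr] = data   # allocate a block storage location
        return data                    # no allocation: cache bypassed
```

Here the source of the access supplies `allocate`, mirroring how the transaction itself carries the cache allocate indication on the interconnect.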
-
Patent number: 6564299
Abstract: An addressable circuit configured to control a definition of an addressable range for the circuit. The circuit may comprise at least one register, at least one flag, an input and control logic. The register may be configured to define a range used for determining an addressable range for the circuit. The flag may be configured to define whether a predetermined range is to be inverted for determining the addressable range for the circuit. The input may be configured to receive an address for an access to the circuit. The control logic may be configured to process the received address to determine whether the received address is within the addressable range for the circuit, the control logic being responsive to the register and to the flag for determining the addressable range therefrom.
Type: Grant
Filed: July 30, 2001
Date of Patent: May 13, 2003
Assignee: LSI Logic Corporation
Inventor: Stefan Auracher
-
Patent number: 6560692
Abstract: The data processing circuit of this invention enables efficient description and execution of processes that act upon the stack pointer, using short instructions. It also enables efficient description of processes that save and restore the contents of registers, increasing the speed of processing of interrupts and subroutine calls and returns. A CPU that uses this data processing circuit comprises a dedicated stack pointer register SP and uses an instruction decoder to decode a group of dedicated stack pointer instructions that specify the SP as an implicit operand. This group of dedicated stack pointer instructions are implemented in hardware by using general-purpose registers, the PC, the SP, an address adder, an ALU, a PC incrementer, internal buses, internal signal lines, and external buses.
Type: Grant
Filed: May 20, 1997
Date of Patent: May 6, 2003
Assignee: Seiko Epson Corporation
Inventors: Makoto Kudo, Satoshi Kubota, Yoshiyuki Miyayama, Hisao Sato
-
Patent number: 6560680
Abstract: The present invention relates to a computer system comprising at least one requesting agent, a system controller and a memory subsystem comprising a main memory and a noncacheable subset of main memory physically distinct from the main memory.
Type: Grant
Filed: November 26, 2001
Date of Patent: May 6, 2003
Assignee: Micron Technology, Inc.
Inventor: James W. Meyer
-
Patent number: 6560679
Abstract: A digital data processing system is provided which includes a digital data processor, a cache memory having a tag RAM and a data RAM, and a controller for controlling accesses to the cache memory. The controller stores state information on access type, operation mode and cache hit/miss associated with the most recent access to the tag RAM, and controls a current access to the tag RAM just after the preceding access based on the state information and a portion of a set field of a main memory address for the current access. The controller determines whether the current access is applied to the same cache line that was accessed in the preceding access based on the state information and a portion of a set field of the main memory address for the current access, and allows the current access to be skipped when the current access is applied to the same cache line that was accessed in the preceding access.
Type: Grant
Filed: December 20, 2000
Date of Patent: May 6, 2003
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hoon Choi, Myung-Kyoon Yim
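The same-line skip above reduces to comparing the line address of the current access against the preceding one. The line size below (and the reduction of the patent's state information to a single remembered line address) is an assumption for illustration only.

```python
class TagRamController:
    """Sketch: remember the cache line touched by the most recent tag
    RAM access; if the next access falls in the same line, skip the
    tag RAM lookup entirely."""

    LINE_BITS = 5  # 32-byte cache lines (an assumed geometry)

    def __init__(self):
        self.last_line = None
        self.tag_accesses = 0  # counts actual tag RAM lookups

    def access(self, addr):
        line = addr >> self.LINE_BITS
        if line == self.last_line:
            return "skipped"   # same cache line as the preceding access
        self.last_line = line
        self.tag_accesses += 1
        return "tag-ram"       # full tag RAM lookup performed
```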
-
Publication number: 20030046494
Abstract: Method and apparatus for conditioning program control flow on the presence of requested data in a cache memory. In a data processing system that includes a cache memory and a system memory coupled to a processor, in various embodiments program control flow is conditionally changed based on whether the data referenced in an instruction are present in the cache memory. When an instruction that includes a data reference and an alternate control path is executed, the control flow of the program is changed in accordance with the alternate control path if the referenced data are not present in the cache memory. The alternate control path is either explicitly specified or implicit in the instruction.
Type: Application
Filed: August 29, 2001
Publication date: March 6, 2003
Inventor: Michael L. Ziegler
-
Patent number: 6516390
Abstract: The invention is directed to techniques for accessing data within a data storage system having a circuit board that includes both a front-end circuit for interfacing with a host and a back-end circuit for interfacing with a storage device. To move data between the host and the storage device, an exchange of data between the front-end circuit and the back-end circuit can occur within the circuit board thus circumventing the cache of the data storage system. Such operation not only reduces traffic through the cache, but also shortens the data transfer latency. In one arrangement, a data storage system includes a cache, a first front-end circuit that operates as an interface between the cache and a first host, a second front-end circuit that operates as an interface between the cache and a second host, a first storage device (e.g., a disk drive, tape drive, CDROM drive, etc.)
Type: Grant
Filed: October 26, 2000
Date of Patent: February 4, 2003
Assignee: EMC Corporation
Inventors: Kendell A. Chilton, Daniel Castel
-
Publication number: 20030023780
Abstract: A system and method for industrial control I/O forcing is provided. The invention includes a processor, shared memory and an I/O processor with cache memory. The invention provides for the cache memory to be loaded with I/O force data from the shared memory. The I/O processor performs I/O forcing utilizing the I/O force data stored in the cache memory. The invention further provides for the processor to notify the I/O processor in the event that I/O force data is altered during control program execution. The invention further provides for the I/O processor to refresh the cache memory (e.g., via a blocked write) after receipt of alteration of the I/O force data from the processor.
Type: Application
Filed: July 25, 2001
Publication date: January 30, 2003
Inventors: Raymond R. Husted, Ronald E. Schultz, Dennis J. Dombrosky, David A. Karpuszka
-
Patent number: 6505309
Abstract: A processing unit has an operation unit and a cache memory, and further has a debug support unit and a non-cache control circuit. The debug support unit outputs a debug mode signal when an address of a program being currently executed and an optional address set for debugging coincide with each other, and the non-cache control circuit controls the operation of the cache memory via the debug mode signal and outputs the debug mode signal externally of the processing unit.
Type: Grant
Filed: March 19, 1999
Date of Patent: January 7, 2003
Assignee: Fujitsu Limited
Inventors: Toru Okabayashi, Yasushi Nagano
-
Patent number: 6502169
Abstract: A system and method for detecting block(s) of data transferred to a disk array from a host processor system, in which the block(s) have unique, identifiable values or patterns, is provided. A direct memory access (DMA) engine is resident on the bus structure between the host and the disk array, which can be configured as a redundant array of independent disks (RAID). A cache memory is also resident on the bus and is adapted to cache write data from the host under control of a cache manager prior to storage thereof in the disk array. The DMA engine is adapted to detect predetermined patterns of data as such data is transferred over the bus therethrough. Such data can include a series of consecutive zeroes or another repetitive pattern.
Type: Grant
Filed: June 27, 2000
Date of Patent: December 31, 2002
Assignee: Adaptec, Inc.
Inventor: Eric S. Noya
-
Patent number: 6473834
Abstract: In a data processing system comprising a first level cache, a second level cache, and a processor return path, wherein only one of the first level cache and second level cache can control the processor return path at a given time, an improvement comprises a queue disposed between an output of the first level cache and the processor return path to buffer data output from the first level cache so that the first level cache can continue to process memory requests even though the second level cache has control of the processor return path.
Type: Grant
Filed: December 22, 1999
Date of Patent: October 29, 2002
Assignee: Unisys
Inventors: Steven T. Hurlock, Stanley P. Naddeo
-
Publication number: 20020156981
Abstract: Computer systems and methods that provide for cacheable above-one-megabyte system management random access memory (SMRAM). The systems and methods comprise a central processing unit (CPU) having a processor and a system management interrupt (SMI) dispatcher, a cache coupled to the CPU, and a chipset memory controller that interfaces the CPU to a memory. The memory includes system memory and the system management random access memory. The systems and methods un-cache the SMRAM while operating outside of system management mode, transfer CPU operation to system management mode upon execution of a system management interrupt (SMI), and change cache settings to cache the extended memory and system management random access memory with write-through. The systems and methods then change cache settings to cache the extended memory with write-back and un-cache the SMRAM upon execution of a resume instruction.
Type: Application
Filed: April 18, 2001
Publication date: October 24, 2002
Inventor: HonFei Chong
-
Publication number: 20020156962
Abstract: Methods of widening the permission for a memory access in a data processing system having a virtual cache memory and a translation lookaside buffer are disclosed. A memory access operation is initiated on a predetermined memory location based on logical address information and permission information associated with the memory access operation. The virtual cache memory is accessed and a determination may be made if there is a match between logical address information of the memory access operation and logical address information stored in the entries of the virtual cache. In the event of a match, then a determination may be made based on the permission information of the memory access operation and the permission information of the particular entry of the virtual cache memory as to whether the memory access operation is permitted.
Type: Application
Filed: June 10, 2002
Publication date: October 24, 2002
Inventors: Rajesh Chopra, Shinichi Yoshioka, Mark Debbage, David Shepherd
-
Patent number: 6470429
Abstract: An apparatus for identifying requests to main memory as non-cacheable in a computer system with multiple processors includes a main memory, memory cache, processor and cache coherence directory all coupled to a host bridge unit (North bridge). The processor transmits requests for data to the main memory via the host bridge unit. The host bridge unit includes a cache coherence controller that implements a protocol to maintain the coherence of data stored in each of the processor caches in the computer system. A cache coherence directory is connected to the cache coherence controller. After receiving the request for data from main memory, the host bridge unit identifies requests for data to main memory as cacheable or non-cacheable. If the data is non-cacheable, then the host bridge unit does not request the cache coherence controller to perform a cache coherence directory lookup to maintain the coherence of the data.
Type: Grant
Filed: December 29, 2000
Date of Patent: October 22, 2002
Assignee: Compaq Information Technologies Group, L.P.
Inventors: Phillip M. Jones, Robert Allan Lester
-
Patent number: 6470428Abstract: A cache controller is disclosed that includes a first means for determining when data specified by a memory address requested by the processor is absent from the cache, and a second means for determining when the processor reads sequential memory addresses. The second means is activated when the first means detects that data is absent from the cache and causes the cache controller to (i) permit data to be supplied from the main memory to the processor, even when the data is available in the cache; (ii) inhibit the first means from determining whether requested data is available in the cache; and (iii) update the cache with data supplied to the processor from the main memory.Type: GrantFiled: June 26, 2000Date of Patent: October 22, 2002Assignee: Virata LimitedInventors: David Russell Milway, Fash Nowashdi
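The three numbered behaviors in this abstract can be sketched in a few lines: after a miss, consecutive addresses are served straight from main memory with the cache lookup inhibited, while the cache is still refilled behind the scenes. This is an illustrative model, not the patented circuit.

```python
class BypassCacheController:
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}
        self.last_addr = None
        self.bypass = False  # set by the "second means" on a detected stream

    def read(self, addr):
        sequential = self.last_addr is not None and addr == self.last_addr + 1
        self.last_addr = addr
        if self.bypass and sequential:
            data = self.memory[addr]  # (i) memory supplies, (ii) lookup inhibited
            self.cache[addr] = data   # (iii) cache updated with streamed data
            return data, "memory"
        self.bypass = False
        if addr in self.cache:
            return self.cache[addr], "cache"
        self.bypass = True            # miss: a sequential stream may follow
        data = self.memory[addr]
        self.cache[addr] = data
        return data, "memory"

ctl = BypassCacheController(list(range(16)))
```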
-
Patent number: 6463510Abstract: An apparatus for identifying memory requests originating on remote I/O devices as non-cacheable in a computer system with multiple processors includes a main memory, memory cache, processor, cache coherence directory and cache coherence controller all coupled to a host bridge unit (North bridge). The I/O device transmits requests for data to an I/O bridge unit. The I/O bridge unit forwards the request for data to the host bridge unit and asserts a sideband signal to the host bridge unit if the request is for non-cacheable data. The sideband signal informs the host bridge unit that the memory request is for non-cacheable data and that the cache coherence controller does not need to perform a cache coherence directory lookup. For cacheable data, the cache coherence controller performs a cache coherence directory lookup to maintain the coherence of data stored in a plurality of processor caches in the computer system.Type: GrantFiled: December 29, 2000Date of Patent: October 8, 2002Assignee: Compaq Information Technologies Group, L.P.Inventors: Phillip M. Jones, Robert L. Woods
-
Publication number: 20020143868Abstract: A method and apparatus in a data processing system for caching data in an internal cache and in an external cache. A set of fragments is received for caching. A location is identified to store each fragment within the plurality of fragments based on a rate of change of data in each fragment. The set of fragments is stored in the internal cache and the external cache using the location identified for each fragment within the plurality of fragments.Type: ApplicationFiled: May 29, 2002Publication date: October 3, 2002Inventors: James R.H. Challenger, George Prentice Copeland, Paul Michael Dantzig, Arun Kwangil Iyengar, Matthew Dale McClain
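The placement rule reduces to partitioning fragments by how quickly their data changes. A hedged sketch, where the rate metric and threshold are made-up illustrations of the abstract's criterion:

```python
def place_fragments(fragments, threshold):
    """Route fast-changing fragments to the internal cache (cheap to
    refresh) and slow-changing ones to the external cache."""
    internal, external = [], []
    for name, change_rate in fragments:
        (internal if change_rate > threshold else external).append(name)
    return internal, external

frags = [("stock-ticker", 0.9), ("nav-bar", 0.01), ("headline", 0.5)]
fast, slow = place_fragments(frags, threshold=0.1)
```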
-
Publication number: 20020133673Abstract: In a computer or microprocessor system having a plurality of resources making memory requests, a caching system includes a source tag generator which, depending on the embodiment, could reside in the requesting system resource, in a bus arbiter, or in a combination of a bus arbiter and a switch arbiter, or elsewhere. The system also includes cache control circuitry capable of using the source tag to make cacheability decisions. The cache control circuitry, and therefore the cacheability decisions, could be fixed—e.g., by a user—or could be alterable based on a suitable algorithm—similar, e.g., to a least-recently-used algorithm—that monitors cache usage and memory requests. The caching system is particularly useful where the cache being controlled is large enough to cache the results of I/O and similar requests and the requesting resources are I/O or similar resources outside the core logic chipset of the computer system.Type: ApplicationFiled: March 13, 2001Publication date: September 19, 2002Applicant: MICRON TECHNOLOGY, INC., a corporation of DelawareInventor: Joseph Jeddeloh
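The cacheability decision keyed on a source tag can be sketched as a policy-table lookup. The table contents and tag names are invented for illustration; the abstract notes the policy could equally be adapted at runtime by an LRU-like monitor of cache usage.

```python
FIXED_POLICY = {"cpu0": True, "disk-dma": True, "nic-dma": False}

def cacheable(source_tag, policy=FIXED_POLICY):
    """Consult the (here user-fixed) policy table keyed by the source
    tag that the bus arbiter attached to the request."""
    return policy.get(source_tag, False)  # unknown sources stay uncached
```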
-
Patent number: 6449698Abstract: A method and system for bypassing a prefetch data path is provided. Each transaction within a system is tagged, and as transactions are issued for retrieving data, the system has a data prefetch unit for prefetching data from a processor, a memory subsystem, or an I/O agent into a prefetch data buffer. A prefetch data buffer entry is allocated for a data prefetch transaction, and the data prefetch transaction is issued. While the prefetch transaction is pending, a read transaction is received from a transaction requestor. The address for the read transaction is compared with the addresses of the pending data prefetch transactions, and in response to an address match, the prefetch data buffer entry for the matching prefetch transaction is checked to determine whether data has been received for the data prefetch transaction.Type: GrantFiled: August 26, 1999Date of Patent: September 10, 2002Assignee: International Business Machines CorporationInventors: Sanjay Raghunath Deshpande, David Mui, Praveen S. Reddy
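The address-match step can be sketched with a small buffer keyed by prefetch address: a read whose address matches a pending prefetch checks whether that entry has been filled rather than issuing a new transaction. Entry lifetime and naming are simplified assumptions.

```python
class PrefetchBuffer:
    def __init__(self):
        self.entries = {}  # addr -> data (None while prefetch is in flight)

    def issue_prefetch(self, addr):
        self.entries[addr] = None   # allocate an entry for the transaction

    def fill(self, addr, data):
        self.entries[addr] = data   # prefetched data has arrived

    def read(self, addr):
        if addr in self.entries:    # address matches a pending prefetch
            data = self.entries.pop(addr)
            return ("hit-filled", data) if data is not None else ("hit-pending", None)
        return ("miss", None)

pb = PrefetchBuffer()
```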
-
Patent number: 6442652Abstract: The effect of Single Event Upsets (SEUs) occurring in cache memory (103) utilized in satellites is reduced. The idle time of a processor (102), utilizing cache memory (103), is monitored. If processor (102) idle time reaches a predetermined minimum (205), cache memory (103) is engaged. When processor (102) idle time subsequently reaches a predetermined maximum threshold (203), cache memory (103) is disabled.Type: GrantFiled: September 7, 1999Date of Patent: August 27, 2002Assignee: Motorola, Inc.Inventors: Jose Arnaldo Laboy, Bradley Robert Schaefer
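The two-threshold scheme is a hysteresis on processor idle time: a busy processor (low idle time) gets the cache, an idle one (where SEU-corrupted lines would linger unused) runs with it disabled. A sketch with made-up threshold values:

```python
def cache_enabled(idle_time, enabled, low=5, high=20):
    """Hysteresis per the abstract: engage the cache when idle time
    falls to the minimum, disable it when idle time reaches the
    maximum; otherwise keep the current state. Units are arbitrary."""
    if idle_time <= low:
        return True
    if idle_time >= high:
        return False
    return enabled
```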
-
Patent number: 6430657Abstract: Atomic memory operations are provided by using exportable “fetch and add” instructions and by emulating IA-32 instructions prepended with a lock prefix. In accordance with the present invention, a CPU includes a default control register that includes IA-32 lock check enable bit (LC) that when set to “1”, causes an IA-32 atomic memory reference to raise an IA-32 intercept lock fault. An IA-32 intercept lock fault handler branches to appropriate code to atomically emulate the instruction. Furthermore, the present invention defines an exportable fetch and add (FETCHADD) instruction that reads a memory location indexed by a first register, places the contents read from the memory location into a second register, increments the value read from the memory location, and stores the sum back to the memory location.Type: GrantFiled: October 12, 1998Date of Patent: August 6, 2002Assignee: Institute for the Development of Emerging Architecture L.L.C.Inventors: Millind Mittal, Martin J. Whittaker, Gary N. Hammond, Jerome C. Huck
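The FETCHADD semantics described here are easy to state in a sketch: read the location indexed by one register, deliver the old value to a second register, and store the incremented value back. On real hardware the read-modify-write is a single atomic operation; this model only shows the data flow.

```python
def fetchadd(memory, regs, src, dst, increment=1):
    """Model of FETCHADD: old value lands in regs[dst], memory gets
    old + increment. Register names and the dict model are illustrative."""
    addr = regs[src]          # memory location indexed by the first register
    old = memory[addr]
    regs[dst] = old           # contents read placed into the second register
    memory[addr] = old + increment  # sum stored back to the memory location
    return old

memory = {0x100: 7}
regs = {"r1": 0x100, "r2": 0}
fetchadd(memory, regs, "r1", "r2")
```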
-
Publication number: 20020099909Abstract: The present invention relates to a computer system comprising at least one requesting agent, a system controller and a memory subsystem comprising a main memory and a noncacheable subset of main memory physically distinct from the main memory.Type: ApplicationFiled: November 26, 2001Publication date: July 25, 2002Inventor: James W. Meyer
-
Patent number: 6418510Abstract: A cooperative disk cache management and rotational positioning optimization (RPO) method for a data storage device, such as a disk drive, makes cache decisions that decrease the total access times for all data. The cache memory provides temporary storage for data either to be written to disk or that has been read from disk. Data access times from cache are significantly lower than data access times from the storage device, and it is advantageous to store in cache data that is likely to be referenced again. For each data block that is a candidate to store in cache, a cost function is calculated and compared with analogous cost functions for data already in cache. The data having the lowest cost function is removed from cache and replaced with data having a higher cost function.Type: GrantFiled: September 14, 2000Date of Patent: July 9, 2002Assignee: International Business Machines CorporationInventor: Bernd Lamberts
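The replacement policy reduces to comparing cost functions: the resident block whose caching saves the least is evicted only when a candidate with a higher cost function arrives. Cost values and units below are illustrative, not the patent's actual RPO-derived cost function.

```python
def admit(cache, candidate, cost, capacity):
    """Admit a candidate block if it beats the lowest-cost resident;
    cache maps block id -> cost (expected access time saved)."""
    if len(cache) < capacity:
        cache[candidate] = cost
        return True
    victim = min(cache, key=cache.get)  # block with the lowest cost function
    if cache[victim] < cost:
        del cache[victim]               # remove lowest-cost data
        cache[candidate] = cost         # replace with higher-cost data
        return True
    return False                        # candidate not worth caching
```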
-
Patent number: 6418516Abstract: A method of operating a multi-level memory hierarchy of a computer system and apparatus embodying the method, wherein instructions issue having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions and treats instructions in a different manner when they are loaded speculatively. These prefetch requests can be demand load requests, where the processing unit will need the operand data or instructions, or speculative load requests, where the processing unit may or may not need the operand data or instructions, but a branch prediction or stream association predicts that they might be needed. The load requests are sent to the lower level cache when the upper level cache does not contain the value required by the load.Type: GrantFiled: July 30, 1999Date of Patent: July 9, 2002Assignee: International Business Machines CorporationInventors: Ravi Kumar Arimilli, Leo James Clark, John Steven Dodson, Guy Lynn Guthrie, William John Starke
-
Publication number: 20020087803Abstract: An apparatus for identifying memory requests originating on remote I/O devices as non-cacheable in a computer system with multiple processors includes a main memory, memory cache, processor, cache coherence directory and cache coherence controller all coupled to a host bridge unit (North bridge). The I/O device transmits requests for data to an I/O bridge unit. The I/O bridge unit forwards the request for data to the host bridge unit and asserts a sideband signal to the host bridge unit if the request is for non-cacheable data. The sideband signal informs the host bridge unit that the memory request is for non-cacheable data and that the cache coherence controller does not need to perform a cache coherence directory lookup. For cacheable data, the cache coherence controller performs a cache coherence directory lookup to maintain the coherence of data stored in a plurality of processor caches in the computer system.Type: ApplicationFiled: December 29, 2000Publication date: July 4, 2002Inventors: Phillip M. Jones, Robert L. Woods
-
Patent number: 6415360Abstract: A processor employs an SMC check apparatus. The SMC check apparatus may minimize the number of explicit SMC checks performed for non-cacheable stores. Cacheable stores may be handled using any suitable mechanism. For non-cacheable stores, the processor tracks whether or not the in-flight instructions are cached. Upon encountering a non-cacheable store, the processor inhibits an SMC check if the in-flight instructions are cached. Since, for performance reasons, the code stream is often cached, non-cacheable stores may frequently be able to skip an explicit, complex, and time consuming SMC check. Performance of non-cacheable stores (and memory throughput overall) may be increased. The handling of non-cacheable stores as described herein may be particularly beneficial to video data manipulations, which may frequently be of a non-cacheable memory type and which may be important to the overall performance of a computer system.Type: GrantFiled: May 18, 1999Date of Patent: July 2, 2002Assignee: Advanced Micro Devices, Inc.Inventors: William Alexander Hughes, William Kurt Lewchuk, Gerald D. Zuraski, Jr.
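The filtering rule in this abstract fits in a few lines: a non-cacheable store skips the explicit self-modifying-code check whenever every in-flight instruction is known to have come from the instruction cache. The boolean inputs stand in for hardware tracking state; cacheable stores are assumed covered by the ordinary mechanism and are not checked here.

```python
def needs_smc_check(store_is_cacheable, inflight_all_cached):
    """Return True only when an explicit SMC check must run. Cacheable
    stores are handled by the normal (e.g. snooping) path, so they
    never trigger the explicit check in this sketch."""
    if store_is_cacheable:
        return False
    # Non-cacheable store: skip the costly check if the code stream
    # is cached (the common case, per the abstract).
    return not inflight_all_cached
```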
-
Publication number: 20020083271Abstract: An invalidation buffer is associated with each cache wherein either multiple processors and/or multiple caches maintain cache coherency. Rather than decoding the addresses and interrogating the cache directory to determine whether data requested by an incoming command is in a cache, the invalidation buffer is quickly checked to determine whether the requested data has been recently invalidated. If so and if the command is not intended to replace the recently invalidated data, then the tag and data array of the cache are immediately bypassed to save precious processor time. If lower level caches maintain the same cache coherency and are accessed only through an adjacent cache, then those lower level caches may also be bypassed and a cache miss can be directed immediately to memory. In a multiprocessor system, such as NUMA, COMA, SMP, where other processors may access different cache levels independent of the adjacent cache level, then each invalidation buffer is checked.Type: ApplicationFiled: December 21, 2000Publication date: June 27, 2002Applicant: International Business Machines CorporationInventor: Farnaz Mounes-Toussi
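A sketch of the single-cache case: a small set of recently invalidated addresses is consulted before the tag and data arrays, and a hit there bypasses the cache entirely. Buffer sizing, aging, and the refill rule are simplified assumptions.

```python
class CacheWithInvalBuffer:
    def __init__(self):
        self.lines = {}
        self.inval_buffer = set()  # addresses recently invalidated
        self.tag_lookups = 0       # counts interrogations of the tag array

    def fill(self, addr, data):
        self.inval_buffer.discard(addr)  # a refill supersedes the invalidation
        self.lines[addr] = data

    def invalidate(self, addr):
        self.lines.pop(addr, None)
        self.inval_buffer.add(addr)

    def read(self, addr):
        if addr in self.inval_buffer:    # known-invalid: skip tag/data arrays
            return "bypass-to-memory"
        self.tag_lookups += 1
        return "hit" if addr in self.lines else "miss"
```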
-
Publication number: 20020069327Abstract: A digital system and method of operation is provided in which several processing resources (340) and processors (350) are connected to a shared translation lookaside buffer (TLB) (300, 310(n)) of a memory management unit (MMU) and thereby access memory and devices. These resources can be instruction processors, coprocessors, DMA devices, etc. Each entry location in the TLB is filled during the normal course of action by a set of translated address entries (308, 309) along with qualifier fields (301, 302, 303) that are incorporated with each entry. Operations can be performed on the TLB that are qualified by the various qualifier fields. A command (360) is sent by an MMU manager to the control circuitry of the TLB (320) during the course of operation. Commands are sent as needed to flush (invalidate), lock or unlock selected entries within the TLB. Each entry in the TLB is accessed (362, 368) and the qualifier field specified by the operation command is evaluated (364).Type: ApplicationFiled: August 17, 2001Publication date: June 6, 2002Inventor: Gerard Chauvel
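A qualified TLB operation of the kind described, such as flushing all entries belonging to one task, can be sketched as a pass over the entries that evaluates the qualifier field named by the command. The entry layout and field names below are illustrative.

```python
def flush_entries(tlb, field, value):
    """Visit each TLB entry, evaluate the named qualifier field, and
    invalidate (drop) every entry that matches the command's value."""
    return [e for e in tlb if e[field] != value]

tlb = [
    {"vpage": 0x1, "ppage": 0xA, "task_id": 7, "resource_id": 1},
    {"vpage": 0x2, "ppage": 0xB, "task_id": 3, "resource_id": 1},
    {"vpage": 0x3, "ppage": 0xC, "task_id": 7, "resource_id": 2},
]
tlb = flush_entries(tlb, "task_id", 7)  # flush everything owned by task 7
```

Lock and unlock commands would follow the same visit-and-evaluate shape, setting a lock bit instead of dropping the entry.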
-
Patent number: 6401187Abstract: The present invention provides a memory access optimizing method which judges an access method suitable for each of memory accesses and executes the preload optimization and prefetch optimization, according to the judgement result, for an architecture equipped with a prefetch mechanism to write the data on a main storage device into a cache memory and a preload mechanism to write the data on the main storage device into a register without writing it into the cache memory. The memory access method judging step analyzes whether or not there is a designation of a memory access method by a user. Moreover, the memory access method judging step investigates whether or not the data are already in a cache memory, whether or not the data compete with other data for a cache, whether or not the data are to be referred to again later, and whether or not the data fulfill the restriction on register resources.Type: GrantFiled: June 12, 2000Date of Patent: June 4, 2002Assignee: Hitachi, Ltd.Inventors: Keiko Motokawa, Hiroyasu Nishiyama, Sumio Kikuchi
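The judging step enumerated in this abstract can be sketched as a decision function: preload (memory to register, bypassing the cache) when the data would conflict in the cache or is not referred to again, prefetch (memory to cache) when it will be reused. Every predicate here stands in for a compiler analysis and is a simplification of the abstract.

```python
def choose_access(user_hint, in_cache, conflicts, reused_later, regs_free):
    """Pick an access method per reference: a user designation wins;
    data already cached needs neither; otherwise trade off cache
    conflicts, reuse, and the restriction on register resources."""
    if user_hint:
        return user_hint       # explicit designation by the user
    if in_cache:
        return "none"          # data already resident in the cache
    if (conflicts or not reused_later) and regs_free:
        return "preload"       # write to a register, bypass the cache
    return "prefetch"          # write into the cache for later reuse
```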
-
Publication number: 20020065989Abstract: A multiprocessor system (20, 102, 110) uses multiple operating systems or a single operating system uses μTLBs (36) and a shared TLB subsystem (48) to provide efficient and flexible translation of virtual addresses to physical addresses. Upon misses in the μTLB and shared TLB, access to a translation table in external memory (54) can be made using either a hardware mechanism (100) or a software function. The translation can be flexibly based on a number of criteria, such as a resource identifier and a task identifier. Slave processors, such as coprocessors (34) and DMA processors (24) can access the shared TLB (48) without master processor interaction for more efficient operation.Type: ApplicationFiled: August 17, 2001Publication date: May 30, 2002Inventors: Gerard Chauvel, Dominique D'Inverno, Serge Lasserre
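The two-level lookup can be sketched as: miss in the per-processor μTLB falls back to the shared TLB, and a miss there triggers a walk of the translation table in external memory, keyed here by resource and task identifiers as the abstract suggests. The dict-based model and key layout are illustrative assumptions.

```python
def translate(utlb, shared_tlb, walk_table, vaddr, resource_id, task_id):
    """Return (physical address, level that resolved it); refill the
    faster levels on the way back, as a hardware walker would."""
    key = (vaddr, resource_id, task_id)
    if key in utlb:
        return utlb[key], "uTLB"
    if key in shared_tlb:
        utlb[key] = shared_tlb[key]      # refill the uTLB
        return shared_tlb[key], "sharedTLB"
    paddr = walk_table[key]              # table walk on a double miss
    shared_tlb[key] = paddr
    utlb[key] = paddr
    return paddr, "walk"

utlb, stlb = {}, {}
table = {(0x1000, 1, 7): 0xA000}
```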