Write-through Patents (Class 711/142)
-
Publication number: 20040193808. Abstract: In a processor module having a local software-visible data memory and a write-through cache connected to an external memory space external to the processor module over a bus, a method and apparatus for supplementing the local software-visible data memory utilizing the write-through cache is disclosed, which may comprise: a processor bus interface and memory management unit adapted to detect a processor write operation to a preselected location in the external memory space that is not currently a cached address line and that will cause a cache miss, to decode the write operation to the preselected external memory space location as a RAM emulation write operation, and to place in the cache pseudo data at the respective address line in the cache, without executing a fetch and store from the actual external memory location in response to the cache miss. Type: Application. Filed: March 28, 2003. Publication date: September 30, 2004. Applicant: EMULEX CORPORATION. Inventor: Thomas Vincent Spencer.
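A minimal sketch of the idea in this abstract: a write miss that falls inside a preselected "RAM emulation" window allocates the cache line with pseudo data instead of first fetching from external memory. All names and sizes here (EMU_BASE, LINE_SIZE, and so on) are illustrative assumptions, not taken from the patent.

```python
LINE_SIZE = 32
EMU_BASE, EMU_LIMIT = 0x1000, 0x2000  # preselected external-memory window (assumed)

class WriteThroughCache:
    def __init__(self):
        self.lines = {}            # line address -> bytearray of line data
        self.external_fetches = 0  # count of fetches from external memory

    def _line_addr(self, addr):
        return addr & ~(LINE_SIZE - 1)

    def write(self, addr, value):
        line = self._line_addr(addr)
        if line not in self.lines:
            if EMU_BASE <= addr < EMU_LIMIT:
                # RAM-emulation miss: fill the line with pseudo data, no fetch
                self.lines[line] = bytearray(LINE_SIZE)
            else:
                # ordinary write-through miss: model a fetch from external memory
                self.external_fetches += 1
                self.lines[line] = bytearray(LINE_SIZE)
        self.lines[line][addr - line] = value

cache = WriteThroughCache()
cache.write(0x1004, 0xAB)  # inside the emulated window: no external fetch
cache.write(0x3000, 0xCD)  # outside the window: miss triggers a modelled fetch
```

The write to the emulated region supplements local data memory using only cache storage, which is the effect the abstract describes.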
-
Patent number: 6789167. Abstract: A multiple-processor integrated circuit has convertible cache modules capable of operating in a local memory mode and a cache mode associated with at least one of its multiple processors. The integrated circuit also has at least one peripheral-specific apparatus for interfacing at least one of its processors to common peripheral devices. At least one processor is capable of operating as a general-purpose processor when the convertible cache is operated in the cache mode, and as a processor of an intelligent peripheral when the convertible cache is operated in the local memory mode. Type: Grant. Filed: March 6, 2002. Date of Patent: September 7, 2004. Assignee: Hewlett-Packard Development Company, L.P. Inventor: Samuel Naffziger.
-
Patent number: 6785775. Abstract: A method of and apparatus for improving the scheduling efficiency of a data processing system using the facilities which maintain coherency of the system's level cache memories. These efficiencies result from monitoring the cache memory lines which indicate invalidation of a cache memory entry because of a storage operation within backing memory. This invalidity signal is utilized to generate a doorbell-type interface indication of a new application entry within the work queue. Type: Grant. Filed: March 19, 2002. Date of Patent: August 31, 2004. Assignee: Unisys Corporation. Inventor: Robert M. Malek.
-
Patent number: 6785774. Abstract: A multiprocessor data processing system comprising a plurality of processing units, a plurality of caches, each affiliated with one of the processing units, and processing logic that, responsive to a receipt of a first system bus response to a coherency operation, causes the requesting processor to execute operations utilizing super-coherent data. The data processing system further includes logic eventually returning to coherent operations with other processing units responsive to an occurrence of a pre-determined condition. The coherency protocol of the data processing system includes a first coherency state that indicates that modification of data within a shared cache line of a second cache of a second processor has been snooped on a system bus of the data processing system. When the cache line is in the first coherency state, subsequent requests for the cache line are issued as a Z1 read on a system bus and one of two responses is received. Type: Grant. Filed: October 16, 2001. Date of Patent: August 31, 2004. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams.
-
Publication number: 20040162950. Abstract: Apparatus and methods relating to a cache coherency administrator. The cache coherency administrator can include a display to indicate a cache coherency status of a non-volatile cache. Type: Application. Filed: December 22, 2003. Publication date: August 19, 2004. Inventor: Richard L. Coulson.
-
Patent number: 6779086. Abstract: A multiprocessor data processing system comprising, in addition to a first and second processor having a respective first and second cache and a main cache directory affiliated with the first processor's cache, a secondary cache directory of the first cache, which contains a subset of cache line addresses from the main cache directory corresponding to cache lines that are in a first or second coherency state, where the second coherency state indicates to the first processor that requests issued from the first processor for a cache line whose address is within the secondary directory should utilize super-coherent data currently available in the first cache and should not be issued on the system interconnect. Additionally, the cache controller logic includes a clear on barrier flag (COBF) associated with the secondary directory, which is set whenever an operation of the first processor is issued to said system interconnect. Type: Grant. Filed: October 16, 2001. Date of Patent: August 17, 2004. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams.
-
Publication number: 20040148473. Abstract: A data processing system (100, 600) has a memory hierarchy including a cache (124, 624) and a lower-level memory system (170, 650). A data element having a special write with inject attribute is received from a data producer (160, 640), such as an Ethernet controller. The data element is forwarded to the cache (124, 624) without accessing the lower-level memory system (170, 650). Subsequently at least one cache line containing the data element is updated in the cache (124, 624). Type: Application. Filed: January 27, 2003. Publication date: July 29, 2004. Inventors: William A. Hughes, Patrick Conway.
-
Patent number: 6763435. Abstract: A method for improving performance of a multiprocessor data processing system comprising snooping, by a second processor whose cache contains an updated copy of a shared cache line, a request for data held within the shared cache line on a system bus of the data processing system, and, responsive to a snoop of the request by the second processor, issuing a first response on the system bus indicating to the requesting processor that the requesting processor may utilize data currently stored within the shared cache line of a cache of the requesting processor. When the request is snooped by the second processor and the second processor decides to release a lock on the cache line to the requesting processor, the second processor issues a second response on the system bus indicating that the first processor should utilize new/coherent data, and then the second processor releases the lock to the first processor. Type: Grant. Filed: October 16, 2001. Date of Patent: July 13, 2004. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams.
-
Patent number: 6760807. Abstract: An adaptive write policy for handling host write commands to write-back system drives in a dual-active controller environment. A method for an adaptive write policy in a data storage system, where the data storage system includes a host system connected to a primary controller and an alternate controller. The controllers are coupled to a system drive that includes one or more disk storage devices. The primary controller is connected to a first memory and the alternate controller is connected to a second memory. The primary and alternate controllers manage the data storage system in a dual-active configuration. The primary controller receives a host write command from the host system, and the write data request includes host write data. When the system drive is configured with a write-back policy, the primary controller determines whether the host write command encompasses an entire RAID stripe; if so, the primary controller processes the host write command in accordance with a write-through policy. Otherwise, the primary controller processes the command in accordance with the write-back policy. Type: Grant. Filed: November 14, 2001. Date of Patent: July 6, 2004. Assignee: International Business Machines Corporation. Inventors: William A. Brant, William G. Deitz, Michael E. Nielson, Joseph G. Skazinski.
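The policy decision in this abstract can be sketched in a few lines: a host write that exactly covers one or more whole RAID stripes is handled write-through, and anything smaller falls back to write-back. The stripe geometry here is an assumption for illustration.

```python
STRIPE_SIZE = 64 * 1024  # bytes per full RAID stripe (assumed)

def choose_write_policy(offset, length):
    """Return 'write-through' iff [offset, offset + length) spans
    one or more whole stripes exactly; otherwise 'write-back'."""
    covers_full_stripes = (
        offset % STRIPE_SIZE == 0      # starts on a stripe boundary
        and length > 0
        and length % STRIPE_SIZE == 0  # ends on a stripe boundary
    )
    return "write-through" if covers_full_stripes else "write-back"
```

The rationale is that a full-stripe write needs no read-modify-write of parity, so pushing it straight through avoids holding it in controller memory for no benefit.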
-
Patent number: 6757790. Abstract: The data storage facility includes a plurality of data storage devices coupled through multi-path connections to cache memory. A plurality of interfaces to host processors communicates with the cache memory and with cache tag controllers that define the cache memory, again over multiple paths. Type: Grant. Filed: February 19, 2002. Date of Patent: June 29, 2004. Assignee: EMC Corporation. Inventors: Steven R. Chalmer, Steven T. McClure, Brett D. Niver, Richard G. Wheeler.
-
Patent number: 6745294. Abstract: A method is provided for cache flushing in a computer system having a processor, a cache, a synchronization primitive detector, and a cache flush engine. The method includes providing a synchronization primitive from the processor into the computer system; detecting the synchronization primitive in the synchronization primitive detector; providing a trigger signal from the synchronization primitive detector in response to detection of the synchronization primitive; providing cache information from the recall unit into the computer system in response to the trigger signal; and flushing the cache in response to the cache information in the computer system. Type: Grant. Filed: June 8, 2001. Date of Patent: June 1, 2004. Assignee: Hewlett-Packard Development Company, L.P. Inventors: Kenneth Mark Wilson, Fong Pong, Lance Russell, Tung Nguyen, Lu Xu.
-
Publication number: 20040093468. Abstract: An arrangement and method for update of configuration cache data in a disk storage subsystem, in which a cache memory (110) is updated using a two-phase (220, 250) commit technique. Type: Application. Filed: June 20, 2003. Publication date: May 13, 2004. Applicant: International Business Machines Corporation. Inventors: David John Carr, Michael John Jones, Andrew Key, Robert Bruce Nicholson, William James Scales, Barry Douglas Whyte.
-
Patent number: 6732124. Abstract: A data processing system having an efficient logging mechanism which stores log records for repairing a file system when its consistency is lost. When there is a transaction attempting to update metadata stored in metadata volumes, a metadata loading unit reads the requested metadata objects out of the volumes and loads them to a metadata cache. At that time, a metadata manager updates its internal database to record from which metadata volume each metadata object has been fetched. Each time the transaction updates a metadata object in the cache, a log collection unit collects a copy of the updated metadata object, together with a volume ID which indicates its home metadata volume. The collected data is temporarily stored in a log buffer, and finally saved into a log volume by a log writing unit. Type: Grant. Filed: February 9, 2000. Date of Patent: May 4, 2004. Assignee: Fujitsu Limited. Inventors: Michihiko Koseki, Mamoru Yokoyama, Masashi Sumi, Satoru Yamaguchi, Sadayoshi Taniwaki, Seishiro Hamanaka.
-
Patent number: 6725342. Abstract: Apparatus and methods relating to a cache coherency administrator. The cache coherency administrator can include a display to indicate a cache coherency status of a non-volatile cache. Type: Grant. Filed: September 26, 2000. Date of Patent: April 20, 2004. Assignee: Intel Corporation. Inventor: Richard L. Coulson.
-
Patent number: 6725341. Abstract: The invention provides a cache management system comprising, in various embodiments, pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, which is used such that invalidated cache lines recorded in the invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set. Type: Grant. Filed: June 28, 2000. Date of Patent: April 20, 2004. Assignee: Intel Corporation. Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang.
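A toy model of the invalidation-history mechanism described above: addresses of invalidated lines are remembered in a small table, and when a matching line address later appears on the bus, the cache treats that as a cue to reload ("pre-load") the line. The fixed table capacity and FIFO eviction are assumptions for the sketch.

```python
from collections import OrderedDict

class InvalidationHistoryTable:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.table = OrderedDict()  # line address -> True (insertion-ordered)

    def record_invalidation(self, line_addr):
        """Remember that this line was invalidated (dirty or clean)."""
        self.table[line_addr] = True
        self.table.move_to_end(line_addr)
        if len(self.table) > self.capacity:
            self.table.popitem(last=False)  # FIFO eviction of oldest entry

    def snoop(self, line_addr):
        """Return True if a snooped bus address matches a recorded
        invalidation and should therefore trigger a reload."""
        return self.table.pop(line_addr, False) is True

iht = InvalidationHistoryTable()
iht.record_invalidation(0x80)  # line 0x80 was invalidated earlier
```

A real implementation would be a small associative hardware structure; the dictionary here only models the lookup behavior.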
-
Patent number: 6711650. Abstract: A method for accelerating input/output operations within a data processing system is disclosed. Initially, a determination is made in a cache controller as to whether or not a bus operation is a data transfer from a first memory to a second memory without intervening communications through a processor, such as a direct memory access (DMA) transfer. If the bus operation is such a data transfer, a determination is made in a cache memory as to whether or not the cache memory includes a copy of data from the data transfer. If the cache memory does not include a copy of data from the data transfer, a cache line is allocated within the cache memory to store a copy of data from the data transfer. Type: Grant. Filed: November 7, 2002. Date of Patent: March 23, 2004. Assignee: International Business Machines Corporation. Inventors: Patrick Joseph Bohrer, Ramakrishnan Rajamony, Hazim Shafi.
-
Patent number: 6704844. Abstract: A method for increasing performance optimization in a multiprocessor data processing system. A number of predetermined thresholds are provided within a system controller logic and utilized to trigger specific bandwidth utilization responses. Both address bus and data bus bandwidth utilization are monitored. Responsive to a fall of the percentage of data bus bandwidth utilization below a first predetermined threshold value, the system controller provides a particular response to a request for a cache line at a snooping processor having the cache line, where the response indicates to a requesting processor that the cache line will be provided. Conversely, if the percentage of data bus bandwidth utilization rises above a second predetermined threshold value, the system controller provides a next response to the request that indicates to any requesting processors that the requesting processor should utilize super-coherent data which is currently within its local cache. Type: Grant. Filed: October 16, 2001. Date of Patent: March 9, 2004. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, Guy Lynn Guthrie, William J. Starke, Derek Edward Williams.
-
Patent number: 6681311. Abstract: A translation lookaside buffer (TLB) that caches memory types of memory address ranges. A data unit includes a TLB which, in addition to caching page table entries including translated page base addresses of virtual page numbers as in a conventional TLB, also caches memory address range memory types provided by a memory type unit (MTU). In the case of a hit of a virtual address in the TLB, the TLB provides the memory type along with the page table entry, thereby avoiding the need for a serialized access to the MTU using the physical address output by the TLB. Logic which controls a processor bus access necessitated by the virtual address makes use of the memory type output by the TLB sooner than would be available from the MTU in conventional data units. If the MTU is updated, the TLB is flushed to ensure consistency of memory type values. Type: Grant. Filed: July 18, 2001. Date of Patent: January 20, 2004. Assignee: IP-First, LLC. Inventors: Darius D. Gaskins, G. Glenn Henry, Rodney E. Hooker.
-
Patent number: 6675270. Abstract: A method and system that enables independent burst lengths for reads and writes to a DRAM subsystem. Specifically, the method provides a mechanism by which read bursts may be longer than write bursts, since there are statistically more reads than writes to the DRAM and only some beats of read data are modified and need to be re-written to memory. In the preferred embodiment, the difference in burst lengths is controlled by an architected address tenure, i.e., a set of bits added to the read and write commands that specify the specific number of beats to read and/or write. The bits are set by the processor during generation of the read and write commands and prior to forwarding the commands to the memory controller for execution. Type: Grant. Filed: April 26, 2001. Date of Patent: January 6, 2004. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, Warren Edward Maule.
-
Patent number: 6675262. Abstract: A cache coherent distributed shared memory multi-processor computer system is provided with a memory controller which includes a recall unit. The recall unit allows selective forced write-backs of dirty cache lines to the home memory. After a request is posted in the recall unit, a recall ("flush") command is issued which forces the owner cache to write back the dirty cache line to be flushed. The memory controller will inform the recall unit as each recall operation is completed. The recall unit operation will be interrupted when all flush requests are completed. Type: Grant. Filed: June 8, 2001. Date of Patent: January 6, 2004. Assignee: Hewlett-Packard Company, L.P. Inventors: Kenneth Mark Wilson, Fong Pong, Lance Russell, Tung Nguyen, Lu Xu.
-
Patent number: 6662275. Abstract: A method of maintaining coherency in a cache hierarchy of a processing unit of a computer system, wherein the upper level (L1) cache includes a split instruction/data cache. In one implementation, the L1 data cache is store-through, and each processing unit has a lower level (L2) cache. When the lower level cache receives a cache operation requiring invalidation of a program instruction in the L1 instruction cache (i.e., a store operation or a snooped kill), the L2 cache sends an invalidation transaction (e.g., icbi) to the instruction cache. The L2 cache is fully inclusive of both instructions and data. In another implementation, the L1 data cache is write-back, and a store address queue in the processor core is used to continually propagate pipelined address sequences to the lower levels of the memory hierarchy, i.e., to an L2 cache or, if there is no L2 cache, then to the system bus. If there is no L2 cache, then the cache operations may be snooped directly against the L1 instruction cache. Type: Grant. Filed: February 12, 2001. Date of Patent: December 9, 2003. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie.
-
Publication number: 20030204702. Abstract: A method of operating a processing device is provided. The method includes defining an effective memory address space with a cached address space and a non-cached address space, wherein the cached address space and the non-cached address space each translate to overlap a single physical memory space. Further, the method includes accessing a memory address of the physical memory space without accessing a cache memory system from the non-cached effective address space. The method also includes accessing a memory address of the physical memory space from the cached effective address space with the benefit of the cache memory system. Type: Application. Filed: April 30, 2002. Publication date: October 30, 2003. Applicant: ADC DSL Systems, Inc. Inventors: Charles Weston Lomax, Melvin Richard Phillips, Jefferson Logan Holt, James Xavier Torok.
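An illustrative address map in the spirit of this abstract: one physical space is reachable through two effective-address windows, with a single high address bit (an assumption here, not from the publication) selecting cached versus uncached access while both aliases translate to the same physical address.

```python
UNCACHED_BIT = 1 << 31  # assumption: bit 31 selects the uncached alias

def to_physical(effective_addr):
    """Both the cached and uncached aliases translate to the same
    physical address: the alias-select bit is simply masked off."""
    return effective_addr & ~UNCACHED_BIT

def is_cached_access(effective_addr):
    """An access is cached iff the alias-select bit is clear."""
    return (effective_addr & UNCACHED_BIT) == 0
```

This kind of aliasing lets software bypass the cache for device buffers or DMA regions simply by choosing which window it addresses, with no mode switch.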
-
Patent number: 6629205. Abstract: A cache memory includes a plurality of memory chips, or other separately addressable memory sections, which are configured to collectively store a plurality of cache lines. Each cache line includes data and an associated cache tag. The cache tag may include an address tag which identifies the line as well as state information indicating the coherency state for the line. Each cache line is stored across the memory chips in a row formed by corresponding entries (i.e., entries accessed using the same index address). The plurality of cache lines is grouped into separate subsets based on index addresses, thereby forming several separate classes of cache lines. The cache tags associated with cache lines of different classes are stored in different memory chips. During operation, the cache controller may receive multiple snoop requests corresponding to, for example, transactions initiated by various processors. Type: Grant. Filed: February 23, 2001. Date of Patent: September 30, 2003. Assignee: Sun Microsystems, Inc. Inventor: Robert Cypher.
-
Publication number: 20030120877. Abstract: A single chip, embedded symmetric multiprocessor (ESMP) having a parallel multiprocessing architecture composed of identical processors allows application developers to write code for the single processor case. This code can be ported to a multiprocessor platform with minimal changes. The application boot or kernel code must be modified to support the parallel processing platform. However, the hardware architecture allows the program stream, at the application task or process level, to be divided among different central processing units without change to the application code. The embedded symmetric processor system includes central processing units, complex memory architectures and a wide range of peripheral devices on the single integrated circuit. Such a system normally also includes an interface to large amounts of external memory. Special hardware manages central processing unit interactions with program memory, data memory and peripherals. Type: Application. Filed: September 27, 2002. Publication date: June 26, 2003. Inventor: Steven R. Jahnke.
-
Patent number: 6581112. Abstract: A direct memory access (DMA) receiver adapted to receive data from a source, such data to be written into a random access memory, is provided. The random access memory and DMA receiver are coupled to a central processing unit by a bus. The central processing unit is coupled to a local cache memory. The source of such data provides an address for the data, such address being the location in the random access memory where the data is to be stored. The DMA receiver includes an address register, a first data register and a duplicate data register. The duplicate data register has an input coupled to an output of the first data register. A selector is provided having a pair of inputs, one being coupled to the output of the first data register and another one of the pair of inputs being coupled to an output of the duplicate data register. The selector couples one of the pair of inputs to an output thereof selectively in accordance with a select signal. A state machine is included in the DMA receiver. Type: Grant. Filed: March 31, 2000. Date of Patent: June 17, 2003. Assignee: EMC Corporation. Inventors: Avinash Kallat, Robert Thibault.
-
Patent number: 6557082. Abstract: A method, apparatus and article of manufacture for ensuring cache coherency in a database containing a data store on a central data storage device connected to a plurality of computers. When an immediate write option is set, the data in a local buffer pool changed by a first transaction on a first computer is immediately written to a group buffer pool at the central data storage device, prior to initiating a second transaction upon a second computer that relies upon the modified data. Local buffer pools are then invalidated, thereby facilitating local buffer pool updates from the group buffer pool. The immediate write (IW) option may be a subsystem parameter set at a system level or a bind option set at a plan level. The immediate write option may be set so that data is written to the group buffer pool at or before a phase one commit. Type: Grant. Filed: March 30, 2000. Date of Patent: April 29, 2003. Assignee: International Business Machines Corporation. Inventors: Jeffrey William Josten, James Zu-Chia Teng.
-
Patent number: 6549988. Abstract: A data storage system comprising a network of PCs, each of which includes a cache memory, an I/O channel adapter for transmitting data over the channel and a network adapter for transmitting control signals and data over the network. In one embodiment, a method for managing resources in a cache manager ensures consistency of data stored in the distributed cache. In another embodiment, a method for sharing data between two or more heterogeneous hosts includes the steps of: reading a record in a format compatible with one computer; identifying a translation module with the second computer; translating the record into a format compatible with the second computer; and writing said translated record into a cache memory. Type: Grant. Filed: January 22, 1999. Date of Patent: April 15, 2003. Inventor: Ilya Gertner.
-
Patent number: 6502168. Abstract: According to the present invention, a data processing system includes a cache having a cache directory. A status indication indicative of the status of at least one of a plurality of data entries in the cache is stored in the cache directory. In response to receipt of a cache operation request, a determination is made whether to update the status indication. In response to the determination that the status indication is to be updated, the status indication is copied into a shadow register and updated. The status indication is then written back into the cache directory at a later time. The shadow register thus serves as a virtual cache controller queue that dynamically mimics a cache directory entry without functional latency. Type: Grant. Filed: September 23, 1999. Date of Patent: December 31, 2002. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis.
-
Publication number: 20020188801. Abstract: A data storage system and method to promote integrity of data written, optionally using a cache with a buffer to persistent memory. Health of a computer system is dynamically monitored. If good health is present during a data write operation, the cache system is configured so data is buffered within the cache, thus promoting faster performance. But if good health is absent, the cache system is configured to write through to persistent memory, thus trading off speed for integrity of the data stored. Optionally, a multi-level cache hierarchy system can be used as the caching system. Type: Application. Filed: March 29, 2002. Publication date: December 12, 2002. Applicant: Intransa, Inc., a Delaware Corporation. Inventor: Henry J. Green.
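A sketch of the health-driven policy above: while a monitored health predicate holds, writes are merely buffered (write-back); when it fails, pending data is flushed and subsequent writes go straight through to persistent storage. The health callback and the dict standing in for persistent storage are both assumptions of the sketch.

```python
class HealthAwareCache:
    def __init__(self, healthy, storage):
        self.healthy = healthy  # zero-argument health predicate
        self.storage = storage  # persistent backing store (stand-in)
        self.buffer = {}        # dirty data awaiting write-back

    def write(self, key, value):
        if self.healthy():
            self.buffer[key] = value   # fast path: buffer in the cache only
        else:
            self.flush()               # degraded: drain any buffered data
            self.storage[key] = value  # then write through to persistence

    def flush(self):
        self.storage.update(self.buffer)
        self.buffer.clear()

disk = {}
state = {"ok": True}
cache = HealthAwareCache(lambda: state["ok"], disk)
cache.write("a", 1)   # healthy: buffered, not yet persisted
state["ok"] = False   # health degrades
cache.write("b", 2)   # flushed and written through
```

The trade-off is exactly the one the abstract names: buffering gives speed while health is good; write-through gives durability once it is not.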
-
Patent number: 6484242. Abstract: A cache access control system for dynamically conducting specification of dedicated and common regions and thereby always conducting optimum cache coherency control. In a processor, an L1 cache including an L1 data array and a directory is provided. A plurality of L2 caches are connected to each L1 cache. The L2 caches are connected to a main memory L3. An L2 cache history manager is supplied with L2 cache status information and an L2 cache access request from the L2 caches. The L2 cache history manager judges an attribute (a dedicated region or a common region) of each line of L2. On the basis of the attribute, a cache coherency manager conducts coherency control of each L2 cache by using an invalidation-type protocol or an update-type protocol. The attribute is judged to be the common region only in the case where a line shared by a plurality of L2 caches in the past is canceled once by the invalidation-type protocol and then accessed again. Type: Grant. Filed: March 16, 2001. Date of Patent: November 19, 2002. Assignee: Hitachi, Ltd. Inventors: Mutsumi Hosoya, Michitaka Yamamoto.
-
Publication number: 20020112124. Abstract: A method of maintaining coherency in a cache hierarchy of a processing unit of a computer system, wherein the upper level (L1) cache includes a split instruction/data cache. In one implementation, the L1 data cache is store-through, and each processing unit has a lower level (L2) cache. When the lower level cache receives a cache operation requiring invalidation of a program instruction in the L1 instruction cache (i.e., a store operation or a snooped kill), the L2 cache sends an invalidation transaction (e.g., icbi) to the instruction cache. The L2 cache is fully inclusive of both instructions and data. In another implementation, the L1 data cache is write-back, and a store address queue in the processor core is used to continually propagate pipelined address sequences to the lower levels of the memory hierarchy, i.e., to an L2 cache or, if there is no L2 cache, then to the system bus. If there is no L2 cache, then the cache operations may be snooped directly against the L1 instruction cache. Type: Application. Filed: February 12, 2001. Publication date: August 15, 2002. Applicant: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie.
-
Patent number: 6418514. Abstract: A method of avoiding deadlocks in a cache coherency protocol for a multi-processor computer system, by loading a memory value into a plurality of cache blocks, assigning a first coherency state having a higher collision priority to only one of the cache blocks, and assigning one or more additional coherency states having lower collision priorities to all of the remaining cache blocks. Different system bus codes can be used to indicate the priority of conflicting requests (e.g., DClaim operations) to modify the memory value. The invention also allows folding or elimination of redundant DClaim operations, and can be applied in a global versus local manner within a multi-processor computer system having processing units grouped into at least two clusters. Type: Grant. Filed: February 17, 1998. Date of Patent: July 9, 2002. Assignee: International Business Machines Corporation. Inventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis.
-
Patent number: 6415362. Abstract: A method and system for performing write-through store operations of valid data of varying sizes in a data processing system, where the data processing system includes multiple processors that are coupled to an interconnect through a memory hierarchy, where the memory hierarchy includes multiple levels of cache, and where at least one lower level of cache of the multiple levels of cache requires store operations of all valid data of at least a predetermined size. First, it is determined whether or not a write-through store operation is a cache hit in a higher level of cache of the multiple levels of cache. In response to a determination that a cache hit has occurred in the higher level of cache, the write-through store operation is merged with data read from the higher level of cache to provide a merged write-through operation of all valid data of at least the predetermined size to a lower level of cache. Type: Grant. Filed: April 29, 1999. Date of Patent: July 2, 2002. Assignees: International Business Machines Corporation, Motorola, Inc. Inventors: James Nolan Hardage, Alexander Edward Okpisz, Thomas Albert Petersen.
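The merge step this abstract describes can be shown at the byte level: a narrow write-through store that hits in the upper cache is overlaid on the cached line data, so the lower level always receives a full-width store. The 8-byte width is an assumption for illustration.

```python
LINE = 8  # bytes the lower level requires per store (assumed)

def merge_store(cached_line, offset, data):
    """Overlay a narrow store onto a copy of the cached line,
    producing a full-width store for the lower level of cache."""
    assert len(cached_line) == LINE and offset + len(data) <= LINE
    merged = bytearray(cached_line)           # data read from the upper cache
    merged[offset:offset + len(data)] = data  # merge in the store data
    return bytes(merged)

line = bytes(range(8))                     # current upper-level cache line
full = merge_store(line, 2, b"\xff\xff")   # 2-byte store at offset 2
```

Only the stored bytes change; the surrounding bytes are filled from the cache hit, satisfying the lower level's minimum-size requirement without a read from it.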
-
Patent number: 6373779. Abstract: A dedicated block random access memory (RAM) is provided for a programmable logic device (PLD), such as a field programmable gate array (FPGA). The block RAM includes a memory cell array and control logic that is configurable to select one of a plurality of write modes for accessing the memory cell array. In one embodiment, the write modes include a write with write-back mode, a write without write-back mode, and a read then write mode. The control logic selects the write mode in response to configuration bits stored in corresponding configuration memory cells of the PLD. The configuration bits are programmed during configuration of the PLD. In one variation, the control logic selects the write mode in response to user signals. In a particular embodiment, the block RAM is a dual-port memory having a first port and a second port. In this embodiment, the first and second ports can be independently configured to have different (or the same) write modes. Type: Grant. Filed: May 19, 2000. Date of Patent: April 16, 2002. Assignee: Xilinx, Inc. Inventors: Raymond C. Pang, Steven P. Young.
-
Patent number: 6366984Abstract: An apparatus providing a write combining buffer that supports snoop requests includes a first cache memory and a second cache memory. The apparatus also includes a write combining buffer, coupled to the first and second cache memories, to combine data from a plurality of store operations. Each of the plurality of store operations is to at least a part of a cache line, and the write combining buffer can be snooped in response to requests initiated external to the apparatus.Type: GrantFiled: May 11, 1999Date of Patent: April 2, 2002Assignee: Intel CorporationInventors: Douglas M. Carmean, Brent E. Lince
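A minimal model of the combining-plus-snooping behavior: several partial stores to the same line accumulate in one buffer entry, and an external request can probe the buffer by address. The class layout, line size, and per-byte valid mask are assumptions for illustration; stores are assumed not to cross a line boundary.

```python
class WriteCombiningBuffer:
    def __init__(self, line_size=16):
        self.line_size = line_size
        self.lines = {}  # line base address -> (bytearray data, valid-byte mask)

    def store(self, addr, data: bytes):
        """Combine a partial store into the entry for its cache line."""
        base, off = addr - addr % self.line_size, addr % self.line_size
        buf, mask = self.lines.setdefault(
            base, (bytearray(self.line_size), [False] * self.line_size))
        buf[off:off + len(data)] = data
        for i in range(off, off + len(data)):
            mask[i] = True

    def snoop(self, addr):
        """External snoop: return the combined entry if the address hits."""
        return self.lines.get(addr - addr % self.line_size)

wcb = WriteCombiningBuffer()
wcb.store(0x100, b'\x01\x02')   # two partial stores to the same line...
wcb.store(0x104, b'\x03')       # ...land in a single combined entry
hit = wcb.snoop(0x105)          # a snoop anywhere in the line finds it
```

The point the abstract makes is that the buffer participates in coherency: a snoop initiated outside the apparatus sees the combined, not-yet-written data.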
-
Patent number: 6360301Abstract: A lower level cache detects when a line of memory has been evicted from a higher level cache. The cache coherency protocol for the lower level cache places the line into a special state. If a line in the special state is evicted from the lower level cache, the lower level cache knows that the line is not cached at a higher level, and therefore a back-invalidate transaction is not needed. Reducing the number of back-invalidate transactions improves the performance of the system.Type: GrantFiled: April 13, 1999Date of Patent: March 19, 2002Assignee: Hewlett-Packard CompanyInventors: Blaine D Gaither, Eric M Rentschler
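The eviction rule in this abstract reduces to a small state check: when the lower-level cache learns a line was evicted above, it moves the line to a special state, and evicting a line in that state needs no back-invalidate. The state names below are invented for the sketch.

```python
NOT_CACHED_ABOVE = "not_cached_above"      # the special state from the abstract
MAYBE_CACHED_ABOVE = "maybe_cached_above"  # default: line may still be in the L1

def on_l1_eviction(l2_states, addr):
    """Lower-level cache detects the higher-level eviction of this line."""
    if addr in l2_states:
        l2_states[addr] = NOT_CACHED_ABOVE

def evict_from_l2(l2_states, addr):
    """Evict a line; return True if a back-invalidate transaction is needed."""
    state = l2_states.pop(addr)
    return state != NOT_CACHED_ABOVE

l2 = {0x40: MAYBE_CACHED_ABOVE, 0x80: MAYBE_CACHED_ABOVE}
on_l1_eviction(l2, 0x40)
# evicting 0x40 now skips the back-invalidate; evicting 0x80 still needs one
```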
-
Patent number: 6351790Abstract: A cache coherency mechanism for a computer system having a plurality of processors, each for executing a sequence of instructions, at least one of the processors having a cache memory associated therewith. The computer system includes a memory that provides an address space where data items are stored for use by all of the processors. A behavior store holds in association with an address of each item, a cache behavior identifying the cacheable behavior of the item, the cacheable behaviors including a software coherent behavior and an automatically coherent behavior. When a cache coherency operation is instigated by a cache coherency instruction, the operation is effected dependent on the cacheable behavior associated with the specified address of the item. Methods for modifying the coherency status of a cache are also described.Type: GrantFiled: March 16, 1999Date of Patent: February 26, 2002Assignee: STMicroelectronics LimitedInventor: Andrew Michael Jones
-
Patent number: 6343346Abstract: A shared memory parallel processing system interconnected by a multi-stage network combines new system configuration techniques with special-purpose hardware to provide remote memory accesses across the network, while controlling cache coherency efficiently across the network. The system configuration techniques include a systematic method for partitioning and controlling the memory in relation to local versus remote accesses and changeable versus unchangeable data. Most of the special-purpose hardware is implemented in the memory controller and network adapter, which implements three send FIFOs and three receive FIFOs at each node to segregate and handle efficiently invalidate functions, remote stores, and remote accesses requiring cache coherency. The segregation of these three functions into different send and receive FIFOs greatly facilitates the cache coherency function over the network. In addition, the network itself is tailored to provide the best efficiency for remote accesses.Type: GrantFiled: March 1, 2000Date of Patent: January 29, 2002Assignee: International Business Machines CorporationInventor: Howard Thomas Olnowich
-
Patent number: 6343359Abstract: An apparatus is presented for expediting the execution of dependent micro instructions in a pipeline microprocessor having design characteristics—complexity, power, and timing—that are not significantly impacted by the number of stages in the microprocessor's pipeline. In contrast to conventional result distribution schemes where an intermediate result is distributed to multiple pipeline stages, the present invention provides a cache for storage of multiple intermediate results. The cache is accessed by a dependent micro instruction to retrieve required operands. The apparatus includes a result forwarding cache, result update logic, and operand configuration logic. The result forwarding cache stores the intermediate results. The result update logic receives the intermediate results as they are generated and enters the intermediate results into the result forwarding cache.Type: GrantFiled: May 18, 1999Date of Patent: January 29, 2002Assignee: IP-First, L.L.C.Inventors: Gerard M. Col, G. Glenn Henry
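The result forwarding cache described here can be sketched as a small map from destination register to latest intermediate result: update logic enters each result as it is generated, and a dependent micro instruction's operand configuration prefers a forwarded result over the architectural register file. The structure and names are assumptions, not the patented design.

```python
class ResultForwardingCache:
    def __init__(self):
        self.results = {}  # destination register -> latest intermediate result

    def update(self, dest_reg, value):
        """Result update logic: capture an intermediate result as generated."""
        self.results[dest_reg] = value

    def configure_operand(self, src_reg, register_file):
        """Operand configuration logic: prefer a forwarded intermediate result
        over the (possibly stale) register-file contents."""
        return self.results.get(src_reg, register_file[src_reg])

regs = {"r1": 0, "r2": 7}
fwd = ResultForwardingCache()
fwd.update("r1", 42)                          # ADD produced r1=42, not yet retired
operand = fwd.configure_operand("r1", regs)   # dependent instruction reads 42
```

Because dependents read from one cache rather than from per-stage forwarding buses, adding pipeline stages does not multiply the distribution paths, which is the design point the abstract emphasizes.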
-
Publication number: 20020002656Abstract: An information processing system and a multi-level hierarchical storage device for use in the information processing system having a plurality of instruction processors and a plurality of main storage devices. The multi-level hierarchical storage device includes a first-cache storage device of a write-through type provided for each instruction processor, a second-cache storage device of a write-back type provided for each main storage device, and a third-cache storage device of a write-through type provided between the first-cache storage device and the second-cache storage device.Type: ApplicationFiled: January 29, 1998Publication date: January 3, 2002Inventors: Ichiki Honma, Hiroshi Kurokawa, Toshiaki Kawamura, Eiji Nomura
-
Publication number: 20010025333Abstract: An integrated circuit memory device incorporating a non-volatile memory array and a relatively faster access time memory cache integrated monolithically therewith improves the overall in-page access time and provides a faster cycle time for read operations. In a particular embodiment, the cache may be provided as static random access memory (“SRAM”) and the non-volatile memory array provided as ferroelectric random access memory wherein on a read, the row is cached and the write-back cycle is started, allowing subsequent in-page reads to occur very quickly. If in-page accesses are sufficient, the memory array precharge may be hidden and writes can occur utilizing write-back or write-through caching.Type: ApplicationFiled: May 24, 2001Publication date: September 27, 2001Inventors: Craig Taylor, Donald G. Carrigan, Mike Alwais
-
Patent number: 6279085Abstract: A method for avoiding livelocks due to colliding writebacks within a NUMA computer system is disclosed. The NUMA computer system includes at least two nodes coupled to an interconnect. Each of the two nodes includes a local system memory. In response to an attempt by a processor located at a home node to access a modified cache line at a remote node via a memory request at substantially the same time that a processor located at the remote node attempts to writeback the modified cache line to the home node, the writeback is allowed to complete at the home node without retry only if the writeback is from what a coherency directory within the home node considers to be the owning node of the modified cache line. The memory request is then allowed to retry and completed at the home node.Type: GrantFiled: February 26, 1999Date of Patent: August 21, 2001Assignee: International Business Machines CorporationInventors: Gary Dale Carpenter, David Brian Glasco
-
Patent number: 6275908Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a system memory, a plurality of processors, and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the address tag is valid and that the first data item is invalid. If, while the coherency indicator is set to the first state, the first cache detects a data transfer on the interconnect associated with the address indicated by the address tag, where the data transfer includes a second data item that is modified with respect to a corresponding data item in the system memory, the second data item is stored in the first cache in association with the address tag.Type: GrantFiled: February 17, 1998Date of Patent: August 14, 2001Assignee: International Business Machines CorporationInventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis
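The coherency state this abstract introduces (address tag valid, data invalid) can be sketched as a cache entry that watches the interconnect and captures a modified data item transferred for its tagged address. State names and the class layout below are assumptions for illustration.

```python
TAG_VALID_DATA_INVALID = "tag_valid_data_invalid"  # the first state in the abstract
VALID = "valid"

class SnoopingEntry:
    def __init__(self, tag):
        self.tag = tag                    # address tag remains valid...
        self.data = None                  # ...while the data item is invalid
        self.state = TAG_VALID_DATA_INVALID

    def snoop_transfer(self, addr, data, modified):
        """Capture a modified data item seen on the interconnect for our tag."""
        if (self.state == TAG_VALID_DATA_INVALID and addr == self.tag
                and modified):
            self.data, self.state = data, VALID

entry = SnoopingEntry(0x40)
entry.snoop_transfer(0x40, b'\x99', modified=True)   # matching transfer is stored
```

The benefit is that the cache refreshes its copy from traffic it would have observed anyway, without issuing its own read.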
-
Patent number: 6272603Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a system memory, a plurality of processors, and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the address tag is valid and that the first data item is invalid. If, while the coherency indicator is set to the first state, the first cache receives a data transfer on the interconnect associated with the address indicated by the address tag, where the data transfer includes a second data item that is modified with respect to a corresponding data item in the system memory, the second data item is stored in the first cache in association with the address tag.Type: GrantFiled: February 17, 1998Date of Patent: August 7, 2001Assignee: International Business Machines CorporationInventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis
-
Patent number: 6260119Abstract: Isochronous information is transferred between an IO device and a first buffer (N) of a plurality of buffers in a system memory. The isochronous information stored in the plurality of buffers is also stored in a memory cache accessible to a system processor. The state of the memory cache is managed according to an isochronous “X-T” contract that is independent of the “X-T” contract with which data are moved between the IO device and system memory. Further, data associated with a given buffer are moved into and out of the memory cache substantially simultaneously with the transfer of isochronous information between the IO device and other buffers in the system memory.Type: GrantFiled: December 29, 1998Date of Patent: July 10, 2001Assignee: Intel CorporationInventors: John I. Garney, Brent S. Baxter
-
Patent number: 6199143Abstract: A method and apparatus in a computer system selectively stores CPU state related information in parallel in a first and a second set of registers. The two sets of registers can selectively transfer data in parallel therebetween to restore the CPU state related information used by the CPU. The second set of registers can be organized in a cascaded structure or in selective banks of registers to keep track of multiple CPU state related information such as during nested interrupts. The second set of registers can transfer data with a third data storage device asynchronously to the operation of the CPU.Type: GrantFiled: November 26, 1997Date of Patent: March 6, 2001Assignee: International Business Machines CorporationInventor: Edward Robert Segal
-
Patent number: 6138217Abstract: In a data processing system where a plurality of nodes, each having a plurality of processors and cache memories associated with each of the processors, are connected via a bus, tag information is added to each data block stored in the cache memories. The tag information has state information which includes information (INTERNODE-SHARED) indicative of whether or not the data block is cached in another node. When a write-access is transmitted to the data block in the cache memory, if the state information added to the data block is INTERNODE-SHARED, invalidation of the data block is requested to the other node.Type: GrantFiled: November 21, 1997Date of Patent: October 24, 2000Assignee: Canon Kabushiki KaishaInventor: Kazumasa Hamaguchi
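The tag check this abstract describes is simple to state in code: each cached block carries state information, and a write access triggers an invalidation request to the other node only when that state is INTERNODE-SHARED. The non-shared state name and the dict representation are assumptions for illustration.

```python
INTERNODE_SHARED = "INTERNODE-SHARED"   # block is also cached in another node
NODE_PRIVATE = "node-private"           # assumed name for the non-shared state

def on_write_access(tags, addr):
    """Return True if an invalidation request must be sent to the other node."""
    must_invalidate = tags[addr] == INTERNODE_SHARED
    tags[addr] = NODE_PRIVATE  # after invalidation the block is private again
    return must_invalidate

tags = {0x100: INTERNODE_SHARED, 0x200: NODE_PRIVATE}
needs_invalidate = on_write_access(tags, 0x100)  # other node holds a copy
```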
-
Patent number: 6138124Abstract: In a distributed computing system in which replicas of a document are separately stored and revised, the document containing data arranged in a number of fields, a method for replicating data contained in a revised document replica to the other of the replicas by replicating only the field or fields which have been revised since an earlier replication. The method includes the steps of dynamically maintaining a two byte document sequence number for each of the document replicas representing the number of revisions made to the replicas, and dynamically maintaining a one byte field sequence number for each of the fields in the replicas. The field sequence numbers for revised fields are set equal to the lower byte of the current document sequence number.Type: GrantFiled: June 9, 1998Date of Patent: October 24, 2000Assignee: International Business MachinesInventor: Steven R. Beckhardt
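The sequence-number scheme in this abstract is concrete enough to model directly: a two-byte document sequence number counts revisions, each revised field's one-byte number is set to the low byte of the current document number, and replication ships only the fields revised since the last replication. The class below is an illustrative sketch; the simple `>` comparison ignores low-byte wrap-around, which a real implementation would have to handle.

```python
class DocReplica:
    def __init__(self, fields):
        self.doc_seq = 0                         # two-byte document sequence number
        self.fields = dict(fields)               # field name -> value
        self.field_seq = {f: 0 for f in fields}  # one-byte per-field numbers

    def revise(self, name, value):
        self.doc_seq = (self.doc_seq + 1) & 0xFFFF
        self.fields[name] = value
        self.field_seq[name] = self.doc_seq & 0xFF  # low byte of current doc seq

    def changed_since(self, last_replicated_low_byte):
        """Fields revised after the replication tagged with that low byte."""
        return {f: self.fields[f] for f, s in self.field_seq.items()
                if s > last_replicated_low_byte}

doc = DocReplica({"title": "draft", "body": ""})
doc.revise("body", "hello")   # doc_seq -> 1, body's field sequence -> 1
# only the revised "body" field is shipped on the next replication
```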
-
Patent number: 6134634Abstract: A microprocessor preemptively writes back dirty entries of an internal cache. Each cache entry is checked once each predetermined time period to determine if the cache entry is dirty. If dirty, a write history is checked to determine if the cache entry is stale. If stale, the cache entry is preemptively written back to main memory and then marked as clean. The write history includes a count of the number of consecutive predetermined time periods during which there is no write to the cache entry. The cache entry is stale if the count exceeds a predetermined number. For each check of the write history the nonwrite count is incremented if the cache entry has not been written to during the prior cycle and decremented if it has. Alternatively, the nonwrite count is set to zero if the cache entry has been written to. The dirty cache entry may be marked as clean upon copying to the write-back buffer or, alternatively, when the write-back buffer writes the dirty cache entry to the main memory.Type: GrantFiled: December 19, 1997Date of Patent: October 17, 2000Assignee: Texas Instruments IncorporatedInventors: Robert D. Marshall, Jr., Jonathan H. Shiell
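The periodic check this abstract walks through can be sketched as follows: each period, a dirty entry's count of consecutive write-free periods is updated, and once that count exceeds a threshold the entry is deemed stale, preemptively written back, and marked clean. The threshold value and the reset-to-zero variant used here are assumptions.

```python
STALE_AFTER = 3  # assumed number of write-free periods before an entry is stale

class CacheEntry:
    def __init__(self):
        self.dirty = False
        self.written_this_period = False
        self.nonwrite_count = 0

    def write(self):
        self.dirty = True
        self.written_this_period = True

def periodic_check(entry, writeback_log):
    """One predetermined time period: update the write history, then write
    back and mark clean any dirty entry whose history says it is stale."""
    if entry.written_this_period:
        entry.nonwrite_count = 0          # the reset-to-zero variant
    else:
        entry.nonwrite_count += 1
    entry.written_this_period = False
    if entry.dirty and entry.nonwrite_count > STALE_AFTER:
        writeback_log.append("writeback")  # copy to the write-back buffer
        entry.dirty = False                # marked clean after the copy

log = []
e = CacheEntry()
e.write()
for _ in range(5):                        # five quiet periods: entry goes stale
    periodic_check(e, log)
```

After the quiet periods the entry has been written back exactly once and is clean, so a later eviction of this line costs nothing.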
-
Patent number: 6101582Abstract: Depending on a processor or instruction mode, a data cache block store (dcbst) or equivalent instruction is treated differently. A coherency maintenance mode for the instruction, in which the instruction is utilized to maintain coherency between bifurcated data and instruction caches, may be entered by setting bits in a processor register or by setting hint bits within the instruction. In the coherency maintenance mode, the instruction both pushes modified data to system memory and invalidates the cache entry in instruction caches. Subsequent instruction cache block invalidate (icbi) or equivalent instructions targeting the same cache location are no-oped when issued by a processor following a data cache block store or equivalent instruction executed in coherency maintenance mode. Execution of the data cache block store instruction in coherency maintenance mode results in a novel system bus operation being initiated on the system bus.Type: GrantFiled: February 17, 1998Date of Patent: August 8, 2000Assignee: International Business Machines CorporationInventors: Ravi Kumar Arimilli, John Steven Dodson, Jerry Don Lewis