Cross-interrogating Patents (Class 711/124)
-
Patent number: 7444474
Abstract: An information carrier medium containing software that, when executed by a processor, causes the processor to receive status information from circuit logic that collects the status information from caches associated with different processor cores. The software also causes the processor to provide the information to a user of the software. The status information indicates whether one of the caches comprises an entry associated with a virtual address.
Type: Grant
Filed: May 15, 2006
Date of Patent: October 28, 2008
Assignee: Texas Instruments Incorporated
Inventors: Oliver P. Sohm, Brian Cruickshank
-
Publication number: 20080256298
Abstract: Apparatus and methods for storing user data for use in real-time communications (e.g., IM or VoIP) are provided. The apparatus comprises at least a first cache device (e.g., a cache server) and a second cache device for storing user data, wherein the user data stored with the first cache device is mirrored with the second cache device. The apparatus further comprises a server having logic for causing access to the user data (e.g., to respond to or process messages) from the first cache device, if accessible, and from the second cache device if the user data is not accessible from the first cache device. The apparatus may further include logic for causing user data to be restored to the first cache device from the second cache device if the first cache device loses user data (e.g., if the first cache device goes down).
Type: Application
Filed: April 10, 2007
Publication date: October 16, 2008
Applicant: Yahoo! Inc.
Inventors: Ming J. Lu, Rajanikanth Vemulapalli, Alan S. Li
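As an illustration of the mirrored read path described above, the sketch below shows one way the fallback and restore logic could look. It is a minimal sketch, not the patented apparatus; the MirroredUserCache class and its method names are hypothetical.

```python
# Illustrative sketch (not the patent's implementation): a read path that
# prefers a primary cache device and falls back to its mirror, restoring
# the primary when it has lost the entry. All names are hypothetical.

class MirroredUserCache:
    def __init__(self, primary: dict, mirror: dict):
        self.primary = primary   # first cache device (e.g., a cache server)
        self.mirror = mirror     # second cache device holding mirrored copies

    def put(self, user_id, data):
        # Writes are mirrored so either device can serve a later read.
        self.primary[user_id] = data
        self.mirror[user_id] = data

    def get(self, user_id):
        # Serve from the primary if the data is accessible there.
        if user_id in self.primary:
            return self.primary[user_id]
        # Otherwise fall back to the mirror and restore the primary copy,
        # mimicking the "restore after the first device goes down" behaviour.
        if user_id in self.mirror:
            data = self.mirror[user_id]
            self.primary[user_id] = data
            return data
        return None

cache = MirroredUserCache(primary={}, mirror={})
cache.put("alice", {"status": "online"})
cache.primary.clear()                 # simulate the primary losing its data
print(cache.get("alice"))             # served from the mirror, primary restored
```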
-
Publication number: 20080256299
Abstract: A system and method are provided for maintaining consistency in a system where multiple copies of an object may exist. Consistency is maintained using a plurality of consistency policies in which at least one consistency policy results in different performance than a second consistency policy. A consistency policy is selected from the plurality of consistency policies for each object to improve system performance.
Type: Application
Filed: June 20, 2008
Publication date: October 16, 2008
Inventors: Arun Kwangil Iyengar, Richard P. King, Lakshmish Macheeri Ramaswamy, Daniela Rosu, Karen Witting
-
Patent number: 7421538
Abstract: A storage control apparatus controls physical disks according to host access using a pair of controllers, while mirroring processing is reduced when data is written to a cache memory, enabling high-speed operation. The mirror management table is created by allocating the mirror area of the cache memory of the other controller, and acquisition of a mirror page of the cache memory of the other controller is executed by referring to the mirror management table, without an exchange of mirror page acquisition messages between the controllers.
Type: Grant
Filed: November 19, 2003
Date of Patent: September 2, 2008
Assignee: Fujitsu Limited
Inventors: Joichi Bita, Daiya Nakamura
-
Patent number: 7409504
Abstract: A method for sequentially coupling successive processor requests for a cache line before the data is received in the cache of a first coupled processor. Both homogenous and non-homogenous operations are chained to each other, and the coherency protocol includes several new intermediate coherency responses associated with the chained states. Chained coherency states are assigned to track the chain of processor requests and the grant of access permission prior to receipt of the data at the first processor. The chained coherency states also identify the address of the receiving processor. When data is received at the cache of the first processor within the chain, the processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chained coherency protocol frees up address bus bandwidth by reducing the number of retries.
Type: Grant
Filed: October 6, 2005
Date of Patent: August 5, 2008
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, Hazim Shafi, Derek Edward Williams, Kenneth Lee Wright
-
Patent number: 7404046
Abstract: A cache coherent data processing system includes at least a first cache memory supporting a first processing unit and a second cache memory supporting a second processing unit. The first cache memory includes a cache array and a cache directory of contents of the cache array. In response to the first cache memory detecting on an interconnect a broadcast operation that specifies a request address, the first cache memory determines from the operation a type of the operation and a coherency state associated with the request address. In response to determining the type and the coherency state, the first cache memory filters out the broadcast operation without accessing the cache directory.
Type: Grant
Filed: February 10, 2005
Date of Patent: July 22, 2008
Assignee: International Business Machines Corporation
Inventors: Benjiman L. Goodman, Guy L. Guthrie, William J. Starke, Derek E. Williams
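The filtering step described above can be pictured as a lookup keyed on the operation type and the indicated coherency state; only when that lookup says the operation matters is the cache directory consulted. The sketch below is illustrative only, and the operation and state names are invented placeholders, not the patent's encoding.

```python
# Minimal sketch, not IBM's design: a snooping cache decides from the broadcast
# operation's type and the coherency state carried with it whether the request
# can be ignored without a directory lookup. Names are MESI-style placeholders.

FILTERABLE = {
    # (operation type, indicated coherency state) -> directory lookup can be skipped
    ("read", "shared_elsewhere"),
    ("prefetch", "exclusive_elsewhere"),
}

def snoop(op_type, indicated_state, cache_directory, address):
    """Return the snoop response, consulting the directory only when needed."""
    if (op_type, indicated_state) in FILTERABLE:
        return "null"                      # filtered out: directory untouched
    entry = cache_directory.get(address)   # fall back to a real directory lookup
    return "hit" if entry else "miss"

directory = {0x80: "modified"}
print(snoop("read", "shared_elsewhere", directory, 0x80))  # "null", no lookup
print(snoop("rwitm", "unknown", directory, 0x80))          # "hit" via directory
```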
-
Publication number: 20080126710
Abstract: The present invention discloses a method for processing cache data, which is used in a dual redundant server system having a console end and a redundant control end. The console end mirrors a cache data saved in the console end into a mirrored cache data and sends the mirrored cache data to the redundant control end through a transmission unit. If the console end determines that the redundant control end cannot save the mirrored cache data, the console end will flush the cache data into a hard disk installed at the console end.
Type: Application
Filed: November 29, 2006
Publication date: May 29, 2008
Applicant: INVENTEC CORPORATION
Inventor: Chih-Wei Chen
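A minimal sketch of the fallback behavior described above, assuming a simple send function for the transmission unit and a JSON file standing in for the hard disk; none of these interfaces come from the patent.

```python
# Rough sketch under stated assumptions: the console end tries to push a
# mirrored copy of its cache to the redundant control end; if that fails,
# it flushes the cache to its own disk instead.
import json, pathlib

def mirror_or_flush(cache_data: dict, send_to_redundant_end, disk_path: str) -> str:
    try:
        send_to_redundant_end(cache_data)        # transmission unit to the peer
        return "mirrored"
    except OSError:
        # The redundant end cannot save the mirrored copy: persist locally.
        pathlib.Path(disk_path).write_text(json.dumps(cache_data))
        return "flushed_to_disk"

def unreachable_peer(_data):
    raise OSError("redundant control end unavailable")

print(mirror_or_flush({"lba42": "dirty block"}, unreachable_peer, "cache_flush.json"))
```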
-
Publication number: 20080120467
Abstract: An information processing apparatus includes: a main memory that stores data; a plurality of processors each provided with a primary cache memory; a secondary cache memory that is provided between the main memory and the processors, the secondary cache memory having larger capacity than the primary cache memory; and a cache controller that performs cache search on the secondary cache memory based on a second index uniquely generated by joining: 1) a bit string having a predetermined bit length; and 2) a first index that is included in a data access command transmitted from any one of the processors, the first index being used for performing cache search on the primary cache memory.
Type: Application
Filed: July 20, 2007
Publication date: May 22, 2008
Inventor: Shigehiro Asano
-
Patent number: 7370155
Abstract: A method and data processing system for sequentially coupling successive, homogenous processor requests for a cache line in a chain before the data is received in the cache of a first processor within the chain. Chained intermediate coherency states are assigned to track the chain of processor requests and subsequent access permission provided, prior to receipt of the data at the first processor starting the chain. The chained intermediate coherency state assigned identifies the processor operation and a directional identifier identifies the processor to which the cache line is to be forwarded. When the data is received at the cache of the first processor within the chain, the first processor completes its operation on (or with) the data and then forwards the data to the next processor in the chain. The chain is immediately stopped when a non-homogenous operation is snooped by the last-in-chain processor.
Type: Grant
Filed: October 6, 2005
Date of Patent: May 6, 2008
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, Hazim Shafi, Derek Edward Williams, Kenneth Lee Wright
-
Patent number: 7305522
Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.
Type: Grant
Filed: February 12, 2005
Date of Patent: December 4, 2007
Assignee: International Business Machines Corporation
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, Bradley David McCredie, William John Starke
-
Patent number: 7305523
Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory requesting a direct intervention that satisfies the cache miss.
Type: Grant
Filed: February 12, 2005
Date of Patent: December 4, 2007
Assignee: International Business Machines Corporation
Inventors: Guy Lynn Guthrie, William John Starke, Derek Edward Williams
-
Patent number: 7293142
Abstract: Systems, methods, apparatus and software can be implemented to detect memory leaks with relatively high confidence. By analyzing memory blocks stored in a memory, implicit and/or explicit contingency chains can be obtained. Analysis of these contingency chains identifies potential memory leaks, and subsequent verification confirms whether the potential memory leaks are memory leaks.
Type: Grant
Filed: April 19, 2004
Date of Patent: November 6, 2007
Assignee: Cisco Technology, Inc.
Inventors: Jun Xu, Xiangrong Wang, Christopher H. Pham, Srinivas Goli
-
Patent number: 7287122
Abstract: A method of managing a distributed cache structure having separate cache banks, by detecting that a given cache line has been repeatedly accessed by two or more processors which share the cache, and replicating that cache line in at least two separate cache banks. The cache line is optimally replicated in a cache bank having the lowest latency with respect to the given accessing processor. A currently accessed line in a different cache bank can be exchanged with a cache line in the cache bank with the lowest latency, and another line in the cache bank with lowest latency is moved to the different cache bank prior to the currently accessed line being moved to the cache bank with the lowest latency. Further replication of the cache line can be disabled when two or more processors alternately write to the cache line.
Type: Grant
Filed: October 7, 2004
Date of Patent: October 23, 2007
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, Xiaowei Shen, Balaram Sinharoy
-
Patent number: 7266644
Abstract: A storage system conducting remote copy functions such that, when data is updated at a local site, contents of the update can be referred to in real time by storage at a remote site. A disk-control unit at a remote site receives file data written in accordance with an update of a file in a storage system at a local site and a history of the file-management information from the storage system at the local site and stores the data and the history. A file-system processing unit refers to the history and updates the file-management information in a file-system cache in accordance with the update of the file in the storage system at the local site. When a client issues a read request, the file-system processing unit refers to the file-management information updated in the file-system cache and transfers the contents of the update of the file to the client.
Type: Grant
Filed: January 29, 2004
Date of Patent: September 4, 2007
Assignee: Hitachi, Ltd.
Inventors: Yoji Nakatani, Manabu Kitamura
-
Patent number: 7222220
Abstract: A multiprocessor computer system is configured to selectively transmit address transactions through an address network using either a broadcast mode or a point-to-point mode transparent to the active devices that initiate the transactions. Depending on the mode of transmission selected, either a directory-based coherency protocol or a broadcast snooping coherency protocol is implemented to maintain coherency within the system. A computing node is formed by a group of clients which share a common address and data network. The address network is configured to determine whether a particular transaction is to be conveyed in broadcast mode or point-to-point mode. In one embodiment, the address network includes a mode table with entries which are configurable to indicate transmission modes corresponding to different regions of the address space within the node.
Type: Grant
Filed: June 23, 2003
Date of Patent: May 22, 2007
Assignee: Sun Microsystems, Inc.
Inventors: Robert E. Cypher, Ashok Singhal
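The mode table can be pictured as a per-region lookup consulted before each address transaction is routed. The sketch below is a toy illustration; the region size, default mode, and table contents are assumptions, not details from the patent.

```python
# Illustrative sketch only: a mode table keyed by address region decides whether
# an address transaction is broadcast (snooping protocol) or sent point-to-point
# (directory protocol).

REGION_BITS = 20                      # hypothetical 1 MiB regions

mode_table = {                        # region index -> transmission mode
    0x000: "broadcast",
    0x001: "point_to_point",
}

def route(address: int) -> str:
    region = address >> REGION_BITS
    return mode_table.get(region, "broadcast")   # assumed default mode

print(route(0x0001_2345))   # region 0x000 -> broadcast (snooped by all clients)
print(route(0x0010_2345))   # region 0x001 -> point-to-point (directory-ordered)
```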
-
Patent number: 7197602
Abstract: The invention provides a method and system for operating multiple communicating caches. Between caches, unnecessary transmission of repeated information is substantially reduced. Each cache maintains information to improve the collective operation of the system of multiple communicating caches. This can include information about the likely contents of each other cache, or about the behavior of client devices or server devices coupled to other caches in the system. Pairs of communicating caches substantially compress transmitted information. This includes both reliable compression, in which the receiving cache can reliably identify the compressed information in response to the message, and unreliable compression, in which the receiving cache will sometimes be unable to identify the compressed information. A first cache refrains from unnecessarily transmitting the same information to a second cache when each already has a copy.
Type: Grant
Filed: March 30, 2004
Date of Patent: March 27, 2007
Assignee: Blue Coat Systems, Inc.
Inventor: Michael Malcolm
-
Patent number: 7159077
Abstract: A computer system has a plurality of processors in a multiprocessor system with each processor associated with a cache memory. The cache traffic is monitored by the respective processors to determine the load for each of the cache memories. Signals corresponding to the cache loads are generated and analyzed. A target processor is selected for a push data operation from a bus agent to the cache memory using the load information. The push operations to the caches are optimized based on the cache traffic information.
Type: Grant
Filed: June 30, 2004
Date of Patent: January 2, 2007
Assignee: Intel Corporation
Inventors: Steven J. Tu, Samantha J. Edirisooriya, Sujat Jamil, David E. Miner, R. Frank O'Bleness, Hang T. Nguyen
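One way to picture the target selection is as a minimum over the reported cache loads, as in the sketch below; the load figures and the selection policy shown are illustrative assumptions rather than Intel's actual logic.

```python
# A toy sketch of the idea: each cache reports its observed traffic, and the
# bus agent pushes data into the cache of the least-loaded processor.

def pick_push_target(cache_loads: dict) -> str:
    # cache_loads maps a processor id to its monitored cache traffic figure.
    return min(cache_loads, key=cache_loads.get)

loads = {"cpu0": 870, "cpu1": 120, "cpu2": 455}   # hypothetical load signals
print(pick_push_target(loads))                     # -> "cpu1"
```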
-
Patent number: 7130961
Abstract: Cache data identity control is executed between disk controllers in a system with plural disk controllers, each provided with its own cache, and a fault occurring in a specific disk controller is prevented from propagating to another disk controller. The identity control of data is executed via a communication means between the disk controllers. When update access from a host is received, data in the cache memory of the disk controller that controls at least the data storage drive is updated. It is desirable that a cache area is divided into, and used as, an area for a drive controlled by the disk controller and an area for a drive controlled by another disk controller.
Type: Grant
Filed: February 20, 2001
Date of Patent: October 31, 2006
Assignee: Hitachi, Ltd.
Inventors: Hiroki Kanai, Kazuhisa Fujimoto, Akira Fujibayashi
-
Patent number: 7120755
Abstract: Cache coherency is maintained between the dedicated caches of a chip multiprocessor by writing back data from one dedicated cache to another without routing the data off-chip. Various specific embodiments are described, using write buffers, fill buffers, and multiplexers, respectively, to achieve the on-chip transfer of data between dedicated caches.
Type: Grant
Filed: January 2, 2002
Date of Patent: October 10, 2006
Assignee: Intel Corporation
Inventors: Sujat Jamil, Quinn W. Merrell, Cameron B. McNairy
-
Patent number: 7076613
Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table that records the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in the invalidation history table are reloaded into cache by monitoring the bus for their cache line addresses. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
Type: Grant
Filed: January 21, 2004
Date of Patent: July 11, 2006
Assignee: Intel Corporation
Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
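The invalidation history table amounts to a small list of lost line addresses that is checked on every observed bus transaction, roughly as sketched below. Table capacity, replacement policy, and the reload trigger are assumptions the abstract leaves open.

```python
# Sketch of the pre-load idea under stated assumptions; not the patented hardware.

class InvalidationHistoryTable:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.lines = []                      # addresses of invalidated lines

    def record_invalidation(self, line_addr):
        if line_addr in self.lines:
            return
        if len(self.lines) == self.capacity:
            self.lines.pop(0)                # assumed FIFO replacement
        self.lines.append(line_addr)

    def snoop_bus(self, line_addr, cache: dict, memory: dict):
        # If a bus transaction touches a line we previously lost, pre-load it.
        if line_addr in self.lines:
            cache[line_addr] = memory[line_addr]

iht, cache, memory = InvalidationHistoryTable(), {}, {0x100: b"data"}
iht.record_invalidation(0x100)               # the line was invalidated earlier
iht.snoop_bus(0x100, cache, memory)          # its address is seen on the bus -> reload
print(0x100 in cache)                        # True
```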
-
Patent number: 7055003
Abstract: A method of reducing errors in a cache memory of a computer system (e.g., an L2 cache) by periodically issuing a series of purge commands to the L2 cache, sequentially flushing cache lines from the L2 cache to an L3 cache in response to the purge commands, and correcting single-bit errors in the cache lines as they are flushed to the L3 cache. Purge commands are issued only when the processor cores associated with the L2 cache have an idle cycle available in a store pipe to the cache. The flush rate of the purge commands can be programmably set, and the purge mechanism can be implemented either in software running on the computer system, or in hardware integrated with the L2 cache. In the case of software, the purge mechanism can be incorporated into the operating system. In the case of hardware, a purge engine can be provided which advantageously utilizes the store pipe that is provided between the L1 and L2 caches.
Type: Grant
Filed: April 25, 2003
Date of Patent: May 30, 2006
Assignee: International Business Machines Corporation
Inventors: Robert Alan Cargnoni, Guy Lynn Guthrie, Harmony Lynn Helterhoff, Kevin Franklin Reick
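The purge engine's behavior can be summarized as: when the store pipe to the cache has an idle cycle, flush the next cache line in sequence to L3, scrubbing single-bit errors on the way. The sketch below models only that control flow; the idle-cycle test and ECC correction are placeholders.

```python
# Very simplified model of the purge mechanism (software or hardware engine);
# the idle-cycle check and ECC call are placeholders, not real interfaces.

def purge_step(l2_lines, l3, next_index, store_pipe_idle, correct_single_bit):
    """Flush one L2 line to L3 when the store pipe has an idle cycle."""
    if not store_pipe_idle():
        return next_index                    # cores busy: skip this interval
    addr, data = l2_lines[next_index]
    l3[addr] = correct_single_bit(data)      # scrub the line as it moves down
    return (next_index + 1) % len(l2_lines)  # sequential flush, wrapping around

l2 = [(0x0, b"\x01"), (0x40, b"\x02")]
l3, idx = {}, 0
idx = purge_step(l2, l3, idx, lambda: True, lambda d: d)
print(l3)   # {0: b'\x01'}
```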
-
Patent number: 7032078
Abstract: A multiprocessor computer system to selectively transmit address transactions using a broadcast mode or a point-to-point mode. Either a directory-based coherency protocol or a broadcast snooping coherency protocol is implemented to maintain coherency. A node is formed by a group of clients which share a common address and data network. The address network determines whether a transaction is conveyed in broadcast mode or point-to-point mode. The address network includes a table with entries which indicate transmission modes corresponding to different regions of the address space within the node. Upon receiving a coherence request transaction, the address network may access the table to determine the transmission mode which corresponds to the received transaction. Network congestion may be monitored and transmission modes adjusted accordingly. When network utilization is high, the number of transactions which are broadcast may be reduced.
Type: Grant
Filed: May 1, 2002
Date of Patent: April 18, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Robert Cypher, Ashok Singhal
-
Patent number: 6993633
Abstract: A cache data control system and method for a computer system in which, in a memory read processing, a coherent controller issues an advanced speculative read request for (speculatively) reading data from a cache data section in advance to a cache data controller, before reading a cache tag from a cache tag section and conducting cache hit check. If a cache hit has occurred, the cache data controller returns the data subjected to speculative reading as response data, at the time when the cache data controller has received a read request issued by the coherent controller.
Type: Grant
Filed: July 28, 2000
Date of Patent: January 31, 2006
Assignee: Hitachi, Ltd.
Inventors: Tadayuki Sakakibara, Isao Ohara, Hideya Akashi, Yuji Tsushima, Satoshi Muraoka
-
Patent number: 6970982
Abstract: A method and system for attached processing units accessing a shared memory in an SMP system. In one embodiment, a system comprises a shared memory. The system further comprises a plurality of processing elements coupled to the shared memory. Each of the plurality of processing elements comprises a processing unit, a direct memory access controller and a plurality of attached processing units. Each direct memory access controller comprises an address translation mechanism thereby enabling each associated attached processing unit to access the shared memory in a restricted manner without an address translation mechanism. Each attached processing unit is configured to issue a request to an associated direct memory access controller to access the shared memory specifying a range of addresses to be accessed as virtual addresses. The associated direct memory access controller is configured to translate the range of virtual addresses into an associated range of physical addresses.
Type: Grant
Filed: October 1, 2003
Date of Patent: November 29, 2005
Assignee: International Business Machines Corporation
Inventors: Erik R. Altman, Peter G. Capek, Michael Karl Gschwind, Harm Peter Hofstee, James Allan Kahle, Ravi Nair, Sumedh Wasudeo Sathaye, John-David Wellman
-
Patent number: 6963953
Abstract: Assume that "SO" represents a state in which data in a responsible region, the region storing the data accessed most frequently by the corresponding processor, is updated in a cache memory controlled by a cache device, and other data is stored in another cache memory. In this case, one or more cache devices controlling each of the remaining cache memories change the state of the data in a region other than their own responsible region from "SN" to "I" (invalid). Therefore, where data designated by the same address is shared by a plurality of cache memories, the data can be invalidated in all cache memories other than the one corresponding to the processor whose responsible region includes the designated address. As a result, the data sharing rate can be kept low.
Type: Grant
Filed: August 8, 2002
Date of Patent: November 8, 2005
Assignee: Renesas Technology Corp.
Inventor: Masami Nakajima
-
Patent number: 6950908
Abstract: The processors #0 to #3 execute, in parallel, a plurality of threads whose execution sequence is defined. When the processor #1 that executes a thread updates its own cache memory #1, if the data of the same address exists in the cache memory #2 of the processor #2 that executes a child thread, it updates the cache memory #2 simultaneously; but even if the data exists in the cache memory #0 of the processor #0 that executes a parent thread, it does not rewrite the cache memory #0 and only records in the cache memory #1 that rewriting has been performed. When the processor #0 completes a thread, a cache line for which a child thread has recorded that the data was rewritten may be invalid, while a cache line without such a record is judged to be effective. Whether a cache line that may be invalid is really invalid or effective is examined during execution of the next thread.
Type: Grant
Filed: July 10, 2002
Date of Patent: September 27, 2005
Assignee: NEC Corporation
Inventors: Atsufumi Shibayama, Satoshi Matsushita
-
Patent number: 6889293
Abstract: A set of predicted readers are determined for a data block subject to a write request in a shared-memory multiprocessor system by first determining a current set of readers of the data block, and then generating the set of predicted readers based on the current set of readers and at least one additional set of readers representative of at least a portion of a global history of a directory associated with the data block. In one possible implementation, the set of predicted readers are generated by applying a function to the current set of readers and one or more additional sets of readers.
Type: Grant
Filed: June 9, 2000
Date of Patent: May 3, 2005
Assignee: Agere Systems Inc.
Inventors: Stefanos Kaxiras, Reginald Clifford Young
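One possible instantiation of such a predictor is simply the union of the current reader set with recent reader sets from the directory's history, as sketched below; the patent only requires some function of those sets, so this particular choice is an assumption.

```python
# One possible instantiation of the predictor (the abstract only says "a function
# of the current readers and additional sets from the directory's global history").

def predict_readers(current_readers, history):
    # Assume: predict everyone who read the block now or in recent history.
    predicted = set(current_readers)
    for past_readers in history:
        predicted |= past_readers
    return predicted

print(predict_readers({1, 3}, history=[{2, 3}, {3, 4}]))   # {1, 2, 3, 4}
```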
-
Patent number: 6886162
Abstract: A high-speed method for maintaining a summary of thread activity reduces the number of remote-memory operations for an n-processor, multiple-node computer system from n² to (2n-1) operations. The method uses a hierarchical summary-of-thread-activity data structure that includes structures such as first and second level bit masks. The first level bit mask is accessible to all nodes and contains a bit per node, the bit indicating whether the corresponding node contains a processor that has not yet passed through a quiescent state. The second level bit mask is local to each node and contains a bit per processor per node, the bit indicating whether the corresponding processor has not yet passed through a quiescent state. The method includes determining from a data structure on the processor's node (such as a second level bit mask) if the processor has passed through a quiescent state. If so, it is then determined from the data structure if all other processors on its node have passed through a quiescent state.
Type: Grant
Filed: July 31, 1998
Date of Patent: April 26, 2005
Assignee: International Business Machines Corporation
Inventor: Paul E. McKenney
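The hierarchy can be modeled as a per-node processor mask plus a global node mask, where a processor's quiescent-state report touches remote memory only when it clears the last bit of its node, roughly as below. The data layout shown is an assumption, not the patented structure.

```python
# Sketch of the two-level bookkeeping: the global mask has one bit per node,
# each node keeps one bit per local processor, and a set bit means
# "has not yet passed through a quiescent state".

class QuiescenceTracker:
    def __init__(self, cpus_per_node: dict):
        self.node_mask = {n: True for n in cpus_per_node}             # level 1
        self.cpu_masks = {n: {c: True for c in cpus}                  # level 2
                          for n, cpus in cpus_per_node.items()}

    def report_quiescent(self, node, cpu):
        # Local update first: only touches this node's memory.
        self.cpu_masks[node][cpu] = False
        # The remote (global) update happens only when the whole node has gone
        # quiescent, which is what keeps remote-memory operations near 2n-1.
        if not any(self.cpu_masks[node].values()):
            self.node_mask[node] = False

    def grace_period_done(self):
        return not any(self.node_mask.values())

t = QuiescenceTracker({"node0": [0, 1], "node1": [2]})
t.report_quiescent("node0", 0); t.report_quiescent("node0", 1)
t.report_quiescent("node1", 2)
print(t.grace_period_done())    # True
```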
-
Patent number: 6877067
Abstract: In a multiprocessor system in which a plurality of processors share an n-way set-associative cache memory, a plurality of ways of the cache memory are divided into groups, one group for each processor. When a miss-hit occurs in the cache memory, one way is selected for replacement from the ways belonging to the group corresponding to the processor that made a memory access but caused the miss-hit. When there is an off-line processor, the ways belonging to that processor are re-distributed to the group corresponding to an on-line processor to allow the on-line processor to use those ways.
Type: Grant
Filed: June 12, 2002
Date of Patent: April 5, 2005
Assignee: NEC Corporation
Inventor: Shinya Yamazaki
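A toy model of the way grouping: each processor replaces only within its own group of ways, and an off-line processor's ways are handed to an on-line one. Group sizes and the victim-selection policy in the sketch are placeholders.

```python
# Minimal sketch assuming a 4-way cache shared by two processors, with ways
# statically grouped per processor; picking the first way in the group as the
# victim is a placeholder policy.

way_groups = {"cpu0": [0, 1], "cpu1": [2, 3]}     # way indices per processor

def select_victim_way(requesting_cpu: str) -> int:
    return way_groups[requesting_cpu][0]          # replace only within own group

def redistribute(offline_cpu: str, online_cpu: str):
    # When a processor goes off-line, its ways are handed to an on-line one.
    way_groups[online_cpu].extend(way_groups.pop(offline_cpu))

print(select_victim_way("cpu1"))    # 2
redistribute("cpu1", "cpu0")
print(way_groups)                   # {'cpu0': [0, 1, 2, 3]}
```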
-
Patent number: 6871268
Abstract: Techniques for improved cache management including cache replacement are provided. In one aspect, a distributed caching technique of the invention comprises the use of a central cache and one or more local caches. The central cache communicates with the one or more local caches and coordinates updates to the local caches, including cache replacement. The invention also provides techniques for adaptively determining holding times associated with data storage applications such as those involving caches.
Type: Grant
Filed: March 7, 2002
Date of Patent: March 22, 2005
Assignee: International Business Machines Corporation
Inventors: Arun Kwangil Iyengar, Isabelle M. Rouvellou
-
Patent number: 6868482
Abstract: Each dual multi-processing system has a number of processors, with each processor's first-level cache writing through to a second-level cache. A third-level memory is shared by the dual system, with the first-level and second-level caches being globally addressable to all of the third-level memory. Processors can write through to the local second-level cache and have access to the remote second-level cache via the local storage controller. A coherency scheme for the dual system provides each second-level cache with indicators for each cache line showing which ones are valid and which ones have been modified or are different than what is reflected in the corresponding third-level memory. The flush apparatus uses these two indicators to transfer all cache lines that are within the remote memory address range and have been modified, back to the remote memory prior to dynamically removing the local cache resources due to either system maintenance or dynamic partitioning.
Type: Grant
Filed: February 17, 2000
Date of Patent: March 15, 2005
Assignee: Unisys Corporation
Inventors: Donald W. Mackenthun, Mitchell A. Bauman, Donald C. Englin
-
Patent number: 6865645
Abstract: A method of supporting programs that include instructions that modify subsequent instructions, in a multi-processor system with a central processing unit including an execution unit, an instruction unit, and a plurality of caches including separate instruction and operand caches.
Type: Grant
Filed: October 2, 2000
Date of Patent: March 8, 2005
Assignee: International Business Machines Corporation
Inventors: Chung-Lung Kevin Shum, Dean G. Bair, Charles F. Webb, Mark A. Check, John S. Liptay
-
Patent number: 6836826
Abstract: A multilevel cache system and method. A first data array and a second data array are coupled to a merged tag array. The merged tag array stores tags for both the first data array and second data array.
Type: Grant
Filed: June 23, 2003
Date of Patent: December 28, 2004
Assignee: Intel Corporation
Inventor: Vinod Sharma
-
Patent number: 6826656
Abstract: A method and system for reducing power in a snooping cache based environment. A memory may be coupled to a plurality of processing units via a bus. Each processing unit may comprise a cache controller coupled to a cache associated with the processing unit. The cache controller may comprise a segment register comprising N bits where each bit in the segment register may be associated with a segment of memory divided into N segments. The cache controller may be configured to snoop a requested address on the bus. Upon determining which bit in the segment register is associated with the snooped requested address, the segment register may determine if the bit associated with the snooped requested address is set. If the bit is not set, then a cache search may not be performed thereby mitigating the power consumption associated with a snooped request cache search.
Type: Grant
Filed: January 28, 2002
Date of Patent: November 30, 2004
Assignee: International Business Machines Corporation
Inventors: Victor Roberts Augsburg, James Norris Dieffenderfer, Bernard Charles Drerup, Richard Gerard Hofmann, Thomas Andrew Sartorius, Barry Joe Wolford
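The power saving comes from a cheap bit test that gates the expensive cache search, roughly as sketched below; segment count, memory size, and the helper names are illustrative assumptions.

```python
# Sketch, not the patented hardware: an N-bit segment register marks which
# memory segments may be cached locally; a snooped address whose segment bit
# is clear never triggers a cache lookup, saving the lookup's power.

N_SEGMENTS = 8
MEMORY_SIZE = 1 << 20                             # assumed 1 MiB address space
SEGMENT_SIZE = MEMORY_SIZE // N_SEGMENTS

segment_register = [False] * N_SEGMENTS           # one bit per memory segment

def note_cached(address):
    segment_register[address // SEGMENT_SIZE] = True

def snoop(address, do_cache_search):
    if not segment_register[address // SEGMENT_SIZE]:
        return "no-search"                        # bit clear: skip the search
    return do_cache_search(address)

note_cached(0x01000)                              # something cached from segment 0
print(snoop(0x90000, lambda a: "searched"))       # segment 4 bit clear -> "no-search"
print(snoop(0x02000, lambda a: "searched"))       # segment 0 bit set -> "searched"
```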
-
Patent number: 6820182
Abstract: A memory exhaustion condition is handled in a data processing system having first and second regions of physical memory. The memory exhaustion condition is detected while the second region is mirroring at least part of the first region. In response to the memory exhaustion condition, memory mirroring is at least partially deactivated and at least part of the second region is utilized to augment the first region, such that the memory exhaustion condition is eliminated. In an illustrative embodiment, the data processing system compresses real memory into the first region of physical memory, and the memory exhaustion condition arises when the first region lacks sufficient available capacity to accommodate current requirements for real memory. The memory exhaustion condition is eliminated by compressing at least part of the real memory into the second region.
Type: Grant
Filed: October 18, 2000
Date of Patent: November 16, 2004
Assignee: International Business Machines Corporation
Inventors: Charles David Bauman, Richard Bealkowski, Thomas J Clement, Jerry William Pearce, Michael Robert Turner
-
Patent number: 6807606
Abstract: A system and method are disclosed, according to which the responsiveness of client/server-based distributed web applications operating in an object-oriented environment may be improved by coordinating execution of cacheable entries among a group of web servers, operably coupled in a network. In an exemplary embodiment, entries are considered to be either commands or Java Server Pages (JSPs), and the system and method are implemented by defining a class of objects (i.e., CacheUnits) to manage the caching of entries. An entry must be executed before it can be stored in a cache. Since this is computationally costly, each cacheable entry has an associated coordinating CacheUnit, which sees to it that only one CacheUnit executes an entry. Once the entry has been executed, a copy of it resides in the cache of the coordinating CacheUnit, from which it can be accessed by other CacheUnits without having to re-execute it.
Type: Grant
Filed: December 18, 2000
Date of Patent: October 19, 2004
Assignee: International Business Machines Corp.
Inventors: George P. Copeland, Michael H. Conner, Gregory A. Flurry
-
Patent number: 6801984
Abstract: A method, system, and processor cache configuration that enables efficient retrieval of valid data in response to an invalidate cache miss at a local processor cache. A cache directory is provided a set of directional bits in addition to the coherency state bits and the address tag. The directional bits provide information that includes a processor cache identification (ID) and routing method. The processor cache ID indicates which processor's operation resulted in the cache line of the local processor changing to the invalidate (I) coherency state. The routing method indicates what transmission method to utilize to forward the cache line, from among a local system bus or a switch or broadcast mechanism. Processor/cache directory logic provides responses to requests depending on the values of the directional bits.
Type: Grant
Filed: June 29, 2001
Date of Patent: October 5, 2004
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie, Jerry Don Lewis
-
Patent number: 6775749
Abstract: A computer system may include several caches that are each coupled to receive data from a shared memory. A cache coherency mechanism may be configured to receive a cache fill request, and in response, to send a probe to determine whether any of the other caches contain a copy of the requested data. Some time after sending the probe, the cache controller may provide a speculative response to the cache fill request to the requesting device. By delaying providing the speculative response until some time after the probes are sent, it may become more likely that the responses to the probes will be received in time to validate the speculative response.
Type: Grant
Filed: January 29, 2002
Date of Patent: August 10, 2004
Assignee: Advanced Micro Devices, Inc.
Inventors: Dan S. Mudgett, Mark T. Fox
-
Patent number: 6766429
Abstract: An architecture, method and apparatus for a data processing system having memory compression and two common memories forming either a single unified memory, or a dual memory system capable of continuous operation in the presence of a hardware failure or redundant "duplex" computer maintenance outage, without the cost of duplicating the memory devices. A memory controller employs hardware memory compression to reduce the memory requirement by half, which compensates for the doubling of the memory needed for the redundant storage. The memory controller employs error detection and correction code that is used to detect storage subsystem failure during read accesses. Upon detection of a fault, the hardware automatically reissues the read access to a separate memory bank that is logically identical to the faulty bank.
Type: Grant
Filed: August 31, 2000
Date of Patent: July 20, 2004
Assignee: International Business Machines Corporation
Inventors: Patrick Maurice Bland, Thomas Basil Smith, III, Robert Brett Tremaine, Michael Edward Wazlowski
-
Patent number: 6763436
Abstract: A data replication system is disclosed in which replication functionalities between a host computer, an interconnecting computer network, and a plurality of storage devices are separated into host elements and a plurality of storage elements. The host computer is connected to one or more host elements. The host element is responsible for replicating data between the storage devices, which are each connected to an associated storage element, and for maintaining data consistency. Further, the host element instructs a storage element whose associated storage device does not contain up-to-date data to recover from another one of the plurality of storage elements and its associated storage device. The storage elements and their associated storage devices may be located in any combination of diverse or same geographical sites in a manner to ensure sufficient replication in the event of a site or equipment failure.
Type: Grant
Filed: January 29, 2002
Date of Patent: July 13, 2004
Assignee: Lucent Technologies Inc.
Inventors: Eran Gabber, Bruce Kenneth Hillyer, Wee Teck Ng, Banu Rahime Ozden, Elizabeth Shriver
-
Patent number: 6754782
Abstract: A non-uniform memory access (NUMA) computer system includes a first node and a second node coupled by a node interconnect. The second node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, and a controller coupled to the local interconnect. In response to snooping an operation from the first node issued on the local interconnect by the node controller, the controller signals acceptance of responsibility for coherency management activities related to the operation in the second node, performs coherency management activities in the second node required by the operation, and thereafter provides notification of performance of the coherency management activities.
Type: Grant
Filed: June 21, 2001
Date of Patent: June 22, 2004
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, John Steven Dodson, James Stephen Fields, Jr.
-
Patent number: 6751705
Abstract: A method and apparatus for purging data from a middle cache level without purging the corresponding data from a lower cache level (i.e., a cache level closer to the processor using the data), and replacing the purged first data with other data of a different memory address than the purged first data, while leaving the data of the first cache line in the lower cache level. In some embodiments, in order to allow such mid-level purging, the first cache line must be in the "shared state" that allows reading of the data, but does not permit modifications to the data (i.e., modifications that would have to be written back to memory). If it is desired to modify the data, a directory facility will issue a purge to all caches of the shared-state data for that cache line, and then the processor that wants to modify the data will request an exclusive-state copy to be fetched to its lower-level cache and to all intervening levels of cache.
Type: Grant
Filed: August 25, 2000
Date of Patent: June 15, 2004
Assignee: Silicon Graphics, Inc.
Inventors: Doug Solomon, David M. Perry, Givargis G. Kaldani
-
Patent number: 6738870
Abstract: A high speed remote storage controller system for a computer system has cluster nodes of symmetric multiprocessors. A plurality of clusters of symmetric multiprocessors, each of which has a plurality of processors, a shared cache memory, a plurality of I/O adapters and a main memory accessible from the cluster. Each cluster has an interface for passing data between cluster nodes of the symmetric multiprocessor system. Each cluster has a local interface and interface controller. The system provides one or more remote storage controllers each having a local interface controller and a local-to-remote data bus. A remote resource manager manages the interface between clusters of symmetric multiprocessors. The remote store controller is responsible for processing data accesses across a plurality of clusters and processes data storage operations involving shared memory.
Type: Grant
Filed: December 22, 2000
Date of Patent: May 18, 2004
Assignee: International Business Machines Corporation
Inventors: Gary A. Van Huben, Michael A. Blake, Pak-Kin Mak
-
Patent number: 6738871
Abstract: A remote resource management system for managing resources in a symmetrical multiprocessing environment having a plurality of clusters of symmetric multiprocessors each of which provides interfaces between cluster nodes of the symmetric multiprocessor system with a local interface and an interface controller. One or more remote storage controllers each has a local interface controller and a local-to-remote data bus. A remote fetch controller is responsible for processing data accesses in accordance with the methods described.
Type: Grant
Filed: December 22, 2000
Date of Patent: May 18, 2004
Assignee: International Business Machines Corporation
Inventors: Gary A. Van Huben, Michael A. Blake, Pak-Kin Mak, Adrian Eric Seigler
-
Patent number: 6738872
Abstract: A remote resource management system for managing resources in a symmetrical multiprocessing environment having a plurality of clusters of symmetric multiprocessors each of which provides interfaces between cluster nodes of the symmetric multiprocessor system with a local interface and an interface controller. One or more remote storage controllers each has a local interface controller and a local-to-remote data bus. A remote fetch controller is responsible for processing data accesses across the clusters and a remote store controller is responsible for processing data accesses across the clusters. These controllers work in conjunction to provide a deadlock avoidance system for preventing hangs.
Type: Grant
Filed: December 22, 2000
Date of Patent: May 18, 2004
Assignee: International Business Machines Corporation
Inventors: Gary A. Van Huben, Michael A. Blake, Pak-Kin Mak, Adrian Eric Seigler
-
Patent number: 6725334
Abstract: To maximize the effective use of on-chip cache, a method and system for exclusive two-level caching in a chip-multiprocessor are provided. The exclusive two-level caching in accordance with the present invention involves relaxing the inclusion requirement in a two-level cache system in order to form an exclusive cache hierarchy. Additionally, the exclusive two-level caching involves providing a first-level tag-state structure in a first-level cache of the two-level cache system. The first tag-state structure has state information. The exclusive two-level caching also involves maintaining in a second-level cache of the two-level cache system a duplicate of the first-level tag-state structure and extending the state information in the duplicate of the first tag-state structure, but not in the first-level tag-state structure itself, to include an owner indication.
Type: Grant
Filed: June 8, 2001
Date of Patent: April 20, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Luiz Andre Barroso, Kourosh Gharachorloo, Andreas Nowatzyk
-
Patent number: 6715037
Abstract: A method and system for operating multiple communicating caches. Between caches, unnecessary transmission of repeated information is reduced. Pairs of communicating caches compress transmitted information, including noncacheable objects. A first cache refrains from unnecessarily transmitting the same information to a second cache when each already has a copy. This includes both maintaining a record at a first cache of information likely to be stored at a second cache, and transmitting a relatively short identifier for that information in place of the information itself. Caches are disposed in a graph structure, including a set of root caches and a set of leaf caches. Both root and leaf caches maintain noncacheable objects beyond their initial use, along with a digest of the non-cacheable objects. When a server device returns identical information to a root cache, root caches can transmit only a digest to leaf caches, avoiding re-transmitting the entire noncacheable object.
Type: Grant
Filed: July 26, 2002
Date of Patent: March 30, 2004
Assignee: Blue Coat Systems, Inc.
Inventor: Michael A. Malcolm
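The digest mechanism can be pictured as the root cache keeping (or estimating) the set of digests a leaf already holds and sending only the digest when the body would be redundant. The sketch below assumes SHA-256 as the digest and invents the message framing; neither detail comes from the patent.

```python
# Simplified sketch of the digest idea: a root cache sends a short digest
# instead of the full noncacheable object when it believes the leaf cache
# already holds an identical copy.
import hashlib

def make_digest(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()       # assumed digest function

def root_send(body: bytes, leaf_known_digests: set):
    digest = make_digest(body)
    if digest in leaf_known_digests:
        return ("digest-only", digest)            # short identifier, no body
    leaf_known_digests.add(digest)                # record what the leaf now holds
    return ("full-object", body)

leaf_digests = set()
print(root_send(b"<html>same reply</html>", leaf_digests)[0])   # full-object
print(root_send(b"<html>same reply</html>", leaf_digests)[0])   # digest-only
```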
-
Patent number: 6711662
Abstract: A shared-memory system includes processing modules communicating with each other through a network. Each of the processing modules includes a processor, a cache, and a memory unit that is locally accessible by the processor and remotely accessible via the network by all other processors. A home directory records states and locations of data blocks in the memory unit. A prediction facility that contains reference history information of the data blocks predicts a next requester of a number of the data blocks that have been referenced recently. The next requester is informed by the prediction facility of the current owner of the data block. As a result, the next requester can issue a request to the current owner directly without an additional hop through the home directory.
Type: Grant
Filed: March 29, 2001
Date of Patent: March 23, 2004
Assignee: Intel Corporation
Inventors: Jih-Kwon Peir, Konrad Lai
-
Publication number: 20040049636
Abstract: Disclosed is a system, method, and program for transferring data. When a transaction commits, multiple data objects that have been changed by the transaction are identified. The multiple data objects are written from local storage to a cache structure using a batch write command. When changed data objects at a first system that are not cached in the shared external storage are written to disk, a batch cross invalidation command is used to invalidate the data objects at a second system. Additionally, multiple data objects are read from the cache structure into a processor storage using a batch castout command.
Type: Application
Filed: September 9, 2002
Publication date: March 11, 2004
Applicant: International Business Machines Corporation
Inventors: John Joseph Campbell, David Arlen Elko, Jeffrey William Josten, Haakon Philip Roberts, David Harold Surman
-
Patent number: 6697849
Abstract: System and method for caching JavaServer Page™ (JSP) component responses. The JSP components may be components that execute on an application server that supports networked applications, such as web applications or other Internet-based applications. One or more client computers, e.g., web servers, may perform requests referencing the JSP components on the application server. The execution of JSP components may be managed by a JSP engine process running on the application server. When a request referencing a JSP is received from a client computer, the JSP engine may first check a JSP response cache to determine whether a valid JSP response satisfying the request is present. If a matching cached response is found, then the response may be retrieved and immediately streamed back to the client. Otherwise, the referenced JSP may be executed. Each JSP file may comprise various SetCacheCriteria() method calls.
Type: Grant
Filed: May 1, 2000
Date of Patent: February 24, 2004
Assignee: Sun Microsystems, Inc.
Inventor: Bjorn Carlson
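The cache-first dispatch described above reduces to a lookup keyed on the JSP and its request parameters, falling back to execution on a miss, roughly as sketched below; the key construction and function names are assumptions, and SetCacheCriteria() itself is not modeled.

```python
# Hedged sketch of the control flow only; the real JSP engine APIs are not modeled.

response_cache = {}                     # request key -> cached JSP response

def handle_request(jsp_name, params, execute_jsp):
    key = (jsp_name, tuple(sorted(params.items())))   # assumed cache key
    if key in response_cache:           # valid cached response: stream it back
        return response_cache[key]
    response = execute_jsp(jsp_name, params)          # otherwise run the JSP
    response_cache[key] = response
    return response

render = lambda name, p: f"<html>{name} for {p['user']}</html>"
print(handle_request("profile.jsp", {"user": "bob"}, render))
print(handle_request("profile.jsp", {"user": "bob"}, render))  # served from cache
```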