Parallel Caches Patents (Class 711/120)
-
Patent number: 7421538
Abstract: A storage control apparatus controls physical disks in response to host access using a pair of controllers, while reducing mirroring overhead when data is written to cache memory so that high-speed operation is enabled. A mirror management table is created by allocating the mirror area of the cache memory of the other controller, and a mirror page of the other controller's cache memory is acquired by referring to the mirror management table, without an exchange of mirror page acquisition messages between the controllers.
Type: Grant
Filed: November 19, 2003
Date of Patent: September 2, 2008
Assignee: Fujitsu Limited
Inventors: Joichi Bita, Daiya Nakamura
-
Patent number: 7418555
Abstract: A multiprocessor system may have a plurality of processors and a memory unit. Each of the processors may include at least one cache memory. The memory unit may be shared by two of the processors. The multiprocessor system may further include a control unit that acts when the multiprocessor system receives an access request for a data block of the memory unit from one processor. The processors may also include a processing unit. When the processor shares a data block, the processing unit may invalidate the shared data block in the cache memory, write the shared data block from the write buffer to a memory unit, and forward an interrupt completion response to a control unit.
Type: Grant
Filed: June 23, 2004
Date of Patent: August 26, 2008
Assignee: Samsung Electronics Co., Ltd.
Inventor: Sung-Woo Chung
-
Patent number: 7404044
Abstract: A system and method are provided for increasing the number of processors on a single integrated circuit to a number that is larger than would typically be possible to coordinate on a single bus. In an embodiment of the present invention a two-level memory coherency scheme is implemented for use by multiple processors operably coupled to multiple buses in the same integrated circuit. A control device, such as a node controller, is used to control traffic between the two coherency levels. In an embodiment of the invention the first level of coherency is implemented using a "snoopy" protocol and the second level of coherency is a directory-based coherency scheme.
Type: Grant
Filed: September 15, 2004
Date of Patent: July 22, 2008
Assignee: Broadcom Corporation
Inventor: Laurent Moll
-
Patent number: 7395332
Abstract: A method for searching network messages for pre-defined regular expressions is disclosed. A plurality of pre-defined regular expressions are stored in a content-addressable memory (CAM). A network message or selected portion thereof is inputted to the CAM for comparison with all of the regular expressions stored therein, the comparison with all CAM entries being done at the same time. An output is returned from the CAM. In response to the output from the CAM, an action is identified to be applied to the given network message or portion thereof that corresponds to a CAM entry matching the inputted network message or selected portion thereof.
Type: Grant
Filed: March 21, 2005
Date of Patent: July 1, 2008
Assignee: Cisco Technology, Inc.
Inventors: Silvano Gai, Thomas J. Edsall
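The parallel-match behavior this abstract describes can be imitated in software. The sketch below is only an illustration, not the patented circuit: a hardware CAM compares the input against every stored entry simultaneously, whereas this Python stand-in scans a priority-ordered table; the patterns, action names, and table layout are all assumptions.

```python
import re

# Hypothetical rule table pairing a pre-compiled pattern (a CAM entry)
# with the action to apply when it matches; entries are illustrative.
CAM_TABLE = [
    (re.compile(rb"GET /admin"), "drop"),
    (re.compile(rb"\x90{16}"),   "alert"),    # e.g. a NOP-sled signature
    (re.compile(rb"GET /"),      "forward"),
]

def classify(message: bytes) -> str:
    """Software stand-in for the CAM lookup: in hardware every entry
    is compared at once; here we check entries in priority order and
    return the action of the first matching one."""
    for pattern, action in CAM_TABLE:
        if pattern.search(message):
            return action
    return "forward"  # default action when no entry matches
```
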
-
Publication number: 20080147975
Abstract: A processor having multiple cores and multiple cache segments, each core associated with one of the cache segments, the cache segments interconnected by a data communication ring, and logic to disallow operation of the ring at a startup event and to execute an initialization sequence at one or more of the cores so that each of the one or more of the cores operates using the cache segment associated with the core as a read-write memory during the initialization sequence.
Type: Application
Filed: December 13, 2006
Publication date: June 19, 2008
Inventors: Vincent J. Zimmer, Michael A. Rothman
-
Publication number: 20080126824
Abstract: A serial communications architecture for communicating between hosts and data store devices. The Storage Link architecture is specially adapted to support communications between multiple hosts and storage devices via a switching network, such as a storage area network. The Storage Link architecture specifies various communications techniques that can be combined to reduce the overall cost and increase the overall performance of communications. The Storage Link architecture may provide packet ordering based on packet type, dynamic segmentation of packets, asymmetric packet ordering, packet nesting, variable-sized packet headers, and use of out-of-band symbols to transmit control information as described below in more detail. The Storage Link architecture may also specify encoding techniques to optimize transitions and to ensure DC-balance.
Type: Application
Filed: July 25, 2007
Publication date: May 29, 2008
Applicant: Silicon Image, Inc.
Inventors: Dongyun Lee, Yeshik Shin, David D. Lee, Deog-Kyoon Jeong, Shing Kong
-
Patent number: 7373458
Abstract: There is described a cache memory system including a first cache memory and a second cache memory. A first port is arranged to receive a request for a first item and determine whether the first item is in the first cache memory. A second port is arranged to receive a request for a second item and determine whether the second item is in the second cache memory. The system is arranged such that if the second item is determined not to be in the second cache memory, a request for the second item is sent to the first port. There is also described a method of accessing multiple items from a memory which has associated with it a first cache memory having a first port and a second cache memory having a second port.
Type: Grant
Filed: June 9, 2004
Date of Patent: May 13, 2008
Inventor: Mark Owen Homewood
-
Patent number: 7370150
Abstract: A processing system optimized for data string manipulations includes data string execution circuitry associated with a bus interface unit or memory controller. Cache coherency is maintained, and data move and compare operations may be performed efficiently on cached data. A barrel shifter for realignment of cached data during move operations and comparators for comparing a test data string to cached data a cache line at a time may be provided.
Type: Grant
Filed: November 26, 2003
Date of Patent: May 6, 2008
Assignee: Micron Technology, Inc.
Inventor: Dean A. Klein
-
Patent number: 7346744
Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. Mechanisms for improving the accuracy of information available to a cache coherence controller are provided in order to allow the cache coherence controller to reduce the number of transactions in a multiple cluster system. Non-change probes and augmented non-change probe responses are provided to acquire state information in remote clusters without affecting the state of the probed memory line. Augmented probe responses associated with shared and invalidating probes are provided to update state information in a coherence directory during read and read/write probe requests.
Type: Grant
Filed: May 9, 2003
Date of Patent: March 18, 2008
Assignee: Newisys, Inc.
Inventor: David Brian Glasco
-
Patent number: 7346738
Abstract: An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
Type: Grant
Filed: February 21, 2007
Date of Patent: March 18, 2008
Assignee: Broadband Royalty Corp.
Inventor: Robert C Duzett
-
Publication number: 20080052465
Abstract: A method of accessing cache memory for parallel processing processors includes providing a processor and a lower level memory unit. The processor utilizes multiple instruction processing members and multiple sub-cache memories corresponding to the instruction processing members. The next step is using a first instruction processing member to access a first sub-cache memory. The first instruction processing member accesses the remaining sub-cache memories when it does not find the desired data in the first sub-cache memory. If the desired data are not found in any of the sub-cache memories, the first instruction processing member accesses the lower level memory unit until the desired data have been accessed. Then, the instruction processing member returns a result.
Type: Application
Filed: August 23, 2006
Publication date: February 28, 2008
Inventor: Shi-Wu Lo
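The cascaded lookup order in this abstract (own sub-cache, then the other members' sub-caches, then lower-level memory) can be sketched in a few lines. This is a minimal model under assumed interfaces; the class and method names are illustrative, and real hardware would probe banks concurrently rather than in a Python loop.

```python
class SubCacheHierarchy:
    """Sketch of the described lookup order for one processing member:
    1) probe its own sub-cache, 2) probe the other members' sub-caches,
    3) fall through to the lower-level memory and fill its own sub-cache."""

    def __init__(self, num_members, lower_memory):
        self.sub_caches = [dict() for _ in range(num_members)]
        self.lower_memory = lower_memory  # assumed: dict of address -> data

    def access(self, member_id, addr):
        own = self.sub_caches[member_id]
        if addr in own:                              # 1) own sub-cache hit
            return own[addr]
        for i, cache in enumerate(self.sub_caches):  # 2) other sub-caches
            if i != member_id and addr in cache:
                return cache[addr]
        data = self.lower_memory[addr]               # 3) lower-level memory
        own[addr] = data                             # fill own sub-cache
        return data
```
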
-
Patent number: 7302526
Abstract: Handling a faulting memory of a pair of mirrored memories includes initially causing a non-faulting memory of the pair of mirrored memories to service all read and write operations for the pair of mirrored memories, determining that hardware corresponding to the faulting memory of the pair of mirrored memories has been successfully replaced to provide a new memory, in response to the new memory being provided, causing data to be copied from the non-faulting memory to the new memory while data is being read from and written to the non-faulting memory, and, in response to successful copying to the new memory, causing writes to be performed to both memories of the pair of mirrored memories and selecting one of the pair of mirrored memories for read operations when one or more read operations are performed.
Type: Grant
Filed: March 29, 2004
Date of Patent: November 27, 2007
Assignee: EMC Corporation
Inventors: Jerome J. Cartmell, Qun Fan, Steven T. McClure, Robert DeCrescenzo, Haim Kopylovitz, Eli Shagam
-
Patent number: 7299340
Abstract: The disclosure is a data processing device with selective data cache architecture and a computer system including the data processing device. The data processing device is comprised of a microprocessor, a coprocessor, a microprocessor data cache, an X-data cache, and a Y-data cache. The microprocessor fetches and executes instructions, and the coprocessor carries out digital signal processing functions. The microprocessor data cache stores data provided from the microprocessor. The X-data cache stores a first group of data provided from the coprocessor while the Y-data cache stores a second group of data provided from the coprocessor.
Type: Grant
Filed: February 9, 2004
Date of Patent: November 20, 2007
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yun-Hwan Kim, Joong-Eon Lee, Kyoung-Mook Lim
-
Patent number: 7293196
Abstract: A method, apparatus, and system for preserving the cache data of redundant storage controllers, by copying the recorded data blocks and the associated cache tags in the primary cache memory of a storage controller to a secondary cache memory of an alternate, redundant storage controller, wherein upon a failure occurring in the primary cache memory of any of the storage controllers, subsequent storage requests from a host, previously intended for processing by the failed storage controller, are processed through the secondary cache memory of a non-failed, redundant storage controller that contains the failed storage controller's cache data and cache tags.
Type: Grant
Filed: May 7, 2003
Date of Patent: November 6, 2007
Assignee: Xiotech Corporation
Inventors: Michael S. Hicken, James N. Snead
-
Patent number: 7287122
Abstract: A method of managing a distributed cache structure having separate cache banks, by detecting that a given cache line has been repeatedly accessed by two or more processors which share the cache, and replicating that cache line in at least two separate cache banks. The cache line is optimally replicated in a cache bank having the lowest latency with respect to the given accessing processor. A currently accessed line in a different cache bank can be exchanged with a cache line in the cache bank with the lowest latency, and another line in the cache bank with lowest latency is moved to the different cache bank prior to the currently accessed line being moved to the cache bank with the lowest latency. Further replication of the cache line can be disabled when two or more processors alternately write to the cache line.
Type: Grant
Filed: October 7, 2004
Date of Patent: October 23, 2007
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, Xiaowei Shen, Balaram Sinharoy
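The replication policy described here (detect a line shared by several processors, then copy it into the bank with the lowest latency for each accessor) can be sketched as follows. This is a simplified model, not the patented mechanism: the threshold, the latency table, and all names are assumptions, and the exchange/eviction handling from the abstract is omitted.

```python
from collections import defaultdict

REPLICATE_THRESHOLD = 2  # assumed: distinct readers before a line is replicated

class BankedCache:
    """Sketch of replicating a repeatedly shared cache line into the
    bank closest to each accessing processor. latency[cpu][bank] gives
    an assumed per-CPU latency to each bank."""

    def __init__(self, num_banks, latency):
        self.banks = [dict() for _ in range(num_banks)]
        self.latency = latency
        self.readers = defaultdict(set)  # line -> set of CPUs that read it

    def nearest_bank(self, cpu):
        return min(range(len(self.banks)), key=lambda b: self.latency[cpu][b])

    def read(self, cpu, line, home_bank):
        self.readers[line].add(cpu)
        near = self.nearest_bank(cpu)
        if line in self.banks[near]:
            return self.banks[near][line]      # hit in the low-latency replica
        data = self.banks[home_bank][line]
        if len(self.readers[line]) >= REPLICATE_THRESHOLD:
            self.banks[near][line] = data      # replicate near this CPU
        return data
```
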
-
Patent number: 7254617
Abstract: A distributed cache module that allows for a distributed cache between multiple servers of a network without using a central cache manager. The distributed cache module transmits each message with a logical timestamp. The distributed cache module of a server that receives the message will delay forwarding of the message to, for example, a client computer, if preceding timestamps are not received. This ensures a correct order of timestamped messages without requiring a central manager to allocate and control the transmission of the messages within the network. Each distributed cache module will request and possibly retrieve data from the cache of another server in response to a file request for the data. The data of a file may be accessed by a plurality of servers joined in a file context.
Type: Grant
Filed: December 6, 2002
Date of Patent: August 7, 2007
Inventors: Karl Schuh, Chris Hawkinson, Scott Ruple, Tom Volden
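The hold-back rule in this abstract (delay forwarding a message until every preceding timestamp has arrived) is a classic ordered-delivery technique and can be sketched briefly. This is an illustration under an assumption the patent does not state: that timestamps form a single gap-free sequence 1, 2, 3, ...; all names are hypothetical.

```python
import heapq

class OrderedForwarder:
    """Sketch of timestamp-ordered forwarding: a message stamped t is
    released only once messages 1..t-1 have all been released."""

    def __init__(self):
        self.next_expected = 1
        self.pending = []      # min-heap of (timestamp, message) held back
        self.delivered = []    # messages forwarded so far, in order

    def receive(self, timestamp, message):
        heapq.heappush(self.pending, (timestamp, message))
        # flush every buffered message whose predecessors are all present
        while self.pending and self.pending[0][0] == self.next_expected:
            _, msg = heapq.heappop(self.pending)
            self.delivered.append(msg)
            self.next_expected += 1
```
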
-
Patent number: 7251723
Abstract: A multiprocessor computer system implements fault resilient booting by using appliance server management. While previous systems have utilized fault resilient booting, it has required the use of a baseboard management controller chip. The present invention avoids the need for this chip by utilizing the appliance server management architecture. The testing of the processors and the determination of the bootstrap processor is controlled by the system I/O device utilizing a timer and a latch.
Type: Grant
Filed: June 19, 2001
Date of Patent: July 31, 2007
Assignee: Intel Corporation
Inventor: Son H. Lam
-
Patent number: 7249221
Abstract: A storage system is arranged to speed up operation and easily duplicate data without requiring a large cache memory capacity, even when many host computers are connected to the storage system. This storage system includes channel adapters, disk drives, disk adapters, and network switches. Further, the front side cache memories connected with the channel adapters and the back side cache memories connected with the disk adapters are provided as a two-layered cache system. When a request for writing data is given to the storage system by the host computer, the data is written in both the front side cache memory and the back side cache memory. The write data is duplicated by placing the write data in one of the front side cache memories and one of the back side cache memories, or in two of the back side cache memories.
Type: Grant
Filed: May 25, 2004
Date of Patent: July 24, 2007
Assignee: Hitachi, Ltd.
Inventor: Kentaro Shimada
-
Patent number: 7240159
Abstract: A data processor has a first cache memory with a large capacity and one port and a second cache memory with a small capacity and two ports disposed between a main memory and an instruction processing section. Data which is frequently used is stored in the first cache memory and data which is less frequently used is stored in the second cache memory under control of a controller responsive to prefetch instructions. One of the cache memories may be a set associative cache memory composed of a plurality of memory chips each having at least two memory banks and an output part to gain access to data sets consecutively and one at a time within the memory banks. On the basis of an address sent from the instruction processing section, a memory bank is selected, and a data set from the selected memory bank is supplied to the processing section.
Type: Grant
Filed: December 20, 2004
Date of Patent: July 3, 2007
Assignee: Hitachi, Ltd.
Inventors: Takashi Hotta, Toshihiko Kurihara, Shigeya Tanaka, Hideo Sawamoto, Akiyoshi Osumi, Koji Saito, Kotaro Shimamura
-
Patent number: 7213107
Abstract: A method and apparatus for a dedicated cache memory are described. Under an embodiment of the invention, a cache memory includes a general-purpose sector and a dedicated sector. The general-purpose sector is to be used for general computer operations. The dedicated sector is to be dedicated to use for a first computer process.
Type: Grant
Filed: December 31, 2003
Date of Patent: May 1, 2007
Assignee: Intel Corporation
Inventor: Blaise B. Fanning
-
Patent number: 7200718
Abstract: An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
Type: Grant
Filed: April 26, 2004
Date of Patent: April 3, 2007
Assignee: Broadband Royalty Corporation
Inventor: Robert C. Duzett
-
Patent number: 7151544
Abstract: Cache access is optimized through identifying redundant accesses (read-requests made to identical system memory addresses), and issuing a single cache data request for each group of redundant accesses. One embodiment of the invention is a graphics system comprising a system memory that stores texture data, coupled to a texture cache that is coupled to one or more texture pipes. Each pipe processes information for a respective spatial bin. A cache preprocessor receives read-requests for texels from the texture pipes and generates a control code corresponding to each read-request, indicating whether the read-request is a redundant access, and linking redundant accesses to a single cache data request. The cache preprocessor provides the control codes and the read-requests to a cache arbiter, which issues the codes and the cache data requests to the texture cache.
Type: Grant
Filed: May 16, 2003
Date of Patent: December 19, 2006
Assignee: Sun Microsystems, Inc.
Inventor: Brian D. Emberling
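The coalescing step this abstract describes (one cache data request per group of read-requests to the same address, with a control code linking each request back to its fetch) can be sketched in a few lines. This is a hedged illustration; the function and the shape of the "control code" are assumptions, not the patented hardware.

```python
def coalesce(read_requests):
    """Sketch of redundant-access coalescing: requests to the same
    address share a single cache data request. Returns the list of
    unique addresses to fetch and, per request, a control code giving
    the index of the fetch that serves it."""
    fetches = []          # one cache data request per unique address
    index_of = {}         # address -> position in `fetches`
    control_codes = []    # per read-request: which fetch satisfies it
    for addr in read_requests:
        if addr not in index_of:         # first (non-redundant) access
            index_of[addr] = len(fetches)
            fetches.append(addr)
        control_codes.append(index_of[addr])   # redundant accesses reuse it
    return fetches, control_codes
```
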
-
Patent number: 7142541
Abstract: According to some embodiments, routing information for an information packet is determined in accordance with a destination address and a device address.
Type: Grant
Filed: August 9, 2002
Date of Patent: November 28, 2006
Assignee: Intel Corporation
Inventors: Alok Kumar, Raj Yavatkar
-
Patent number: 7120755
Abstract: Cache coherency is maintained between the dedicated caches of a chip multiprocessor by writing back data from one dedicated cache to another without routing the data off-chip. Various specific embodiments are described, using write buffers, fill buffers, and multiplexers, respectively, to achieve the on-chip transfer of data between dedicated caches.
Type: Grant
Filed: January 2, 2002
Date of Patent: October 10, 2006
Assignee: Intel Corporation
Inventors: Sujat Jamil, Quinn W. Merrell, Cameron B. McNairy
-
Patent number: 7107409
Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. A cache coherence controller associated with a first cluster of processors can determine whether speculative probing at a first cluster can be performed to improve overall transaction efficiency. Intervening requests from a second cluster can be handled using information from the speculative probe at the first cluster.
Type: Grant
Filed: March 22, 2002
Date of Patent: September 12, 2006
Assignee: Newisys, Inc.
Inventor: David B. Glasco
-
Patent number: 7103722
Abstract: A method and structure is disclosed for constraining cache line replacement that processes a cache miss in a computer system. The invention contains a K-way set associative cache that selects lines in the cache for replacement. The invention constrains the selecting process so that only a predetermined subset of each set of cache lines is selected for replacement. The subset has at least a single cache line and the set size is at least two cache lines. The invention may further select between at least two cache lines based upon which of the cache lines was accessed least recently. A selective enablement of the constraining process is based on a free space memory condition of a memory associated with the cache memory. The invention may further constrain cache line replacement based upon whether the cache miss is from a non-local node in a nonuniform-memory-access system. The invention may also process cache writes so that a predetermined subset of each set is known to be in an unmodified state.
Type: Grant
Filed: July 22, 2002
Date of Patent: September 5, 2006
Assignee: International Business Machines Corporation
Inventors: Caroline Benveniste, Peter Franaszek, John T. Robinson, Charles Schulz
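The constrained-replacement idea (evict the least recently used line, but only from a predetermined subset of the ways in a set) can be sketched as a small victim-selection function. This is an assumed interface for illustration; the subset {0, 1} and the LRU-order encoding are not from the patent.

```python
def choose_victim(lru_order, replaceable_ways=frozenset({0, 1})):
    """Sketch of constrained cache-line replacement: walk the ways of a
    set from least to most recently used and pick the first one that
    lies in the predetermined replaceable subset.

    lru_order: way indices ordered least-recently-used first."""
    for way in lru_order:
        if way in replaceable_ways:
            return way
    raise RuntimeError("no replaceable way in this set")
```
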
-
Patent number: 7103725
Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in multiple processor, multiple cluster systems. A cache coherence controller associated with a first cluster of processors can determine whether speculative probing can be performed before forwarding a data access request to a second cluster. The cache coherence controller can send the data access request to the second cluster if the data access request cannot be completed locally.
Type: Grant
Filed: March 22, 2002
Date of Patent: September 5, 2006
Assignee: Newisys, Inc.
Inventor: David B. Glasco
-
Patent number: 7089372
Abstract: Information regarding memory access by other nodes within a coherency controller of a node is locally stored. The coherency controller receives a transaction relating to a line of local memory of the node. In response to locally determining that the line of the local memory is not being cached by another node and/or has not been modified by another node, the coherency controller processes the transaction without accessing tag directory information regarding the line. A table within the controller may store entries corresponding to local memory sections. Each entry includes a count value tracking a number of lines of the section being cached by other nodes, and a count value tracking a number of lines of the section that have been modified by other nodes. The table may also include flags corresponding to the sections, each flag indicating the validity of the section's contents.
Type: Grant
Filed: December 1, 2003
Date of Patent: August 8, 2006
Assignee: International Business Machines Corporation
Inventors: Donald R. DeSota, William Durr, Robert Joersz, Davis A. Miller
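The per-section counting filter described here can be sketched as a small class: when both counts for a section are zero, a transaction can skip the tag-directory lookup. This is a hedged model; the section size, class name, and update method are assumptions not stated in the abstract.

```python
class CoherencyFilter:
    """Sketch of the locally stored filter: per-section counts of lines
    cached or modified by other nodes. When both counts are zero, the
    transaction proceeds without consulting the tag directory."""

    SECTION_SHIFT = 12  # assumed 4 KiB memory sections

    def __init__(self):
        self.cached_count = {}    # section -> lines cached by other nodes
        self.modified_count = {}  # section -> lines modified by other nodes

    def section(self, addr):
        return addr >> self.SECTION_SHIFT

    def needs_directory_lookup(self, addr):
        s = self.section(addr)
        return (self.cached_count.get(s, 0) > 0
                or self.modified_count.get(s, 0) > 0)

    def remote_fetch(self, addr):
        """Another node starts caching a line in this section."""
        s = self.section(addr)
        self.cached_count[s] = self.cached_count.get(s, 0) + 1
```
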
-
Patent number: 7085886
Abstract: An improved storage controller and method for storing and recovering data are disclosed. The storage controller includes a first cluster for directing data from a host computer to a storage device and a second cluster for directing data from a host computer to a storage device. A first cache memory is connected to the first cluster and a second cache memory is connected to the second cluster. A first preserved area of memory is connected to the first cluster and a second preserved area of memory is connected to the second cluster. Data is directed to the first cache and backed up to the second preserved area in a normal operating mode. Similarly, data is directed to the second cache and backed up to the first preserved area in the normal operating mode. In the event of a power failure or comparable event, data from the first and second preserved areas are transferred to, and stored on, a first storage device.
Type: Grant
Filed: May 28, 2003
Date of Patent: August 1, 2006
Assignee: International Business Machines Corporation
Inventors: Yu-Cheng Hsu, Vernon J. Legvold
-
Patent number: 7076610
Abstract: An integrated circuit memory device includes a quad-port cache memory device and a higher capacity supplemental memory device. These memory devices operate collectively as a high speed FIFO having fast fall through capability and extended data capacity. The FIFO does not require complex arbitration circuitry to oversee reading and writing operations. The supplemental memory device may be an embedded on-chip memory device or a separate off-chip memory device (e.g., DRAM, SRAM). The quad-port cache memory device utilizes a data rotation technique to support bus matching. Error detection and correction (EDC) circuits are also provided to check and correct FIFO read data. The EDC circuits operate without adding latency to FIFO read operations.
Type: Grant
Filed: July 3, 2003
Date of Patent: July 11, 2006
Assignee: Integrated Device Technology, Inc.
Inventors: Mario Au, Jiann-Jeng Duh
-
Patent number: 7076609
Abstract: Cache sharing for a chip multiprocessor. In one embodiment, a disclosed apparatus includes multiple processor cores, each having an associated cache. A control mechanism is provided to allow sharing between caches that are associated with individual processor cores.
Type: Grant
Filed: September 20, 2002
Date of Patent: July 11, 2006
Assignee: Intel Corporation
Inventors: Vivek Garg, Jagannath Keshava
-
Patent number: 7020750
Abstract: A hybrid system for updating cache including a first computer system coupled to a database accessible by a second computer system, said second computer system including a cache, a cache update controller for concurrently implementing a user defined cache update policy, including both notification based cache updates and periodic based cache updates, wherein said cache updates enforce data coherency between said database and said cache, and a graphical user interface for selecting between said notification based cache updates and said periodic based cache updates.
Type: Grant
Filed: September 17, 2002
Date of Patent: March 28, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Pirasenna Thiyagaranjan, Krishnendu Chakraborty, Peter D. Stout, Xuesi Dong
-
Patent number: 7003630
Abstract: A method and apparatus within a processing environment is provided for proxy management of a plurality of proxy caches connected to a plurality of processing elements or cores within a unified memory environment. The proxy management system includes a proxy processor, such as a RISC core, that monitors data transfers or ownership transfers between the processing elements. If the proxy processor determines that a data transfer in one of the proxy caches will affect the coherency within another proxy cache, the proxy processor executes proxy management instructions such as invalidate, flush, and prefetch to the appropriate proxy caches to ensure coherency between the proxy caches and the unified memory.
Type: Grant
Filed: June 27, 2002
Date of Patent: February 21, 2006
Assignee: MIPS Technologies, Inc.
Inventor: Kevin D. Kissell
-
Patent number: 7000073
Abstract: The invention provides a new linked structure for a buffer controller and a management method thereof. The allocation and release actions of buffer memory can be processed more effectively when the buffer controller processes data packets. The linked structure enables the link node of the first buffer register to point to the last buffer register. The link node of the last buffer register points to the second buffer register. Each of the link nodes of the remaining buffer registers points to the next buffer register in order until the last buffer register. This structure can effectively release the buffer registers in the used linked list to a free list.
Type: Grant
Filed: March 28, 2003
Date of Patent: February 14, 2006
Assignee: Via Technologies, Inc.
Inventors: Murphy Chen, Perlman Hu
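One reading of this link structure is that the first register's node doubles as a tail pointer (it points at the last register), while the last register's node points back at the second, so a whole used chain can be spliced onto the free list in O(1) knowing only its first register. The sketch below follows that interpretation; it is an assumption layered on a terse abstract, and all names are illustrative.

```python
class BufferPool:
    """Sketch of the described buffer link structure. In a used chain:
    link[first] -> last (fast tail access), link[last] -> second, and
    the interior registers chain in natural order."""

    def __init__(self, size):
        self.link = list(range(1, size)) + [-1]  # simple initial free chain
        self.free_head = 0

    def allocate(self, count):
        regs = []
        for _ in range(count):                   # pop registers off the free list
            regs.append(self.free_head)
            self.free_head = self.link[self.free_head]
        self.link[regs[0]] = regs[-1]            # first points at last
        if count > 1:
            self.link[regs[-1]] = regs[1]        # last points at second
            for i in range(1, count - 1):        # interior: natural order
                self.link[regs[i]] = regs[i + 1]
        return regs

    def release(self, first):
        """Splice the whole used chain back onto the free list in O(1)."""
        tail = self.link[first]                  # first's node gives the tail
        if tail != first:                        # multi-register chain
            self.link[first] = self.link[tail]   # restore first -> second
        self.link[tail] = self.free_head         # ... last -> old free list
        self.free_head = first
```
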
-
Patent number: 6996674
Abstract: A method, apparatus, and article of manufacture provide the ability to maintain cache in a clustered environment. The cache is maintained in both a primary and secondary node. When data is requested, a symbolic list in a cache directory is examined to determine which node's cache contains the requested data. If the symbolic list indicates data is not currently in cache of any node, any node may be used as the secondary node. However, if an original primary node maintains the data in cache, the original primary node is selected as the secondary node. Once a new write I/O operation is performed, the symbolic list is updated. To install a new node, after applying for cluster admission, symbolic information and a modified track list are requested. The modified track list is merged with new symbolic entries and the new node then broadcasts its availability to the cluster.
Type: Grant
Filed: May 7, 2001
Date of Patent: February 7, 2006
Assignee: International Business Machines Corporation
Inventors: Lawrence Yium-chee Chiu, Windsor Wee Sun Hsu, Honesty Cheng Young
-
Patent number: 6996657
Abstract: An apparatus for providing packets in a peripheral interface circuit of an I/O node of a computer system. The apparatus includes a buffer that may be configured to accumulate data received on a first bus. The apparatus further includes a control unit coupled to the buffer which may be configured to transmit a data packet containing a first number of bytes of the data in response to detecting that any of the bytes of the data is invalid. The control unit may be further configured to transmit the data packet containing a second number of bytes of the data in response to detecting that all of the bytes are valid.
Type: Grant
Filed: March 21, 2002
Date of Patent: February 7, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Eric G. Chambers, Tahsin Askar
-
Patent number: 6996678
Abstract: A cache controller is disclosed. The cache controller includes a potential replacement list, a plurality of valid bits, and a number of counters. The potential replacement list includes a number of entries. Each of the valid bits corresponds to one of the entries. Each of the counters also corresponds to one of the entries.
Type: Grant
Filed: July 31, 2002
Date of Patent: February 7, 2006
Assignee: Cisco Technology, Inc.
Inventor: Rajan Sharma
-
Patent number: 6986002
Abstract: The present invention provides for a bus system having a local bus ring coupled to a remote bus ring. A processing unit is coupled to the local bus node and is employable to request data. A cache is coupled to the processing unit through a command bus. A cache investigator, coupled to the cache, is employable to determine whether the cache contains the requested data. The cache investigator is further employable to generate and broadcast cache utilization parameters, which contain information as to the degree of accessing the cache by other caches, its own associated processing unit, and so on. In one aspect, the cache is a local cache. In another aspect, the cache is a remote cache.
Type: Grant
Filed: December 17, 2002
Date of Patent: January 10, 2006
Assignee: International Business Machines Corporation
Inventor: Ram Raghavan
-
Patent number: 6981101
Abstract: A multiprocessor system and method includes a processing sub-system having a plurality of processors and a processor memory system. A scalable network is operable to couple the processing sub-system to an input/output (I/O) sub-system. The I/O sub-system includes a plurality of I/O interfaces. Each I/O interface has a local cache and is operable to couple a peripheral device to the multiprocessor system and to store copies of data from the processor memory system in the local cache for use by the peripheral device. A coherence domain for the multiprocessor system includes the processors and processor memory system of the processing sub-system and the local caches of the I/O sub-system.
Type: Grant
Filed: July 20, 2001
Date of Patent: December 27, 2005
Assignee: Silicon Graphics, Inc.
Inventors: Steven C. Miller, Daniel E. Lenoski, Kevin Knecht, George Hopkins, Michael S. Woodacre
-
Patent number: 6976125
Abstract: One embodiment of the present invention provides a system for predicting hot spots in a cache memory. Upon receiving a memory operation at the cache, the system determines a target location within the cache for the memory operation. Once the target location is determined, the system increments a counter associated with the target location. If the counter reaches a pre-determined threshold value, the system generates a signal indicating that the target location is a hot spot in the cache memory.
Type: Grant
Filed: January 29, 2003
Date of Patent: December 13, 2005
Assignee: Sun Microsystems, Inc.
Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
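The counting scheme in this abstract is simple enough to sketch directly. A minimal, hypothetical model (class and parameter names are not from the patent): each cache location has a counter incremented on access, and a hot-spot signal is raised once the counter reaches a threshold.

```python
# Minimal sketch of per-location hot-spot prediction: increment a counter
# for the target location on each access and signal once it reaches a
# pre-determined threshold. All names are illustrative.

class HotSpotPredictor:
    def __init__(self, num_locations, threshold):
        self.threshold = threshold
        self.counters = [0] * num_locations

    def access(self, location):
        """Record a memory operation; return True if the target location
        has now crossed the hot-spot threshold."""
        self.counters[location] += 1
        return self.counters[location] >= self.threshold
```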
-
Patent number: 6970975
Abstract: A method for performing efficient caching through an enumeration process is provided. The objects residing on the storage medium are cached in the order that these objects are kept in the directory of the storage medium. As a result, the directory content is enumerated in the cache. Therefore, the cache does not have to be associated with the server layout. It is further possible to support a hierarchy of distributed caches using the disclosed invention.
Type: Grant
Filed: November 15, 2002
Date of Patent: November 29, 2005
Assignee: Exanet Co.
Inventor: Shahar Frank
-
Patent number: 6959361
Abstract: One embodiment of the present invention provides a memory controller that contains a distributed cache that stores cache lines for pending memory operations. This memory controller includes an input that receives memory operations that are directed to an address in memory. It also includes a central scheduling unit and multiple agents that operate under control of the central scheduling unit. Upon receiving a current address, a given agent compares the current address with a cache line stored within the given agent. All of the agents compare the current address with their respective cache line in parallel. If the addresses match, the agent reports the result to the rest of the agents in the memory controller, and accesses data within the matching cache line stored within the agent to accomplish the memory operation.
Type: Grant
Filed: April 25, 2002
Date of Patent: October 25, 2005
Assignee: Sun Microsystems, Inc.
Inventors: Jurgen M. Schulz, David C. Stratman
-
Patent number: 6950907
Abstract: A dirty memory subsystem includes storage operable to store redundant copies of dirty indicators. Each dirty indicator is associated with a respective block of main memory and is settable to a predetermined state to indicate that the block of main memory associated therewith has been dirtied. By providing redundant storage for the dirty indicators, any difference between the stored copies of the dirty indicators can be considered as indicative of memory corruption, for example as a result of a cosmic ray impact. As the different copies can be stored in different locations, it is unlikely that a cosmic ray impact would affect all copies equally. If a difference between the stored copies is detected, then the dirty indicator can be taken as unreliable and remedial action can be taken. For example, it can be assumed that a block of main memory has been dirtied if any of the copies of the dirty indicator has the predetermined state.
Type: Grant
Filed: August 24, 2001
Date of Patent: September 27, 2005
Assignee: Sun Microsystems, Inc.
Inventors: Paul Jeffrey Garnett, Jeremy Graham Harris
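A sketch of this redundancy scheme, under the assumption of two separately stored copies per block (the count and layout are not fixed by the abstract): a mismatch between the copies flags possible corruption, and the remedial policy quoted in the abstract treats a block as dirty if any copy holds the dirty state.

```python
# Illustrative model of redundant per-block dirty indicators. A mismatch
# between stored copies is treated as indicative of corruption; the
# conservative fallback assumes the block is dirty if any copy is set.

class RedundantDirtyBits:
    def __init__(self, num_blocks, num_copies=2):
        # Each copy would live in a different physical location.
        self.copies = [[False] * num_blocks for _ in range(num_copies)]

    def mark_dirty(self, block):
        for copy in self.copies:
            copy[block] = True

    def mismatch(self, block):
        # Differing copies suggest corruption (e.g. a cosmic ray upset).
        return len({copy[block] for copy in self.copies}) > 1

    def is_dirty(self, block):
        # Remedial action: assume dirtied if any copy says so.
        return any(copy[block] for copy in self.copies)
```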
-
Patent number: 6947971
Abstract: Methods and apparatus for caching information associated with packets are disclosed. According to one aspect of the present invention, a system for processing a packet includes a controller with a processor and a controller data cache, a bus, a memory interface, and a separate data cache. The memory interface may be accessed by the controller via the bus, and is arranged to be in communication with a substantially external memory. The separate data cache, which is also in communication with the controller via the bus, caches information associated with the packet such that the controller accesses the separate data cache to obtain the information associated with the packet when the controller needs to decide how to process the packet.
Type: Grant
Filed: May 9, 2002
Date of Patent: September 20, 2005
Assignee: Cisco Technology, Inc.
Inventor: James A. Amos
-
Patent number: 6948032
Abstract: One embodiment of the present invention provides a system that uses a hot spot cache to alleviate the performance problems caused by hot spots in cache memories, wherein the hot spot cache stores lines that are evicted from hot spots in the cache. Upon receiving a memory operation at the cache, the system performs a lookup for the memory operation in both the cache and the hot spot cache in parallel. If the memory operation is a read operation that causes a miss in the cache and a hit in the hot spot cache, the system reads a data line for the read operation from the hot spot cache, writes the data line to the cache, performs the read operation on the data line in the cache, and then evicts the data line from the hot spot cache.
Type: Grant
Filed: January 29, 2003
Date of Patent: September 20, 2005
Assignee: Sun Microsystems, Inc.
Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
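The read path in this abstract can be sketched compactly. The model below represents both caches as dictionaries and serializes the "parallel" lookup; on a main-cache miss that hits the hot spot cache, the line is written back into the main cache and evicted from the hot spot cache before the read completes. Function and parameter names are illustrative.

```python
# Sketch of the hot-spot-cache read path: lookup in both caches (modeled
# sequentially here, parallel in hardware); on miss-in-cache plus
# hit-in-hot-spot-cache, migrate the line back to the main cache.

def read(address, cache, hot_spot_cache):
    if address in cache:                    # hit in the main cache
        return cache[address]
    if address in hot_spot_cache:           # miss in cache, hit in hot spot cache
        line = hot_spot_cache.pop(address)  # evict line from the hot spot cache
        cache[address] = line               # write the line to the cache
        return cache[address]               # perform the read on the cached line
    return None                             # miss in both (memory fetch not modeled)
```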
-
Patent number: 6938128
Abstract: A processor (500) issues a read request for data. A processor interface (24) initiates a local search for the requested data and also forwards the read request to a memory directory (24) for processing. While the read request is processing, the processor interface (24) can determine if the data is available locally. If so, the data is transferred to the processor (500) for its use. The memory directory (24) processes the read request and generates a read response therefrom. The processor interface (24) receives the read response and determines whether the data was available locally. If so, the read response is discarded. If the data was not available locally, the processor interface (24) provides the read response to the processor (500).
Type: Grant
Filed: December 2, 2003
Date of Patent: August 30, 2005
Assignee: Silicon Graphics, Inc.
Inventors: Jeffrey S. Kuskin, William A. Huffman
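The flow in this abstract reduces to a speculative local search racing a forwarded request. The hypothetical sketch below (all names are assumptions, and the concurrent search is modeled sequentially) shows the key decision: the directory always processes the request, but its response is discarded when the data turned out to be available locally.

```python
# Sketch of the local-search-plus-forwarded-request flow: the memory
# directory always services the read, and its response is discarded if
# the local search already produced the data.

def handle_read(address, local_store, memory_directory):
    local_hit = address in local_store        # local search for the data
    response = memory_directory.get(address)  # request is forwarded regardless
    if local_hit:
        return local_store[address]           # directory response discarded
    return response                           # data was not available locally
```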
-
Patent number: 6934801
Abstract: A cache memory in a disk device comprises a first cache memory for holding predetermined data to be written to the storage medium, a second cache memory for holding predetermined status read out from the storage medium, and a third cache memory for holding predetermined data designated by the host.
Type: Grant
Filed: May 2, 2002
Date of Patent: August 23, 2005
Assignee: NEC Corporation
Inventor: Toshikazu Takai
-
Patent number: 6931505
Abstract: One embodiment of a distributed memory module cache includes tag memory and associated logic implemented at the memory controller end of a memory channel. The memory controller is coupled to at least one memory module by way of a point-to-point interface. The data cache and associated logic are located in one or more buffer components on each of the memory modules. The memory controller communicates with the memory module via a variety of commands. Included in these commands are an activate command and a cache fetch command. A command is delivered from the memory controller to the memory modules over four transfer periods. The activate command and the cache fetch command have formats that differ only in the information delivered in the fourth transfer period. A read command and a read and preload command similarly differ only in the information delivered over the fourth transfer period.
Type: Grant
Filed: December 31, 2001
Date of Patent: August 16, 2005
Assignee: Intel Corporation
Inventor: Howard S. David
-
Patent number: 6901485
Abstract: A computer system includes a home node and one or more remote nodes coupled by a node interconnect. The home node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, a home system memory, a memory directory including a plurality of entries, and a memory controller coupled to the local interconnect, the home system memory and the memory directory. The memory directory includes a plurality of entries that each provide an indication of whether or not an associated data granule in the home system memory has a corresponding cache line held in at least one remote node. The memory controller includes demand invalidation circuitry that, responsive to a data request for a requested data granule in the home system memory, reads an associated entry in the memory directory and issues an invalidating command to at least one remote node holding a cache line corresponding to the requested data granule.
Type: Grant
Filed: June 21, 2001
Date of Patent: May 31, 2005
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, John Steven Dodson, James Stephen Fields, Jr.
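The demand-invalidation step can be sketched as follows, under the assumption that the memory directory maps each data granule to the set of remote nodes holding a cached copy (the actual entry format is not given in the abstract, and all names are hypothetical): a data request reads the entry and issues invalidating commands to every holder other than the requester.

```python
# Sketch of directory-based demand invalidation: on a data request, read
# the directory entry for the granule and invalidate every remote holder
# other than the requesting node, which then becomes the sole holder.

def demand_invalidate(address, directory, requesting_node):
    holders = directory.get(address, set())
    to_invalidate = sorted(n for n in holders if n != requesting_node)
    directory[address] = {requesting_node}  # requester becomes the only holder
    return to_invalidate                    # nodes sent an invalidating command
```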
-
Patent number: 6898675
Abstract: Where a null response can be expected from devices snooping a load operation, data may be used by a requesting processor prior to the coherency response window. A null snoop response may be determined, for example, from the availability of the data without a bus transaction. The capability of accelerating data in this fashion requires only a few simple changes in processor state transitions, to permit entry of the data completion wait state prior to the response wait state. Processors may forward accelerated data to execution units with the expectation that a null snoop response will be received during the coherency response window. If a non-null snoop response is received, an error condition is asserted. Data acceleration of this type allows critical data to get back to the processor without waiting for the coherency response window.
Type: Grant
Filed: August 24, 1998
Date of Patent: May 24, 2005
Assignee: International Business Machines Corporation
Inventors: Alexander Edward Okpisz, Thomas Albert Petersen