Private Caches Patents (Class 711/121)
  • Patent number: 7392345
    Abstract: An improved method and system for client-side caching that transparently caches suitable network files for offline use. A cache mechanism in a network redirector transparently intercepts requests to access server files and, if the requested file is locally cached, satisfies the request from the cache when possible. Otherwise the cache mechanism creates a local cache file and satisfies the request from the server, filling in the sparse cached file as data in missing ranges is requested and received from the server. A background process also fills in local files that are sparse, using the existing handle of already-open server files, or by opening, reading from, and closing other server files. Security is provided by maintaining security information received from the server for cached files and using that information to determine access to the file when offline.
    Type: Grant
    Filed: August 7, 2006
    Date of Patent: June 24, 2008
    Assignee: Microsoft Corporation
    Inventors: Shishir Pardikar, Joseph L. Linn, Balan Sethu Raman, Robert E. Corrington
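The range-tracking behavior this abstract describes can be sketched as a small read-through cache. The class and names below are illustrative, not from the patent, and a fixed page size stands in for the byte ranges a real redirector would track: cached pages are served locally, and missing pages are fetched from a server callback and filled into the sparse file.

```python
class SparseCacheFile:
    """Toy sketch of range-based sparse caching: cached byte ranges are
    served locally; missing ranges are fetched via a server callback and
    filled in, so repeat reads never touch the server."""

    def __init__(self, fetch_from_server):
        self.fetch = fetch_from_server   # callable(offset, length) -> bytes
        self.blocks = {}                 # page base offset -> cached bytes
        self.page = 4                    # illustrative fixed page size

    def read(self, offset, length):
        out = bytearray()
        pos = offset
        while pos < offset + length:
            base = (pos // self.page) * self.page
            if base not in self.blocks:          # miss: fill the sparse file
                self.blocks[base] = self.fetch(base, self.page)
            chunk = self.blocks[base]
            start = pos - base
            take = min(self.page - start, offset + length - pos)
            out += chunk[start:start + take]
            pos += take
        return bytes(out)
```

A second read over the same range is satisfied entirely from the local blocks, which is the offline-use property the abstract is after.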
  • Patent number: 7392346
    Abstract: A memory having multiple locations for data storage is updated by performing the following method. The memory locations are grouped into commonly accessible groups of one or more data locations. First, a control array is provided. The control array is associated with a predetermined type of memory update operation, and has a local indicator for each commonly accessible group of memory locations respectively. Next, the instruction stream to the memory is monitored to determine the current memory operation type, and the set of groups of memory locations upon which the current operation is to be performed. If the current memory operation is an operation of the predetermined type, the control array is updated. If the current operation is an operation other than the predetermined type, the state of the respective local indicator of each group of the set is determined. The current operation is then performed upon each group in the set in accordance with the state of its respective local indicator.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: June 24, 2008
    Assignee: Analog Devices, Inc.
    Inventor: Alberto Rodrigo Mandler
  • Patent number: 7350027
    Abstract: A method and apparatus for hardware support of the thread level speculation for existing processor cores without having to change the existing processor core, processor core's interface, or existing caches on the L1, L2 or L3 level. Architecture support for thread speculative execution by adding a new cache level for storing speculative values and a dedicated bus for forwarding speculative values and control. The cache level is hierarchically positioned between the cache levels L1 and L2 cache levels.
    Type: Grant
    Filed: February 10, 2006
    Date of Patent: March 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Alan G. Gara, Valentina Salapura
  • Patent number: 7346738
    Abstract: An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
    Type: Grant
    Filed: February 21, 2007
    Date of Patent: March 18, 2008
    Assignee: Broadband Royalty Corp.
    Inventor: Robert C. Duzett
  • Patent number: 7305523
    Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory, requesting a direct intervention that satisfies the cache miss.
    Type: Grant
    Filed: February 12, 2005
    Date of Patent: December 4, 2007
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, William John Starke, Derek Edward Williams
  • Patent number: 7305522
    Abstract: A method, system, and device for enabling intervention across same-level cache memories. In a preferred embodiment, responsive to a cache miss in a first cache memory, a direct intervention request is sent from the first cache memory to a second cache memory, requesting a direct intervention that satisfies the cache miss. In an alternate embodiment, direct intervention is utilized to access a same-level victim cache.
    Type: Grant
    Filed: February 12, 2005
    Date of Patent: December 4, 2007
    Assignee: International Business Machines Corporation
    Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, Bradley David McCredie, William John Starke
  • Patent number: 7246205
    Abstract: Methods, software and systems of dynamically controlling push cache operations are presented. One method, which may also be implemented in software and/or hardware, monitors performance parameters and enables or disables push cache operations depending on whether the performance parameters are within a predetermined range. Another method, which may also be implemented in software and/or hardware, monitors an amount of credits associated with a device and enables or disables push cache operations dependent upon whether the device has sufficient remaining credits.
    Type: Grant
    Filed: December 22, 2004
    Date of Patent: July 17, 2007
    Assignee: Intel Corporation
    Inventors: Santosh Balakrishnan, Raj Yavatkar, Charles Narad
  • Patent number: 7240143
    Abstract: A low-latency storage memory system is built from multiple memory units such as high-density random access memory. Multiple access ports provide access to memory units and send the resultant data out interface ports. The memory units communicate with the access ports through an interconnected mesh to allow any access port to access any memory unit. An address virtualization mechanism using address translators allows any access port of the memory storage system to access requested data as abstract objects without regard for the physical memory unit that the data is located in, or the absolute memory addresses within that memory unit.
    Type: Grant
    Filed: December 8, 2003
    Date of Patent: July 3, 2007
    Assignee: Broadbus Technologies, Inc.
    Inventors: Robert G. Scheffler, Michael A. Kahn, Frank J. Stifter
  • Patent number: 7213107
    Abstract: A method and apparatus for a dedicated cache memory are described. Under an embodiment of the invention, a cache memory includes a general-purpose sector and a dedicated sector. The general-purpose sector is to be used for general computer operations. The dedicated sector is to be dedicated to use for a first computer process.
    Type: Grant
    Filed: December 31, 2003
    Date of Patent: May 1, 2007
    Assignee: Intel Corporation
    Inventor: Blaise B. Fanning
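The sector split this abstract describes can be illustrated with a toy capacity-partitioned cache. The class below is a sketch under assumed semantics (all names are invented): lines from one designated process live in a dedicated sector with its own LRU state, so general-purpose traffic can never evict them.

```python
from collections import OrderedDict

class SectoredCache:
    """Illustrative sketch, not the patented design: capacity is split
    between a general-purpose sector and a sector dedicated to one
    process; eviction is LRU within each sector independently."""

    def __init__(self, general_size, dedicated_size, dedicated_pid):
        self.sectors = {
            "general": (OrderedDict(), general_size),
            "dedicated": (OrderedDict(), dedicated_size),
        }
        self.dedicated_pid = dedicated_pid

    def _sector(self, pid):
        return "dedicated" if pid == self.dedicated_pid else "general"

    def put(self, pid, addr, line):
        store, cap = self.sectors[self._sector(pid)]
        store[addr] = line
        store.move_to_end(addr)          # mark most recently used
        if len(store) > cap:
            store.popitem(last=False)    # evict LRU from this sector only

    def get(self, pid, addr):
        store, _ = self.sectors[self._sector(pid)]
        if addr in store:
            store.move_to_end(addr)
            return store[addr]
        return None
```

Filling the general sector past its capacity evicts only general lines; the dedicated process's lines are untouched.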
  • Patent number: 7200718
    Abstract: An information distribution system includes an interconnect and multiple data processing nodes coupled to the interconnect. Each data processing node includes mass storage and a cache. Each data processing node also includes interface logic configured to receive signals from the interconnect and to apply the signals from the interconnect to affect the content of the cache, and to receive signals from the mass storage and to apply the signals from the mass storage to affect the content of the cache. The content of the mass storage and cache of a particular node may also be provided to other nodes of the system, via the interconnect.
    Type: Grant
    Filed: April 26, 2004
    Date of Patent: April 3, 2007
    Assignee: Broadband Royalty Corporation
    Inventor: Robert C. Duzett
  • Patent number: 7181539
    Abstract: Data is synchronized among multiple web servers, each of which is coupled to a common data server. Each web server retrieves a scheduled activation time from the data server. If the current time is prior to the scheduled activation time, then each web server retrieves updated data from the data server into a staging cache in the web server. At the scheduled activation time, each web server copies data from its staging cache to an active cache in the web server. If a new web server is added or an existing web server is initiated, then data is copied from an active cache in the data server to an active cache in the new or initialized web server. The multiple web servers may be arranged to form a web farm.
    Type: Grant
    Filed: September 1, 1999
    Date of Patent: February 20, 2007
    Assignee: Microsoft Corporation
    Inventors: Kenneth J. Knight, David J. Messner
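The staging/active two-phase update above lends itself to a short sketch. The names and shapes below are assumptions for illustration: each web server pulls new data into a staging cache ahead of the shared activation time, then swaps staging into active at that time, so all servers switch over together.

```python
class WebServerCache:
    """Toy sketch of staged cache activation: updates are pre-loaded
    into a staging cache, then promoted to the active cache only at
    the shared activation time."""

    def __init__(self):
        self.staging = {}
        self.active = {}

    def pull_updates(self, data_server, now, activation_time):
        if now < activation_time:              # pre-load before the switch
            self.staging.update(data_server)

    def activate(self, now, activation_time):
        if now >= activation_time:
            self.active = dict(self.staging)   # swap to the new content
```

Because every server keys the swap to the same activation time, a farm of such servers serves consistent content without coordinating at request time.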
  • Patent number: 7174431
    Abstract: The invention provides a system and method for resolving ambiguous invalidate messages received by an entity of a computer system. An invalidate message is considered ambiguous when the receiving entity cannot tell whether it applies to a previously victimized memory block or to a memory block that the entity is waiting to receive. When an entity receives such an invalidate message, it stores the message in its miss address file (MAF). When the entity subsequently receives the memory block, the entity “replays” the Invalidate message from its MAF by invalidating the block from its cache and issuing an Acknowledgement (Ack) to the entity that triggered issuance of the Invalidate message command.
    Type: Grant
    Filed: October 25, 2005
    Date of Patent: February 6, 2007
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen R. Van Doren, Gregory E. Tierney
  • Patent number: 7142541
    Abstract: According to some embodiments, routing information for an information packet is determined in accordance with a destination address and a device address.
    Type: Grant
    Filed: August 9, 2002
    Date of Patent: November 28, 2006
    Assignee: Intel Corporation
    Inventors: Alok Kumar, Raj Yavatkar
  • Patent number: 7114156
    Abstract: A system and method for generating a key list structure forming a queue of users' work flow requests in a queuing system such that many requests from a single user will not prevent processing of requests from other users in the queuing system. The key list structure comprises keys associated with users' work flow requests, each key indicating a priority level associated with a request, a user identification (User ID) associated with a requestor, and an assigned user priority value (UPV). The method assigns UPVs to user requests in a manner such that user request entries become interleaved in the key list structure to prevent a single user from dominating the request processing.
    Type: Grant
    Filed: March 29, 2002
    Date of Patent: September 26, 2006
    Assignee: International Business Machines Corporation
    Inventors: Cuong M. Le, Glenn R. Wilcock
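The interleaving effect of the UPV can be demonstrated with a small sketch (field names and the exact UPV assignment rule are assumptions, not taken from the patent): if each request's key is (priority level, UPV, sequence) and the UPV counts how many requests that user already has queued, a burst from one user naturally interleaves with other users' requests.

```python
import heapq

class FairWorkQueue:
    """Sketch of UPV-based interleaving: the heap orders requests by
    (priority level, user priority value, arrival sequence), so a
    user's second request sorts behind other users' first requests."""

    def __init__(self):
        self.heap = []
        self.pending = {}   # user id -> number of requests still queued
        self.seq = 0

    def enqueue(self, priority, user, request):
        upv = self.pending.get(user, 0)      # assigned user priority value
        self.pending[user] = upv + 1
        heapq.heappush(self.heap, (priority, upv, self.seq, user, request))
        self.seq += 1

    def dequeue(self):
        _, _, _, user, request = heapq.heappop(self.heap)
        self.pending[user] -= 1
        return user, request
```

Two back-to-back requests from one user and one from another come out interleaved rather than in strict arrival order.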
  • Patent number: 7086056
    Abstract: A processor unit executes a failure detection program for a vehicle. The failure detection program includes a first failure detection process of a high priority level, a second failure detection process of a moderate priority level and a memory manipulation process of a low priority level. Each of the failure detection processes requests memory manipulation by generating an event as the need arises. When the memory manipulation process is activated, it performs the requested memory manipulation in the same order as the memory manipulation is requested so that execution of memory manipulation requested by one of the failure detection processes is not interrupted by execution of memory manipulation requested by the other of the failure detection processes. However, each of the failure detection processes itself can be executed interrupting the execution of memory manipulation process because of its higher priority level.
    Type: Grant
    Filed: March 15, 2002
    Date of Patent: August 1, 2006
    Assignee: Denso Corporation
    Inventor: Toshiyuki Fukushima
  • Patent number: 7076613
    Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
    Type: Grant
    Filed: January 21, 2004
    Date of Patent: July 11, 2006
    Assignee: Intel Corporation
    Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
  • Patent number: 7062606
    Abstract: A multi-threaded embedded processor that includes an on-chip deterministic (e.g., scratch or locked cache) memory that persistently stores all instructions associated with one or more pre-selected high-use threads. The processor executes general (non-selected) threads by reading instructions from an inexpensive external memory, e.g., by way of an on-chip standard cache memory, or using other potentially slow, non-deterministic operation such as direct execution from that external memory that can cause the processor to stall while waiting for instructions to arrive. When a cache miss or other blocking event occurs during execution of a general thread, the processor switches to the pre-selected thread, whose execution with zero or minimal delay is guaranteed by the deterministic memory, thereby utilizing otherwise wasted processor cycles until the blocking event is complete.
    Type: Grant
    Filed: May 7, 2003
    Date of Patent: June 13, 2006
    Assignee: Infineon Technologies AG
    Inventors: Robert E. Ober, Roger D. Arnold, Daniel Martin, Erik K. Norden
  • Patent number: 7058770
    Abstract: A recording area of a hard disk is managed in units. Each unit comprises physically contiguous recording regions of a predetermined size, and the usage status of each unit is stored in an allocation unit (AU) management table. Data requiring real-time processing is stored in each unit. When the usage status of a unit changes due to the recording or deletion of data, the AU management table is updated, and information indicating that the table has been updated is recorded. When the hard disk is inserted or the file system is initialized, the apparatus that last made a change is identified; if that apparatus uses a file system that does not employ the AU management table, the table is reconfigured so as to retain its consistency.
    Type: Grant
    Filed: October 2, 2003
    Date of Patent: June 6, 2006
    Assignees: Sanyo Electric Co., Ltd., Sharp Corporation, Victor Company of Japan, Limited, Pioneer Corporation, Hitachi, Ltd., Fujitsu Limited
    Inventors: Yuichi Kanai, Yoshihiro Hori, Ryoji Ohno, Takeo Ohishi, Kenichiro Tada, Tatsuya Hirai, Jun Kamada
  • Patent number: 7000073
    Abstract: The invention provides a new linked structure for a buffer controller, together with a method for managing it, so that allocation and release of buffer memory can be processed more efficiently while the buffer controller handles data packets. In the linked structure, the link node of the first buffer register points to the last buffer register, the link node of the last buffer register points to the second buffer register, and each remaining link node points to the next buffer register in order. This structure allows the buffer registers in the used linked list to be released efficiently to a free list.
    Type: Grant
    Filed: March 28, 2003
    Date of Patent: February 14, 2006
    Assignee: Via Technologies, Inc.
    Inventors: Murphy Chen, Perlman Hu
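The payoff of that linked arrangement is that both ends of the used list are reachable without traversal, so the whole list can be released in one splice. The sketch below simplifies to a plain head/tail pair rather than reproducing the patent's exact circular link-node layout (all names are illustrative), but it shows the constant-time release to the free list.

```python
class Node:
    def __init__(self, idx):
        self.idx = idx
        self.next = None

class BufferPool:
    """Sketch of O(1) bulk release: tracking both the head and the tail
    of the used list lets release_all() splice it onto the free list
    without walking the chain."""

    def __init__(self, n):
        self.free = None
        for i in reversed(range(n)):        # build free list 0 .. n-1
            node = Node(i)
            node.next = self.free
            self.free = node
        self.used_head = self.used_tail = None

    def allocate(self):
        # assumes a free buffer exists (sketch omits exhaustion handling)
        node, self.free = self.free, self.free.next
        node.next = None
        if self.used_head is None:
            self.used_head = self.used_tail = node
        else:
            self.used_tail.next = node
            self.used_tail = node
        return node.idx

    def release_all(self):
        # splice the entire used list onto the free list in one step
        if self.used_head is not None:
            self.used_tail.next = self.free
            self.free = self.used_head
            self.used_head = self.used_tail = None
```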
  • Patent number: 6996678
    Abstract: A cache controller is disclosed. The cache controller includes a potential replacement list, a plurality of valid bits, and a number of counters. The potential replacement list includes a number of entries. Each of the valid bits corresponds to one of the entries, and each of the counters likewise corresponds to one of the entries.
    Type: Grant
    Filed: July 31, 2002
    Date of Patent: February 7, 2006
    Assignee: Cisco Technology, Inc.
    Inventor: Rajan Sharma
  • Patent number: 6996657
    Abstract: An apparatus for providing packets in a peripheral interface circuit of an I/O node of a computer system. The apparatus includes a buffer that may be configured to accumulate data received on a first bus. The apparatus further includes a control unit coupled to the buffer which may be configured to transmit a data packet containing a first number of bytes of the data in response to detecting that any of the bytes of the data is invalid. The control unit may be further configured to transmit the data packet containing a second number of bytes of the data in response to detecting that all of the bytes are valid.
    Type: Grant
    Filed: March 21, 2002
    Date of Patent: February 7, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Eric G. Chambers, Tahsin Askar
  • Patent number: 6990559
    Abstract: The invention provides a system and method for resolving ambiguous invalidate messages received by an entity of a computer system. An invalidate message is considered ambiguous when the receiving entity cannot tell whether it applies to a previously victimized memory block or to a memory block that the entity is waiting to receive. When an entity receives such an invalidate message, it stores the message in its miss address file (MAF). When the entity subsequently receives the memory block, the entity “replays” the Invalidate message from its MAF by invalidating the block from its cache and issuing an Acknowledgement (Ack) to the entity that triggered issuance of the Invalidate message command.
    Type: Grant
    Filed: October 3, 2002
    Date of Patent: January 24, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen R. Van Doren, Gregory E. Tierney
  • Patent number: 6985999
    Abstract: A microprocessor prioritizes cache line fill requests according to request type rather than issuing the requests in program order. In one embodiment, the request types include blocking accesses at highest priority, non-blocking page table walk accesses at medium priority, and non-blocking store allocation and prefetch accesses at lowest priority. The microprocessor takes advantage of the fact that the core logic clock frequency is a multiple of the processor bus clock frequency, typically by an order of magnitude. The microprocessor accumulates the various requests generated by the core logic each core clock cycle during a bus clock cycle. The microprocessor waits until the last core clock cycle before the next bus clock cycle to prioritize the accumulated requests and issues the highest priority request on the processor bus.
    Type: Grant
    Filed: October 18, 2002
    Date of Patent: January 10, 2006
    Assignee: IP-First, LLC
    Inventors: G. Glenn Henry, Rodney E. Hooker
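The accumulate-then-prioritize scheme above can be sketched with a small priority queue. The priority ordering is taken from the abstract (blocking above page-table walk above store-allocate/prefetch); everything else, including the names, is an illustrative assumption. Requests accumulate during a bus clock cycle, and the highest-priority one is issued at the boundary, FIFO within a priority class.

```python
import heapq

# lower number = higher priority, per the ordering given in the abstract
BLOCKING, TABLEWALK, PREFETCH = 0, 1, 2

class BusRequestQueue:
    """Sketch of type-prioritized cache line fill requests: requests
    accumulate across core clock cycles; issue() hands out the highest-
    priority accumulated request, preserving program order within a
    priority class via a sequence number."""

    def __init__(self):
        self.heap = []
        self.seq = 0

    def accumulate(self, priority, addr):
        heapq.heappush(self.heap, (priority, self.seq, addr))
        self.seq += 1

    def issue(self):
        if not self.heap:
            return None
        _, _, addr = heapq.heappop(self.heap)
        return addr
```

A prefetch that arrived first is still issued after a later blocking access, which is the point of prioritizing by request type rather than program order.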
  • Patent number: 6986018
    Abstract: A cache server includes a media serving engine that is capable of distributing media content. A cache engine is coupled to the media serving engine and capable of caching media content. A set of cache policies is accessible by the cache engine to define the operation of the cache engine. The cache server can be configured to operate as either a cache server or an origin server. The cache server also includes a data communication interface coupled to the cache engine and the media serving engine to allow the cache engine to receive media content across a network and to allow the media serving engine to distribute media content across the network. The cache policies include policies for distributing media content from the media server, policies for handling cache misses, and policies for prefetching media content.
    Type: Grant
    Filed: June 26, 2001
    Date of Patent: January 10, 2006
    Assignee: Microsoft Corporation
    Inventors: Bret P. O'Rourke, Dawson F. Dean, Chih-Kan Wang, Mark D. Van Antwerp, David J. Roth, Chadd B. Knowlton
  • Patent number: 6973539
    Abstract: A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory which governs access to a first common code/data set shared by processes running in the processors by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.
    Type: Grant
    Filed: April 30, 2003
    Date of Patent: December 6, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Charles P. Ryan, Wayne R. Buzby
  • Patent number: 6963953
    Abstract: Assume that &#8220;SO&#8221; denotes the state in which data in a responsible region (the region whose data is accessed most frequently by the corresponding processor) has been updated in the cache memory controlled by a cache device, while copies of that data are stored in other cache memories. In this case, each cache device controlling one of the remaining cache memories changes the state of the copy, which lies outside its own responsible region, from &#8220;SN&#8221; to &#8220;I&#8221; (invalid). Thus, when data designated by the same address is shared by a plurality of cache memories, the data can be invalidated in every cache memory except the one corresponding to the processor that holds the designated address in its own responsible region, keeping the data sharing rate low.
    Type: Grant
    Filed: August 8, 2002
    Date of Patent: November 8, 2005
    Assignee: Renesas Technology Corp.
    Inventor: Masami Nakajima
  • Patent number: 6957313
    Abstract: An apparatus and method for storing, manipulating, processing, and transferring data in a memory matrix (105). The matrix (105) includes a number of multi-ported memory devices (250) arranged in banks (260), each of the devices capable of storing data, a memory controller (265) for accessing the devices, and a cache (270) with an allocation table stored therein to describe data stored in the matrix. Preferably, the matrix (105) is used in a modular, network-centric memory system (100), which has a management module (125) to interface between the matrix and a network (120) of data processing systems (115), the network based on either physical or wireless connections. Optionally, the system (100) further includes a non-volatile storage module (130), an off-line storage module (135), and an uninterruptible power supply (140).
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: October 18, 2005
    Inventors: James R. Hsia, Yan Chiew Chow
  • Patent number: 6950908
    Abstract: Processors #0 to #3 execute, in parallel, a plurality of threads whose execution sequence is defined. When processor #1 updates its own cache memory #1 while executing a thread, and data at the same address exists in cache memory #2 of processor #2, which executes a child thread, cache memory #2 is updated simultaneously; but if the data exists in cache memory #0 of processor #0, which executes a parent thread, cache memory #0 is not rewritten, and cache memory #1 merely records that the rewrite occurred. When processor #0 completes a thread, any cache line recorded as rewritten by a child thread may be invalid, while a line without such a record is judged valid. Whether a possibly invalid cache line is actually invalid or valid is examined during execution of the next thread.
    Type: Grant
    Filed: July 10, 2002
    Date of Patent: September 27, 2005
    Assignee: NEC Corporation
    Inventors: Atsufumi Shibayama, Satoshi Matsushita
  • Patent number: 6948010
    Abstract: The present invention relates to a method and system for transferring portions of a memory block. A first data mover is configured with a first start address corresponding to a first portion of a source memory block. A second data mover is configured with a second start address corresponding to a second portion of the source memory block sized differently from the first portion. The first portion of the source memory block is transferred by the first data mover and the second portion of the source memory block is transferred by the second data mover.
    Type: Grant
    Filed: December 20, 2000
    Date of Patent: September 20, 2005
    Assignee: Stratus Technologies Bermuda Ltd.
    Inventors: Jeffrey Somers, Andrew Alden, John Edwards
  • Patent number: 6871268
    Abstract: Techniques for improved cache management including cache replacement are provided. In one aspect, a distributed caching technique of the invention comprises the use of a central cache and one or more local caches. The central cache communicates with the one or more local caches and coordinates updates to the local caches, including cache replacement. The invention also provides techniques for adaptively determining holding times associated with data storage applications such as those involving caches.
    Type: Grant
    Filed: March 7, 2002
    Date of Patent: March 22, 2005
    Assignee: International Business Machines Corporation
    Inventors: Arun Kwangil Iyengar, Isabelle M. Rouvellou
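The central/local coordination described above can be sketched in a few lines. The classes and names below are illustrative assumptions, not the patented design: the central cache tracks which local caches hold each key and, when a value is updated, pushes an invalidation to each holder as its form of coordinated cache replacement.

```python
class LocalCache:
    def __init__(self):
        self.data = {}

class CentralCache:
    """Sketch of a central cache coordinating one or more local caches:
    reads register the local cache as a holder of the key; updates
    invalidate the key in every registered holder."""

    def __init__(self):
        self.data = {}
        self.holders = {}     # key -> set of local caches holding it

    def read(self, key, local):
        local.data[key] = self.data[key]
        self.holders.setdefault(key, set()).add(local)

    def update(self, key, value):
        self.data[key] = value
        for local in self.holders.pop(key, set()):
            local.data.pop(key, None)   # coordinated invalidation
```

After an update, a local cache miss sends the reader back to the central cache for the fresh value, so stale local copies never survive an update.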
  • Patent number: 6868483
    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; and at least four processors having respective private caches, with the first and second private caches coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches coupled to the second shared cache and to one another via a second internal bus; a method and apparatus for preventing hogging of ownership of a gateword stored in the main memory which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use.
    Type: Grant
    Filed: September 26, 2002
    Date of Patent: March 15, 2005
    Assignee: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Patent number: 6865649
    Abstract: A system and method for pre-fetching data. A computer program comprising multiple basic blocks is submitted to a processor for execution. Tables or other data structures are associated with some or all of the basic blocks (e.g., a table is associated with, or stores, an instruction address of a particular basic block). During execution of a basic block, memory locations of data elements accessed during the executions are stored in the associated table. After a threshold number of executions, differences between memory locations of the data elements in successive executions are then computed. The differences are applied to the last stored memory locations to generate estimates of the locations for the data elements for a subsequent execution. Using the estimated locations, the data elements can be pre-fetched before, or as, the basic block is executed.
    Type: Grant
    Filed: October 10, 2002
    Date of Patent: March 8, 2005
    Assignee: Sun Microsystems, Inc.
    Inventor: Gian-Paolo D. Musumeci
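The per-basic-block stride prediction above is straightforward to sketch. The class below is a minimal illustration under assumed parameters (a single tracked address per execution and a threshold of two samples): record the data address touched on each execution of a block, then predict the next access as the last address plus the observed stride.

```python
class BlockPrefetcher:
    """Sketch of per-basic-block stride prefetching: each block's table
    stores the data addresses seen on successive executions; once enough
    samples exist, the difference between the last two addresses is
    applied to the last one to estimate the next access."""

    def __init__(self, threshold=2):
        self.history = {}      # block address -> list of data addresses
        self.threshold = threshold

    def record(self, block, data_addr):
        self.history.setdefault(block, []).append(data_addr)

    def predict(self, block):
        h = self.history.get(block, [])
        if len(h) < self.threshold:    # not enough executions yet
            return None
        stride = h[-1] - h[-2]
        return h[-1] + stride          # estimated next location to prefetch
```

A block that walks an array with 8-byte elements yields a constant stride of 8, so the predicted address can be prefetched before the block executes again.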
  • Patent number: 6859864
    Abstract: A method and apparatus are described for providing an implicit write-back in a distributed shared memory environment implementing a snoop based architecture. A requesting node submits a single read request to a snoop based architecture controller switch. The switch recognizes that another node other than the requesting node and the home node for the desired data has a copy of the data. The switch directs the request to the responding node that is not the home node. The responding node, having modified the data, provides a single response back to the switch that causes the switch to both update the data at the home node and answer the requesting node. The updating of the data at the home node is done without receiving an explicit write instruction from the requesting node.
    Type: Grant
    Filed: December 29, 2000
    Date of Patent: February 22, 2005
    Assignee: Intel Corporation
    Inventors: Manoj Khare, Lily P. Looi, Akhilesh Kumar, Kenneth C. Creta
  • Patent number: 6857052
    Abstract: Any of the processors CPU1 to CPUn drives the miss-hit detecting signal line 5 to a low level upon detecting a miss hit. In response, the mode switching controller 2 is notified of the miss hit and switches each of the processors CPU1 to CPUn to the synchronous operation mode. Each command from each of the processors CPU1 to CPUn is also appended with a tag. When each of the processors CPU1 to CPUn feeds the synchronization detecting signal line 6 with identical tags designated as a synchronous point, the mode switching controller 2 switches the operation of the processors to the non-synchronous operation mode.
    Type: Grant
    Filed: February 21, 2002
    Date of Patent: February 15, 2005
    Assignee: Semiconductor Technology Academic Research Center
    Inventor: Hideharu Amano
  • Publication number: 20040268054
    Abstract: The invention provides a cache management system comprising in various embodiments pre-load and pre-own functionality to enhance cache efficiency in shared memory distributed cache multiprocessor computer systems. Some embodiments of the invention comprise an invalidation history table to record the line addresses of cache lines invalidated through dirty or clean invalidation, and which is used such that invalidated cache lines recorded in an invalidation history table are reloaded into cache by monitoring the bus for cache line addresses of cache lines recorded in the invalidation history table. In some further embodiments, a write-back bit associated with each L2 cache entry records when either a hit to the same line in another processor is detected or when the same line is invalidated in another processor's cache, and the system broadcasts write-backs from the selected local cache only when the line being written back has a write-back bit that has been set.
    Type: Application
    Filed: January 21, 2004
    Publication date: December 30, 2004
    Applicant: Intel Corporation
    Inventors: Jih-Kwon Peir, Steve Y. Zhang, Scott H. Robinson, Konrad Lai, Wen-Hann Wang
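The invalidation-history-table (IHT) pre-load policy above can be sketched in a few lines: invalidated line addresses go into the table, and when a snooped bus address matches an entry, the line is reloaded. The data structures below are an illustrative simplification, not the patent's implementation.

```python
class IHTCache:
    """Toy cache that pre-loads lines recorded in an invalidation history table."""

    def __init__(self):
        self.lines = set()   # addresses currently held in this cache
        self.iht = set()     # addresses invalidated earlier (the IHT)

    def invalidate(self, addr):
        """Another processor invalidates the line; remember it in the IHT."""
        self.lines.discard(addr)
        self.iht.add(addr)

    def snoop(self, addr):
        """Monitor the bus: a previously invalidated line reappearing is reloaded."""
        if addr in self.iht:
            self.lines.add(addr)   # pre-load the line back into cache
            self.iht.discard(addr)

cache = IHTCache()
cache.lines.add(0x1000)
cache.invalidate(0x1000)   # line taken by another CPU, address recorded in IHT
cache.snoop(0x1000)        # address seen on the bus again: line reloaded
```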
  • Patent number: 6820186
    Abstract: Memory requests and responses thereto include a tag that has a shift value indicating the misalignment between the first byte of required packet data and the first byte of a line of data in memory. A packet buffer controller receiving data with an associated tag uses the shift value to shift the received line of data accordingly. The first line of data for the packet data payload is shifted accordingly and written into the packet buffer. Subsequent lines of data require masking the previous line of data except for the last N bytes where N equals the shift value. The shifted line of data is written over the previous line so that the lower order bytes of the shifted received line of data are written. Then the shifted line of data is written into the next line of the packet buffer.
    Type: Grant
    Filed: March 26, 2001
    Date of Patent: November 16, 2004
    Assignee: Sun Microsystems, Inc.
    Inventors: Thomas P. Webber, Hugh Kurth, Robert Dickson
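The shift-and-write scheme above can be sketched as follows: the tag's shift value gives the misalignment between the first payload byte and the start of a memory line, so each received line is shifted left by that amount, with the low-order bytes of one line carrying into the previous output line. This collapses the per-line masking machinery into a simple rechunking; the line width and names are illustrative.

```python
LINE = 8  # bytes per memory line (illustrative width)

def align_lines(lines, shift):
    """Drop the misaligned prefix indicated by the tag's shift value and
    rechunk the payload into packet-buffer lines starting at offset 0."""
    data = b"".join(lines)[shift:]
    return [data[i:i + LINE] for i in range(0, len(data), LINE)]

# Payload starts 2 bytes into the first memory line (shift value = 2).
received = [b"..ABCDEF", b"GHIJKLMN"]
aligned = align_lines(received, 2)
```

After alignment the first packet-buffer line holds the last 6 bytes of the first memory line plus the first 2 bytes of the second, matching the carry-over the abstract describes.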
  • Publication number: 20040221107
    Abstract: A multiprocessor write-into-cache data processing system includes a feature for preventing hogging of ownership of a first gateword stored in the memory which governs access to a first common code/data set shared by processes running in the processors by imposing first delays on all other processors in the system while, at the same time, mitigating any adverse effect on performance of processors attempting to access a gateword other than the first gateword. This is achieved by starting a second delay in any processor which is seeking ownership of a gateword other than the first gateword and truncating the first delay in all such processors by subtracting the elapsed time indicated by the second delay from the elapsed time indicated by the first delay.
    Type: Application
    Filed: April 30, 2003
    Publication date: November 4, 2004
    Inventors: Charles P. Ryan, Wayne R. Buzby
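The delay-truncation rule above reduces to simple arithmetic: a processor seeking a gateword other than the contested one subtracts the elapsed time of its own (second) delay from the hog-prevention (first) delay. A minimal sketch, with hypothetical names:

```python
def truncated_delay(first_delay, second_elapsed):
    """Remaining hog-prevention delay for a processor that is seeking a
    different gateword: the first delay minus the time already elapsed on
    its own second delay, clamped at zero."""
    return max(first_delay - second_elapsed, 0)

# 30 time units already spent waiting on another gateword cut the
# 100-unit hog-prevention delay down to 70.
remaining = truncated_delay(100, 30)
```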
  • Patent number: 6799247
    Abstract: A remote memory processor architecture which provides an embedded processor with access to a large off-chip memory space via a HOST processor bus. An on-chip embedded memory provides a cache memory space.
    Type: Grant
    Filed: August 23, 2001
    Date of Patent: September 28, 2004
    Assignee: Cisco Technology, Inc.
    Inventor: Kenneth W. Batcher
  • Patent number: 6795905
    Abstract: An access transaction generated by a processor is configured using a configuration storage containing a configuration setting. The processor has a normal execution mode and an isolated execution mode. The access transaction has access information. Access to the configuration storage is controlled. An access grant signal is generated using the configuration setting and the access information. The access grant signal indicates if the access transaction is valid.
    Type: Grant
    Filed: September 29, 2000
    Date of Patent: September 21, 2004
    Assignee: Intel Corporation
    Inventors: Carl M. Ellison, Roger A. Golliver, Howard C. Herbert, Derrick C. Lin, Francis X. McKeen, Gilbert Neiger, Ken Reneris, James A. Sutton, Shreekant S. Thakkar, Millind Mittal
  • Patent number: 6766447
    Abstract: A method of initializing random access memory during a BIOS process executed by a processor that is configured to perform speculative reading. The ROM BIOS is modified such that speculative reading is prevented during the memory initialization.
    Type: Grant
    Filed: January 25, 2000
    Date of Patent: July 20, 2004
    Assignee: Dell Products L.P.
    Inventors: Stephen D. Jue, Matthew B. Mendelow
  • Patent number: 6766360
    Abstract: A computer network system for manipulating requests for shared data includes a plurality of groups; each group has a plurality of nodes, and each node has a plurality of processors. The system further comprises a request outstanding buffer (ROB) for recording data requests, a remote access cache (RAC) for caching the results of prior memory requests which are remote to a requesting node, and a directory for recording the global state of a cache line in the system. The RAC supports only two states, Shared and Invalid, and caches only clean remote data. If the directory state is Modified/Exclusive, the line is indicated to not be in the RAC. The behavior of the RAC is described for two important cases: when the RAC does not initially have the line cached and when it does.
    Type: Grant
    Filed: July 14, 2000
    Date of Patent: July 20, 2004
    Assignee: Fujitsu Limited
    Inventors: Patrick N. Conway, Yukihiro Nakagawa, Jung Rung Jiang
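The two-state RAC described above admits a compact sketch: only clean remote lines in the Shared directory state are ever filled, so a Modified/Exclusive directory state guarantees the line is absent from the RAC. Class and method names are illustrative.

```python
from enum import Enum

class RACState(Enum):
    INVALID = 0
    SHARED = 1

class RemoteAccessCache:
    """Toy RAC: supports only Shared and Invalid, caches only clean remote data."""

    def __init__(self):
        self.state = {}   # address -> RACState (absent means Invalid)

    def fill(self, addr, directory_state):
        # Lines that are Modified/Exclusive in the directory never enter the RAC.
        if directory_state == "Shared":
            self.state[addr] = RACState.SHARED

    def lookup(self, addr):
        return self.state.get(addr, RACState.INVALID)

rac = RemoteAccessCache()
rac.fill(0x40, "Shared")               # clean remote line: cached as Shared
rac.fill(0x80, "Modified/Exclusive")   # dirty line: refused, stays Invalid
```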
  • Patent number: 6760811
    Abstract: In a multiprocessor data processing system including: a memory, first and second shared caches, a system bus coupling the memory and the shared caches, first, second, third and fourth processors having, respectively, first, second, third and fourth private caches with the first and second private caches being coupled to the first shared cache, and the third and fourth private caches being coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority is established for a processor to next acquire ownership of the gate control word by: broadcasting a “set gate control flag” command to all processors such that setting the gate control flags establishes delays during which ownership of the gate control word will not be requested by another processor for predetermined periods established in each processor.
    Type: Grant
    Filed: August 15, 2002
    Date of Patent: July 6, 2004
    Assignee: Bull HN Information Systems Inc.
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Patent number: 6757785
    Abstract: A method and system for allocating and storing data to a cache memory in each processor in a multiprocessor computer system. Data structures in main memory are partitioned into substructures that are classified as either exclusive substructures or sharing substructures. The exclusive substructures are cached exclusively by a specified processor, and the sharing substructures are cached by specified groups of processors in the multiprocessor computer.
    Type: Grant
    Filed: November 27, 2001
    Date of Patent: June 29, 2004
    Assignee: International Business Machines Corporation
    Inventors: Michael Brian Brutman, Mahdad Majd
  • Patent number: 6754774
    Abstract: A system includes a memory, a sequencer, and a set of application engines in communication with the sequencer and memory. The set of application engines includes a streaming output engine with a storage engine, alignment circuit, and data buffer. The storage engine includes a memory opcode output and memory address output in communication with the memory. The storage engine employs these outputs to access the memory by supplying memory transaction opcodes and memory addresses. The alignment circuit receives data from other application engines in the set of application engines. In operation, the alignment circuit aligns data transfers from an application engine into a data word. The data buffer stores data words from the alignment circuit and transfers them to locations accessed in the memory by the storage engine.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: June 22, 2004
    Assignee: Juniper Networks, Inc.
    Inventors: Fred Gruner, Ricardo Ramirez
  • Patent number: 6745310
    Abstract: A memory system (100) and method of operating the same to provide real-time local and remote management of the memory system are described. Generally, the memory system (100) includes a memory matrix (110) having a number of memory devices (200) arranged in banks (205) each with a predetermined number of memory devices, a memory controller (210) coupled to the banks to access the devices, and a cache (215) having stored therein an allocation table for describing files and directories of data stored in the memory devices. The controller (210) is configured to provide management and status reporting of the memory matrix (110) independent of a data processing system coupled to the memory matrix through a network (120). Preferably, the controller (210) is configured to calculate statistics related to operation of the memory matrix (110) and to provide the statistics to an administrator.
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: June 1, 2004
    Inventors: Yan Chiew Chow, James R. Hsia
  • Publication number: 20040064645
    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches with the first and second private caches being coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches being coupled to the second shared cache and to one another via a second internal bus; method and apparatus for preventing hogging of ownership of a gateword stored in the main memory and which governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Publication number: 20040034741
    Abstract: In a multiprocessor data processing system including: a memory, first and second shared caches, a system bus coupling the memory and the shared caches, first, second, third and fourth processors having, respectively, first, second, third and fourth private caches with the first and second private caches being coupled to the first shared cache, and the third and fourth private caches being coupled to the second shared cache, gateword hogging is prevented by providing a gate control flag in each processor. Priority is established for a processor to next acquire ownership of the gate control word by: broadcasting a “set gate control flag” command to all processors such that setting the gate control flags establishes delays during which ownership of the gate control word will not be requested by another processor for predetermined periods established in each processor.
    Type: Application
    Filed: August 15, 2002
    Publication date: February 19, 2004
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Patent number: 6675277
    Abstract: A method and apparatus for using a memory adapter may allow a system to access the memory adapter. The memory adapter may comprise a list of entries for data within the memory adapter. Each entry may include an adapter memory segment offset, a segment length, a segment status, and a corresponding system memory address. The adapter memory segment offset may be the location of the segment within the memory adapter. A processor may access the adapter memory segment offsets through the system address space. The method and apparatus may be used to perform functions such as read adapter memory, write adapter memory, insert an adapter memory segment, remove an adapter memory segment, scan for an adapter memory segment, scan for a removable adapter memory segment, and potentially other functions.
    Type: Grant
    Filed: July 25, 2001
    Date of Patent: January 6, 2004
    Assignee: TNS Holdings, Inc.
    Inventors: Karlon K. West, Lynn P. West
  • Patent number: 6675261
    Abstract: A request, such as those embedded in URLs and XML documents, is assigned to a thread of execution in a server that is in communication with a data store. The thread of execution includes a thread local storage with a pointer to a cache object. The cache object maintains copies of data store entries frequently accessed by the assigned request. The cache object is accessed in response to data store access commands arising from the request. When a data store access command specifies a data store entry not found in the cache object, the server creates and loads a corresponding cache object entry. The cache object is not updated when other requests alter data store entries, and memory access commands arising from other requests cannot cause the cache object to be accessed. When the request causes the server to write data to the data store, the cache object also maintains a copy of the written data.
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: January 6, 2004
    Assignee: Oblix, Inc.
    Inventor: Michael J. Shandony
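The per-request cache object above maps naturally onto thread-local storage: each request's thread gets a private cache that loads data-store entries on miss, records the request's own writes, and is deliberately never updated when other requests alter the store. A minimal sketch, with illustrative names:

```python
import threading

_tls = threading.local()                # thread local storage per request thread
data_store = {"user:1": "alice"}        # stand-in for the backing data store

def get_cache():
    """Return the cache object private to the current request's thread."""
    if not hasattr(_tls, "cache"):
        _tls.cache = {}
    return _tls.cache

def read(key):
    cache = get_cache()
    if key not in cache:
        cache[key] = data_store[key]    # create and load the entry on a miss
    return cache[key]

def write(key, value):
    data_store[key] = value
    get_cache()[key] = value            # the request's own writes are cached too
```

Note the intentional staleness: after another request changes `data_store`, this thread's `read` still returns its cached copy, matching the abstract's statement that the cache object is not updated when other requests alter data store entries.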
  • Patent number: 6662272
    Abstract: A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
    Type: Grant
    Filed: September 29, 2001
    Date of Patent: December 9, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Sompong P. Olarig, Phillip M. Jones, John E. Jenne
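The resize step in the abstract above (growing one private partition by a fixed segment while shrinking another, so total cache size stays constant) can be sketched as follows. The segment size, partition names, and trigger conditions are illustrative assumptions, not taken from the patent.

```python
SEGMENT = 4  # cache lines moved per rebalance step (illustrative)

def rebalance(partitions, grow, shrink):
    """Grow one entity's private cache partition by one segment at the
    expense of another's, keeping the total cache size constant. The caller
    is assumed to have already detected the two predetermined conditions."""
    if partitions[shrink] >= SEGMENT:   # never shrink a partition below zero
        partitions[grow] += SEGMENT
        partitions[shrink] -= SEGMENT
    return partitions

parts = {"cpu": 32, "gpu": 32}          # two entities, equal initial partitions
rebalance(parts, grow="cpu", shrink="gpu")
```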