User Data Cache Patents (Class 711/126)
-
Patent number: 7107422
Abstract: A master dynamic aging value associated with the client is calculated responsive to an operator command to perform the global refresh. When receiving a request from a user associated with the client, a sub-dynamic aging value associated with a cached user security profile for the user is compared to the master dynamic aging value. If the sub-dynamic aging value is not equal to the master dynamic aging value, then the cached user security profile is refreshed.
Type: Grant
Filed: August 23, 2002
Date of Patent: September 12, 2006
Assignee: International Business Machines Corporation
Inventors: Michael R. Artobello, Getachew G. Birbo, Joyce Yuan-Sheng Hsiao, Sandra L. Sherrill, Andrew D. Tollerud, Jack Chiu-Chiu Yuan, James E. Zimmer
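The refresh check in the abstract can be sketched in a few lines of Python. This is an illustrative sketch only — the function and field names (`get_profile`, `aging_value`, `load_profile`) are hypothetical, not the patent's:

```python
def get_profile(user, cache, master_aging_value, load_profile):
    """Return the user's security profile, refreshing the cached copy
    when its sub-dynamic aging value differs from the master value."""
    entry = cache.get(user)
    if entry is None or entry["aging_value"] != master_aging_value:
        # Stale or missing: reload from the backing store and stamp it
        # with the current master dynamic aging value.
        entry = {"profile": load_profile(user),
                 "aging_value": master_aging_value}
        cache[user] = entry
    return entry["profile"]
```

Bumping the master aging value (the "global refresh") lazily invalidates every cached profile on its next use, instead of eagerly reloading all of them.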
-
Patent number: 7039672
Abstract: Systems and methods are provided for delivering content in a communications system. A communications system is provided that includes a plurality of terminals and a content director. The terminals are adapted to communicate via a communications network, and are each capable of requesting and thereafter receiving content via the communications network. The content director, on the other hand, is capable of receiving a request for content from one terminal. The content director can then push the content to the terminal based upon the request, and push the content to at least one other terminal before the other terminals request the content. Each terminal can have an associated user interest correlation with each other terminal. In such instances, the content director can push the content to other terminals based upon the user interest correlation of the other terminals with respect to the terminal requesting the content.
Type: Grant
Filed: April 8, 2003
Date of Patent: May 2, 2006
Assignee: Nokia Corporation
Inventors: Tao Wu, Sadhna Ahuja, Sudhir Sharan Dixit
-
Patent number: 7020750
Abstract: A hybrid system for updating cache including a first computer system coupled to a database accessible by a second computer system, said second computer system including a cache, a cache update controller for concurrently implementing a user defined cache update policy, including both notification based cache updates and periodic based cache updates, wherein said cache updates enforce data coherency between said database and said cache, and a graphical user interface for selecting between said notification based cache updates and said periodic based cache updates.
Type: Grant
Filed: September 17, 2002
Date of Patent: March 28, 2006
Assignee: Sun Microsystems, Inc.
Inventors: Pirasenna Thiyagaranjan, Krishnendu Chakraborty, Peter D. Stout, Xuesi Dong
-
Patent number: 7003566
Abstract: A method (and system) of predictive directional Web caching includes detecting a first document accessed by a user, and predicting a subsequent document which, with a highest degree of probability, is likely to be retrieved based on the first document accessed.
Type: Grant
Filed: June 29, 2001
Date of Patent: February 21, 2006
Assignee: International Business Machines Corporation
Inventors: Christopher Frank Codella, Marcos Nogueira Novaes
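One simple way to realize "predict the subsequent document with the highest probability" is a transition-count table. This is a minimal sketch under that assumption — the patent does not specify its prediction model, and the class and method names here are invented for illustration:

```python
from collections import defaultdict

class DirectionalPredictor:
    """Track document-to-document transitions and predict the most
    likely next document after a given one."""
    def __init__(self):
        # transitions[a][b] = number of times b was accessed right after a
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last = None

    def record_access(self, doc):
        if self.last is not None:
            self.transitions[self.last][doc] += 1
        self.last = doc

    def predict_next(self, doc):
        """Return the most frequent follower of `doc`, or None."""
        followers = self.transitions.get(doc)
        if not followers:
            return None
        return max(followers, key=followers.get)
```

A proxy using this predictor would prefetch `predict_next(current_doc)` into its cache while the user reads the current document.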
-
Patent number: 6985999
Abstract: A microprocessor prioritizes cache line fill requests according to request type rather than issuing the requests in program order. In one embodiment, the request types include blocking accesses at highest priority, non-blocking page table walk accesses at medium priority, and non-blocking store allocation and prefetch accesses at lowest priority. The microprocessor takes advantage of the fact that the core logic clock frequency is a multiple of the processor bus clock frequency, typically by an order of magnitude. The microprocessor accumulates the various requests generated by the core logic each core clock cycle during a bus clock cycle. The microprocessor waits until the last core clock cycle before the next bus clock cycle to prioritize the accumulated requests and issues the highest priority request on the processor bus.
Type: Grant
Filed: October 18, 2002
Date of Patent: January 10, 2006
Assignee: IP-First, LLC
Inventors: G. Glenn Henry, Rodney E. Hooker
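The accumulate-then-prioritize scheme above can be modeled in software. This sketch uses the three priority levels named in the abstract; the queue API itself is illustrative, not the patent's:

```python
# Priority levels from the abstract: lower number = higher priority.
BLOCKING = 0           # blocking accesses
PAGE_TABLE_WALK = 1    # non-blocking page table walk accesses
STORE_OR_PREFETCH = 2  # non-blocking store allocations and prefetches

class FillRequestQueue:
    """Accumulate cache-line fill requests generated during a bus clock
    cycle and issue the highest-priority one at the cycle boundary."""
    def __init__(self):
        self.pending = []

    def add(self, priority, address):
        # Called each core clock cycle as requests arrive.
        self.pending.append((priority, address))

    def issue(self):
        """Called on the last core clock before the next bus clock:
        pick the highest-priority accumulated request."""
        if not self.pending:
            return None
        # Stable sort keeps program order within a priority level.
        self.pending.sort(key=lambda r: r[0])
        return self.pending.pop(0)
```

The point of waiting until the last core clock is that a late-arriving blocking access can still win over earlier, lower-priority prefetches accumulated in the same bus cycle.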
-
Patent number: 6981112
Abstract: An apparatus, program product and method utilize a cache payback parameter for selectively and dynamically disabling caching for potentially cacheable operations performed in connection with a memory. The cache payback parameter is tracked concurrently with the performance of a plurality of cacheable operations on a memory, and is used to determine the effectiveness, or potential payback, of caching in a particular implementation or environment. The selective disabling of caching, in turn, is applied at least to future cacheable operations based upon a determination that the cache payback parameter meets a caching disable threshold.
Type: Grant
Filed: August 26, 2002
Date of Patent: December 27, 2005
Assignee: International Business Machines Corporation
Inventors: Armin Harold Christofferson, Leon Edward Gregg, James Lawrence Tilbury
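A running hit rate is one plausible payback parameter. The sketch below gates future caching on that metric; the specific metric, warm-up rule, and names are assumptions for illustration, not the patent's definitions:

```python
class PaybackGatedCache:
    """Stop caching new values when a running hit-rate ('payback')
    estimate falls below a disable threshold."""
    def __init__(self, disable_threshold=0.2):
        self.disable_threshold = disable_threshold
        self.hits = 0
        self.accesses = 0
        self.store = {}

    def payback(self):
        return self.hits / self.accesses if self.accesses else 1.0

    def get(self, key, load):
        self.accesses += 1
        if key in self.store:
            self.hits += 1
            return self.store[key]
        value = load(key)
        # Keep caching during warm-up, then only while payback holds up.
        if self.accesses < 10 or self.payback() >= self.disable_threshold:
            self.store[key] = value
        return value
```

Under a stream of never-repeating keys the payback collapses and the cache stops admitting entries, avoiding pointless insertion work.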
-
Patent number: 6981103
Abstract: A cache memory control apparatus (20) that may control a cache memory (100) has been disclosed. Cache memory control apparatus (20) may include a control section (21). When a cache miss occurs, a refill request for a line (118) of data may be executed. In response to the refill request, control section (21) may perform control to make a valid bit (103) and a TAG portion (102), corresponding to line (118) of data to be refilled, invalid. This may occur while accessing the address corresponding to the cache miss from an external memory (200). In this way, if a reset occurs during the refill operation, a cache memory control apparatus (20) may recover a cache memory to a state before resetting in a reduced time period. Upon completion of the refill operation, valid bit (103) and TAG portion (102) may be updated.
Type: Grant
Filed: June 10, 2002
Date of Patent: December 27, 2005
Assignee: NEC Electronics Corporation
Inventor: Satoko Nakamura
-
Patent number: 6968569
Abstract: A data broadcast receiving apparatus includes a storage controlling unit and a reproduction controlling unit. The storage controlling unit stores data modules among a plurality of data modules included in received broadcast data into a module storing unit, and also stores storage information for each of the plurality of data modules into a storage information storing unit, the storage information showing the presence or absence of the data module in the module storing unit, a reason for the absence of the data module, and the like. When the user selects a data module as a reproduction target, the reproduction controlling unit judges whether the data module is stored in the module storing unit, based on storage information of the data module. If the data module is not stored, the reproduction controlling unit displays a message informing the user that the data module is not stored and the reason why.
Type: Grant
Filed: February 5, 2001
Date of Patent: November 22, 2005
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Akihiro Tanaka, Naoya Takao, Koichiro Yamaguchi, Rikiya Masuda
-
Patent number: 6934811
Abstract: A cache is provided which has low power dissipation. An execution engine generates a sequential fetch signal indicating that data required at a next cycle is stored at a next location of just previously used data. A line reuse buffer is provided which stores data that is stored in a data memory of the cache and is in the same cache line as data just previously used by the execution engine. In the case where the sequential fetch signal is received and data required according to a memory request signal is stored in the same cache line of the data memory as the just previously used data, a cache controller fetches data from the line reuse buffer and controls the cache so as to stay in a stand-by mode.
Type: Grant
Filed: March 7, 2002
Date of Patent: August 23, 2005
Assignee: Samsung Electronics, Co., Ltd.
Inventor: Sang-Yeun Cho
-
Patent number: 6931494
Abstract: Systems and methods that provide directional prefetching are provided. In one embodiment, a method may include one or more of the following: storing a first block and a second block in a prefetch buffer; associating a first block access with a backward prefetch scheme; associating a second block access with a forward prefetch scheme; and, if the first block is accessed before the second block, then performing a backward prefetch with respect to the first block.
Type: Grant
Filed: November 14, 2002
Date of Patent: August 16, 2005
Assignee: Broadcom Corporation
Inventors: Kimming So, Jin Chin Wang
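Directional prefetching as described above can be sketched by associating each buffered block with a direction and walking the address space that way on access. The block numbering, default direction, and function names are illustrative assumptions:

```python
def prefetch_candidates(block, direction, count=2):
    """Return the blocks a directional prefetcher would fetch next.
    Blocks are integer indices; 'backward' walks toward lower
    addresses, 'forward' toward higher ones."""
    step = -1 if direction == "backward" else 1
    return [block + step * i for i in range(1, count + 1)]

def on_access(block, direction_of_block, count=2):
    """Look up the prefetch scheme associated with the accessed block
    and compute what to prefetch (defaulting to forward)."""
    direction = direction_of_block.get(block, "forward")
    return prefetch_candidates(block, direction, count)
```

Associating the lower-addressed block with a backward scheme pays off for streams that walk memory downward (e.g., stack-like access patterns).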
-
Patent number: 6918008
Abstract: A cache is configured to select a cache block for eviction in response to detecting a cache miss. The cache transmits the address of the cache block as a write transaction on an interface to the cache, and the cache captures the address from the interface and reads the cache block from the cache memory in response to the address. The read may proceed similarly to other reads in the cache, detecting a hit in the cache storage location from which the cache block is being evicted. The write transaction is initiated before the corresponding data is available for transfer, and the use of the bus bandwidth to initiate the transaction provides an open access time into the cache for reading the evicted cache block.
Type: Grant
Filed: December 30, 2003
Date of Patent: July 12, 2005
Assignee: Broadcom Corporation
Inventor: Joseph B. Rowlands
-
Patent number: 6912636
Abstract: Systems, methods, apparatus and software can utilize an indirect write driver to prevent possible error conditions associated with using a third-party copy operation directed at a storage resource. A data transport mechanism such as a data restore application initiates a third-party copy operation from a data source to a data cache. The indirect write driver monitors write commands as they pass to a storage resource driver. If a command is found to be an indirect write command, e.g., a command designed to complete the movement of data from the data cache to the storage resource, it is handled accordingly. Normal write commands are passed on to the storage resource driver. By completing the data move operation using normal storage management channels, e.g., the operating system, file system, and/or volume manager, error conditions can be avoided.
Type: Grant
Filed: July 30, 2004
Date of Patent: June 28, 2005
Assignee: Veritas Operating Corporation
Inventors: Graham Bromley, James P. Ohr
-
Patent number: 6879998
Abstract: A method for increasing transfer quality between a content requestor and a content source on a content distribution system. The method involves determining transfer quality between the requestor and various content sources. The determination is made from the requestor's perspective. After determining transfer qualities for the various content sources, the requestor provides the transfer qualities to a selector on the content distribution system. The selector uses the determined transfer qualities to select a content source to supply the requestor.
Type: Grant
Filed: September 18, 2000
Date of Patent: April 12, 2005
Assignee: Aerocast.com, Inc.
Inventors: Nathan F. Raciborski, Mark R. Thompson
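The selector step reduces to picking the source with the best requestor-measured quality. A minimal sketch, assuming quality is a single "higher is better" score per source (the patent does not define the metric, and the names here are hypothetical):

```python
def select_source(transfer_qualities):
    """Pick the content source with the best requestor-measured
    transfer quality. `transfer_qualities` maps source name to a
    score where higher means better."""
    if not transfer_qualities:
        raise ValueError("no candidate content sources")
    return max(transfer_qualities, key=transfer_qualities.get)
```

In practice the score could combine throughput, latency, and loss measured from the requestor's side; the selection logic stays the same.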
-
Patent number: 6865645
Abstract: A method of supporting programs that include instructions that modify subsequent instructions in a multi-processor system with a central processing unit including an execution unit, an instruction unit, and a plurality of caches including a separate instruction and operand cache.
Type: Grant
Filed: October 2, 2000
Date of Patent: March 8, 2005
Assignee: International Business Machines Corporation
Inventors: Chung-Lung Kevin Shum, Dean G. Bair, Charles F. Webb, Mark A. Check, John S. Liptay
-
Patent number: 6857052
Abstract: Any of the processors CPU1 to CPUn turns the miss hit detecting signal line 5 to a low level upon detecting occurrence of a miss hit. In response, the mode switching controller 2 is notified of the occurrence of a miss hit and switches each of the processors CPU1 to CPUn to the synchronous operation mode. Also, each command from each of the processors CPU1 to CPUn is appended with a tag. When each of the processors CPU1 to CPUn feeds the synchronization detecting signal line 6 with identical tags designated as a synchronization point, the operation of the processors can be switched to the non-synchronous operation mode by the mode switching controller 2.
Type: Grant
Filed: February 21, 2002
Date of Patent: February 15, 2005
Assignee: Semiconductor Technology Academic Research Center
Inventor: Hideharu Amano
-
Patent number: 6848035
Abstract: A semiconductor device is designed to hide refresh operations even when the data width of a cache line differs from that of the external data bus in a memory that uses a cache memory and a DRAM with a plurality of banks. The semiconductor device includes a plurality of memory banks BANK0 to BANK127, each having a plurality of memory cells, as well as a cache memory CACHEMEM used to retain information read from the plurality of memory banks. The cache memory CACHEMEM includes a plurality of entries, each having a data memory DATAMEM and a tag memory TAGMEM. The data memory DATAMEM has a plurality of sub lines DATA0 to DATA3 and the tag memory TAGMEM has a plurality of valid bits V0 to V3 and a plurality of dirty bits D0 to D3.
Type: Grant
Filed: June 10, 2002
Date of Patent: January 25, 2005
Assignee: Renesas Technology Corp.
Inventors: Satoru Akiyama, Yusuke Kanno, Takao Watanabe
-
Publication number: 20040260907
Abstract: An instruction virtual address space includes only virtual addresses corresponding to physical addresses of address areas of a physical address space storing pages of only instructions. A data virtual address space includes only virtual addresses corresponding to physical addresses of address areas of the physical address space storing pages of only data. The instruction and data virtual address spaces use duplicated virtual addresses. Instruction and data address translation units translate virtual addresses of the instruction and data virtual address spaces into physical addresses of the single physical address space. Data access efficiency and instruction execution speed can be improved.
Type: Application
Filed: April 2, 2004
Publication date: December 23, 2004
Applicant: Sony Corporation
Inventor: Koji Ozaki
-
Patent number: 6795888
Abstract: The invention includes a system and method for logging network server data such as data relating to client requests. In accordance with the invention, end users of a server program can create one or more logging modules, each having a predefined interface that is defined by the server program. In response to client requests, the server program calls logging modules that have been designated by a system administrator, and passes potential log data to the logging modules. In response to receiving the potential log data, each logging module makes its own decision regarding (a) whether to make a log entry, (b) which data should be included in the log entry, and (c) the format that is used for recording the log data. In this way, end users are not constrained to any given logging format or set of logging criteria.
Type: Grant
Filed: December 1, 1997
Date of Patent: September 21, 2004
Assignee: Microsoft Corporation
Inventors: Johnson R. Apacible, Kim Stebbens, Terence Kwan
-
Patent number: 6775735
Abstract: Embodiments are provided in which first and second instructions are executed in parallel. A first and a second address are generated according to the first and second instructions, respectively. The first address is used to select a data cache line of a data cache RAM and a first data bank from the data cache line. The second address is used to select a second data bank from the data cache. The first and second data banks are outputted in parallel from the data cache RAM. An instruction pair testing circuit tests the probability of the first and second instructions accessing a same data cache line of the data cache RAM. If it is unlikely that the two instructions will access a same data cache line, the second instruction is refetched and re-executed, and the second data bank is not used.
Type: Grant
Filed: October 11, 2001
Date of Patent: August 10, 2004
Assignee: International Business Machines Corporation
Inventor: David Arnold Luick
-
Publication number: 20040153608
Abstract: Instead of a single central, global unit integrated in one module that processes all configuration requests, there is now a plurality of hierarchically arranged (tree-structured) active units which can assume this task.
Type: Application
Filed: January 24, 2004
Publication date: August 5, 2004
Inventors: Martin Vorbach, Robert Munch
-
Patent number: 6772290
Abstract: Systems, methods, apparatus and software can utilize an indirect write driver to prevent possible error conditions associated with using a third-party copy operation directed at a storage resource. A data transport mechanism such as a data restore application initiates a third-party copy operation from a data source to a data cache. The indirect write driver monitors write commands as they pass to a storage resource driver. If a command is found to be an indirect write command, e.g., a command designed to complete the movement of data from the data cache to the storage resource, it is handled accordingly. Normal write commands are passed on to the storage resource driver. By completing the data move operation using normal storage management channels, e.g., the operating system, file system, and/or volume manager, error conditions can be avoided.
Type: Grant
Filed: August 20, 2002
Date of Patent: August 3, 2004
Assignee: Veritas Operating Corporation
Inventors: Graham Bromley, James P. Ohr
-
Patent number: 6766427
Abstract: A method and apparatus for loading data from memory to a cache is provided. The method and apparatus provide substantially improved performance, especially in conjunction with large data arrays for which each element of data is processed completely at once and need not be later accessed. A technique is provided to allow a data element to be loaded directly to a cache location corresponding to the local variable used to process that data element, thereby avoiding copying of the data element to multiple cache locations. In conjunction with the use of non-caching stores of processed results back into main memory, this technique completely avoids cache thrashing within the framework of a conventional microprocessor architecture. This technique is ideally suited for high-performance processing of streaming multimedia data including video processing.
Type: Grant
Filed: June 30, 2000
Date of Patent: July 20, 2004
Assignee: ATI International SRL
Inventors: Avery Wang, Richard W. Webb
-
Patent number: 6766313
Abstract: A system for caching and retrieving information comprises a server having an information repository, a cache manager, and a server software module. The information repository receives and stores data that is to be served by the server, where such data is regularly updated from at least one external source. The server software module performs server functions including responding to at least some requests for a document from a requestor by retrieving data currently stored in the repository, rendering the document to include the retrieved data, and forwarding the rendered document to the requestor. The cache manager requests a document from the server software module, receives the requested document as rendered by the server software module to include the retrieved data currently stored in the repository, and caches the received document on a regular basis.
Type: Grant
Filed: July 12, 2000
Date of Patent: July 20, 2004
Assignee: Microsoft Corporation
Inventor: Paul K. Kromann
-
Patent number: 6763421
Abstract: Embodiments are provided in which first and second instructions are executed in parallel. A first and a second address are generated according to the first and second instructions, respectively. The first address is used to select a data cache line of a data cache RAM and a first data bank from the data cache line. The second address is used to select a second data bank from the data cache. The first and second data banks are outputted in parallel from the data cache RAM. An instruction pair testing circuit tests the probability of the first and second instructions accessing a same data cache line of the data cache RAM. If it is unlikely that the two instructions will access a same data cache line, the second instruction is refetched and re-executed, and the second data bank is not used.
Type: Grant
Filed: October 11, 2001
Date of Patent: July 13, 2004
Assignee: International Business Machines Corporation
Inventor: David Arnold Luick
-
Patent number: 6742033
Abstract: A system, method and computer program product that pre-caches or downloads information from internet sites that the system expects the user to request. The system schedules the pre-caching to occur at the most appropriate time of day in order to increase the likelihood that the most recent information is provided to the user in a timely manner. Actual usage is monitored to adjust to changing user habits, conserve resources at both the server and client ends, and prioritize information against interrupted downloads and exhausted or limited cache or memory space. For users that use the telephone to dial in to the internet, the system and method pre-caches content in a manner which decreases the likelihood that the pre-caching process will interfere with the user's use of the telephone for other purposes.
Type: Grant
Filed: June 12, 2000
Date of Patent: May 25, 2004
Assignee: Gateway, Inc.
Inventors: Kim C. Smith, Peter E. Martinez
-
Publication number: 20040044849
Abstract: A data storage system having a non-IC-based memory and an IC-based non-volatile memory for storing user data. In one example, the IC-based non-volatile memory is implemented with MRAM. Examples of non-IC-based memory include e.g. hard disks, tape, and compact disks. In some examples, the IC-based memory is utilized to store user data from an information device in order to increase the speed and/or the effective storage capacity of the data storage system. In some examples, a portion of a standard size block of user data can be stored on spaces of the non-IC-based memory that are deficient for storing a standard size block, with the remaining portion being stored in IC-based memory. Portions of a file of user data may be non-volatilely stored in the IC-based memory in order to more quickly provide the file to an information device. For example, data of a file that, if stored in a location on the non-IC-based media, would significantly increase the retrieval time of the file can be stored in the IC-based media.
Type: Application
Filed: August 29, 2002
Publication date: March 4, 2004
Inventors: Ronald W. Stence, John P. Hansen, David A. Hayner
-
Publication number: 20040044866
Abstract: An apparatus and method provide persistent data during a user session on a networked computer system. A global data cache is divided into three sections: trusted, protected, and unprotected. An authorization mechanism stores and retrieves authorization data from the trusted section of the global data cache. A common session manager stores and retrieves data from the protected and unprotected sections of the global data cache. Using the authorization mechanism, software applications may verify that a user is authorized without prompting the user for authorization information. Using the common session manager, software applications may store and retrieve data to and from the global data cache, allowing the sharing of data during a user session. After the user session terminates, the data in the global data cache corresponding to the user session is invalidated.
Type: Application
Filed: August 29, 2002
Publication date: March 4, 2004
Applicant: International Business Machines Corporation
Inventor: James Casazza
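The three-section session cache with end-of-session invalidation can be sketched as a nested dictionary. The class and method names are illustrative; the abstract does not specify an API:

```python
class GlobalDataCache:
    """Per-session store split into trusted, protected, and unprotected
    sections; in the described scheme only the authorization mechanism
    touches the trusted section."""
    SECTIONS = ("trusted", "protected", "unprotected")

    def __init__(self):
        self.data = {}   # session_id -> {section -> {key -> value}}

    def put(self, session_id, section, key, value):
        if section not in self.SECTIONS:
            raise ValueError("unknown section: " + section)
        session = self.data.setdefault(
            session_id, {s: {} for s in self.SECTIONS})
        session[section][key] = value

    def get(self, session_id, section, key):
        return self.data.get(session_id, {}).get(section, {}).get(key)

    def end_session(self, session_id):
        # Invalidate everything cached for the terminated session.
        self.data.pop(session_id, None)
```

Keeping the authorization token in the trusted section lets applications check `get(sid, "trusted", "auth_token")` instead of re-prompting the user.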
-
Patent number: 6701412
Abstract: One embodiment of the present invention provides a system that facilitates sampling a cache in a computer system, wherein the computer system has multiple central processing units (CPUs), including a measured CPU containing the cache to be sampled, and a sampling CPU that gathers the sample. During operation, the measured CPU receives an interrupt generated by the sampling CPU, wherein the interrupt identifies a portion of the cache to be sampled. In response to receiving this interrupt, the measured CPU copies data from the identified portion of the cache into a shared memory buffer that is accessible by both the measured CPU and the sampling CPU. Next, the measured CPU notifies the sampling CPU that the shared memory buffer contains the data, thereby allowing the sampling CPU to gather and process the data.
Type: Grant
Filed: January 27, 2003
Date of Patent: March 2, 2004
Assignee: Sun Microsystems, Inc.
Inventors: Richard J. McDougall, Denis J. Sheahan
-
Patent number: 6684298
Abstract: A cache and TLB layout and design leverage repeater insertion to provide dynamic low-cost configurability trading off size and speed on a per-application-phase basis. A configuration management algorithm dynamically detects phase changes and reacts to an application's hit and miss intolerance in order to improve memory hierarchy performance while taking energy consumption into consideration.
Type: Grant
Filed: November 9, 2000
Date of Patent: January 27, 2004
Assignee: University of Rochester
Inventors: Sandhya Dwarkadas, Rajeev Balasubramonian, Alper Buyuktosnoglu, David Albonesi
-
Patent number: 6678806
Abstract: An apparatus and method for processing portions of data words are described. Data words read from or written to memory are temporarily stored in a data register. Each data word is configured to include multiple portions which can be processed separately. For example, each full data word can be 32 bits long and can include four byte-long data portions. The data word is temporarily stored in a data register for processing. An address register associated with the data register temporarily stores an address word associated with the portion of the data word to be processed. The address word includes an address pointer for the portion of the data word as well as a tag. The tag includes information used to extract the portion of the data word from the data word for processing or to insert the portion of the data word into the data word after processing. The information in the tag can include the size of the data word portion being processed and its location within the data word.
Type: Grant
Filed: August 23, 2000
Date of Patent: January 13, 2004
Assignee: ChipWrights Design, Inc.
Inventor: John L. Redford
-
Publication number: 20030221068
Abstract: A method and system for RAM data caching of information maintained in an SQL type of database. A subset of the SQL data, in the form of a lite cache, is extracted and stored in RAM. The lite cache includes a record ID and one variable, although the SQL database includes a plurality of variables associated with the record ID. Queries are directed first to the cache and, if the cache exists, responses are returned from the cache rather than from the SQL database. Multiple lite caches are created, with some initialized upon server load and others initialized upon the first query. Updates to the lite cache are provided on a periodic basis, or upon change in the underlying data.
Type: Application
Filed: May 23, 2002
Publication date: November 27, 2003
Inventors: Michael Tsuji, Anil Mallidi, Hok Yee Wong
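A lite cache of one variable per record ID, fronting a fuller store, can be sketched as below. The dictionary stands in for the SQL database, and `LiteCache` and its methods are invented names for illustration:

```python
class LiteCache:
    """RAM cache holding one variable per record ID, answering queries
    before they reach the underlying (SQL-like) store."""
    def __init__(self, query_db, variable):
        self.query_db = query_db   # record_id -> full row (dict)
        self.variable = variable   # the single cached column
        self.cache = {}

    def get(self, record_id):
        if record_id in self.cache:          # answered from RAM
            return self.cache[record_id]
        value = self.query_db(record_id)[self.variable]
        self.cache[record_id] = value        # initialize on first query
        return value

    def refresh(self, record_id):
        # Periodic or change-driven update from the underlying data.
        self.cache[record_id] = self.query_db(record_id)[self.variable]
```

Caching just one column per record keeps the RAM footprint far smaller than mirroring whole rows, which is the point of calling it "lite".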
-
Publication number: 20030204685
Abstract: A method and apparatus within a processing system is provided for separating access to an instruction memory and a data memory to allow concurrent access by different pipeline stages within the processing system to both the instruction memory and the data memory. An instruction memory interface is provided to access the instruction memory. A data memory interface is provided to access the data memory. Redirection logic is provided to determine whether an access by the data memory interface should be directed to the instruction memory interface utilizing either the address of the access, or the type of instruction that is executing. If the access is redirected, the access to the instruction memory is performed by the instruction memory interface, and data retrieved by the instruction memory interface is then provided to the data memory interface, and in turn to the pipeline stage within the processing system that requested the data memory interface to access the data.
Type: Application
Filed: April 26, 2002
Publication date: October 30, 2003
Applicant: MIPS Technologies, Inc.
Inventors: Gideon D. Intrater, Anders M. Jagd, Ryan C. Kinter
-
Publication number: 20030191902
Abstract: A system (10) uses shared resources (44, 54) to perform conventional load/store operations, to preload custom data from external sources, and to efficiently manage error handling in a cache (42, 52, 48). A reload buffer (44, 54) is used in conjunction with a cache (42, 52) operating in a write-through mode to permit lower level memory in the system to operate in a more efficient write-back mode. A control signal (70) selectively enables the pushing of data into the cache (42, 52, 48) from an external source. The control signal utilizes one or more attribute fields that provide functional information and define memory characteristics.
Type: Application
Filed: April 5, 2002
Publication date: October 9, 2003
Inventors: Michael D. Snyder, Magnus K. Bruce, Jamshed Jalal, Thomas A. Hoy
-
Patent number: 6622207
Abstract: A cache memory is updated with audio samples in a manner which minimizes system bus bandwidth and cache size requirements. The end of a loop is used to truncate a normal cache request to exactly what is needed. A channel with a loopEnd in a request will be given higher priority in a two-stage priority scheme. The requested data is conformed by trimming to the minimum data block size of the bus, such as a doubleword for a PCI bus. The audio data written into the cache can be shifted on a byte-wise basis, and unneeded bytes can be blocked and not written. Request data for which a bus request has been issued can be preempted by request data attaining a higher priority before a bus grant is received.
Type: Grant
Filed: September 5, 2000
Date of Patent: September 16, 2003
Assignee: Creative Technology Ltd.
Inventor: David P. Rossum
-
Patent number: 6598122
Abstract: A redundantly threaded processor is disclosed having an Active Load Address Buffer (“ALAB”) that ensures efficient replication of data values retrieved from the data cache. In one embodiment, the processor comprises a data cache, instruction execution circuitry, and an ALAB. The instruction execution circuitry executes instructions in two or more redundant threads. The threads include at least one load instruction that causes the instruction execution circuitry to retrieve data from the data cache. The ALAB includes entries that are associated with data values that a leading thread has retrieved. The entries include a counter field that is incremented when the instruction execution circuitry retrieves the associated data value for the leading thread, and that is decremented when the associated data value is retrieved for the trailing thread. The entries preferably also include an invalidation field which may be set to prevent further incrementing of the counter field.
Type: Grant
Filed: April 19, 2001
Date of Patent: July 22, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Shubhendu S. Mukherjee, Steven K. Reinhardt
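The ALAB counter discipline above can be modeled directly: increment on a leading-thread load, decrement on the matching trailing-thread load, and treat a zero counter as "safe to evict". This is a software sketch of the hardware idea; the class and function names are invented:

```python
class ALABEntry:
    """Counter-based entry tracking loads replicated between a leading
    and a trailing redundant thread."""
    def __init__(self):
        self.counter = 0
        self.invalidated = False

    def leading_load(self):
        """Leading thread retrieved the value. Returns False if the
        invalidation field blocks further increments."""
        if self.invalidated:
            return False
        self.counter += 1
        return True

    def trailing_load(self):
        """Trailing thread consumed the replicated value."""
        self.counter -= 1
        return self.counter

def cache_block_evictable(entry):
    # The block may be replaced only once the trailing thread has
    # consumed every value the leading thread loaded from it.
    return entry.counter == 0
```

Pinning the block while the counter is nonzero guarantees both threads observe identical load values, which is what makes redundant-thread fault detection sound.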
-
Publication number: 20030126368
Abstract: One embodiment of a distributed memory module cache includes tag memory and associated logic implemented at the memory controller end of a memory channel. The memory controller is coupled to at least one memory module by way of a point-to-point interface. The data cache and associated logic are located in one or more buffer components on each of the memory modules. The tag look-ups are performed in parallel with the memory module decodes. This improves latency for cache hits without penalizing the latency for cache misses.
Type: Application
Filed: December 31, 2001
Publication date: July 3, 2003
Inventor: Howard S. David
-
Patent number: 6581147
Abstract: Interface circuitry is disclosed for interfacing between an operational circuit, a microprocessor, for example, and data storage circuitry, for example direct Rambus memory. The interface circuitry comprises buffer circuitry coupled between the operational circuitry and the data storage circuitry which is arranged to store data access requests received from the operational circuitry and to store data retrieved from the data storage circuitry. The buffer circuitry comprises an output for supplying the data access request signals to the data storage circuitry and for supplying the stored data from the data storage circuitry to the operational circuitry. In use, the number of stored data access request signals decreases as the amount of stored data from the data storage circuitry increases. Similarly, the number of stored data access request signals increases as the amount of stored data from the data storage circuitry decreases.
Type: Grant
Filed: January 11, 2000
Date of Patent: June 17, 2003
Assignee: STMicroelectronics Limited
Inventor: Fabrizio Rovati
-
Patent number: 6574714
Abstract: A method of maintaining coherency in a cache hierarchy of a processing unit of a computer system, wherein the upper level (L1) cache includes a split instruction/data cache. In one implementation, the L1 data cache is store-through, and each processing unit has a lower level (L2) cache. When the lower level cache receives a cache operation requiring invalidation of a program instruction in the L1 instruction cache (i.e., a store operation or a snooped kill), the L2 cache sends an invalidation transaction (e.g., icbi) to the instruction cache. The L2 cache is fully inclusive of both instructions and data. In another implementation, the L1 data cache is write-back, and a store address queue in the processor core is used to continually propagate pipelined address sequences to the lower levels of the memory hierarchy, i.e., to an L2 cache or, if there is no L2 cache, then to the system bus. If there is no L2 cache, then the cache operations may be snooped directly against the L1 instruction cache.
Type: Grant
Filed: February 12, 2001
Date of Patent: June 3, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, John Steven Dodson, Guy Lynn Guthrie
-
Patent number: 6574711
Abstract: It is an object of the present invention to provide a semiconductor integrated circuit having a chip layout that reduces line length to achieve faster processing. A cache comprises a TAG memory module and a cache data memory module. The cache data memory module is divided into first and second cache data memory modules which are disposed on both sides of the TAG memory module, and input/output circuits of a data TLB are opposed to the input/output circuit of the TAG memory module and the input/output circuits of the first and second cache data memory modules across a bus area to reduce the line length to achieve faster processing.
Type: Grant
Filed: December 22, 2000
Date of Patent: June 3, 2003
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventor: Masaya Sumita
-
Patent number: 6570885
Abstract: A method defines and handles segments in messages to place pauses and interruptions within the communication of a message between transmitted segments of the message. A port cache of the destination node of each transmitted message obtains a message control block (MCB) which is used to control the reception of inbound segments within each message sent or received by the node. Each MCB stays in the cache only while its message is being communicated to the port and may be cast out between segments in its message when there is no empty cache entry to receive an MCB for a current message being communicated but not having its MCB in the cache.
Type: Grant
Filed: November 12, 1999
Date of Patent: May 27, 2003
Assignee: International Business Machines Corporation
Inventor: Thomas Anthony Gregg
-
Patent number: 6567863
Abstract: A coupler for a programmable logic controller connecting to an Ethernet network under the TCP/IP protocol in order to communicate with various equipment. The coupler uses two disk partitions in a flash memory, one acting as a disk for the real time operating system and the other acting as a user disk. The two disks are accessed through the FTP protocol on TCP/IP, and the user disk space is managed by an HTTP server.
Type: Grant
Filed: August 7, 2000
Date of Patent: May 20, 2003
Assignee: Schneider Electric Industries SA
Inventors: Alain Lafuite, Jean-Jacques Genin
-
Patent number: 6553473
Abstract: An apparatus and method within a pipeline microprocessor are provided for allocating a cache line within an internal data cache upon a write miss to the data cache. The apparatus and method allow data to be written to the allocated cache line before fill data for the allocated cache line is received from external memory over a system bus. The apparatus includes write allocate logic and a write buffer. The write allocate logic allocates the cache line within the data cache, it stores data corresponding to the write miss within the allocated cache line, and queues a speculative write command directing an external bus to store the data to the external memory in the event that transfer of the fill data is interrupted. The speculative write command is stored in the write buffer and, in the event of an interruption such as a bus snoop to the allocated cache line, the write buffer issues the speculative write command to the system bus, thereby writing the data to external memory.
Type: Grant
Filed: March 30, 2000
Date of Patent: April 22, 2003
Assignee: IP-First, LLC
Inventors: Darius D. Gaskins, Rodney E. Hooker
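A minimal sketch of this write-allocate flow, under the assumption of a single-level cache modeled as a dictionary: the write miss stores into the allocated line immediately and queues a speculative write; a snoop before the fill completes drains that write to memory, while a normal fill completion discards it. All names here are illustrative.

```python
# Hypothetical model: allocate-on-write-miss with a queued speculative
# write that is only issued if the fill is interrupted (e.g. by a snoop).
class WriteAllocateCache:
    def __init__(self):
        self.lines = {}         # allocated cache lines: addr -> data
        self.write_buffer = []  # queued speculative writes (addr, data)
        self.memory = {}        # stands in for external memory

    def write_miss(self, addr, data):
        self.lines[addr] = data                  # write before fill arrives
        self.write_buffer.append((addr, data))   # queue speculative write

    def fill_complete(self, addr):
        # fill finished normally: the speculative write is unnecessary
        self.write_buffer = [(a, d) for a, d in self.write_buffer if a != addr]

    def snoop(self, addr):
        # interruption: issue the speculative write so memory stays correct
        for a, d in list(self.write_buffer):
            if a == addr:
                self.memory[a] = d
                self.write_buffer.remove((a, d))
        self.lines.pop(addr, None)               # give up the line
```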
-
Patent number: 6553477
Abstract: A microprocessor is equipped with an address translation mechanism for performing dynamic address translation from a virtual address to a physical address on a page-by-page basis. The microprocessor includes a large-capacity low-associativity address translation buffer, and is capable of avoiding limitations imposed on a TLB entry lock function, while reducing the overhead for address translation. The address translation mechanism comprises an address translation buffer having an entry lock function, and control logic for controlling the operation of the address translation buffer. The address translation buffer includes a lower-level buffer organized as a lower-level hierarchy of the address translation buffer and having no entry lock function, and a higher-level buffer organized as a higher-level hierarchy of the address translation buffer and having an entry lock function, the higher-level buffer having higher associativity than the associativity of the lower-level buffer.
Type: Grant
Filed: November 6, 2000
Date of Patent: April 22, 2003
Assignee: Fujitsu Limited
Inventors: Murali V. Krishna, Vipul Parikh, Michael Butler, Gene Shen, Masahito Kubo
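The two-level organization described here can be sketched as a small lockable upper buffer backed by a large lower buffer with no lock bits. Sizes, the dictionary representation, and the eviction policy are all assumptions; real TLBs are set-associative hardware arrays.

```python
# Hypothetical two-level TLB: every translation lives in the large
# lower buffer (no lock function); locked entries are also pinned in a
# small higher-level buffer that supports entry locking.
class TwoLevelTLB:
    def __init__(self, upper_size=2):
        self.upper = {}           # vpn -> (ppn, locked); small, lockable
        self.lower = {}           # vpn -> ppn; large, no lock function
        self.upper_size = upper_size

    def insert(self, vpn, ppn, lock=False):
        self.lower[vpn] = ppn                    # always cached below
        if lock:
            if len(self.upper) >= self.upper_size:
                unlocked = [v for v, (_, l) in self.upper.items() if not l]
                if not unlocked:
                    return False                 # all upper entries pinned
                del self.upper[unlocked[0]]      # evict an unlocked victim
            self.upper[vpn] = (ppn, True)        # pin the translation
        return True

    def translate(self, vpn):
        if vpn in self.upper:                    # lockable level first
            return self.upper[vpn][0]
        return self.lower.get(vpn)               # large backing level
```

Even when the small lockable level is full of pinned entries, unlocked translations still hit in the large lower buffer, which reflects the abstract's point about avoiding lock-function limitations.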
-
Publication number: 20030074530
Abstract: A load/store unit comprising a load/store buffer and a memory access buffer. The load/store buffer is coupled to a data cache and is configured to store information on memory operations. The memory access buffer is configured to store addresses and data associated with the requested addresses for at least one of the most recent memory operations. The memory access buffer, upon detecting a load memory operation, outputs data associated with the load memory operation's requested address. If the requested address is not stored within the memory access buffer, the memory access buffer is configured to store the load memory operation's requested address and associated data when it becomes available from the data cache. Similarly, a store memory operation's requested address and associated data are also stored.
Type: Application
Filed: December 11, 1997
Publication date: April 17, 2003
Inventors: Rupaka Mahalingaiah, Amit Gupta
-
Patent number: 6549990
Abstract: A processor employing a dependency link file. Upon detection of a load which hits a store for which store data is not available, the processor allocates an entry within the dependency link file for the load. The entry stores a load identifier identifying the load and a store data identifier identifying a source of the store data. The dependency link file monitors results generated by execution units within the processor to detect the store data being provided. The dependency link file then causes the store data to be forwarded as the load data in response to detecting that the store data is provided. The latency from store data being provided to the load data being forwarded may thereby be minimized. Particularly, the load data may be forwarded without requiring that the load memory operation be scheduled.
Type: Grant
Filed: May 21, 2001
Date of Patent: April 15, 2003
Assignee: Advanced Micro Devices, Inc.
Inventors: William Alexander Hughes, Derrick R. Meyer
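The forwarding mechanism in this abstract can be sketched as a small table of (load identifier, store data identifier) pairs that watches a result broadcast. The class and method names are invented for illustration; in hardware this is a CAM-like structure snooping result buses.

```python
# Hypothetical dependency link file: a load that hits an older store whose
# data is not yet ready records the pair; when the store data appears on a
# result broadcast, it is forwarded as the load result without
# rescheduling the load operation.
class DependencyLinkFile:
    def __init__(self):
        self.entries = []    # pending (load_id, store_data_id) pairs
        self.forwarded = {}  # load_id -> forwarded data

    def load_hits_pending_store(self, load_id, store_data_id):
        # allocate an entry linking the load to the store data source
        self.entries.append((load_id, store_data_id))

    def result_broadcast(self, store_data_id, data):
        # monitor execution results; forward matching store data as load data
        for load_id, sid in list(self.entries):
            if sid == store_data_id:
                self.forwarded[load_id] = data
                self.entries.remove((load_id, sid))
```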
-
Patent number: 6549985
Abstract: A data cache in an in-order single-issue microprocessor that detects cache misses generated by instructions behind a stalled instruction in the microprocessor pipeline and issues memory requests on the processor bus for the missing data so as to overlap with resolution of the stalled instruction, which may also be a cache miss, is provided. The data cache has pipeline stages that parallel portions of the main pipeline in the microprocessor. The data cache employs replay buffers to save the state, i.e., instructions and associated data addresses, of the parallel data cache stages so that instructions above the stalled instruction can continue to proceed down through the data cache and access the cache memory to generate cache misses. The data cache restores the data cache pipeline stages upon detecting that the stall will terminate. The data cache also detects TLB misses generated by instructions subsequent to the stalled instruction and overlaps page table walks with the stall resolution.
Type: Grant
Filed: March 30, 2000
Date of Patent: April 15, 2003
Assignee: IP-First, LLC
Inventors: Darius D. Gaskins, G. Glenn Henry, Rodney E. Hooker
-
Publication number: 20030046494
Abstract: Method and apparatus for conditioning program control flow on the presence of requested data in a cache memory. In a data processing system that includes a cache memory and a system memory coupled to a processor, in various embodiments program control flow is conditionally changed based on whether the data referenced in an instruction are present in the cache memory. When an instruction that includes a data reference and an alternate control path is executed, the control flow of the program is changed in accordance with the alternate control path if the referenced data are not present in the cache memory. The alternate control path is either explicitly specified or implicit in the instruction.
Type: Application
Filed: August 29, 2001
Publication date: March 6, 2003
Inventor: Michael L. Ziegler
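In software terms, the instruction described here behaves like a load that branches on a miss instead of stalling. The sketch below models that behavior with plain functions; the cache-as-dictionary model and the "prefetch and defer" alternate path are invented for illustration.

```python
# Hypothetical model of a load-or-branch instruction: on a hit the data
# is returned; on a miss the alternate control path is taken instead of
# stalling the pipeline.
def load_or_branch(cache, addr, alternate_path):
    """Return (data, None) on a cache hit, or (None, result of the
    alternate control path) on a miss."""
    if addr in cache:
        return cache[addr], None
    return None, alternate_path(addr)

def prefetch_and_defer(addr):
    # one plausible alternate path: note the miss and do other work;
    # a real implementation might also start a prefetch here
    return ("deferred", addr)
```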
-
Patent number: 6516387
Abstract: A set-associative cache having a selectively configurable split/unified mode. The cache may comprise a memory and control logic. The memory may be configured for storing data buffered by the cache. The control logic may be configured for controlling the writing and reading of data to and from the memory. The control logic may organise the memory as a plurality of storage sets, each set being mapped to a respective plurality of external addresses such that data from any of said respective external addresses maps to that set. The control logic may comprise allocation logic for associating a plurality of ways uniquely with each set, the plurality of ways representing respective plural locations for storing data mapped to that set. In the unified mode, the control logic may assign a first plurality of ways to each set to define a single cache region.
Type: Grant
Filed: July 30, 2001
Date of Patent: February 4, 2003
Assignee: LSI Logic Corporation
Inventor: Stefan Auracher
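One way to picture the split/unified way allocation: in unified mode every way of a set belongs to one region, while in split mode the same ways are partitioned between regions. The 4-way geometry and the instruction/data split below are assumptions for illustration only.

```python
# Hypothetical way allocation for a 4-way set: unified mode gives one
# region all the ways; split mode partitions them between two regions.
WAYS = 4  # assumed associativity

def ways_for(region, split_mode):
    """Return the list of way indices a region may allocate into."""
    if not split_mode:
        return list(range(WAYS))       # unified: one region, all ways
    half = WAYS // 2
    if region == "instr":
        return list(range(half))       # split: ways 0..half-1
    return list(range(half, WAYS))     # split: ways half..WAYS-1
```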
-
Patent number: 6513104
Abstract: An apparatus and method within a pipeline microprocessor are provided for allocating a cache line within an internal data cache upon a write miss to the data cache. The apparatus and method allow data to be written to the allocated cache line before fill data for the allocated cache line is received from external memory over a system bus. The apparatus includes write allocate logic and a fill controller. The write allocate logic stores first bytes within the cache line corresponding to the write, and updates remaining bytes of the cache line from memory. The fill controller is coupled to the write allocate logic. The fill controller issues a fill command over the system bus directing the external memory to provide the remaining bytes, where the fill command is issued in parallel with storage of the first bytes within the cache line.
Type: Grant
Filed: March 29, 2000
Date of Patent: January 28, 2003
Assignee: IP-First, LLC
Inventor: Darius D. Gaskins
-
Patent number: 6513099
Abstract: A cache for AGP based computer systems is provided. The graphics cache is included as part of a memory bridge between a processor, a system memory and a graphics processor. A cache controller within the memory bridge detects requests by the processor to store graphics data in the system memory. The cache controller stores the data for these requests in the graphics cache and in the system memory. The cache controller searches the graphics cache each time it receives a request from the graphics controller. If a cache hit occurs, the cache controller returns the data stored in the graphics cache. Otherwise the request is performed using the system memory. In this way the graphics cache reduces the traffic between the system memory and the memory bridge, overcoming an important performance bottleneck for many graphics systems.
Type: Grant
Filed: December 22, 1998
Date of Patent: January 28, 2003
Assignee: Silicon Graphics Incorporated
Inventors: Jeffery M. Smith, Daniel J. Yau
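The bridge-resident graphics cache described here is essentially a write-through cache that absorbs the graphics controller's reads. The sketch below models that flow; the class name, dictionary model, and traffic counter are all invented for illustration.

```python
# Hypothetical model of the memory-bridge graphics cache: processor
# writes of graphics data go to both the cache and system memory
# (write-through); graphics reads are served from the cache on a hit,
# saving a trip to system memory.
class GraphicsBridgeCache:
    def __init__(self):
        self.cache = {}          # graphics cache inside the bridge
        self.system_memory = {}  # stands in for system memory
        self.memory_reads = 0    # system-memory reads the cache didn't save

    def cpu_write_graphics(self, addr, data):
        self.cache[addr] = data           # fill the graphics cache
        self.system_memory[addr] = data   # and write through to memory

    def gpu_read(self, addr):
        if addr in self.cache:
            return self.cache[addr]       # hit: no memory traffic
        self.memory_reads += 1            # miss: go to system memory
        return self.system_memory.get(addr)
```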