Multiport Cache Patents (Class 711/131)
  • Patent number: 6647465
    Abstract: A CPU system having a built-in cache memory system in which a write-only port for coherence control from the common-system side and an access port from the CPU side are isolated through a multi-port configuration of the cache memory system inside the CPU. The common memory on the common side likewise uses a two-port structure, connected to the CPU systems in a broadcast-type configuration.
    Type: Grant
    Filed: July 27, 2001
    Date of Patent: November 11, 2003
    Assignee: Hitachi, Ltd.
    Inventors: Masatsugu Kametani, Kazuhiro Umekita, Terunobu Funatsu
  • Patent number: 6629206
    Abstract: A Harvard-architecture computer system includes a processor, an instruction cache, a data cache, and a write buffer. The caches are both set-associative in that they each have plural memories; both caches perform parallel reads by default. In a parallel read, all cache-memory locations of the selected cache corresponding to the set ID and word position bits of a requested read address are accessed in parallel while it is determined whether or not one of these locations has a tag matching the tag portion of the requested read address. If there is a “hit” (match), then an output multiplexer selects the appropriate cache memory for providing its data to the processor. The parallel read thus achieves faster reads, but expends extra power in accessing non-matching sets. A cache receiving a read request while the processor is stalled performs a serial read instead of a parallel read. In a serial read, the tag match is performed before the data is accessed.
    Type: Grant
    Filed: December 31, 1999
    Date of Patent: September 30, 2003
    Assignee: Koninklijke Philips Electronics N.V.
    Inventor: Mark W. Johnson
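The parallel-versus-serial read trade-off described in this abstract can be illustrated with a minimal Python sketch (a simplified model with invented names, not the patented circuit): a parallel read touches every way's data array while the tags are compared, while a serial read matches tags first and touches only the hitting way.

```python
class SetAssocCache:
    """Toy set-associative cache modeling data-array accesses as a power proxy."""

    def __init__(self, num_sets=4, num_ways=2):
        self.num_sets = num_sets
        self.num_ways = num_ways
        # Each way holds one (tag, data) line per set; None means invalid.
        self.ways = [[None] * num_sets for _ in range(num_ways)]
        self.array_accesses = 0  # counts data-array reads (proxy for power)

    def _split(self, addr):
        return addr % self.num_sets, addr // self.num_sets  # (set id, tag)

    def fill(self, addr, data, way):
        s, tag = self._split(addr)
        self.ways[way][s] = (tag, data)

    def read_parallel(self, addr):
        """Access all ways while matching tags: faster, but spends power on misses."""
        s, tag = self._split(addr)
        candidates = []
        for way in range(self.num_ways):
            self.array_accesses += 1        # every way's data array is read
            candidates.append(self.ways[way][s])
        for line in candidates:
            if line and line[0] == tag:
                return line[1]
        return None                          # miss

    def read_serial(self, addr):
        """Match tags first, then read only the hitting way: slower, less power."""
        s, tag = self._split(addr)
        for way in range(self.num_ways):
            line = self.ways[way][s]
            if line and line[0] == tag:
                self.array_accesses += 1    # only the matching way is read
                return line[1]
        return None
```

With two ways, a parallel hit costs two array accesses where a serial hit costs one, which is exactly the power argument the abstract makes for serializing reads while the processor is stalled anyway.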
  • Patent number: 6622219
    Abstract: A shared write back buffer for storing data from a data cache to be written back to memory. The shared write back buffer includes a plurality of ports, each port being associated with one of a plurality of processing units. All processing units in the plurality share the write back buffer. The shared write back buffer further includes a data register for storing data provided through the input ports, an address register for storing addresses associated with the data provided through the input ports, and a single output port for providing the data to the associated addresses in memory.
    Type: Grant
    Filed: April 26, 2002
    Date of Patent: September 16, 2003
    Assignee: Sun Microsystems, Inc.
    Inventors: Marc Tremblay, Andre Kowalczyk, Anup S. Tirumala
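The shared write-back buffer described above, with one input port per processing unit but a single output port to memory, can be sketched in a few lines of Python (an illustrative model with invented names, not the patented design):

```python
from collections import deque


class SharedWriteBackBuffer:
    """Per-unit input ports feed shared address/data registers;
    a single output port drains one entry to memory at a time."""

    def __init__(self, num_units):
        self.num_ports = num_units
        self.entries = deque()   # shared (address, data) register pairs
        self.memory = {}         # stand-in for main memory

    def write(self, unit, address, data):
        assert 0 <= unit < self.num_ports, "unknown input port"
        self.entries.append((address, data))

    def drain_one(self):
        """Single output port: at most one write-back per call."""
        if self.entries:
            address, data = self.entries.popleft()
            self.memory[address] = data
```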
  • Publication number: 20030163643
    Abstract: A system and method are disclosed which enable resolution of conflicts between memory access requests in a manner that allows for efficient usage of cache memory. In one embodiment, a circuit comprises a cache memory structure comprising multiple banks, and a plurality of access ports communicatively coupled to such cache memory structure. The circuit further comprises circuitry operable to determine a bank conflict for pending access requests for the cache memory structure, and circuitry operable to issue at least one access request to the cache memory structure out of the order in which it was requested, responsive to determination of a bank conflict.
    Type: Application
    Filed: February 22, 2002
    Publication date: August 28, 2003
    Inventors: Reid James Riedlinger, Dean A. Mulla, Tom Grutkowski
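The bank-conflict handling this publication describes, issuing requests out of their arrival order when two of them target the same bank, can be sketched as a one-cycle scheduler (a minimal model, assuming addresses map to banks by simple modulo; none of this comes from the patent text):

```python
def schedule(requests, num_banks=4):
    """Pick a conflict-free subset of requests to issue this cycle,
    possibly out of the order requested (at most one access per bank)."""
    busy = set()
    issued, deferred = [], []
    for addr in requests:           # requests arrive in program order
        bank = addr % num_banks     # assumed bank-mapping function
        if bank in busy:
            deferred.append(addr)   # bank conflict: hold for a later cycle
        else:
            busy.add(bank)
            issued.append(addr)
    return issued, deferred
```

Note that a younger request (address 1 below) issues ahead of an older conflicting one (address 4), which is the out-of-order behavior the abstract claims.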
  • Patent number: 6609174
    Abstract: Processing equipment with embedded MRAMs, and a method of fabricating, including a data processing device fabricated on a semiconductor chip with MRAM cells fabricated on the chip to form one to all of the memories on the chip. Also included is a dual bank memory in communication with the data processing device and circuitry coupled to the data processing device and the dual bank memory for providing simultaneous read access to the dual bank memory.
    Type: Grant
    Filed: October 19, 1999
    Date of Patent: August 19, 2003
    Assignee: Motorola, Inc.
    Inventor: Peter K. Naji
  • Publication number: 20030154347
    Abstract: A method for reducing power consumption within a processing architecture, the processing architecture including a processor and a memory device, the memory device having a memory cell, the processor having a processing element, the processor configured to read from the memory device and write to the memory device is described. The method comprises configuring the memory with logical processing circuits internal to the memory device which access the memory cell, performing logical operations to data within the memory cell utilizing the logical processing circuits within the memory device, and performing mathematical operations within the processing element of the processor. The method is embodied through a logic memory which significantly reduces power consumption of digital signal processors, microprocessors, micro-controllers or other computation engines in electronic systems.
    Type: Application
    Filed: July 10, 2002
    Publication date: August 14, 2003
    Inventors: Wei Ma, Jie Liang, Kah Yong Lee, Kiak Wei Khoo
  • Patent number: 6604174
    Abstract: The present invention provides a performance based system and method for dynamic allocation of a unified multiport cache. A multiport cache system is disclosed that allows multiple single-cycle lookups through a multiport tag and multiple single-cycle cache accesses from a multiport cache. Therefore, multiple processes, which could be processors, tasks, or threads, can access the cache during any cycle. Moreover, the ways of the cache can be allocated to the different processes and then dynamically reallocated based on performance. Most preferably, a relational cache miss percentage is used to reallocate the ways, but other metrics may also be used.
    Type: Grant
    Filed: November 10, 2000
    Date of Patent: August 5, 2003
    Assignee: International Business Machines Corporation
    Inventors: Alvar A. Dean, Kenneth J. Goodnow, Stephen W. Mahin, Wilbur D. Pricer, Dana J. Thygesen, Sebastian T. Ventrone
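The miss-driven way reallocation sketched in this abstract can be modeled as a largest-remainder apportionment: each process keeps at least one way, and the spare ways are handed out in proportion to each process's share of the misses. This is an illustrative sketch under those assumptions, not the patent's actual metric or algorithm:

```python
def reallocate_ways(miss_counts, total_ways):
    """miss_counts: {process: misses}. Every process keeps >= 1 way;
    spare ways are split in proportion to miss share (largest remainders)."""
    procs = list(miss_counts)
    alloc = {p: 1 for p in procs}            # baseline: one way each
    spare = total_ways - len(procs)
    total = sum(miss_counts.values()) or 1   # avoid divide-by-zero
    quotas = {p: spare * miss_counts[p] / total for p in procs}
    for p in procs:
        alloc[p] += int(quotas[p])           # whole-way share of the misses
    leftover = spare - sum(int(quotas[p]) for p in procs)
    ranked = sorted(procs, key=lambda p: quotas[p] - int(quotas[p]),
                    reverse=True)            # largest fractional parts first
    for i in range(leftover):
        alloc[ranked[i % len(ranked)]] += 1
    return alloc
```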
  • Patent number: 6604176
    Abstract: A memory system having a common memory region, such memory region including a pair of control ports and a common DATA port. A switching network is provided having a pair of information ports, for: coupling information having a control portion and a DATA portion between a first one of such pair of information ports and a first one of the control ports and the DATA port, through a first switch section; and coupling information having a control portion and a DATA portion between a second one of the information ports and a second one of the control ports and the DATA port, through a second switch section. A pair of clocks is included. A first one of such clocks is fed to operate the first switch section in coupling the information through such first section, and a second one of such clocks is fed to operate the second switch section in coupling the information through such second section.
    Type: Grant
    Filed: December 21, 2000
    Date of Patent: August 5, 2003
    Assignee: EMC Corporation
    Inventors: Christopher S. MacLellan, John K. Walton
  • Publication number: 20030115419
    Abstract: A method and apparatus for combining cost effectiveness of data signal ports sharing a common memory storage device with reliable data signal communication of data signal ports each having a dedicated memory storage device. In one embodiment, data signals are received at a number of data signal ports of a data signal communication platform. A data signal bandwidth capability of a memory storage device of the data communication platform is determined. Once the data signal bandwidth capability of the memory storage device is determined, the memory storage device is segmented to improve utilization of the data signal bandwidth capability. As a result, cost effectiveness of data signal ports sharing a common memory storage device and reliability of data signal communication of data signal ports each having a dedicated memory storage device is combined.
    Type: Application
    Filed: January 29, 2003
    Publication date: June 19, 2003
    Inventor: Erik Andersen
  • Publication number: 20030115418
    Abstract: Various embodiments of systems and methods for performing accumulation operations on block operands are disclosed. In one embodiment, an apparatus may include a memory, a functional unit that performs an operation on block operands, and a cache accumulator. The cache accumulator is configured to provide a block operand to the functional unit and to store the block result generated by the functional unit. The cache accumulator is configured to provide the block operand to the functional unit in response to an instruction that uses an address in the memory to identify the block operand. Thus, the cache accumulator behaves as both a cache and an accumulator.
    Type: Application
    Filed: December 19, 2001
    Publication date: June 19, 2003
    Inventor: Fay Chong
  • Publication number: 20030088736
    Abstract: A method of caching data. In one embodiment, the method is comprised of filling a cache with incoming data to a first level. The filling is at a rate relative to said incoming data. The method is further comprised of increasing the cache from the first level to an optimum level. Outputting of the incoming data is enabled subsequent to the cache attaining the first level. The method is further comprised of adjusting the level of said cache concurrently with incoming data and data outputting. This adjusting prevents the level of the cache from exceeding a maximum cache level and prevents the level of the cache from decreasing below the first level, such that smooth and continuously-streaming outputting of said data is provided.
    Type: Application
    Filed: November 7, 2001
    Publication date: May 8, 2003
    Inventors: Hai-Fang Yun, Leonard McCrigler
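The fill-then-stream behavior this abstract describes, where output is enabled only after a first fill level is reached and the level is then held between that floor and a maximum, can be sketched as follows (a simplified model with invented names, not the patented method):

```python
class StreamBuffer:
    """Fill to a start level before output begins; then keep the level
    between that floor and a maximum so streaming output never starves."""

    def __init__(self, start_level, max_level):
        self.start_level, self.max_level = start_level, max_level
        self.data = []
        self.output_enabled = False

    def fill(self, chunk):
        if len(self.data) < self.max_level:    # never exceed the maximum
            self.data.append(chunk)
        if len(self.data) >= self.start_level:
            self.output_enabled = True         # first level attained

    def pop(self):
        # Hold output while below (or at) the first level, so the buffer
        # never drains past its floor mid-stream.
        if self.output_enabled and len(self.data) > self.start_level:
            return self.data.pop(0)
        return None
```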
  • Patent number: 6557078
    Abstract: The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that are hits per clock, support one access that misses the L1 cache every clock, and support one instruction access every clock. The responses are interspersed in the pipeline, so that conflicts in the queue are minimized. Non-conflicting accesses are not inhibited; conflicting accesses, however, are held up until the conflict clears. The inventive cache provides out-of-order support after the retirement stage of a pipeline.
    Type: Grant
    Filed: February 21, 2000
    Date of Patent: April 29, 2003
    Assignees: Hewlett Packard Development Company, L.P., Intel Corporation
    Inventors: Dean A. Mulla, Terry L Lyon, Reid James Riedlinger, Thomas Grutkowski
  • Patent number: 6549984
    Abstract: An apparatus and method are disclosed for providing concurrent access to a first storage area and a second storage area. According to one embodiment, a device includes the first storage area. The device and the second storage area are both coupled to a first bus and are coupled together by a dedicated second bus. According to one embodiment, a snoop operation on the first storage area may be performed concurrently with a snoop operation on the second storage area.
    Type: Grant
    Filed: December 17, 1997
    Date of Patent: April 15, 2003
    Assignee: Intel Corporation
    Inventors: Dan W. Patterson, Stephen H. Hunt
  • Patent number: 6546461
    Abstract: A FIFO memory device includes an embedded memory array having a write port and a read port and a quad-port cache memory device. The cache memory device has a unidirectional data input port, a unidirectional data output port, a first embedded memory port that is electrically coupled to the write port and a second embedded memory port that is electrically coupled to the read port. A data input register, a retransmit register, a data output register and a multiplexer are provided within the cache memory device. The data input register is responsive to a write address and has a data input electrically coupled to the data input port and a data output electrically coupled to the first embedded memory port. The retransmit register is responsive to a retransmit address and has a data input electrically coupled to the data input port.
    Type: Grant
    Filed: November 22, 2000
    Date of Patent: April 8, 2003
    Assignee: Integrated Device Technology, Inc.
    Inventors: Mario Au, Li-Yuan Chen
  • Publication number: 20030061447
    Abstract: A memory system architecture/interconnect topology that includes at least one point-to-point link between a master, and at least one memory subsystem. The memory subsystem includes a buffer device coupled to a plurality of memory devices. The memory system may be upgraded through dedicated point-to-point links and corresponding memory subsystems.
    Type: Application
    Filed: October 15, 2002
    Publication date: March 27, 2003
    Inventors: Richard E. Perego, Stefanos Sidiropoulos, Ely Tsern
  • Patent number: 6539457
    Abstract: The inventive cache manages address conflicts and maintains program order without using a store buffer. The cache utilizes an issue algorithm to insure that accesses issued in the same clock are actually issued in an order that is consistent with program order. This is enabled by performing address comparisons prior to insertion of the accesses into the queue. Additionally, when accesses are separated by one or more clocks, address comparisons are performed, and accesses that would get data from the cache memory array before a prior update has actually updated the cache memory in the array are canceled. This provides a guarantee that program order is maintained, as an access is not allowed to complete until it is assured that the most recent data will be received upon access of the array.
    Type: Grant
    Filed: February 21, 2000
    Date of Patent: March 25, 2003
    Assignee: Hewlett-Packard Company
    Inventors: Dean A. Mulla, Reid James Riedlinger, Thomas Grutkowski
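The pre-insertion address comparison this abstract describes, canceling any access that would read the array before an earlier same-address update lands, can be sketched in Python (an illustrative model; operation names and tuple shapes are invented here):

```python
def filter_issue(group):
    """group: list of ('load' | 'store', addr) in program order, all
    candidates for the same clock. A load may not issue alongside an
    earlier store to the same address; it is canceled and must retry
    after the store has updated the cache array."""
    issued, canceled = [], []
    pending_stores = set()
    for op, addr in group:
        if op == "store":
            pending_stores.add(addr)
            issued.append((op, addr))
        elif addr in pending_stores:
            canceled.append((op, addr))   # would return stale data
        else:
            issued.append((op, addr))
    return issued, canceled
```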
  • Publication number: 20030056061
    Abstract: In accordance with an embodiment of the invention, a semiconductor memory includes a number of data ports each having a predetermined number of data bits. The memory further has a number of memory macros each including at least one memory array having rows and columns of memory cells. Each memory macro further includes a plurality of internal data connection points directly connected to external terminals to transfer data to or from the at least one memory array. The internal data connection points correspond in number to the number of the data ports, and the internal data connection points in the memory macros together form the plurality of data ports.
    Type: Application
    Filed: August 20, 2002
    Publication date: March 20, 2003
    Applicant: Alpine Microsystems, Inc.
    Inventor: David L. Sherman
  • Patent number: 6535963
    Abstract: A memory system usable for a multi-casting switch or similar device includes memory which can be dynamically allocated among two or more output ports. The memory includes a plurality of severally addressable subarrays, with the subarrays being dynamically associated with various output ports as the need arises. When a received frame is to be output from two or more output ports in a multi-casting fashion, the frame is written in parallel to two of the subarrays associated respectively with the output ports. Frames are written in the subarrays in the order in which they are to be read out, providing a certain degree of inherent queuing of the stored frames and reducing or eliminating the need for pointers to achieve the desired output order.
    Type: Grant
    Filed: June 30, 1999
    Date of Patent: March 18, 2003
    Assignee: Cisco Technology, Inc.
    Inventor: James P. Rivers
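The parallel multicast write with inherent FIFO ordering described above can be sketched as follows (a minimal model with invented names, not the patented memory design):

```python
class MulticastMemory:
    """A frame multicast to several output ports is written in parallel
    to the subarrays bound to those ports; each subarray then drains in
    write order, giving inherent queuing without pointer structures."""

    def __init__(self, num_subarrays):
        self.subarrays = [[] for _ in range(num_subarrays)]

    def write_frame(self, frame, out_ports):
        for port in out_ports:          # parallel write: one copy per port
            self.subarrays[port].append(frame)

    def read_frame(self, port):
        """Frames come out in the order they were written."""
        return self.subarrays[port].pop(0) if self.subarrays[port] else None
```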
  • Patent number: 6532524
    Abstract: An apparatus comprising a first compare circuit, a second compare circuit and a memory. The first compare circuit may be configured to present a first match signal in response to a first address and a second address. The second compare circuit may be configured to present a second match signal in response to the first match signal, a first write enable signal and a second write enable signal. The memory may also be configured to present the first and second write enable signals. In one example, the memory may be configured to store and retrieve data with zero waiting cycles in response to the second match signal.
    Type: Grant
    Filed: March 30, 2000
    Date of Patent: March 11, 2003
    Assignee: Cypress Semiconductor Corp.
    Inventors: Junfei Fan, Jeffery Scott Hunt
  • Publication number: 20030023823
    Abstract: A dual-port memory controller having a memory controller and at least one delaying unit. Since the memory controller executes a data access by selecting one processor, the memory controller outputs at least one request disapproval signal indicating that it cannot accept data access requests from other processors. The delaying unit includes a clock oscillator, and flip-flops receiving the clock signal and delaying the request disapproval signal. The delaying unit varies the delay time by varying the clock frequency of the clock oscillator. The memory controller executes data access to the same memory area after a predetermined period of time elapses, so processors can read/write stabilized data.
    Type: Application
    Filed: June 18, 2002
    Publication date: January 30, 2003
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Hyo-seung Woo, Hak-seo Oh
  • Patent number: 6507894
    Abstract: In a processor having a cache memory, a mechanism is provided which efficiently realizes pre-fetch/post-store for the cache memory of a large quantity of data arrayed with stride (i.e., a regular increment) on a main memory. A cache memory is provided with plural ports. A first input/output port is connected to a cache memory controller of a processor core. A second input/output port is connected to a pre-fetch/post-store circuit outside the processor core. A portion of a memory area of the cache memory is associated with a specified physical space area of a main memory device in a one-for-one correspondence and is designed as a pre-fetch/post-store cache area dedicated to the specified physical space area. For the pre-fetch/post-store cache area, the pre-fetch/post-store circuit transfers data directly with the main memory device without interfering with the cache memory controller within the processor core.
    Type: Grant
    Filed: December 10, 1999
    Date of Patent: January 14, 2003
    Assignee: NEC Corporation
    Inventor: Noritaka Hoshi
  • Patent number: 6507892
    Abstract: The inventive cache processes multiple access requests simultaneously by using separate queuing structures for data and instructions. The inventive cache uses ordering mechanisms that guarantee program order when there are address conflicts and architectural ordering requirements. The queuing structures are snoopable by other processors of a multiprocessor system. The inventive cache has a tag access bypass around the queuing structures, to allow for speculative checking by other levels of cache and for lower latency if the queues are empty. The inventive cache allows for at least four accesses to be processed simultaneously. The results of the access can be sent to multiple consumers. The multiported nature of the inventive cache allows for a very high bandwidth to be processed through this cache with a low latency.
    Type: Grant
    Filed: February 21, 2000
    Date of Patent: January 14, 2003
    Assignees: Hewlett-Packard Company, Intel Corporation
    Inventors: Dean A. Mulla, Terry L Lyon, Reid James Riedlinger, Tom Grutkowski
  • Publication number: 20030005231
    Abstract: An access detector detects an access type of an access to one of a plurality of serial ports interfacing to serial storage devices. The access is intended to one of a plurality of parallel channels interfacing to parallel storage devices via task file registers of the parallel channels. A mapping circuit maps the serial ports to the parallel channels. A state machine emulates a response from the one of the parallel channels based on the access type and the mapped serial ports.
    Type: Application
    Filed: June 29, 2001
    Publication date: January 2, 2003
    Inventors: Eng Hun Ooi, Thien Ern Ooi, Chai Huat Gan
  • Publication number: 20020184447
    Abstract: The invention relates to a multiport-RAM memory device, comprising a RAM memory unit (1), a number of serial/parallel converters (5, 6, 7) for converting serial signals into parallel signals, and a parallel/serial converter (10). Said multiport-RAM memory device further comprises a control unit (11) and two timeslot allocation devices (8, 9), whereby a number of connections may be emulated using the simple RAM memory unit (1). Furthermore, a power controller (12) can significantly reduce the power demand.
    Type: Application
    Filed: June 14, 2002
    Publication date: December 5, 2002
    Inventor: Heinrich Moeller
  • Patent number: 6480973
    Abstract: In a NUMA architecture, processors in the same CPU module with a processor opening a spin gate tend to have preferential access to a spin gate in memory when attempting to close the spin gate. This “unfair” memory access to the desired spin gate can result in starvation of processors from other CPU modules. This problem is solved by “balking” or delaying a specified period of time before attempting to close a spin gate whenever either one of the processors in the same CPU module just opened the desired spin gate, or when a processor in another CPU module is spinning trying to close the spin gate. Each processor detects when it is spinning on a spin gate. It then transmits that information to the processors in other CPU modules, allowing them to balk when opening spin gates.
    Type: Grant
    Filed: September 30, 1999
    Date of Patent: November 12, 2002
    Assignee: Bull Information Systems Inc.
    Inventors: William A. Shelly, David A. Egolf, Wayne R. Buzby
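The balking behavior this abstract describes, delaying before contending for a spin gate when a local sibling just opened it or a remote module is already spinning, can be sketched with an ordinary lock (an illustrative model only; the balk predicate and delay are invented stand-ins for the patent's inter-module signaling):

```python
import threading
import time


def try_close_gate(gate_lock, should_balk, balk_delay=0.001):
    """Attempt to close (acquire) a spin gate. Balk first when a processor
    in this CPU module just opened the gate, or a processor in another
    module is spinning on it, giving starved remote modules a fair shot."""
    if should_balk():
        time.sleep(balk_delay)      # yield the memory system briefly
    return gate_lock.acquire(blocking=False)
```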
  • Publication number: 20020156966
    Abstract: A method is provided for using a dual port RAM to share data between microprocessors at high speed. By using status indicator flags, microprocessors are able to determine whether the data in memory is current and whether or not it has been utilized.
    Type: Application
    Filed: April 20, 2001
    Publication date: October 24, 2002
    Inventors: Alan R. Ward, Haitao Lin, Michael R. Lindsay, Michael D. Hesse
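The status-flag handshake described in this publication can be sketched as a single-word mailbox (a minimal model; the flag protocol shown here is an assumed illustration, not the patent's exact scheme): the writer sets a freshness flag the reader clears, so each side knows whether the word is current and whether it has been consumed.

```python
class Mailbox:
    """Dual-port-RAM-style shared word with a status indicator flag."""

    def __init__(self):
        self.word = None
        self.fresh = False          # set by the writer, cleared by the reader

    def write(self, value):
        if self.fresh:
            return False            # previous value not yet utilized
        self.word, self.fresh = value, True
        return True

    def read(self):
        if not self.fresh:
            return None             # data is stale / already consumed
        self.fresh = False
        return self.word
```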
  • Patent number: 6457087
    Abstract: The system and method for operating a cache-coherent shared-memory multiprocessing system is disclosed. The system includes a number of devices including processors, a main memory, and I/O devices. Each device is connected by means of a dedicated point-to-point connection or channel to a flow control unit (FCU). The FCU controls the exchange of data between each device in the system by providing a communication path between two devices connected to the FCU. The FCU includes a snoop signal path for processing transactions affecting cacheable memory and a network of signal paths that are used to transfer data between devices. Each signal path can operate concurrently thereby providing the system with the capability of processing multiple data transactions simultaneously.
    Type: Grant
    Filed: May 12, 2000
    Date of Patent: September 24, 2002
    Assignee: Conexant Systems, Inc.
    Inventor: Daniel D. Fu
  • Patent number: 6457102
    Abstract: Storing data in a cache memory includes providing a first mechanism for allowing exclusive access to a first portion of the cache memory and providing a second mechanism for allowing exclusive access to a second portion of the cache memory, where exclusive access to the first portion is independent of exclusive access to the second portion. The first and second mechanisms may be software locks. Allowing exclusive access may also include providing a first data structure in the first portion of the cache memory and providing a second data structure in the second portion of the cache memory, where accessing the first portion includes accessing the first data structure and accessing the second portion includes accessing the second data structure. The data structures may be doubly linked ring lists of blocks of data, and the blocks may correspond to a track on a disk drive. The technique described herein may be generalized to any number of portions.
    Type: Grant
    Filed: November 5, 1999
    Date of Patent: September 24, 2002
    Assignee: EMC Corporation
    Inventors: Daniel Lambright, Adi Ofer, Natan Vishlitzky, Yuval Ofek
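The independent per-portion locking this abstract describes can be sketched with two software locks guarding two cache partitions (an illustrative model; dicts stand in for the doubly linked ring lists, and all names are invented):

```python
import threading


class PartitionedCache:
    """Two cache portions, each guarded by its own lock, so exclusive
    access to one portion never blocks access to the other."""

    def __init__(self):
        self.locks = [threading.Lock(), threading.Lock()]
        self.portions = [{}, {}]    # stand-ins for the linked ring lists

    def put(self, portion, key, block):
        with self.locks[portion]:   # only this portion is held exclusively
            self.portions[portion][key] = block

    def get(self, portion, key):
        with self.locks[portion]:
            return self.portions[portion].get(key)
```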
  • Patent number: 6449714
    Abstract: Each of plural rows in an aligned Instruction cache (AIC) contains a plurality of aligned sectors, each sector having space for a block of sequentially-addressed instructions in an executing program. A “fetch history table” (FHT) contains FHT sets of FHT entries for specifying execution sequences of the sectors in associated AIC rows. Each FHT entry in a FHT set specifies an AIC row and a sector sequence arrangement to be outputted from that row. In this manner, each FHT entry can associate itself with any row in the AIC and is capable of specifying any output order among the sectors in its associated row. Unique fields are selected in each instruction address for locating an associated FHT set, and for associating the instruction address with an AIC sector through a unique “sector distribution table” (SDT) to locate the sector which starts with the instruction having this instruction address.
    Type: Grant
    Filed: August 16, 1999
    Date of Patent: September 10, 2002
    Assignee: International Business Machines Corporation
    Inventor: Balaram Sinharoy
  • Patent number: 6446169
    Abstract: The present invention includes a microprocessor having a system bus for exchanging data with a computer system, and a private bus for exchanging data with a cache memory system. Since the processor exchanges data with the cache memory system through the private bus, cache memory operations thus do not require use of the system bus, allowing other portions of the computer system to continue to function through the system bus. Additionally, the cache memory and the processor are able to exchange data in a burst mode while the processor determines from the tag data when a read or write miss is occurring.
    Type: Grant
    Filed: August 31, 1999
    Date of Patent: September 3, 2002
    Assignee: Micron Technology, Inc.
    Inventor: Joseph T. Pawlowski
  • Patent number: 6446181
    Abstract: An apparatus having a core processor and a memory system is disclosed. The core processor includes at least one data port. The memory system is connected in such a way as to provide substantially simultaneous data accesses through the data port. The memory system can be made user configurable to provide an appropriate memory model.
    Type: Grant
    Filed: March 31, 2000
    Date of Patent: September 3, 2002
    Assignees: Intel Corporation, Analog Devices, Inc.
    Inventors: Hebbalalu S. Ramagopal, David B. Witt, Michael Allen, Moinul Syed, Ravi Kolagotla, Lawrence A. Booth, Jr., William C. Anderson
  • Patent number: 6427190
    Abstract: A virtual memory system including a local-to-global virtual address translator for translating local virtual addresses having associated task specific address spaces into global virtual addresses corresponding to an address space associated with multiple tasks, and a global virtual-to-physical address translator for translating global virtual addresses to physical addresses. Protection information is provided by each of the local virtual-to-global virtual address translator, the global virtual-to-physical address translator, the cache tag storage, or a protection information buffer, depending on whether a cache hit or miss occurs during a given data or instruction access. The cache is configurable into a buffer portion and a cache portion for faster cache accesses.
    Type: Grant
    Filed: May 12, 2000
    Date of Patent: July 30, 2002
    Assignee: MicroUnity Systems Engineering, Inc.
    Inventor: Craig C. Hansen
  • Patent number: 6427189
    Abstract: A multi-level cache structure and associated method of operating the cache structure are disclosed. The cache structure uses a queue for holding address information for a plurality of memory access requests as a plurality of entries. The queue includes issuing logic for determining which entries should be issued. The issuing logic further comprises find first logic for determining which entries meet a predetermined criteria and selecting a plurality of those entries as issuing entries. The issuing logic also comprises lost logic that delays the issuing of a selected entry for a predetermined time period based upon a delay criteria. The delay criteria may, for example, comprise a conflict between issuing resources, such as ports. Thus, in response to an issuing entry being oversubscribed, the issuing of such entry may be delayed for a predetermined time period (e.g., one clock cycle) to allow the resource conflict to clear.
    Type: Grant
    Filed: February 21, 2000
    Date of Patent: July 30, 2002
    Assignees: Hewlett-Packard Company, Intel Corporation
    Inventors: Dean A. Mulla, Reid James Riedlinger, Tom Grutkowski
  • Patent number: 6427191
    Abstract: A novel on-chip cache memory and method of operation are provided which increase microprocessor performance. The cache design allows two cache requests to be processed simultaneously (dual-ported) and concurrent cache requests to be in-flight (pipelined). The design of the cache allocates a first clock cycle to cache tag and data access and a second cycle is allocated to data manipulation. The memory array circuit design is simplified because the circuits are synchronized to the main processor clock and do not need to use self-timed circuits. The overall logic control scheme is simplified because distinct cycles are allocated to the cache functions.
    Type: Grant
    Filed: December 31, 1998
    Date of Patent: July 30, 2002
    Assignee: Intel Corporation
    Inventors: John Wai Cheong Fu, Dean A. Mulla, Gregory S. Mathews
  • Patent number: 6415361
    Abstract: An apparatus for controlling a cache in a computing node, which is located between a node bus and an interconnection network to perform a cache coherence protocol, includes: a node bus interface for interfacing with the node bus; an interconnection network interface for interfacing with the interconnection network; a cache control logic means for controlling the cache to perform the cache coherence protocol; bus-side dual-port transaction buffers coupled between said node bus interface and said cache control logic means for buffering transaction requested and replied from or to local processors contained in the computing node; and network-side dual-port transaction buffers coupled between said interconnection network interface and said cache control logic for buffering transaction requested and replied from or to remote processors contained in another computing node coupled to the interconnection network.
    Type: Grant
    Filed: January 19, 2000
    Date of Patent: July 2, 2002
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sang Man Moh, Jong Seok Han, An Do Ki, Woo Jong Hahn, Suk Han Yoon, Gil Rok Oh
  • Patent number: 6405285
    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information).
    Type: Grant
    Filed: June 25, 1999
    Date of Patent: June 11, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steven Dodson, Guy Lynn Guthrie
  • Patent number: 6405292
    Abstract: For a cache-coherence controller in a multiprocessor system sharing a cache memory, a split pending buffer has two components: a fully-associative part and an indexed part that can easily be made multi-ported. The associative part, PBA, includes multiple entries having a valid bit and address fields, and the indexed part, PBC, includes entries containing all the other status fields (i.e., the content part of the pending-buffer entries). The split multi-ported pending buffer enables one request and one or more responses to be handled concurrently. Handling a request requires an associative lookup of PBA, a possible directory lookup, a possible read of PBC (in case of collision), and, after processing the request in a request protocol handling unit, a possible PBA update, a possible PBC update, and a possible directory update, depending upon the cache coherence protocol implemented.
    Type: Grant
    Filed: January 4, 2000
    Date of Patent: June 11, 2002
    Assignee: International Business Machines Corp.
    Inventors: Douglas J. Joseph, Maged M. Michael, Ashwini Nanda
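
The PBA/PBC split can be sketched in a few lines: the associative part holds only valid bits and addresses (cheap to search), while the indexed part holds the bulky status fields and is touched only by index. The entry count and status-field names below are illustrative assumptions:

```python
class SplitPendingBuffer:
    """Sketch of a split pending buffer: an associative part (PBA) with
    valid bit + address per entry, and an indexed part (PBC) holding the
    remaining status fields at the same index."""

    def __init__(self, entries):
        self.pba = [{"valid": False, "addr": None} for _ in range(entries)]
        self.pbc = [{} for _ in range(entries)]  # indexed content part

    def lookup(self, addr):
        # Associative lookup of PBA: match the address against every valid entry.
        for i, e in enumerate(self.pba):
            if e["valid"] and e["addr"] == addr:
                return i
        return None

    def allocate(self, addr, status):
        if self.lookup(addr) is not None:
            return None  # collision: a request for this address is already pending
        for i, e in enumerate(self.pba):
            if not e["valid"]:
                e["valid"], e["addr"] = True, addr
                self.pbc[i] = dict(status)  # content goes to the indexed part
                return i
        return None  # pending buffer full

    def release(self, addr):
        i = self.lookup(addr)
        if i is not None:
            self.pba[i]["valid"] = False
```

Because responses arrive already knowing their entry index, they can read and update PBC directly without the associative search, which is what makes multi-porting the indexed part easy.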
  • Patent number: 6401175
    Abstract: A shared write back buffer for storing data from a data cache to be written back to memory. The shared write back buffer includes a plurality of ports, each port being associated with one of a plurality of processing units. All processing units in the plurality share the write back buffer. The shared write back buffer further includes a data register for storing data provided through the input ports, an address register for storing addresses associated with the data provided through the input ports, and a single output port for providing the data to the associated addresses in memory.
    Type: Grant
    Filed: October 1, 1999
    Date of Patent: June 4, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Marc Tremblay, Andre Kowalczyk, Anup S. Tirumala
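
The shared write-back buffer above has one input port per processing unit but a single output port to memory. A minimal sketch of that behavior, with the entry queue and a dict-based memory as illustrative assumptions:

```python
from collections import deque

class SharedWriteBackBuffer:
    """Sketch of a write-back buffer shared by several processing units:
    per-unit input ports feed shared address/data storage, drained
    through a single output port to memory."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.entries = deque()  # (address, data) pairs awaiting write-back

    def write(self, port, address, data):
        # Each input port is associated with one processing unit.
        assert 0 <= port < self.num_ports
        self.entries.append((address, data))

    def drain(self, memory):
        # Single output port: retire entries to memory one at a time.
        while self.entries:
            address, data = self.entries.popleft()
            memory[address] = data
```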
  • Patent number: 6397273
    Abstract: An assembler/disassembler mechanism in a data transfer pipeline receives data from FIFOs on a write operation to a memory, and transfers data to FIFOs on a read operation from memory. An enhanced parity mechanism is implemented in the assembler/disassembler to generate a pseudo-random number for each data byte. Enhanced parity generated in another pipeline component, accompanying data transferred from the FIFO (on a write to memory), is exclusive-ORed with LFSR data generated by enhanced parity circuitry in the assembler/disassembler. Integral registers provide a pathway for a respective line processor to access the data string, and allow the processor to access the memory. A counter mechanism counts and controls the amount of data read into and out of the FIFOs. Setting the amount of data to be transferred in and out of the FIFOs allows data transfer to run effectively independently of the line processor.
    Type: Grant
    Filed: December 18, 1998
    Date of Patent: May 28, 2002
    Assignee: EMC Corporation
    Inventor: Kendell Alan Chilton
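
The "enhanced parity" idea relies on an LFSR producing the same pseudo-random byte stream in two pipeline components: XORing the stream in at one stage and XORing the same stream out at another cancels exactly, so any surviving difference flags corruption. A sketch under illustrative assumptions (8-bit Fibonacci LFSR, taps chosen arbitrarily; the patent does not specify the polynomial):

```python
def lfsr_stream(seed, n):
    """Generate n pseudo-random bytes from an 8-bit Fibonacci LFSR.
    The tap positions here are an illustrative choice."""
    state = seed & 0xFF
    out = []
    for _ in range(n):
        out.append(state)
        bit = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
        state = ((state << 1) | bit) & 0xFF
    return out

def xor_with_lfsr(data, seed):
    """XOR each data byte with the LFSR byte generated for it.
    Applying the same operation twice with the same seed is the identity,
    which is what lets two components cancel each other's stream."""
    return bytes(d ^ r for d, r in zip(data, lfsr_stream(seed, len(data))))
```

If the two components are seeded identically, `xor_with_lfsr` applied at the second stage restores the original bytes; a mismatch anywhere indicates a transfer error.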
  • Patent number: 6389517
    Abstract: Apparatus and method to permit snoop filtering to occur while an atomic operation is pending. The snoop filtering apparatus includes first and second request queues and a cache. The first request queue tracks cache access requests, while the second request queue tracks snoops that have yet to be filtered. The cache includes a dedicated port for each request queue. The first port is dedicated to the first request queue and is a data-and-tag read-write port, permitting modification of both a cache line's data and tag. In contrast, the second port is dedicated to the second request queue and is a tag-only port. Because the second port is a tag-only port, snoop filtering can continue while a cache line is locked without fear of any modification of the data associated with the atomic address.
    Type: Grant
    Filed: February 25, 2000
    Date of Patent: May 14, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Anuradha N. Moudgal, Belliappa M. Kuttanna, Allan Tzeng
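
The key point of the two dedicated ports is that a tag-only port cannot touch data, so snoop filtering stays safe while a line is locked by a pending atomic operation. A minimal behavioral sketch (the lock set and string tags are illustrative assumptions):

```python
class DualPortCache:
    """Sketch: a data-and-tag read-write port for cache access requests,
    and a tag-only port for snoop filtering that can proceed even while
    a line is locked by a pending atomic operation."""

    def __init__(self):
        self.lines = {}      # tag -> data
        self.locked = set()  # tags locked by pending atomic operations

    def snoop_filter(self, tag):
        # Tag-only port: reads the tag array only, never the data,
        # so it is safe regardless of locks.
        return tag in self.lines

    def access(self, tag, data=None):
        # Data-and-tag port: full read/write; blocked while the line is locked.
        if tag in self.locked:
            raise RuntimeError("line locked by atomic operation")
        if data is not None:
            self.lines[tag] = data
        return self.lines.get(tag)
```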
  • Patent number: 6389527
    Abstract: The present invention comprises an LSU which executes load/store instructions. The LSU includes a DCACHE, which temporarily stores data read from and written to the external memory; an SPRAM, used for specific purposes other than caching; and an address generator, which generates virtual addresses for access to the DCACHE and the SPRAM. Because the SPRAM can load and store data through the LSU pipeline and exchanges data with an external memory through DMA transfers, the present invention is especially suited to high-speed processing of large amounts of data, such as image data. Because the LSU can access the SPRAM with the same latency as the DCACHE, data stored in the external memory can be transferred to the SPRAM and then processed by accessing the SPRAM, making it possible to process a large amount of data in less time than would be needed to access the external memory directly.
    Type: Grant
    Filed: February 8, 1999
    Date of Patent: May 14, 2002
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Michael Raam, Toru Utsumi, Takeki Osanai, Kamran Malik
  • Patent number: 6385694
    Abstract: A method of improving memory access for a computer system, by sending load requests to a lower level storage subsystem along with associated information pertaining to intended use of the requested information by the requesting processor, without using a high level load queue. Returning the requested information to the processor along with the associated use information allows the information to be placed immediately without using reload buffers. A register load bus separate from the cache load bus (and having a smaller granularity) is used to return the information. An upper level (L1) cache may then be imprecisely reloaded (the upper level cache can also be imprecisely reloaded with store instructions). The lower level (L2) cache can monitor L1 and L2 cache activity, which can be used to select a victim cache block in the L1 cache (based on the additional L2 information), or to select a victim cache block in the L2 cache (based on the additional L1 information).
    Type: Grant
    Filed: June 25, 1999
    Date of Patent: May 7, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steven Dodson, Guy Lynn Guthrie
  • Patent number: 6378050
    Abstract: An information processing apparatus is constructed to include a judging part for decoding an address of an input request and outputting a judgement signal which indicates whether the input request is a cache control request or a DMA control request, and a control part for carrying out a cache control when the judgement signal from the judging part indicates the cache control request, and carrying out a DMA control when the judgement signal indicates the DMA control request.
    Type: Grant
    Filed: April 9, 1999
    Date of Patent: April 23, 2002
    Assignee: Fujitsu Limited
    Inventors: Toru Tsuruta, Yuji Nomura
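
The judging part is essentially an address decoder that classifies each incoming request. A minimal sketch; the address ranges below are arbitrary assumptions for illustration, not values from the patent:

```python
# Illustrative address map: one range selects cache control,
# another selects DMA control.
CACHE_CTRL_BASE, CACHE_CTRL_LIMIT = 0x0000, 0x0FFF
DMA_CTRL_BASE, DMA_CTRL_LIMIT = 0x1000, 0x1FFF

def judge(address):
    """Decode the address of an input request and output a judgement
    signal indicating a cache control request or a DMA control request."""
    if CACHE_CTRL_BASE <= address <= CACHE_CTRL_LIMIT:
        return "cache"
    if DMA_CTRL_BASE <= address <= DMA_CTRL_LIMIT:
        return "dma"
    return "none"
```

The control part would then dispatch on this signal to the cache controller or the DMA controller.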
  • Patent number: 6370624
    Abstract: A page closing method and apparatus for multi-port host bridges. According to a method disclosed, a plurality of memory access commands are received from a plurality of command ports. A command is selected from one of the command ports to be the next memory access command executed. A number of pages of memory are closed in response to the command selected as the next memory access command. The number of pages closed is determined at least in part by which command port provides the next memory access command.
    Type: Grant
    Filed: November 13, 2000
    Date of Patent: April 9, 2002
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Michael W. Williams, Robert N. Murdoch
  • Publication number: 20020026562
    Abstract: A two-way cache system and operating method for interfacing with peripheral devices. The cache system is suitable for data transmission between a peripheral device and a memory unit and has a two-way first-in first-out buffer region and a two-way cache controller. The two-way first-in first-out buffer region further has a first cache data region and a second cache data region. The first cache data region and the second cache data region are capable of holding a batch of first cache data and a batch of second cache data. The two-way cache controller receives a read request signal from the peripheral device. According to the read request, the requested data and the data that comes after the requested data are retained by the two-way first-in first-out buffer region. If the peripheral device continues to request more data, the first cache data region and the second cache data region are alternately used to read in subsequent data.
    Type: Application
    Filed: June 15, 2001
    Publication date: February 28, 2002
    Inventors: Chau-Chad Tsai, Chen-Ping Yang, Chi-Che Tsai
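
The alternating two-region scheme is classic double buffering: while the peripheral drains one cache data region, the data that follows is prefetched into the other. A behavioral sketch; the region size, list-backed memory, and switch-on-last-byte policy are illustrative assumptions:

```python
class TwoWayCache:
    """Sketch of the two-way buffering scheme: two cache data regions
    are filled alternately so that sequential reads by a peripheral
    device hit prefetched data."""

    def __init__(self, memory, region_size=4):
        self.memory = memory
        self.region_size = region_size
        self.regions = [None, None]  # (start_addr, data) per region
        self.current = 0

    def _fill(self, region, start):
        self.regions[region] = (start, self.memory[start:start + self.region_size])

    def read(self, addr):
        r = self.regions[self.current]
        if r is None or not (r[0] <= addr < r[0] + self.region_size):
            # Miss: fetch the requested data into the current region and
            # prefetch the data that comes after it into the other region.
            self._fill(self.current, addr)
            self._fill(1 - self.current, addr + self.region_size)
            r = self.regions[self.current]
        start, data = r
        value = data[addr - start]
        if addr == start + self.region_size - 1:
            self.current = 1 - self.current  # switch to the prefetched region
        return value
```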
  • Publication number: 20020019912
    Abstract: The conventional multi-port cache memory, which is formed by using multi-port cell blocks, offers excellent operating speed. However, the integration area of the constituent multi-port cell blocks increases in proportion to the square of the number of ports. Thus, if the cache-miss probability is to be decreased by increasing the storage capacity, the chip size increases correspondingly, which raises the manufacturing cost. The multi-port cache memory of the present invention, by contrast, is formed by using one-port cell blocks, suited to a large storage capacity, as its constituents, making it possible to easily provide a multi-port cache memory of large storage capacity and reduced integration area which has a large random-access bandwidth, is capable of parallel access from a plurality of ports, and is suited for use in advanced microprocessors requiring a small cache-miss probability.
    Type: Application
    Filed: August 2, 2001
    Publication date: February 14, 2002
    Inventors: Hans Jurgen Mattausch, Koji Kishi, Nobuhiko Omori
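
Building a multi-port cache from one-port cell blocks means each block (bank) can serve only one access per cycle, so concurrent port requests must be arbitrated for bank conflicts. A behavioral sketch; the bank count, address-interleaving scheme, and defer-on-conflict policy are illustrative assumptions:

```python
class BankedMultiPortCache:
    """Sketch: a multi-port cache built from one-port memory banks.
    Each cycle, requests from several ports are granted only if they
    target distinct banks; conflicting requests are deferred."""

    def __init__(self, num_banks, bank_size):
        self.num_banks = num_banks
        self.banks = [[0] * bank_size for _ in range(num_banks)]

    def cycle(self, requests):
        # requests: list of (port, address); one access per bank per cycle.
        granted, busy, deferred = {}, set(), []
        for port, addr in requests:
            bank = addr % self.num_banks  # low-order interleaving
            if bank in busy:
                deferred.append((port, addr))  # bank conflict
            else:
                busy.add(bank)
                granted[port] = self.banks[bank][addr // self.num_banks]
        return granted, deferred
```

With enough banks relative to ports, conflicts are rare and the aggregate random-access bandwidth approaches that of a true multi-port array at a fraction of the area.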
  • Patent number: 6345335
    Abstract: A data processing system 2 is provided with a Harvard-type central processing unit 4 coupled to a first level memory 16. The first level memory 16 may be in the form of a cache memory. The first level memory 16 has a data access port and an instruction access port that support parallel data side and instruction side operations. A cache controller 62 may be provided to arbitrate between situations in which concurrent write operations to the same memory location are requested. A separate line fill port may be provided for cache line fills following a cache miss.
    Type: Grant
    Filed: September 13, 1999
    Date of Patent: February 5, 2002
    Assignee: Arm Limited
    Inventor: David Walter Flynn
  • Patent number: 6343348
    Abstract: A multi-ported register file is typically metal-limited, with the area consumed by the circuit proportional to the square of the number of ports. A processor having a register file structure divided into a plurality of separate and independent register files forms a layout structure with improved layout efficiency. The read ports of the total register file structure are allocated among the separate and individual register files. Each of the separate and individual register files has write ports corresponding to the total number of write ports in the total register file structure. Writes are fully broadcast so that all of the separate and individual register files are coherent.
    Type: Grant
    Filed: December 3, 1998
    Date of Patent: January 29, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Marc Tremblay, William Joy
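
The divide-and-broadcast structure can be sketched directly: each copy of the register file serves a subset of the read ports, and every write goes to all copies so they never diverge. Copy and register counts below are illustrative assumptions:

```python
class DividedRegisterFile:
    """Sketch: the register file is divided into separate copies; each
    copy serves a subset of the read ports, while every write is
    broadcast to all copies so they stay coherent."""

    def __init__(self, num_copies, num_regs):
        self.copies = [[0] * num_regs for _ in range(num_copies)]

    def write(self, reg, value):
        # Writes are fully broadcast to every copy.
        for copy in self.copies:
            copy[reg] = value

    def read(self, read_port, reg):
        # Each read port is wired to one particular copy (assumed mapping).
        return self.copies[read_port % len(self.copies)][reg]
```

Since each copy needs only its share of the read ports (but all write ports), the quadratic port-area cost is paid on a smaller port count per copy, improving total layout efficiency.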
  • Patent number: 6336154
    Abstract: A computer system comprises: a processing system for processing data; a memory for storing data processed by, or to be processed by, the processing system; a memory access controller for controlling access to the memory; and at least one data buffer for buffering data to be written to or read from the memory. A burst controller is provided for issuing burst instructions to the memory access controller, and the memory access controller is responsive to such a burst instruction to transfer a plurality of data words between the memory and the data buffer in a single memory transaction. A burst instruction queue is provided so that such a burst instruction can be made available for execution by the memory access controller immediately after a preceding burst instruction has been executed.
    Type: Grant
    Filed: June 20, 2000
    Date of Patent: January 1, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Dominic Paul McCarthy, Stuart Victor Quick
  • Publication number: 20010056520
    Abstract: A performance optimized RAID Level 3 storage access controller with a unique XOR engine placement at the host/network side of the cache. The invention utilizes multiple data communications channels and a centralized cache memory in conjunction with this unique XOR placement to maximize performance and fault tolerance between a host network and data storage. Positioning the XOR engine at the host/network side of the cache allows the storage devices to be fully independent. Since the XOR engine is placed in the data path and the parity is generated in real-time during cache write transfers, the bandwidth overhead is reduced to zero. For high performance RAID controller applications, a system architecture with minimal bandwidth overhead provides superior performance.
    Type: Application
    Filed: June 14, 2001
    Publication date: December 27, 2001
    Inventors: Lee McBryde, Gordon Manning, Dave Illar, Richard Williams, Michael Piszczek
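
Placing the XOR engine in the write data path means parity is computed during the same pass that stripes the data, adding no separate parity-generation traffic. A sketch of RAID level 3 byte-interleaved striping with in-path parity; the disk count and byte interleaving granularity are illustrative assumptions:

```python
def stripe_with_parity(block, num_data_disks):
    """Split a block byte-interleaved across data disks, computing the
    XOR parity strip in real time during the same transfer pass."""
    strips = [bytearray() for _ in range(num_data_disks)]
    parity = bytearray()
    for i in range(0, len(block), num_data_disks):
        chunk = block[i:i + num_data_disks]
        p = 0
        for d, byte in enumerate(chunk):
            strips[d].append(byte)
            p ^= byte  # parity generated in the data path, zero extra passes
        parity.append(p)
    return strips, parity

def rebuild(strips, parity, lost):
    """Recover a lost data strip by XORing the surviving strips with parity."""
    out = bytearray()
    for i, p in enumerate(parity):
        v = p
        for d, s in enumerate(strips):
            if d != lost and i < len(s):
                v ^= s[i]
        out.append(v)
    return out
```

Because parity already exists by the time the cache write completes, the storage devices behind the cache remain fully independent, as the abstract notes.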