Concurrent Accessing Patents (Class 711/168)
-
Patent number: 6631457
Abstract: A surface computer includes an address generator for generating an address for adjusting surface region data concerning at least a storage region and a concurrent computer, provided at a subsequent stage of the address generator, having a plurality of unit computers.
Type: Grant
Filed: October 31, 2000
Date of Patent: October 7, 2003
Assignee: Sony Computer Entertainment Inc.
Inventor: Akio Ohba
-
Publication number: 20030188088
Abstract: A memory device includes a plurality of memory arrays, each memory array being coupled to an input data bus and an output data bus, a clock generator that generates an internal clock signal to form at least one transfer cycle to control timing of data transfer to and from the plurality of memory arrays, and a controller that controls read and write operations from and to the plurality of memory arrays. In one embodiment, the controller receives a command word containing at least a first command and a second command and executes the first and second commands on the same transfer cycle.
Type: Application
Filed: March 28, 2002
Publication date: October 2, 2003
Inventor: Lewis Stephen Kootstra
-
Patent number: 6625707
Abstract: Speculative memory commands are prepared for reduced latency. A system memory read request is sent for preparing a main memory read command and for performing a cache lookup. The main memory read command is prepared independently of the performance of the cache lookup.
Type: Grant
Filed: June 25, 2001
Date of Patent: September 23, 2003
Assignee: Intel Corporation
Inventor: David S. Bormann
-
Patent number: 6625714
Abstract: In a computer system, a parallel, distributed-function translation lookaside buffer (TLB) includes a small, fast TLB and a second, larger but slower TLB. The two TLBs operate in parallel, with the small TLB receiving integer load data and the large TLB receiving other virtual address information. By distributing functions, such as load and store instructions, and integer and floating point instructions, between the two TLBs, the small TLB can operate with low latency and avoid thrashing and similar problems, while the larger TLB provides high bandwidth for memory-intensive operations. This mechanism also provides a parallel store update and invalidation mechanism that is particularly useful for prevalidated cache tag designs.
Type: Grant
Filed: December 17, 1999
Date of Patent: September 23, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Terry L Lyon
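The split-TLB idea in this abstract can be modeled in a few lines. The sketch below is an illustration, not the patented hardware: capacities, the eviction policy, and the routing rule (integer loads to the small TLB, everything else to the large one) are assumptions chosen to mirror the abstract's description.

```python
class TLB:
    """Toy TLB: a capacity-bounded map from virtual page to physical page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # virtual page -> physical page

    def lookup(self, vpage):
        return self.entries.get(vpage)

    def insert(self, vpage, ppage):
        if len(self.entries) >= self.capacity:
            # Crude FIFO eviction; the real design's policy is not given.
            self.entries.pop(next(iter(self.entries)))
        self.entries[vpage] = ppage


class DistributedTLB:
    """Routes integer loads to the small fast TLB; other references use the large one."""
    def __init__(self):
        self.small = TLB(capacity=16)   # low latency, avoids thrashing
        self.large = TLB(capacity=128)  # high bandwidth for memory-intensive work

    def translate(self, vpage, is_integer_load):
        tlb = self.small if is_integer_load else self.large
        return tlb.lookup(vpage)
```

Because the two structures are consulted by disjoint reference classes, they can be probed in the same cycle without arbitration, which is the bandwidth argument the abstract makes.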
-
Publication number: 20030177312
Abstract: In an out-of-order execution computer system, a fast store forwarding buffer (FSFB) is conditionally signaled to output buffered store data of buffered memory store instructions to fill a buffered memory load instruction. The FSFB is coupled to a rotator so that the store data can be rotated from a first position to a second position. A control unit coupled with the FSFB determines whether or not to signal the FSFB to forward the store data. The control unit is also coupled with the rotator to signal the rotator whether and by how much to rotate the forwarded store data. The control unit is configured to detect a number of dependencies between a buffered memory load instruction and one or more buffered memory store instructions.
Type: Application
Filed: March 15, 2002
Publication date: September 18, 2003
Inventors: Aravindh Baktha, Michael D. Upton, Thomas R. Huff
-
Patent number: 6622228
Abstract: A method for processing multiple memory requests in a pipeline. Each memory request is processed in part by a plurality of stages. In a first stage, the memory request is decoded. In a second stage, the address information for the memory request is processed. In a third stage, the data for the memory request is transferred. A request buffer is used to hold each of the memory requests during the processing of each of the memory requests.
Type: Grant
Filed: August 16, 2001
Date of Patent: September 16, 2003
Assignee: Micron Technology, Inc.
Inventor: Joseph Jeddeloh
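The overlap this three-stage pipeline buys can be shown with a toy cycle-by-cycle model. This is an assumption-laden illustration (one stage per cycle, in-order issue), not the patented circuit; it only demonstrates how decode, address, and data-transfer phases of successive requests overlap while the request buffer holds every in-flight request.

```python
def process_requests(requests):
    """Simulate a 3-stage pipeline; return (cycle, request, stage) events."""
    buffer = list(requests)  # the request buffer holds all in-flight requests
    stages = ["decode", "address", "data"]
    log = []
    # Request i enters the pipeline on cycle i; stages overlap across requests.
    for cycle in range(len(buffer) + len(stages) - 1):
        for i, req in enumerate(buffer):
            stage_idx = cycle - i
            if 0 <= stage_idx < len(stages):
                log.append((cycle, req, stages[stage_idx]))
    return log
```

With two requests, cycle 1 performs the address phase of the first request and the decode of the second simultaneously, which is exactly the throughput benefit of pipelining the controller.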
-
Patent number: 6615328
Abstract: Disk units operable under control of different disk control units hold the same data. While the duplexing of data is interrupted, the update locations of both the original data and the backup (sub) data are registered in an access information management table of the original disk control unit. When the original disk control unit receives a duplexing-resume command, duplexing may be efficiently re-established by using the table to copy only the original data corresponding to locations accessed while duplexing was interrupted to the backup data, or by copying the backup data to the original data.
Type: Grant
Filed: October 22, 2001
Date of Patent: September 2, 2003
Assignee: Hitachi, Ltd.
Inventors: Hitoshi Shiozawa, Koji Ozawa, Takahisa Kimura, Kazuhito Suishu, Kosaku Kambayashi, Katsunori Nakamura
-
Patent number: 6615326
Abstract: Methods and structure in a memory controller for sequencing memory device page activation commands to improve memory bandwidth utilization. In a synchronous memory device such as SDRAM or DDR SDRAM, an “activate” command precedes a corresponding “read” or “write” command to ensure that the page or row to be accessed is available (“open”) for access. Latency periods between the activation of the page and the readiness of the page for the corresponding read or write command were heretofore filled with nop commands. The present invention looks ahead for subsequent read and write commands and inserts activation commands (hidden activates) in nop command periods of the SDRAM device to prepare a page in another bank for a read or write operation to follow. This sequencing of activate commands overlaps the required latency with current read or write burst operations.
Type: Grant
Filed: November 9, 2001
Date of Patent: September 2, 2003
Assignee: LSI Logic Corporation
Inventor: Shuaibin Lin
-
Publication number: 20030163654
Abstract: A method, system, and apparatus to schedule commands based on status information of a plurality of memory banks.
Type: Application
Filed: February 22, 2002
Publication date: August 28, 2003
Inventors: Eliel Louzoun, Israel Herscovich
-
Patent number: 6611894
Abstract: The present invention relates to a data retrieval apparatus that retrieves data from among a large number of entries stored in memories, adopts a binary search method, and enables high-speed retrieval. The apparatus includes three memories and address converting circuits. A logical address space is divided into two banks: a bank constituting the set of even-numbered addresses and a bank constituting the set of odd-numbered addresses. Further, with addresses expressed as binary numbers, one of the two banks is subdivided into a bank of addresses containing an even number of “1” bits and a bank of addresses containing an odd number of “1” bits. The resulting three banks of the logical address space are mapped onto the physical address space of the three memories. A control device retrieves data stored in the memories by the binary search method using given key data.
Type: Grant
Filed: March 25, 1999
Date of Patent: August 26, 2003
Assignee: Kawasaki Microelectronics, Inc.
Inventor: Ryuichi Onoo
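The bank mapping in this abstract can be written as a small address-classification function. The sketch below is illustrative only: the abstract does not say which of the two parity halves gets subdivided or how banks are numbered, so both choices here are assumptions.

```python
def bank_of(addr):
    """Map a logical address to one of three banks, per the scheme in the abstract.

    Odd addresses form one bank (assumed here to be the undivided one);
    even addresses are split by the parity of their count of '1' bits.
    """
    if addr & 1:
        return 0                      # odd addresses: one whole bank
    ones = bin(addr).count("1")
    return 1 if ones % 2 == 0 else 2  # even addresses split by 1-bit parity
```

The point of such a mapping is that successive probe addresses of a binary search tend to land in different banks, so several probes can be serviced by the three physical memories concurrently.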
-
Patent number: 6611906
Abstract: A hardware-based linked list queues memory transactions in a memory controller. The memory controller includes a plurality of memory controller agents. Each agent has a head flag, a tail flag, and a next agent field, thereby allowing the agents to be arranged into linked lists. Memory transactions are received from cacheable entities of a computer system at an incoming memory transaction dispatch unit via an interconnection fabric. The incoming transactions are then presented to the plurality of agents. For each incoming read transaction, one of the agents will accept the transaction. If there are pending memory read transactions for the memory line, then the accepting agent joins a linked list of agents that are queued up to access that memory line. The accepting agent drives its index out onto a bus that connects all agents. One agent in the linked list will have its tail flag set, and that agent will clear its tail flag and latch into its next agent field the index provided by the accepting agent.
Type: Grant
Filed: April 30, 2000
Date of Patent: August 26, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Curtis R. McAllister, Robert C. Douglas
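The append step described at the end of the abstract — the accepting agent broadcasts its index, and the current tail clears its tail flag and latches that index — can be modeled in software. This is a hypothetical model for illustration; field and function names are invented, and the hardware's parallel broadcast is reduced to a direct call.

```python
class Agent:
    """One memory controller agent with the linked-list fields from the abstract."""
    def __init__(self, index):
        self.index = index
        self.tail = False
        self.next_agent = None  # index of the next agent in the list


def enqueue(queue_tail, new_agent):
    """Append new_agent behind queue_tail; return the new tail of the list."""
    new_agent.tail = True
    if queue_tail is not None:
        queue_tail.tail = False                   # old tail clears its flag...
        queue_tail.next_agent = new_agent.index   # ...and latches the new index
    return new_agent
```

Because each agent only stores the index of its successor, the queue per memory line costs one small field per agent rather than a centralized queue structure.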
-
Patent number: 6609088
Abstract: A formalized method for part of the memory-related design decisions involved in designing an essentially digital device is presented. The method shows how to traverse, and how to limit, the search space examined while solving these memory-related design decisions, focusing on the power consumption of the digital device. A method for determining an optimized memory organization of an essentially digital device, wherein data-reuse possibilities are explored, is also described.
Type: Grant
Filed: July 23, 1999
Date of Patent: August 19, 2003
Assignee: Interuniversitaire Micro-Elektronica Centrum
Inventors: Sven Wuytack, Francky Catthoor, Hugo De Man, Jean-Philippe Diguet
-
Patent number: 6609174
Abstract: Processing equipment with embedded MRAMs, and a method of fabricating it, including a data processing device fabricated on a semiconductor chip with MRAM cells fabricated on the chip to form some or all of the memories on the chip. Also included is a dual-bank memory in communication with the data processing device, and circuitry coupled to the data processing device and the dual-bank memory for providing simultaneous read access to the dual-bank memory.
Type: Grant
Filed: October 19, 1999
Date of Patent: August 19, 2003
Assignee: Motorola, Inc.
Inventor: Peter K. Naji
-
Publication number: 20030149856
Abstract: A pointer structure on the storage unit of a non-volatile memory maintains a correspondence between physical and logical addresses. The controller and storage unit transfer data on the basis of logical sector addresses, with the conversion between physical and logical addresses being performed on the storage unit. The pointer contains a correspondence between a logical sector address and the physical address of current data, as well as maintaining one or more previous correspondences between the logical address and the physical addresses at which old data is stored. New and old data can be kept in parallel up to a certain point. When combined with background erase, performance is improved. In an exemplary embodiment, the pointer structure is one or more independent non-volatile sub-arrays, each with its own row decoder.
Type: Application
Filed: February 6, 2002
Publication date: August 7, 2003
Inventor: Raul-Adrian Cernea
-
Patent number: 6604068
Abstract: A system and method are described for concurrent modeling of any element of a geometric model. The geometric model is stored in a database as a number of model objects. The model objects are loaded into a computer and are then used to generate a display. When a model is edited, a representation of the model object is changed. The change is sent to the database to update the corresponding model object stored in the database. Any computers displaying this model will be notified of the change and will reload and redisplay the updated model object.
Type: Grant
Filed: May 18, 2000
Date of Patent: August 5, 2003
Assignee: Cyra Technologies, Inc.
Inventors: Richard William Bukowski, Mark Damon Wheeler
-
Patent number: 6604180
Abstract: A memory controller which has multiple stages of pipelining. A request buffer is used to hold the memory requests from the processor and peripheral devices. The request buffer comprises a set of rotational registers that holds the address, the type of transfer, and the count for each request. The pipeline includes a decode stage, a memory address stage, and a data transfer stage. Each stage of the pipeline has a pointer to the request buffer. As each stage completes its processing, a state machine updates the pointer for each of the stages to reference a new memory request which needs to be processed.
Type: Grant
Filed: July 11, 2002
Date of Patent: August 5, 2003
Assignee: Micron Technology, Inc.
Inventor: Joseph Jeddeloh
-
Publication number: 20030145159
Abstract: A controller for a random access memory includes an address and command queue that holds memory references from a plurality of microcontrol functional units. The address and command queue includes a read queue that stores read memory references. The controller also includes a first read/write queue that holds memory references from a core processor, and control logic including an arbiter that detects the fullness of each of the queues and a status of completion of outstanding memory references to select a memory reference from one of the queues.
Type: Application
Filed: July 30, 2002
Publication date: July 31, 2003
Applicant: Intel Corporation, a Delaware corporation
Inventors: Matthew J. Adiletta, William Wheeler, James Redfield, Daniel Cutter, Gilbert Wolrich
-
Patent number: 6598140
Abstract: A memory controller has separate memory controller agents that process memory transactions in parallel. A memory controller in accordance with the present invention includes a plurality of memory controller agents, which are coupled to each other via a series of busses, an incoming memory transaction dispatch unit, and an outgoing memory dispatch unit. Memory transactions are received from cacheable entities of a computer system at the incoming memory transaction dispatch unit, and are then presented to the plurality of agents. For each incoming transaction, one of the agents will accept the transaction. Each agent is responsible for ensuring coherency and fulfilling memory transactions for a single memory line. If multiple memory read transactions are received for a single memory line, the agents will configure themselves into a linked list to queue up the requests.
Type: Grant
Filed: April 30, 2000
Date of Patent: July 22, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Curtis R. McAllister, Robert C. Douglas
-
Patent number: 6591348
Abstract: A distributed system structure for a large-way, symmetric multiprocessor system using a bus-based cache-coherence protocol is provided. The distributed system structure contains an address switch, multiple memory subsystems, and multiple master devices — processors, I/O agents, or coherent memory adapters — organized into a set of nodes supported by a node controller. The node controller receives commands from a master device, communicates with a master device as another master device or as a slave device, and queues commands received from a master device. The node controller has a deterministic delay between latching a snooped command broadcast by the address switch and presenting the command to the master devices on the node controller's master device buses.
Type: Grant
Filed: September 9, 1999
Date of Patent: July 8, 2003
Assignee: International Business Machines Corporation
Inventors: Sanjay Raghunath Deshpande, Tina Shui Wan Chan
-
Patent number: 6591354
Abstract: A memory system including a memory array, an input circuit, and a logic circuit is presented. The input circuit is coupled to receive a memory address and a set of individual write controls for each byte of a data word. During a write operation, the input circuit also receives the corresponding write data to be written into the SRAM. The logic circuit causes the write data and write control information to be stored in the input circuit for the duration of any sequential read operations immediately following the write operation, and then to be read into memory during a subsequent write operation. During the read operation, data held in the write data storage registers prior to being read into the memory can still be read out from the memory system, should the address of one or more read operations equal the address of the data temporarily stored in the write data storage registers.
Type: Grant
Filed: May 26, 1999
Date of Patent: July 8, 2003
Assignee: Integrated Device Technology, Inc.
Inventors: John R. Mick, Mark W. Baumann
-
Patent number: 6591345
Abstract: A system and method are disclosed that reduce intrabank conflicts and ensure maximum bandwidth on accesses to strided vectors in a bank-interleaved cache memory. The computer system contains a processor including a vector execution unit, a scalar processor unit, a cache controller, and a bank-interleaved cache memory. The vector execution unit retrieves strided vectors of data and instructions stored in the bank-interleaved cache memory in a plurality of cache banks such that intrabank conflicts are reduced. Given a stride S of a vector, R and T are determined from the equation S = 2^T * R (with R odd). If T <= W, where a cache bank is 2^W words wide, then for 0 <= i < 2^(W−T), 0 <= j < 2^P, and 0 <= k < 2^N, the words addressed i + 2^(W−T+N)*j + 2^(W−T)*k are accessed on the same cycle.
Type: Grant
Filed: November 28, 2000
Date of Patent: July 8, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Andre C. Seznec
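The decomposition S = 2^T * R with R odd is the first step of the scheme above: T counts the trailing zero bits of the stride. The helper below is a minimal illustration of just that step, not of the full bank-selection logic.

```python
def decompose_stride(s):
    """Return (T, R) such that s == 2**T * R with R odd.

    T is simply the number of trailing zero bits of the stride s (s > 0).
    """
    t = 0
    while s % 2 == 0:
        s //= 2
        t += 1
    return t, s
```

A power-of-two stride (R = 1) is the worst case for naive interleaving, since every access would land in the same bank; T tells the controller how much of the low address field the stride "wastes," which drives the conflict-free access pattern in the formula.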
-
Patent number: 6591321
Abstract: A multiprocessor system bus protocol system and method for processing and handling a processor request within a multiprocessor system having a number of bus accessible memory devices that are snooping on at least one bus line. Snoop response groups, which are groups of different types of snoop responses from the bus accessible memory devices, are provided. Different transfer types are provided within each of the snoop response groups. A bus master device that provides a bus master signal is designated. The bus master device receives the processor request. One of the snoop response groups and one of the transfer types are appropriately designated based on the processor request. The bus master signal is formulated from a snoop response group, a transfer type, a valid request signal, and a cache line address. The bus master signal is sent to all of the bus accessible memory devices on the cache bus line and to a combined response logic system.
Type: Grant
Filed: November 9, 1999
Date of Patent: July 8, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, James Stephen Fields, Jr., Guy Lynn Guthrie, Jody Bern Joyner, Jerry Don Lewis
-
Patent number: 6587917
Abstract: A memory architecture for supporting mixed-mode memory accesses. A common row address is provided. A first column address for accessing a first column and a second column address for accessing a second column are provided. A first write control signal for specifying one of a write access and a read access for the first column, and a second write control signal for specifying one of a write access and a read access for the second column, are also provided. The memory architecture, responsive to these input signals, supports concurrent mixed-mode memory accesses to the first column and a write access to the second column.
Type: Grant
Filed: May 29, 2001
Date of Patent: July 1, 2003
Assignee: Agilent Technologies, Inc.
Inventor: Laura Elisabeth Simmons
-
Patent number: 6584553
Abstract: Data to be programmed into memory-containing ICs is divided into a number (X) of blocks, preferably equal to the number of sockets on a common programmer unit. A pick-up head inserts an unprogrammed IC into the first socket and the IC is programmed with a first data block A. While programming is occurring, the pick-up head fetches an unprogrammed IC and inserts it into a second socket, whereupon both ICs are simultaneously programmed with the second data block B. During this time an unprogrammed IC is fetched and inserted into a third socket, whereupon all socketed ICs are simultaneously programmed with a third data block C. Eventually the first IC is fully programmed and is replaced with an unprogrammed IC, and the cycle continues until all ICs to be programmed have been programmed. Multiple pick-up heads and/or multi-socketed programming units can be used.
Type: Grant
Filed: July 30, 2001
Date of Patent: June 24, 2003
Assignee: Exatron, Inc.
Inventor: Robert L. Howell
-
Patent number: 6584546
Abstract: A method of operating a cache memory includes storing a set of data, associated with a set of tags, in a first space in the cache memory. A subset of that data, associated with a tag drawn from a subset of the set of tags, is stored in a second space in the cache memory. The tag portion of an address is compared with the tag associated with the subset of data in the second space, and the subset of data is read when they match. The tag portion of the address is also compared with the set of tags associated with the data in the first space, and the data in the first space is read when the tag portion matches one of those tags but does not match the tag associated with the subset of data in the second space.
Type: Grant
Filed: January 16, 2001
Date of Patent: June 24, 2003
Inventor: Gautam Nag Kavipurapu
-
Publication number: 20030110364
Abstract: A FIFO memory receives data transfer requests before data is stored in the FIFO memory. Multiple concurrent data transfers, delivered to the FIFO memory as interleaved multiple concurrent transactions, can be accommodated by the FIFO memory (i.e., multiplexing between different sources that transmit in distributed bursts). The transfer length requirements associated with the ongoing data transfers are tracked, along with the total available space in the FIFO memory. A programmable buffer zone also can be included in the FIFO memory for additional overflow protection and/or to enable dynamic sizing of FIFO depth.
Type: Application
Filed: April 10, 2002
Publication date: June 12, 2003
Inventors: John Tang, Jean Xue, Karl M. Henson
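The reserve-before-data idea in this abstract can be sketched as a space accountant: a request is admitted only if its declared length fits within the free space minus a programmable buffer zone. This is an illustrative model with invented names and units (entries rather than bytes), not the published design.

```python
class ReservingFifo:
    """Accepts transfer requests up front; data arrives after space is reserved."""
    def __init__(self, depth, buffer_zone=0):
        self.depth = depth              # total FIFO capacity
        self.buffer_zone = buffer_zone  # programmable overflow headroom
        self.reserved = 0               # space promised to ongoing transfers

    def request_transfer(self, length):
        """Admit the request only if it fits without invading the buffer zone."""
        if self.reserved + length + self.buffer_zone > self.depth:
            return False                # would risk overflow: reject
        self.reserved += length
        return True

    def complete_transfer(self, length):
        """Release space once a transfer's data has drained out of the FIFO."""
        self.reserved -= length
```

Tracking reservations instead of occupancy is what lets interleaved bursts from several sources share one FIFO safely: no source can be admitted into space another source has already been promised.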
-
Publication number: 20030110363
Abstract: An apparatus and method for using self-timing logic to make at least two accesses to a memory core in one clock cycle is disclosed. In one embodiment of the invention, a memory wrapper (28) incorporating self-timing logic (36) and a mux (32) is used to couple a single-access memory core (30) to a memory interface unit (10). The memory interface unit (10) couples a central processing unit (12) to the memory wrapper (28). The self-timing architecture as applied to multi-access memory wrappers avoids the need for calibration. Moreover, the self-timing architecture provides a full dissociation between the environment (what is clocked on the system clock) and the access to the core. A beneficial result of the invention is access at the speed of the core while processing several accesses in one system clock cycle.
Type: Application
Filed: October 1, 1999
Publication date: June 12, 2003
Inventors: Jean-Marc Bachot, Eric Badi
-
Patent number: 6578110
Abstract: The invention is aimed at providing a high-speed processor system capable of performing distributed concurrent processing without requiring modification of conventional programming styles, utilizing a high-speed processor system and cache memories with processing capabilities. The processor system in accordance with the invention has a CPU, a plurality of parallel DRAMs, and a plurality of cache memories arranged in a hierarchical configuration. Each of the cache memories is provided with an MPU which is binary-compatible with the CPU and which can serve as a processor with, amongst other features, both a cache-logic function and a processor function.
Type: Grant
Filed: January 20, 2000
Date of Patent: June 10, 2003
Assignee: Sony Computer Entertainment, Inc.
Inventor: Akio Ohba
-
Patent number: 6574719
Abstract: An apparatus for providing concurrent communications between multiple memory devices and a processor is disclosed. Each of the memory devices includes a driver, a phase/cycle adjust sensing circuit, and bus alignment communication logic. Each phase/cycle adjust sensing circuit detects an occurrence of a cycle adjustment from a corresponding driver within a memory device. If an occurrence of a cycle adjustment has been detected, the bus alignment communication logic communicates the occurrence of the cycle adjustment to the processor, and also to the bus alignment communication logic in the other memory devices. There are multiple receivers within the processor, and each of the receivers is designed to receive data from a respective driver in a memory device. Each of the receivers includes a cycle delay block.
Type: Grant
Filed: July 12, 2001
Date of Patent: June 3, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, James Stephen Fields, Jr., Sanjeev Ghai, Praveen S. Reddy, William John Starke
-
Patent number: 6571320
Abstract: The cache memory is particularly suitable for processing images. The special configuration of a memory field, an allocation unit, a write queue, and a data conflict recognition unit enables a number of data items to be read out from the memory field simultaneously per cycle, in the form of line or column segments. The format of the screen windows that are read out can change from one cycle to another. With sufficient data locality, time-consuming reloading operations do not degrade the data throughput, since the access requests are pipelined.
Type: Grant
Filed: November 7, 2000
Date of Patent: May 27, 2003
Assignee: Infineon Technologies AG
Inventor: Ulrich Hachmann
-
Patent number: 6567903
Abstract: An addressable memory having: a buffer memory adapted for coupling to a bus; a random access memory coupled to the buffer memory; an internal clock; and a logic network, coupled to the bus and configured to transfer data among the buffer memory, the random access memory, and the bus in response to clock signals produced by the internal clock and clock pulses provided on the bus. In a preferred embodiment, the buffer memory includes a first-in/first-out (FIFO) buffer. Also disclosed is a data storage system wherein a main frame computer section having main frame processors for processing data is coupled to a bank of disk drives through an interface. The interface includes a bus, a controller, and an addressable memory; the controller and addressable memory are interconnected through the bus. The addressable memory includes a master memory unit and a slave memory unit.
Type: Grant
Filed: August 23, 1996
Date of Patent: May 20, 2003
Assignee: EMC Corporation
Inventors: John K. Walton, Eli Leshem
-
Patent number: 6567556
Abstract: A method and device decode compressed images, in particular images compressed according to the MPEG standards, and especially bidirectional images. The time period during which the memory is accessed by the decoder is minimized by extracting from memory a predictor macroblock of a size greater than or equal to that of the macroblocks of the image stored in memory. This extraction comprises accessing pages of the memory so as to simultaneously open, with each page access, two pages situated respectively in two memory banks and respectively containing two macroblocks belonging respectively to two consecutive rows of macroblocks and to the same column of macroblocks of the stored image. A columnwise reading of some of the pixels of the two macroblocks accessed during the page access is then performed, so as to obtain some of the corresponding pixels of the predictor macroblock.
Type: Grant
Filed: May 18, 1999
Date of Patent: May 20, 2003
Assignee: STMicroelectronics SA
Inventor: Richard Bramley
-
Patent number: 6567901
Abstract: A processor of a system initiates memory read transactions on a bus and provides information regarding the speculative nature of the transaction. A bus device, such as a memory controller, then receives and processes the transaction, placing the request in a queue to be serviced in an order dependent upon the relative speculative nature of the request. In addition, the processor, upon receipt of an appropriate signal, cancels a speculative read that is no longer needed or upgrades a speculative read that has become non-speculative.
Type: Grant
Filed: February 29, 2000
Date of Patent: May 20, 2003
Assignee: Hewlett Packard Development Company, L.P.
Inventor: E. David Neufeld
-
Patent number: 6564309
Abstract: The present invention relates to a processor including at least one memory access unit for presenting a read or write address over an address bus of a memory in response to the execution of a read or write instruction, and an arithmetic and logic unit operating in parallel with the memory access unit and arranged at least to present data on the data bus of the memory while the memory access unit presents a write address. The processor includes a write address queue in which each write address provided by the memory access unit is stored while awaiting the availability of the data to be written.
Type: Grant
Filed: April 6, 1999
Date of Patent: May 13, 2003
Assignee: STMicroelectronics S.A.
Inventor: Didier Fuin
-
Patent number: 6564285
Abstract: A flash memory chip that can be switched into four different read modes is described. In asynchronous flash mode, the flash memory is read as a standard flash memory. In synchronous flash mode, a clock signal is provided to the flash chip and a series of addresses belonging to a data burst are specified, one address per clock period. The data stored at the specified addresses are output sequentially during subsequent clock periods. In asynchronous DRAM mode, the flash memory emulates DRAM. In synchronous DRAM mode, the flash memory emulates synchronous DRAM.
Type: Grant
Filed: June 14, 2000
Date of Patent: May 13, 2003
Assignee: Intel Corporation
Inventors: Duane R. Mills, Brian Lyn Dipert, Sachidanandan Sambandan, Bruce McCormick, Richard D. Pashley
-
Patent number: 6557086
Abstract: A memory control system includes a frame memory divided into N image memories. Serial input image data are sequentially written into the N image memories in rotation. Then, image data is concurrently read from each of the N image memories depending on a desired read position to produce N image data in parallel. The N image data are sorted to produce consecutive N image data in parallel.
Type: Grant
Filed: November 15, 2000
Date of Patent: April 29, 2003
Assignee: NEC Viewtechnology, Ltd.
Inventor: Youichi Tamura
-
Patent number: 6557090
Abstract: Column addresses are generated by a burst controller that includes respective latches for the three low-order bits of a column address. The two higher-order bits of the latched address bits and their complements are applied to respective first multiplexers along with respective bits from a burst counter. The first multiplexers apply the latched address bits and their complements to respective second multiplexers during the first bit of a burst access, and bits from the burst counter during the remaining bits of the burst. The second multiplexers are operable, responsive to a control signal, to couple either the latched address bits or their complements to respective outputs for use as an internal address. The control signal is generated by an adder logic circuit that receives the two low-order bits of the column address.
Type: Grant
Filed: March 9, 2001
Date of Patent: April 29, 2003
Assignee: Micron Technology, Inc.
Inventor: Duc V. Ho
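Burst controllers like this one exist to produce the wrapped column-address sequence of an SDRAM burst. The sketch below does not reproduce the patented latch/mux/adder circuit; it computes the same kind of sequence arithmetically, using the standard sequential and interleaved burst orderings (an assumption drawn from common SDRAM practice, not from the patent text).

```python
def burst_addresses(start, burst_len=8, interleaved=False):
    """Return the column addresses of one burst, wrapping within the burst boundary."""
    base = start & ~(burst_len - 1)     # address of the aligned burst boundary
    offset = start & (burst_len - 1)    # low-order bits: position within the burst
    if interleaved:
        # Interleaved order XORs the burst counter with the starting offset.
        return [base | (offset ^ i) for i in range(burst_len)]
    # Sequential order increments the offset and wraps within the boundary.
    return [base | ((offset + i) & (burst_len - 1)) for i in range(burst_len)]
```

A burst starting mid-boundary (e.g. column 2 with burst length 4) visits 2, 3, then wraps to 0, 1, which is why only the low-order bits need latching: the high-order column bits never change during the burst.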
-
Patent number: 6557085
Abstract: The circuit handles access contentions in memories with a plurality of mutually independent, addressable I/O ports. There are provided two subcircuits, namely the so-called contention identification circuit and the so-called access inhibit circuit. The contention identification circuit identifies an access contention between two or more ports and generates a status signal. This status signal is communicated to the access inhibit circuit. The access inhibit circuit allocates a priority to each of the ports which are involved in the access contention. Based on the prioritization, the highest prioritized port is enabled, while the remaining ports are inhibited (temporarily disabled). The prioritization proceeds according to a predetermined algorithm. Two specific prioritization algorithms are given, namely a simple so-called PIH algorithm, in which the ports are hierarchically designated, and a so-called “fair” IPIH algorithm.
Type: Grant
Filed: September 16, 1998
Date of Patent: April 29, 2003
Assignee: Infineon Technologies AG
Inventor: Hans-Jürgen Mattausch
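The two prioritization styles named above can be sketched as software arbiters. The fixed hierarchy models the PIH idea (lowest-numbered port always wins); the rotating scheme is only loosely in the spirit of the "fair" IPIH algorithm, whose exact rules the abstract does not give.

```python
def arbitrate_pih(requests):
    """Fixed-hierarchy (PIH-style) arbitration: the lowest-numbered
    requesting port is enabled; all other requesters are inhibited."""
    winner = min(requests)
    inhibited = sorted(p for p in requests if p != winner)
    return winner, inhibited

class FairArbiter:
    """Rotating-priority sketch: each winner is demoted to lowest
    priority so every contending port eventually gets access."""
    def __init__(self, n_ports):
        self.order = list(range(n_ports))   # front = highest priority
    def arbitrate(self, requests):
        winner = min(requests, key=self.order.index)
        self.order.remove(winner)
        self.order.append(winner)           # demote the winner
        return winner
```

Under the fixed hierarchy, port 0 can starve the others; the rotating variant prevents that at the cost of extra state.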
-
Patent number: 6553450
Abstract: Providing electrical isolation between the chipset and the memory data is disclosed. The disclosure includes providing at least one buffer in a memory interface between a chipset and memory modules. Each memory module includes a plurality of memory ranks. The at least one buffer allows the memory interface to be split into first and second sub-interfaces. The first sub-interface is between the chipset and the buffer. The second sub-interface is between the buffer and the memory modules. The method also includes interleaving output of the memory ranks in the memory modules, and configuring the at least one buffer to properly latch data being transferred between the chipset and the memory modules. The first and second sub-interfaces operate independently but in synchronization with each other.
Type: Grant
Filed: September 18, 2000
Date of Patent: April 22, 2003
Assignee: Intel Corporation
Inventors: Jim M. Dodd, Michael W. Williams, John B. Halbert, Randy M. Bonella, Chung Lam
-
Patent number: 6553449
Abstract: A system and method for providing concurrent column and row operations in a memory system is provided. The memory system includes a memory controller, a plurality of memory devices, and communication paths between the memory controller and the plurality of memory devices. The memory controller is coupled to each memory device through a communication path that provides a column chip select signal to the memory device and a communication path that provides a row chip select signal to the memory device. The dual chip select signals allow a column operation to be carried out in the memory device simultaneously with a row operation in the memory device.
Type: Grant
Filed: September 29, 2000
Date of Patent: April 22, 2003
Assignee: Intel Corporation
Inventors: James M. Dodd, Michael W. Williams
-
Patent number: 6549994
Abstract: The present invention relates to a semiconductor memory device capable of performing a write operation 1 or 2 cycles after receiving a write command without necessitating a dead cycle. The elimination of the dead cycle between read and write operations improves bus efficiency and thus, speed. The memory device of the present invention includes an address input control means for receiving an external write or read address and delaying the write address by either 1 or 2 cycles. A data input control means receives external write data and delays the write data by a first or second predetermined number of cycles according to the write mode. A data transmission control means transmits the delayed write data responsive to a predetermined set of input commands. The data input control means reads the data from a cell corresponding to the read address, provides the write data to a cell corresponding to the write address, and writes the transmitted delayed data into the cell corresponding to the write address.
Type: Grant
Filed: July 1, 1999
Date of Patent: April 15, 2003
Assignee: Samsung Electronics Co., Ltd.
Inventor: Yong-Hwan Noh
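The delayed-write idea above can be modeled behaviorally: a write's address and data are buffered and committed a fixed number of cycles later, so the bus need not insert a dead cycle between a write and a following read. The buffer-forwarding read path below is an assumption added to keep the model consistent, not a claim from the abstract.

```python
class DelayedWriteMemory:
    """Behavioral sketch of a delayed (posted) write scheme.

    Writes are queued and committed `delay` cycles later; reads first
    check the pending-write buffer so they always see the newest data.
    """
    def __init__(self, delay=2):
        self.cells = {}
        self.pending = []          # (commit_cycle, addr, data), in order
        self.delay = delay
        self.cycle = 0
    def tick(self):
        """Advance one clock cycle and commit any due writes."""
        self.cycle += 1
        while self.pending and self.pending[0][0] <= self.cycle:
            _, addr, data = self.pending.pop(0)
            self.cells[addr] = data
    def write(self, addr, data):
        self.pending.append((self.cycle + self.delay, addr, data))
    def read(self, addr):
        # forward from the pending buffer (newest matching write wins)
        for _, a, d in reversed(self.pending):
            if a == addr:
                return d
        return self.cells.get(addr)
```

A read issued on the cycle right after the write still returns the new data, which is the property that removes the dead cycle.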
-
Patent number: 6546470
Abstract: A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized.
Type: Grant
Filed: March 12, 2001
Date of Patent: April 8, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, James Stephen Fields, Jr., Sanjeev Ghai, Guy Lynn Guthrie, Jody B. Joyner
-
Patent number: 6546469
Abstract: A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized.
Type: Grant
Filed: March 12, 2001
Date of Patent: April 8, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, James Stephen Fields, Jr., Sanjeev Ghai, Guy Lynn Guthrie, Jody B. Joyner
-
Patent number: 6546468
Abstract: A multiprocessor computer system in which snoop operations of the caches are synchronized to allow the issuance of a cache operation during a cycle which is selected based on the particular manner in which the caches have been synchronized. Each cache controller is aware of when these synchronized snoop tenures occur, and can target these cycles for certain types of requests that are sensitive to snooper retries, such as kill-type operations. The synchronization may set up a priority scheme for systems with multiple interconnect buses, or may synchronize the refresh cycles of the DRAM memory of the snooper's directory. In another aspect of the invention, windows are created during which a directory will not receive write operations (i.e., the directory is reserved for only read-type operations). The invention may be implemented in a cache hierarchy which provides memory arranged in banks, the banks being similarly synchronized.
Type: Grant
Filed: December 27, 2000
Date of Patent: April 8, 2003
Assignee: International Business Machines Corporation
Inventors: Ravi Kumar Arimilli, James Stephen Fields, Jr., Sanjeev Ghai, Guy Lynn Guthrie, Jody B. Joyner
-
Patent number: 6539440
Abstract: According to the present invention, a method for very fast calculation of the earliest command issue time for a new command issued by a memory controller is disclosed. The memory controller includes N page status registers, each of which includes four page timers, such that each of the page timers stores the period of time between the last command issued to the particular page and a predicted next access to the memory, where the next access to the same page can be “close”, “open”, “write” or “read”. When an incoming new command is received, it is determined how long the particular page access has to wait before issue. The new command selects the appropriate contents of a command timing lookup table. A new time value, which has to elapse between the new command and a possible next access to the same page, is written into the appropriate page timers.
Type: Grant
Filed: November 12, 1999
Date of Patent: March 25, 2003
Assignee: Infineon AG
Inventors: Henry Stracovsky, Piotr Szabelski
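The timer-and-lookup-table scheme above can be sketched in software. The timing values in the table below are made up for illustration; only the structure (four timers per page, reloaded from a table row selected by the issued command) follows the abstract.

```python
# Hypothetical timing table: minimum cycles between a just-issued command
# (row) and each possible next access to the same page (column).
TIMING_TABLE = {
    "open":  {"open": 0, "close": 3, "read": 2, "write": 2},
    "read":  {"open": 0, "close": 4, "read": 1, "write": 3},
    "write": {"open": 0, "close": 5, "read": 4, "write": 1},
    "close": {"open": 2, "close": 0, "read": 0, "write": 0},
}

class PageStatus:
    """Four per-page timers giving the earliest cycle at which each kind
    of next access ('open', 'close', 'read', 'write') may issue."""
    def __init__(self):
        self.earliest = {k: 0 for k in ("open", "close", "read", "write")}
    def wait_for(self, cmd, now):
        """Cycles the incoming command must wait before it can issue."""
        return max(0, self.earliest[cmd] - now)
    def issue(self, cmd, now):
        """Issue `cmd` at cycle `now` and reload the page timers from the
        lookup-table row selected by the new command."""
        for nxt, gap in TIMING_TABLE[cmd].items():
            self.earliest[nxt] = max(self.earliest[nxt], now + gap)
```

Computing the wait is a single table lookup and subtraction per command, which is what makes the calculation fast.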
-
Patent number: 6532523
Abstract: Apparatus for processing memory access requests includes first and second state machines for controlling access to first and second memory banks and an arbiter. While the first state machine is processing a current memory access request for the first memory bank, the arbiter receives a next memory access request and determines whether the next memory access request will interfere with the processing of the current memory access request. If no interference will occur, and if the next access request is directed to the second memory bank, the second state machine begins processing the next memory access request before completion of processing of the current memory access request. The second state machine begins processing of the next memory access request during a mandatory wait period implemented by the first state machine. The first and second state machines process the current and next memory access requests concurrently.
Type: Grant
Filed: October 13, 1999
Date of Patent: March 11, 2003
Assignee: Oak Technology, Inc.
Inventor: Ramesh Mogili
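The overlap rule described above, that a request for the other bank may start during the current request's mandatory wait period while a same-bank request must stall, can be sketched with a simple per-bank timeline. The cycle counts are illustrative assumptions.

```python
class BankArbiter:
    """Sketch of the two-state-machine scheme: each bank tracks when it
    becomes free; requests to different banks overlap, requests to the
    same bank serialize."""
    def __init__(self, n_banks=2, wait=2, access=3):
        self.busy_until = [0] * n_banks   # cycle at which each bank is free
        self.wait = wait                  # mandatory wait period (assumed)
        self.access = access              # access time (assumed)
    def submit(self, bank, now):
        """Return the cycle at which the request actually starts."""
        # a same-bank request stalls until the bank is free; a request to
        # another bank starts immediately, overlapping the wait period
        start = max(now, self.busy_until[bank])
        self.busy_until[bank] = start + self.wait + self.access
        return start
```

Two back-to-back requests to different banks start one cycle apart, whereas a same-bank follow-up waits for the full wait-plus-access time.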
-
Patent number: 6532524
Abstract: An apparatus comprising a first compare circuit, a second compare circuit and a memory. The first compare circuit may be configured to present a first match signal in response to a first address and a second address. The second compare circuit may be configured to present a second match signal in response to the first match signal, a first write enable signal and a second write enable signal. The memory may also be configured to present the first and second write enable signals. In one example, the memory may be configured to store and retrieve data with zero waiting cycles in response to the second match signal.
Type: Grant
Filed: March 30, 2000
Date of Patent: March 11, 2003
Assignee: Cypress Semiconductor Corp.
Inventors: Junfei Fan, Jeffery Scott Hunt
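One common use of an address-compare plus write-enable match signal is to bypass the memory array when a read targets the address being written in the same cycle, so the access completes with zero waiting cycles. The sketch below illustrates that general match-and-bypass idea; it is an interpretation, not the specific circuit claimed.

```python
class BypassRAM:
    """Match-and-bypass sketch: when a read address equals a concurrent
    write address (and write enable is asserted), the write data is
    forwarded straight to the read port with zero waiting cycles."""
    def __init__(self):
        self.cells = {}
    def access(self, read_addr, write_addr=None, write_data=None, we=False):
        # address compare gated by the write enable -> the match signal
        match = we and (read_addr == write_addr)
        if we:
            self.cells[write_addr] = write_data
        # on a match, bypass the array and forward the write data directly
        return write_data if match else self.cells.get(read_addr)
```

Without the bypass, a same-cycle read of a written address would return stale data or require an extra wait cycle.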
-
Patent number: 6532525
Abstract: A specific embodiment is disclosed for a method and apparatus for processing data access requests from a requesting device, such as a graphics processor device. Data access commands are provided at a first rate, for example 200M commands per second, to a memory bridge. In response to receiving the access requests, the memory bridge will provide its own access requests to a plurality of memories at approximately the first rate. In response to the memory bridge requests, the plurality of memories will access a plurality of data at a second data rate. When the data access between the memory bridge and the memories is a read request, data is returned to the requesting device at a third data rate which is greater than the first data rate by approximately four times or more. Noise and power reduction techniques can be used on the data bus between the accessing device and the memory bridge.
Type: Grant
Filed: September 29, 2000
Date of Patent: March 11, 2003
Assignee: ATI Technologies, Inc.
Inventors: Milivoje Aleksic, Grigory Temkine, Oleg Drapkin, Carl Mizuyabu, Adrian Hartog
-
Patent number: 6529968
Abstract: In a computer system, an agent, a DMA controller and a memory controller are provided, each in communication with a bus. The DMA controller and the memory controller also can communicate with each other via a second communication path. The computer system may include a memory provided in communication with the memory controller having a coherent memory space and a non-coherent memory space. The DMA controller transfers a portion of data from the coherent memory space together with a portion of data from the non-coherent memory space in a single transaction on the external bus.
Type: Grant
Filed: December 21, 1999
Date of Patent: March 4, 2003
Assignee: Intel Corporation
Inventor: Andrew V. Anderson
-
Publication number: 20030037209
Abstract: A memory engine combines associative memory and random-access memory for enabling fast string search, insertion, and deletion operations to be performed on data and includes a memory device for temporarily storing the data as a string of data characters. A controller is utilized for selectively outputting one of a plurality of commands to the memory device and receives data feedback therefrom; the memory device inspects data characters in the string in accordance with the commands outputted by the controller. A clock device is also utilized for outputting a clock signal comprised of a predetermined number of clock cycles per second to the memory device and the controller, the memory device inspecting and selectively manipulating one of the data characters within one of the clock cycles.
Type: Application
Filed: August 10, 2001
Publication date: February 20, 2003
Inventors: Gheorghe Stefan, Dominique Thiebaut
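A toy software model can show the three operations the engine supports, with one character inspected or manipulated per clock cycle. The command set, class name, and cycle accounting are illustrative assumptions; the actual engine performs these steps in associative hardware rather than a sequential loop.

```python
class StringEngine:
    """Toy model of a string engine supporting search, insertion, and
    deletion, counting one inspected character per clock cycle."""
    def __init__(self, text):
        self.chars = list(text)
    def search(self, pattern):
        """Return (start index or None, clock cycles consumed)."""
        cycles = 0
        for i in range(len(self.chars) - len(pattern) + 1):
            for j, c in enumerate(pattern):
                cycles += 1              # one character inspected per clock
                if self.chars[i + j] != c:
                    break
            else:
                return i, cycles
        return None, cycles
    def insert(self, pos, s):
        self.chars[pos:pos] = list(s)
    def delete(self, pos, n):
        del self.chars[pos:pos + n]
```

The associative-memory hardware would inspect many positions in parallel; the cycle counter here only makes the one-character-per-cycle accounting of the abstract concrete.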