Memory Access Pipelining Patents (Class 711/169)
-
Patent number: 6651134
Abstract: An integrated circuit comprising a memory and a logic circuit. The memory may comprise a plurality of storage elements each configured to read and write data in response to an internal address signal. The logic circuit may be configured to generate a predetermined number of the internal address signals in response to (i) an external address signal, (ii) a clock signal and (iii) one or more control signals. The generation of the predetermined number of internal address signals may be non-interruptible.
Type: Grant
Filed: February 14, 2000
Date of Patent: November 18, 2003
Assignee: Cypress Semiconductor Corp.
Inventor: Cathal G. Phelan
-
Patent number: 6647456
Abstract: A memory controller system is provided including a plurality of memory controller subsystems each coupled between memory and one of a plurality of computer components. Each memory controller subsystem includes at least one queue for managing pages in the memory. In use, each memory controller subsystem is capable of being loaded from the associated computer component independent of the state of the memory. Since high bandwidth and low latency are conflicting requirements in high performance memory systems, the present invention separates references from various computer components into multiple command streams. Each stream thus can hide precharge and activate bank preparation commands within its own stream for maximum bandwidth.
Type: Grant
Filed: February 23, 2001
Date of Patent: November 11, 2003
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Nicholas J. Foskett, Brad Simeral, Sean Treichler
-
Patent number: 6647470
Abstract: A system and method for decreasing the memory access time by determining if data will be written directly to the array or be posted through a data buffer on a per command basis is disclosed. A memory controller determines if data to be written to a memory array, such as a DRAM array, is either written directly to the array or posted through a data buffer on a per command basis. If the controller determines that a write command is going to be followed by another write command, the data associated with the first write command will be written directly into the memory array without posting the data in the buffer. If the controller determines that a write command will be followed by a read command, the data associated with the write command will be posted in the data buffer, allowing the read command to occur with minimal delay, and the posted data will then be written into the array when the internal I/O lines are no longer being used to execute the read command.
Type: Grant
Filed: August 21, 2000
Date of Patent: November 11, 2003
Assignee: Micron Technology, Inc.
Inventor: Jeffrey W. Janzen
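The per-command write-posting decision can be sketched as a small scheduler. The Python model below is illustrative only, not the patented circuit; the command format and decision labels are assumptions.

```python
# Hypothetical model of per-command write posting: a write followed by a
# read is posted in a buffer; a write followed by another write goes
# directly to the array; the posted write drains after the read completes.

def schedule(commands):
    """commands: list of ("read", addr) or ("write", addr, data) tuples.
    Returns a list of (decision, addr) pairs."""
    decisions = []
    posted = None  # (addr, data) currently held in the data buffer
    for i, cmd in enumerate(commands):
        nxt = commands[i + 1][0] if i + 1 < len(commands) else None
        if cmd[0] == "write":
            if nxt == "read":
                posted = cmd[1:]                      # post: don't delay the read
                decisions.append(("post", cmd[1]))
            else:
                decisions.append(("direct", cmd[1]))  # write straight to the array
        else:
            decisions.append(("read", cmd[1]))
            if posted is not None:                    # I/O lines free again: drain
                decisions.append(("drain", posted[0]))
                posted = None
    return decisions
```

A write-write-read sequence produces `direct`, `post`, `read`, `drain`, matching the behavior the abstract describes.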
-
Patent number: 6647485
Abstract: A high-performance, superscalar-based computer system with out-of-order instruction execution for enhanced resource utilization and performance throughput. The computer system fetches a plurality of fixed length instructions with a specified, sequential program order (in-order). The computer system includes an instruction execution unit including a register file, a plurality of functional units, and an instruction control unit for examining the instructions and scheduling the instructions for out-of-order execution by the functional units. The register file includes a set of temporary data registers that are utilized by the instruction execution control unit to receive data results generated by the functional units. The data results of each executed instruction are stored in the temporary data registers until all prior instructions have been executed, thereby retiring the executed instruction in-order.
Type: Grant
Filed: May 10, 2001
Date of Patent: November 11, 2003
Assignee: Seiko Epson Corporation
Inventors: Le Trong Nguyen, Derek J. Lentz, Yoshiyuki Miyayama, Sanjiv Garg, Yasuaki Hagiwara, Johannes Wang, Te-Li Lau, Sze-Shun Wang, Quang H. Trang
-
Patent number: 6647478
Abstract: A semiconductor memory device. The device includes a bit line, a memory cell coupled to the bit line and a word line coupled to the memory cell. A first time between the receiving of a read command for a read operation in order to read data from the memory cell and the beginning of the read operation is different from a second time between the receiving of a write command for a write operation in order to write data to the memory cell and the beginning of the write operation.
Type: Grant
Filed: June 20, 2002
Date of Patent: November 11, 2003
Assignee: Kabushiki Kaisha Toshiba
Inventors: Kenji Tsuchida, Haruki Toda, Hitoshi Kuyama
-
Publication number: 20030208666
Abstract: A memory controller which has multiple stages of pipelining. A request buffer is used to hold the memory request from the processor and peripheral devices. The request buffer comprises a set of rotational registers that holds the address, the type of transfer and the count for each request. The pipeline includes a decode stage, a memory address stage, and a data transfer stage. Each stage of the pipeline has a pointer to the request buffer. As each stage completes its processing, a state machine updates the pointer for each of the stages to reference a new memory request which needs to be processed.
Type: Application
Filed: June 17, 2003
Publication date: November 6, 2003
Inventor: Joseph Jeddeloh
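The stage-pointer scheme can be approximated in a few lines. This is a hedged Python sketch under simplifying assumptions: the rotational registers are modeled as a plain list, and per-stage completion counters stand in for the pointers.

```python
# Illustrative three-stage pipelined controller: each stage may advance
# by one request per clock, but only once the stage feeding it has
# finished with that request. Class and field names are assumptions.

class PipelinedController:
    def __init__(self):
        self.buf = []      # request buffer: (address, transfer type, count)
        self.done = {"decode": 0, "address": 0, "data": 0}
        self.log = []      # (stage, address) events, in issue order

    def submit(self, addr, kind, count):
        self.buf.append((addr, kind, count))

    def step(self):
        # Snapshot availability first, so a request moves at most one
        # stage per clock (true pipelining, not fall-through).
        avail = {"decode": len(self.buf),
                 "address": self.done["decode"],
                 "data": self.done["address"]}
        for s in ("data", "address", "decode"):
            if self.done[s] < avail[s]:
                self.log.append((s, self.buf[self.done[s]][0]))
                self.done[s] += 1
```

Two requests overlap as expected: while request A is in the address stage, request B is being decoded.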
-
Publication number: 20030208665
Abstract: A processor may use a cache hit/miss prediction table (CPT) to predict whether a load will hit or miss and use this information to schedule dependent instructions in the instruction pipeline. The CPT may be a Bloom filter which uses a portion of the load address to index the table.
Type: Application
Filed: May 1, 2002
Publication date: November 6, 2003
Inventors: Jih-Kwon Peir, Konrad Lai
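A counting Bloom filter indexed by part of the load address can track cache residency for such predictions: a zero counter proves the line is absent, while all-nonzero counters mean "probably resident." The sketch below is illustrative, not the patented CPT; the table size, line size, and hash functions are arbitrary choices.

```python
# Hypothetical counting-Bloom-filter hit/miss predictor. Fills increment
# and evictions decrement the counters; prediction checks all slots.

class HitMissPredictor:
    def __init__(self, size=256, line=64):
        self.counts = [0] * size
        self.line = line

    def _slots(self, addr):
        tag = addr // self.line                  # drop the line-offset bits
        h1 = tag % len(self.counts)
        h2 = ((tag ^ (tag >> 7)) * 31) % len(self.counts)
        return [h1, h2]

    def on_fill(self, addr):                     # line brought into the cache
        for s in self._slots(addr):
            self.counts[s] += 1

    def on_evict(self, addr):                    # line removed from the cache
        for s in self._slots(addr):
            self.counts[s] -= 1

    def predict_hit(self, addr):
        # Any zero counter guarantees a miss; otherwise predict a hit.
        return all(self.counts[s] > 0 for s in self._slots(addr))
```

False "hit" predictions are possible under hash collisions, but a predicted miss is always a real miss, which is the property that makes the structure safe for scheduling dependent instructions early.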
-
Patent number: 6643744
Abstract: An audio system includes a memory storing audio data and an audio signal processor for processing the audio data. Addressing circuitry addresses the memory and a pre-fetch storage area stores data for a current address and for one or more following addresses to hide memory access latency during address changes of the addressing circuitry.
Type: Grant
Filed: August 23, 2000
Date of Patent: November 4, 2003
Assignee: Nintendo Co., Ltd.
Inventor: Howard H. Cheng
-
Patent number: 6636956
Abstract: The invention provides a method and system for memory management, in which at least some individual nodes in a hybrid trie are striped across a set of pipelined memories. Memory management is performed for a hybrid trie including both branch-search nodes and leaf-search nodes and maintained in a sequence of pipelined memories. The method provides for insertion and removal of data elements within the hybrid trie and for storing at least some of the nodes in stripes across a sequence of the memories. Memory management is performed for the leaf-search nodes by selecting stripes, from the possible subsequences of those memories, that are suited to pipelined operations performed on the memories. In a preferred embodiment, an invariant condition is maintained for families of those stripes, in which exactly one cell block is labeled “sparse” and that cell block is used in techniques for allocation and de-allocation of entries.
Type: Grant
Filed: July 6, 2001
Date of Patent: October 21, 2003
Assignee: Cypress Semiconductor Corp.
Inventors: Srinivasan Venkatachary, Pankaj Gupta, Anand Rangarajan
-
Publication number: 20030196059
Abstract: A memory controller for a high-performance memory system has a pipeline architecture for generating control commands which satisfy logical, timing, and physical constraints imposed on control commands by the memory system. The pipelined memory controller includes a bank state cache lookup for determining a memory bank state for a target memory bank to which a control command is addressed, and a hazard detector for determining when a memory bank does not have a proper memory bank state for receiving and processing the control command. The hazard detector stalls the operation of the control command until the memory bank is in a proper state for receiving and processing the control command. The memory controller also has a command sequencer which sequences control commands to satisfy logical constraints imposed by the memory system, and a timing coordinator to time the communication of the sequenced control commands to satisfy timing requirements imposed by the memory.
Type: Application
Filed: May 27, 2003
Publication date: October 16, 2003
Applicant: Rambus Inc.
Inventors: Ramprasad Satagopan, Richard M. Barth
-
Publication number: 20030196058
Abstract: A memory system for operation with a processor, such as a digital signal processor, includes a high speed pipelined memory, a store buffer for holding store access requests from the processor, a load buffer for holding load access requests from the processor, and a memory control unit for processing access requests from the processor, from the store buffer and from the load buffer. The memory control unit may include prioritization logic for selecting access requests in accordance with a priority scheme and bank conflict logic for detecting and handling conflicts between access requests. The pipelined memory may be configured to output two load results per clock cycle at very high speed.
Type: Application
Filed: April 11, 2002
Publication date: October 16, 2003
Inventors: Hebbalalu S. Ramagopal, Murali S. Chinnakonda, Thang M. Tran
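The selection step might look like the following sketch. The fixed processor-over-load-over-store ordering and the bank-conflict test are assumptions for illustration; the abstract does not specify the priority scheme.

```python
# Hypothetical prioritization + bank-conflict logic: scan the request
# sources in priority order and pick the first request whose target
# bank is not currently busy.

def select_access(processor, load_buf, store_buf, busy_banks):
    """Each source is a list of (bank, addr) requests; busy_banks is a
    set of bank numbers. Returns (source_name, request) or None."""
    for name, queue in (("processor", processor),
                        ("load", load_buf),
                        ("store", store_buf)):
        for req in queue:
            if req[0] not in busy_banks:     # bank conflict check
                return (name, req)
    return None                              # every candidate conflicts
```

When the highest-priority request targets a busy bank, a lower-priority request to a free bank proceeds instead, keeping the pipelined memory occupied.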
-
Publication number: 20030196034
Abstract: A memory circuit (14) having features specifically adapted to permit the memory circuit (14) to serve as a video frame memory is disclosed. The memory circuit (14) contains a dynamic random access memory array (24) with buffers (18, 20) on input and output data ports (22) thereof to permit asynchronous read, write and refresh accesses to the memory array (24). The memory circuit (14) is accessed both serially and randomly. An address generator (28) contains an address buffer register (36) which stores a random access address and an address sequencer (40) which provides a stream of addresses to the memory array (24). An initial address for the stream of addresses is the random access address stored in the address buffer register (36).
Type: Application
Filed: May 30, 2003
Publication date: October 16, 2003
Inventors: Masashi Hashimoto, Gene A. Frantz, John Victor Moravec, Jean-Pierre Dolait
-
Patent number: 6633972
Abstract: A system and method for substituting dynamic pipelines with static queues in a pipelined processor. The system and method provide a reduction in power consumption and clock distribution, as well as other advantages.
Type: Grant
Filed: June 7, 2001
Date of Patent: October 14, 2003
Assignee: Intel Corporation
Inventor: Victor Konrad
-
Publication number: 20030191919
Abstract: A volume control system in a storage area network environment, which is capable of executing volume operation requests in multiple stages, and is also capable of canceling an erroneous volume operation even after the operation has been executed by the user. A volume operation request for executing run commands from the user is analyzed, and parameters for executing the volume operation request are created by a program that operates the storage apparatus. Each of the parameters is provided with an attribute indicating as to whether cancellation is possible or as to when the execution is to be performed. If cancellation of the run command is possible, a parameter for canceling the run command is created simultaneously. The run commands may be executed on the basis of the parameters after verifying by a simulation as to whether or not the various parameters created are correct. At this point, the user may select whether to execute the run commands promptly or in stages.
Type: Application
Filed: November 21, 2002
Publication date: October 9, 2003
Applicant: HITACHI, LTD.
Inventors: Yoshitaka Sato, Hiroshi Nojima, Nobuyuki Yamashita, Tatsundo Aoshima
-
Patent number: 6631444
Abstract: Architecture for a cache fabricated on a die with a processor including a plurality of cache banks, each containing a plurality of storage cell subarrays, the cache banks being arranged in physical relationship to a central location on the die that provides a point for information transfer between the processor and the cache. A data path provides synchronous transmission of data to/from the cache banks such that data requested by the processor in a given clock cycle is received at the central location a predetermined number of clock cycles later regardless of which cache bank the data is stored in.
Type: Grant
Filed: June 27, 2001
Date of Patent: October 7, 2003
Assignee: Intel Corporation
Inventors: Kenneth R. Smits, Bharat Bhushan, Mahadevamurty Nemani
-
Patent number: 6629226
Abstract: An interface coupled to a multiqueue storage device and configured to interface the multiqueue storage device with one or more handshaking signals. The multiqueue storage device and the interface may be configured to transfer variable size data packets.
Type: Grant
Filed: December 8, 2000
Date of Patent: September 30, 2003
Assignee: Cypress Semiconductor Corp.
Inventors: Somnath Paul, S. Babar Raza
-
Patent number: 6629223
Abstract: An apparatus and method for using self-timing logic to make at least two accesses to a memory core in one clock cycle is disclosed. In one embodiment of the invention, a memory wrapper (28) incorporating self-timing logic (36) and a mux (32) is used to couple a single access memory core (30) to a memory interface unit (10). The memory interface unit (10) couples a central processing unit (12) to the memory wrapper (28). The self-timing architecture as applied to multi-access memory wrappers avoids the need for calibration. Moreover, the self-timing architecture provides for a full dissociation between the environment (what is clocked on the system clock) and the access to the core. A beneficial result of the invention is making accesses at the speed of the core while processing several accesses in one system clock cycle.
Type: Grant
Filed: October 1, 1999
Date of Patent: September 30, 2003
Assignee: Texas Instruments Incorporated
Inventors: Jean-Marc Philippe Bachot, Eric Badi
-
Patent number: 6629160
Abstract: The present invention provides a direct memory access controller used for carrying out a direct memory access transfer of data from a first memory to a second memory, wherein the direct memory access controller has a modulo address arithmetic unit for executing a modulo adjustment to transfer addresses of the first memory by computing a top transfer address of a next transfer datum which should be transferred following a previously transferred datum which has been stored at a higher address than an address at which the next transfer datum is stored in the first memory, so that the direct memory access controller allows a continuous transfer of all of required data.
Type: Grant
Filed: June 13, 2000
Date of Patent: September 30, 2003
Assignee: NEC Electronics Corporation
Inventor: Yutaka Morita
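Modulo address arithmetic of this kind can be illustrated with a circular-buffer address generator. This is a sketch; the interface is assumed, not taken from the patent.

```python
# Hypothetical modulo address generator for DMA: successive transfer
# addresses wrap around inside a fixed-length buffer instead of running
# past its end, so the transfer continues from the buffer's start.

def modulo_addresses(base, start, length, count, step=1):
    """Return `count` addresses beginning at `start`, wrapping within
    the `length`-byte buffer that begins at `base`."""
    addrs = []
    offset = start - base
    for _ in range(count):
        addrs.append(base + offset)
        offset = (offset + step) % length   # the modulo adjustment
    return addrs
```

Starting near the end of a 16-byte buffer at 0x100, the stream wraps back to the buffer base, which is the "continuous transfer" behavior the abstract describes.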
-
Patent number: 6625707
Abstract: Speculative memory commands are prepared for reduced latency. A system memory read request is sent for preparing a main memory read command and for performing a cache lookup. The main memory read command is prepared independent from the performance of the cache lookup.
Type: Grant
Filed: June 25, 2001
Date of Patent: September 23, 2003
Assignee: Intel Corporation
Inventor: David S. Bormann
-
Publication number: 20030177335
Abstract: In a computer processor, multiple partially translated real addresses for a pipelined operation are compared with the real addresses of one or more other operations in the pipeline to detect an address conflict, without waiting for the address translation mechanism to fully translate the real address. Preferably, if a match is found, it is assumed that an address conflict exists, and the pipeline is stalled one or more cycles to maintain data integrity in the event of an actual address conflict. Preferably, the CPU has caches which are addressed using real addresses, and an N-way translation lookaside buffer (TLB) for determining the high-order portion of a real address. Each of the N real address portions in the TLB is compared with other operations in the pipeline, before determining which is the correct real address portion.
Type: Application
Filed: March 14, 2002
Publication date: September 18, 2003
Applicant: International Business Machines Corporation
Inventor: David Arnold Luick
-
Publication number: 20030177326
Abstract: In a computer processor, a low-order portion of a virtual address for a pipelined operation is compared directly with the corresponding low-order portions of addresses of operations below it in the pipeline to detect an address conflict, without first translating the address. Preferably, if a match is found, it is assumed that an address conflict exists, and the pipeline is stalled one or more cycles to maintain data integrity in the event of an actual address conflict. Preferably, the CPU has caches which are addressed using real addresses, and a translation lookaside buffer (TLB) for determining the high-order portion of a real address. The comparison of low-order address portions provides conflict detection before the TLB can translate a real address of an instruction.
Type: Application
Filed: March 14, 2002
Publication date: September 18, 2003
Applicant: International Business Machines Corporation
Inventor: David Arnold Luick
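The low-order comparison works because the page-offset bits of an address pass through translation unchanged, so they can be compared before the TLB produces the high-order real bits. A minimal sketch, assuming 4 KB pages:

```python
# Illustrative early conflict check: compare only the untranslated
# low-order (page-offset) bits against operations already in the
# pipeline. A match conservatively stalls; a mismatch proves no conflict.

PAGE_BITS = 12  # low-order bits unchanged by translation (4 KB pages)

def must_stall(virt_addr, pipeline_real_addrs):
    mask = (1 << PAGE_BITS) - 1
    lo = virt_addr & mask
    return any((a & mask) == lo for a in pipeline_real_addrs)
```

The check can flag a false conflict when two addresses share offset bits but land in different pages, which is exactly why the abstract calls the stall an assumption rather than a proof of conflict.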
-
Patent number: 6622217
Abstract: The present invention relates generally to a protocol engine for use in a multiprocessor computer system. The protocol engine, which implements a cache coherence protocol, includes a clock signal generator for generating signals denoting interleaved even clock periods and odd clock periods, a memory transaction state array for storing entries, each denoting the state of a respective memory transaction, and processing logic. The memory transactions are divided into even and odd transactions whose states are stored in distinct sets of entries in the memory transaction state array. The processing logic has interleaving circuitry for processing during even clock periods the even memory transactions and for processing during odd clock periods the odd memory transactions.
Type: Grant
Filed: June 11, 2001
Date of Patent: September 16, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Kourosh Gharachorloo, Luiz A. Barroso, Mosur K. Ravishankar, Robert J Stets, Jr., Andreas Nowatzyk
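The even/odd interleaving can be modeled as two queues serviced on alternating clock periods. An illustrative sketch, not the patented engine; the transaction ids and the dict layout are assumptions.

```python
# Hypothetical interleaving: even transactions run on even clock
# periods, odd transactions on odd ones; an idle slot yields None.

def interleave(transactions, n_cycles):
    """transactions: dict parity (0 or 1) -> list of transaction ids.
    Returns the (cycle, id) work done in each clock period."""
    idx = {0: 0, 1: 0}           # next transaction per parity class
    timeline = []
    for cycle in range(n_cycles):
        par = cycle % 2
        queue = transactions[par]
        if idx[par] < len(queue):
            timeline.append((cycle, queue[idx[par]]))
            idx[par] += 1
        else:
            timeline.append((cycle, None))
    return timeline
```

Because the two classes use distinct state entries and distinct clock periods, they never contend for the same processing slot.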
-
Patent number: 6622228
Abstract: A method for processing multiple memory requests in a pipeline. Each memory request is processed in part by a plurality of stages. In a first stage, the memory request is decoded. In a second stage, the address information for the memory request is processed. In a third stage, the data for the memory request is transferred. A request buffer is used to hold each of the memory requests during the processing of each of the memory requests.
Type: Grant
Filed: August 16, 2001
Date of Patent: September 16, 2003
Assignee: Micron Technology, Inc.
Inventor: Joseph Jeddeloh
-
Publication number: 20030172242
Abstract: An arbitration circuit adjusts timings of a write request signal from a first external device and a read request signal from a second external device. A RAM performs data write/data read in response to the external write request/read request. A next-state function is provided, which has a function to calculate a write address/read address to be input to the RAM in response to the external write request/read request, and a function to accurately count data stored in a FIFO.
Type: Application
Filed: March 3, 2003
Publication date: September 11, 2003
Inventors: Takuji Uneyama, Manabu Onozaki
-
Patent number: 6615333
Abstract: A data processing device has a circuit for correcting an effect of executing memory access instructions out of order with respect to one another in a pipeline. A detector detects whether a same memory location is addressed by a first and second memory address used to access memory for a first and second memory access instruction that are processing at a predetermined relative distance in the pipeline respectively. A correction circuit modifies data handling in a pipeline stage processing the first memory access instruction when the detector signals the addressing of the same memory location and the first and/or second memory access instruction programs a command to compensate said effect of out of order execution of the first memory access instruction with respect to said second memory access instruction.
Type: Grant
Filed: May 3, 2000
Date of Patent: September 2, 2003
Assignee: Koninklijke Philips Electronics N.V.
Inventors: Jan Hoogerbrugge, Alexander Augusteijn
-
Patent number: 6615326
Abstract: Methods and structure in a memory controller for sequencing memory device page activation commands to improve memory bandwidth utilization. In a synchronous memory device such as SDRAM or DDR SDRAM, an “activate” command precedes a corresponding “read” or “write” command to ensure that the page or row to be accessed by the “read” or “write” is available (“open”) for access. Latency periods between the activation of the page and the readiness of the page for the corresponding read or write command are heretofore filled with nop commands. The present invention looks ahead for subsequent read and write commands and inserts activation commands (hidden activates) in nop command periods of the SDRAM device to prepare a page in another bank for a read or write operation to follow. This sequencing of activate commands overlaps the required latency with current read or write burst operations.
Type: Grant
Filed: November 9, 2001
Date of Patent: September 2, 2003
Assignee: LSI Logic Corporation
Inventor: Shuaibin Lin
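The payoff of hiding activates can be seen with a back-of-envelope timing model. The cycle counts below are illustrative assumptions, not values from the patent.

```python
# Hypothetical timing model: a request needs 1 cycle for ACT, T_RCD
# cycles of activate-to-read latency, then T_BURST cycles of data burst.
# With hidden activates, the next bank's ACT issues during the current
# burst, so only the bursts themselves serialize.

T_RCD, T_BURST = 2, 4   # illustrative cycle counts

def cycles(n_requests, hidden):
    """Cycles to finish n read bursts to n different banks."""
    per_request = 1 + T_RCD + T_BURST      # ACT, wait, burst
    if not hidden:
        return n_requests * per_request    # nop-filled latency every time
    # Later ACTs are hidden inside earlier bursts.
    return per_request + (n_requests - 1) * max(T_BURST, 1 + T_RCD)
```

With these numbers, four requests take 28 cycles when each activate waits in the open, but only 19 when the activates overlap the preceding bursts.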
-
Publication number: 20030163659
Abstract: An electronic device for data processing may include p synchronous processor cores each respectively clocked by one of p clock signals all having a same period T and being phase-shifted by 2π/p relative to one another. The electronic device may further include a single access shared memory with an access time less than or equal to T/p. The memory may be clocked by an access signal with a period T/p and that is synchronous with the clock signals. The processor cores may sequentially and cyclically access the memory at consecutive intervals spaced apart in time with a period equal to T/p. The electronic device is particularly well suited for use in audio processors of digital versatile disk (DVD) decoders, for example.
Type: Application
Filed: February 20, 2003
Publication date: August 28, 2003
Applicant: STMicroelectronics S.A.
Inventor: Stephane Audrain
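The slot arithmetic is straightforward: a phase shift of 2π/p corresponds to T/p in time, so core k owns the memory slots at t = (k + m·p)·T/p. A small sketch of the resulting round-robin schedule (the function name and return shape are assumptions):

```python
# Illustrative access schedule for p phase-shifted cores sharing one
# single-access memory: one slot of width T/p per core, cyclically.

def access_schedule(p, T, n_slots):
    """Return (time, core) pairs for the first n_slots memory slots."""
    slot = T / p
    return [(m * slot, m % p) for m in range(n_slots)]
```

With p = 4 and T = 8, the memory serves a different core every 2 time units and returns to core 0 after one full period T.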
-
Publication number: 20030163654
Abstract: A method, system, and apparatus to schedule commands based on a status information of a plurality of memory banks.
Type: Application
Filed: February 22, 2002
Publication date: August 28, 2003
Inventors: Eliel Louzoun, Israel Herscovich
-
Patent number: 6611885
Abstract: A synchronous dynamic random access memory (“SDRAM”) operates with matching read and write latencies. To prevent data collision at the memory array, the SDRAM includes interim address and interim data registers that temporarily store write addresses and input data until an available interval is located where no read data or read addresses occupy the memory array. During the available interval, data is transferred from the interim data register to a location in the memory array identified by the address in the interim address register. In one embodiment, the SDRAM also includes address and compare logic to prevent reading incorrect data from an address to which the proper data has not yet been written. In another embodiment, a system controller monitors commands and addresses and inserts no operation commands to prevent such collision of data and addresses.
Type: Grant
Filed: October 9, 2001
Date of Patent: August 26, 2003
Assignee: Micron Technology, Inc.
Inventors: Kevin J. Ryan, Terry R. Lee
-
Publication number: 20030159013
Abstract: A method and system are shown for bypassing memory controller components when processing memory requests. A memory controller analyzes internal components to determine if any pending memory requests exist. If particular memory controller components are idle, a memory client is informed that a bypassing of memory controller components is possible. A bypass module of the memory controller receives memory requests from the memory client. The bypass module examines memory controller parameters and a configuration of main memory to determine which memory controller components may be bypassed and routes the memory request accordingly. In a system with asynchronous memory, the memory controller provides copies of the memory request through a dual pipeline. A first copy of the memory request is processed through a bypass module to attempt to bypass memory controller components. A second copy of the memory request is processed in a normal fashion in case a bypass of the memory access request is not possible.
Type: Application
Filed: February 19, 2002
Publication date: August 21, 2003
Inventors: Michael Frank, Santiago Fernandez-Gomez, Robert W. Laker, Aki Niimura
-
Patent number: 6604180
Abstract: A memory controller which has multiple stages of pipelining. A request buffer is used to hold the memory request from the processor and peripheral devices. The request buffer comprises a set of rotational registers that holds the address, the type of transfer and the count for each request. The pipeline includes a decode stage, a memory address stage, and a data transfer stage. Each stage of the pipeline has a pointer to the request buffer. As each stage completes its processing, a state machine updates the pointer for each of the stages to reference a new memory request which needs to be processed.
Type: Grant
Filed: July 11, 2002
Date of Patent: August 5, 2003
Assignee: Micron Technology, Inc.
Inventor: Joseph Jeddeloh
-
Publication number: 20030145173
Abstract: A method of parallel hardware-based multithreaded processing is described. The method includes assigning tasks for packet processing to programming engines and establishing pipelines between programming stages, which correspond to the programming engines. The method also includes establishing contexts for the assigned tasks on the programming engines and using a software controlled cache such as a CAM to transfer data between next neighbor registers residing in the programming engines.
Type: Application
Filed: January 25, 2002
Publication date: July 31, 2003
Inventors: Hugh M. Wilkinson, Mark B. Rosenbluth, Matthew J. Adiletta, Debra Bernstein, Gilbert Wolrich
-
Patent number: 6601136
Abstract: A media server system and process are disclosed that have device independent near-online storage support. A plurality of media assets are stored in online storage, and a plurality of media assets are stored on tertiary storage devices in tertiary storage to provide near-online storage. A media server, having access to the online storage and the tertiary storage, receives a user request for a media asset. The media server then determines whether the requested media asset needs to be loaded from the tertiary storage. If so, the media server allocates space in the online storage for the requested media asset. A transfer process specific to the tertiary storage devices is then used to transfer content of the requested media asset to the online storage.
Type: Grant
Filed: October 22, 2001
Date of Patent: July 29, 2003
Assignee: Kasenna, Inc.
Inventors: Lakshminarayanan Gunaseelan, Eliahu Lauris
-
Patent number: 6591342
Abstract: A memory disambiguation apparatus includes a store queue, a store forwarding buffer, and a version count buffer. The store queue includes an entry for each store instruction in the instruction window of a processor. Some store queue entries include resolved store addresses, and some do not. The store forwarding buffer is a set-associative buffer that has entries allocated for store instructions as store addresses are resolved. Each entry in the store forwarding buffer is allocated into a set determined in part by a subset of the store address. When the set in the store forwarding buffer is full, an older entry in the set is discarded in favor of the newly allocated entry. A version count buffer including an array of overflow indicators is maintained to track overflow occurrences. As load addresses are resolved for load instructions in the instruction window, the set-associative store forwarding buffer can be searched to provide memory disambiguation.
Type: Grant
Filed: December 14, 1999
Date of Patent: July 8, 2003
Assignee: Intel Corporation
Inventors: Haitham Akkary, Sebastien Hily
-
Patent number: 6591354
Abstract: A memory system including a memory array, an input circuit and a logic circuit is presented. The input circuit is coupled to receive a memory address and a set of individual write controls for each byte of a data word. During a write operation, the input circuit also receives the corresponding write data to be written into the SRAM. The logic circuit causes the write data and write control information to be stored in the input circuit for the duration of any sequential read operations immediately following the write operation and then to be read into memory during a subsequent write operation. During the read operation, data which is stored in the write data storage registers prior to being read into the memory can be read out from the memory system should the address of one or more read operations equal the address of the data to be written into the memory while temporarily stored in the write data storage registers.
Type: Grant
Filed: May 26, 1999
Date of Patent: July 8, 2003
Assignee: Integrated Device Technology, Inc.
Inventors: John R. Mick, Mark W. Baumann
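The late-write scheme can be modeled with a single holding register: a write commits to the array only when the next write arrives, and a read to the pending address is forwarded from the register in the meantime. This is an illustrative sketch; the class name and single-entry depth are assumptions.

```python
# Hypothetical late-write SRAM model: writes are held in a register
# through intervening reads; matching reads forward from the register.

class LateWriteSRAM:
    def __init__(self, size=16):
        self.mem = [0] * size
        self.pending = None           # (addr, data) awaiting commit

    def write(self, addr, data):
        if self.pending is not None:  # commit the previous late write
            a, d = self.pending
            self.mem[a] = d
        self.pending = (addr, data)   # hold this one in the registers

    def read(self, addr):
        if self.pending is not None and self.pending[0] == addr:
            return self.pending[1]    # forward from the holding register
        return self.mem[addr]
```

A read issued right after a write returns the held data even though the array itself has not yet been updated, which is the coherence property the abstract's last sentence describes.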
-
Publication number: 20030126385
Abstract: A memory system including a non-volatile flash memory and a method for simultaneously selecting a plurality of memory blocks are disclosed. The memory system is organized into multiple main blocks each having multiple smaller blocks, emulating a disk drive. Control lines activate a number of modes. In a first mode, high-order address lines select only one block, while in a second mode, user-specified multiple blocks are selected. Blocks are selected by loading registers with selection bits or by using some of the address lines directly as selection bits. Each bit specifies one of the blocks, and each bit is independent of the others. The memory system also includes a predecoder and a controller which controls the predecoder and the registers so as to select at least two blocks of memory cells. In a third mode, all of the blocks are selected, and in a fourth mode, all blocks are deselected. Selecting multiple blocks allows simultaneous erasing, writing, and reading of multiple bytes stored in the memory.
Type: Application
Filed: January 13, 2003
Publication date: July 3, 2003
Applicant: Micron Technology, Inc.
Inventors: Vinod C. Lakhani, Christophe J. Chevallier, Mathew L. Adsitt
-
Patent number: 6587892
Abstract: A memory device is described which is fabricated as an integrated circuit and uses distributed bond pads for electrical connection to an external conductive lead. The distributed bond pads are attached to an external lead, thereby eliminating bus lines on the integrated circuit memory. Distributed buffer circuits are described which can be included with the distributed bond pads to increase data communication speed between the memory device and an external processor.
Type: Grant
Filed: January 22, 2001
Date of Patent: July 1, 2003
Assignee: Micron Technology, Inc.
Inventor: Stephen L. Casper
-
Publication number: 20030120883
Abstract: An electronic processing device has an integer pipeline and a load/store pipeline disposed in parallel to receive a series of instructions via a Fetch stage and a Predecode stage. If an instruction is stalled in a Decode stage of the integer pipeline, one or more Delay stages can be switched into and out of the integer pipeline between the Decode stage and the Predecode stage so as to increase or decrease its effective length. This allows the Predecode stage to continue to issue instructions and therefore the load/store pipeline does not need to stall. The maximum number of delay stages that need to be available for switching into the integer pipeline is the same as a load-use penalty for that pipeline.
Type: Application
Filed: November 26, 2002
Publication date: June 26, 2003
Inventors: Glenn Ashley Farrall, Neil Stuart Hastie, Erik Karl Nordan
-
Publication number: 20030120882
Abstract: A program memory controller unit includes apparatus for the execution of a software pipeline loop procedure in response to a predetermined instruction. The apparatus provides a prolog, a kernel, and an epilog state for the execution of the software pipeline procedure. In addition, in response to a predetermined condition, the software pipeline procedure can be terminated early. A second software procedure can be initiated prior to the completion of the first software procedure. An SPEXIT instruction is provided to permit the software pipeline program to terminate upon the identification of a preselected condition. The SPEXIT instruction is placed in the instruction sequence to ensure that the response to the instruction occurs after the prolog procedure has been completed. The SPEXIT instruction, upon identification of the preselected condition, results in the software pipeline loop procedure entering an idle state.
Type: Application
Filed: August 21, 2002
Publication date: June 26, 2003
Inventors: Elana D. Granston, Eric J. Stotzer, Steve D. Krueger, Timothy D. Anderson
-
Publication number: 20030115347
Abstract: Common control for enqueue and dequeue operations in a pipelined network processor includes receiving in a queue manager a first enqueue or dequeue request with respect to a queue, and receiving a second enqueue or dequeue request in the queue manager with respect to the same queue. Processing of the second request is commenced prior to completion of processing the first request.
Type: Application
Filed: December 18, 2001
Publication date: June 19, 2003
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein, Matthew J. Adiletta
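The overlap described here — a second queue request entering the pipeline before the first completes — can be modeled with a tiny two-stage pipeline. The stage names and command encoding below are assumptions made for illustration, not details from the publication.

```python
# Illustrative two-stage model of overlapped enqueue/dequeue handling:
# a second request is accepted while the first is still in flight.
from collections import deque

class QueueManager:
    def __init__(self):
        self.queue = deque()
        self.stage1 = None   # request being decoded (assumed stage)
        self.stage2 = None   # request updating the queue (assumed stage)
        self.dequeued = []

    def issue(self, op, value=None):
        assert self.stage1 is None, "pipeline input busy"
        self.stage1 = (op, value)

    def tick(self):
        # Commit stage 2 first, then advance stage 1 into stage 2,
        # so two requests can be in flight during the same cycle.
        if self.stage2:
            op, value = self.stage2
            if op == "enq":
                self.queue.append(value)
            else:
                self.dequeued.append(self.queue.popleft())
        self.stage1, self.stage2 = None, self.stage1

qm = QueueManager()
qm.issue("enq", "a"); qm.tick()   # first request advances to stage 2
qm.issue("enq", "b")              # second issued before first commits
qm.tick(); qm.tick()
print(list(qm.queue))             # -> ['a', 'b']
```

The key property is in `tick`: the second request occupies stage 1 during the same cycle in which the first request is still completing in stage 2.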
-
Patent number: 6581150
Abstract: An apparatus and method are provided for improving the speed at which a pipeline microprocessor accesses misaligned memory operands. The apparatus includes page boundary evaluation logic and address logic. The page boundary evaluation logic evaluates an address corresponding to the misaligned memory operand, and determines whether or not access to the misaligned memory operand is within a single memory page. The address logic is coupled to the page boundary evaluation logic. When access to the misaligned memory operand is within the single memory page, the address logic eliminates an access tickle instruction from an instruction sequence generated to access the misaligned memory operand.
Type: Grant
Filed: August 16, 2000
Date of Patent: June 17, 2003
Assignee: IP-First, LLC
Inventors: Gerard M. Col, Darius D. Gaskins, Terry Parks
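The page-boundary evaluation above reduces to a simple arithmetic test: a misaligned operand needs extra handling only when it straddles two pages. The sketch below assumes a 4 KiB page size, which is an illustrative choice, not a detail from the patent.

```python
# Minimal page-crossing test: a misaligned access stays within one
# page iff its offset plus its size fits inside the page.
PAGE_SIZE = 4096  # assumed page size

def crosses_page(address, size):
    """True if an access of `size` bytes at `address` spans two pages."""
    return (address % PAGE_SIZE) + size > PAGE_SIZE

# misaligned 4-byte load entirely inside one page: extra access avoidable
print(crosses_page(0x1002, 4))   # -> False
# misaligned 4-byte load straddling a page boundary
print(crosses_page(0x1FFE, 4))   # -> True
```

When the test is false, the extra "tickle" access in the generated sequence is unnecessary, which is exactly the case the patent's address logic exploits.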
-
Patent number: 6571325
Abstract: A memory controller for a high-performance memory system has a pipeline architecture for generating control commands which satisfy logical, timing, and physical constraints imposed on control commands by the memory system. The pipelined memory controller includes a bank state cache lookup for determining a memory bank state for a target memory bank to which a control command is addressed, and a hazard detector for determining when a memory bank does not have a proper memory bank state for receiving and processing the control command. The hazard detector stalls the operation of the control command until the memory bank is in a proper state for receiving and processing the control command. The memory controller also has a command sequencer which sequences control commands to satisfy logical constraints imposed by the memory system, and a timing coordinator to time the communication of the sequenced control commands to satisfy timing requirements imposed by the memory system.
Type: Grant
Filed: September 23, 1999
Date of Patent: May 27, 2003
Assignee: Rambus Inc.
Inventors: Ramprasad Satagopan, Richard M. Barth
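The bank-state lookup and hazard stall can be sketched as a small state machine. The two-state bank model and the command set below are deliberate simplifications for illustration; a real DRAM bank has more states and timing constraints than shown here.

```python
# Hedged sketch of a hazard detector: a command is held ("stall")
# until its target bank is in a state that can accept it.
IDLE, ACTIVE = "idle", "active"  # simplified bank states (assumption)

class HazardDetector:
    def __init__(self, num_banks=4):
        self.state = [IDLE] * num_banks      # the bank state cache

    def ready(self, cmd, bank):
        """True if `bank` can accept `cmd` now (no hazard)."""
        if cmd == "activate":
            return self.state[bank] == IDLE
        if cmd in ("read", "write", "precharge"):
            return self.state[bank] == ACTIVE
        return False

    def issue(self, cmd, bank):
        if not self.ready(cmd, bank):
            return "stall"                   # hold until the state is proper
        self.state[bank] = {"activate": ACTIVE,
                            "precharge": IDLE}.get(cmd, self.state[bank])
        return "issued"

hd = HazardDetector()
print(hd.issue("read", 0))      # -> 'stall'  (bank 0 not yet activated)
print(hd.issue("activate", 0))  # -> 'issued'
print(hd.issue("read", 0))      # -> 'issued'
```

In the patent's pipeline this check sits alongside a command sequencer and a timing coordinator; the sketch models only the state hazard, not the timing constraints.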
-
Patent number: 6571320
Abstract: The cache memory is particularly suitable for processing images. The special configuration of a memory field, an allocation unit, a write queue, and a data conflict recognition unit enables a number of data items to be read out from the memory field simultaneously per cycle, in the form of line or column segments. The format of the screen windows that are read out can change from one cycle to another. With sufficient data locality, time-consuming reloading operations do not degrade the data throughput, since the access requests are pipelined.
Type: Grant
Filed: November 7, 2000
Date of Patent: May 27, 2003
Assignee: Infineon Technologies AG
Inventor: Ulrich Hachmann
-
Patent number: 6571319
Abstract: Instruction combining logic combines data from a plurality of write transactions before the data is written into main memory. In one embodiment, the instruction combining logic receives write transactions generated from store pair instructions, stores data from the write transactions in a buffer, and combines the data in the buffer. The combined data is subsequently written to memory in a single write transaction. The instruction combining logic may determine whether the data from the transactions are in the same cache line before combining them. A programmable timer may be used to measure the amount of time that has elapsed after the instruction combining logic receives the first write transaction. If the elapsed time exceeds a predetermined limit before another write transaction is received, the instruction combining logic combines the data in the buffer and writes it to memory in a single write transaction.
Type: Grant
Filed: June 4, 1999
Date of Patent: May 27, 2003
Assignee: Sun Microsystems, Inc.
Inventors: Marc Tremblay, Shrinath Keskar
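The two checks in this abstract — same-cache-line combining and a timeout-driven flush — can be modeled compactly. The 64-byte line size and 100-cycle timeout below are illustrative assumptions, not values from the patent.

```python
# Simplified write-combining model: buffer writes, merge those in the
# same cache line, flush on a different line or on a timer expiry.
LINE_SIZE = 64   # assumed cache line size
TIMEOUT = 100    # assumed programmable timer limit, in cycles

class WriteCombiner:
    def __init__(self):
        self.buffer = {}          # address -> data for the pending line
        self.line = None
        self.age = 0
        self.memory_writes = []   # each entry is one combined transaction

    def write(self, address, data):
        if self.line is not None and address // LINE_SIZE != self.line:
            self.flush()          # different cache line: cannot combine
        self.line = address // LINE_SIZE
        self.buffer[address] = data   # combine within the same line
        self.age = 0

    def tick(self):
        if self.buffer:
            self.age += 1
            if self.age >= TIMEOUT:   # timer expired with no new write
                self.flush()

    def flush(self):
        if self.buffer:
            self.memory_writes.append(dict(self.buffer))
            self.buffer.clear()
            self.line = None

wc = WriteCombiner()
wc.write(0x100, "lo"); wc.write(0x108, "hi")  # same 64-byte line: combined
wc.flush()
print(len(wc.memory_writes))   # -> 1 (a single combined write transaction)
```

The payoff is the final count: two store-pair writes reach memory as one transaction, halving the bus traffic for that line.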
-
Publication number: 20030097535
Abstract: A method and structure for implementing a DRAM memory array as a second level cache memory in a computer system. The computer system includes a central processing unit (CPU), a first level SRAM cache memory, a CPU bus coupled to the CPU, and a second level cache memory which includes a DRAM array coupled to the CPU bus. When accessing the DRAM array, row access and column decoding operations are performed in a self-timed asynchronous manner. Predetermined sequences of column select operations are then performed in a synchronous manner with respect to a clock signal. A widened data path is provided to the DRAM array, effectively increasing the data rate of the DRAM array. By operating the DRAM array at a higher data rate than the CPU bus, additional time is provided for precharging the DRAM array. As a result, the precharging of the DRAM array is transparent to the CPU bus. A structure and method control the refresh and internal operations of the DRAM array.
Type: Application
Filed: December 23, 2002
Publication date: May 22, 2003
Inventors: Fu-Chieh Hsu, Wingyu Leung
-
Patent number: 6567901
Abstract: A processor of a system initiates memory read transactions on a bus and provides information regarding the speculative nature of the transaction. A bus device, such as a memory controller, then receives and processes the transaction, placing the request in a queue to be serviced in an order dependent upon the relative speculative nature of the request. In addition, the processor, upon receipt of an appropriate signal, cancels a speculative read that is no longer needed or upgrades a speculative read that has become non-speculative.
Type: Grant
Filed: February 29, 2000
Date of Patent: May 20, 2003
Assignee: Hewlett Packard Development Company, L.P.
Inventor: E. David Neufeld
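The queue behavior described here — service order driven by speculation, plus cancel and upgrade — maps naturally onto a priority queue. The method names and two-level priority scheme below are assumptions for the sketch, not the patent's interface.

```python
# Illustrative model: reads carry a speculative flag, non-speculative
# reads are serviced first, and a pending speculative read can be
# cancelled or upgraded by a later signal.
import heapq, itertools

class SpeculativeReadQueue:
    def __init__(self):
        self.heap = []
        self.entries = {}
        self.order = itertools.count()   # FIFO tiebreak within a priority

    def request(self, tag, address, speculative):
        # non-speculative (priority 0) is serviced before speculative (1)
        entry = [1 if speculative else 0, next(self.order), tag, address]
        self.entries[tag] = entry
        heapq.heappush(self.heap, entry)

    def upgrade(self, tag):
        """A speculative read has become non-speculative."""
        old = self.entries.pop(tag)
        old[2] = None                    # lazily invalidate the old entry
        self.request(tag, old[3], speculative=False)

    def cancel(self, tag):
        """The speculative read is no longer needed."""
        self.entries.pop(tag)[2] = None

    def service(self):
        """Pop the next live request, or None if the queue is empty."""
        while self.heap:
            _, _, tag, _ = heapq.heappop(self.heap)
            if tag is not None:
                self.entries.pop(tag, None)
                return tag
        return None

q = SpeculativeReadQueue()
q.request("spec", 0x100, speculative=True)
q.request("demand", 0x200, speculative=False)
print(q.service())   # -> 'demand' (non-speculative serviced first)
```

Lazy invalidation (nulling the tag instead of deleting from the heap) is a common trick with `heapq`, which has no efficient remove operation.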
-
Patent number: 6564285
Abstract: A flash memory chip that can be switched into four different read modes is described. In asynchronous flash mode, the flash memory is read as a standard flash memory. In synchronous flash mode, a clock signal is provided to the flash chip and a series of addresses belonging to a data burst are specified, one address per clock period. The data stored at the specified addresses are output sequentially during subsequent clock periods. In asynchronous DRAM mode, the flash memory emulates DRAM. In synchronous DRAM mode, the flash memory emulates synchronous DRAM.
Type: Grant
Filed: June 14, 2000
Date of Patent: May 13, 2003
Assignee: Intel Corporation
Inventors: Duane R. Mills, Brian Lyn Dipert, Sachidanandan Sambandan, Bruce McCormick, Richard D. Pashley
-
Patent number: 6564287
Abstract: A semiconductor memory device is provided in which a burst length and/or a column address strobe (CAS) latency may be fixed. The semiconductor memory device, which may be an SDRAM (synchronous dynamic random access memory) device, includes a memory cell array, a burst address generation circuit to generate a burst address and a burst length detection signal, a mode setting register for setting a CAS latency and/or a burst length using an address, and a pipeline circuit to delay and output data read from the memory cell array. The semiconductor memory device also includes a latency enable control signal generation circuit to generate a latency enable control signal in response to a read command or signal and the burst length detection signal, and a data output circuit to output data being output from the pipeline circuit in response to the latency enable control signal. Therefore, the circuit configuration is simplified and test time is reduced by fixing the latency and/or burst length.
Type: Grant
Filed: September 5, 2000
Date of Patent: May 13, 2003
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jung-Bae Lee
-
Patent number: 6557086
Abstract: A memory control system includes a frame memory divided into N image memories. Serial input image data are sequentially written onto the N image memories in rotation. Then, image data is concurrently read from each of the N image memories depending on a desired read position to produce N image data in parallel. The N image data are sorted to produce consecutive N image data in parallel.
Type: Grant
Filed: November 15, 2000
Date of Patent: April 29, 2003
Assignee: NEC Viewtechnology, LTD
Inventor: Youichi Tamura
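The rotation-write and parallel-read scheme above is plain modulo interleaving. The sketch below assumes N=4 and reassembles the N outputs in order by visiting the banks in sequence (standing in for the hardware sorter described in the abstract).

```python
# Sketch of N-way interleaving: serial pixels are distributed across
# N image memories in rotation, then N consecutive pixels starting at
# any position are read back, one from each memory. N=4 is assumed.
N = 4

def write_interleaved(pixels):
    """Distribute a serial pixel stream across N image memories."""
    memories = [[] for _ in range(N)]
    for i, p in enumerate(pixels):
        memories[i % N].append(p)   # write to the memories in rotation
    return memories

def read_parallel(memories, start):
    """Read N consecutive pixels starting at `start`, one per memory."""
    out = []
    for offset in range(N):
        bank = (start + offset) % N    # which memory holds this pixel
        row = (start + offset) // N    # position within that memory
        out.append(memories[bank][row])
    return out  # consecutive because banks are visited in stream order

mems = write_interleaved(list(range(16)))
print(read_parallel(mems, 6))   # -> [6, 7, 8, 9]
```

Because consecutive pixels land in distinct memories, any window of N pixels can be fetched in a single parallel access regardless of the start position.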
-
Patent number: 6557052
Abstract: A DMA transfer device has stream inputting means for receiving an encoded first stream; first stream storing means for storing the first stream; a main storage unit which stores the stream of said first stream storing means; first DMA transfer executing means for executing a first DMA transfer from said first stream storing means to said main storage unit; first DMA transfer controlling means for controlling said first DMA transfer executing means on the basis of an amount of data which are stored in said first stream storing means or a free capacity; a processing unit which produces a second stream from the first stream that is read out from said main storage unit, and which writes the second stream into said main storage unit; second stream storing means for storing the second stream of said main storage unit; second DMA transfer executing means for executing a second DMA transfer from said main storage unit to said second stream storing means; and second DMA transfer controlling means for controlling said second DMA transfer executing means.
Type: Grant
Filed: June 2, 2000
Date of Patent: April 29, 2003
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventor: Yasuhiro Kubo