Memory Access Pipelining Patents (Class 711/169)
-
Patent number: 6772278. Abstract: A semiconductor device comprises an internal circuit and a clock signal switching unit group having M clock signal switching units. m first clock signals and n control signals are input to the clock signal switching unit group (M is not less than m, and M is not less than n). Each of the M clock signal switching units, to which one of the m first clock signals and an output clock signal from another clock signal switching unit are input, selects, based on the n control signals, either one of the m first clock signals or a signal obtained by delaying the output clock signal from another clock signal switching unit, and outputs the selected signal as an output clock signal. The output clock signal controls the internal circuit. Type: Grant. Filed: December 12, 2001. Date of Patent: August 3, 2004. Assignee: Kabushiki Kaisha Toshiba. Inventor: Haruki Toda.
-
Patent number: 6769051. Abstract: A comparison circuit compares a burst access request from a bus controller with an access mode that is supported by an external memory device and that is set in a device information setting register. When the burst access request does not match the access mode supported by the external memory device, a control signal generation circuit accesses the external memory device in an access mode different from that of the burst access request. At this time, the control signal generation circuit controls the external memory device so that the data is output to the bus controller in the order corresponding to the burst access request. This enables the bus controller to read the correct data according to the burst access request. Type: Grant. Filed: April 29, 2002. Date of Patent: July 27, 2004. Assignee: Renesas Technology Corp. Inventor: Maiko Taruki.
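A minimal Python sketch of the reordering idea in the abstract above, using the standard SDRAM-style sequential and interleaved burst orderings; the function names and the choice of these two orderings are illustrative assumptions, not taken from the patent.

```python
def burst_order(start, length, interleaved):
    """Address order within a burst of power-of-two `length` starting at
    `start`: sequential (wrap within the burst) or interleaved (XOR)."""
    base = start & ~(length - 1)  # burst-aligned base address
    if interleaved:
        return [base | ((start ^ i) & (length - 1)) for i in range(length)]
    return [base | ((start + i) & (length - 1)) for i in range(length)]

def reorder(data_by_addr, requested_order):
    # Return data in the order the bus controller asked for, regardless
    # of the order the external memory device delivered it.
    return [data_by_addr[a] for a in requested_order]
```

For example, a sequential burst of 4 starting at address 2 yields the order 2, 3, 0, 1, and `reorder` then presents memory data in exactly that order to the requester.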
-
Patent number: 6769047. Abstract: A method and system for maximizing DRAM memory bandwidth is provided. The system includes a plurality of buffers to store a plurality of data units, a selector coupled to the buffers to select the buffer to which a data unit is to be stored, and logic coupled to the buffers to schedule an access of one of a corresponding number of memory banks based on the buffer in which the data unit is stored. The system receives a data unit, computes an index based on at least a portion of the data unit, selects a buffer in which to store the data unit based on the index, stores the data unit in the selected buffer, schedules a memory bank access based on the index, reads the data unit from the selected buffer, and accesses the memory bank. Type: Grant. Filed: March 21, 2002. Date of Patent: July 27, 2004. Assignee: Intel Corporation. Inventor: Sreenath Kurupati.
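The index-driven buffer selection and bank scheduling above can be sketched as follows; the abstract leaves the index function open, so the byte-sum hash, the bank count, and all names here are illustrative assumptions.

```python
NUM_BANKS = 4  # assumed number of memory banks / per-bank buffers

def select_index(data_unit: bytes) -> int:
    # Index computed from at least a portion of the data unit; a simple
    # byte-sum hash of the first four bytes stands in for the real function.
    return sum(data_unit[:4]) % NUM_BANKS

def enqueue(buffers, bank_schedule, data_unit):
    idx = select_index(data_unit)
    buffers[idx].append(data_unit)   # buffer choice follows the index
    bank_schedule.append(idx)        # schedule an access to bank `idx`
    return idx
```

Spreading data units across per-bank buffers this way lets accesses to different banks overlap, which is the bandwidth-maximizing effect the abstract describes.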
-
Patent number: 6769049. Abstract: A computer memory access controller receives load and store requests from a plurality of parallel execution pipelines and forms queues of store and load addresses. A comparator compares load addresses with store addresses in a store address queue and selects a store before load if an address match is found, but selects a load before a store if no address match is found. Type: Grant. Filed: May 2, 2000. Date of Patent: July 27, 2004. Assignee: STMicroelectronics S.A. Inventors: Bruno Bernard, Nicolas Grossier, Ahmed Dabbagh.
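A small Python sketch of the store-before-load selection described above, assuming stores drain in order; the queue layout and function name are illustrative, not from the patent.

```python
from collections import deque

def next_access(load_queue, store_queue):
    """Pick the next access to issue. load_queue holds addresses;
    store_queue holds (address, data) pairs in program order."""
    if load_queue:
        load_addr = load_queue[0]
        if any(store_addr == load_addr for store_addr, _ in store_queue):
            # Address match: drain the oldest store first so the matching
            # load eventually observes the newest data (store before load).
            return ("store", store_queue.popleft())
        # No match: the load may be issued ahead of older stores.
        return ("load", load_queue.popleft())
    if store_queue:
        return ("store", store_queue.popleft())
    return None
```

With a pending store to the same address, the comparator forces the store out first; otherwise the load bypasses the store queue entirely.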
-
Patent number: 6766385. Abstract: The present invention includes a method and device for controlling the data length of read and write operations performed on a memory device. The method includes determining a first number of channels available to a memory controller operatively coupled to the memory device; determining a second number representative of the number of populated channels; calculating a burst length based on the first and second numbers; and programming the memory controller to use the burst length as the data length of read and write operations performed on the memory device. Type: Grant. Filed: January 7, 2002. Date of Patent: July 20, 2004. Assignee: Intel Corporation. Inventors: James M. Dodd, Brian P. Johnson, Jay C. Wells, John B. Halbert.
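One plausible reading of the calculation above, sketched in Python: if a fixed-size transfer is striped across the populated channels, fewer populated channels require a longer burst on each. The cache-line size, channel width, and formula are assumptions for illustration only; the patent does not fix them in its abstract.

```python
def burst_length(channels_available: int, channels_populated: int,
                 line_bytes: int = 64, channel_width_bytes: int = 8) -> int:
    """Hypothetical burst-length calculation: a line of `line_bytes` is
    striped across the populated channels, so each channel's burst must
    cover line_bytes / (width * populated) transfers."""
    if not 0 < channels_populated <= channels_available:
        raise ValueError("populated channels must be in 1..available")
    return line_bytes // (channel_width_bytes * channels_populated)
```

Under these assumptions, a fully populated 4-channel controller would program a burst length of 2, while a single populated channel would need a burst length of 8.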
-
Publication number: 20040128462. Abstract: A multiple-stage pipeline for transaction conversion is disclosed. A method is disclosed that converts a transaction into a set of concurrently performable actions. In a first pipeline stage, the transaction is decoded into an internal protocol evaluation (PE) command, such as by utilizing a look-up table (LUT). In a second pipeline stage, an entry within a PE random access memory (RAM) is selected, based on the internal PE command. This may be accomplished by converting the internal PE command into a PE RAM base address and an associated qualifier thereof. In a third pipeline stage, the entry within the PE RAM is converted to the set of concurrently performable actions, such as based on the PE RAM base address and its associated qualifier. Type: Application. Filed: December 30, 2002. Publication date: July 1, 2004. Inventors: Donald R. DeSota, Bruce M. Gilbert, Robert Joersz, Thomas D. Lovett, Maged M. Michael.
-
Publication number: 20040128461. Abstract: A hazard queue for a pipeline, such as a multiple-stage pipeline for transaction conversion, is disclosed. A transaction in the pipeline is determined to represent a hazard relative to another transaction, such as by evaluating the transaction against a hazard content-addressable memory (CAM). The hazard CAM can enforce various hazard rules, such as considering a transaction as active if it is referencing a memory line and is currently being processed within the pipeline, and ensuring that only one active transaction with a given coherent memory line is in the pipeline at a single time. In response to determining that a transaction is a hazard, the transaction is routed to a hazard queue, such as at the end of the pipeline. Once the hazard is released, the transaction re-enters the pipeline. Type: Application. Filed: December 30, 2002. Publication date: July 1, 2004. Applicant: International Business Machines Corporation. Inventors: Donald R. DeSota, Bruce M. Gilbert, Robert Joersz, Eric N. Lais, Maged M. Michael.
-
Patent number: 6757783. Abstract: There is provided a portable storage medium, based on the USB standard, connected to a USB port of a host computer applying the UFI protocol. The portable storage medium comprises a USB connector through which the storage medium is physically connected to the USB port of the host computer; at least one nonvolatile flash memory for storing data transmitted from the host computer; a program storage for storing a predetermined operation program based on USB and UFI; and a controller for controlling the entire operation of the storage medium based on the operation program stored in the program storage. Type: Grant. Filed: November 28, 2001. Date of Patent: June 29, 2004. Assignee: Daesung EC&P Co., Ltd. Inventor: Young Sook Koh.
-
Patent number: 6757799. Abstract: In a packetized memory device, row and column address paths receive row and column addresses from an address capture circuit. Each of the row and column address paths includes a respective address latch that latches the row or column address from the address capture circuitry, thereby freeing the address capture circuitry to capture a subsequent address. The latched row and column addresses are then provided to a combining circuit. Additionally, redundant row and column circuits receive these latched addresses and indicate to the combining circuit whether or not to substitute a redundant row. The combining circuit, responsive to a strobe, then transfers the redundant row address or latched row address to a decoder to activate the array. Type: Grant. Filed: August 1, 2001. Date of Patent: June 29, 2004. Assignee: Micron Technology, Inc. Inventors: Chris G. Martin, Troy A. Manning.
-
Patent number: 6751717. Abstract: The present invention coordinates the execution of commands, received in response to a continuous system clock, with the receipt of data in response to a burst clock. Command capture logic receives command information in response to the system clock. A storage element is responsive to the command capture logic for storing certain command information such as write commands. A two-stage pipeline receives the command information from the storage element in response to the burst clock and outputs the command information in response to the system clock. Methods of operating the apparatus are also disclosed. Type: Grant. Filed: January 23, 2001. Date of Patent: June 15, 2004. Assignee: Micron Technology, Inc. Inventor: Brian Johnson.
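The two-stage pipeline above, with commands entering on one clock and leaving on another, can be modeled behaviorally in Python; the class and method names are illustrative, and clock edges are simulated as method calls rather than real clock domains.

```python
class CommandPipeline:
    """Behavioral model of a two-stage command pipeline: commands shift
    in on the burst-clock edge and are presented on the system-clock edge."""

    def __init__(self):
        self.stage1 = None  # entry stage, loaded by the burst clock
        self.stage2 = None  # exit stage, read by the system clock

    def burst_clock_edge(self, command=None):
        # Shift the pipeline: stage1 -> stage2, new command -> stage1.
        self.stage2, self.stage1 = self.stage1, command

    def system_clock_edge(self):
        # Output the command information in response to the system clock.
        return self.stage2
```

The two registers give the control logic one full burst-clock period of slack between capturing a write command and executing it in the system-clock domain.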
-
Publication number: 20040111567. Abstract: A plurality of single port memories are provided for use with a single instruction multiple data processor. These are operable as a multi-port memory with simultaneous access to the plurality of single port memories. The apparatus is operable to issue an access request for a plurality of memory locations in a known order. This request is then reordered to be suitable for application to the single port memories. The memories are then accessed and the data reordered to conform with the access request format. Type: Application. Filed: April 1, 2003. Publication date: June 10, 2004. Inventors: Adrian John Anderson, Gary Christopher Wass.
-
Patent number: 6748497. Abstract: An apparatus and method for memory transaction buffering are implemented. Read and write buffer units are provided. The read buffer unit is configured for storing at least one data value read from a memory device, and the write buffer unit is configured for storing at least one data value for writing to the memory device. The read buffer unit is operable for updating with the at least one data value for writing to the memory device in response to a write to the write buffer unit. Type: Grant. Filed: November 20, 2001. Date of Patent: June 8, 2004. Assignee: Cirrus Logic, Inc. Inventors: Chang Yong Kang, Jun Hao.
-
Patent number: 6748518. Abstract: Disclosed is a processor, which reduces issuing of unnecessary barrier operations during instruction processing. The processor comprises an instruction sequencing unit and a load store unit (LSU) that issues a group of memory access requests that precede a barrier instruction in an instruction sequence. The processor also includes a controller, which in response to a determination that all of the memory access requests hit in a cache affiliated with the processor, withholds issuing on an interconnect a barrier operation associated with the barrier instruction. The controller further directs the load store unit to ignore the barrier instruction and complete processing of a next group of memory access requests following the barrier instruction in the instruction sequence without receiving an acknowledgment. Type: Grant. Filed: June 6, 2000. Date of Patent: June 8, 2004. Assignee: International Business Machines Corporation. Inventors: Guy Lynn Guthrie, Ravi Kumar Arimilli, John Steven Dodson, Derek Edward Williams.
-
Patent number: 6745308. Abstract: A method and system are shown for bypassing memory controller components when processing memory requests. A memory controller analyzes internal components to determine if any pending memory requests exist. If particular memory controller components are idle, a memory client is informed that a bypassing of memory controller components is possible. A bypass module of the memory controller receives memory requests from the memory client. The bypass module examines memory controller parameters and a configuration of main memory to determine which memory controller components may be bypassed and routes the memory request accordingly. In a system with asynchronous memory, the memory controller provides copies of the memory request through a dual pipeline. A first copy of the memory request is processed through a bypass module to attempt to bypass memory controller components. A second copy of the memory request is processed in a normal fashion in case a bypass of the memory access request is not possible. Type: Grant. Filed: February 19, 2002. Date of Patent: June 1, 2004. Assignee: ATI Technologies, Inc. Inventors: Michael Frank, Santiago Fernandez-Gomez, Robert W. Laker, Aki Niimura.
-
Patent number: 6745314. Abstract: A circular buffer control circuit, a method of controlling a circular buffer and a digital signal processor (DSP) incorporating the circuit or the method. In one embodiment, the circuit includes: (1) address calculation logic, having multiple datapaths, that calculates, from data regarding a buffer operation, an updated address result therefor and (2) modification order determination circuitry, coupled in parallel with the address calculation logic, that transmits a memory access request and the updated address result in an order that is based on whether the buffer operation is pre-modified or post-modified. Type: Grant. Filed: November 26, 2001. Date of Patent: June 1, 2004. Assignee: LSI Logic Corporation. Inventor: Shannon A. Wichman.
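The pre-modify/post-modify distinction above can be shown with a short Python sketch of a circular-buffer address update (the standard DSP addressing behavior; the function name and argument layout are illustrative, not from the patent).

```python
def circular_update(addr, step, base, length, pre_modify):
    """Advance a circular-buffer pointer by `step`, wrapping into
    [base, base + length). Returns (access_address, updated_address):
    a pre-modified operation accesses the updated address, while a
    post-modified operation accesses the old address first."""
    updated = base + (addr - base + step) % length
    access = updated if pre_modify else addr
    return access, updated
```

For a 16-byte buffer at 0x100 with the pointer near the end, both modes wrap the pointer to 0x102, but only the pre-modified access uses the wrapped address immediately.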
-
Patent number: 6745302. Abstract: A semiconductor memory enabling a read modify write operation of data, comprising: a memory cell array including a plurality of memory cells arranged in a matrix, to and from which data can be written and read; a read address decoding means for independently decoding an address of a read memory cell in response to a read address; a write address decoding means for independently decoding an address of a write memory cell in response to a write address; a data reading means for reading data of a memory cell addressed by the read address decoding means; a data writing means for writing data to a memory cell addressed by the write address decoding means; and an address delay means by which a write address decoded by the write address decoding means is delayed by a predetermined time from a read address decoded by the read address decoding means, wherein the predetermined time is set as a predetermined plurality of basic synchronization pulse periods so that the data read modify write operation is accomplished. Type: Grant. Filed: September 13, 1999. Date of Patent: June 1, 2004. Assignee: Sony Corporation. Inventors: Kazuo Taniguchi, Masaharu Yoshimori.
-
Patent number: 6745309. Abstract: A memory controller which has multiple stages of pipelining. A request buffer is used to hold the memory request from the processor and peripheral devices. The request buffer comprises a set of rotational registers that holds the address, the type of transfer and the count for each request. The pipeline includes a decode stage, a memory address stage, and a data transfer stage. Each stage of the pipeline has a pointer to the request buffer. As each stage completes its processing, a state machine updates the pointer for each of the stages to reference a new memory request which needs to be processed. Type: Grant. Filed: June 17, 2003. Date of Patent: June 1, 2004. Assignee: Micron Technology, Inc. Inventor: Joseph Jeddeloh.
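The pointer-per-stage scheme above can be sketched behaviorally in Python: each stage keeps its own index into a shared request buffer, and advancing a stage's pointer stands in for the state machine's update. All names are illustrative assumptions.

```python
class PipelinedController:
    """Behavioral model: decode, address, and data-transfer stages each
    hold a pointer into one shared request buffer."""

    def __init__(self, requests):
        self.buffer = list(requests)
        self.ptr = {"decode": 0, "address": 0, "data": 0}

    def step(self, stage):
        # Process the request this stage's pointer references, then let
        # the "state machine" advance the pointer to the next request.
        i = self.ptr[stage]
        if i >= len(self.buffer):
            return None  # nothing left for this stage
        self.ptr[stage] = i + 1
        return self.buffer[i]
```

Because the pointers advance independently, the decode stage can already be working on a later request while the data-transfer stage finishes an earlier one, which is the overlap the pipeline provides.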
-
Publication number: 20040098552. Abstract: In one embodiment, a processor-based device (e.g., a wireless device) may include a processor and a semiconductor memory (e.g., a flash memory) that selectively pipelines and prefetches memory data, such as executable data, using prefetch/pipeline logic. The logic may enable storage of a first indication associated with executable data at a first storage location and a second indication associated with executable data at a second storage location. Upon retrieval, the prefetch/pipeline logic may selectively perform at least one of pipelining and prefetching of the executable data associated with the second storage location, based on the first indication. Type: Application. Filed: November 20, 2002. Publication date: May 20, 2004. Inventor: Zafer Kadi.
-
Controller architecture and strategy for small discontiguous accesses to high-density memory devices
Patent number: 6738874. Abstract: A RAM device including a memory and a memory controller. The memory controller can be configured to buffer incoming requests, prioritize the requests into a final order, and submit the requests to the memory in the final order. The final order, as needed, is selected to maximize overlap of incoming requests' timing cycles. Type: Grant. Filed: February 16, 2002. Date of Patent: May 18, 2004. Assignee: Layer N Networks, Inc. Inventor: Leslie Zsohar.
-
Patent number: 6735679. Abstract: A method and apparatus for optimizing access to memory, wherein the method includes the steps of receiving a first request for access to a memory, receiving at least two additional requests for access to the memory, and determining a first clock overhead associated with the first request for access to the memory. The method further includes the steps of determining an additional clock overhead associated with each of the at least two additional requests for access to the memory in conjunction with the first request, determining a combination of requests that can be processed together using an optimized overhead, and processing the combination of requests as a single request with the optimized overhead. Type: Grant. Filed: June 23, 2000. Date of Patent: May 11, 2004. Assignee: Broadcom Corporation. Inventors: Joseph Herbst, Allan Flippin.
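One concrete way the combination step above can pay off is grouping pending requests that share a DRAM row, so a single row activation (the dominant clock overhead) covers several accesses. The sketch below assumes that grouping strategy and an illustrative 10-bit row split; neither is specified by the patent abstract.

```python
def combine_requests(request_addrs, row_bits=10):
    """Group pending request addresses by DRAM row and return the largest
    group, i.e. the combination that amortizes one activation overhead
    over the most accesses. `row_bits` is an illustrative assumption."""
    groups = {}
    for addr in request_addrs:
        groups.setdefault(addr >> row_bits, []).append(addr)
    # The biggest same-row group gives the best overhead amortization.
    return max(groups.values(), key=len)
```

Three requests falling in one 1 KiB row would be processed together as a single combined access, leaving the lone out-of-row request to pay its own activation cost.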
-
Patent number: 6735675. Abstract: Increased efficiency in a multiple agent system is provided by allowing all explicit writebacks to continue during a snoop phase. Upon each incoming external bus request, an agent determines if the address of that request matches an address of data within the agent. If there is a match, the agent copies this most recent data, changes the state of the data to unmodified, changes the length of the data to zero (for pending explicit writebacks), and performs an implicit writeback. Additionally, prior to each explicit writeback, an agent determines if the address of the explicit writeback and any incoming snoop request are the same. If there is a match, the agent changes the data length of the explicit writeback to zero prior to issuing the explicit writeback. Type: Grant. Filed: January 17, 2003. Date of Patent: May 11, 2004. Assignee: Intel Corporation. Inventors: Paul D. Breuder, Derek T. Bachand, David Lawrence Hill, Chinna Prudvi.
-
Patent number: 6732247. Abstract: Multi-ported pipelined memory is located on a processor die serving as an addressable on-chip memory for efficiently processing streaming data. The memory sustains multiple wide memory accesses per cycle, clocks synchronously with the rest of the processor, and stores a significant portion of an image. Such memory bypasses the register file, directly providing data to the processor's functional units. The memory includes multiple memory banks which permit multiple memory accesses per cycle. The memory banks are connected in pipelined fashion to pipeline registers placed at regular intervals on a global bus. The memory sustains multiple transactions per cycle, at a larger memory density than that of a multi-ported static memory, such as a register file. Type: Grant. Filed: January 17, 2001. Date of Patent: May 4, 2004. Assignee: University of Washington. Inventors: Stefan G. Berg, Donglok Kim, Yongmin Kim.
-
Patent number: 6728873. Abstract: Disclosed is a method of operation within a processor that enhances speculative branch processing. A speculative execution path contains an instruction sequence that includes a barrier instruction followed by a load instruction. While a barrier operation associated with the barrier instruction is pending, a load request associated with the load instruction is speculatively issued to memory. A flag is set for the load request when it is speculatively issued and reset when an acknowledgment is received for the barrier operation. Data which is returned by the speculatively issued load request is temporarily held and forwarded to a register or execution unit of the data processing system after the acknowledgment is received. All process results, including data returned by the speculatively issued load instructions, are discarded when the speculative execution path is determined to be incorrect. Type: Grant. Filed: June 6, 2000. Date of Patent: April 27, 2004. Assignee: International Business Machines Corporation. Inventors: Guy Lynn Guthrie, Ravi Kumar Arimilli, John Steven Dodson, Derek Edward Williams.
-
Publication number: 20040078515. Abstract: A semiconductor memory device includes a bit line, a memory cell coupled to the bit line and a word line coupled to the memory cell. A first time between receiving a write command for a write operation in order to write data to the memory cell and the beginning of the write operation is different from a second time between receiving a refresh command for a refresh operation in order to refresh data stored in the memory cell and beginning the write operation. Type: Application. Filed: October 9, 2003. Publication date: April 22, 2004. Applicant: Kabushiki Kaisha Toshiba. Inventors: Kenji Tsuchida, Haruki Toda, Hitoshi Kuyama.
-
Patent number: 6725348. Abstract: A data storage device and method for improving the performance of data storage devices examines a command queue and performs data transfers to memory within the device before prior commands have completed. A process running in the idle loop of the controller in the storage device checks the queue for write requests; if a cache space within a dual-port cache is available to hold the transfer data, the data transfer portion of the request is completed while the device is still waiting for completion of prior commands in the queue, and data transfers from the cache to the physical media are completing for the prior command. Type: Grant. Filed: October 13, 1999. Date of Patent: April 20, 2004. Assignee: International Business Machines Corporation. Inventors: Louise Ann Marier, Brian Lee Morger, Christopher David Wiederholt.
-
Patent number: 6718431. Abstract: A memory device has interface circuitry and a memory core which make up the stages of a pipeline, each stage being a step in a universal sequence associated with the memory core. The memory device has a plurality of operation units such as precharge, sense, read and write, which handle the primitive operations of the memory core to which the operation units are coupled. The memory device further includes a plurality of transport units configured to obtain information from external connections specifying an operation for one of the operation units and to transfer data between the memory core and the external connections. The transport units operate concurrently with the operation units as added stages to the pipeline, thereby creating a memory device which operates at high throughput and with low service times under the memory reference stream of common applications. Type: Grant. Filed: January 18, 2002. Date of Patent: April 6, 2004. Assignee: Rambus Inc. Inventors: Richard M. Barth, Ely K. Tsern, Mark A. Horowitz, Donald C. Stark, Craig E. Hampel, Frederick A. Ware, John B. Dillon.
-
Patent number: 6718430. Abstract: A window-based flash memory storage system and a management and an access method therefor are proposed. The window-based flash memory storage system includes a window-based region and a redundant reserved region; wherein the window-based region is used to store a number of windows, each window being associated with a number of physical blocks. The redundant reserved region includes a dynamic-link area, a window-information area, a dynamic-link information area, and a boot-information area; wherein the dynamic-link area includes a plurality of dynamic allocation blocks, each being allocatable to any window. The window-information area is used to store a specific window-information set that is dedicated to a certain window within a specific range of data storage space. The dynamic-link information area is used to record the status of the allocation of the dynamic allocation blocks to the windows. Type: Grant. Filed: November 20, 2001. Date of Patent: April 6, 2004. Assignee: Solid State System Co., Ltd. Inventors: Chun-Hung Lin, Chih-Hung Wang, Chun-Hao Kuo.
-
Publication number: 20040064663. Abstract: The present invention relates to techniques for predicting memory access in a data processing apparatus, and in particular to a technique for determining whether a data item to be accessed crosses an address boundary and will hence require multiple memory accesses. Type: Application. Filed: October 1, 2002. Publication date: April 1, 2004. Inventor: Richard Roy Grisenthwaite.
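The boundary-crossing check described above reduces to simple address arithmetic, sketched here in Python; the 8-byte boundary is an illustrative default, not taken from the publication.

```python
def crosses_boundary(addr: int, size: int, boundary: int = 8) -> bool:
    """True if an access of `size` bytes at `addr` crosses a
    `boundary`-byte line and hence needs two memory accesses."""
    return (addr % boundary) + size > boundary
```

A 4-byte access at offset 6 of an 8-byte line spills two bytes into the next line and needs two accesses, while the same access at offset 4 fits within one line.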
-
Publication number: 20040064662. Abstract: A bus interface unit is provided for a digital signal processor including a core processor, a memory and two or more system buses for transfer of data to and from system components. The bus interface unit includes a first bus controller for receiving processor transfer requests from the core processor on two or more processor buses and for directing the processor transfer requests to the memory on a first memory bus. The bus interface unit further includes a second bus controller for receiving system transfer requests from the system components on the two or more system buses and for directing the system transfer requests to the memory on a second memory bus. The bus controllers may have pipelined architectures and may be configured to service transfer requests independently. Type: Application. Filed: September 26, 2002. Publication date: April 1, 2004. Applicant: Analog Devices, Inc. Inventors: Moinul I. Syed, Michael S. Allen.
-
Patent number: 6715042. Abstract: A multiprocessor digital amplifier system is disclosed. A first processor is configured to decode a digital signal from a digital signal source. A second processor is configured to provide control signals to the first processor. An expansion unit for communicating instructions and data between the processors and a memory device has a first port coupled to the first processor and a second port coupled to the second processor. The expansion unit includes a state generator with circuitry for selecting one of the first and second ports for receiving a memory device access grant. The first and second ports may be granted access in accordance with a selected arbitration protocol. The duration of the memory device access grant is selectable as either a preselected number of accesses or a preselected timeslice. An amplifier amplifies the decoded digital signal from the first processor. Type: Grant. Filed: October 4, 2001. Date of Patent: March 30, 2004. Assignee: Cirrus Logic, Inc. Inventors: Nadeem Mirza, Jun Hao.
-
Publication number: 20040059874. Abstract: A memory architecture is disclosed. A memory device may comprise at least two memory blocks electrically coupled in a pipelined manner. Each block may comprise a memory array, and a bypass network. A system may include several memory blocks coupled together in a pipelined manner electrically coupled to at least two functional units. Type: Application. Filed: September 24, 2002. Publication date: March 25, 2004. Applicant: Intel Corporation. Inventors: Robert James Murray, Mark Duanne Nardin.
-
Patent number: 6711648. Abstract: The present invention includes a cost-efficient method of substantially increasing the data bandwidth of a dynamic random access memory (DRAM) device initially configured to operate in an extended data output (EDO) mode, the EDO DRAM device including at least one storage cell, a column decoder, an internal read/write data bus and an off-chip driver latch, the column decoder decoding a column address upon receipt thereof such that data stored in the at least one storage cell corresponding to the decoded column address is placed on the internal read/write data bus in response to the receipt of an address transition detection (ATD) pulse generated by the dynamic memory device, and further wherein output data is stored in the off-chip driver latch in response to a transfer pulse. Type: Grant. Filed: March 28, 1997. Date of Patent: March 23, 2004. Assignees: Siemens Aktiengesellschaft; Kabushiki Kaisha Toshiba. Inventors: Peter Poechmueller, Yohji Watanabe.
-
Publication number: 20040054864. Abstract: The present invention provides a memory controller and memory controlling method for controlling transfers to or from a memory device of a type where each transfer comprises a sequence of distinct phases and the actual sequence of distinct phases is dependent on the type of transfer. In a particularly preferred embodiment, the memory device is a NAND flash memory device. The memory controller comprises a memory device interface operable to couple the memory controller with the memory device, a number of programmable timing registers programmable to store timing information appropriate for the memory device whose transfers are to be controlled by the memory controller, and a number of programmable control registers which, prior to each transfer, are programmable to define the actual sequence of distinct phases to be performed for that transfer and one or more control values for that transfer. Type: Application. Filed: September 13, 2002. Publication date: March 18, 2004. Inventor: Neil Andrew Jameson.
-
Patent number: 6707754. Abstract: A memory core with an access time that does not include a delay associated with decoding address information. Address decode logic is removed from the memory core and the address decode operation is performed in an addressing pipeline stage that occurs during a clock cycle prior to a clock cycle associated with a memory access operation for the decoded address. After decoding the address in a first pipeline stage, the external decode logic drives word lines connected to the memory core in a subsequent pipeline stage. Since the core is being driven by word lines, the appropriate memory locations are accessed without decoding the address information within the core. Thus, the delay associated with decoding the address information is removed from the access time of the memory core. Type: Grant. Filed: October 15, 2002. Date of Patent: March 16, 2004. Assignee: Micron Technology, Inc. Inventor: Graham Kirsch.
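The split between the decode stage and the access stage above can be modeled in two small Python functions: the first turns an address into one-hot word lines outside the core, and the second reads the core directly from those word lines with no decode of its own. Names and the row count are illustrative.

```python
def decode_stage(address, num_rows=8):
    # Stage 1 (prior clock cycle): decode the address into one-hot
    # word lines, entirely outside the memory core.
    return [1 if row == address else 0 for row in range(num_rows)]

def access_stage(word_lines, core):
    # Stage 2: the core is driven directly by the word lines, so the
    # access contains no address-decode delay.
    return next(core[row] for row, wl in enumerate(word_lines) if wl)
```

Pipelining the decode this way hides its latency: while one address is being accessed, the next address is already being decoded.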
-
Patent number: 6708264. Abstract: A synchronous memory device includes a prefetch address counter. The address counter is composed of an n number of one-bit counter circuits, an n number of adders to which the output signals of these counters are supplied respectively, and an adder control circuit for controlling each adder. A start address is externally supplied to each of the one-bit counter circuits, which in turn count up. When the addressing mode is the sequential mode and the start address is an odd address, each adder performs addition according to the state of the even control signal outputted from the adder control circuit. With the addition, the address outputted from each one-bit counter circuit is inverted, but otherwise the same signal as the address outputted from each one-bit counter circuit is outputted. Type: Grant. Filed: June 14, 2000. Date of Patent: March 16, 2004. Assignee: Kabushiki Kaisha Toshiba. Inventors: Katsumi Abe, Hiroyuki Ohtake.
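One way to read the adder correction above: in a two-bit-prefetch device, an even-column array and an odd-column array are read together, and in sequential mode with an odd start address the even array must supply the next-higher column, so its address is bumped by one. The sketch below assumes that interpretation; the function and its 2-bit-prefetch framing are illustrative, not taken from the patent.

```python
def internal_addresses(start, sequential=True):
    """Assumed 2-bit prefetch: return (even_array_addr, odd_array_addr)
    for external start address `start`. In sequential mode with an odd
    start, the even array's address is incremented (the adder stage)."""
    odd_start = start & 1
    even_addr = (start >> 1) + (1 if sequential and odd_start else 0)
    odd_addr = start >> 1
    return even_addr, odd_addr
```

Starting a sequential burst at column 5 thus fetches column 5 from the odd array and column 6 (not 4) from the even array in the same internal cycle.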
-
Publication number: 20040044865. Abstract: A disaster-tolerant data backup and remote copy system which is implemented as a controller-based replication of one or more LUNs (logical units) between two remotely separated pairs of array controllers connected by redundant links. The system provides a method for allowing a large number of commands to be ‘outstanding’ in transit between local and remote sites while ensuring the proper ordering of commands on remote media during asynchronous or synchronous data replication. In addition, the system provides a mechanism for automatic ‘tuning’ of links based on the distance between the array controllers. Type: Application. Filed: August 29, 2003. Publication date: March 4, 2004. Inventors: Stephen J. Sicola, Susan G. Elkington, Michael D. Walker, Richard F. Lary.
-
Publication number: 20040044870. Abstract: Systems and methods for reducing delays between successive write and read accesses in multi-bank memory devices are provided. Computer circuits modify the relative timing between addresses and data of write accesses, reducing delays between successive write and read accesses. Memory devices that interface with these computer circuits use posted write accesses to effectively return the modified relative timing to its original timing before processing the write access. Type: Application. Filed: August 28, 2002. Publication date: March 4, 2004. Applicant: Micron Technology, Inc. Inventor: J. Thomas Pawlowski.
-
Patent number: 6694416Abstract: Systems, devices, and methods. A double data rate memory device includes a storage element, a first pipeline, and a second pipeline. The pipelines are connected to the storage element to pass or output data on rising and falling edges of an external clock signal. The device permits data transfer at double the data rate. Another memory device includes a storage element and a plurality of pipelines for transferring data; each of the pipelines passes data on a different event.Type: GrantFiled: September 2, 1999Date of Patent: February 17, 2004Assignee: Micron Technology, Inc.Inventors: Mark R. Thomann, Wen Li
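The dual-pipeline arrangement above can be pictured as two queues drained alternately, one per clock edge. A behavioral sketch under that reading (class and method names are illustrative; this models data movement, not the circuit):

```python
class DualPipeline:
    """Sketch of a DDR output path: two pipelines drain a shared
    storage element on opposite clock edges, so one external clock
    cycle moves two data words."""

    def __init__(self, storage):
        self.even = list(storage[0::2])  # pipeline A: even-indexed words
        self.odd = list(storage[1::2])   # pipeline B: odd-indexed words

    def clock(self):
        """One full clock cycle: rising edge drains pipeline A,
        falling edge drains pipeline B."""
        out = []
        if self.even:
            out.append(self.even.pop(0))  # rising edge
        if self.odd:
            out.append(self.odd.pop(0))   # falling edge
        return out
```

Each call to `clock()` emits up to two words, which is the dual-rate behavior the abstract describes.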
-
Publication number: 20040024957Abstract: A window-based flash memory storage system and management and access methods therefor are proposed. The window-based flash memory storage system includes a window-based region and a redundant reserved region; wherein the window-based region is used to store a number of windows, each window being associated with a number of physical blocks. The redundant reserved region includes a dynamic-link area, a window-information area, a dynamic-link information area, and a boot-information area; wherein the dynamic-link area includes a plurality of dynamic allocation blocks, each being allocatable to any window. The window-information area is used to store a specific window-information set that is dedicated to a certain window within a specific range of data storage space. The dynamic-link information area is used to record the status of the allocation of the dynamic allocation blocks to the windows.Type: ApplicationFiled: July 31, 2003Publication date: February 5, 2004Inventors: Chun-Hung Lin, Chih-Hung Wang, Chun-Hao Kuo
-
Patent number: 6687809Abstract: An apparatus in a first processor includes a first data structure to store the addresses of store instructions dispatched during a last predetermined number of cycles. The apparatus further includes logic to determine whether the load address of a load instruction being executed matches one of the store addresses in the first data structure. The apparatus still further includes logic to replay the respective load instruction if its load address matches one of the store addresses in the first data structure.Type: GrantFiled: October 24, 2002Date of Patent: February 3, 2004Assignee: Intel CorporationInventors: Muntaquim F. Chowdhury, Douglas M. Carmean
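The check above amounts to: keep the store addresses from a sliding window of recent cycles, and replay any load whose address hits that set. A sketch under that reading (the window length and class names are illustrative assumptions; the hardware would use a CAM, not a software queue):

```python
from collections import deque

class StoreAddressTracker:
    """Sketch of the first data structure: addresses of stores
    dispatched in the last `window` cycles; a load that matches
    any of them must be replayed."""

    def __init__(self, window=8):
        self.window = window
        self.stores = deque()  # (dispatch_cycle, address) pairs

    def dispatch_store(self, cycle, addr):
        self.stores.append((cycle, addr))

    def must_replay(self, cycle, load_addr):
        # Age out stores older than the window of recent cycles
        while self.stores and cycle - self.stores[0][0] > self.window:
            self.stores.popleft()
        return any(addr == load_addr for _, addr in self.stores)
```

A load against a recently stored address replays; once the store ages out of the window, the same load proceeds normally.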
-
Patent number: 6687763Abstract: The present invention provides an ATAPI command receiving method in which the CPU 72 can quickly turn to other processing without spending much time capturing data, and in which data being captured by the CPU are prevented from being destroyed. In this ATAPI command receiving method, an ATAPI protocol control LSI 71 comprises a shared register storage area 711 (including a data FIFO 7112 for holding command packets) for receiving a command from the host computer via an ATA bus 2, and a buffer memory 712 which can be used as a RAM of the CPU 72. When a command is received and the CPU 72 grants data storage permission, the shared register values (including the command packet value) are stored at a storage destination address in the buffer memory 712 designated by the CPU 72.Type: GrantFiled: February 20, 2001Date of Patent: February 3, 2004Assignee: Matsushita Electric Industrial Co., Ltd.Inventors: Yoko Kimura, Yasushi Ueda
-
Patent number: 6687803Abstract: A processor architecture including a processor and local memory arrangement where the local memory may be accessed by the processor and other resources at substantially the same time. As a result, the processor may initiate a new or current process following a previous process without waiting for data or instructions from external resources. In addition, the loading of data for the next or subsequent process, the execution of a current process, and the extraction of results of a previous process can occur in parallel. Further, the processor may avoid memory load stall conditions because the processor does not have to access an external memory to execute the current process. In another embodiment, the local memory may be dynamically reallocated so that results from a previous process stored in the local memory may be accessed by the processor for a current process without accessing an external memory.Type: GrantFiled: March 2, 2001Date of Patent: February 3, 2004Assignee: Agere Systems, Inc.Inventor: David P Sonnier
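The overlap described above — loading the next process's data, executing the current process, and extracting the previous results in parallel — can be modeled as three local-memory regions rotating roles each step. A sketch under that reading (the class name and rotation policy are illustrative assumptions, not the patented architecture):

```python
class TripleBufferedLocalMemory:
    """Sketch of overlapped load/execute/extract: three buffers
    rotate roles each step, so the processor never waits on an
    external memory to start the next process."""

    def __init__(self):
        self.load_buf = []  # being filled for the next process
        self.exec_buf = []  # operated on by the current process
        self.out_buf = []   # results of the previous process

    def step(self, next_data, run):
        results = list(self.out_buf)                    # extract previous results
        self.out_buf = [run(x) for x in self.exec_buf]  # execute current process
        self.exec_buf = self.load_buf                   # promote loaded data
        self.load_buf = list(next_data)                 # load next process data
        return results
```

Data loaded at one step is executed two steps later and its results extracted the step after that, with all three activities proceeding on every step.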
-
Publication number: 20040019764Abstract: A method for processing data is provided that includes storing a write operation in a store buffer that indicates a first data element is to be written to a memory array element. The write operation includes a first address associated with a location in the memory array element to where the first data element is to be written. A read operation may be received at the store buffer, indicating that a second data element is to be read from the memory array element. The read operation includes a second address associated with a location in the memory array element from where the second data element is to be read. A hashing operation may be executed on the first and second addresses such that first and second hashed addresses are respectively produced. The hashed addresses are compared. If they match, the first data element is written to the memory array element before the read operation is executed.Type: ApplicationFiled: July 25, 2002Publication date: January 29, 2004Applicant: Texas Instruments IncorporatedInventors: Donald E. Steiss, Zheng Zhu
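The mechanism above hashes both addresses and drains the matching buffered write before the read executes. A sketch under that reading (the fold-and-XOR hash and the function names are illustrative assumptions; the patent does not fix a particular hash):

```python
def hash_addr(addr, bits=6):
    """Illustrative hash: fold the address down to `bits` bits by
    XORing successive bit groups together."""
    h = 0
    while addr:
        h ^= addr & ((1 << bits) - 1)
        addr >>= bits
    return h

def service_read(store_buffer, memory, read_addr):
    """If a buffered write's hashed address matches the read's
    hashed address, commit that write to memory first, then read."""
    h_read = hash_addr(read_addr)
    remaining = []
    for waddr, wdata in store_buffer:
        if hash_addr(waddr) == h_read:
            memory[waddr] = wdata        # write before the read
        else:
            remaining.append((waddr, wdata))
    store_buffer[:] = remaining          # non-matching writes stay buffered
    return memory.get(read_addr)
```

Because only hashes are compared, a hash collision conservatively flushes a write that did not actually conflict — correctness is preserved at a small cost in buffering.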
-
Publication number: 20040015665Abstract: In one embodiment, interleaved signals in a receiver are accessed by memory pointers and delivered to data stream locations without the need to transfer data to an intermediate physical buffer.Type: ApplicationFiled: July 19, 2002Publication date: January 22, 2004Inventor: Amit Dagan
-
Publication number: 20040010672Abstract: A system and method are described for a memory management processor which, using a table of reference addresses embedded in the object code, can open the appropriate memory pages to expedite the retrieval of information from memory referenced by instructions in the execution pipeline. A suitable compiler parses the source code and collects references to branch addresses, calls to other routines, or data references, and creates reference tables listing the addresses for these references at the beginning of each routine. These tables are received by the memory management processor as the instructions of the routine are beginning to be loaded into the execution pipeline, so that the memory management processor can begin opening memory pages where the referenced information is stored. Opening the memory pages where the referenced information is located before the instructions reach the instruction processor helps lessen memory latency delays which can greatly impede processing performance.Type: ApplicationFiled: July 11, 2002Publication date: January 15, 2004Inventor: Dean A. Klein
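The scheme above consumes a routine's compiler-emitted reference table as the routine begins loading, opening the memory pages that hold the referenced addresses before execution reaches them. A sketch under that reading (the page size, class name, and hit/miss model are illustrative assumptions):

```python
PAGE_SIZE = 4096  # assumed page granularity for this sketch

class MemoryManagementProcessor:
    """Sketch of table-driven page opening: addresses listed in a
    routine's reference table have their pages opened ahead of the
    instructions that will touch them."""

    def __init__(self):
        self.open_pages = set()

    def load_routine(self, reference_table):
        for addr in reference_table:
            self.open_pages.add(addr // PAGE_SIZE)  # open page early

    def access(self, addr):
        page = addr // PAGE_SIZE
        hit = page in self.open_pages  # open page: no row-activation delay
        self.open_pages.add(page)
        return hit
```

Accesses to addresses named in the table land on already-open pages, which is the latency saving the abstract describes.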
-
Publication number: 20040003192Abstract: Briefly, in accordance with an embodiment of the invention, a method and system to retrieve information from a memory is provided. The method may include transferring information from the memory in response to at least two synchronous burst read requests using pipelining.Type: ApplicationFiled: June 26, 2002Publication date: January 1, 2004Inventors: Shekoufeh Qawami, Chaitanya S. Rajguru, Sanjay S. Talreja
-
Publication number: 20030236960Abstract: A controller that supports both aligned and unaligned PIO data transfers associated with ATAPI devices in a fashion that reduces command overhead to improve ATAPI device system performance. A 32-bit wide sector FIFO, implemented with a 32-bit single port RAM using read and write pointer control logic, is used to store packet data transmitted to and received from the other data bus (i.e. USB). The 32-bit single port RAM functions as a FIFO to allow both the USB side and the ATAPI side to simultaneously access the sector FIFO.Type: ApplicationFiled: June 24, 2002Publication date: December 25, 2003Inventor: Brian Tse Deng
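The sector FIFO above is a single RAM array managed by read and write pointer control logic. A minimal sketch of that pointer discipline (the depth and exception behavior are illustrative assumptions; the real device is 32-bit single-port RAM, not a Python list):

```python
class SectorFIFO:
    """Sketch of a pointer-controlled FIFO over one RAM array:
    read and write pointers wrap around a fixed-size buffer."""

    def __init__(self, depth=128):
        self.ram = [0] * depth
        self.depth = depth
        self.wr = self.rd = self.count = 0

    def push(self, word):
        if self.count == self.depth:
            raise OverflowError("FIFO full")
        self.ram[self.wr] = word
        self.wr = (self.wr + 1) % self.depth  # write pointer wraps
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("FIFO empty")
        word = self.ram[self.rd]
        self.rd = (self.rd + 1) % self.depth  # read pointer wraps
        self.count -= 1
        return word
```

With independent pointers, a producer (the USB side) and a consumer (the ATAPI side) can each advance their own pointer, which is how one RAM serves both sides.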
-
Patent number: 6658523Abstract: In a high speed memory subsystem, differences in each memory device's minimum device read latency and differences in signal propagation time between the memory device and the memory controller can result in widely varying system read latencies. The present invention equalizes the system read latencies of every memory device in a high speed memory system by comparing the differences in the system read latencies of the devices and then operating each memory device with a device read latency that causes every device to exhibit the same system read latency.Type: GrantFiled: March 13, 2001Date of Patent: December 2, 2003Assignee: Micron Technology, Inc.Inventors: Jeffery W. Janzen, Brent Keeth, Kevin J. Ryan, Troy A. Manning, Brian Johnson
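The equalization above reduces to: measure each device's minimum system read latency, take the worst case, and pad every faster device up to it. A sketch of that calculation (function name and the cycle-unit latencies are illustrative):

```python
def equalize_read_latency(device_latencies):
    """Given each device's measured minimum system read latency
    (in clock cycles), return the extra delay to program into each
    device so that all devices respond with the same latency."""
    target = max(device_latencies.values())  # slowest device sets the pace
    return {dev: target - lat for dev, lat in device_latencies.items()}

# Each device is programmed with its extra delay; every read then
# completes with the uniform latency `target`.
```

Uniform latency lets the controller pipeline reads back-to-back without per-device scheduling.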
-
Patent number: 6658545Abstract: An apparatus and technique to allow internal bus activity of a system on a chip to be monitored external to the integrated circuit, but without requiring additional external pins. A snooping pass through device on the internal bus, e.g., a snooping external memory interface (EMI), includes operability to directly pass through activity on the internal bus to the external memory bus. One or more snoop cycles are inserted into a memory access of an internal bus of a system on a chip. The snooping pass through device preferably includes an external bus already having pins routed external to the integrated circuit. The external bus leading from the snooping pass through device (e.g., from the EMI) may be multiplexed for use both for its otherwise conventional function while not in a snoop cycle, and for use in directly observing activity on the internal bus during a snoop cycle. Additional signals may be multiplexed into the EMI for pass through during snoop cycles.Type: GrantFiled: February 16, 2000Date of Patent: December 2, 2003Assignee: Lucent Technologies Inc.Inventor: Surender Dayal
-
Publication number: 20030221048Abstract: A synchronous flash memory includes an array of non-volatile memory cells. The memory device has a package configuration that is compatible with an SDRAM. The memory device includes a pipelined buffer with selectable propagation paths to route data from the input connection to the output connection. Each propagation path requires a predetermined number of clock cycles. The non-volatile synchronous memory includes circuitry to route both memory data and register data through the pipelined output buffer to maintain consistent latency for both types of data.Type: ApplicationFiled: February 14, 2003Publication date: November 27, 2003Applicant: Micron Technology, Inc.Inventor: Frankie F. Roohparvar