Memory Access Pipelining Patents (Class 711/169)
  • Publication number: 20110246688
    Abstract: Embodiments of the invention describe arbitrating requests received from a plurality of agents for memory. Each memory request may indicate a priority level of the memory request and a size of the memory to be accessed. Said requests may be stored in a queue. Arbitration logic, coupled to the plurality of agents and the queue, may receive said memory requests and determine which requests to send to the queue based, at least in part, on the priority of each request and the size of the memory to be accessed by each memory request.
    Type: Application
    Filed: April 1, 2010
    Publication date: October 6, 2011
    Inventors: IRWIN VAZ, ROHIT NATARAJAN, ALOK MATHUR, SURI MEDAPATI
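The arbitration described in 20110246688 keys on two properties of each request: its priority level and the size of the memory to be accessed. A minimal sketch of such an arbiter follows, assuming an illustrative tie-breaking rule (within a priority level, smaller accesses are forwarded to the queue first); the class and function names are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class MemoryRequest:
    agent: str
    priority: int   # lower value = more urgent
    size: int       # bytes to be accessed

def arbitrate(pending, queue_slots):
    """Choose which pending requests to forward to the request queue."""
    ranked = sorted(pending, key=lambda r: (r.priority, r.size))
    return ranked[:queue_slots]

if __name__ == "__main__":
    pending = [MemoryRequest("dma", 2, 4096),
               MemoryRequest("cpu0", 0, 64),
               MemoryRequest("gpu", 1, 256)]
    for r in arbitrate(pending, 2):
        print(f"send to queue: {r.agent} (prio={r.priority}, size={r.size})")
```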
  • Publication number: 20110238941
    Abstract: A data processing system employs an improved arbitration process in selecting pending memory access requests received from the one or more processor cores for servicing by the memory. The arbitration process uses memory timing and state information pertaining both to memory access requests already submitted to the memory for servicing and to the pending memory access requests which have not yet been selected for servicing by the memory. The memory timing and state information may be predicted memory timing and state information; that is, the component of the data processing system that implements the improved scheduling algorithm may not be able to determine the exact point in time at which a memory controller initiates a memory access for a corresponding memory access request and thus the component maintains information that estimates or otherwise predicts the particular state of the memory at any given time.
    Type: Application
    Filed: March 29, 2010
    Publication date: September 29, 2011
    Applicant: FREESCALE SEMICONDUCTOR, INC.
    Inventors: Kun Xu, David B. Kramer
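The scheme in 20110238941 arbitrates using memory timing and state information that may only be predicted, since the scheduler cannot observe exactly when the controller starts each access. A sketch under that assumption follows; the per-bank bookkeeping, the scoring rule (predicted row hits first, then least predicted wait), and all names are illustrative.

```python
from collections import namedtuple

Request = namedtuple("Request", "bank row")

class PredictedBankState:
    """What the scheduler believes about one bank, not the controller's truth."""
    def __init__(self):
        self.open_row = None      # row predicted to be open
        self.ready_cycle = 0      # earliest cycle the bank is predicted free

def pick_next(pending, banks, now):
    """Prefer requests predicted to hit an open row with the least wait."""
    def score(req):
        state = banks[req.bank]
        row_hit = state.open_row == req.row
        wait = max(0, state.ready_cycle - now)
        return (not row_hit, wait)
    return min(pending, key=score)

banks = {0: PredictedBankState(), 1: PredictedBankState()}
banks[0].open_row, banks[1].ready_cycle = 7, 12
pending = [Request(bank=1, row=3), Request(bank=0, row=7)]
print(pick_next(pending, banks, now=5))   # Request(bank=0, row=7): predicted row hit
```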
  • Patent number: 8028130
    Abstract: A method and apparatus for implementation of a pipeline structure for data transfer. A request is received from a first domain to access a second domain during a first clock cycle. A pipeline structure is used to perform at least a portion of the request during a subsequent clock cycle.
    Type: Grant
    Filed: July 21, 2004
    Date of Patent: September 27, 2011
    Assignee: Oracle America, Inc.
    Inventors: Steven F. Weiss, Andrew E. Phelps, Patricia Shanahan
  • Patent number: 8015366
    Abstract: A method for communicating between nodes of a plurality of nodes is disclosed. Each node includes a plurality of processors and an interconnect chipset. The method issues a request for data from a processor in a first node and passes the request for data to other nodes through an expansion port (or scalability port). The method also starts an access of a memory in response to the request for data and snoops a processor cache of each processor in each node. The method accordingly identifies the location of the data in either the processor cache or memory in the node having the processor issuing the request or in a processor cache or memory of another node.
    Type: Grant
    Filed: July 24, 2008
    Date of Patent: September 6, 2011
    Assignee: Fujitsu Limited
    Inventors: James C. Wilson, Wolf-Dietrich Weber
  • Patent number: 8006015
    Abstract: A device and a method for managing access requests, the method includes: (i) receiving, from a master component coupled to a master bus, multiple access requests to access a slave component over a pipelined slave bus; acknowledging a received access request if: (a) at least an inter-access request delay period lapsed from a last acknowledgement of an access request; (b) an amount of pending acknowledged access requests is below a threshold; wherein the threshold is determined in response to a pipeline depth of the pipelined slave bus; (c) the received access request is valid; wherein a validity of an access request is responsive to a reception of an access request cancellation request; and (ii) providing information from the slave component, in response to at least one acknowledged access request.
    Type: Grant
    Filed: November 8, 2006
    Date of Patent: August 23, 2011
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Yaki Devilla, Moshe Anschel, Kostantin Godin, Amit Gur, Itay Peled
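Patent 8006015 acknowledges a received access request only when (a) the inter-access delay has elapsed since the last acknowledgement, (b) the count of pending acknowledged requests is below a threshold derived from the pipelined slave bus depth, and (c) the request has not been cancelled. The sketch below encodes those three checks; the assumption that the threshold equals the pipeline depth, and all names, are illustrative.

```python
class SlaveBusGate:
    def __init__(self, pipeline_depth, inter_access_delay):
        self.threshold = pipeline_depth          # assumed mapping to the depth
        self.delay = inter_access_delay
        self.last_ack_time = float("-inf")
        self.pending_acked = 0
        self.cancelled = set()

    def may_acknowledge(self, req_id, now):
        if now - self.last_ack_time < self.delay:
            return False                         # (a) inter-access delay not elapsed
        if self.pending_acked >= self.threshold:
            return False                         # (b) too many acknowledged requests pending
        if req_id in self.cancelled:
            return False                         # (c) request was cancelled
        self.last_ack_time = now
        self.pending_acked += 1
        return True

    def complete(self, req_id):
        self.pending_acked -= 1

gate = SlaveBusGate(pipeline_depth=4, inter_access_delay=2)
print(gate.may_acknowledge("req0", now=0))   # True
print(gate.may_acknowledge("req1", now=1))   # False: delay not yet elapsed
```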
  • Patent number: 8006044
    Abstract: Some embodiments of the invention pertain to a memory system containing multiple memory devices, in which one or more of the memory devices may be flexibly selected at one time for a common operation to be performed by all the selected devices concurrently.
    Type: Grant
    Filed: December 21, 2006
    Date of Patent: August 23, 2011
    Assignee: Intel Corporation
    Inventors: Shekoufeh Qawami, Rodney R. Rozman, Sean S. Eilert
  • Patent number: 7996623
    Abstract: Method and apparatus for managing the storage of data in a cache memory by placing pending read requests for sequential data in a dedicated read ahead stream control (RASC) data structure, and further configured for dynamically switching both ways, in response to data stored in the RASC, between speculative non-requested read ahead data streaming and read behind stream locking on the read requests in the RASC.
    Type: Grant
    Filed: June 30, 2006
    Date of Patent: August 9, 2011
    Assignee: Seagate Technology LLC
    Inventor: Michael D. Walker
  • Publication number: 20110185146
    Abstract: A method for accessing a memory includes receiving a first address wherein the first address corresponds to a demand fetch, receiving a second address wherein the second address corresponds to a speculative prefetch, providing first data from the memory in response to the demand fetch in which the first data is accessed asynchronous to a system clock, and providing second data from the memory in response to the speculative prefetch in which the second data is accessed synchronous to the system clock. The memory may include a plurality of pipeline stages in which providing the first data in response to the demand fetch is performed such that each pipeline stage is self-timed independent of the system clock and providing the second data in response to the speculative prefetch is performed such that each pipeline stage is timed based on the system clock to be synchronous with the system clock.
    Type: Application
    Filed: January 22, 2010
    Publication date: July 28, 2011
    Inventors: Timothy J. Strauss, David W. Chrudimsky, William C. Moyer
  • Patent number: 7953946
    Abstract: Controlling data retention of a collection of data in a data store. An instruction is received to store a shadow collection of data to the data store. The data store has a previous version of the shadow collection of data. An available amount of data storage space on the data store is identified. An amount of data storage space needed for storing the shadow collection of data to the data store is estimated based on the received instruction. It is determined whether the identified available amount of data storage space is sufficient for the estimated amount of data storage space. The shadow collection of data is stored to the data store when said determination indicates that the identified available amount of data storage space is sufficient and the previous version is permitted to be deleted or to be overwritten.
    Type: Grant
    Filed: April 16, 2007
    Date of Patent: May 31, 2011
    Assignee: Microsoft Corporation
    Inventors: Karandeep Singh Anand, Vijay Sen, Abid Ali, Manoj K. Valiyaparambil
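The retention check in 7953946 compares the estimated space for the shadow copy against the available space, with the previous version eligible for deletion or overwrite. A tiny sketch of that decision follows; counting a deletable previous version as reclaimable space is an assumption made for illustration.

```python
def can_store_shadow(available_bytes, estimated_bytes,
                     previous_version_bytes, previous_deletable):
    """Decide whether the shadow collection can be stored to the data store."""
    reclaimable = previous_version_bytes if previous_deletable else 0
    return available_bytes + reclaimable >= estimated_bytes

print(can_store_shadow(available_bytes=10_000_000,
                       estimated_bytes=12_000_000,
                       previous_version_bytes=5_000_000,
                       previous_deletable=True))   # True: old version frees enough space
```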
  • Patent number: 7941627
    Abstract: An instruction set architecture (ISA) includes an asynchronous memory move (AMM) synchronization (SYNC) instruction. When processor of a data processing system executes the AMM SYNC instruction, the processor prevents an AMM operation generated by a subsequently received/executed AMM ST instruction from proceeding with the data move portion of the AMM operation within the memory subsystem until completion of all ongoing memory access operations within the memory subsystem and fabric. The AMM operation does not wait for a normal barrier operation. The processor forwards the information relevant to initiate the AMM operation to an asynchronous memory mover logic, and signals the logic to not proceed with the AMM operation until signaled of the completion of the AMM SYNC.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: May 10, 2011
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Robert S. Blackmore, Chulho Kim, Balaram Sinharoy, Hanhong Xue
  • Patent number: 7916554
    Abstract: Systems and methods for reducing delays between successive write and read accesses in multi-bank memory devices are provided. Computer circuits modify the relative timing between addresses and data of write accesses, reducing delays between successive write and read accesses. Memory devices that interface with these computer circuits use posted write accesses to effectively return the modified relative timing to its original timing before processing the write access.
    Type: Grant
    Filed: April 24, 2007
    Date of Patent: March 29, 2011
    Assignee: Round Rock Research, LLC
    Inventor: J. Thomas Pawlowski
  • Patent number: 7908425
    Abstract: In a read method for a memory device, a bit line is set with data in a first memory cell; and the data on the bit line is stored in a register. The data in the register is transferred to a data bus while setting the bit line with data in a second memory cell. In another read method for a memory device, a bit line of a first memory cell is initialized and the bit line is pre-charged with a pre-charge voltage. Data in a memory cell on the bit line is developed, and a register corresponding to the bit line is initialized. The data on the bit line is stored in the register. The data in the register is output externally while performing the initializing, pre-charging, developing and initializing to set the bit line with data in a second memory cell.
    Type: Grant
    Filed: June 27, 2008
    Date of Patent: March 15, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin-Yub Lee, Sang-Won Hwang
  • Patent number: 7908452
    Abstract: A computer system includes a memory hub controller coupled to a plurality of memory modules. The memory hub controller includes a memory request queue that couples memory requests and corresponding request identifiers to the memory modules. Each of the memory modules accesses memory devices based on the memory requests and generates response status signals from the request identifier when the corresponding memory request is serviced. These response status signals are coupled from the memory modules to the memory hub controller along with or separate from any read data. The memory hub controller uses the response status signals to control the coupling of memory requests to the memory modules and thereby control the number of outstanding memory requests in each of the memory modules.
    Type: Grant
    Filed: April 5, 2010
    Date of Patent: March 15, 2011
    Assignee: Round Rock Research, LLC
    Inventors: Joseph M. Jeddeloh, Terry R. Lee
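In 7908452 the hub controller uses per-module response status signals to bound how many requests are outstanding in each memory module. The sketch below shows that throttling loop; the per-module limit of one-to-four and the retry-by-requeue discipline are illustrative assumptions.

```python
from collections import deque, defaultdict

class HubController:
    def __init__(self, max_outstanding_per_module=4):
        self.limit = max_outstanding_per_module
        self.outstanding = defaultdict(int)      # module -> in-flight request count
        self.request_queue = deque()             # (request_id, module)

    def try_issue(self):
        """Forward queued requests only to modules below their outstanding limit."""
        issued = []
        for _ in range(len(self.request_queue)):
            req_id, module = self.request_queue.popleft()
            if self.outstanding[module] < self.limit:
                self.outstanding[module] += 1
                issued.append((req_id, module))
            else:
                self.request_queue.append((req_id, module))  # retry later
        return issued

    def on_response_status(self, req_id, module):
        """A module reports that the identified request was serviced."""
        self.outstanding[module] -= 1

hub = HubController(max_outstanding_per_module=1)
hub.request_queue.extend([("r1", "mod0"), ("r2", "mod0")])
print(hub.try_issue())                 # [('r1', 'mod0')]: r2 waits for status
hub.on_response_status("r1", "mod0")
print(hub.try_issue())                 # [('r2', 'mod0')]
```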
  • Publication number: 20110055510
    Abstract: A method and program product for processing data by a pipeline of a single hardware-implemented virtual multiple instance finite state machine (VMI FSM). An input token of multiple input tokens is selected to enter a pipeline of the VMI FSM. The input token includes a reference to an FSM instance. In one embodiment, the reference is an InfiniBand QP number. After being received at the pipeline, a current state and context of the FSM instance are fetched from an array based on the reference and inserted into a field of the input token. A new state of the FSM instance is determined and an output token is generated. The new state and the output token are based on the current state, context, a first input value, and an availability of a resource. The new state of the first FSM instance is written to the array.
    Type: Application
    Filed: August 25, 2009
    Publication date: March 3, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rolf K. Fritz, Andreas Muller, Thomas Schlipf, Daniel Thiele
  • Patent number: 7895401
    Abstract: We propose a new form of software transactional memory (STM) designed to support dynamic-sized data structures, and we describe a novel non-blocking implementation. The non-blocking property we consider is obstruction-freedom. Obstruction-freedom is weaker than lock-freedom; as a result, it admits substantially simpler and more efficient implementations. An interesting feature of our obstruction-free STM implementation is its ability to use modular contention managers to ensure progress in practice.
    Type: Grant
    Filed: December 20, 2007
    Date of Patent: February 22, 2011
    Assignee: Oracle America, Inc.
    Inventors: Mark S. Moir, Victor M. Luchangco, Maurice Herlihy
  • Patent number: 7890670
    Abstract: DMA transfer completion notification includes: inserting, by an origin DMA engine on an origin node in an injection first-in-first-out (‘FIFO’) buffer, a data descriptor for an application message to be transferred to a target node on behalf of an application on the origin node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying a packet header for a completion notification packet; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; sending, by the origin DMA engine, the completion notification packet to a local reception FIFO buffer using a local memory FIFO transfer operation; and notifying, by the origin DMA engine, the application that transfer of the message is complete in response to receiving the completion notification packet in the local reception FIFO buffer.
    Type: Grant
    Filed: May 9, 2007
    Date of Patent: February 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, Jeffrey J. Parker
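The completion mechanism in 7890670 places a completion-notification descriptor in the injection FIFO immediately after the data descriptor, and the application learns of completion when the notification packet arrives in the local reception FIFO. The descriptor fields and the explicit step/poll functions below are assumptions for illustration.

```python
from collections import deque

injection_fifo = deque()
local_reception_fifo = deque()

def post_message(message_id, payload, target):
    """Queue the data descriptor, then the completion-notification descriptor."""
    injection_fifo.append({"kind": "data", "id": message_id,
                           "payload": payload, "target": target})
    injection_fifo.append({"kind": "completion", "id": message_id})

def dma_engine_step():
    """Process one descriptor, as the origin DMA engine would."""
    if not injection_fifo:
        return
    d = injection_fifo.popleft()
    if d["kind"] == "data":
        pass  # transfer d["payload"] to d["target"] (omitted in this sketch)
    else:
        # local memory FIFO transfer of the completion packet
        local_reception_fifo.append({"completed": d["id"]})

def application_poll():
    while local_reception_fifo:
        pkt = local_reception_fifo.popleft()
        print("transfer complete:", pkt["completed"])

post_message(1, b"hello", target="node7")
dma_engine_step(); dma_engine_step()
application_poll()
```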
  • Patent number: 7882325
    Abstract: A single micro-instruction to perform either an N-bit or a 2N-bit load is provided. A microprocessor having an N-bit load port performs either an N-bit load or a 2N-bit load in a single cycle with the same micro-instruction being used for both the N-bit and the 2N-bit load.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: February 1, 2011
    Assignee: Intel Corporation
    Inventors: Zeev Sperber, Robert Valentine, Ehud Cohen, Doron Orenstien, Benny Eitan
  • Patent number: 7882296
    Abstract: Circuits, apparatus, and methods for avoiding deadlock conditions in a bus fabric. One exemplary embodiment provides an address decoder for determining whether a received posted request is a peer-to-peer request. If it is, the posted request is sent as a non-posted request. A limit on the number of pending non-posted requests is maintained and not exceeded, such that deadlock is avoided. Another exemplary embodiment provides an arbiter that tracks a number of pending posted requests. When the number of pending posted requests reaches a predetermined or programmable level, a Block Peer-to-Peer signal is sent to the arbiter's clients, again avoiding deadlock.
    Type: Grant
    Filed: December 9, 2008
    Date of Patent: February 1, 2011
    Assignee: NVIDIA Corporation
    Inventor: David G. Reed
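The first embodiment of 7882296 converts a posted peer-to-peer request into a non-posted request and caps the number of pending non-posted requests. A sketch of that gate follows; the stall-and-retry behaviour when the cap is reached is an illustrative assumption.

```python
class P2PDecoder:
    def __init__(self, non_posted_limit):
        self.limit = non_posted_limit
        self.pending_non_posted = 0

    def submit_posted(self, is_peer_to_peer):
        if not is_peer_to_peer:
            return "forward as posted"
        if self.pending_non_posted >= self.limit:
            return "stall"                       # caller retries later; limit never exceeded
        self.pending_non_posted += 1
        return "forward as non-posted"

    def on_completion(self):
        self.pending_non_posted -= 1

dec = P2PDecoder(non_posted_limit=2)
print(dec.submit_posted(is_peer_to_peer=True))    # forward as non-posted
print(dec.submit_posted(is_peer_to_peer=False))   # forward as posted
```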
  • Patent number: 7877566
    Abstract: A read command protocol and a method of accessing a nonvolatile memory device having an internal cache memory. A memory device is configured to accept first and second read commands, outputting the first requested data while simultaneously reading the second requested data. In addition, the memory device may be configured to send or receive a confirmation indicator.
    Type: Grant
    Filed: January 25, 2005
    Date of Patent: January 25, 2011
    Assignee: Atmel Corporation
    Inventor: Vijaya P. Adusumilli
  • Patent number: 7873757
    Abstract: A direct memory access controller for controlling data transfer between a plurality of data sources and a plurality of data destinations is disclosed. The plurality of data sources and data destinations communicate with the direct memory access controller via a plurality of channels; the direct memory access controller further communicates with a memory and a processor. The memory stores two sets of control data for each of the plurality of channels and for the processor. The direct memory access controller is responsive to a data transfer request received from one of said plurality of channels or from said processor to access one set of said corresponding control data stored in said memory, said direct memory access controller performing at least a portion of said data transfer requested in dependence upon said accessed control data.
    Type: Grant
    Filed: February 16, 2007
    Date of Patent: January 18, 2011
    Assignee: ARM Limited
    Inventors: Paul Kimelman, Edmond John Simon Ashfield, Steven Richard Mellor, Ian Field
  • Patent number: 7870351
    Abstract: Systems and methods for controlling memory access operation are disclosed. The system may include one or more requestors performing requests to memory devices. Within a memory controller, a request queue receives requests from a requestor, a bank decoder determines a destination bank, and the request is placed in an appropriate bank queue. An ordering unit determines if the current request can be reordered relative to the received order and generates a new memory cycle order based on the reordering determination. The reordering may be based on whether there are multiple requests to the same memory page, multiple reads, or multiple writes. A memory interface executes each memory request in the memory cycle order. A data buffer holds write data until it is written to the memory and read data until it is returned to the requestor. The data buffer also may hold memory words used in read-modify-write operations.
    Type: Grant
    Filed: November 15, 2007
    Date of Patent: January 11, 2011
    Assignee: Micron Technology, Inc.
    Inventor: David R. Resnick
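The ordering unit of 7870351 may promote requests that target the same memory page or that match the type (read or write) of recent traffic. The sketch below shows one plausible scoring of a bank queue; the exact criteria and their priority order are assumptions, not the claimed algorithm.

```python
from collections import namedtuple

Req = namedtuple("Req", "order bank page is_write")

def next_memory_cycle(bank_queue, open_page, last_was_write):
    """Pick the next request from one bank's queue, possibly out of received order."""
    def score(r):
        return (r.page != open_page,            # same-page requests first
                r.is_write != last_was_write,   # then group reads with reads, writes with writes
                r.order)                        # finally fall back to received order
    chosen = min(bank_queue, key=score)
    bank_queue.remove(chosen)
    return chosen

q = [Req(0, bank=2, page=9, is_write=False), Req(1, bank=2, page=4, is_write=False)]
print(next_memory_cycle(q, open_page=4, last_was_write=False))  # the page-4 request is promoted
```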
  • Patent number: 7865685
    Abstract: An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it.
    Type: Grant
    Filed: February 13, 2009
    Date of Patent: January 4, 2011
    Assignee: Mosaid Technologies Incorporated
    Inventor: Ian Mes
  • Patent number: 7865657
    Abstract: A method and device for copying-back data in a multi-chip flash memory device having first and second memory chips. The method may include reading first source data from a first source region of one of the memory chips; programming the first source data into a target region included in one of the memory chips; and reading second source data from a second source region of the other memory chip, different from the memory chip including the target region. Reading the second source data may be carried out while programming the first source data.
    Type: Grant
    Filed: December 28, 2006
    Date of Patent: January 4, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: In-Young Kim, Young-Joon Choi, Jong-Hwa Kim, Soon-Young Kim
  • Patent number: 7844769
    Abstract: A memory system having a data bus coupling a memory controller and a memory. The data bus has a number of data bus bits. The data bus is programmably apportioned to a first portion dedicated to transmitting data from the memory controller to the memory and a second portion dedicated to transmitting data from the memory to the memory controller. The apportionment can be assigned by suitable connection of pins on a memory chip in the memory and the memory controller to logical values. Alternatively, the apportionment can be scanned into the memory controller and the memory at bring up time. In another alternative, the apportionment can be changed by suspending data transfer and dynamically changing the sizes of the first portion and the second portion.
    Type: Grant
    Filed: July 26, 2006
    Date of Patent: November 30, 2010
    Assignee: International Business Machines Corporation
    Inventors: Gerald Keith Bartley, Darryl John Becker, John Michael Borkenhagen, Paul Eric Dahlen, Philip Raymond Germann, Andrew Benson Maki, Mark Owen Maxson
  • Patent number: 7836267
    Abstract: A method for backing up a computer-readable data file with a computerized backup application, where the data file is open and locked for exclusive access by an owning application that is mutually independent of the backup application. The backup application bypasses the lock on the data file. Any write operations to the data file by the owning application are intercepted and delayed. The address range of any write operations directed to the data file by the owning application during the backup procedure is written to a change log file, where the change log file contains no indication of the content of the write operations. At least a portion of the data file is copied to a backup file, and any intercepted and delayed write operations are passed on to the data file after the data to be overwritten has been copied to the backup file.
    Type: Grant
    Filed: August 30, 2007
    Date of Patent: November 16, 2010
    Assignee: Barracuda Networks Inc
    Inventor: Kenneth J. Cross
  • Patent number: 7830731
    Abstract: A semiconductor memory device includes a pipe latch unit having a plurality of pipe latches for latching data. An input controller controls input timing of data transmitted from data line to the pipe latch unit. An output controller controls output timing of data latched in the pipe latch unit. An initialization controller controls the input controller and the output controller to thereby initialize the pipe latch unit in response to a read/write flag signal which is activated during a write operation.
    Type: Grant
    Filed: October 29, 2008
    Date of Patent: November 9, 2010
    Assignee: Hynix Semiconductor Inc.
    Inventors: Kyoung-Nam Kim, Ho-Youb Cho
  • Patent number: 7827337
    Abstract: A device and a method for sharing a memory interface are disclosed. According to preferred embodiments of the present invention, a supplementary control unit included in a digital processor can control some of the pins constituting a memory interface to be shared by a plurality of memories. With the present invention, the number of pins included in a memory interface can be minimized, thereby reducing the size of the supplementary control unit, saving manufacturing cost, and improving processing efficiency.
    Type: Grant
    Filed: March 8, 2006
    Date of Patent: November 2, 2010
    Assignee: Mtekvision Co., Ltd.
    Inventor: Jong-Sik Jeong
  • Patent number: 7823159
    Abstract: A computing system that includes one or more processing elements, a memory connected to a host processor and a multitask controller, where the multitask controller includes a scheduler unit, a data flow unit, an executive unit, and a resource manager unit. The processing elements, the scheduler unit, the data flow unit, the executive unit, and the resource manager unit are each synchronously clocked by a clock signal. The processing elements, the multitask controller interface of the memory, the executive unit, and the scheduler unit are each operative to change one or more interface signals on a positive transition of the clock signal, while the resource manager unit and data flow unit are each operative to change one or more interface signals on a negative transition of the clock signal. Because adjacent units are clocked on opposite edges, the speed of transfer of information between the units is improved.
    Type: Grant
    Filed: December 3, 2004
    Date of Patent: October 26, 2010
    Inventor: Edwin E. Klingman
  • Patent number: 7808825
    Abstract: When performing a program operation, a non-volatile memory device comprising multiple planes performs a cache write operation by employing a page buffer circuit of a plane that does not perform the program operation. A data line mux transfers externally input first data to a page buffer unit of the plane to be programmed according to a plane select signal; transfers second data to a page buffer unit of a plane on which a program operation is not performed, while the program of the selected plane is performed; and, after the first data is programmed, provides a data transfer path between one page buffer unit and the other page buffer unit according to a data transfer control signal.
    Type: Grant
    Filed: June 27, 2008
    Date of Patent: October 5, 2010
    Assignee: Hynix Semiconductor Inc.
    Inventor: Won Sun Park
  • Patent number: 7810013
    Abstract: In some embodiments, a chip includes a memory core, a write buffer, transmitters, receivers to receive groups of signals including write data signals and associated error detection signals, and circuitry to provide the error detection signals to the transmitters to be transmitted to another chip and to provide the write data signals to the write buffer. The write data signals are held in the write buffer at least until it is determined whether their associated transmitted error detection signals match corresponding error detection signals stored in the other chip. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 30, 2006
    Date of Patent: October 5, 2010
    Assignee: Intel Corporation
    Inventor: Kuljit S. Bains
  • Publication number: 20100217928
    Abstract: An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it.
    Type: Application
    Filed: May 4, 2010
    Publication date: August 26, 2010
    Inventor: Ian Mes
  • Publication number: 20100211935
    Abstract: In general, methods and apparatus for implementing a Quality of Service (QoS) model are disclosed. A Quality of Service (QoS) contract with an initiating network device may be satisfied. A request may be received from the initiating network device in a first time less than or equal to an ordinal number times an arrival interval. The ordinal number signifies a position of the request among a group of requests. The request that has been serviced may be returned to the initiator in a second time less than or equal to a constant term plus the ordinal number times a service interval.
    Type: Application
    Filed: February 16, 2010
    Publication date: August 19, 2010
    Applicant: SONICS, INC.
    Inventors: Wolf-Dietrich Weber, Chien-Chun Chou, Drew E. Wingard
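The QoS contract of 20100211935 bounds the n-th request of a group by n times the arrival interval on receipt and by a constant term plus n times the service interval on return. A worked check of both bounds follows; the concrete cycle counts are only an example.

```python
def contract_satisfied(ordinal, arrival_time, return_time,
                       arrival_interval, service_interval, constant_term):
    """Check both timing bounds of the QoS contract for one request."""
    arrived_in_time = arrival_time <= ordinal * arrival_interval
    returned_in_time = return_time <= constant_term + ordinal * service_interval
    return arrived_in_time and returned_in_time

# e.g. the 3rd request of a group, with a 10-cycle arrival interval,
# a 5-cycle service interval and a 20-cycle constant term:
print(contract_satisfied(ordinal=3, arrival_time=25, return_time=33,
                         arrival_interval=10, service_interval=5,
                         constant_term=20))   # True: 25 <= 30 and 33 <= 35
```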
  • Publication number: 20100199056
    Abstract: Efficient and convenient storage systems and methods are presented. In one embodiment, a fractured erase process is performed in which a pre-program process, erase process and soft program process are initiated independently. Memory cells can be pre-programmed and conditioned independent of an erase command. The initiation of the independent pre-programming is partitioned from an erase command, which is partitioned from initiation of a soft-programming command. A cell is erased, wherein the erasing includes erase operations that are partitioned from the pre-programming process. In one embodiment, the independent pre-program process is run in the background.
    Type: Application
    Filed: February 5, 2009
    Publication date: August 5, 2010
    Inventors: Clifford A. ZITLAW, Hagop Artin NAZARIAN
  • Patent number: 7761682
    Abstract: The present invention generally relates to memory controllers operating in a system containing a variable system clock. The memory controller may exchange data with a processor operating at a variable processor clock frequency. However, the memory controller may perform memory accesses at a constant memory clock frequency. Asynchronous buffers may be provided to transfer data across the variable and constant clock domains. To prevent read buffer overflow while switching to a lower processor clock frequency, the memory controller may quiesce the memory sequencers and pace read data from the sequencers at a slower rate. To prevent write data underruns, the memory controller's data flow logic may perform handshaking to ensure that write data is completely received in the buffer before performing a write access.
    Type: Grant
    Filed: August 13, 2008
    Date of Patent: July 20, 2010
    Assignee: International Business Machines Corporation
    Inventors: Melissa Ann Barnum, Mark David Bellows, Paul Allen Ganfield, Lonny Lambrecht, Tolga Ozguner
  • Patent number: 7752393
    Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design for forwarding store data to loads in a pipelined processor is provided. In one implementation, a processor is provided that includes a decoder operable to decode an instruction, and a plurality of execution units operable to respectively execute a decoded instruction from the decoder. The plurality of execution units include a load/store execution unit operable to execute decoded load instructions and decoded store instructions and generate corresponding load memory operations and store memory operations. The store queue is operable to buffer one or more store memory operations prior to the one or more memory operations being completed, and the store queue is operable to forward store data of the one or more store memory operations buffered in the store queue to a load memory operation on a byte-by-byte basis.
    Type: Grant
    Filed: May 4, 2008
    Date of Patent: July 6, 2010
    Assignee: International Business Machines Corporation
    Inventors: Jason A. Cox, Kevin C. K. Lin, Eric F. Robinson
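The store queue in 7752393 forwards store data to a younger load on a byte-by-byte basis, so different bytes of one load may come from different buffered stores. The sketch below shows that merge, with younger stores overriding older ones per byte; the queue layout and the None placeholder for bytes supplied by the cache are illustrative assumptions.

```python
def forward_load(store_queue, load_addr, load_size):
    """store_queue: list of (addr, data bytes), oldest first; returns per-byte values."""
    result = [None] * load_size                  # None = byte must come from the cache
    for addr, data in store_queue:               # walk oldest -> youngest
        for i in range(load_size):
            byte_addr = load_addr + i
            if addr <= byte_addr < addr + len(data):
                result[i] = data[byte_addr - addr]   # a younger store overrides an older one
    return result

sq = [(0x100, b"\xaa\xbb\xcc\xdd"), (0x102, b"\x11\x22")]
print(forward_load(sq, 0x100, 4))   # [170, 187, 17, 34]: last two bytes from the younger store
```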
  • Patent number: 7752364
    Abstract: A system controller communicates with devices in a serial interconnection. The system controller sends a read command, a device address identifying a target device in the serial interconnection and a memory location. The target device responds to the read command to read data in the location identified by the memory location. Read data is provided as an output signal that is transmitted from a last device in the serial interconnection to a data receiver of the controller. The data receiver establishes acquisition instants relating to clocks in consideration of a total flow-through latency in the serial interconnection. Where each device has a clock synchronizer, a propagated clock signal through the serial interconnection is used for establishing the acquisition instants. Because the read data is latched in response to the established acquisition instants, which account for the flow-through latency, valid data is latched in the data receiver.
    Type: Grant
    Filed: November 19, 2007
    Date of Patent: July 6, 2010
    Assignee: Mosaid Technologies Incorporated
    Inventors: HakJune Oh, Hong Beom Pyeon, Jin-Ki Kim
  • Patent number: 7739477
    Abstract: Page size prediction is used to predict a page size for a page of memory being accessed by a memory access instruction such that the predicted page size can be used to access an address translation data structure. By doing so, an address translation data structure may support multiple page sizes in an efficient manner and with little additional circuitry disposed in the critical path for address translation, thereby increasing performance.
    Type: Grant
    Filed: April 10, 2007
    Date of Patent: June 15, 2010
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Powers Bradford, Jason Nathaniel Dale, Kimberly Marie Fernsler, Timothy Hume Heil, James Allen Rose
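Patent 7739477 predicts the page size of an access so the address translation structure can be probed with that size first. A sketch of one way such a predictor could sit in front of a multi-page-size TLB follows; the table indexing by instruction address, the page-size set, and the dictionary-based TLB are all assumptions for illustration.

```python
PAGE_SIZES = [4 << 10, 64 << 10, 16 << 20]       # 4 KiB, 64 KiB, 16 MiB (assumed set)

class PageSizePredictor:
    def __init__(self, entries=256):
        self.table = [PAGE_SIZES[0]] * entries    # default prediction: smallest page

    def predict(self, pc):
        return self.table[pc % len(self.table)]

    def update(self, pc, actual_size):
        self.table[pc % len(self.table)] = actual_size

def translate(pc, vaddr, tlb, predictor):
    """Probe the TLB with the predicted page size first; correct the predictor on a mispredict."""
    size = predictor.predict(pc)
    entry = tlb.get((vaddr // size, size))
    if entry is None:
        for size in PAGE_SIZES:                   # slow path: try every supported size
            entry = tlb.get((vaddr // size, size))
            if entry is not None:
                predictor.update(pc, size)        # remember the right size for next time
                break
    return entry

tlb = {(0x1234 // (64 << 10), 64 << 10): "frame 99"}   # a 64 KiB mapping
pred = PageSizePredictor()
print(translate(pc=0x400, vaddr=0x1234, tlb=tlb, predictor=pred))  # frame 99
print(pred.predict(0x400) == 64 << 10)                             # True: predictor corrected
```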
  • Patent number: 7716444
    Abstract: A computer system includes a memory hub controller coupled to a plurality of memory modules. The memory hub controller includes a memory request queue that couples memory requests and corresponding request identifiers to the memory modules. Each of the memory modules accesses memory devices based on the memory requests and generates response status signals from the request identifier when the corresponding memory request is serviced. These response status signals are coupled from the memory modules to the memory hub controller along with or separate from any read data. The memory hub controller uses the response status signals to control the coupling of memory requests to the memory modules and thereby control the number of outstanding memory requests in each of the memory modules.
    Type: Grant
    Filed: July 24, 2007
    Date of Patent: May 11, 2010
    Assignee: Round Rock Research, LLC
    Inventors: Joseph M. Jeddeloh, Terry R. Lee
  • Publication number: 20100115221
    Abstract: A system and method are described for a memory management processor which, using a table of reference addresses embedded in the object code, can open the appropriate memory pages to expedite the retrieval of information from memory referenced by instructions in the execution pipeline. A suitable compiler parses the source code and collects references to branch addresses, calls to other routines, or data references, and creates reference tables listing the addresses for these references at the beginning of each routine. These tables are received by the memory management processor as the instructions of the routine are beginning to be loaded into the execution pipeline, so that the memory management processor can begin opening memory pages where the referenced information is stored. Opening the memory pages where the referenced information is located before the instructions reach the instruction processor helps lessen memory latency delays which can greatly impede processing performance.
    Type: Application
    Filed: January 11, 2010
    Publication date: May 6, 2010
    Inventor: Dean A. Klein
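The memory management processor of 20100115221 receives a compiler-built table of referenced addresses as a routine starts loading into the execution pipeline, and opens the memory pages holding those addresses ahead of the instruction processor. The sketch below shows that walk; the page size and the open_page callback are illustrative assumptions.

```python
PAGE_BYTES = 4096                                # assumed page size

def prefetch_pages(reference_table, open_page):
    """reference_table: branch/call/data addresses collected by the compiler for one routine."""
    opened = set()
    for addr in reference_table:
        page = addr // PAGE_BYTES
        if page not in opened:
            open_page(page)                      # e.g. row activate / page open in the memory
            opened.add(page)

prefetch_pages([0x1000, 0x1FF0, 0x8000],
               open_page=lambda p: print("open page at", hex(p * PAGE_BYTES)))
```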
  • Publication number: 20100100670
    Abstract: Memory access requests are successively received in a memory request queue of a memory controller. Any conflicts or potential delays between temporally proximate requests that would occur if the memory access requests were to be executed in the received order are detected, and the received order of the memory access requests is rearranged to avoid or minimize the conflicts or delays and to optimize the flow of data to and from the memory data bus. The memory access requests are executed in the reordered sequence, while the originally received order of the requests is tracked. After execution, data read from the memory device by the execution of the read-type memory access requests are transferred to the respective requestors in the order in which the read requests were originally received.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 22, 2010
    Inventor: Joseph M. Jeddeloh
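20100100670 executes memory access requests in a rearranged order but returns read data in the originally received order. The sketch below shows the bookkeeping that makes this possible: an arrival queue plus a completion map that is drained only from the oldest outstanding read; the structure names are assumptions.

```python
from collections import deque

class OrderedReturnBuffer:
    def __init__(self):
        self.arrival = deque()       # read ids in originally received order
        self.done = {}               # read id -> data, completed possibly out of order

    def on_receive(self, read_id):
        self.arrival.append(read_id)

    def on_complete(self, read_id, data):
        self.done[read_id] = data

    def drain(self):
        """Release data only while the oldest outstanding read has completed."""
        while self.arrival and self.arrival[0] in self.done:
            yield self.done.pop(self.arrival.popleft())

buf = OrderedReturnBuffer()
for rid in (1, 2, 3):
    buf.on_receive(rid)
buf.on_complete(2, "B"); buf.on_complete(1, "A")
print(list(buf.drain()))   # ['A', 'B']  (read 3 is still outstanding)
```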
  • Patent number: 7702882
    Abstract: A lookup circuit for translating received addresses into destination addresses. The lookup circuit comprises M pipelined memory circuits for storing a trie table for translating a first received address into a first destination address. The M memory circuits are pipelined such that a first portion of the first received address accesses an address table in a first memory circuit. An output of the first memory circuit comprises a first address pointer that indexes a start of an address table in a second memory circuit. The first address pointer and a second portion of the first received address access a particular entry in the address table in the second memory circuit. An output of the second memory circuit comprises a second address pointer that indexes a start of an address table in the third memory circuit, and so forth.
    Type: Grant
    Filed: September 10, 2003
    Date of Patent: April 20, 2010
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jack C. Wybenga, Patricia K. Sturm
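The lookup circuit of 7702882 chains M memory stages: each stage is indexed by the pointer produced by the previous stage together with the next portion of the received address, and the last stage yields the destination. A software sketch of that walk follows, assuming a fixed number of address bits per stage and dictionary-backed stage tables.

```python
def pipelined_lookup(stages, address, bits_per_stage=8):
    """stages: one table per memory circuit; each keyed by (table base pointer, address slice)."""
    pointer = 0                                   # index of the root table in the first stage
    total_bits = bits_per_stage * len(stages)
    for level, table in enumerate(stages):
        shift = total_bits - bits_per_stage * (level + 1)
        addr_slice = (address >> shift) & ((1 << bits_per_stage) - 1)
        pointer = table[(pointer, addr_slice)]    # next-stage base pointer, or final result
    return pointer                                # destination address

stages = [{(0, 0x0A): 7}, {(7, 0x1B): 42}]
print(pipelined_lookup(stages, 0x0A1B, bits_per_stage=8))   # 42
```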
  • Patent number: 7702841
    Abstract: An ASIC includes a receiving unit, a transmission interface, a reception interface, a buffer, and a control unit. When the receiving unit receives a second write request while the transmission interface is in process of transmitting to a transmission line a first write request and write data, the control unit causes the receiving unit to store the second write request in the buffer. When the receiving unit receives a read request while the second write request is present in the buffer, the control unit causes the receiving unit to send the read request to the transmission interface prior to the second write request.
    Type: Grant
    Filed: March 5, 2008
    Date of Patent: April 20, 2010
    Assignee: Ricoh Company, Limited
    Inventor: Tomohiro Shima
  • Patent number: 7698525
    Abstract: A method and apparatus is disclosed for managing storage used by a processor when processing instructions, in which an estimate of register usage for program procedures or functions is generated and used to control the storage of the register bank in memory.
    Type: Grant
    Filed: September 1, 2005
    Date of Patent: April 13, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Muppirala Kishore Kumar
  • Patent number: 7694099
    Abstract: A memory controller with an interface for providing a connection to a plurality of memory devices, at least one of said plurality of memory devices supporting burst mode data transfers, comprises: data interface circuitry for connecting to a plurality of separate data buses for communicating data signals between said memory controller and a respective one of said memory devices, each of said data buses providing a dedicated data signal path to a different one of said memory devices; address interface circuitry for connecting to a common address bus for communicating address signals to each of said memory devices on a shared address signal path, address signals which are directed to different ones of said memory devices being time division multiplexed together on said common address bus; and device selecting circuitry for generating one or more device selecting signals synchronised with said time division multiplexing of said common address bus to select the memory device to which address signals currently asserted on said common address bus are directed.
    Type: Grant
    Filed: January 16, 2007
    Date of Patent: April 6, 2010
    Inventor: Daren Croxford
  • Patent number: 7681017
    Abstract: A pseudo pipeline including a plurality of pseudo pipeline stages and a control circuit. The control circuit may be configured to control the plurality of pseudo pipeline stages to provide pseudo pipelined operation.
    Type: Grant
    Filed: July 17, 2006
    Date of Patent: March 16, 2010
    Assignee: LSI Corporation
    Inventor: Frank Worrell
  • Patent number: 7681004
    Abstract: Memory modules address the growing gap between main memory performance and disk drive performance in computational apparatus such as personal computers. Memory modules disclosed herein fill the need for substantially higher storage capacity in end-user add-in memory modules. Such memory modules accelerate the availability of applications, and data for those applications. An exemplary application of such memory modules is as a high capacity consumer memory product that can be used in Hi-Definition video recorders. In various embodiments, memory modules include a volatile memory, a non-volatile memory, and a command interpreter that includes interfaces to the memories and to various busses. The first memory acts as an accelerating buffer for the second memory, and the second memory provides non-volatile backup for the first memory. In some embodiments data transfer from the first memory to the second memory may be interrupted to provide read access to the second memory.
    Type: Grant
    Filed: June 13, 2006
    Date of Patent: March 16, 2010
    Assignee: ADDMM, LLC
    Inventors: Randy M. Bonella, Chung W. Lam
  • Patent number: 7676603
    Abstract: Systems and methods of processing write transactions provide for combining write transactions on an input/output (I/O) hub according to a protocol between the I/O hub and a processor. Data associated with the write transactions can be flushed to an I/O device without the need for proprietary software and specialized registers within the I/O device.
    Type: Grant
    Filed: April 20, 2004
    Date of Patent: March 9, 2010
    Assignee: Intel Corporation
    Inventors: Kenneth C. Creta, Aaron T. Spink, Lance E. Hacking, Sridhar Muthrasanallur, Jasmin Ajanovic
  • Publication number: 20100049936
    Abstract: A memory access control apparatus includes a plurality of memory access request generating modules and an arbitrator. When one of the memory access request generating modules receives a second memory access event while a memory device is performing a first memory access operation according to a first memory access request in response to a first memory access event, the memory access request generating module outputs a second memory access request corresponding to the second memory access event to the memory device after a delay time. The arbitrator is implemented for arbitrating memory access requests respectively outputted from the memory access request generating modules.
    Type: Application
    Filed: April 23, 2009
    Publication date: February 25, 2010
    Inventor: Liang-Ta Lin
  • Patent number: 7665069
    Abstract: In general, methods and apparatus for implementing a Quality of Service (QoS) model are disclosed. A Quality of Service (QoS) contract with an initiating network device may be satisfied. A request may be received from the initiating network device in a first time less than or equal to an ordinal number times an arrival interval. The ordinal number signifies a position of the request among a group of requests. The request that has been serviced may be returned to the initiator in a second time less than or equal to a constant term plus the ordinal number times a service interval.
    Type: Grant
    Filed: October 31, 2003
    Date of Patent: February 16, 2010
    Assignee: Sonics, Inc.
    Inventor: Wolf-Dietrich Weber
  • Patent number: 7657723
    Abstract: A system and method are described for a memory management processor which, using a table of reference addresses embedded in the object code, can open the appropriate memory pages to expedite the retrieval of information from memory referenced by instructions in the execution pipeline. A suitable compiler parses the source code and collects references to branch addresses, calls to other routines, or data references, and creates reference tables listing the addresses for these references at the beginning of each routine. These tables are received by the memory management processor as the instructions of the routine are beginning to be loaded into the execution pipeline, so that the memory management processor can begin opening memory pages where the referenced information is stored. Opening the memory pages where the referenced information is located before the instructions reach the instruction processor helps lessen memory latency delays which can greatly impede processing performance.
    Type: Grant
    Filed: January 28, 2009
    Date of Patent: February 2, 2010
    Assignee: Micron Technology, Inc.
    Inventor: Dean A. Klein