Interleaved Patents (Class 711/127)
  • Patent number: 8239875
    Abstract: Systems and/or methods that facilitate transferring data between a processor component and memory components are presented. A transfer controller component facilitates controlling data transfers in part by receiving respective subsets of data from respective memory components and arranging the respective subsets of data based in part on a desired predefined data order. The processor component generates a transfer map that includes information to facilitate arranging data in a predefined order. The processor component generates respective subsets of commands that are provided to queue components in respective memory components to retrieve desired data from the respective memory components. Each memory component services the commands in its queue component in an independent and parallel manner, and transfers the data retrieved from memory to the transfer controller component, which can arrange the received data in a predefined order for transfer to the processor component.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: August 7, 2012
    Assignee: Spansion LLC
    Inventors: Walter Allen, Sunil Atri, Joseph Khatami
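    Example: the queue-and-reorder scheme in this abstract can be sketched in a few lines — each memory component drains its own command queue independently, and the transfer controller reassembles the returned subsets into the predefined order using a transfer map. This is a minimal illustrative sketch; the names (MemoryComponent, transfer) and the dictionary-based reassembly are assumptions, not details from the patent.
```python
from collections import deque

class MemoryComponent:
    """One memory component with its own independently serviced command queue."""
    def __init__(self, data):
        self.data = data          # address -> stored value
        self.queue = deque()      # per-component command queue

    def enqueue(self, address):
        self.queue.append(address)

    def service_all(self):
        # Drain the queue independently of the other components.
        results = []
        while self.queue:
            addr = self.queue.popleft()
            results.append((addr, self.data[addr]))
        return results

def transfer(components, requests):
    """requests: (component_id, address) pairs in the order the processor
    wants the data back; the transfer map remembers that order."""
    transfer_map = list(enumerate(requests))
    for _, (comp_id, addr) in transfer_map:
        components[comp_id].enqueue(addr)          # per-memory command subsets
    returned = {}
    for comp_id, comp in enumerate(components):    # queues serviced separately
        for addr, value in comp.service_all():
            returned[(comp_id, addr)] = value
    # Reassemble the subsets into the predefined order for the processor.
    return [returned[(comp_id, addr)] for _, (comp_id, addr) in transfer_map]

mems = [MemoryComponent({0: "a0", 1: "a1"}), MemoryComponent({0: "b0", 1: "b1"})]
print(transfer(mems, [(1, 0), (0, 1), (0, 0)]))    # ['b0', 'a1', 'a0']
```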
  • Patent number: 8219761
    Abstract: A device that includes multiple processors that are connected to multiple level-one cache units. The device also includes a multi-port high-level cache unit that includes a first modular interconnect, a second modular interconnect, and multiple high-level cache paths, wherein the multiple high-level cache paths comprise multiple concurrently accessible interleaved high-level cache units. Conveniently, the device also includes at least one non-cacheable path. A method for retrieving information from a cache includes concurrently receiving, by a first modular interconnect of a multi-port high-level cache unit, requests to retrieve information. The method is characterized by providing information from at least two paths out of the multiple high-level cache paths if at least two high-level cache hits occur, and providing information via a second modular interconnect if a high-level cache miss occurs.
    Type: Grant
    Filed: November 17, 2005
    Date of Patent: July 10, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ron Bercovich, Odi Dahan, Norman Goldstein, Yehuda Nowogrodski
  • Patent number: 8214597
    Abstract: An apparatus having a cache and a circuit. The cache may store old lines having old instructions. The circuit may (i) receive a first read command, (ii) fetch-ahead a new line having new instructions into a buffer sized to hold a single line, (iii) receive a second read command, (iv) present through a port a particular new instruction in response to both (a) a cache miss of the second read command and (b) a buffer hit of the second read command and (v) overwrite a particular old line with the new line in response to both (a) the cache miss of the second read command and (b) the buffer hit of the second read command such that (1) the new line resides in all of the cache, the buffer and the memory and (2) the particular old line resides only in the memory.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: July 3, 2012
    Assignee: LSI Corporation
    Inventors: Alex Shinkar, Nahum N. Vishne
  • Patent number: 8195880
    Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory.
    Type: Grant
    Filed: April 15, 2009
    Date of Patent: June 5, 2012
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hugh Shen, William John Starke
  • Patent number: 8166228
    Abstract: A non-volatile memory system and a method for reading data therefrom are provided. The data comprises a first sub-data and a second sub-data. The non-volatile memory system comprises a first storage unit and a second storage unit, adapted for storing the two sub-data respectively. The first storage unit reads a first command from the controller, and stores the first sub-data temporarily as the first temporary sub-data according to the first command. The second storage unit reads a second command from the controller, and stores the second sub-data temporarily as the second temporary sub-data according to the second command. The first temporary sub-data is read from the first storage unit. Then, the first storage unit reads a third command from the controller. The second temporary sub-data is also read from the second storage unit while reading the third command. The time for reading data from the non-volatile memory system is reduced.
    Type: Grant
    Filed: May 14, 2008
    Date of Patent: April 24, 2012
    Assignee: SkyMedi Corporation
    Inventors: Chuang Cheng, Satashi Sugawa, Chih-Wei Tsai, Wen-Lin Chang, Fu-Ja Shone
  • Patent number: 8156262
    Abstract: One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: April 10, 2012
    Assignee: Round Rock Research, LLC
    Inventor: Christopher S. Johnson
  • Publication number: 20120079177
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Application
    Filed: December 5, 2011
    Publication date: March 29, 2012
    Applicant: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
  • Patent number: 8140765
    Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides a single dispatch point into the data flow to the dual cache banks of the L2 cache memory.
    Type: Grant
    Filed: April 15, 2009
    Date of Patent: March 20, 2012
    Assignee: International Business Machines Corporation
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hugh Shen, William John Starke
  • Patent number: 8127069
    Abstract: Disclosed is a memory device including self-ID information. The memory device has a storage unit for storing information related to the memory device, such as a manufacturing factory, a manufacturing date, a wafer number, coordinates on a wafer and the like. Each bank of the memory device stores self-ID information related to the memory device and outputs the self-ID information out of a chip when an address is applied thereto during a test mode.
    Type: Grant
    Filed: August 27, 2007
    Date of Patent: February 28, 2012
    Assignee: Hynix Semiconductor Inc.
    Inventor: Yong Bok An
  • Patent number: 8108625
    Abstract: Concurrent threads in a multithreaded processor share access to a memory, with any location in the shared memory being accessible by any thread. In one embodiment, the shared memory has multiple independently-addressable memory banks, and one location per bank can be accessed in parallel. Parallel processing engines executing the threads generate a group of parallel memory access requests. Address conflict logic determines whether the requests can be satisfied in parallel (e.g., based on bank access constraints) and serializes the requests to the extent needed to avoid conflicts. In some embodiments, data read from one address in the shared memory can be broadcast to multiple processing engines.
    Type: Grant
    Filed: October 30, 2006
    Date of Patent: January 31, 2012
    Assignee: NVIDIA Corporation
    Inventors: Brett W. Coon, Ming Y. Siu, Weizhong Xu, Stuart F. Oberman, John R. Nickolls, Peter C. Mills
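    Example: a sketch of the address-conflict check described in this abstract — a group of parallel requests is split into as few cycles as needed so that each cycle touches at most one distinct location per bank, with identical addresses broadcast to all requesting engines in the same cycle. The bank count and the address-to-bank mapping below are assumptions for illustration, not NVIDIA's implementation.
```python
# Serialize a group of parallel shared-memory requests only as much as bank
# conflicts require; identical addresses are broadcast in the same cycle.
def schedule(addresses, num_banks=16):
    pending = list(enumerate(addresses))      # (engine_id, address)
    cycles = []
    while pending:
        taken = {}                            # bank -> address chosen this cycle
        this_cycle, deferred = [], []
        for engine, addr in pending:
            bank = addr % num_banks           # assumed bank mapping
            if bank not in taken:
                taken[bank] = addr
                this_cycle.append((engine, addr))
            elif taken[bank] == addr:
                # Same location: the read is broadcast to both engines.
                this_cycle.append((engine, addr))
            else:
                deferred.append((engine, addr))   # bank conflict: serialize
        cycles.append(this_cycle)
        pending = deferred
    return cycles

# Engines 0 and 2 conflict on bank 1, so two cycles are needed;
# engines 1 and 3 share one address and are served together by broadcast.
print(schedule([1, 2, 17, 2], num_banks=16))
```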
  • Patent number: 8095735
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Grant
    Filed: August 5, 2008
    Date of Patent: January 10, 2012
    Assignee: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
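    Example: one simple way to interleave memory so that both access styles in this abstract spread across modules is to interleave at sub-cache-block (8-byte) granularity, so a 64-byte cache-block access decomposes into one 8-byte access on each module. The parameters and the mapping below are assumptions for illustration, not necessarily the patented scheme.
```python
WORD = 8            # sub-cache-block access size in bytes (assumption)
BLOCK = 64          # cache-block access size in bytes (assumption)
MODULES = 8         # number of interleaved memory modules (assumption)

def word_location(addr):
    # Map an 8-byte word address to (module, offset within module).
    word = addr // WORD
    return word % MODULES, word // MODULES

def block_locations(addr):
    # A cache-block access decomposes into one word access per module.
    base = addr - addr % BLOCK
    return [word_location(base + i * WORD) for i in range(BLOCK // WORD)]

print(word_location(0x48))      # a single 8-byte access touches one module
print(block_locations(0x40))    # a 64-byte access touches all 8 modules once
```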
  • Publication number: 20110320725
    Abstract: A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
  • Publication number: 20110320697
    Abstract: Various embodiments of the present invention manage access to a cache memory. In one or more embodiments, a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The target is determined to be available. The request is granted in response to determining that the target is available. A first interleave availability table associated with a first busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request. A second interleave availability table associated with a second busy time associated with the cache memory is updated based on the operation associated with the request in response to granting the request.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill
  • Patent number: 8072463
    Abstract: A graphics system utilizes virtual memory pages and has a partitioned graphics memory that includes memory elements. The system supports having a non-power-of-two number of active memory elements. Additionally, a partition swizzling operation is used to adjust the partition numbers associated with individual units of virtual memory allocation on particular virtual memory pages to achieve a selected partition interleaving pattern.
    Type: Grant
    Filed: October 4, 2006
    Date of Patent: December 6, 2011
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, John S. Montrym
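    Example: a sketch of the general idea of partition swizzling with a non-power-of-two partition count — each unit of virtual-memory allocation on a page receives a partition number, and a per-page swizzle varies the interleaving pattern from page to page. The rotation-based swizzle below is an assumed stand-in for illustration, not the patented function.
```python
def partition_for(page_number, unit_index, num_partitions=3):
    # Per-page swizzle: rotate the partition assignment by an amount derived
    # from the page number so the interleave pattern differs across pages.
    swizzle = page_number % num_partitions
    return (unit_index + swizzle) % num_partitions

# With 3 partitions (a non-power-of-two count), each page still spreads its
# allocation units evenly, but the pattern shifts from one page to the next.
for page in range(3):
    print(page, [partition_for(page, u) for u in range(8)])
```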
  • Patent number: 8074010
    Abstract: An intelligent memory bank for use with interleaved memories storing plural vectors comprises setup apparatus (96) that receives an initial address (B+C+V+NMSK) and spacing data (D) for each vector. Addressing logic (90) associates a memory cell select (C) with each initial and subsequent address of each of the plurality of vectors. Cell select apparatus (98) accesses a memory cell (in 92) using a memory cell select (C) associated with a respective one of the initial and successive addresses of each vector.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: December 6, 2011
    Assignee: Efficient Memory Technology
    Inventor: Maurice L. Hutson
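    Example: a sketch of associating a memory-cell (bank) select with the initial and each successive address of a strided vector. The bank count and the modulo mapping are assumptions for illustration only.
```python
NUM_BANKS = 8       # number of interleaved memory cells/banks (assumption)

def bank_selects(base, stride, length):
    # Element k of the vector lives at base + k*stride; the low-order part of
    # that address picks the bank select, the rest the address within the bank.
    return [((base + k * stride) % NUM_BANKS, (base + k * stride) // NUM_BANKS)
            for k in range(length)]

# With a stride co-prime to the bank count, successive elements land in
# different banks and can be accessed in parallel.
print(bank_selects(base=0, stride=3, length=8))
```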
  • Patent number: 8050355
    Abstract: A transmitter using pseudo-orthogonal code includes a serial-to-parallel converter for converting serial transmission data into 9-bit parallel data, and a pseudo-orthogonal code memory for receiving the parallel data from the serial-to-parallel converter and outputting 16-bit pseudo-orthogonal code by using the received data as addresses. The pseudo-orthogonal code memory has the relationship of the input address and output code, as expressed in the following equation: c(i) = 0.5 × ((−1)^(b2⊕(i1·b1)⊕(i0·b0)) + (−1)^(b5⊕i2⊕(i1·b4)⊕(i0·b3)) + (−1)^(b8⊕i3⊕(i1·b7)⊕(i0·b6)) + (−1)^((b2⊕b5⊕b8)⊕i3⊕i2⊕(i1·(b1⊕b4⊕b7))⊕(i0·(b0⊕b3⊕b6)))), where c(i) is a pseudo-orthogonal code value, i is each bit of the pseudo-orthogonal code, 0 ≤ i ≤ 15, and b0–b8 are the transmission data bit stream input to the memory as addresses. Accordingly, the transmission efficiency of the transmitter/receiver using orthogonal code can be remarkably improved.
    Type: Grant
    Filed: June 11, 2008
    Date of Patent: November 1, 2011
    Assignee: Korea Electronics Technology Institute
    Inventors: Jin Woong Cho, Yong Seong Kim, Do Hun Kim, Sun Hee Kim, Dae Ki Hong
  • Patent number: 8019913
    Abstract: One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device.
    Type: Grant
    Filed: July 15, 2009
    Date of Patent: September 13, 2011
    Assignee: Round Rock Research, LLC
    Inventor: Christopher S. Johnson
  • Patent number: 8015358
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: September 6, 2011
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 7996646
    Abstract: In one embodiment, an apparatus comprises a queue comprising a plurality of entries and a control unit coupled to the queue. The control unit is configured to allocate a first queue entry to a store memory operation, and is configured to write a first even offset, a first even mask, a first odd offset, and a first odd mask corresponding to the store memory operation to the first entry. A group of contiguous memory locations is logically divided into alternately-addressed even and odd byte ranges. A given store memory operation writes at most one even byte range and one adjacent odd byte range. The first even offset identifies a first even byte range that is potentially written by the store memory operation, and the first odd offset identifies a first odd byte range that is potentially written by the store memory operation.
    Type: Grant
    Filed: March 10, 2010
    Date of Patent: August 9, 2011
    Assignee: Apple Inc.
    Inventors: Tse-yu Yeh, Daniel C. Murray, Po-Yung Chang, Anup S. Mehta
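    Example: a sketch of the even/odd bookkeeping described in this abstract — a store that straddles an even/odd byte-range boundary records an (offset, byte-mask) pair for each half. The 8-byte range size and the mask encoding are assumptions for illustration, not the layout claimed in the patent.
```python
RANGE = 8   # bytes per even/odd byte range (assumption)

def even_odd_fields(store_addr, store_size):
    # Memory is divided into alternating even and odd byte ranges; a store
    # touches at most one even range and one adjacent odd range, so one
    # (offset, mask) pair per parity is enough to describe it.
    fields = {"even": [0, 0], "odd": [0, 0]}     # [offset, byte mask]
    for byte in range(store_addr, store_addr + store_size):
        rng = byte // RANGE
        parity = "even" if rng % 2 == 0 else "odd"
        fields[parity][0] = rng // 2                 # which even/odd range
        fields[parity][1] |= 1 << (byte % RANGE)     # which bytes within it
    return fields

# A 4-byte store at address 14 writes the top of odd range 0 and the bottom
# of even range 1 (a mask of 0 means that parity is not written at all).
print(even_odd_fields(14, 4))    # {'even': [1, 3], 'odd': [0, 192]}
```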
  • Patent number: 7984207
    Abstract: One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device.
    Type: Grant
    Filed: August 19, 2009
    Date of Patent: July 19, 2011
    Assignee: Round Rock Research, LLC
    Inventor: Christopher S. Johnson
  • Patent number: 7975093
    Abstract: A cache memory system and method for supporting multiple simultaneous store operations using a plurality of tag memories is provided. The cache data system further provides a plurality of simultaneous cache store functions along with a single cache load function that is simultaneous with the store functions. Embodiments create a cache memory wherein the cache write buffer does not operate as a bottleneck for data store operations into a cache memory system or device.
    Type: Grant
    Filed: October 18, 2006
    Date of Patent: July 5, 2011
    Assignee: NXP B.V.
    Inventors: Jan-Willem Van De Waerdt, Carlos Basto
  • Patent number: 7884831
    Abstract: Circuits, methods, and apparatus that provide texture caches and related circuits that store and retrieve texels in a fast and efficient manner. One such texture circuit provides an increased number of bilerps for each pixel in a group of pixels, particularly when trilinear or aniso filtering is needed. For trilinear filtering, texels in a first and second level of detail are retrieved for a number of pixels during a clock cycle. When aniso filtering is performed, multiple bilerps can be retrieved for each of a number of pixels during one clock cycle.
    Type: Grant
    Filed: January 19, 2010
    Date of Patent: February 8, 2011
    Assignee: NVIDIA Corporation
    Inventors: Alexander L. Minkin, Joel J. McCormack, Paul S. Heckbert, Michael J. M. Toksvig, Luke Y. Chang, Karim Abdalla, Bo Hong, John W. Berendsen, Walter Donovan, Emmett M. Kilgariff
  • Patent number: 7877566
    Abstract: A read command protocol and a method of accessing a nonvolatile memory device having an internal cache memory are disclosed. The memory device is configured to accept a first and a second read command, outputting the first requested data while simultaneously reading the second requested data. In addition, the memory device may be configured to send or receive a confirmation indicator.
    Type: Grant
    Filed: January 25, 2005
    Date of Patent: January 25, 2011
    Assignee: Atmel Corporation
    Inventor: Vijaya P. Adusumilli
  • Patent number: 7793048
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: September 7, 2010
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 7783834
    Abstract: A cache memory logically associates a cache line with at least two cache sectors of a cache array wherein different sectors have different output latencies and, for a load hit, selectively enables the cache sectors based on their latency to output the cache line over successive clock cycles. Larger wires having a higher transmission speed are preferably used to output the cache line corresponding to the requested memory block. In the illustrative embodiment the cache is arranged with rows and columns of the cache sectors, and a given cache line is spread across sectors in different columns, with at least one portion of the given cache line being located in a first column having a first latency, and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
    Type: Grant
    Filed: November 29, 2007
    Date of Patent: August 24, 2010
    Assignee: International Business Machines Corporation
    Inventors: Leo James Clark, Guy Lynn Guthrie, Kirk Samuel Livingston, William John Starke
  • Patent number: 7779292
    Abstract: A cache coherent data processing system includes a plurality of processing units each having at least an associated cache, a system memory, and a memory controller that is coupled to and controls access to the system memory. The system memory includes a plurality of storage locations for storing a memory block of data, where each of the plurality of storage locations is sized to store a sub-block of data. The system memory further includes metadata storage for storing metadata, such as a domain indicator, describing the memory block. In response to a failure of a storage location for a particular sub-block among the plurality of sub-blocks, the memory controller overwrites at least a portion of the metadata in the metadata storage with the particular sub-block of data.
    Type: Grant
    Filed: August 10, 2007
    Date of Patent: August 17, 2010
    Assignee: International Business Machines Corporation
    Inventors: James Stephen Fields, Jr., Sanjeev Ghai, Warren Edward Maule, Jeffrey Adam Stuecheli
  • Publication number: 20100191893
    Abstract: A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer.
    Type: Application
    Filed: April 14, 2010
    Publication date: July 29, 2010
    Applicant: Infineon Technologies AG
    Inventor: Klaus Oberlaender
  • Patent number: 7721066
    Abstract: In one embodiment, an apparatus comprises a queue comprising a plurality of entries and a control unit coupled to the queue. The control unit is configured to allocate a first queue entry to a store memory operation, and is configured to write a first even offset, a first even mask, a first odd offset, and a first odd mask corresponding to the store memory operation to the first entry. A group of contiguous memory locations is logically divided into alternately-addressed even and odd byte ranges. A given store memory operation writes at most one even byte range and one adjacent odd byte range. The first even offset identifies a first even byte range that is potentially written by the store memory operation, and the first odd offset identifies a first odd byte range that is potentially written by the store memory operation.
    Type: Grant
    Filed: June 5, 2007
    Date of Patent: May 18, 2010
    Assignee: Apple Inc.
    Inventors: Tse-yu Yeh, Daniel C. Murray, Po-Yung Chang, Anup S. Mehta
  • Patent number: 7698506
    Abstract: A technique for partially offloading, from a main cache in a storage server, the storage of cache tags for data blocks in a victim cache of the storage server, is described. The technique includes storing, in the main cache, a first subset of the cache tag information for each of the data blocks, and storing, in a victim cache of the storage server, a second subset of the cache tag information for each of the data blocks. This technique avoids the need to store the second subset of the cache tag information in the main cache.
    Type: Grant
    Filed: April 26, 2007
    Date of Patent: April 13, 2010
    Assignee: Network Appliance, Inc.
    Inventors: Robert L. Fair, William P. McGovern, Thomas C. Holland, Jason Sylvain
  • Publication number: 20100088460
    Abstract: Memory requests for information from a processor are received in an interface device, and the interface device is coupled to a stack including two or more memory devices. The interface device is operated to select a memory device from a number of memory devices including the stack, and to retrieve some or all of the information from the selected memory device for the processor. Additional apparatus, systems and methods are disclosed.
    Type: Application
    Filed: October 7, 2008
    Publication date: April 8, 2010
    Inventor: Joe M. Jeddeloh
  • Publication number: 20100037024
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Application
    Filed: August 5, 2008
    Publication date: February 11, 2010
    Applicant: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
  • Patent number: 7650474
    Abstract: Method and system for dividing a data segment of unknown length into first and second halves, for example, for interleaving the first and second halves. Units of the data segment are written into first and second register files. With respect to the first register file, responsive to determining that the last unit of the data segment has been written into the first register file, units of the data segment in the first register file that are not units of the first half of the data segment are removed, wherein the first register file stores the first half of the data segment.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: January 19, 2010
    Assignee: LSI Corporation
    Inventors: Jackson Lloyd Ellis, Ori Ron Liav
  • Patent number: 7649538
    Abstract: Circuits, methods, and apparatus that provide texture caches and related circuits that store and retrieve texels in a fast and efficient manner. One such texture circuit provides an increased number of bilerps for each pixel in a group of pixels, particularly when trilinear or aniso filtering is needed. For trilinear filtering, texels in a first and second level of detail are retrieved for a number of pixels during a clock cycle. When aniso filtering is performed, multiple bilerps can be retrieved for each of a number of pixels during one clock cycle.
    Type: Grant
    Filed: November 3, 2006
    Date of Patent: January 19, 2010
    Assignee: NVIDIA Corporation
    Inventors: Alexander L. Minkin, Joel J. McCormack, Paul S. Heckbert, Michael J. M. Toksvig, Luke Y. Chang, Karim Abdalla, Bo Hong, John W. Berendsen, Walter Donovan, Emmett M. Kilgariff
  • Patent number: 7606994
    Abstract: In one embodiment, a cache memory system includes a cache memory coupled to a cache controller. The cache memory controller may receive an address and generate an index value corresponding to the address for accessing a particular entry within the cache memory. More particularly, the cache controller may generate the index value by performing a hash function on a first portion of the address such as an address tag, and combining a result of the hash function with a second portion of the address such as an index, for example.
    Type: Grant
    Filed: November 10, 2004
    Date of Patent: October 20, 2009
    Assignee: Sun Microsystems, Inc.
    Inventor: Robert E. Cypher
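    Example: a sketch of generating a cache index by hashing the tag portion of the address and combining the result with the plain index bits. The XOR fold, the XOR combine, and the bit widths below are assumptions for illustration; the abstract only requires some hash of one address portion combined with another.
```python
INDEX_BITS = 10      # 1024 sets (assumption)
OFFSET_BITS = 6      # 64-byte lines (assumption)

def cache_index(addr):
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    # Fold the tag down to INDEX_BITS bits with XOR as a simple hash,
    # then combine the hash with the ordinary index bits.
    h = 0
    while tag:
        h ^= tag & ((1 << INDEX_BITS) - 1)
        tag >>= INDEX_BITS
    return index ^ h

# Addresses that would collide on plain index bits can land in different sets.
print(hex(cache_index(0x12345678)))
```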
  • Patent number: 7603493
    Abstract: One or more external control pins and/or addressing pins on a memory device are used to set one or both of a burst length and burst type of the memory device.
    Type: Grant
    Filed: December 8, 2005
    Date of Patent: October 13, 2009
    Assignee: Micron Technology, Inc.
    Inventor: Christopher S. Johnson
  • Patent number: 7600163
    Abstract: An apparatus for receiving and storing an incoming sequence and for forwarding the bytes of the incoming sequence as an outgoing sequence in a different byte order includes a cache memory and a main memory for storing bytes of the incoming sequence until they can be forwarded as bytes of the outgoing sequence. A control circuit selectively burst mode writes sequences of incoming bytes that need be stored for a relatively long time to blocks of sequential addresses of the main memory, writes individual bytes of the incoming sequence that need be stored for a relatively short time to selected addresses of the cache memory, and reads bytes out of the cache memory and the main memory when needed to form the outgoing sequence.
    Type: Grant
    Filed: September 23, 2003
    Date of Patent: October 6, 2009
    Assignee: Realtek Semiconductor Corp.
    Inventors: Bin Wan, Chia-Liang Lin
  • Publication number: 20090172286
    Abstract: A method and system for balancing host write operations and cache flushing is disclosed. The method may include steps of determining an available capacity in a cache storage portion of a self-caching storage device, determining a ratio of cache flushing steps to host write commands if the available capacity is below a desired threshold and interleaving cache flushing steps with host write commands to achieve the ratio. The cache flushing steps may be executed by maintaining a storage device busy status after executing a host write command and utilizing this additional time to copy a portion of the data from the cache storage into the main storage. The system may include a cache storage, a main storage and a controller configured to determine and execute a ratio of cache flushing steps to host write commands by executing cache flushing steps while maintaining a busy status after a host write command.
    Type: Application
    Filed: December 31, 2007
    Publication date: July 2, 2009
    Inventors: Menahem Lasser, Itshak Afriat, Opher Lieber, Mark Shlick
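    Example: a sketch of the balancing policy described in this abstract — when the free capacity of the cache storage falls below a threshold, a ratio of cache-flushing steps to host write commands is chosen, and that many flush steps are interleaved after each write while the device holds its busy status. The capacity numbers and the ratio formula are assumptions for illustration.
```python
class SelfCachingDevice:
    def __init__(self, cache_capacity, threshold):
        self.cache = []                      # blocks waiting to be flushed
        self.main = []                       # main storage
        self.capacity = cache_capacity
        self.threshold = threshold

    def _flush_ratio(self):
        free = self.capacity - len(self.cache)
        if free >= self.threshold:
            return 0                         # enough room: no forced flushing
        # Fewer free blocks -> more flush steps per host write (assumed formula).
        return min(4, 1 + (self.threshold - free))

    def host_write(self, block):
        self.cache.append(block)             # service the host command first
        # Then, while still reporting busy, copy some cached data to main storage.
        for _ in range(self._flush_ratio()):
            if self.cache:
                self.main.append(self.cache.pop(0))

dev = SelfCachingDevice(cache_capacity=8, threshold=4)
for i in range(10):
    dev.host_write(i)
print(len(dev.cache), "blocks still cached,", len(dev.main), "flushed to main")
```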
  • Patent number: 7552292
    Abstract: A method is disclosed for utilizing at least one bit within the logical address code of a memory unit formed by Dynamic Random Access Memory (DRAM) to be the control code for interleaving the memory space to different memory ranks. First, the distributive rule of the data is defined. Next, the data is distributed to the memory ranks that the data belongs to according to the rule. Then, the data is physically accessed in one of the memory ranks.
    Type: Grant
    Filed: March 17, 2005
    Date of Patent: June 23, 2009
    Assignee: Via Technologies, Inc.
    Inventors: Bo-Wei Hsieh, Ming-Shi Liou
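    Example: a sketch of using one bit of the logical address as the control code that selects the memory rank, so consecutive address regions alternate between the two ranks. Which bit is used (bit 12 below, i.e. 4 KB interleave granularity) is an assumption for illustration.
```python
INTERLEAVE_BIT = 12          # alternate ranks every 4 KB (assumption)

def rank_and_offset(logical_addr):
    # The chosen address bit is the control code selecting the rank.
    rank = (logical_addr >> INTERLEAVE_BIT) & 1
    # Remove that bit to form the physical address within the selected rank.
    low = logical_addr & ((1 << INTERLEAVE_BIT) - 1)
    high = logical_addr >> (INTERLEAVE_BIT + 1)
    return rank, (high << INTERLEAVE_BIT) | low

for a in (0x0FFF, 0x1000, 0x2000):
    rank, off = rank_and_offset(a)
    print(hex(a), "-> rank", rank, "offset", hex(off))
```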
  • Patent number: 7549022
    Abstract: Avoiding cache-line sharing in virtual machines can be implemented in a system running a host and multiple guest operating systems. The host facilitates hardware access by a guest operating system and oversees memory access by the guest. Because cache lines are associated with memory pages that are spaced at regular intervals, the host can direct guest memory access to only select memory pages, and thereby restrict guest cache use to one or more cache lines. Other guests can be restricted to different cache lines by directing memory access to a separate set of memory pages.
    Type: Grant
    Filed: July 21, 2006
    Date of Patent: June 16, 2009
    Assignee: Microsoft Corporation
    Inventor: Brandon S. Baker
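    Example: a sketch of the page-selection idea behind this abstract — because a physical page maps to a fixed group of cache sets, the host can hand each guest only pages of one "color" and thereby confine each guest to cache lines that other guests never use. The cache geometry below is an assumption for illustration.
```python
PAGE_SIZE = 4096
LINE_SIZE = 64
NUM_SETS = 4096                              # cache geometry (assumption)
SETS_PER_PAGE = PAGE_SIZE // LINE_SIZE
NUM_COLORS = NUM_SETS // SETS_PER_PAGE       # pages per cache "stripe"

def page_color(phys_page_number):
    # Pages with the same color map onto the same group of cache sets.
    return phys_page_number % NUM_COLORS

def pages_for_guest(guest_color, all_pages):
    # The host only maps pages of the guest's color into its address space,
    # restricting that guest's cache use to one slice of cache lines.
    return [p for p in all_pages if page_color(p) == guest_color]

pages = range(256)
print(len(pages_for_guest(0, pages)), "of", len(pages), "pages usable by guest 0")
```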
  • Patent number: 7546417
    Abstract: A method of accessing data from a cache is disclosed. Tag bits of data among sets and ways of cache lines are divided into common subtags and remaining subtags. Similarly, an access address tag is divided into an address common subtag and address remaining tag. When the index of an access address selects a set, a match comparison of the address common subtag and the selected set common subtag is performed. Also, the address remaining tag and selected set remaining subtags are compared for matching before the selected set and associated data is supplied to the requester.
    Type: Grant
    Filed: July 15, 2008
    Date of Patent: June 9, 2009
    Assignee: International Business Machines Corporation
    Inventors: Ramakrishnan Rajamony, William Evan Speight, Lixin Zhang
  • Patent number: 7493457
    Abstract: To store N bits of M≥2 logical pages, the bits are interleaved and the interleaved bits are programmed to ⌈N/M⌉ memory cells, M bits per cell. Preferably, the interleaving puts the same number of bits from each logical page into each bit-page of the ⌈N/M⌉ cells. When the bits are read from the cells, the bits are de-interleaved. The interleaving may be deterministic or random, and may be effected by software or by dedicated hardware.
    Type: Grant
    Filed: March 14, 2005
    Date of Patent: February 17, 2009
    Assignee: SanDisk IL Ltd.
    Inventor: Mark Murin
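    Example: a sketch of one deterministic interleave consistent with the description above — the bits of M logical pages are rotated across the bit-pages of ⌈N/M⌉ cells so each bit-page receives the same number of bits from every logical page, and reading reverses the rotation. The specific rotation is an assumption, just one of the many interleaves the abstract allows.
```python
def interleave(pages):
    """pages: M equal-length bit lists. Returns ceil(N/M) cells of M bits each."""
    m, n_per = len(pages), len(pages[0])
    cells = []
    for j in range(n_per):
        # Cell j, bit-page b takes the j-th bit of logical page (b + j) % m,
        # so every bit-page gets an equal share of every logical page.
        cells.append([pages[(b + j) % m][j] for b in range(m)])
    return cells

def deinterleave(cells, m):
    n_per = len(cells)
    pages = [[0] * n_per for _ in range(m)]
    for j, cell in enumerate(cells):
        for b, bit in enumerate(cell):
            pages[(b + j) % m][j] = bit
    return pages

pages = [[1, 0, 1, 1], [0, 0, 1, 0]]        # M = 2 logical pages, N = 8 bits
cells = interleave(pages)                   # 4 cells, 2 bits per cell
assert deinterleave(cells, 2) == pages      # reading reverses the interleave
print(cells)
```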
  • Patent number: 7475210
    Abstract: An address processing section allocates addresses of desired data in a main memory, input from a control block, to any of three hit determination sections based on the type of the data. If the hit determination sections determine that the data stored in the allocated addresses does not exist in the corresponding cache memories, request issuing sections issue transfer requests for the data from the main memory to the cache memories, to a request arbitration section. The request arbitration section transmits the transfer requests to the main memory with priority given to data of greater sizes to transfer. The main memory transfers data to the cache memories in accordance with the transfer requests. A data synchronization section reads a plurality of read units of data from a plurality of cache memories, and generates a data stream for output by a stream sending section.
    Type: Grant
    Filed: December 19, 2005
    Date of Patent: January 6, 2009
    Assignee: Sony Computer Entertainment Inc.
    Inventor: Hideshi Yamada
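    Example: a sketch of the arbitration step described in this abstract — pending transfer requests are forwarded to main memory with priority given to the request with more data to transfer. The request record layout and the heap-based ordering are assumptions for illustration.
```python
import heapq

class RequestArbiter:
    def __init__(self):
        self._heap = []                       # max-heap keyed on transfer size
        self._seq = 0                         # tie-breaker keeps FIFO order

    def issue(self, cache_id, address, size):
        # Request-issuing sections push their transfer requests here.
        heapq.heappush(self._heap, (-size, self._seq, cache_id, address))
        self._seq += 1

    def next_request(self):
        # Forward the pending request with the largest data size first.
        if not self._heap:
            return None
        _, _, cache_id, address = heapq.heappop(self._heap)
        return cache_id, address

arb = RequestArbiter()
arb.issue(cache_id=0, address=0x100, size=64)
arb.issue(cache_id=1, address=0x800, size=512)   # larger transfer goes first
arb.issue(cache_id=2, address=0x300, size=64)
print([arb.next_request() for _ in range(3)])
```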
  • Patent number: 7475192
    Abstract: An N-set associative cache organization is disclosed. The cache organization comprises a plurality of SRAMs, wherein the data is arranged within the SRAMs such that a first 1/N of a plurality of cache lines is within a first portion of the plurality of SRAMs and a last 1/N of the plurality of cache lines is within a last portion of the plurality of SRAMs. By using this method for organizing the caches, power can be reduced. Given an N-way set associative cache, this method provides up to a 1/N power reduction in the data portions of the SRAMs.
    Type: Grant
    Filed: July 12, 2005
    Date of Patent: January 6, 2009
    Assignee: International Business Machines Corporation
    Inventors: Anthony Correale, Jr., Robert Lewis Goldiez
  • Patent number: 7469318
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: December 23, 2008
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 7467323
    Abstract: A cache coherent data processing system includes a plurality of processing units each having at least an associated cache, a system memory, and a memory controller that is coupled to and controls access to the system memory. The system memory includes a plurality of storage locations for storing a memory block of data, where each of the plurality of storage locations is sized to store a sub-block of data. The system memory further includes metadata storage for storing metadata, such as a domain indicator, describing the memory block. In response to a failure of a storage location for a particular sub-block among the plurality of sub-blocks, the memory controller overwrites at least a portion of the metadata in the metadata storage with the particular sub-block of data.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: December 16, 2008
    Assignee: International Business Machines Corporation
    Inventors: James Stephen Fields, Jr., Sanjeev Ghai, Warren Edward Maule, Jeffrey Adam Stuecheli
  • Publication number: 20080301370
    Abstract: A memory module includes a module circuit board, an amplifier circuit disposed on the module circuit board for amplifying an input signal, and a memory component to store a data item, wherein the memory component is disposed on the module circuit board. The amplifier circuit includes an input to receive a data signal and an output to provide an amplified data signal. The memory component comprises an input to receive the amplified data signal, wherein the data item is stored in the memory component in dependence on a level of the received amplified data signal.
    Type: Application
    Filed: June 4, 2007
    Publication date: December 4, 2008
    Applicant: Qimonda AG
    Inventors: Srdjan Djordjevic, Hermann Ruckerbauer, Maurizio Skerlj, Christian Mueller
  • Patent number: 7451248
    Abstract: A method and apparatus for invalidating cache lines during direct memory access (DMA) write operations are disclosed. Initially, a multi-cache line DMA request is issued by a peripheral device. The multi-cache line DMA request is snooped by a cache memory. A determination is then made as to whether or not the cache memory includes a copy of data stored in the system memory locations to which the multi-cache line DMA request is directed. In response to a determination that the cache memory includes a copy of data stored in the system memory locations to which the multi-cache line DMA request is directed, multiple cache lines within the cache memory are consecutively invalidated.
    Type: Grant
    Filed: February 9, 2005
    Date of Patent: November 11, 2008
    Assignee: International Business Machines Corporation
    Inventors: George W. Daly, Jr., James S. Fields, Jr.
  • Patent number: 7433429
    Abstract: In one embodiment, interleaved signals in a receiver are accessed by memory pointers and delivered to data stream locations without the need to transfer data to an intermediate physical buffer.
    Type: Grant
    Filed: July 19, 2002
    Date of Patent: October 7, 2008
    Assignee: Intel Corporation
    Inventor: Amit Dagan
  • Patent number: 7386691
    Abstract: An electronic device may include a source memory device partitioned into N elementary source memories for storing a sequence of input data sets, and a processor clocked by a clock signal and having N outputs for producing, per cycle of the clock signal, N output data sets respectively associated with the N input data sets stored in the N elementary source memories at respective source addresses. The electronic device may also include N single port target memories, N interleaving tables including, for each relative source address, the number of a target memory and the respective target address thereof, and N cells connected in a ring structure. Further, each cell may also be connected between an output of the processor, an interleaving table, and a target memory.
    Type: Grant
    Filed: April 13, 2005
    Date of Patent: June 10, 2008
    Assignees: STMicroelectronics N.V., STMicroelectronics SA
    Inventors: Friedbert Berens, Michael J. Thul, Franck Gilbert, Norbert Wehn
  • Publication number: 20080120466
    Abstract: A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer.
    Type: Application
    Filed: November 20, 2006
    Publication date: May 22, 2008
    Inventor: Klaus Oberlaender