Partitioned Cache, E.g., Separate Instruction And Operand Caches, Etc. (EPO) Patents (Class 711/E12.046)
  • Patent number: 11275589
    Abstract: A processor interacts with a memory set including a cache memory, a first memory storing at least a first piece of information in a first information group, and a second memory storing at least a second piece of information in a second information group. In response to a first cache miss and following a first request from the processor for the first piece of information, the first piece of information obtained from the first memory is supplied to the processor. After a second request from the processor for the second piece of information, the second piece of information obtained from the second memory is supplied to the processor, even if the first information group is currently being transferred from the first memory for loading into the cache memory. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: March 15, 2022
    Assignees: STMicroelectronics (Rousset) SAS, STMicroelectronics (Grenoble 2) SAS
    Inventors: Sebastien Metzger, Silvia Brini
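    Illustrative sketch: the entry above describes serving a second memory request even while an earlier information group is still being transferred into the cache. The minimal C fragment below illustrates that serve-around idea; the fill_state structure, serve_miss, fetch_from_memory, and all values are this editor's assumptions, not the patented implementation.

      #include <stdio.h>

      struct fill_state { int active; int group; };    /* one in-flight cache fill */

      static int fetch_from_memory(int mem, int addr) {
          printf("fetch addr %d from memory %d\n", addr, mem);
          return addr * 10;                            /* dummy data */
      }

      /* Serve a miss without blocking behind an unrelated in-flight fill. */
      static int serve_miss(struct fill_state *fs, int mem, int group, int addr) {
          if (fs->active && fs->group == group)
              printf("wait: group %d is mid-transfer\n", group);
          return fetch_from_memory(mem, addr);         /* supplied to the processor */
      }

      int main(void) {
          struct fill_state fs = { 1, 1 };             /* group 1 still loading */
          serve_miss(&fs, 2, 2, 42);                   /* group 2: served right away */
          return 0;
      }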
  • Patent number: 10606599
    Abstract: A system and method for using an operation (op) cache is disclosed. The system and method include an op cache for caching previously decoded instructions. The op cache includes a plurality of physically indexed and tagged instructions, allowing sharing of instructions between threads. The op cache is chained through multiple ways, allowing service of a plurality of instructions in a cache line. The op cache storage is split between a shared operation storage and an immediate/displacement storage to maximize capacity.
    Type: Grant
    Filed: December 9, 2016
    Date of Patent: March 31, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: David N. Suggs
  • Patent number: 9021188
    Abstract: A first portion of an asymmetric memory is configured as temporary storage for application data units with sizes corresponding to a small memory block that is smaller than the size of a logical write unit associated with the asymmetric memory. A portion of the remaining asymmetric memory is configured as a reconciled storage for application data units with varying sizes. A first application data unit is received for writing to the asymmetric memory. Based on computing the size of the first application data unit as corresponding to the small memory block, the first application data unit is written to the temporary storage. Upon determining that a threshold is reached, a memory write operation is performed for writing the application data units from the temporary storage to the reconciled storage. The application data units written to the reconciled storage are removed from the temporary storage. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 28, 2015
    Assignee: Virident Systems Inc.
    Inventors: Vijay Karamcheti, Ashish Singhai, Shibabrata Mondal, Swamy Gowda
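    Illustrative sketch: a minimal C rendering of the stage-then-reconcile flow in the abstract above. The 64-byte small block, the 8-slot flush threshold, and all identifiers (stage, reconcile, write_unit) are this editor's assumptions, not the patented implementation.

      #include <stdio.h>
      #include <string.h>

      #define SMALL_BLOCK 64                 /* assumed "small memory block" size  */
      #define STAGE_SLOTS 8                  /* assumed threshold: flush when full */

      static char stage[STAGE_SLOTS][SMALL_BLOCK];           /* temporary storage */
      static int  staged = 0;
      static char reconciled[STAGE_SLOTS * 16][SMALL_BLOCK]; /* reconciled storage */
      static int  reconciled_used = 0;

      /* One batched memory write operation moving staged units out. */
      static void reconcile(void) {
          for (int i = 0; i < staged; i++)
              memcpy(reconciled[reconciled_used++], stage[i], SMALL_BLOCK);
          staged = 0;                        /* staged copies are removed */
      }

      /* Route a unit by size: small units stage; larger ones would go straight in. */
      static void write_unit(const char *data, size_t len) {
          if (len <= SMALL_BLOCK) {
              memcpy(stage[staged++], data, len);
              if (staged == STAGE_SLOTS)     /* threshold reached */
                  reconcile();
          }
      }

      int main(void) {
          for (int i = 0; i < 20; i++)
              write_unit("tiny record", 12);
          printf("staged=%d reconciled=%d\n", staged, reconciled_used);
          return 0;
      }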
  • Patent number: 8924683
    Abstract: The relay unit splits the storage area in the buffer into a plurality of partitioned areas and manages them. Upon receiving a read request from the access request source, the relay unit selects and allocates one or more of the partitioned areas and, on condition that the relevant partitioned areas are allocated, transmits the read request to the memory control unit. The memory control unit reads the data requested in the received read request from the memory, splits the read data into a plurality of units, and transmits them to the relay unit. The relay unit stores each unit of data transmitted from the memory control unit sequentially in the allocated partitioned areas and, on condition that all of the data is stored, reads the data from the allocated partitioned areas, compiles it into one whole, transmits it as read data to the access request source, and releases all of the allocated partitioned areas.
    Type: Grant
    Filed: April 21, 2011
    Date of Patent: December 30, 2014
    Assignee: Hitachi, Ltd.
    Inventor: Shuntaro Seno
  • Patent number: 8909867
    Abstract: The present invention provides a method and apparatus for allocating space in a unified cache. The method may include partitioning the unified cache into a first portion of lines that only store copies of instructions retrieved from a memory and a second portion of lines that only store copies of data retrieved from the memory. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: August 24, 2010
    Date of Patent: December 9, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventor: William L. Walker
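    Illustrative sketch: the abstract above partitions a unified cache into instruction-only and data-only lines. The C fragment below sketches one way to express that as a way partition of an 8-way cache; the 4/4 split, sizes, and names are this editor's assumptions.

      #include <stdio.h>

      #define WAYS 8
      #define SETS 64

      enum kind { INSTR, DATA };

      static unsigned tags[SETS][WAYS];
      static int      valid[SETS][WAYS];

      /* Pick a victim way only inside the partition matching the access kind. */
      static int pick_way(int set, enum kind k) {
          int lo = (k == INSTR) ? 0 : WAYS / 2;
          for (int w = lo; w < lo + WAYS / 2; w++)
              if (!valid[set][w]) return w;      /* prefer an empty way */
          return lo;                             /* else a trivial replacement pick */
      }

      static void fill(unsigned addr, enum kind k) {
          int set = (addr >> 6) % SETS;          /* 64-byte lines assumed */
          int w = pick_way(set, k);
          tags[set][w] = addr >> 12;
          valid[set][w] = 1;
          printf("%s line for 0x%x -> set %d way %d\n",
                 k == INSTR ? "instr" : "data", addr, set, w);
      }

      int main(void) {
          fill(0x1000, INSTR);
          fill(0x1000, DATA);    /* same address lands in the data partition */
          return 0;
      }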
  • Patent number: 8904113
    Abstract: Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine.
    Type: Grant
    Filed: May 24, 2012
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Han Chen, Hui Lei, Zhe Zhang
  • Patent number: 8862828
    Abstract: Method and apparatus to efficiently store and cache data. Cores of a processor and cache slices co-located with the cores may be grouped into a cluster. A memory space may be partitioned into address regions. The cluster may be associated with an address region from the address regions. Each memory address of the address region may be mapped to one or more of the cache slices grouped into the cluster. A cache access from one or more of the cores grouped into the cluster may be biased to the address region based on the association of the cluster with the address region. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: August 13, 2012
    Date of Patent: October 14, 2014
    Assignee: Intel Corporation
    Inventors: Ravindra P. Saraf, Rahul Pal, Ashok Jagannathan
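    Illustrative sketch: the abstract above associates each cluster of cores and cache slices with an address region. Below, a minimal C mapping from address to owning cluster and slice; the region size, slice counts, and hash are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      #define CLUSTERS         4
      #define SLICES_PER_CLUST 2
      #define REGION_BITS      30                /* assumed 1 GiB address regions */

      /* Region -> owning cluster (identity mapping assumed here). */
      static int cluster_of(uint64_t addr) {
          return (int)((addr >> REGION_BITS) % CLUSTERS);
      }

      /* Address -> one of the cache slices grouped into that cluster. */
      static int slice_of(uint64_t addr) {
          int c = cluster_of(addr);
          int local = (int)((addr >> 6) % SLICES_PER_CLUST);   /* 64 B lines */
          return c * SLICES_PER_CLUST + local;
      }

      int main(void) {
          uint64_t a = 0xC0000040ULL;            /* falls in region 3 */
          printf("addr 0x%llx -> cluster %d, slice %d\n",
                 (unsigned long long)a, cluster_of(a), slice_of(a));
          return 0;
      }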
  • Patent number: 8738859
    Abstract: Hybrid caching techniques and garbage collection using hybrid caching techniques are provided. A determination of a measure of a characteristic of a data object is performed, the characteristic being indicative of an access pattern associated with the data object. A selection of one caching structure, from a plurality of caching structures, is performed in which to store the data object based on the measure of the characteristic. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure and at least one processing operation is performed on the data object stored in the selected caching structure.
    Type: Grant
    Filed: September 13, 2012
    Date of Patent: May 27, 2014
    Assignee: International Business Machines Corporation
    Inventors: Chen-Yong Cher, Michael K. Gschwind
  • Publication number: 20140082290
    Abstract: A mechanism is provided in a data processing system for enhancing the wiring structure of a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path.
    Type: Application
    Filed: September 17, 2012
    Publication date: March 20, 2014
    Applicant: International Business Machines Corporation
    Inventors: Christian Habermann, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
  • Patent number: 8661200
    Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: February 5, 2010
    Date of Patent: February 25, 2014
    Assignee: Nokia Corporation
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
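    Illustrative sketch: the abstract above translates a memory address into a portion that identifies a partition of the multi-channel cache plus a further portion applied to a channel within it. The bit-field split below is this editor's assumption; only the decomposition idea comes from the entry.

      #include <stdio.h>
      #include <stdint.h>

      #define PART_BITS 2       /* assumed: 4 partitions           */
      #define CHAN_BITS 1       /* assumed: 2 channels / partition */
      #define LINE_BITS 6       /* assumed 64-byte lines           */

      struct mc_access {
          unsigned partition;   /* selects one cache partition          */
          unsigned channel;     /* selects a channel in that partition  */
          uint64_t line_addr;   /* remaining portion applied to channel */
      };

      static struct mc_access translate(uint64_t addr) {
          struct mc_access a;
          a.partition = (addr >> LINE_BITS) & ((1u << PART_BITS) - 1);
          a.channel   = (addr >> (LINE_BITS + PART_BITS)) & ((1u << CHAN_BITS) - 1);
          a.line_addr = addr >> (LINE_BITS + PART_BITS + CHAN_BITS);
          return a;
      }

      int main(void) {
          struct mc_access a = translate(0x12345ull);
          printf("partition %u, channel %u, line 0x%llx\n",
                 a.partition, a.channel, (unsigned long long)a.line_addr);
          return 0;
      }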
  • Patent number: 8639884
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system executes a single program thread, the first and second load/store execution units are reconfigured to handle instructions from that single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 28, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: Thang M. Tran
  • Patent number: 8631198
    Abstract: An interface controller of a storage device is configured to manage a write cache of the storage device responsive to changes in a voltage supply provided to the storage device. In one implementation, the interface controller reduces the size of the write cache responsive to the voltage supply dropping at or below a first threshold. The interface controller further disables write permissions to the write cache responsive to the voltage supply dropping at or below a second threshold, wherein the second threshold is lower in magnitude than the first threshold. The interface controller periodically receives the voltage supply responsive to transmitting sequential requests to a servo firmware of the storage device. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: January 14, 2014
    Assignee: Seagate Technology LLC
    Inventors: Choon Wei Ng, Chee Meng Leong, Poh Guat Bay, June Christian Ang, Kian Keong Ooi, Wei Kin Wan
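    Illustrative sketch: the two-threshold policy in the abstract above, reduced to a C fragment. The voltage values, halving step, and names are this editor's assumptions; only the shrink-then-disable behavior comes from the entry.

      #include <stdio.h>

      #define V_SHRINK  4.5     /* first threshold (volts, assumed)            */
      #define V_DISABLE 4.2     /* second, lower-magnitude threshold (assumed) */

      struct write_cache { int size_kb; int writable; };

      static void on_voltage_sample(struct write_cache *wc, double volts) {
          if (volts <= V_DISABLE) {
              wc->writable = 0;              /* no new dirty data at all */
          } else if (volts <= V_SHRINK) {
              wc->size_kb /= 2;              /* less data at risk on power loss */
              wc->writable = 1;
          } else {
              wc->writable = 1;
          }
      }

      int main(void) {
          struct write_cache wc = { 1024, 1 };
          double samples[] = { 5.0, 4.4, 4.1 };
          for (int i = 0; i < 3; i++) {
              on_voltage_sample(&wc, samples[i]);
              printf("V=%.1f -> size=%d KiB, writable=%d\n",
                     samples[i], wc.size_kb, wc.writable);
          }
          return 0;
      }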
  • Publication number: 20140006715
    Abstract: Method and apparatus to efficiently store and cache data. Cores of a processor and cache slices co-located with the cores may be grouped into a cluster. A memory space may be partitioned into address regions. The cluster may be associated with an address region from the address regions. Each memory address of the address region may be mapped to one or more of the cache slices grouped into the cluster. A cache access from one or more of the cores grouped into the cluster may be biased to the address region based on the association of the cluster with the address region.
    Type: Application
    Filed: August 13, 2012
    Publication date: January 2, 2014
    Applicant: INTEL CORPORATION
    Inventors: Ravindra P. Saraf, Rahul Pal, Ashok Jagannathan
  • Publication number: 20130332676
    Abstract: In a cloud computing environment, a cache and a memory are partitioned into "colors". The colors of the cache and the memory are allocated to virtual machines independently of one another. In order to provide cache isolation while allocating the memory and cache in different proportions, some of the colors of the memory are allocated to a virtual machine, but the virtual machine is not permitted to directly access these colors. Instead, when a request is received from the virtual machine for a memory page in one of the non-accessible colors, a hypervisor swaps the requested memory page with a memory page with a color that the virtual machine is permitted to access. The virtual machine is then permitted to access the requested memory page at the new color location. (An illustrative sketch in C follows this entry.)
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Applicant: Microsoft Corporation
    Inventors: Ramakrishna R. Kotla, Venugopalan Ramasubramanian
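    Illustrative sketch: page colors come from the physical-address bits that index cache sets, and a hypervisor can hand the virtual machine a frame of a permitted color in place of a requested, non-permitted one. The C fragment below shows that color arithmetic; the page size, color count, and remapping rule are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      #define PAGE_SHIFT 12     /* 4 KiB pages assumed      */
      #define COLOR_BITS 4      /* assumed: 16 cache colors */

      static unsigned color_of(uint64_t pa) {
          return (unsigned)((pa >> PAGE_SHIFT) & ((1u << COLOR_BITS) - 1));
      }

      /* Hypothetical swap: return a frame address whose color the VM may access. */
      static uint64_t remap_to_allowed(uint64_t pa, unsigned allowed_mask) {
          if (allowed_mask & (1u << color_of(pa)))
              return pa;                                    /* already permitted */
          for (unsigned nc = 0; nc < (1u << COLOR_BITS); nc++)
              if (allowed_mask & (1u << nc)) {              /* first permitted color */
                  uint64_t frame = pa >> PAGE_SHIFT;
                  frame = (frame & ~(uint64_t)((1u << COLOR_BITS) - 1)) | nc;
                  return frame << PAGE_SHIFT;               /* swapped location */
              }
          return pa;
      }

      int main(void) {
          uint64_t p = 0x5000;                              /* color 5 */
          uint64_t q = remap_to_allowed(p, 0x3);            /* only colors 0,1 allowed */
          printf("color %u -> remapped to 0x%llx (color %u)\n",
                 color_of(p), (unsigned long long)q, color_of(q));
          return 0;
      }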
  • Patent number: 8607000
    Abstract: This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least a first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data of the half cache line containing the miss. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: September 23, 2011
    Date of Patent: December 10, 2013
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
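    Illustrative sketch: the abstract above fetches the half of the allocated L2 line containing the miss first, then the other half. A minimal C rendering follows; the 128-byte line size and the request function are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      #define L2_LINE 128                        /* assumed L2 line size */

      static void request_from_memory(uint64_t addr, int bytes, const char *which) {
          printf("fetch %s half: [0x%llx, +%d)\n",
                 which, (unsigned long long)addr, bytes);
      }

      static void fill_line(uint64_t miss_addr) {
          uint64_t base = miss_addr & ~(uint64_t)(L2_LINE - 1);
          int high = (miss_addr & (L2_LINE / 2)) != 0;       /* miss in high half? */
          uint64_t first  = high ? base + L2_LINE / 2 : base;
          uint64_t second = high ? base : base + L2_LINE / 2;
          request_from_memory(first,  L2_LINE / 2, "miss");  /* feeds L1 and CPU  */
          request_from_memory(second, L2_LINE / 2, "other"); /* completes L2 line */
      }

      int main(void) {
          fill_line(0x1234);     /* low half of its line  */
          fill_line(0x12F4);     /* high half of its line */
          return 0;
      }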
  • Patent number: 8595425
    Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A "direct mapped" storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A "local and global cache" storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
    Type: Grant
    Filed: September 25, 2009
    Date of Patent: November 26, 2013
    Assignee: NVIDIA Corporation
    Inventors: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
  • Publication number: 20130246710
    Abstract: A storage system is provided with a plurality of physical storage devices, a cache memory, a control device that is coupled to the plurality of physical storage devices and the cache memory, and a buffer part. The buffer part is a storage region that is formed by using at least a part of a storage region of the plurality of physical storage devices and that is configured to temporarily store at least one target data element that is to be transmitted to a predetermined target. The control device stores a target data element into a cache region that has been allocated to a buffer region, which is a part of the cache memory serving as the storage region of the write destination of the target data element for the buffer part. The control device transmits the target data element from the cache memory.
    Type: Application
    Filed: March 15, 2012
    Publication date: September 19, 2013
    Inventor: Akira Deguchi
  • Patent number: 8539192
    Abstract: A method, system and computer program product for storing data in memory. An example system includes at least one multistage application configured to generate intermediate data in a generating stage of the application and consume the intermediate data in a subsequent consuming stage of the application. A runtime profiler is configured to monitor the application's execution and dynamically allocate memory to the application from an in-memory data grid.
    Type: Grant
    Filed: January 8, 2010
    Date of Patent: September 17, 2013
    Assignee: International Business Machines Corporation
    Inventors: Claris Castillo, Michael J. Spreitzer, Malgorzata Steinder, Ian N. Whalley
  • Patent number: 8484405
    Abstract: Techniques are disclosed for managing memory within a virtualized system that includes a memory compression cache. Generally, the virtualized system may include a hypervisor configured to use a compression cache to temporarily store memory pages that have been compressed to conserve memory space. A "first-in touch-out" (FITO) list may be used to manage the size of the compression cache by monitoring the compressed memory pages in the compression cache. Each element in the FITO list corresponds to a compressed page in the compression cache. Each element in the FITO list records a time at which the corresponding compressed page was stored in the compression cache (i.e., an age). A size of the compression cache may be adjusted based on the ages of the pages in the compression cache. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: July 13, 2011
    Date of Patent: July 9, 2013
    Assignee: VMware, Inc.
    Inventors: Ali Mashtizadeh, Irfan Ahmad
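    Illustrative sketch: the FITO list above records when each compressed page entered the compression cache and shrinks the cache when its oldest entries are stale. The C fragment below sketches that aging logic; the staleness threshold, shrink step, and array-based list are this editor's assumptions.

      #include <stdio.h>

      #define MAX_PAGES 128
      #define OLD_AGE   100      /* assumed staleness threshold (ticks) */

      struct fito_entry { int page_id; long stored_at; };

      static struct fito_entry fito[MAX_PAGES];    /* oldest entry first */
      static int fito_len = 0;
      static int target_pages = MAX_PAGES;         /* compression cache size */

      static void fito_insert(int page_id, long now) {
          fito[fito_len].page_id = page_id;        /* new pages go to the tail */
          fito[fito_len].stored_at = now;
          fito_len++;
      }

      /* A stale head means the compression cache is bigger than the workload
         needs: drop the oldest page and shrink the target size. */
      static void fito_trim(long now) {
          while (fito_len > 0 && now - fito[0].stored_at > OLD_AGE) {
              printf("evict page %d, shrink target to %d\n",
                     fito[0].page_id, --target_pages);
              for (int i = 1; i < fito_len; i++)
                  fito[i - 1] = fito[i];
              fito_len--;
          }
      }

      int main(void) {
          fito_insert(1, 0);
          fito_insert(2, 50);
          fito_trim(120);        /* page 1 is now older than OLD_AGE */
          return 0;
      }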
  • Patent number: 8458439
    Abstract: A processor has an associated memory hierarchy including a cache memory. The processor includes an instruction sequencing unit that fetches instructions for processing, an operand data structure including a plurality of entries corresponding to operands of operations to be performed by the processor, and a computation engine. A first entry among the plurality of entries in the operand data structure specifies a first caching policy for a first operand, and a second entry specifies a second caching policy for a second operand. The computation engine computes and stores operands in the memory hierarchy in accordance with the cache policies indicated within the operand data structure.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: June 4, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Publication number: 20130138890
    Abstract: A method for performing dynamic configuration includes: freezing a bus between a dynamic configurable cache and a plurality of cores/processors by rejecting a request from any of the cores/processors during a bus freeze period, wherein the dynamic configurable cache is implemented with an on-chip memory; and adjusting a size of a portion of the dynamic configurable cache, wherein the portion of the dynamic configurable cache is capable of caching/storing information for one of the cores/processors. An associated apparatus is also provided. In particular, the apparatus includes the plurality of cores/processors, the dynamic configurable cache, and a dynamic configurable cache controller, and can operate according to the method.
    Type: Application
    Filed: February 24, 2012
    Publication date: May 30, 2013
    Inventor: You-Ming Tsao
  • Patent number: 8443162
    Abstract: Techniques for controllably allocating a portion of a plurality of memory banks as cache memory are disclosed. To this end, a configuration tracker and a bank selector are employed. The configuration tracker configures whether each memory bank is to operate in a cache or not. The bank selector has a plurality of bank distributing functions. Upon receiving an incoming address, the bank selector determines the configuration of memory banks currently operating as the cache and applies an appropriate bank distributing function based on the configuration of memory banks. The applied bank distributing function utilizes bits in the incoming address to access one of the banks configured as being in the cache. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: January 21, 2005
    Date of Patent: May 14, 2013
    Assignee: QUALCOMM Incorporated
    Inventors: Thomas Philip Speier, James Norris Dieffenderfer, Ravi Rajagopalan
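    Illustrative sketch: the abstract above applies a bank distributing function chosen from the current configuration of banks operating as cache. The C fragment below shows one such function over a 4-bank configuration; the modulo hash and the example configuration are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      #define NBANKS 4

      static int in_cache[NBANKS] = { 1, 1, 0, 1 };  /* configuration tracker state */

      static int banks_in_cache(void) {
          int n = 0;
          for (int i = 0; i < NBANKS; i++) n += in_cache[i];
          return n;
      }

      /* Distribute an incoming address over only the banks configured as cache. */
      static int select_bank(uint32_t addr) {
          int k = (addr >> 6) % banks_in_cache();    /* k-th cache bank */
          for (int i = 0; i < NBANKS; i++)
              if (in_cache[i] && k-- == 0) return i;
          return -1;                                 /* unreachable if any bank cached */
      }

      int main(void) {
          for (uint32_t a = 0; a < 4 * 64; a += 64)
              printf("addr 0x%x -> bank %d\n", a, select_bank(a));
          return 0;
      }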
  • Publication number: 20130031310
    Abstract: A computer system includes: a main storage unit; a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to each virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit.
    Type: Application
    Filed: October 4, 2012
    Publication date: January 31, 2013
    Applicant: PANASONIC CORPORATION
    Inventor: Panasonic Corporation
  • Publication number: 20130024621
    Abstract: The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping, such that when the processor transfers data from/to a main memory to/from a frame buffer, a significant bottleneck phenomenon that may occur due to the limited bandwidth and latency of a system bus can be alleviated.
    Type: Application
    Filed: June 1, 2010
    Publication date: January 24, 2013
    Applicant: SNU R & DB Foundation
    Inventors: Ki Young Choi, Kyung Wook Chang, Jong Kyung Paek
  • Publication number: 20130007370
    Abstract: Implementations of the present disclosure involve an apparatus and/or method for allocating, dividing and accessing memory of a multi-threaded computing system based at least in part on the structural hierarchy of the components of the computing system. Allocating partitions of memory based on the hierarchy structure of the computing system may isolate the threads of the computing system such that cache-memory contention by a plurality of executing threads may be reduced. In general, the apparatus and/or method may analyze the hierarchal structure of the components of the computing system utilized in the execution of applications and divide the available memory of the system between the various components. This division of the system memory creates exclusive partitions in the caches of the computing system based on the processor and cache hierarchy. The partitions may be used by different applications or by different sections of the same application to store accessed memory in cache for quick retrieval.
    Type: Application
    Filed: July 1, 2011
    Publication date: January 3, 2013
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Alok Parikh, Amandeep Singh
  • Publication number: 20130007341
    Abstract: In some embodiments, a non-volatile cache memory may include a segmented non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the segmented non-volatile cache memory, wherein the controller is configured to control utilization of the segmented non-volatile cache memory. The segmented non-volatile cache memory may include a file cache segment, the file cache segment to store complete files in accordance with a file cache policy, and a block cache segment, the block cache segment to store one or more blocks of one or more files in accordance with a block cache policy, wherein the block cache policy is different from the file cache policy.
    Type: Application
    Filed: June 26, 2012
    Publication date: January 3, 2013
    Inventors: Dale Juenemann, R. Scott Tetrick, Oscar Pinto
  • Patent number: 8312219
    Abstract: Hybrid caching techniques and garbage collection using hybrid caching techniques are provided. A determination of a measure of a characteristic of a data object is performed, the characteristic being indicative of an access pattern associated with the data object. A selection of one caching structure, from a plurality of caching structures, is performed in which to store the data object based on the measure of the characteristic. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure and at least one processing operation is performed on the data object stored in the selected caching structure.
    Type: Grant
    Filed: March 2, 2009
    Date of Patent: November 13, 2012
    Assignee: International Business Machines Corporation
    Inventors: Chen-Yong Cher, Michael K. Gschwind
  • Publication number: 20120278556
    Abstract: Cache-resident area usage is optimized when cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area of the same size as the physical capacity assigned to the LU. An LU that is a logical space resident in cache memory is configured as a set of pages obtained by dividing, in a predetermined size, a pool volume that is a physical space created by using a plurality of storage devices. When the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not initially acquired in the cache memory; instead, a cache capacity equal to the physical capacity allocated to a new page is acquired in the cache memory each time a page is newly allocated, and the new page is made resident in the cache memory.
    Type: Application
    Filed: July 5, 2012
    Publication date: November 1, 2012
    Inventor: Hideyuki KOSEKI
  • Patent number: 8291176
    Abstract: The disclosed embodiments may relate to a protection domain group, which may include a memory region associated with a process. The protection domain group may also include a plurality of memory windows associated with the memory region. Also included may be a plurality of protection domains, each of which may correspond to a memory window. The protection domains may allow access to the memory region via a corresponding memory window.
    Type: Grant
    Filed: March 27, 2003
    Date of Patent: October 16, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey Hilland, David J. Garcia
  • Patent number: 8285971
    Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, an instruction sequencing unit that fetches instructions for execution by the at least one execution unit, and an address generation accelerator. The address generation accelerator, responsive to an initiation signal received from the instruction sequencing unit, computes and outputs first and second effective addresses of operands of an operation.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: October 9, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Patent number: 8281106
    Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: October 2, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Patent number: 8275969
    Abstract: A data storage area of a data storage device is partitioned logically between a user storage area and a device storage area. Source data stored securely in the device storage area is copied as derivative data to the user storage area, or is used as a basis for creating derivative data stored in the user storage area, whenever the data storage device is initialized. In one embodiment, the data storage area is read-write and the device storage area has embodied thereon device system code, executed by a controller of the data storage device, for writing source data to the device storage area only if the source data satisfies a predetermined condition. Examples of derivative data include an autorun file, a volume label and user identification. Data from a host may be stored reversibly in the user storage area but must be stored securely in the device storage area.
    Type: Grant
    Filed: February 22, 2005
    Date of Patent: September 25, 2012
    Assignee: Sandisk IL Ltd.
    Inventors: Dov Moran, Eyal Bychkov
  • Publication number: 20120226869
    Abstract: When the storage capacity of a file server is expanded using an online storage service, the file-size upper limit imposed as a constraint of the online storage service is eliminated and the communication cost is reduced. A kernel module including logical volumes on the online storage service divides a file into fixed-length block files and stores and manages the block files to avoid the upper-limit constraint on file size. When a READ/WRITE request is generated for a mounted file system, only the necessary block files are downloaded from the online storage service, based on the offset value and size information, to optimize the communication and reduce the communication cost. (An illustrative sketch in C follows this entry.)
    Type: Application
    Filed: March 31, 2010
    Publication date: September 6, 2012
    Applicant: HITACHI SOLUTIONS, LTD.
    Inventors: Yasuhiro Kirihata, Hideyuki Kashiwase
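    Illustrative sketch: a READ at a given offset and size touches only some of the fixed-length block files, so only those are downloaded. The C fragment below computes that block range; the 4 MiB block size and naming scheme are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      #define BLOCK_FILE_SIZE (4ull * 1024 * 1024)   /* assumed 4 MiB block files */

      /* Print which numbered block files a read of `size` bytes at `offset`
         needs, so only those are fetched from the online storage service. */
      static void blocks_for_read(uint64_t offset, uint64_t size) {
          uint64_t first = offset / BLOCK_FILE_SIZE;
          uint64_t last  = (offset + size - 1) / BLOCK_FILE_SIZE;
          for (uint64_t b = first; b <= last; b++)
              printf("download block file %llu (e.g. data.%06llu)\n",
                     (unsigned long long)b, (unsigned long long)b);
      }

      int main(void) {
          /* A 10 MiB read starting 6 MiB into the file spans blocks 1..3. */
          blocks_for_read(6ull * 1024 * 1024, 10ull * 1024 * 1024);
          return 0;
      }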
  • Publication number: 20120215969
    Abstract: According to one embodiment, a storage device includes: a backup unit configured to perform backup at the time of a power shutoff; a storage module configured to store information; a cache memory configured to perform caching of the storage module; and a controller configured to adjust a size of a cache to the cache memory according to exhaustion of the backup unit.
    Type: Application
    Filed: February 22, 2012
    Publication date: August 23, 2012
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Masaaki Tamura, Masakazu Tsuruoka
  • Patent number: 8250305
    Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method in a processor for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by a service processor and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache lines.
    Type: Grant
    Filed: March 19, 2008
    Date of Patent: August 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
  • Patent number: 8230196
    Abstract: Example embodiments for configuring a non-volatile memory device may comprise configuring M physical partitions of the non-volatile memory into two or more banks, wherein the two or more banks respectively comprise one or more of the M physical partitions, and wherein at least a first of the M physical partitions comprises a first size and wherein at least a second of the M physical partitions comprises a second size.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: July 24, 2012
    Assignee: Micron Technology, Inc.
    Inventors: Emanuele Confalonieri, Corrado Villa
  • Patent number: 8219757
    Abstract: In some embodiments, a processor-based system includes a processor, a system memory coupled to the processor, a mass storage device, a cache memory located between the system memory and the mass storage device, and code stored on the processor-based system to cause the processor-based system to utilize the cache memory. The code may be configured to cause the processor-based system to preferentially use only a selected size of the cache memory to store cache entries having less than or equal to a selected number of cache hits. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 30, 2008
    Date of Patent: July 10, 2012
    Assignee: Intel Corporation
    Inventors: Glenn Hinton, Dale Juenemann, R. Scott Tetrick
  • Publication number: 20120166730
    Abstract: The present invention is directed to a circuit for managing data movement between an interface supporting the PLB6 bus protocol, an interface supporting the AMBA AXI bus protocol, and internal data arrays of a cache controller and/or on-chip memory peripheral. The circuit implements register file buffers for gathering data to bridge differences between the bus protocols and bus widths in a manner which addresses latency and performance concerns of the overall system.
    Type: Application
    Filed: December 27, 2010
    Publication date: June 28, 2012
    Applicant: LSI CORPORATION
    Inventors: Judy M. Gehman, Jerome M. Meyer
  • Publication number: 20120166723
    Abstract: An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions, where the information in one or more of the partitions is composed of data different from user data, including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition. Through this configuration, in a storage system having an upper limit on the capacity of the cache memory, a large amount of control information can be used while access performance to control information is maintained.
    Type: Application
    Filed: December 27, 2010
    Publication date: June 28, 2012
    Inventors: Akihiko Araki, Yusuke Nonaka
  • Patent number: 8190823
    Abstract: An apparatus, system, and method are disclosed for deduplicating storage cache data. A storage cache partition table has at least one entry associating a specified storage address range with one or more specified storage partitions. A deduplication module creates an entry in the storage cache partition table wherein the specified storage partitions contain identical data to one another within the specified storage address range, thus requiring only one copy of the identical data to be cached in a storage cache. A read module accepts a storage address within a storage partition of a storage subsystem, locates an entry wherein the specified storage address range contains the storage address, and determines whether the storage partition is among the one or more specified storage partitions if such an entry is found. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: September 18, 2008
    Date of Patent: May 29, 2012
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Rod D. Waltermann, Mark Charles Davis
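    Illustrative sketch: the partition table above lets one cached copy serve every partition that holds identical data over a given address range. A minimal C lookup follows; the structures, mask encoding, and single-entry table are this editor's assumptions.

      #include <stdio.h>
      #include <stdint.h>

      struct dedup_entry {
          uint64_t start, end;       /* specified storage address range   */
          unsigned partition_mask;   /* partitions holding identical data */
      };

      /* One-entry table: partitions 0, 1 and 2 share [0x0, 0x100000). */
      static struct dedup_entry table[] = { { 0x0, 0x100000, 0x7 } };

      /* Read path: can this (partition, address) use the shared cached copy? */
      static int served_by_shared_copy(unsigned partition, uint64_t addr) {
          for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
              if (addr >= table[i].start && addr < table[i].end &&
                  (table[i].partition_mask & (1u << partition)))
                  return 1;
          return 0;
      }

      int main(void) {
          printf("partition 0 @ 0x2000: %d\n", served_by_shared_copy(0, 0x2000));
          printf("partition 3 @ 0x2000: %d\n", served_by_shared_copy(3, 0x2000));
          return 0;
      }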
  • Publication number: 20120054442
    Abstract: The present invention provides a method and apparatus for allocating space in a unified cache. The method may include partitioning the unified cache into a first portion of lines that only store copies of instructions retrieved from a memory and a second portion of lines that only store copies of data retrieved from the memory.
    Type: Application
    Filed: August 24, 2010
    Publication date: March 1, 2012
    Inventor: William L. Walker
  • Patent number: 8108609
    Abstract: A hardware description language (HDL) design structure embodied on a machine-readable data storage medium includes elements that when processed in a computer aided design system generates a machine executable representation of a device for implementing dynamic refresh protocols for DRAM based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion; and a cache controller configured to assign incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines; wherein cache lines corresponding to data having a usage history below a defined frequency are assigned by the controller to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: May 23, 2008
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: John E. Barth, Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
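    Illustrative sketch: the assignment rule above, as a one-line C decision. The usage threshold is this editor's assumption; the rule that frequently used lines go to the non-refreshable portion comes from the abstract.

      #include <stdio.h>

      #define FREQ_THRESHOLD 4    /* assumed "defined frequency" */

      enum portion { REFRESHABLE, NON_REFRESHABLE };

      /* Frequently reused lines live in the non-refreshable portion: they are
         re-referenced often enough that explicit DRAM refresh is unnecessary. */
      static enum portion place_line(int usage_count) {
          return usage_count >= FREQ_THRESHOLD ? NON_REFRESHABLE : REFRESHABLE;
      }

      int main(void) {
          int counts[] = { 0, 3, 4, 9 };
          for (int i = 0; i < 4; i++)
              printf("usage %d -> %s portion\n", counts[i],
                     place_line(counts[i]) == REFRESHABLE ? "refreshable"
                                                          : "non-refreshable");
          return 0;
      }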
  • Publication number: 20110320687
    Abstract: Embodiments of the invention are directed to reducing write amplification in a cache with flash memory used as a write cache. An embodiment of the invention includes partitioning at least one flash memory device in the cache into a plurality of logical partitions. Each of the plurality of logical partitions is a logical subdivision of one of the at least one flash memory device and comprises a plurality of memory pages. Data are buffered in a buffer. The data includes data to be cached, and data to be destaged from the cache to a storage subsystem. Data to be cached are written from the buffer to the at least one flash memory device. A processor coupled to the buffer is provided with access to the data written to the at least one flash memory device from the buffer, and a location of the data written to the at least one flash memory device within the plurality of logical partitions. The data written to the at least one flash memory device are destaged from the buffer to the storage subsystem.
    Type: Application
    Filed: June 29, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wendy A. Belluomini, Binny S. Gill, Michael A. Ko
  • Patent number: 7970997
    Abstract: A program section layout method capable of improving space efficiency of a cache memory. A grouping unit groups program sections into section groups so that the total size of the program sections composing each section group does not exceed cache memory size. A layout optimization unit optimizes the layout of section group storage regions by combining each section group and a program section that does not belong to any section groups or by combining section groups while keeping the ordering relations of the program sections composing each section group. (An illustrative sketch in C follows this entry.)
    Type: Grant
    Filed: December 23, 2004
    Date of Patent: June 28, 2011
    Assignee: Fujitsu Semiconductor Limited
    Inventor: Manabu Watanabe
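    Illustrative sketch: the grouping step above packs program sections into section groups whose total size stays within the cache size. The C fragment below uses a simple first-fit pass as a stand-in; the cache size, section sizes, and the first-fit choice are this editor's assumptions, and the patent's layout optimization is not shown.

      #include <stdio.h>

      #define CACHE_SIZE 32      /* assumed cache memory size (KiB) */
      #define MAX_GROUPS 16

      static int group_size[MAX_GROUPS];
      static int ngroups = 0;

      /* First-fit: place a section in the first group it fits into, so no
         group's total exceeds CACHE_SIZE. */
      static int assign_section(int size) {
          for (int g = 0; g < ngroups; g++)
              if (group_size[g] + size <= CACHE_SIZE) {
                  group_size[g] += size;
                  return g;
              }
          group_size[ngroups] = size;
          return ngroups++;
      }

      int main(void) {
          int sections[] = { 12, 20, 8, 16, 4 };
          for (int i = 0; i < 5; i++)
              printf("section of %2d KiB -> group %d\n",
                     sections[i], assign_section(sections[i]));
          return 0;
      }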
  • Patent number: 7941585
    Abstract: A RISC-type processor includes a main register file and a data cache. The data cache can be partitioned to include a local memory, the size of which can be dynamically changed on a cache block basis while the processor is executing instructions that use the main register file. The local memory can act as an additional register file for the processor and can reside at a virtual address. The local memory can be further partitioned for prefetching data from a non-cacheable address to be stored/loaded into the main register file.
    Type: Grant
    Filed: December 17, 2004
    Date of Patent: May 10, 2011
    Assignee: Cavium Networks, Inc.
    Inventors: David H. Asher, David A. Carlson, Richard E. Kessler
  • Patent number: 7877547
    Abstract: A system, method and circuit for efficiently managing a cache storage device. A cache storage device may include a cache management module. The cache management module may be adapted to generate a management unit and to associate the management unit with new data that is to be written into the cache. The cache management module may be further adapted to assign two or more allocation units for each management unit, to store the new data in the cache. A cache management module may include a management unit module. The management unit module may be adapted to generate management units associated with predefined global cache management functions. The cache management module may further include an allocation unit module in communication with the management unit module. The allocation unit module may be adapted to assign allocation units for storing data written into the cache.
    Type: Grant
    Filed: May 6, 2005
    Date of Patent: January 25, 2011
    Assignee: International Business Machines Corporation
    Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen, Shemer Schwartz
  • Patent number: 7831797
    Abstract: A memory system is functionally designed so that, despite operation without an error correction device, memory chips of a memory module that are actually provided for error correction are concomitantly used for the data transfer. A control device is configured to receive, store and transfer data packets to and from a first and second set of memory chips. Transfer of an internal packet data from the control device to memory takes place such that a first record is stored in a second set of memory chips and additional records are stored in the first set of memory chips. In preferred embodiments, data is allocated in the second set of memory chips such that at least one additional transfer step takes place to the second set of memory chips compared with transfers to the first set of memory chips. In the additional transfer step(s), the first set of memory chips is masked from receiving data.
    Type: Grant
    Filed: September 27, 2007
    Date of Patent: November 9, 2010
    Assignee: Qimonda AG
    Inventor: Hermann Ruckerbauer
  • Publication number: 20100268880
    Abstract: Disclosed are a method, a system and a computer program product for operating a cache system. The cache system can include multiple cache lines, and a first cache line of the multiple cache lines can include multiple cache cells and a bus coupled to the multiple cache cells. In one or more embodiments, the bus can include a switch that is operable to receive a first control signal and to split the bus into first and second portions or aggregate the bus into a whole based on the first control signal. When the bus is split, a first cache cell and a second cache cell of the multiple cache cells are coupled to respective first and second portions of the bus. Data from the first and second cache cells can be selected through respective portions of the bus and outputted through a port of the cache system.
    Type: Application
    Filed: April 15, 2009
    Publication date: October 21, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ravi Kumar Arimilli, Donald W. Plass, William John Starke
  • Patent number: 7793048
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: September 7, 2010
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Publication number: 20100174863
    Abstract: A system is described for providing scalable in-memory caching for a distributed database. The system may include a cache, an interface, a non-volatile memory and a processor. The cache may store a cached copy of data items stored in the non-volatile memory. The interface may communicate with devices and a replication server. The non-volatile memory may store the data items. The processor may receive an update to a data item from a device to be applied to the non-volatile memory. The processor may apply the update to the cache. The processor may generate an acknowledgement indicating that the update was applied to the non-volatile memory and may communicate the acknowledgment to the device. The processor may then communicate the update to a replication server. The processor may apply the update to the non-volatile memory upon receiving an indication that the update was stored by the replication server.
    Type: Application
    Filed: March 15, 2010
    Publication date: July 8, 2010
    Applicant: Yahoo! Inc.
    Inventors: Brian F. Cooper, Adam Silberstein, Utkarsh Srivastava, Raghu Ramakrishnan, Rodrigo Fonseca