Partitioned Cache, E.g., Separate Instruction And Operand Caches, Etc. (EPO) Patents (Class 711/E12.046)
-
Patent number: 11275589
Abstract: A processor interacts with a memory set including a cache memory, a first memory storing at least a first piece of information in a first information group, and a second memory storing at least a second piece of information in a second information group. In response to a first cache miss and following a first request from the processor for the first piece of information, the first piece of information obtained from the first memory is supplied to the processor. After a second request from the processor for the second piece of information, the second piece of information obtained from the second memory is supplied to the processor, even if the first information group is currently being transferred from the first memory for loading into the cache memory.
Type: Grant
Filed: September 17, 2019
Date of Patent: March 15, 2022
Assignees: STMicroelectronics (Rousset) SAS, STMicroelectronics (Grenoble 2) SAS
Inventors: Sebastien Metzger, Silvia Brini
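The key behavior here is that a line fill from one memory does not serialize misses that target a different memory. A rough, hypothetical sketch of that idea follows; the class name, the lazy fill bookkeeping, and the toy addresses are all illustrative, not from the patent.

    # Hypothetical sketch: while a cache line fill from memory A is still
    # streaming in, a miss that targets memory B is serviced directly from
    # memory B instead of stalling behind the fill.

    class NonBlockingCache:
        def __init__(self, memories):
            self.memories = memories        # id -> dict(address -> value)
            self.cache = {}                 # address -> value
            self.active_fill = None         # id of memory currently streaming a group

        def read(self, mem_id, addr):
            if addr in self.cache:
                return self.cache[addr]             # hit: no memory traffic
            value = self.memories[mem_id][addr]     # miss: fetch the requested word
            if self.active_fill is None:
                self.active_fill = mem_id           # begin loading its group lazily
            # Crucially, a miss to a *different* memory is not gated on
            # self.active_fill: the word is returned to the processor at once.
            return value

    mems = {"flash": {0x100: "first"}, "sram": {0x200: "second"}}
    c = NonBlockingCache(mems)
    print(c.read("flash", 0x100))   # starts a fill from flash
    print(c.read("sram", 0x200))    # served immediately despite the ongoing fill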
-
Patent number: 10606599
Abstract: A system and method for using an operation (op) cache is disclosed. The system and method include an op cache for caching previously decoded instructions. The op cache includes a plurality of physically indexed and tagged instructions, allowing sharing of instructions between threads. The op cache is chained through multiple ways, allowing service of a plurality of instructions in a cache line. The op cache is stored between a shared operation storage and immediate/displacement storage to maximize capacity.
Type: Grant
Filed: December 9, 2016
Date of Patent: March 31, 2020
Assignee: ADVANCED MICRO DEVICES, INC.
Inventor: David N. Suggs
-
Patent number: 9021188
Abstract: A first portion of an asymmetric memory is configured as temporary storage for application data units with sizes corresponding to a small memory block that is smaller than the size of a logical write unit associated with the asymmetric memory. A portion of the remaining asymmetric memory is configured as a reconciled storage for application data units with varying sizes. A first application data unit is received for writing to the asymmetric memory. Based on computing the size of the first application data unit as corresponding to the small memory block, the first application data unit is written to the temporary storage. Upon determining that a threshold is reached, a memory write operation is performed for writing the application data units from the temporary storage to the reconciled storage. The application data units written to the reconciled storage are removed from the temporary storage.
Type: Grant
Filed: March 15, 2013
Date of Patent: April 28, 2015
Assignee: Virident Systems Inc.
Inventors: Vijay Karamcheti, Ashish Singhai, Shibabrata Mondal, Swamy Gowda
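A minimal sketch of this staging scheme, under assumed values: small writes accumulate in a temporary region, then move to reconciled storage in one bulk operation once a threshold is hit. The 4 KiB write unit and 8-block threshold are invented for illustration.

    LOGICAL_WRITE_UNIT = 4096   # bytes; assumed device write granularity
    FLUSH_THRESHOLD = 8         # assumed: reconcile after 8 staged small blocks

    temporary = []              # small application data units, staged
    reconciled = []             # units of any size, final home

    def write(data_unit):
        if len(data_unit) < LOGICAL_WRITE_UNIT:
            temporary.append(data_unit)          # small block: stage it
            if len(temporary) >= FLUSH_THRESHOLD:
                reconciled.extend(temporary)     # one bulk memory write operation
                temporary.clear()                # staged copies are removed
        else:
            reconciled.append(data_unit)         # full-size unit goes straight in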
-
Patent number: 8924683
Abstract: The relay unit splits the storage area in the buffer into a plurality of partitioned areas and manages them. Upon receiving a read request from the access request source, it selects and allocates one or more of the partitioned areas and, on condition that the relevant partitioned areas are allocated, transmits the read request to the memory control unit. The memory control unit reads the data requested in the received read request from the memory, splits the data which is read into a plurality of units, and transmits them to the relay unit. The relay unit stores each unit of data transmitted from the memory control unit in each of the allocated partitioned areas sequentially and, on condition that all of the data is stored, reads each unit from each of the allocated partitioned areas, compiles the units into one, transmits the result as read data to the access request source, and releases all of the allocated partitioned areas.
Type: Grant
Filed: April 21, 2011
Date of Patent: December 30, 2014
Assignee: Hitachi, Ltd.
Inventor: Shuntaro Seno
-
Patent number: 8909867
Abstract: The present invention provides a method and apparatus for allocating space in a unified cache. The method may include partitioning the unified cache into a first portion of lines that only store copies of instructions retrieved from a memory and a second portion of lines that only store copies of data retrieved from the memory.
Type: Grant
Filed: August 24, 2010
Date of Patent: December 9, 2014
Assignee: Advanced Micro Devices, Inc.
Inventor: William L. Walker
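The mechanism is simple enough to sketch directly: a unified cache whose lines are split into an instruction-only portion and a data-only portion, so each fill type can evict only lines of its own kind. The 50/50 split and the LRU policy below are assumptions, not from the patent.

    from collections import OrderedDict

    class PartitionedUnifiedCache:
        def __init__(self, lines_per_part=4):
            self.parts = {"inst": OrderedDict(), "data": OrderedDict()}
            self.capacity = lines_per_part

        def fill(self, kind, addr, line):
            part = self.parts[kind]               # instruction fills never touch data lines
            if addr in part:
                part.move_to_end(addr)
            elif len(part) >= self.capacity:
                part.popitem(last=False)          # evict LRU line of the same kind only
            part[addr] = line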
-
Patent number: 8904113
Abstract: Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine.
Type: Grant
Filed: May 24, 2012
Date of Patent: December 2, 2014
Assignee: International Business Machines Corporation
Inventors: Han Chen, Hui Lei, Zhe Zhang
-
Patent number: 8862828
Abstract: Method and apparatus to efficiently store and cache data. Cores of a processor and cache slices co-located with the cores may be grouped into a cluster. A memory space may be partitioned into address regions. The cluster may be associated with an address region from the address regions. Each memory address of the address region may be mapped to one or more of the cache slices grouped into the cluster. A cache access from one or more of the cores grouped into the cluster may be biased to the address region based on the association of the cluster with the address region.
Type: Grant
Filed: August 13, 2012
Date of Patent: October 14, 2014
Assignee: Intel Corporation
Inventors: Ravindra P. Saraf, Rahul Pal, Ashok Jagannathan
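A hypothetical sketch of the address-to-slice mapping this describes: carve the memory space into regions, associate each region with a cluster of cache slices, and hash an address onto a slice within its region's cluster. The region size, cluster table, and line-index hash are all assumed.

    REGION_BITS = 30                       # assumed: 1 GiB address regions
    CLUSTERS = {0: [0, 1], 1: [2, 3]}      # region index -> cache slice ids

    def slice_for_address(addr):
        region = (addr >> REGION_BITS) % len(CLUSTERS)
        slices = CLUSTERS[region]          # accesses are biased to this cluster
        return slices[(addr >> 6) % len(slices)]   # pick a slice by cache-line index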
-
Patent number: 8738859
Abstract: Hybrid caching techniques and garbage collection using hybrid caching techniques are provided. A determination of a measure of a characteristic of a data object is performed, the characteristic being indicative of an access pattern associated with the data object. A selection of one caching structure, from a plurality of caching structures, is performed in which to store the data object based on the measure of the characteristic. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure and at least one processing operation is performed on the data object stored in the selected caching structure.
Type: Grant
Filed: September 13, 2012
Date of Patent: May 27, 2014
Assignee: International Business Machines Corporation
Inventors: Chen-Yong Cher, Michael K. Gschwind
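A hedged sketch of this selection step, assuming access frequency as the measured characteristic (the patent leaves the characteristic abstract) and three invented buckets:

    caches = {"hot": {}, "warm": {}, "cold": {}}   # illustrative caching structures

    def select_structure(access_frequency):
        if access_frequency > 100:
            return caches["hot"]
        if access_frequency > 10:
            return caches["warm"]
        return caches["cold"]

    def store(key, obj, access_frequency):
        # Objects with similar measures end up co-resident in one structure.
        select_structure(access_frequency)[key] = obj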
-
Publication number: 20140082290
Abstract: A mechanism is provided in a data processing system for enhancing wiring structure for a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path.
Type: Application
Filed: September 17, 2012
Publication date: March 20, 2014
Applicant: International Business Machines Corporation
Inventors: Christian Habermann, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
-
Patent number: 8661200
Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.
Type: Grant
Filed: February 5, 2010
Date of Patent: February 25, 2014
Assignee: Nokia Corporation
Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
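A minimal sketch of that translation: one slice of the address selects a partition, a further slice is applied within the partition's channel, and the low bits index within the line. The exact bit allocation here is an assumption for illustration only.

    PARTITION_BITS = 2     # assumed: 4 partitions
    CHANNEL_BITS = 1       # assumed: 2 channels per partition

    def translate(addr):
        partition = (addr >> 6) & ((1 << PARTITION_BITS) - 1)   # skip 64-byte line offset
        channel = (addr >> (6 + PARTITION_BITS)) & ((1 << CHANNEL_BITS) - 1)
        offset = addr & 0x3F                                    # within the cache line
        return partition, channel, offset   # further portion applied to the channel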
-
Patent number: 8639884
Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system is executing a single program thread, the first and second load/store execution units are reconfigured to handle instructions from the single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
Type: Grant
Filed: February 28, 2011
Date of Patent: January 28, 2014
Assignee: Freescale Semiconductor, Inc.
Inventor: Thang M. Tran
-
Patent number: 8631198
Abstract: An interface controller of a storage device configured to manage a write cache of the storage device responsive to changes in a voltage supply provided to the storage device. In one implementation, the interface controller reduces the size of the write cache responsive to the voltage supply dropping at or below a first threshold. The interface controller further disables write permissions to the write cache responsive to the voltage supply dropping at or below a second threshold, wherein the second threshold is lower in magnitude than the first threshold. The interface controller periodically receives the voltage supply responsive to transmitting sequential requests to a servo firmware of the storage device.
Type: Grant
Filed: August 6, 2010
Date of Patent: January 14, 2014
Assignee: Seagate Technology LLC
Inventors: Choon Wei Ng, Chee Meng Leong, Poh Guat Bay, June Christian Ang, Kian Keong Ooi, Wei Kin Wan
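A hypothetical sketch of the two-threshold policy: as supply voltage sags, the write cache is first shrunk, then write permissions are revoked entirely. The specific voltages and the halving step are invented; the patent specifies only the ordering of the thresholds.

    V_SHRINK = 4.5    # assumed first threshold (volts)
    V_DISABLE = 4.2   # assumed second, lower-magnitude threshold

    def adjust_write_cache(voltage, cache):
        if voltage <= V_DISABLE:
            cache["writable"] = False           # protect data: no new dirty entries
        elif voltage <= V_SHRINK:
            cache["size"] = cache["size"] // 2  # shed cache so less needs flushing
        return cache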
-
Publication number: 20140006715
Abstract: Method and apparatus to efficiently store and cache data. Cores of a processor and cache slices co-located with the cores may be grouped into a cluster. A memory space may be partitioned into address regions. The cluster may be associated with an address region from the address regions. Each memory address of the address region may be mapped to one or more of the cache slices grouped into the cluster. A cache access from one or more of the cores grouped into the cluster may be biased to the address region based on the association of the cluster with the address region.
Type: Application
Filed: August 13, 2012
Publication date: January 2, 2014
Applicant: INTEL CORPORATION
Inventors: Ravindra P. Saraf, Rahul Pal, Ashok Jagannathan
-
Publication number: 20130332676
Abstract: In a cloud computing environment, a cache and a memory are partitioned into "colors". The colors of the cache and the memory are allocated to virtual machines independently of one another. In order to provide cache isolation while allocating the memory and cache in different proportions, some of the colors of the memory are allocated to a virtual machine, but the virtual machine is not permitted to directly access these colors. Instead, when a request is received from the virtual machine for a memory page in one of the non-accessible colors, a hypervisor swaps the requested memory page with a memory page with a color that the virtual machine is permitted to access. The virtual machine is then permitted to access the requested memory page at the new color location.
Type: Application
Filed: June 12, 2012
Publication date: December 12, 2013
Applicant: Microsoft Corporation
Inventors: Ramakrishna R. Kotla, Venugopalan Ramasubramanian
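A hedged toy model of the color swap: when a VM faults on a page whose color it cannot touch directly, the hypervisor exchanges that page's contents with a page in an accessible color and lets the VM proceed there. The color count and the page-list data structure are assumptions.

    pages = {color: [f"page-{color}-{i}" for i in range(4)] for color in range(4)}
    accessible = {0, 1}              # colors this VM may access directly

    def on_page_request(requested_color, page_index):
        if requested_color in accessible:
            return (requested_color, page_index)    # direct access permitted
        target_color = next(iter(accessible))       # pick an accessible color
        # Swap the page bodies between the two colors, then let the VM proceed
        # against the copy now living at the accessible color.
        pages[target_color][page_index], pages[requested_color][page_index] = \
            pages[requested_color][page_index], pages[target_color][page_index]
        return (target_color, page_index)

    print(on_page_request(2, 1))    # -> (0, 1): served from an accessible color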
-
Patent number: 8607000
Abstract: This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least a first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within a low half or a high half of the allocated cache line. The data processing system first requests from external memory the data of the missed half of the cache line. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.
Type: Grant
Filed: September 23, 2011
Date of Patent: December 10, 2013
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
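The fill ordering reduces miss latency by fetching the critical half of the line first. A minimal sketch, assuming a 128-byte L2 line (the patent does not fix the line size):

    LINE = 128
    HALF = LINE // 2

    def fill_order(miss_addr):
        base = miss_addr & ~(LINE - 1)
        miss_in_high_half = bool(miss_addr & HALF)
        first = base + HALF if miss_in_high_half else base     # critical half first
        second = base if miss_in_high_half else base + HALF
        return [(first, HALF), (second, HALF)]   # external requests, in order

    print(fill_order(0x1040))   # miss in high half -> [(0x1040, 64), (0x1000, 64)]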
-
Patent number: 8595425
Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A "direct mapped" storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A "local and global cache" storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
Type: Grant
Filed: September 25, 2009
Date of Patent: November 26, 2013
Assignee: NVIDIA Corporation
Inventors: Alexander L. Minkin, Steven James Heinrich, RaJeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
-
Publication number: 20130246710
Abstract: A storage system is provided with a plurality of physical storage devices, a cache memory, a control device that is coupled to the plurality of physical storage devices and the cache memory, and a buffer part. The buffer part is a storage region that is formed by using at least a part of a storage region of the plurality of physical storage devices and that is configured to temporarily store at least one target data element that is to be transmitted to a predetermined target. The control device stores a target data element into a cache region that has been allocated to a buffer region (that is a part of the cache memory and that is a storage region of a write destination of the target data element for the buffer part). The control device transmits the target data element from the cache memory.
Type: Application
Filed: March 15, 2012
Publication date: September 19, 2013
Inventor: Akira Deguchi
-
Patent number: 8539192
Abstract: A method, system and computer program product for storing data in memory. An example system includes at least one multistage application configured to generate intermediate data in a generating stage of the application and consume the intermediate data in a subsequent consuming stage of the application. A runtime profiler is configured to monitor the application's execution and dynamically allocate memory to the application from an in-memory data grid.
Type: Grant
Filed: January 8, 2010
Date of Patent: September 17, 2013
Assignee: International Business Machines Corporation
Inventors: Claris Castillo, Michael J. Spreitzer, Malgorzata Steinder, Ian N. Whalley
-
Patent number: 8484405
Abstract: Techniques are disclosed for managing memory within a virtualized system that includes a memory compression cache. Generally, the virtualized system may include a hypervisor configured to use a compression cache to temporarily store memory pages that have been compressed to conserve memory space. A "first-in touch-out" (FITO) list may be used to manage the size of the compression cache by monitoring the compressed memory pages in the compression cache. Each element in the FITO list corresponds to a compressed page in the compression cache. Each element in the FITO list records a time at which the corresponding compressed page was stored in the compression cache (i.e., an age). A size of the compression cache may be adjusted based on the ages of the pages in the compression cache.
Type: Grant
Filed: July 13, 2011
Date of Patent: July 9, 2013
Assignee: VMware, Inc.
Inventors: Ali Mashtizadeh, Irfan Ahmad
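A hedged sketch of the FITO accounting: each compressed page records when it entered the cache, and the cache grows or shrinks based on how old its oldest page is. The age threshold and unit resize step are invented for illustration; the patent only ties resizing to page ages.

    import time
    from collections import deque

    fito = deque()                  # (page_id, insert_time), oldest at the left
    MAX_AGE = 60.0                  # assumed: pages older than 60 s suggest excess

    def insert_page(page_id):
        fito.append((page_id, time.time()))

    def adjust_cache_size(current_size):
        now = time.time()
        if fito and now - fito[0][1] > MAX_AGE:
            return current_size - 1     # oldest page is stale: cache can shrink
        return current_size + 1         # pages cycle quickly: cache under pressure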
-
Patent number: 8458439
Abstract: A processor has an associated memory hierarchy including a cache memory. The processor includes an instruction sequencing unit that fetches instructions for processing, an operand data structure including a plurality of entries corresponding to operands of operations to be performed by the processor, and a computation engine. A first entry among the plurality of entries in the operand data structure specifies a first caching policy for a first operand, and a second entry specifies a second caching policy for a second operand. The computation engine computes and stores operands in the memory hierarchy in accordance with the cache policies indicated within the operand data structure.
Type: Grant
Filed: December 16, 2008
Date of Patent: June 4, 2013
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy
-
Publication number: 20130138890
Abstract: A method for performing dynamic configuration includes: freezing a bus between a dynamic configurable cache and a plurality of cores/processors by rejecting a request from any of the cores/processors during a bus freeze period, wherein the dynamic configurable cache is implemented with an on-chip memory; and adjusting a size of a portion of the dynamic configurable cache, wherein the portion of the dynamic configurable cache is capable of caching/storing information for one of the cores/processors. An associated apparatus is also provided. In particular, the apparatus includes the plurality of cores/processors, the dynamic configurable cache, and a dynamic configurable cache controller, and can operate according to the method.
Type: Application
Filed: February 24, 2012
Publication date: May 30, 2013
Inventor: You-Ming Tsao
-
Patent number: 8443162
Abstract: Techniques for controllably allocating a portion of a plurality of memory banks as cache memory are disclosed. To this end, a configuration tracker and a bank selector are employed. The configuration tracker configures whether each memory bank is to operate in a cache or not. The bank selector has a plurality of bank distributing functions. Upon receiving an incoming address, the bank selector determines the configuration of memory banks currently operating as the cache and applies an appropriate bank distributing function based on the configuration of memory banks. The applied bank distributing function utilizes bits in the incoming address to access one of the banks configured as being in the cache.
Type: Grant
Filed: January 21, 2005
Date of Patent: May 14, 2013
Assignee: QUALCOMM Incorporated
Inventors: Thomas Philip Speier, James Norris Dieffenderfer, Ravi Rajagopalan
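A minimal sketch of the selector picking a distributing function to match the current bank configuration. The two concrete hash functions below are assumptions; the patent only requires that the chosen function map address bits onto the in-cache banks.

    cache_banks = [0, 2, 3]        # tracker state: banks currently operating as cache

    def distribute(addr):
        n = len(cache_banks)
        if n & (n - 1) == 0:                       # power of two: cheap bit select
            index = (addr >> 6) & (n - 1)
        else:                                      # otherwise: modulo distribution
            index = (addr >> 6) % n
        return cache_banks[index]                  # bank that will hold this line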
-
Publication number: 20130031310
Abstract: A computer system includes: a main storage unit; a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to the virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit.
Type: Application
Filed: October 4, 2012
Publication date: January 31, 2013
Applicant: PANASONIC CORPORATION
Inventor: Panasonic Corporation
-
Publication number: 20130024621
Abstract: The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping, such that when the processor transfers data from/to a main memory to/from a frame buffer, a significant bottleneck phenomenon that may occur due to the limited bandwidth and latency of a system bus can be improved.
Type: Application
Filed: June 1, 2010
Publication date: January 24, 2013
Applicant: SNU R & DB Foundation
Inventors: Ki Young Choi, Kyung Wook Chang, Jong Kyung Paek
-
Publication number: 20130007370
Abstract: Implementations of the present disclosure involve an apparatus and/or method for allocating, dividing and accessing memory of a multi-threaded computing system based at least in part on the structural hierarchy of the components of the computing system. Allocating partitions of memory based on the hierarchy structure of the computing system may isolate the threads of the computing system such that cache-memory contention by a plurality of executing threads may be reduced. In general, the apparatus and/or method may analyze the hierarchical structure of the components of the computing system utilized in the execution of applications and divide the available memory of the system between the various components. This division of the system memory creates exclusive partitions in the caches of the computing system based on the processor and cache hierarchy. The partitions may be used by different applications or by different sections of the same application to store accessed memory in cache for quick retrieval.
Type: Application
Filed: July 1, 2011
Publication date: January 3, 2013
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Alok Parikh, Amandeep Singh
-
Publication number: 20130007341
Abstract: In some embodiments, a non-volatile cache memory may include a segmented non-volatile cache memory configured to be located between a system memory and a mass storage device of an electronic system and a controller coupled to the segmented non-volatile cache memory, wherein the controller is configured to control utilization of the segmented non-volatile cache memory. The segmented non-volatile cache memory may include a file cache segment, the file cache segment to store complete files in accordance with a file cache policy, and a block cache segment, the block cache segment to store one or more blocks of one or more files in accordance with a block cache policy, wherein the block cache policy is different from the file cache policy.
Type: Application
Filed: June 26, 2012
Publication date: January 3, 2013
Inventors: Dale Juenemann, R. Scott Tetrick, Oscar Pinto
-
Patent number: 8312219
Abstract: Hybrid caching techniques and garbage collection using hybrid caching techniques are provided. A determination of a measure of a characteristic of a data object is performed, the characteristic being indicative of an access pattern associated with the data object. A selection of one caching structure, from a plurality of caching structures, is performed in which to store the data object based on the measure of the characteristic. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure and at least one processing operation is performed on the data object stored in the selected caching structure.
Type: Grant
Filed: March 2, 2009
Date of Patent: November 13, 2012
Assignee: International Business Machines Corporation
Inventors: Chen-Yong Cher, Michael K. Gschwind
-
Publication number: 20120278556
Abstract: Optimizing the cache-resident area where cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area of the same size as the physical capacity assigned to the LU. An LU that is a logical space resident in cache memory is configured from a set of pages acquired by dividing a pool volume, a physical space created by using a plurality of storage devices, into a predetermined size. When the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not initially acquired in the cache memory; instead, a cache capacity that is the same as the physical capacity allocated to a new page is acquired in the cache memory each time a page is newly allocated, and the new page is made resident in the cache memory.
Type: Application
Filed: July 5, 2012
Publication date: November 1, 2012
Inventor: Hideyuki KOSEKI
-
Patent number: 8291176
Abstract: The disclosed embodiments may relate to a protection domain group, which may include a memory region associated with a process. The protection domain group may also include a plurality of memory windows associated with the memory region. Also included may be a plurality of protection domains, each of which may correspond to a memory window. The protection domains may allow access to the memory region via a corresponding memory window.
Type: Grant
Filed: March 27, 2003
Date of Patent: October 16, 2012
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Jeffrey Hilland, David J. Garcia
-
Patent number: 8285971
Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, an instruction sequencing unit that fetches instructions for execution by the at least one execution unit, and an address generation accelerator. The address generation accelerator, responsive to an initiation signal received from the instruction sequencing unit, computes and outputs first and second effective addresses of operands of an operation.
Type: Grant
Filed: December 16, 2008
Date of Patent: October 9, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy
-
Patent number: 8281106
Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
Type: Grant
Filed: December 16, 2008
Date of Patent: October 2, 2012
Assignee: International Business Machines Corporation
Inventors: Ravi K. Arimilli, Balaram Sinharoy
-
Patent number: 8275969
Abstract: A data storage area of a data storage device is partitioned logically between a user storage area and a device storage area. Source data stored securely in the device storage area is copied as derivative data to the user storage area, or is used as a basis for creating derivative data stored in the user storage area, whenever the data storage device is initialized. In one embodiment, the data storage area is read-write and the device storage area has embodied thereon device system code, executed by a controller of the data storage device, for writing source data to the device storage area only if the source data satisfies a predetermined condition. Examples of derivative data include an autorun file, a volume label and user identification. Data from a host may be stored reversibly in the user storage area but must be stored securely in the device storage area.
Type: Grant
Filed: February 22, 2005
Date of Patent: September 25, 2012
Assignee: Sandisk IL Ltd.
Inventors: Dov Moran, Eyal Bychkov
-
Publication number: 20120226869
Abstract: When the storage capacity of a file server is expanded using an online storage service, the upper-limit constraint on file size imposed by the online storage service is eliminated and communication costs are reduced. A kernel module that provides logical volumes on the online storage service divides a file into fixed-length block files and stores and manages the block files, avoiding the upper-limit constraint on file size. When a READ/WRITE request is issued to a mounted file system, only the necessary block files are downloaded from the online storage service, based on an offset value and size information, to optimize the communication and reduce communication costs.
Type: Application
Filed: March 31, 2010
Publication date: September 6, 2012
Applicant: HITACHI SOLUTIONS, LTD.
Inventors: Yasuhiro Kirihata, Hideyuki Kashiwase
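A hedged sketch of that read path: a READ at (offset, size) downloads only the fixed-length block files it overlaps, then trims to the requested window. The 4 MiB block length is assumed, and download_block() is a hypothetical stand-in for the storage service API, not a real call.

    BLOCK = 4 * 1024 * 1024      # assumed fixed block-file length: 4 MiB

    def read(file_id, offset, size, download_block):
        first = offset // BLOCK
        last = (offset + size - 1) // BLOCK
        data = b"".join(download_block(file_id, i) for i in range(first, last + 1))
        start = offset - first * BLOCK            # trim to the requested window
        return data[start:start + size]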
-
Publication number: 20120215969
Abstract: According to one embodiment, a storage device includes: a backup unit configured to perform backup at the time of a power shutoff; a storage module configured to store information; a cache memory configured to perform caching of the storage module; and a controller configured to adjust a size of a cache to the cache memory according to exhaustion of the backup unit.
Type: Application
Filed: February 22, 2012
Publication date: August 23, 2012
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Masaaki Tamura, Masakazu Tsuruoka
-
Patent number: 8250305
Abstract: Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by the service processor and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line.
Type: Grant
Filed: March 19, 2008
Date of Patent: August 21, 2012
Assignee: International Business Machines Corporation
Inventors: Gary E. Strait, Deanna P. Dunn, Michael F. Fee, Pak-kin Mak, Robert J. Sonnelitter, III
-
Patent number: 8230196
Abstract: Example embodiments for configuring a non-volatile memory device may comprise configuring M physical partitions of the non-volatile memory into two or more banks, wherein the two or more banks respectively comprise one or more of the M physical partitions, and wherein at least a first of the M physical partitions comprises a first size and at least a second of the M physical partitions comprises a second size.
Type: Grant
Filed: May 28, 2009
Date of Patent: July 24, 2012
Assignee: Micron Technology, Inc.
Inventors: Emanuele Confalonieri, Corrado Villa
-
Patent number: 8219757
Abstract: In some embodiments, a processor-based system includes a processor, a system memory coupled to the processor, a mass storage device, a cache memory located between the system memory and the mass storage device, and code stored on the processor-based system to cause the processor-based system to utilize the cache memory. The code may be configured to cause the processor-based system to preferentially use only a selected size of the cache memory to store cache entries having less than or equal to a selected number of cache hits. Other embodiments are disclosed and claimed.
Type: Grant
Filed: September 30, 2008
Date of Patent: July 10, 2012
Assignee: Intel Corporation
Inventors: Glenn Hinton, Dale Juenemann, R. Scott Tetrick
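A hypothetical sketch of that policy: only a bounded slice of the cache holds entries with at most the selected number of hits, so rarely re-used data cannot crowd out proven entries. The slice size and hit threshold below are invented values.

    LOW_HIT_BUDGET = 16      # assumed: entries reserved for cold data
    HIT_THRESHOLD = 2        # assumed: "cold" means <= 2 hits so far

    cold, hot = {}, {}       # two logical regions of one cache

    def on_access(key, value, hits):
        if hits <= HIT_THRESHOLD:
            if len(cold) >= LOW_HIT_BUDGET:
                cold.pop(next(iter(cold)))      # cold region stays bounded
            cold[key] = value
        else:
            hot[key] = value                    # promoted past the threshold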
-
Publication number: 20120166730
Abstract: The present invention is directed to a circuit for managing data movement between an interface supporting the PLB6 bus protocol, an interface supporting the AMBA AXI bus protocol, and internal data arrays of a cache controller and/or on-chip memory peripheral. The circuit implements register file buffers for gathering data to bridge differences between the bus protocols and bus widths in a manner which addresses latency and performance concerns of the overall system.
Type: Application
Filed: December 27, 2010
Publication date: June 28, 2012
Applicant: LSI CORPORATION
Inventors: Judy M. Gehman, Jerome M. Meyer
-
Publication number: 20120166723
Abstract: An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions, and the information in one or more of the partitions is composed of data different from user data, including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition. Through this configuration, in a storage system with an upper limit on cache memory capacity, a large amount of control information can be used while access performance to the control information is maintained.
Type: Application
Filed: December 27, 2010
Publication date: June 28, 2012
Inventors: Akihiko Araki, Yusuke Nonaka
-
Patent number: 8190823
Abstract: An apparatus, system, and method are disclosed for deduplicating storage cache data. A storage cache partition table has at least one entry associating a specified storage address range with one or more specified storage partitions. A deduplication module creates an entry in the storage cache partition table wherein the specified storage partitions contain identical data to one another within the specified storage address range, thus requiring only one copy of the identical data to be cached in a storage cache. A read module accepts a storage address within a storage partition of a storage subsystem, locates an entry whose specified storage address range contains the storage address, and, if such an entry is found, determines whether the storage partition is among the one or more specified storage partitions.
Type: Grant
Filed: September 18, 2008
Date of Patent: May 29, 2012
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: Rod D. Waltermann, Mark Charles Davis
-
Publication number: 20120054442
Abstract: The present invention provides a method and apparatus for allocating space in a unified cache. The method may include partitioning the unified cache into a first portion of lines that only store copies of instructions retrieved from a memory and a second portion of lines that only store copies of data retrieved from the memory.
Type: Application
Filed: August 24, 2010
Publication date: March 1, 2012
Inventor: William L. Walker
-
Patent number: 8108609
Abstract: A hardware description language (HDL) design structure embodied on a machine-readable data storage medium includes elements that when processed in a computer aided design system generates a machine executable representation of a device for implementing dynamic refresh protocols for DRAM based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion; and a cache controller configured to assign incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines; wherein cache lines corresponding to data having a usage history below a defined frequency are assigned by the controller to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
Type: Grant
Filed: May 23, 2008
Date of Patent: January 31, 2012
Assignee: International Business Machines Corporation
Inventors: John E. Barth, Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
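The assignment rule is worth making explicit: frequently touched lines can live in the non-refreshable portion because their own accesses keep the DRAM cells alive, while cold lines need explicit refresh. A minimal sketch, with an assumed frequency cutoff:

    DEFINED_FREQUENCY = 10          # assumed accesses per refresh interval

    def assign_portion(usage_history):
        # usage_history: observed accesses per refresh interval for this line
        if usage_history >= DEFINED_FREQUENCY:
            return "non-refreshable"    # reuse itself retains the data
        return "refreshable"            # cold line: controller must refresh it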
-
Publication number: 20110320687
Abstract: Embodiments of the invention are directed to reducing write amplification in a cache with flash memory used as a write cache. An embodiment of the invention includes partitioning at least one flash memory device in the cache into a plurality of logical partitions. Each of the plurality of logical partitions is a logical subdivision of one of the at least one flash memory device and comprises a plurality of memory pages. Data are buffered in a buffer. The data includes data to be cached, and data to be destaged from the cache to a storage subsystem. Data to be cached are written from the buffer to the at least one flash memory device. A processor coupled to the buffer is provided with access to the data written to the at least one flash memory device from the buffer, and a location of the data written to the at least one flash memory device within the plurality of logical partitions. The data written to the at least one flash memory device are destaged from the buffer to the storage subsystem.
Type: Application
Filed: June 29, 2010
Publication date: December 29, 2011
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Wendy A. Belluomini, Binny S. Gill, Michael A. Ko
-
Patent number: 7970997
Abstract: A program section layout method capable of improving the space efficiency of a cache memory. A grouping unit groups program sections into section groups so that the total size of the program sections composing each section group does not exceed the cache memory size. A layout optimization unit optimizes the layout of section group storage regions by combining each section group with a program section that does not belong to any section group, or by combining section groups, while keeping the ordering relations of the program sections composing each section group.
Type: Grant
Filed: December 23, 2004
Date of Patent: June 28, 2011
Assignee: Fujitsu Semiconductor Limited
Inventor: Manabu Watanabe
-
Patent number: 7941585
Abstract: A RISC-type processor includes a main register file and a data cache. The data cache can be partitioned to include a local memory, the size of which can be dynamically changed on a cache block basis while the processor is executing instructions that use the main register file. The local memory can emulate an additional register file for the processor and can reside at a virtual address. The local memory can be further partitioned for prefetching data from a non-cacheable address to be stored/loaded into the main register file.
Type: Grant
Filed: December 17, 2004
Date of Patent: May 10, 2011
Assignee: Cavium Networks, Inc.
Inventors: David H. Asher, David A. Carlson, Richard E. Kessler
-
Patent number: 7877547
Abstract: A system, method and circuit for efficiently managing a cache storage device. A cache storage device may include a cache management module. The cache management module may be adapted to generate a management unit and to associate the management unit with new data that is to be written into the cache. The cache management module may be further adapted to assign two or more allocation units for each management unit, to store the new data in the cache. A cache management module may include a management unit module. The management unit module may be adapted to generate management units associated with predefined global cache management functions. The cache management module may further include an allocation unit module in communication with the management unit module. The allocation unit module may be adapted to assign allocation units for storing data written into the cache.
Type: Grant
Filed: May 6, 2005
Date of Patent: January 25, 2011
Assignee: International Business Machines Corporation
Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen, Shemer Schwartz
-
Patent number: 7831797
Abstract: A memory system is functionally designed so that, despite operation without an error correction device, memory chips of a memory module that are actually provided for error correction are concomitantly used for the data transfer. A control device is configured to receive, store and transfer data packets to and from a first and second set of memory chips. Transfer of an internal packet data from the control device to memory takes place such that a first record is stored in a second set of memory chips and additional records are stored in the first set of memory chips. In preferred embodiments, data is allocated in the second set of memory chips such that at least one additional transfer step takes place to the second set of memory chips compared with transfers to the first set of memory chips. In the additional transfer step(s), the first set of memory chips is masked from receiving data.
Type: Grant
Filed: September 27, 2007
Date of Patent: November 9, 2010
Assignee: Qimonda AG
Inventor: Hermann Ruckerbauer
-
Publication number: 20100268880
Abstract: Disclosed are a method, a system and a computer program product for operating a cache system. The cache system can include multiple cache lines, and a first cache line of the multiple cache lines can include multiple cache cells and a bus coupled to the multiple cache cells. In one or more embodiments, the bus can include a switch that is operable to receive a first control signal and to split the bus into first and second portions or aggregate the bus into a whole based on the first control signal. When the bus is split, a first cache cell and a second cache cell of the multiple cache cells are coupled to respective first and second portions of the bus. Data from the first and second cache cells can be selected through the respective portions of the bus and outputted through a port of the cache system.
Type: Application
Filed: April 15, 2009
Publication date: October 21, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ravi Kumar Arimilli, Donald W. Plass, William John Starke
-
Patent number: 7793048
Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment, a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
Type: Grant
Filed: September 9, 2008
Date of Patent: September 7, 2010
Assignee: International Business Machines Corporation
Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Publication number: 20100174863
Abstract: A system is described for providing scalable in-memory caching for a distributed database. The system may include a cache, an interface, a non-volatile memory and a processor. The cache may store a cached copy of data items stored in the non-volatile memory. The interface may communicate with devices and a replication server. The non-volatile memory may store the data items. The processor may receive an update to a data item from a device to be applied to the non-volatile memory. The processor may apply the update to the cache. The processor may generate an acknowledgement indicating that the update was applied to the non-volatile memory and may communicate the acknowledgment to the device. The processor may then communicate the update to a replication server. The processor may apply the update to the non-volatile memory upon receiving an indication that the update was stored by the replication server.
Type: Application
Filed: March 15, 2010
Publication date: July 8, 2010
Applicant: Yahoo! Inc.
Inventors: Brian F. Cooper, Adam Silberstein, Utkarsh Srivastava, Raghu Ramakrishnan, Rodrigo Fonseca