Partitioned Cache Patents (Class 711/129)
-
Patent number: 7555611
Abstract: A cache subsystem may comprise a multi-way set associative cache and a data memory that holds a contiguous block of memory defined by an address stored in a register. Local variables (e.g., Java local variables) may be stored in the data memory. The data memory preferably is adapted to store two groups of local variables: a first group comprising local variables associated with finished methods, and a second group comprising local variables associated with unfinished methods. Further, local variables are saved to, or fetched from, external memory upon a context change based on a threshold value differentiating the first and second groups. The threshold value may comprise a threshold address or an allocation bit associated with each of a plurality of lines forming the data memory.
Type: Grant
Filed: July 31, 2003
Date of Patent: June 30, 2009
Assignee: Texas Instruments Incorporated
Inventors: Serge Lasserre, Maija Kuusela, Gerard Chauvel
-
Publication number: 20090164730
Abstract: A cache that supports sub-socket partitioning is discussed. Specifically, the cache supports different quality of service levels and victim cache line selection for a cache miss operation. The different quality of service levels allow for programmable ceiling and floor usage thresholds, which in turn enable different techniques for victim cache line selection.
Type: Application
Filed: November 7, 2008
Publication date: June 25, 2009
Inventors: Ajay Harikumar, Tessil Thomas, Biju Puthur Simon
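One way such ceiling/floor thresholds could steer victim selection can be sketched as follows. This is an illustrative reconstruction, not the patented logic: the function name, the partition model, and the tie-breaking rule are all assumptions.

```python
def choose_victim_partition(usage, floors, ceilings, requester):
    """Pick the partition that donates a victim cache line on a miss.

    usage:     dict partition -> cache lines currently held
    floors:    dict partition -> guaranteed minimum lines (floor threshold)
    ceilings:  dict partition -> maximum lines allowed (ceiling threshold)
    requester: partition whose access missed
    """
    # A partition at its ceiling must recycle one of its own lines.
    if usage[requester] >= ceilings[requester]:
        return requester
    # Otherwise take the victim from whichever partition sits furthest
    # above its guaranteed floor usage.
    over_floor = {p: usage[p] - floors[p] for p in usage if usage[p] > floors[p]}
    if over_floor:
        return max(over_floor, key=over_floor.get)
    # Every partition is at or below its floor: fall back to the requester.
    return requester
```

The ceiling clause enforces the per-partition maximum, while the floor clause keeps low-priority partitions from being evicted down to nothing.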
-
Publication number: 20090164751
Abstract: Embodiments enable sub-socket partitioning that facilitates access among a plurality of partitions to a shared resource. A round robin arbitration policy allows each partition within a socket (each of which may utilize a different operating system) access to the shared resource, based at least in part on whether an assigned bandwidth parameter for each partition is consumed. Embodiments may further include support for virtual channels.
Type: Application
Filed: November 7, 2008
Publication date: June 25, 2009
Inventors: Ajay Harikumar, Tessil Thomas, Biju Puthur Simon
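A bandwidth-gated round robin of this kind can be sketched in a few lines. This is a hedged illustration of the general technique; the epoch-based accounting and all names are assumptions, not details from the filing.

```python
from collections import deque

def round_robin_grant(order, consumed, budget):
    """Grant the shared resource to the next partition in round-robin
    order whose assigned bandwidth budget is not yet consumed.

    order:    deque of partition ids in rotation order
    consumed: dict partition -> units used in the current epoch
    budget:   dict partition -> units allowed per epoch
    Returns the granted partition id, or None if every budget is spent.
    """
    for _ in range(len(order)):
        p = order[0]
        order.rotate(-1)          # advance the rotation pointer
        if consumed[p] < budget[p]:
            consumed[p] += 1      # charge one unit for this grant
            return p
    return None
```

Partitions that exhaust their budget are skipped until the epoch counters reset, so one partition's operating system cannot starve the others.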
-
Patent number: 7552284
Abstract: Methods for the treatment of cached objects are described. In one embodiment, management of a region of a cache is configured with an eviction policy plug-in. The eviction policy plug-in includes an eviction timing component and a sorting component. The eviction timing component includes code to implement an eviction timing method, which triggers eviction of an object from the region of cache. The sorting component includes code to implement a sorting method, which identifies an object that is eligible for eviction from said region of cache: an object cached in the region that has been used less frequently than the other objects cached there.
Type: Grant
Filed: December 28, 2004
Date of Patent: June 23, 2009
Assignee: SAP AG
Inventors: Petio G. Petev, Michael Wintergerst
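The split between a sorting method (pick the least frequently used object) and the region that holds the objects can be sketched as below. This is a minimal assumed reconstruction; the class and function names are hypothetical and not from the patent.

```python
def least_frequently_used(counts):
    """Sorting method: return the key of the object used less frequently
    than the others in the region.  counts: dict key -> use count."""
    return min(counts, key=counts.get)

class CountingRegion:
    """Tiny cache region that tracks per-object use counts and evicts the
    least frequently used entry when capacity is reached."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}
        self.values = {}

    def get(self, key):
        self.counts[key] += 1          # record a use
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            victim = least_frequently_used(self.counts)   # sorting component
            del self.values[victim], self.counts[victim]  # eviction
        self.values[key] = value
        self.counts.setdefault(key, 0)
```

Here eviction timing is simply "on insert into a full region"; the patent's plug-in design would let that trigger be swapped independently of the sorting method.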
-
Publication number: 20090157969
Abstract: A method, computer program product, and data processing system for managing an input/output buffer cache to prevent deadlocks are disclosed. In a preferred embodiment, automatic buffer cache resizing is performed whenever the number of free buffers in the buffer cache falls below a pre-defined threshold. This resizing adds a pre-defined number of additional buffers to the buffer cache, up to a pre-defined absolute maximum buffer cache size. To prevent deadlocks, an absolute minimum number of free buffers is reserved to ensure that sufficient free buffers for performing a buffer cache resize are always available. In the event that the buffer cache becomes congested and cannot be resized further, threads whose buffer demands cannot be immediately satisfied are blocked until sufficient free buffers become available.
Type: Application
Filed: December 18, 2007
Publication date: June 18, 2009
Inventors: Matthew J. Harding, Mitchell P. Harding, Joshua D. Miers
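The interplay of the three thresholds (grow trigger, absolute maximum, reserved free minimum) can be shown with a short sketch. The constants and function names are illustrative assumptions, not values from the application.

```python
FREE_THRESHOLD = 4   # resize when free buffers drop below this
GROW_STEP = 8        # buffers added per resize
ABSOLUTE_MAX = 64    # hard cap on total buffer cache size
RESERVED_FREE = 2    # free buffers always kept back for the resize itself

def maybe_resize(total, free):
    """Return the new total size: grow by GROW_STEP when free buffers fall
    below FREE_THRESHOLD, but never exceed ABSOLUTE_MAX."""
    if free < FREE_THRESHOLD and total < ABSOLUTE_MAX:
        return min(total + GROW_STEP, ABSOLUTE_MAX)
    return total

def may_allocate(free, requested):
    """Grant a thread's buffer request only if it leaves the reserved free
    buffers untouched; otherwise the thread must block, which is what
    prevents a resize-deadlock when the cache is congested."""
    return free - requested >= RESERVED_FREE
```

A thread whose `may_allocate` check fails sleeps until buffers are released, mirroring the blocking behaviour the abstract describes.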
-
Patent number: 7549022
Abstract: Avoiding cache-line sharing in virtual machines can be implemented in a system running a host and multiple guest operating systems. The host facilitates hardware access by a guest operating system and oversees memory access by the guest. Because cache lines are associated with memory pages that are spaced at regular intervals, the host can direct guest memory access to only select memory pages, and thereby restrict guest cache use to one or more cache lines. Other guests can be restricted to different cache lines by directing memory access to a separate set of memory pages.
Type: Grant
Filed: July 21, 2006
Date of Patent: June 16, 2009
Assignee: Microsoft Corporation
Inventor: Brandon S. Baker
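The regular page-to-cache-set spacing described above is the basis of the well-known page-coloring technique, which can be sketched as follows. The cache geometry (512 KiB, 8-way, 4 KiB pages) and both function names are assumptions for illustration only.

```python
PAGE_SIZE = 4096
CACHE_SIZE = 512 * 1024   # assumed 512 KiB physically indexed cache
WAYS = 8
# Pages whose frame numbers differ by NUM_COLORS map to the same cache sets.
NUM_COLORS = CACHE_SIZE // (WAYS * PAGE_SIZE)

def page_color(phys_addr):
    """Color of the page holding phys_addr: pages of the same color
    compete for the same group of cache sets."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

def pages_for_guest(color, num_pages):
    """Physical page frame numbers a host could hand to one guest so that
    its memory touches only the cache sets of a single color."""
    return [color + i * NUM_COLORS for i in range(num_pages)]
```

Giving each guest frames of a distinct color keeps the guests out of each other's cache sets, which is the isolation property the patent targets.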
-
Patent number: 7546422
Abstract: A method and apparatus for the synchronization of distributed caches. More particularly, the present invention relates to cache memory systems, and in particular to a hierarchical caching protocol suitable for use with distributed caches, including use within a caching input/output (I/O) hub.
Type: Grant
Filed: August 28, 2002
Date of Patent: June 9, 2009
Assignee: Intel Corporation
Inventors: Robert T George, Mathew A Lambert, Tony S Rand, Robert G Blankenship, Kenneth C Creta
-
Patent number: 7546426
Abstract: A storage includes: host interface units; file control processors, which receive a file input/output request and translate it into a data input/output request; file control memories, which store translation control data; groups of disk drives; disk control processors; disk interface units, which connect the groups of disk drives and the disk control processors; cache memories; and inter-processor communication units. The storage logically partitions these devices to cause the partitioned devices to operate as two or more virtual NASs.
Type: Grant
Filed: December 21, 2006
Date of Patent: June 9, 2009
Assignee: Hitachi, Ltd.
Inventors: Kentaro Shimada, Akiyoshi Hashimoto
-
Publication number: 20090144506
Abstract: A method for implementing dynamic refresh protocols for DRAM-based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines. Cache lines corresponding to data having a usage history below a defined frequency are assigned to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
Type: Application
Filed: December 4, 2007
Publication date: June 4, 2009
Inventors: John E. Barth, Jr., Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
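The assignment rule in the abstract reduces to a threshold test per line, as sketched below. The threshold value and names are assumptions; the intuition is that frequently accessed lines are re-read (and thus recharged) often enough not to need controller refresh.

```python
USAGE_THRESHOLD = 4   # accesses per interval; an assumed illustrative value

def assign_portion(access_count):
    """Choose the DRAM cache portion for an incoming line from its usage
    history: hot lines (at or above the threshold) go to the
    non-refreshable portion, cold lines to the refreshable one."""
    return "non-refreshable" if access_count >= USAGE_THRESHOLD else "refreshable"

def partition_lines(histories):
    """Split {line: access_count} into the two portions of the cache."""
    portions = {"refreshable": [], "non-refreshable": []}
    for line, count in histories.items():
        portions[assign_portion(count)].append(line)
    return portions
```

Skipping refresh on the hot portion is what saves the refresh power and bandwidth that motivate the scheme.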
-
Patent number: 7543042
Abstract: A method for accessing an internal dynamic cache of a Websphere-type Application Server (WAS) from an external component, including the step of establishing a software interface component within the WAS. The software interface component can receive a request from the external component. The request can include an identifier for a cache object and at least one dictate concerning the cache object. The external component can lack privileges to directly execute programmatic actions upon the cache object. The software interface component can trigger a programmatic action in accordance with the dictate. The programmatic action utilizes the internal dynamic cache, involves the cache object, and can be performed local to the WAS.
Type: Grant
Filed: April 28, 2004
Date of Patent: June 2, 2009
Assignee: International Business Machines Corporation
Inventors: Victor S. Moore, Wendi L. Nusbickel, Ricardo Dos Santos
-
Patent number: 7539820
Abstract: Embodiments of the invention allow cache control optimized for the processing characteristics of application programs, and thus improve data transfer efficiency. In one embodiment, a disk device includes a disk; a cache for temporarily saving data that was read in from the disk, and data that was transferred from a host; and a controller for controlling data transfer between the cache and the host and between the cache and the disk. An independent cache area can be set in the cache for each command type, so that application programs with different data-processing policies each have their own area, and efficient read-ahead that exploits the access characteristics of those application programs can be realized by controlling the manner of read-ahead for each command type.
Type: Grant
Filed: April 19, 2005
Date of Patent: May 26, 2009
Assignee: Hitachi Global Storage Technologies Netherlands B.V.
Inventor: Yukie Hiratsuka
-
Publication number: 20090129138
Abstract: It is an object of the present invention to provide a semiconductor integrated circuit having a chip layout that reduces line length to achieve faster processing. A cache comprises a TAG memory module and a cache data memory module. The cache data memory module is divided into first and second cache data memory modules, which are disposed on both sides of the TAG memory module, and the input/output circuits of a data TLB face the input/output circuit of the TAG memory module and the input/output circuits of the first and second cache data memory modules across a bus area, reducing line length to achieve faster processing.
Type: Application
Filed: October 16, 2008
Publication date: May 21, 2009
Applicant: Panasonic Corporation
Inventor: Masaya Sumita
-
Patent number: 7536692
Abstract: In general, in one aspect, the disclosure describes a processor that includes an instruction store to store instructions of at least a portion of at least one program and multiple engines coupled to the shared instruction store. The engines provide multiple execution threads and include an instruction cache to cache a subset of the at least the portion of the at least one program from the instruction store, with different respective portions of the engine's instruction cache being allocated to different respective ones of the engine threads.
Type: Grant
Filed: November 6, 2003
Date of Patent: May 19, 2009
Assignee: Intel Corporation
Inventors: Sridhar Lakshmanamurthy, Wilson Y. Liao, Prashant R. Chandra, Jeen-Yuan Miin, Yim Pun
-
Patent number: 7536510
Abstract: A cache read request is received at a cache comprising a plurality of data arrays, each of the data arrays comprising a plurality of ways. In response to the cache read request, cache line data is selected from the most recently used way of each of the plurality of data arrays, and a first data of the received cache line data is selected from the most recently used way of the cache. Execution of an instruction is stalled if the data identified by the cache read request is not present in the cache line data from the most recently used way of the cache. A second data, from the most recently used way of one of the plurality of data arrays other than the most recently used data array, is selected as comprising the data identified by the cache read request. The second data is provided for use during the execution of the instruction.
Type: Grant
Filed: October 3, 2005
Date of Patent: May 19, 2009
Assignee: Advanced Micro Devices, Inc.
Inventor: Stephen P. Thompson
-
Publication number: 20090119666
Abstract: The present invention provides an apparatus for cooperative distributed task management in a storage subsystem with multiple controllers using cache locking. The present invention distributes a task across a set of controllers acting in a cooperative rather than a master/slave nature to perform discrete components of the subject task on an as-available basis. This minimizes the amount of time required to perform incidental data manipulation tasks, thus reducing the duration of instances of degraded system performance.
Type: Application
Filed: January 5, 2009
Publication date: May 7, 2009
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian Dennis McKean, Randall Alan Pare
-
Patent number: 7529891
Abstract: Balanced prefetching automatically balances the benefits of prefetching data that has not been accessed recently against the benefits of caching recently accessed data, and can be applied to most types of structured data without needing application-specific details or hints. Balanced prefetching is performed in applications in a computer system, such as storage-centric applications, including file systems and databases. Balanced prefetching exploits the structure of the data being prefetched, providing superior application throughput. For a fixed amount of memory, it is automatically and dynamically determined how much memory should be devoted to prefetching.
Type: Grant
Filed: September 19, 2005
Date of Patent: May 5, 2009
Assignee: Microsoft Corporation
Inventors: Chandramohan A. Thekkath, John P. MacCormick, Lidong Zhou, Nicholas Charles Murphy
-
Patent number: 7526608
Abstract: Methods and apparatus provide a processor for operative connection to a main memory for storing data, the processor being operable to request at least some of the data for use; and a local memory in operative connection with the processor such that the data may be stored therein for use by the processor, the local memory not being a hardware cache memory, wherein the processor is operable to execute application program interface code that configures the local memory to include at least one software invoked cache memory area therein.
Type: Grant
Filed: May 24, 2005
Date of Patent: April 28, 2009
Assignee: Sony Computer Entertainment Inc.
Inventor: Masahiro Yasue
-
Patent number: 7523262
Abstract: An apparatus and method provide persistent data during a user session on a networked computer system. A global data cache is divided into three sections: trusted, protected, and unprotected. An authorization mechanism stores and retrieves authorization data from the trusted section of the global data cache. A common session manager stores and retrieves data from the protected and unprotected sections of the global data cache. Using the authorization mechanism, software applications may verify that a user is authorized without prompting the user for authorization information. Using the common session manager, software applications may store and retrieve data to and from the global data cache, allowing the sharing of data during a user session. After the user session terminates, the data in the global data cache corresponding to the user session is invalidated.
Type: Grant
Filed: March 13, 2008
Date of Patent: April 21, 2009
Assignee: International Business Machines Corporation
Inventor: James Casazza
-
Patent number: 7512951
Abstract: A method for designing a time-sliced and multi-threaded architecture comprises the steps of conducting a thorough analysis of a range of applications and building a specific processor to accommodate the range of applications. In one embodiment, the thorough analysis includes extracting real time aspects from each application, determining optimal granularity in the architecture based on the real time aspects of each application, and adjusting the optimal granularity based on acceptable context switching overhead.
Type: Grant
Filed: July 31, 2001
Date of Patent: March 31, 2009
Assignee: Infineon Technologies AG
Inventors: Keith Rieken, Joel D. Medlock, David M. Holmes
-
Publication number: 20090083489
Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
Type: Application
Filed: December 1, 2008
Publication date: March 26, 2009
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
-
Patent number: 7509440
Abstract: A programmable controller includes a CPU unit, a communication unit and peripheral units connected together through an internal bus. The communication unit has a bus master function and includes a cache memory for recording IO data stored in the memory of an input-output unit. When a message is received, it is judged whether the IO data stored in the memory of the input-output unit specified by this message have been updated or not. If the data are not updated, a response is created based on the IO data stored in the cache memory. If the data are updated, the input-output unit is accessed, updated IO data are obtained, and a response is created based on the obtained IO data.
Type: Grant
Filed: April 27, 2007
Date of Patent: March 24, 2009
Assignee: OMRON Corporation
Inventor: Shinichiro Kawaguchi
-
Publication number: 20090070532
Abstract: A system and method for using a single test case to test each sector within multiple congruence classes is presented. A test case generator builds a test case for accessing each sector within a congruence class. Since a congruence class spans multiple congruence pages, the test case generator builds the test case over multiple congruence pages in order for the test case to test the entire congruence class. During design verification and validation, a test case executor modifies a congruence class identifier (e.g., patches a base register), which forces the test case to test a specific congruence class. By incrementing the congruence class identifier after each execution of the test case, the test case executor is able to test each congruence class in the cache using a single test case.
Type: Application
Filed: September 11, 2007
Publication date: March 12, 2009
Inventors: Vinod Bussa, Shubhodeep Roy Choudhury, Manoj Dusanapudi, Sunil Suresh Hatti, Shakti Kapoor, Batchu Naga Venkata Satyanarayana
-
Publication number: 20090063775
Abstract: The present invention provides a system and a method for a cache partitioning technique for application tasks based on the scheduling information in multiprocessors. Cache partitioning is performed dynamically based on the information of the pattern of task scheduling provided by the task scheduler (405). Execution behavior of the application tasks is obtained from the task scheduler (405) and partitions are allocated (415) to only a subset of application tasks, which are going to be executed in the upcoming clock cycles. The present invention will improve the cache utilization by avoiding unnecessary reservation of the cache partitions for the executing application tasks during the entire duration of their execution, and hence an effective utilization of the cache is achieved.
Type: Application
Filed: September 20, 2006
Publication date: March 5, 2009
Inventors: Jeroen Molema, Wilko Westerhof, Bartele Henrik De Vries, Reinier Niels Lap, Olaf Martin De Jong, Bart-Jan Zwart, Johannes Rogier De Vrind
-
Patent number: 7500058
Abstract: A computer system acquires mapping information of data storage regions in respective layers from a layer of DBMSs to a layer of storage subsystems, grasps correspondence between DB data and storage positions of each storage subsystem on the basis of the mapping information, decides a cache partitioning in each storage subsystem on the basis of the correspondence, and sets the cache partitioning for each storage subsystem. When cache allocation in the DBMS or the storage subsystem needs to be changed, information for estimating the cache effect due to the change in cache allocation acquired by the DBMS is used for estimating the cache effect in the storage subsystem.
Type: Grant
Filed: June 5, 2007
Date of Patent: March 3, 2009
Assignee: Hitachi, Ltd.
Inventors: Kazuhiko Mogi, Norifumi Nishikawa
-
Publication number: 20090049248
Abstract: A method and computer system for reducing the wiring congestion, required real estate, and access latency in a cache subsystem with a sectored and sliced lower cache by re-configuring sector-to-slice allocation and the lower cache addressing scheme. With this allocation, sectors having discontiguous addresses are placed within the same slice, and a reduced-wiring scheme is possible between two levels of lower caches based on this re-assignment of the addressable sectors within the cache slices. Additionally, the lower cache effective address tag is re-configured such that the address fields previously allocated to identifying the sector and the slice are switched relative to each other's location within the address tag. This re-allocation of the address bits enables direct slice addressing based on the indicated sector.
Type: Application
Filed: August 16, 2007
Publication date: February 19, 2009
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke, Derek Edward Williams, Phillip G. Williams
-
Patent number: 7493607
Abstract: A system, for use with a compiler architecture framework, includes performing a statically speculative compilation process to extract and use speculative static information, encoding the speculative static information in an instruction set architecture of a processor, and executing a compiled computer program using the speculative static information, wherein executing supports static speculation driven mechanisms and controls.
Type: Grant
Filed: July 9, 2002
Date of Patent: February 17, 2009
Assignee: BlueRISC Inc.
Inventor: Csaba Andras Moritz
-
Patent number: 7490200
Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first cache directory to access the first cache array slice while using a second cache directory to access the second cache array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In the illustrative embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. An address tag associated with a load request is transmitted from the processor core with a designated bit that associates the address tag with only one of the cache array slices whose corresponding directory determines whether the address tag matches a currently valid cache entry.
Type: Grant
Filed: February 10, 2005
Date of Patent: February 10, 2009
Assignee: International Business Machines Corporation
Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
-
Publication number: 20090037660
Abstract: A time-based system and method are provided for controlling the management of cache memory. The method accepts a segment of data, and assigns a cache lock-time with a time duration to the segment. If a cache line is available, the segment is stored in cache. The method protects the segment stored in the cache line from replacement until the expiration of the lock-time. Upon the expiration of the lock-time, the cache line is automatically made available for replacement. An available cache line is located by determining that the cache line is empty, or by determining that the cache line is available for a replacement segment. In one aspect, the cache lock-time is assigned to the segment by accessing a list with a plurality of lock-times having a corresponding plurality of time durations, and selecting from the list. In another aspect, the lock-time durations are configurable by the user.
Type: Application
Filed: August 4, 2007
Publication date: February 5, 2009
Applicant: Applied Micro Circuits Corporation
Inventor: Mark Fairhurst
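The lock-time mechanism above can be sketched with a small model. This is an assumed illustration (the class name, the integer clock, and the API shape are not from the filing); it shows a line becoming replaceable automatically once its lock-time expires.

```python
class TimedLockCache:
    """Toy cache in which each stored segment is protected from
    replacement until its lock-time expires."""
    def __init__(self, num_lines):
        self.lines = [None] * num_lines     # stored segments
        self.unlock_at = [0] * num_lines    # per-line lock expiry time

    def find_available(self, now):
        """A line is available if it is empty or its lock-time expired."""
        for i, segment in enumerate(self.lines):
            if segment is None or now >= self.unlock_at[i]:
                return i
        return None

    def store(self, segment, lock_duration, now):
        """Store a segment and protect it until now + lock_duration.
        Returns the line index used, or None if every line is locked."""
        i = self.find_available(now)
        if i is not None:
            self.lines[i] = segment
            self.unlock_at[i] = now + lock_duration
        return i
```

Because availability is re-derived from the clock on every lookup, no explicit "unlock" operation is needed; expiry makes the line replaceable on its own, as the abstract describes.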
-
Patent number: 7484043
Abstract: A multiprocessor computer system has a plurality of processing nodes which use processor state information to determine which coherent caches in the system are required to examine a coherency transaction produced by a single originating processor's storage request. A node of the computer has dynamic coherency boundaries such that the hardware uses only a subset of the total processors in a large system for a single workload at any specific point in time and can optimize the cache coherency as the supervisor software or firmware expands and contracts the number of processors which are being used to run any single workload. Multiple instances of a node can be connected with a second level controller to create a large multiprocessor system. The node controller uses the mode bits to determine which processors must receive any given transaction that is received by the node controller.
Type: Grant
Filed: June 25, 2003
Date of Patent: January 27, 2009
Assignee: International Business Machines Corporation
Inventors: Thomas J. Heller, Jr., Richard I. Baum, Michael Ignatowski, James W. Rymarczyk
-
Patent number: 7484047
Abstract: A terminal apparatus and method for controlling access by a processor and coprocessor to data buses that connect memories.
Type: Grant
Filed: August 13, 2004
Date of Patent: January 27, 2009
Assignee: Samsung Electronics Co., Ltd.
Inventor: Chae-Whan Lim
-
Publication number: 20090024798
Abstract: The invention provides a method of storing data in a computing device, the method including the steps of creating a memory file system in non-pageable kernel memory of the computing device, writing data to the memory file system and transferring the written data to a pageable memory space allocated to a user process running on the computing device. An advantage of such a design is that, initially, the data of the memory based file system can be kept in the non-pageable kernel memory, minimising the need to perform context switches. However, the data can be transferred to pageable memory when necessary, such that the amount of kernel memory used by the file system can be minimised.
Type: Application
Filed: July 16, 2008
Publication date: January 22, 2009
Applicant: Hewlett-Packard Development Company, L.P.
Inventor: Alban Kit Kupar War Lyndem
-
Patent number: 7478218
Abstract: A runtime code manipulation system is provided that supports code transformations on a program while it executes. The runtime code manipulation system uses code caching technology to provide efficient and comprehensive manipulation of an application running on an operating system and hardware. The code cache includes a system for automatically keeping the code cache at an appropriate size for the current working set of the running application.
Type: Grant
Filed: February 17, 2006
Date of Patent: January 13, 2009
Assignee: VMware, Inc.
Inventors: Derek L. Bruening, Saman P. Amarasinghe
-
Patent number: 7475190
Abstract: Methods for quickly accessing data residing in a cache of one processor, by another processor, while avoiding lengthy accesses to main memory are provided. A portion of the cache may be placed in a lock set mode by the processor in which it resides. While in the lock set mode, this portion of the cache may be accessed directly by another processor without lengthy "backing" writes of the accessed data to main memory.
Type: Grant
Filed: October 8, 2004
Date of Patent: January 6, 2009
Assignee: International Business Machines Corporation
Inventors: Russell D. Hoover, Eric O. Mejdrich, Sandra S. Woodward
-
Patent number: 7475194
Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
Type: Grant
Filed: January 2, 2008
Date of Patent: January 6, 2009
Assignee: International Business Machines Corporation
Inventors: Jos Accapadi, Andrew Dunshea, Greg R. Mewhinney, Mysore Sathyanaranyana Srinivas
-
Publication number: 20090006758
Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
Type: Application
Filed: September 9, 2008
Publication date: January 1, 2009
Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Publication number: 20090006759
Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
Type: Application
Filed: September 9, 2008
Publication date: January 1, 2009
Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Patent number: 7469318
Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. The first data bus can be one of a plurality of data busses in a first data bus set, and the second data bus can be one of a plurality of data busses in a second data bus set.
Type: Grant
Filed: February 10, 2005
Date of Patent: December 23, 2008
Assignee: International Business Machines Corporation
Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
-
Patent number: 7467280Abstract: A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics may include, but are not limited to, the data structures used by the execution entity, the expected reference pattern of the execution entity, the type of execution entity, and the heat and power consumption of the execution entity. Examples of cache attributes that may be reconfigured include, but are not limited to, the associativity of the cache memory, the amount of the cache memory available to store data, the coherence granularity of the cache memory, and the line size of the cache memory.Type: GrantFiled: July 5, 2006Date of Patent: December 16, 2008Assignee: International Business Machines CorporationInventors: Xiaowei Shen, Balaram Sinharoy, Robert B. Tremaine, Robert W. Wisniewski
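A characteristics-to-configuration mapping of the kind described above could be sketched as follows. All keys, thresholds, and attribute values here are invented for illustration; the patent does not specify a policy table.

```python
# Hedged sketch: map analyzed execution-entity characteristics to a new
# cache configuration. Names and values are illustrative assumptions.

def reconfigure(characteristics):
    """characteristics: dict of analyzed properties of the execution entity."""
    config = {"associativity": 8, "line_size": 64, "capacity_kb": 512}
    if characteristics.get("reference_pattern") == "streaming":
        # Streaming references gain little from high associativity but
        # benefit from longer lines (spatial locality).
        config["associativity"] = 2
        config["line_size"] = 128
    if characteristics.get("power_constrained"):
        # Shrink the active capacity to cut heat and power consumption.
        config["capacity_kb"] = 256
    return config
```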
-
Publication number: 20080307160Abstract: Methods and associated structures for utilizing write-back cache management modes for local cache memory of disk drives coupled to a storage controller while maintaining data integrity of the data transferred to the local cache memories of affected disk drives. In one aspect hereof, a state machine model of managing cache blocks in a storage controller cache memory maintains blocks in the storage controller's cache memory in a new state until verification is sensed that the blocks have been successfully stored on the persistent storage media of the affected disk drives. Responsive to failure or other reset of the disk drive, the written cache blocks may be re-written from the copy maintained in the cache memory of the storage controller. In another aspect, an alternate controller's cache memory may also be used to mirror the cache blocks from the primary storage controller's cache memory as additional data integrity assurance.Type: ApplicationFiled: August 14, 2008Publication date: December 11, 2008Inventor: Donald R. Humlicek
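The "new state until verification" idea above is essentially a small state machine per cache block. The sketch below uses assumed state names (the patent's terminology is not given): a block handed to a drive's write-back cache stays written-but-unverified, and a drive reset before verification re-dirties it so it can be re-written from the controller's copy.

```python
# Illustrative state machine for controller-cache blocks written to a
# disk drive operating in write-back mode. State names are assumptions.

DIRTY, WRITTEN_UNVERIFIED, CLEAN = "dirty", "written_unverified", "clean"

class ControllerCacheBlock:
    def __init__(self, data):
        self.data = data
        self.state = DIRTY

    def sent_to_drive(self):
        # Data handed to the drive's volatile local cache; the controller
        # keeps its copy until persistence is confirmed.
        self.state = WRITTEN_UNVERIFIED

    def media_write_verified(self):
        # Drive confirmed the block reached persistent media.
        self.state = CLEAN

    def drive_reset(self):
        # Drive failed or reset before verification: mark dirty again so
        # the block is re-written from the controller's retained copy.
        if self.state == WRITTEN_UNVERIFIED:
            self.state = DIRTY
```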
-
Patent number: 7464223Abstract: A storage system having a cluster configuration that prevents load from concentrating on a particular storage node and enhances access performance is disclosed. The storage system is provided with plural storage adaptors, each having a cache memory for storing data read or written according to I/O requests from a host and a device for holding the data stored in the cache memory; means for connecting an external storage, having a logical device that handles the read/written data and a cache memory, to the storage adaptors; means for monitoring the usage of each cache memory of the plural storage adaptors; and means for referring to the monitored usage information and selecting a storage adaptor such that the usage of each cache memory is equalized. The logical device of the external storage is controlled by the selected storage adaptor via the connection means.Type: GrantFiled: January 16, 2007Date of Patent: December 9, 2008Assignee: Hitachi, Ltd.Inventors: Yasuo Watanabe, Yasutomo Yamamoto, Kazuhisa Fujimoto
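The selection step above reduces to picking the adaptor whose cache is least used. A minimal sketch, assuming usage is reported as a fraction of cache memory in use (the patent does not specify the metric):

```python
# Sketch: equalize cache usage by routing work to the least-loaded adaptor.

def select_adaptor(usage_by_adaptor):
    """usage_by_adaptor: {adaptor_id: fraction of cache memory in use}.

    Returns the adaptor with the lowest current cache usage, so that
    repeated selections tend to equalize usage across adaptors.
    """
    return min(usage_by_adaptor, key=usage_by_adaptor.get)
```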
-
Patent number: 7457922Abstract: In a multiprocessor non-uniform cache architecture system, multiple CPU cores share one non-uniform cache that can be partitioned into multiple cache portions with varying access latencies. A placement prediction mechanism predicts whether a cache line should remain in a cache portion or migrate to another cache portion. The prediction mechanism maintains one or more prediction counters for each cache line. A prediction counter can be incremented or decremented by a constant or a variable determined by some runtime information, or set to its maximum or minimum value. An effective placement prediction mechanism can reduce average access latencies without causing cache thrashing among cache portions.Type: GrantFiled: November 20, 2004Date of Patent: November 25, 2008Assignee: International Business Machines CorporationInventor: Xiaowei Shen
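The per-line prediction counter described above is a saturating counter. The sketch below uses an assumed 3-bit width and migration threshold; the patent allows variable increments and direct saturation, which are omitted here for brevity.

```python
# Saturating per-line placement predictor (widths/thresholds are
# illustrative assumptions, not values from the patent).

class PlacementPredictor:
    def __init__(self, max_value=7, migrate_threshold=5):
        self.counter = 0
        self.max_value = max_value
        self.migrate_threshold = migrate_threshold

    def remote_access(self):
        # Accesses from a distant core push the line toward migration;
        # the counter saturates at max_value.
        self.counter = min(self.counter + 1, self.max_value)

    def local_access(self):
        # Local hits argue for keeping the line in place; floor at zero.
        self.counter = max(self.counter - 1, 0)

    def should_migrate(self):
        return self.counter >= self.migrate_threshold
```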
-
Patent number: 7454580Abstract: A data processing system includes a processor core and a memory subsystem. The memory subsystem includes a store queue having a plurality of entries, where each entry includes an address field for holding the target address of a store operation, a data field for holding data for the store operation, and a virtual sync field indicating a presence or absence of a synchronizing operation associated with the entry. The memory subsystem further includes a store queue controller that, responsive to receipt at the memory subsystem of a sequence of operations including a synchronizing operation and a particular store operation, places a target address and data of the particular store operation within the address field and data field, respectively, of an entry in the store queue and sets the virtual sync field of the entry to represent the synchronizing operation, such that a number of store queue entries utilized is reduced.Type: GrantFiled: April 25, 2006Date of Patent: November 18, 2008Assignee: International Business Machines CorporationInventors: Ravi K. Arimilli, Thomas M. Capasso, Robert A. Cargnoni, Guy L. Guthrie, Hugh Shen, William J. Starke
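The space saving above comes from folding a synchronizing operation into the following store's virtual sync field instead of giving the sync its own slot. A minimal model, with invented field and method names:

```python
# Minimal store-queue model: a sync + store pair consumes one entry,
# with the sync recorded in the store's virtual sync flag. Names are
# assumptions for illustration.

class StoreQueue:
    def __init__(self):
        self.entries = []            # each entry: (address, data, vsync)
        self._pending_sync = False

    def enqueue_sync(self):
        # Record the barrier without consuming a queue entry.
        self._pending_sync = True

    def enqueue_store(self, address, data):
        # The store absorbs any pending barrier via its vsync field,
        # so a sync followed by a store uses one entry instead of two.
        self.entries.append((address, data, self._pending_sync))
        self._pending_sync = False
```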
-
Patent number: 7454571Abstract: In some embodiments, a computer system comprises a cache configured to cache data. The computer system is configured to monitor the cache and data that is potentially cacheable in the cache to accumulate a plurality of statistics useable to identify which of a plurality of data lifecycle patterns apply to the data. The computer system is also configured to modify a cache configuration of the cache dependent on which of the plurality of data lifecycle patterns apply to the data.Type: GrantFiled: May 4, 2004Date of Patent: November 18, 2008Assignee: Sun Microsystems, Inc.Inventor: Akara Sucharitakul
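The statistics-to-pattern-to-configuration pipeline above might look like the following sketch. The pattern names, thresholds, and configuration table are all invented; the patent only says that accumulated statistics identify lifecycle patterns that then drive reconfiguration.

```python
# Hedged sketch: accumulate access statistics, classify a lifecycle
# pattern, and pick a cache configuration. All names are assumptions.
from collections import Counter

def classify_lifecycle(accesses):
    """accesses: list of (key, op) pairs with op in {"read", "write"}."""
    stats = Counter(op for _, op in accesses)
    reads, writes = stats["read"], stats["write"]
    if writes == 0:
        return "read_only"    # e.g. reference data: cache aggressively
    if reads == 0:
        return "write_once"   # e.g. logs: caching buys nothing
    return "read_write"

def configure_cache(pattern):
    return {
        "read_only":  {"policy": "cache_all",  "ttl_s": 3600},
        "write_once": {"policy": "bypass",     "ttl_s": 0},
        "read_write": {"policy": "write_back", "ttl_s": 60},
    }[pattern]
```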
-
Publication number: 20080282036Abstract: Techniques for fragmenting a file or a collection of media data are disclosed. According to one aspect of the techniques, a file pertaining to a title is fragmented into a header and several tails or segments. The header is a continuous portion of the file, while the segments are respective parts of the remaining portion of the file. The header is seeded in substantially all boxes, and none, one, or more of the segments are distributed to each of the boxes in service. When a title is ordered, the header is played back instantly while the segments, if not locally available, are fetched continuously from other boxes that hold them.Type: ApplicationFiled: March 12, 2007Publication date: November 13, 2008Inventor: Prasanna Ganesan
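The header/segment split above is straightforward to sketch. The 10% header fraction is an assumed parameter; the publication does not fix the header size.

```python
# Illustrative fragmentation: a continuous leading header (seeded on
# every box) plus equal-sized segments of the remainder.

def fragment(data, n_segments, header_fraction=0.10):
    """Split `data` into (header, segments); header_fraction is assumed."""
    header_len = int(len(data) * header_fraction)
    header, rest = data[:header_len], data[header_len:]
    seg_len = -(-len(rest) // n_segments)  # ceiling division
    segments = [rest[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]
    return header, segments
```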
-
Method and apparatus for invalidating cache lines during direct memory access (DMA) write operations
Patent number: 7451248Abstract: A method and apparatus for invalidating cache lines during direct memory access (DMA) write operations are disclosed. Initially, a multi-cache line DMA request is issued by a peripheral device. The multi-cache line DMA request is snooped by a cache memory. A determination is then made as to whether or not the cache memory includes a copy of data stored in the system memory locations to which the multi-cache line DMA request is directed. In response to a determination that the cache memory includes a copy of data stored in the system memory locations to which the multi-cache line DMA request is directed, multiple cache lines within the cache memory are consecutively invalidated.Type: GrantFiled: February 9, 2005Date of Patent: November 11, 2008Assignee: International Business Machines CorporationInventors: George W. Daly, Jr., James S. Fields, Jr.
-
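The snoop-and-invalidate flow in the entry above can be modeled with a set of valid line tags. The 64-byte line size and the tag representation are assumptions for illustration.

```python
# Sketch: snoop a multi-cache-line DMA write and consecutively invalidate
# every cached line in the target range. Line size assumed to be 64 B.

LINE_SIZE = 64

def snoop_dma_write(cache_tags, base_addr, n_lines):
    """cache_tags: set of valid line-aligned addresses held in the cache.

    Returns the addresses invalidated, in consecutive order.
    """
    invalidated = []
    for i in range(n_lines):
        addr = base_addr + i * LINE_SIZE
        if addr in cache_tags:        # cache holds a copy of this line
            cache_tags.discard(addr)  # invalidate it
            invalidated.append(addr)
    return invalidated
```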
Patent number: 7447843Abstract: An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volume, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively.Type: GrantFiled: April 21, 2005Date of Patent: November 4, 2008Assignee: Hitachi, Ltd.Inventors: Atushi Ishikawa, Yuko Matsui
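A minimal model of the exclusive cache partitioning above: each logical volume gets its own partition with its own capacity, and hits, misses, and evictions stay inside that partition. The LRU policy and per-partition sizes are illustrative assumptions.

```python
# Cache logically divided into partitions exclusively assigned to
# logical volumes; eviction never crosses a partition boundary.
from collections import OrderedDict

class PartitionedCache:
    def __init__(self, partition_sizes):
        # partition_sizes: {volume_id: max_entries}, tunable per
        # application program (the per-volume performance tuning above).
        self.parts = {v: OrderedDict() for v in partition_sizes}
        self.sizes = dict(partition_sizes)

    def access(self, volume, block, data):
        part = self.parts[volume]        # only this volume's partition
        if block in part:
            part.move_to_end(block)      # LRU hit
            return part[block]
        if len(part) >= self.sizes[volume]:
            part.popitem(last=False)     # evict within the partition only
        part[block] = data
        return data
```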
-
Publication number: 20080270705Abstract: One embodiment of the present method and apparatus for application-specific dynamic cache placement includes grouping sets of data in a cache memory system into two or more virtual partitions and processing a load/store instruction in accordance with the virtual partitions, where the load/store instruction specifies at least one of the virtual partitions to which the load/store instruction is assigned.Type: ApplicationFiled: June 30, 2008Publication date: October 30, 2008Inventors: Krishnan Kunjunny Kailas, Rajiv Alazhath Ravindran, Zehra Sura
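The indexing implied above can be sketched as: cache sets are grouped into virtual partitions, and the partition named by the load/store instruction restricts which sets the address may map to. The set counts and hash are illustrative assumptions.

```python
# Sketch: map an address into a set belonging to the virtual partition
# named by the load/store instruction. Parameters are assumptions.

def set_index(addr, partition, partitions, sets_per_partition=4):
    """Return the cache set for `addr`, constrained to `partition`."""
    base = partitions.index(partition) * sets_per_partition
    return base + (addr // 64) % sets_per_partition  # 64 B lines assumed
```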
-
Publication number: 20080270704Abstract: The embodiments of the invention provide a method and apparatus for a cache arrangement that improves RAID I/O operations. More specifically, a method begins by partitioning a data object into a plurality of data blocks and creating one or more parity data blocks from the data object. Next, the data blocks and the parity data blocks are stored within storage nodes. Following this, the method caches data blocks within a partitioned cache, wherein the partitioned cache includes a plurality of cache partitions. The cache partitions are located within the storage nodes, wherein each cache partition is smaller than the data object. Moreover, the partitioned cache caches data blocks only in parity storage nodes, wherein the parity storage nodes comprise a parity storage field. Thus, caching within the partitioned cache avoids caching data blocks within storage nodes lacking the parity storage field.Type: ApplicationFiled: April 30, 2007Publication date: October 30, 2008Inventors: Dingshan He, Deepak R. Kenchammana-Hosekote
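The partition-and-cache rule above can be sketched in two steps: split the object into data blocks plus a parity block, then mark only parity-holding nodes as eligible to cache data blocks. The single XOR parity block and the node layout are simplifying assumptions (the publication allows one or more parity blocks).

```python
# Sketch: stripe an object into data blocks + XOR parity, and restrict
# data-block caching to parity storage nodes. Assumptions as noted above.
from functools import reduce

def make_stripes(obj, n_data):
    """Split `obj` into n_data equal blocks (zero-padded) plus XOR parity."""
    size = -(-len(obj) // n_data)  # ceiling division
    blocks = [obj[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(n_data)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)
    return blocks, parity

def cache_plan(nodes_with_parity, all_nodes):
    # Only parity-holding nodes cache data blocks; the others never do.
    return {n: (n in nodes_with_parity) for n in all_nodes}
```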
-
Patent number: 7444475Abstract: It is an object of the present invention to provide a semiconductor integrated circuit having a chip layout that reduces line length to achieve faster processing. A cache comprises a TAG memory module and a cache data memory module. The cache data memory module is divided into first and second cache data memory modules which are disposed on both sides of the TAG memory module, and input/output circuits of a data TLB are opposed to the input/output circuit of the TAG memory module and the input/output circuits of the first and second cache data memory modules across a bus area to reduce the line length to achieve faster processing.Type: GrantFiled: November 16, 2006Date of Patent: October 28, 2008Assignee: Matsushita Electric Industrial Co., Ltd.Inventor: Masaya Sumita
-
Publication number: 20080263282Abstract: An object is to ensure efficient access to a memory whose writing process is slow. There is provided a storage device for caching data read from a main memory and data to be written in the main memory, comprising a cache memory having a plurality of cache segments, one or more cache segments holding data matching data in the main memory being set in a protected state to protect the cache segments from rewriting, an upper limit of the number of the one or more cache segments being a predetermined reference number; and a cache controller that, in response to a write cache miss, allocates a cache segment selected from those cache segments which are not in the protected state to cache write data and writes the write data in the selected cache segment.Type: ApplicationFiled: February 26, 2008Publication date: October 23, 2008Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Nobuyuki Harada, Takeo Nakada
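The protected-segment policy above can be modeled as follows: at most `reference_number` clean segments may be protected, and a write miss allocates its victim only from the unprotected segments. The class and method names are invented for illustration.

```python
# Sketch of the protected-segment write-cache policy. Names are assumed.

class SegmentedWriteCache:
    def __init__(self, n_segments, reference_number):
        self.segments = [None] * n_segments   # cached data or None
        self.protected = set()                # indices in protected state
        self.reference_number = reference_number

    def protect(self, idx):
        # Protect a segment whose data matches main memory, honoring the
        # predetermined upper limit (the reference number).
        if len(self.protected) < self.reference_number:
            self.protected.add(idx)
            return True
        return False

    def write_miss(self, data):
        # Allocate a victim only from segments not in the protected state.
        for idx in range(len(self.segments)):
            if idx not in self.protected:
                self.segments[idx] = data
                return idx
        raise RuntimeError("no unprotected segment available")
```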