Organization And Technology Of Caches (EPO) Patents (Class 711/E12.041)
  • Publication number: 20100332762
    Abstract: Methods and apparatus relating to directory cache allocation that is based on snoop response information are described. In one embodiment, an entry in a directory cache may be allocated for an address in response to a determination that another caching agent has a copy of the data corresponding to the address. Other embodiments are also disclosed.
    Type: Application
    Filed: June 30, 2009
    Publication date: December 30, 2010
    Inventors: Adrian C. Moga, Malcolm H. Mandviwalla, Stephen R. Van Doren
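    Illustrative sketch: the allocation rule this abstract describes amounts to gating directory-cache fills on the snoop outcome. The C below is a minimal sketch under assumed names and sizes (DIR_SLOTS, snoop_says_shared, a direct-mapped layout); none of it comes from the patent itself.

      #include <stdbool.h>
      #include <stdint.h>

      #define DIR_SLOTS 256

      /* One directory-cache entry: an address known to be cached by
       * some other agent. */
      struct dir_entry { uint64_t addr; bool valid; };
      static struct dir_entry dir_cache[DIR_SLOTS];

      /* Stand-in for the snoop response path; stubbed for the sketch. */
      static bool snoop_says_shared(uint64_t addr) { return (addr & 1) != 0; }

      /* Allocate a directory-cache entry only when the snoop response
       * shows another caching agent holds a copy of the line; lines
       * private to the requester never consume directory-cache space. */
      void maybe_allocate(uint64_t addr)
      {
          if (!snoop_says_shared(addr))
              return;                      /* no sharer: skip allocation */
          struct dir_entry *e = &dir_cache[(addr >> 6) % DIR_SLOTS];
          e->addr  = addr;                 /* direct-mapped; conflicts evict */
          e->valid = true;
      }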
  • Publication number: 20100302077
    Abstract: The present invention reduces the number of writes to a main memory to increase the useful life of the main memory. To reduce the number of writes to the main memory, data to be written is written to a cache line in a lowest-level cache memory and in one or more higher-level cache memories. If the cache line in the lowest-level cache memory is full, the number of used cache lines in the lowest-level cache reaches a threshold, or there is a need for an empty entry in the lowest-level cache, a processor or a hardware unit compresses the content of the cache line and stores the compressed content in the main memory. The present invention also provides an LZB algorithm allowing decompression of data from an arbitrary location in a compressed data stream, with a bound on the number of characters that need to be processed before a character or string of interest is reached.
    Type: Application
    Filed: June 2, 2009
    Publication date: December 2, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bulent Abali, Mohammad Banikazemi, Dan E. Poff
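    Illustrative sketch: one way to read the abstract's flow is write-to-cache always, write-to-memory only at a threshold, and then only in compressed form. The C below uses invented names (FLUSH_THRESH, cached_write) and substitutes a trivial memcpy for the patent's LZB compressor, which the abstract does not specify.

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      #define LINE_BYTES   64
      #define CACHE_LINES  1024
      #define FLUSH_THRESH 900   /* hypothetical high-water mark */

      struct line { uint8_t data[LINE_BYTES]; int used; };
      static struct line llc[CACHE_LINES];   /* lowest-level cache */
      static int used_lines;

      /* Placeholder for the patent's LZB compressor: any compressor
       * with bounded decode lookback fills the same role. */
      static size_t compress(const uint8_t *in, size_t n, uint8_t *out)
      { memcpy(out, in, n); return n; }

      /* Counting calls here is how the claimed write reduction would
       * be observed; stubbed out for the sketch. */
      static void write_to_main_memory(const uint8_t *buf, size_t n)
      { (void)buf; (void)n; }

      /* Writes land in the cache line; main memory is touched only
       * when the used-line count reaches the threshold, and then with
       * compressed content. (Victim selection is simplified: the line
       * just written is the one flushed.) */
      void cached_write(int idx, const uint8_t *src)
      {
          memcpy(llc[idx].data, src, LINE_BYTES);
          if (!llc[idx].used) { llc[idx].used = 1; used_lines++; }
          if (used_lines >= FLUSH_THRESH) {
              uint8_t out[LINE_BYTES];
              size_t n = compress(llc[idx].data, LINE_BYTES, out);
              write_to_main_memory(out, n);
              llc[idx].used = 0;
              used_lines--;
          }
      }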
  • Publication number: 20100293332
    Abstract: In response to a request including a state object, which can indicate a state of an enumeration of a cache, the enumeration can be continued by using the state object to identify and send cache data. Also, an enumeration of cache units can be performed by traversing a data structure that includes object nodes, which correspond to cache units, and internal nodes. An enumeration state stack can indicate a current state of the enumeration, and can include state nodes that correspond to internal nodes in the data structure. Additionally, a cache index data structure can include a higher level table and a lower level table. The higher level table can have a leaf node pointing to the lower level table, and the lower level table can have a leaf node pointing to one of the cache units. Moreover, the lower level table can be associated with a tag.
    Type: Application
    Filed: May 21, 2009
    Publication date: November 18, 2010
    Applicant: Microsoft Corporation
    Inventors: Muralidhar Krishnaprasad, Sudhir Mohan Jorwekar, Sharique Muhammed, Subramanian Muralidhar, Anil K. Nori
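    Illustrative sketch: the resumable enumeration the abstract describes can be modeled as a cursor over a two-level index, returned to the caller as the state object and passed back to continue. All structure sizes and names below (HI, LO, enum_state) are assumptions, not Microsoft's implementation.

      #include <stddef.h>

      #define HI 16   /* entries in the higher-level table  */
      #define LO 16   /* entries in each lower-level table  */

      struct cache_unit { const char *key; const char *val; };
      /* Higher-level table slots lead to lower-level tables; lower-level
       * slots point at cache units (or NULL when empty). */
      static struct cache_unit *index_tbl[HI][LO];

      /* The "state object": enough position to resume an enumeration in
       * a later request exactly where the previous one stopped. */
      struct enum_state { int hi, lo; };

      /* Copy up to max units into out, starting from *st; returns the
       * count and leaves *st pointing at the next unvisited slot. */
      int enumerate(struct enum_state *st, struct cache_unit *out, int max)
      {
          int n = 0;
          for (; st->hi < HI; st->hi++, st->lo = 0)
              for (; st->lo < LO; st->lo++) {
                  struct cache_unit *u = index_tbl[st->hi][st->lo];
                  if (u == NULL)
                      continue;
                  if (n == max)
                      return n;   /* batch full: state resumes here */
                  out[n++] = *u;
              }
          return n;
      }

    A caller would loop: call enumerate, ship the batch plus the state object to the client, and on the client's next request call enumerate again with the returned state.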
  • Patent number: 7836256
    Abstract: One embodiment of the present method and apparatus for application-specific dynamic cache placement includes grouping sets of data in a cache memory system into two or more virtual partitions and processing a load/store instruction in accordance with the virtual partitions, where the load/store instruction specifies at least one of the virtual partitions to which the load/store instruction is assigned.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: November 16, 2010
    Assignee: International Business Machines Corporation
    Inventors: Krishnan Kunjunny Kailas, Rajiv Alazhath Ravindran, Zehra Sura
  • Patent number: 7818505
    Abstract: In accordance with some embodiments of the present invention, there is provided a cache management module for managing a cache memory device, comprising: a groups management module adapted to define groups of allocation units in accordance with at least one operative criterion, and to create a new group of allocation units whenever it is determined that, in accordance with the at least one operative criterion, none of the one or more existing groups is appropriate to include an allocation unit; and a replacement procedure module adapted to manage the cache in terms of groups of allocation units rather than in terms of discrete allocation units.
    Type: Grant
    Filed: October 4, 2005
    Date of Patent: October 19, 2010
    Assignee: International Business Machines Corporation
    Inventors: Ofir Zohar, Yaron Revah, Haim Helman, Dror Cohen, Shemer Schwartz
  • Publication number: 20100228909
    Abstract: A method for managing data storage is described. The method includes receiving data from an external host at a peripheral storage device, detecting a file system type of the external host, and adapting a caching policy for transmitting the data to a memory accessible by the storage device, where the caching policy is based on the detected file system type. The detection of the file system type can be based on the received data, for example on its size. In some implementations, the detection can be based on searching the memory for indicators that are associated with a unique file system type. Adapting the caching policy can reduce the number of data transmissions to the memory. The detected file system type can be a file allocation table (FAT) system type.
    Type: Application
    Filed: May 24, 2010
    Publication date: September 9, 2010
    Applicant: APPLE INC.
    Inventors: Michael J. Cornwell, Christopher P. Dudte, Kenneth L. Herman
  • Patent number: 7793044
    Abstract: In accordance with one embodiment, an enhanced chip multiprocessor permits an L1 cache to request ownership of a data line from a shared L2 cache. A determination is made whether to deny or grant the request for ownership based on the sharing of the data line. In one embodiment, the sharing of the data line is determined from an enhanced L2 cache directory entry associated with the data line. If ownership of the data line is granted, the current data line is passed from the shared L2 to the requesting L1 cache and an associated enhanced L1 cache directory entry and the enhanced L2 cache directory entry are updated to reflect the L1 cache ownership of the data line. Consequently, updates of the data line by the L1 cache do not go through the shared L2 cache, thus reducing transaction pressure on the shared L2 cache.
    Type: Grant
    Filed: January 16, 2007
    Date of Patent: September 7, 2010
    Assignee: Oracle America, Inc.
    Inventors: Lawrence A. Spracklen, Yuan C. Chou, Santosh G. Abraham
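    Illustrative sketch: the grant/deny decision reduces to a check of the enhanced L2 directory entry's sharer set. The bitmap-plus-owner layout below is an assumption for illustration; the patent's actual directory format is not given in the abstract.

      #include <stdbool.h>
      #include <stdint.h>

      /* Enhanced L2 directory entry for one data line. */
      struct l2_dir_entry {
          uint32_t sharers;   /* bitmap: one bit per L1 cache        */
          int      owner;     /* owning L1 id, or -1 when L2 owns it */
      };

      /* Grant L1 ownership only when no *other* L1 shares the line;
       * once granted, that L1 updates the line without going through
       * the shared L2, relieving transaction pressure on it. */
      bool request_ownership(struct l2_dir_entry *e, int l1_id)
      {
          uint32_t others = e->sharers & ~(1u << l1_id);
          if (others != 0)
              return false;           /* line is shared: deny ownership   */
          e->sharers |= 1u << l1_id;  /* requester holds the current data */
          e->owner = l1_id;           /* record ownership in the L2 entry */
          return true;
      }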
  • Patent number: 7761662
    Abstract: A cache controller is connected to a processor and a main memory. The cache controller is also connected to a cache memory that can read and write at a higher speed than the main memory. The cache memory is provided with a plurality of cache lines, each of which includes a tag area storing an address on the main memory, a capacity area storing a capacity value of a cache block, and the cache block itself. When the processor issues a read request to the main memory, the cache controller checks whether the requested data is present in the cache memory. A cache capacity determination unit determines a capacity value for the cache block and supplies it to the capacity area.
    Type: Grant
    Filed: December 10, 2008
    Date of Patent: July 20, 2010
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Koichi Fujisaki, Hideo Shimizu
  • Publication number: 20100179865
    Abstract: Music can be broadcast from a radio station and recorded onto a cache of a personal electronic device, such as a portable digital music player. The recording can occur such that there is segmenting of music into different cache portions based upon classification. Instead of playing music from the radio station, music can be played from the cache to ensure high quality and desirable variety. Different rules can be used to govern which music is played as well as how music should be removed from the cache. In addition, targeted advertisements can be used that relate to the music in the cache as well as a user location.
    Type: Application
    Filed: January 9, 2009
    Publication date: July 15, 2010
    Applicant: QUALCOMM Incorporated
    Inventors: Patrik N. Lundqvist, Guilherme K. Hoefel, Robert S. Daley, Jack B. Steenstra
  • Publication number: 20100161873
    Abstract: An apparatus having a memory and a controller is disclosed. The memory may be configured to (i) store a plurality of cache lines, each of the cache lines comprising a plurality of locations including a respective end location, and (ii) access a particular one of the cache lines identified by a cache address signal. The controller may be configured to (i) buffer a plurality of line pointers, each of the line pointers identifying a respective boundary one of the locations in one of the cache lines, and (ii) generate the cache address signal in response to a processor address signal hitting a given one of the locations residing between the respective boundary location and the respective end location.
    Type: Application
    Filed: December 18, 2008
    Publication date: June 24, 2010
    Inventors: Yair Orbach, Nahum N. Vishne, Assaf Rachlevski
  • Patent number: 7743211
    Abstract: A storage system 1 includes: plural protocol transformation units 10 that transform, to a protocol used within the system, a read/write protocol of data exchanged with servers 3 or hard disk groups 2; plural cache control units 21 that include cache memory units 111 storing data read/written with the servers 3 or the hard disk groups 2 and that control the cache memory units 111; and an interconnection network 31 that connects the protocol transformation units 10 and the cache control units 21. In this storage system 1, the plural cache control units 21 are divided into plural control clusters 70, control of the cache memory units 111 is independent inside each control cluster, and a system management unit 60 that manages, as a single system, the plural protocol transformation units 10 and the plural control clusters 70 is connected to the interconnection network 31.
    Type: Grant
    Filed: June 20, 2008
    Date of Patent: June 22, 2010
    Inventors: Kazuhisa Fujimoto, Mutsumi Hosoya, Kentaro Shimada, Akira Yamamoto, Naoko Iwami, Yasutomo Yamamoto
  • Publication number: 20100100604
    Abstract: A cache configuration management system is provided that lightens the workload of estimating a cache capacity in a virtualization apparatus and/or of cache assignment. In a storage system having application servers, storage devices, a virtualization apparatus for letting the storage devices be distinctly recognizable as virtualized storages, and a storage management server, the storage management server predicts a response time of the virtualization apparatus with respect to an application server from the cache configurations and access performances of the virtualization apparatus and storage devices. It then evaluates the presence or absence of an internal-cache assignment to a virtual volume and a predictive performance value based on the capacity to be assigned, thereby judging the cache capacity within the virtualization apparatus and estimating an optimal cache capacity, which enables preparation of an internal cache configuration change plan.
    Type: Application
    Filed: January 9, 2009
    Publication date: April 22, 2010
    Inventors: Noriki Fujiwara, Nobuo Beniyama, Hiroshi Nojima
  • Publication number: 20100095067
    Abstract: Methods, apparatus, and products are provided for caching web page elements in accordance with display locations of the elements, including: maintaining, by a web browser in accordance with a cache retention policy, a local cache of previously displayed web page elements in dependence upon previous display locations of the elements, including maintaining a cache retention score for each locally cached element; and displaying, by the web browser, a previously displayed web page, including displaying one or more of the locally cached elements.
    Type: Application
    Filed: October 14, 2008
    Publication date: April 15, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ravi K. Kosaraju, William G. Pagan
  • Publication number: 20100082906
    Abstract: In some embodiments, a processor-based system includes a processor, a system memory coupled to the processor, a mass storage device, a cache memory located between the system memory and the mass storage device, and code stored on the processor-based system to cause the processor-based system to utilize the cache memory. The code may be configured to cause the processor-based system to preferentially use only a selected size of the cache memory to store cache entries having less than or equal to a selected number of cache hits. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 30, 2008
    Publication date: April 1, 2010
    Inventors: Glenn Hinton, Dale Juenemann, R. Scott Tetrick
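    Illustrative sketch: the "selected size for entries with few hits" policy can be pictured as splitting the index space, with invented numbers (COLD_REGION, HOT_HITS) standing in for the patent's selected size and hit-count cutoff.

      #include <stdint.h>

      #define CACHE_ENTRIES 4096
      #define COLD_REGION    512   /* hypothetical "selected size"     */
      #define HOT_HITS         4   /* hypothetical hit-count threshold */

      /* Entries with few hits are confined to a small slice of the
       * cache, so rarely reused data cannot crowd proven-hot data out
       * of the remaining capacity. */
      int slot_for(uint64_t tag, uint32_t hits)
      {
          if (hits <= HOT_HITS)
              return (int)(tag % COLD_REGION);   /* cold entries stay confined */
          return COLD_REGION +
                 (int)(tag % (CACHE_ENTRIES - COLD_REGION));
      }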
  • Publication number: 20100049920
    Abstract: A storage system includes a backend storage unit for storing electronic information; a controller unit for controlling reading and writing to the backend storage unit; and at least one of a cache and a non-volatile storage for storing the electronic information during at least one of the reading and the writing; the controller unit executing machine readable and machine executable instructions including instructions for: testing if a frequency of non-volatile storage full condition has occurred one of above and below an upper threshold frequency value and a lower threshold frequency value; if the frequency of the condition has exceeded a threshold frequency value, then calculating a new size; calculating an expected average response time for the new size; comparing actual response time to the expected response time; and one of adjusting and not adjusting a size of the non-volatile storage to minimize the response time.
    Type: Application
    Filed: August 20, 2008
    Publication date: February 25, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lee C. LaFrese, Christopher M. Sansone, Dana F. Scott, Yan Xu, Olga Yiparaki
  • Publication number: 20100037024
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Application
    Filed: August 5, 2008
    Publication date: February 11, 2010
    Applicant: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
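    Illustrative sketch: one textbook way to serve both granularities is to fold the word-in-block bits into the bank number, so both 64-byte and 8-byte request streams spread across banks. The XOR mapping below is a stand-in of my own; the abstract does not disclose Convey's actual interleave function.

      #include <stdint.h>

      #define NBANKS      8
      #define BLOCK_SHIFT 6   /* 64-byte cache blocks   */
      #define WORD_SHIFT  3   /* 8-byte sub-block words */

      /* With this mapping, the 8 words of one 64-byte block land on 8
       * distinct banks (a cache-block access engages all banks in
       * parallel), and consecutive 8-byte accesses at common strides
       * also spread across banks rather than serializing on one. */
      unsigned bank_of(uint64_t addr)
      {
          uint64_t block = addr >> BLOCK_SHIFT;
          uint64_t word  = (addr >> WORD_SHIFT) & 0x7;   /* word within block */
          return (unsigned)((block ^ word) % NBANKS);
      }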
  • Publication number: 20090300288
    Abstract: Systems and methods for pipelined synchronization in a write-combining cache are described herein. An embodiment to transmit data to a memory to enable pipelined synchronization of a cache includes obtaining a plurality of synchronization events for transactions with said memory, calculating one or more matches between said events and said data stored in one or more cache-lines of said cache, storing event time stamps of events associated with said matches, generating one or more priority values based on said event time stamps, and concurrently transmitting said data to said memory based on said priority values.
    Type: Application
    Filed: May 28, 2008
    Publication date: December 3, 2009
    Inventors: Laurent Lefebvre, Michael Mantor, Robert Hankinson
  • Publication number: 20090292880
    Abstract: A cache memory system controlled by an arbiter includes a memory unit having a cache memory whose capacity is changeable, and an invalidation processing unit that requests invalidation of data stored at a position where invalidation is performed when the capacity of the cache memory is changed in accordance with a change instruction. The invalidation processing unit includes an increasing/reducing processing unit that sets an index to be invalidated in accordance with a capacity before change and a capacity after change and requests the arbiter to invalidate the set index, and an index converter that selects either an index based on the capacity before change or an index based on the capacity after change associated with an access address from the arbiter, and the capacity of the cache memory can be changed while maintaining the number of ways of the cache memory.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 26, 2009
    Inventor: Hiroyuki Usui
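    Illustrative sketch: with the way count fixed and tags holding full block addresses, a capacity change only needs to invalidate lines whose set index differs between the old and new capacity, which is the before/after index comparison the abstract describes. The geometry and names below are assumed.

      #include <stdbool.h>
      #include <stdint.h>

      #define WAYS     4
      #define MAX_SETS 1024

      struct way { uint64_t tag; bool valid; };   /* tag = full block address */
      static struct way sets[MAX_SETS][WAYS];
      static unsigned num_sets = MAX_SETS;        /* current capacity, in sets */

      /* Lookups always index with the current capacity... */
      unsigned index_of(uint64_t block) { return (unsigned)(block % num_sets); }

      /* ...so on a resize (grow or shrink, new_sets <= MAX_SETS) only
       * lines that future lookups could no longer reach must be
       * invalidated; the number of ways never changes. */
      void resize_to(unsigned new_sets)
      {
          for (unsigned s = 0; s < num_sets; s++)
              for (int w = 0; w < WAYS; w++) {
                  struct way *l = &sets[s][w];
                  if (l->valid && l->tag % new_sets != s)
                      l->valid = false;   /* unreachable under the new index */
              }
          num_sets = new_sets;
      }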
  • Publication number: 20090240888
    Abstract: Security is increased for a hand-held data processing device with communication functionality, where such a device includes an access-ordered memory cache relating to communications carried out by the device. The hand-held data processing device has a locked state that is entered when the device receives or initiates a trigger. On occurrence of the trigger to enter the locked state, the memory cache is reordered so as to disrupt the access-ordering of the cache, obscuring device traffic information and thus increasing the security of the device in the locked state.
    Type: Application
    Filed: June 1, 2009
    Publication date: September 24, 2009
    Applicant: RESEARCH IN MOTION LIMITED
    Inventors: Michael K. Brown, Herbert A. Little, Michael S. Brown
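    Illustrative sketch: the reordering step only has to destroy the recency information, not the cached data. A Fisher-Yates shuffle of the access-order index, as below, is one way to do that; the patent does not commit to a particular reordering.

      #include <stdlib.h>

      #define N_ENTRIES 64

      /* Access-ordered index: order[0] is the most recently used entry.
       * An observer of this ordering could infer recent device traffic. */
      static int order[N_ENTRIES];

      void reorder_on_lock(void)
      {
          /* Fisher-Yates shuffle; rand() stands in for whatever entropy
           * source the device actually has. The cached data itself is
           * untouched, so only the traffic-revealing order is lost. */
          for (int i = N_ENTRIES - 1; i > 0; i--) {
              int j = rand() % (i + 1);
              int t = order[i];
              order[i] = order[j];
              order[j] = t;
          }
      }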
  • Publication number: 20090228658
    Abstract: A management method is provided for a cache memory of a storage apparatus that includes at least one volume for storing data to be accessed from a computer via a network, and a cache memory in which an area for holding the data to be stored in each volume is allocated per volume. The method includes the steps of: referring to a relation between predetermined volumes; and allocating an area in which the data to be stored in a volume are held to said volume, on the basis of the relation between said volumes.
    Type: Application
    Filed: May 19, 2009
    Publication date: September 10, 2009
    Inventors: Takayuki Nagai, Masayuki Yamamoto, Masayasu Asano
  • Publication number: 20090204768
    Abstract: A runtime code manipulation system is provided that supports code transformations on a program while it executes. The runtime code manipulation system uses code caching technology to provide efficient and comprehensive manipulation of an application running on an operating system and hardware. The code cache includes a system for automatically keeping the code cache at an appropriate size for the current working set of the running application.
    Type: Application
    Filed: December 30, 2008
    Publication date: August 13, 2009
    Inventors: Derek L. Bruening, Saman P. Amarasinghe
  • Publication number: 20090198911
    Abstract: According to method of data processing in a multiprocessor data processing system, in response to a processor request to modify a target granule of a target cache line of data containing multiple granules, a processing unit originates on an interconnect of the multiprocessor data processing system a data-claim-partial request that requests permission to promote only the target granule of the target cache line to a unique copy with an intent to modify the target granule. In response to a combined response to the data-claim-partial request indicating success (the combined response representing a system-wide response to the data-claim-partial-request), the processing unit promotes only the target granule of the target cache line to a unique copy by updating a coherency state of the target granule and retaining a coherency state of at least one other granule of the target cache line.
    Type: Application
    Filed: February 1, 2008
    Publication date: August 6, 2009
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Jerry D. Lewis, Warren E. Maule
  • Publication number: 20090182947
    Abstract: Embodiments of the present invention provide methods and systems for tuning the size of the cache. In particular, when a page fault occurs, non-resident page data is checked to determine if that page was previously accessed. If the page is found in the non-resident page data, an inter-reference distance for the faulted page is determined and the distance of the oldest resident page is determined. The size of the cache may then be tuned based on comparing the inter-reference distance of the newly faulted page relative to the distance of the oldest resident page.
    Type: Application
    Filed: March 20, 2009
    Publication date: July 16, 2009
    Applicant: RED HAT, INC.
    Inventor: Henri Han van Riel
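    Illustrative sketch: the comparison in the abstract is between how long ago the faulted page was evicted and how long the oldest resident page has sat unused. The C below invents the bookkeeping interfaces (was_resident, oldest_resident_age) and stubs them; it mirrors the idea, not Red Hat's code.

      #include <stdbool.h>
      #include <stdint.h>

      static uint64_t access_clock;   /* ticks once per eviction         */
      static long     cache_pages;    /* current cache size, tuned below */

      /* Stand-ins for non-resident page tracking; stubbed for the sketch. */
      static bool was_resident(uint64_t page, uint64_t *evicted_at)
      { (void)page; (void)evicted_at; return false; }
      static uint64_t oldest_resident_age(void) { return 0; }

      void on_page_fault(uint64_t page)
      {
          uint64_t evicted_at;
          if (!was_resident(page, &evicted_at))
              return;                 /* first touch: nothing to learn    */
          uint64_t refault_distance = access_clock - evicted_at;
          if (refault_distance < oldest_resident_age())
              cache_pages++;   /* page would have survived in a bigger cache */
          else
              cache_pages--;   /* reuse is too far out; cache may shrink     */
      }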
  • Publication number: 20090172449
    Abstract: Disclosed herein are approaches to reducing a guardband (margin) used for minimum voltage supply (Vcc) requirements for memory such as cache.
    Type: Application
    Filed: December 26, 2007
    Publication date: July 2, 2009
    Inventors: Ming Zhang, Chris Wilkerson, Greg Taylor, Randy J. Aksamit, James Tschanz
  • Publication number: 20090157979
    Abstract: A method, system, and computer program product for target computer processor unit (CPU) determination during cache injection using I/O hub/chipset resources are provided. The method includes creating a cache injection indirection table on the input/output (I/O) hub or chipset. The cache injection indirection table includes fields for address or address range, CPU identifier, and cache type. In response to receiving an input/output (I/O) transaction, the hub/chipset reads the address in an address field of the I/O transaction, looks up the address in the cache injection indirection table, and injects the address and data of the I/O transaction to a target cache associated with a CPU as identified in the CPU identifier field when, in response to the look up, the address is present in the address field of the cache injection indirection table.
    Type: Application
    Filed: December 18, 2007
    Publication date: June 18, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas A. Gregg, Rajaram B. Krishnamurthy
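    Illustrative sketch: the indirection table is a small associative map from address ranges to a CPU and cache level, consulted per I/O transaction. The row layout and names below are assumptions; the abstract only fixes the three fields (address or range, CPU identifier, cache type).

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      enum cache_type { CACHE_L1, CACHE_L2, CACHE_L3 };

      /* One row of the cache injection indirection table kept on the
       * I/O hub/chipset: an address range mapped to a target CPU. */
      struct inj_row { uint64_t lo, hi; int cpu; enum cache_type level; };

      #define ROWS 8
      static struct inj_row table[ROWS];
      static int nrows;

      /* Stand-in for the hardware injection path; stubbed here. */
      static void inject(int cpu, enum cache_type lvl, uint64_t addr,
                         const void *data, size_t len)
      { (void)cpu; (void)lvl; (void)addr; (void)data; (void)len; }

      /* On an inbound I/O transaction: read its address, look it up in
       * the indirection table, and inject the payload into the
       * identified CPU's cache on a hit; otherwise fall through to a
       * plain memory write. */
      bool on_io_transaction(uint64_t addr, const void *data, size_t len)
      {
          for (int i = 0; i < nrows; i++)
              if (addr >= table[i].lo && addr < table[i].hi) {
                  inject(table[i].cpu, table[i].level, addr, data, len);
                  return true;    /* injected into the target cache  */
              }
          return false;           /* no entry: normal memory write   */
      }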
  • Publication number: 20090157978
    Abstract: A method, system, and computer program product for target computer processor unit (CPU) determination during cache injection using input/output (I/O) adapter resources are provided. The method includes storing locations of cache lines for pinned or affinity scheduled processes in a table on an input/output (I/O) adapter. The method also includes setting a cache injection hint in an input/output (I/O) transaction when an address in the I/O transaction is found in the table. The cache injection hint is set for performing direct cache injection. The method further includes entering a central processing unit (CPU) identifier and cache type in the I/O transaction, and updating a cache by injecting data values of the I/O transaction into the cache as determined by the CPU identifier and the cache type associated with the address in the table.
    Type: Application
    Filed: December 18, 2007
    Publication date: June 18, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas A. Gregg, Rajaram B. Krishnamurthy
  • Publication number: 20090157977
    Abstract: A method, system, and computer program product for data transfer to memory over an input/output (I/O) interconnect are provided. The method includes reading a mailbox stored on an I/O adapter in response to a request to initiate an I/O transaction. The mailbox stores a directive that defines a condition under which cache injection for data values in the I/O transaction will not be performed. The method also includes embedding a hint into the I/O transaction when the directive in the mailbox matches data received in the request, and executing the I/O transaction. The execution of the I/O transaction causes a system chipset or I/O hub for a processor receiving the I/O transaction, to directly store the data values from the I/O transaction into system memory and to suppress the cache injection of the data values into a cache memory upon presence of the hint in a header of the I/O transaction.
    Type: Application
    Filed: December 18, 2007
    Publication date: June 18, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas A. Gregg, Rajaram B. Krishnamurthy
  • Publication number: 20090144506
    Abstract: A method for implementing dynamic refresh protocols for DRAM based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines. Cache lines corresponding to data having a usage history below a defined frequency are assigned to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
    Type: Application
    Filed: December 4, 2007
    Publication date: June 4, 2009
    Inventors: John E. Barth, JR., Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
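    Illustrative sketch: the assignment rule is a single threshold test on the line's usage history. USE_FREQ_CUTOFF below stands in for the patent's "defined frequency"; the underlying insight is that hot lines are re-read often enough that ordinary accesses keep their DRAM cells charged.

      #include <stdint.h>

      #define USE_FREQ_CUTOFF 8   /* stand-in for the "defined frequency" */

      enum portion { REFRESHABLE, NON_REFRESHABLE };

      /* Cold lines must live where the refresh engine will reach them;
       * hot lines can skip refresh because accesses refresh them. */
      enum portion place_line(uint32_t use_count)
      {
          return use_count >= USE_FREQ_CUTOFF ? NON_REFRESHABLE
                                              : REFRESHABLE;
      }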
  • Publication number: 20090049248
    Abstract: A method and computer system for reducing the wiring congestion, required real estate, and access latency in a cache subsystem with a sectored and sliced lower cache by re-configuring sector-to-slice allocation and the lower cache addressing scheme. With this allocation, sectors having discontiguous addresses are placed within the same slice, and a reduced-wiring scheme is possible between two levels of lower caches based on this re-assignment of the addressable sectors within the cache slices. Additionally, the lower cache effective address tag is re-configured such that the address fields previously allocated to identifying the sector and the slice are switched relative to each other's location within the address tag. This re-allocation of the address bits enables direct slice addressing based on the indicated sector.
    Type: Application
    Filed: August 16, 2007
    Publication date: February 19, 2009
    Inventors: Leo James Clark, James Stephen Fields, JR., Guy Lynn Guthrie, William John Starke, Derek Edward Williams, Phillip G. Williams
  • Publication number: 20090024797
    Abstract: The present invention proposes a novel cache residence prediction mechanism that predicts whether requested data of a cache miss can be found in another cache. The memory controller can use the prediction result to determine if it should immediately initiate a memory access, or initiate no memory access until a cache snoop response shows that the requested data cannot be supplied by a cache. The cache residence prediction mechanism can be implemented at the cache side, the memory side, or both. A cache-side prediction mechanism can predict that data requested by a cache miss can be found in another cache if the cache miss address matches an address tag of a cache line in the requesting cache and the cache line is in an invalid state. A memory-side prediction mechanism can make effective prediction based on observed memory and cache operations that are recorded in a prediction table.
    Type: Application
    Filed: July 18, 2007
    Publication date: January 22, 2009
    Inventors: Xiaowei Shen, Jaehyuk Huh, Balaram Sinharoy
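    Illustrative sketch of the cache-side predictor as the abstract states it: a tag match on an INVALID line suggests the line was taken by another cache, so the memory controller can defer the DRAM access until snoop responses arrive. The cache geometry below is assumed.

      #include <stdbool.h>
      #include <stdint.h>

      enum state { INVALID, SHARED, EXCLUSIVE, MODIFIED };
      struct line { uint64_t tag; enum state st; };

      #define SETS 256
      #define WAYS 8
      static struct line cache[SETS][WAYS];

      bool predict_in_other_cache(uint64_t addr)
      {
          uint64_t tag = addr >> 6;             /* 64-byte lines assumed */
          unsigned set = (unsigned)(tag % SETS);
          for (int w = 0; w < WAYS; w++)
              if (cache[set][w].tag == tag && cache[set][w].st == INVALID)
                  return true;   /* invalidated here: likely held elsewhere */
          return false;          /* no hint: start the memory access now    */
      }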
  • Publication number: 20080320232
    Abstract: A processor prevents writeback race condition errors by maintaining responsibility for data until the writeback request is confirmed by an intervention message from a cache coherency manager. If a request for the same data arrives before the intervention message, the processor core unit provides the requested data and cancels the pending writeback request. The cache coherency data associated with cache lines indicates whether a request for data has been received prior to the intervention message associated with the writeback request. The cache coherency data of a cache line has a value of “modified” when the writeback request is initiated. When the intervention message associated with the writeback request is received, the cache line's cache coherency data is examined. A change in the cache coherency data from the value of “modified” indicates that the request for data has been received prior to the intervention and the writeback request should be cancelled.
    Type: Application
    Filed: June 22, 2007
    Publication date: December 25, 2008
    Applicant: MIPS TECHNOLOGIES INC.
    Inventors: Sanjay Vishin, Adam Stoler
  • Publication number: 20080307423
    Abstract: A system includes a task scheduler (301) comprising a task execution schedule (101) for a plurality of tasks to be executed on a plurality of cache lines in a cache memory. The system also includes a cache controller logic (303) having a voltage scalar register (305). The voltage scalar register (305) is updated by the task scheduler with a task identifier (204) of a next task to be executed. The system has a voltage scalar (304), wherein the voltage scalar (304) selects one or more cache lines to operate in a low power mode based on the task execution schedule (101). The task execution schedule (101) is stored in a look up table.
    Type: Application
    Filed: December 20, 2006
    Publication date: December 11, 2008
    Applicant: NXP B.V.
    Inventor: Sainath Karlapalem
  • Publication number: 20080270708
    Abstract: A system and method are disclosed for achieving cache coherency in a multiprocessor computer system having a plurality of sockets with processing devices and memory controllers and a plurality of memory blocks. In at least some embodiments, the system includes a plurality of node controllers capable of being respectively coupled to the respective sockets of the multiprocessor computer, a plurality of caching devices respectively coupled to the respective node controllers, and a fabric coupling the respective node controllers, by which cache line request signals can be communicated between the respective node controllers. Cache coherency is achieved notwithstanding the cache line request signals communicated between the respective node controllers due at least in part to communications between the node controllers and the respective caching devices to which the node controllers are coupled.
    Type: Application
    Filed: April 30, 2007
    Publication date: October 30, 2008
    Inventors: Craig Warner, Bryan Hornung, Chris Michael Brueggen, Ryan L. Akkerman, Michael K. Dugan, Gary Gostin, Harvey Ray, Dan Robinson, Christopher Greer
  • Publication number: 20080263278
    Abstract: A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics may include, but are not limited to, the data structure used by the execution entity, the expected reference pattern of the execution entity, the type of execution entity, and the heat and power consumption of the execution entity. Examples of cache attributes that may be reconfigured may include, but are not limited to, the associativity of the cache memory, the amount of the cache memory available to store data, the coherence granularity of the cache memory, and the line size of the cache memory.
    Type: Application
    Filed: May 30, 2008
    Publication date: October 23, 2008
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiaowei Shen, Balaram Sinharoy, Robert B. Tremaine, Robert W. Wisniewski
  • Publication number: 20080229018
    Abstract: A save data discrimination method saves calculation results, including elements that are periodically saved, when a computer executes a program repeating the same arithmetic process. The method includes analyzing the loop structure of the program from its source code to detect a main loop of the arithmetic process repeated in the program and a sub-loop included in the main loop, determining the point of entrance to the main loop as a checkpoint (a point for saving data of the calculation results), and analyzing the contents of the arithmetic process described in the main loop to identify, as the data to be saved at the checkpoint, the reference-first elements: elements that are only referred to, and elements that are defined after being referred to.
    Type: Application
    Filed: March 10, 2008
    Publication date: September 18, 2008
    Applicant: Fujitsu Limited
    Inventor: Akira Hosoi
  • Publication number: 20080177952
    Abstract: According to the methods and apparatus taught herein, processor caching policies are determined using cache policy information associated with a target memory device accessed during a memory operation. According to one embodiment of a processor, the processor comprises at least one cache and a memory management unit. The at least one cache is configured to store information local to the processor. The memory management unit is configured to set one or more cache policies for the at least one cache. The memory management unit sets the one or more cache policies based on cache policy information associated with one or more target memory devices configured to store information used by the processor.
    Type: Application
    Filed: January 24, 2007
    Publication date: July 24, 2008
    Inventor: Michael William Morrow
  • Publication number: 20080133839
    Abstract: Cache management strategies are described for retrieving information from a storage medium, such as an optical disc, using a cache memory including multiple cache segments. A first group of cache segments can be devoted to handling the streaming transfer of a first type of information, and a second group of cache segments can be devoted to handling the bulk transfer of a second type of information. A host system can provide hinting information that identifies which group of cache segments that a particular read request targets. A circular wrap-around fill strategy can be used to iteratively supply new information to the cache segments upon cache hits by performing pre-fetching. Various eviction algorithms can be used to select a cache segment for flushing and refilling upon a cache miss, such as a least recently used (LRU) algorithm or a least frequently used (LFU) algorithm.
    Type: Application
    Filed: February 11, 2008
    Publication date: June 5, 2008
    Applicant: Microsoft Corporation
    Inventors: Brian L. Schmidt, Jonathan E. Lange, Timothy R. Osborne
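    Illustrative sketch: with the host's hint naming a segment group, victim selection is an LRU scan restricted to that group. The segment count and fields below are invented; swapping the comparison key for a use counter would give the LFU variant the abstract also mentions.

      #include <stdint.h>

      enum group { STREAMING, BULK };

      struct segment {
          enum group grp;       /* transfer type this segment serves */
          uint64_t   last_use;  /* recency stamp for LRU selection   */
          uint64_t   start_lba; /* disc region currently cached      */
      };

      #define NSEG 16
      static struct segment seg[NSEG];

      /* On a cache miss, pick a segment to flush and refill: only
       * segments in the group the host's hint names are eligible, and
       * among those the least recently used one is chosen. */
      int pick_victim(enum group hinted)
      {
          int victim = -1;
          for (int i = 0; i < NSEG; i++) {
              if (seg[i].grp != hinted)
                  continue;             /* only the hinted group is eligible */
              if (victim < 0 || seg[i].last_use < seg[victim].last_use)
                  victim = i;
          }
          return victim;    /* -1 if the hinted group owns no segments */
      }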