Combined Replacement Modes Patents (Class 711/134)
  • Patent number: 8423719
    Abstract: An apparatus includes a processor which issues a plurality of commands including an identifier for classifying each of the commands, and a cache memory which includes a plurality of ways to store data corresponding to a command, wherein the cache memory includes a register to store the identifier, the register corresponding to at least one of the ways being fixed, the fixed way exclusively storing the data corresponding to the identifier while the register stores the identifier, and a replacement controller which selects a replacement way based on a predetermined replacement algorithm in case of a cache miss, and excludes the fixed way from the candidates for the replacement way while the register corresponding to the fixed way stores the identifier.
    Type: Grant
    Filed: September 8, 2008
    Date of Patent: April 16, 2013
    Assignee: NEC Corporation
    Inventor: Koji Kobayashi
  • Patent number: 8417903
    Abstract: Disclosed is a computer implemented method, computer program product, and apparatus for maintaining a preselect list. The method comprises software components detecting a page fault of a memory page. In response to detecting a page fault, the software components determine whether the memory page is referenced in the preselect list and unhide the memory page. Upon determining whether the memory page is referenced in the preselect list, the software components remove an entry of the preselect list corresponding to the memory page to form at least one removed candidate page and skip paging-out of the at least one removed candidate page.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: April 9, 2013
    Assignee: International Business Machines Corporation
    Inventors: Abraham Alvarez, Andrew Dunshea, Douglas J. Griffith
  • Patent number: 8417892
    Abstract: Systems, methods, and a computer program product for differential storage and eviction of information resources in a browser cache. In an embodiment, the present invention provides differential storage and eviction for information resources by storing fetched resources in a memory and assigning, with a processor, a persistence score to the resources. Further embodiments relocate the resources from a sub-cache to a different sub-cache based on their persistence score, and remove resources from the memory based on the persistence score.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: April 9, 2013
    Assignee: Google Inc.
    Inventors: James Roskind, Jose Ricardo Vargas Puentes, Ashit Kumar Jain, Evan Martin
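The persistence-score scheme in the abstract above can be sketched roughly as follows; the class name, the two-sub-cache layout, and the promotion threshold are illustrative assumptions, not details from the patent:

```python
class DifferentialCache:
    """Sketch of persistence-score-based sub-cache placement (illustrative)."""

    def __init__(self, promote_threshold=5):
        self.short_term = {}   # sub-cache for low-persistence resources
        self.long_term = {}    # sub-cache for high-persistence resources
        self.promote_threshold = promote_threshold

    def store(self, url, resource, persistence_score):
        # Place the fetched resource in a sub-cache based on its score.
        target = (self.long_term if persistence_score >= self.promote_threshold
                  else self.short_term)
        target[url] = (resource, persistence_score)

    def rescore(self, url, new_score):
        # Relocate a resource between sub-caches when its score changes.
        for cache in (self.short_term, self.long_term):
            if url in cache:
                resource, _ = cache.pop(url)
                self.store(url, resource, new_score)
                return

    def evict_below(self, min_score):
        # Remove resources whose persistence score falls below min_score.
        for cache in (self.short_term, self.long_term):
            for url in [u for u, (_, s) in cache.items() if s < min_score]:
                del cache[url]
```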
  • Patent number: 8407421
    Abstract: An apparatus and method is described herein for intelligently spilling cache lines. Usefulness of cache lines previously spilled from a source cache is learned, such that later evictions of useful cache lines from a source cache are intelligently selected for spill. Furthermore, another learning mechanism—cache spill prediction—may be implemented separately or in conjunction with usefulness prediction. The cache spill prediction is capable of learning the effectiveness of remote caches at holding spilled cache lines for the source cache. As a result, cache lines are capable of being intelligently selected for spill and intelligently distributed among remote caches based on the effectiveness of each remote cache in holding spilled cache lines for the source cache.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 26, 2013
    Assignee: Intel Corporation
    Inventors: Simon C. Steely, Jr., William C. Hasenplaugh, Aamer Jaleel, George Z. Chrysos
  • Patent number: 8392384
    Abstract: A system, method, and medium for dynamically scaling the size of a fingerprint index in a deduplication storage system. Fingerprints are stored as entries in a fingerprint index, and the fingerprint index is scaled to fit into an in-memory cache to enable fast accesses to the index. A persistent copy of the full fingerprint index is stored on a non-volatile memory. The cached fingerprint index uses binary sampling to categorize half of the fingerprint entries as samples and protected, and the other half of the entries as non-samples and replaceable. When a search of the cached index results in a hit on a sample entry, all of the non-sample entries associated with the same container are copied from the persistent index to the cached index.
    Type: Grant
    Filed: December 10, 2010
    Date of Patent: March 5, 2013
    Assignee: Symantec Corporation
    Inventors: Weibao Wu, Viswesvaran Janakiraman
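The binary-sampling idea above — half the fingerprints are protected samples, and a sample hit pulls the rest of its container into the cache — can be sketched like this; the sampling predicate (one low bit) and the dict-based persistent index are illustrative assumptions:

```python
def is_sample(fingerprint: bytes) -> bool:
    """Binary sampling: one low bit splits fingerprints into samples
    (protected) and non-samples (replaceable)."""
    return fingerprint[0] & 1 == 0

class SampledIndex:
    """Illustrative cached fingerprint index over a full persistent index."""

    def __init__(self, persistent_index):
        # persistent_index maps fingerprint -> container id (full on-disk copy).
        self.persistent = persistent_index
        # The cached index initially holds only the sample entries.
        self.cached = {fp: c for fp, c in persistent_index.items() if is_sample(fp)}

    def lookup(self, fp):
        container = self.cached.get(fp)
        if container is not None and is_sample(fp):
            # Hit on a sample entry: copy all non-sample entries from the
            # same container out of the persistent index into the cache.
            for other_fp, other_c in self.persistent.items():
                if other_c == container:
                    self.cached[other_fp] = other_c
        return container
```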
  • Patent number: 8392662
    Abstract: A data management method includes assigning data buffered in a first memory device into at least two different groups for transfer to a second memory device. At least one of the different groups has at least two units of the data assigned thereto. The data is transferred from the first memory device to the second memory device in a sequence according to a respective priority associated with each of the different groups and in group-by-group manner such that units of the data assigned to a group having a higher priority are transferred to the second memory device prior to units of the data assigned to a group having a lower priority. Related systems and methods are also discussed.
    Type: Grant
    Filed: June 11, 2009
    Date of Patent: March 5, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jun-Ho Jang, Jin-Hwa Lee, Woon-Jae Chung, Sang-Hoon Choi, Nam-Hoon Kim
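The group-by-group, priority-ordered transfer described above reduces to a simple sort-then-flush loop; the function name and the (unit, group) pairing are illustrative assumptions:

```python
from collections import defaultdict

def transfer_by_group(buffered, priority_of):
    """Flush buffered data units group-by-group, higher-priority groups
    first. `buffered` is a list of (unit, group) pairs; `priority_of`
    maps a group to a numeric priority (higher = transferred earlier)."""
    groups = defaultdict(list)
    for unit, group in buffered:
        groups[group].append(unit)
    order = sorted(groups, key=priority_of, reverse=True)
    transferred = []
    for group in order:
        # All units of one group go to the second memory device before
        # any unit of a lower-priority group.
        transferred.extend(groups[group])
    return transferred
```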
  • Patent number: 8386708
    Abstract: A method for metadata management in a storage system configured for supporting sub-LUN tiering. The method may comprise providing a metadata queue of a specific size; determining whether the metadata for a particular sub-LUN is cached in the metadata queue; updating the metadata for the particular sub-LUN when the metadata for the particular sub-LUN is cached in the metadata queue; inserting the metadata for the particular sub-LUN to the metadata queue when the metadata queue is not full and the metadata is not cached; replacing an entry in the metadata queue with the metadata for the particular sub-LUN when the metadata queue is full and the metadata is not cached; and identifying at least one frequently accessed sub-LUN for moving to a higher performing tier in the storage system, the at least one frequently accessed sub-LUN being identified based on the metadata cached in the metadata queue.
    Type: Grant
    Filed: September 21, 2010
    Date of Patent: February 26, 2013
    Assignee: LSI Corporation
    Inventor: Martin Jess
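The metadata-queue logic above (hit updates in place; miss inserts when not full, replaces when full; hot sub-LUNs promoted from the cached metadata) can be sketched with a fixed-size ordered map; the eviction choice (oldest entry) and the hit-count promotion rule are illustrative assumptions:

```python
from collections import OrderedDict

class SubLunMetadataQueue:
    """Illustrative fixed-size metadata queue for sub-LUN access tracking."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = OrderedDict()   # sub-LUN id -> access count, MRU at end

    def access(self, sub_lun):
        if sub_lun in self.queue:
            self.queue[sub_lun] += 1            # cached: update the metadata
            self.queue.move_to_end(sub_lun)
        else:
            if len(self.queue) >= self.capacity:
                self.queue.popitem(last=False)  # full: replace the oldest entry
            self.queue[sub_lun] = 1             # insert new metadata

    def promotion_candidates(self, min_hits):
        # Frequently accessed sub-LUNs, identified from the cached metadata,
        # are candidates for moving to a higher-performing tier.
        return [s for s, hits in self.queue.items() if hits >= min_hits]
```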
  • Patent number: 8380948
    Abstract: Memory objects associated with a portion of a cache (e.g., data blocks of a media file) are assigned a value based on their importance to an application that is consuming memory objects. The values are used to assign the data blocks to purge groups. The purge groups are a labeling mechanism for determining a purge order. A memory object associated with a first data block assigned to a first purge group may be purged before a memory object associated with a second data block assigned to a second purge group. As new data blocks are received by the application (e.g., from disk or a network connection), the blocks are assigned a value and added to a purge group. In some cases, the data blocks arrive out of order (i.e., not in order of consumption). Memory objects can be reassigned to a different purge group when new data blocks are added or reclaimed.
    Type: Grant
    Filed: September 4, 2008
    Date of Patent: February 19, 2013
    Assignee: Apple Inc.
    Inventors: Heiko Gernot Albert Panther, James Michael Magee, John Samuel Bushell
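The purge-group bookkeeping above maps naturally onto a label-to-blocks index with a purge order over labels; using the importance value directly as the group label, and purging lowest-label first, are illustrative assumptions:

```python
from collections import defaultdict

class PurgeGroups:
    """Illustrative purge-group bookkeeping for media data blocks."""

    def __init__(self):
        self.groups = defaultdict(list)   # purge-group label -> block ids
        self.group_of = {}                # block id -> current group label

    def add(self, block_id, value):
        # Higher value = more important = a later (higher) purge group.
        self.reassign(block_id, value)

    def reassign(self, block_id, group):
        # Move a block to a different purge group (e.g., after new blocks
        # arrive or memory is reclaimed).
        old = self.group_of.get(block_id)
        if old is not None:
            self.groups[old].remove(block_id)
        self.groups[group].append(block_id)
        self.group_of[block_id] = group

    def purge_one(self):
        # Purge from the lowest-valued (least important) non-empty group.
        for group in sorted(self.groups):
            if self.groups[group]:
                block = self.groups[group].pop(0)
                del self.group_of[block]
                return block
        return None
```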
  • Patent number: 8370580
    Abstract: Techniques for directory server integration are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for directory server integration comprising setting one or more parameters determining a range of permissible expiration times for a plurality of cached directory entries, creating, in electronic storage, a cached directory entry from a directory server, assigning a creation time to the cached directory entry, and assigning at least one random value to the cached directory entry, the random value determining an expiration time for the cached directory entry within the range of permissible expiration times, wherein randomizing the expiration time for the cached directory entry among the range of permissible expiration times for a plurality of cached directory entries reduces an amount of synchronization required between cache memory and the directory server at a point in time.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: February 5, 2013
    Assignee: Symantec Corporation
    Inventors: Ayman Mobarak, Nathan Moser, Chad Jamart
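The randomized-expiration idea above is essentially TTL jitter; a minimal sketch, with the function name and the uniform distribution as assumptions (the patent only requires a random value within the permissible range):

```python
import random

def assign_expiry(creation_time, min_ttl, max_ttl, rng=random):
    """Give a cached directory entry a random TTL within the permitted
    range, so entries do not all expire (and re-synchronize against the
    directory server) at the same instant."""
    ttl = rng.uniform(min_ttl, max_ttl)
    return creation_time + ttl

# Without jitter, every entry cached at t=0 with a fixed 300 s TTL expires
# at t=300 and the directory server absorbs a synchronized refresh storm;
# with jitter, the refreshes spread across [min_ttl, max_ttl].
```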
  • Patent number: 8364907
    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.
    Type: Grant
    Filed: January 27, 2012
    Date of Patent: January 29, 2013
    Assignee: Apple Inc.
    Inventors: Ramesh Gunna, Sudarshan Kadambi
  • Patent number: 8364898
    Abstract: A method and a system for utilizing less recently used (LRU) bits and presence bits in selecting cache-lines for eviction from a lower level cache in a processor-memory sub-system. A cache back invalidation (CBI) logic utilizes LRU bits to evict only cache-lines within a LRU group, following a cache miss in the lower level cache. In addition, the CBI logic uses presence bits to (a) indicate whether a cache-line in a lower level cache is also present in a higher level cache and (b) evict only cache-lines in the lower level cache that are not present in a corresponding higher level cache. However, when the lower level cache-line selected for eviction is also present in any higher level cache, CBI logic invalidates the cache-line in the higher level cache. The CBI logic appropriately updates the values of presence bits and LRU bits, following evictions and invalidations.
    Type: Grant
    Filed: January 23, 2009
    Date of Patent: January 29, 2013
    Assignee: International Business Machines Corporation
    Inventors: Ganesh Balakrishnan, Anil Krishna
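The LRU-bit/presence-bit victim selection above can be sketched as a two-pass scan of a set; representing lines as dicts with `lru` and `present_above` flags is an illustrative assumption:

```python
def pick_victim(lower_cache_set):
    """Illustrative victim choice for a lower-level cache set. Each line
    is a dict with 'lru' (True = in the LRU group) and 'present_above'
    (presence bit: also held in a higher-level cache). Prefer LRU-group
    lines that are NOT present above, avoiding a back invalidation of a
    line the higher-level cache is still using."""
    lru_group = [line for line in lower_cache_set if line["lru"]]
    for line in lru_group:
        if not line["present_above"]:
            return line, False          # no back invalidation needed
    # Every LRU-group line is also present above: evict one anyway and
    # invalidate its copy in the higher-level cache (back invalidation).
    return lru_group[0], True
```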
  • Patent number: 8356141
    Abstract: A replacement memory page is identified by accessing a first list of page records, and if the first list is not empty, identifying a replacement page from a next page record indicator of the first list. A second list of page records is accessed if the first list is empty, and if the second list is not empty, the replacement page is identified from a next page record indicator of the second list. A third list of page records is accessed if the first and second lists are empty, and the replacement page is identified from a next page record indicator of the third list.
    Type: Grant
    Filed: June 28, 2010
    Date of Patent: January 15, 2013
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Prashanth Madisetti, Dan Truong, Srisailendra Yallapragada
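The three-list fallback above is a short cascade; the list names are assumptions (the patent only numbers the lists), and popping from the front stands in for the next-page-record indicator:

```python
def select_replacement_page(first_list, second_list, third_list):
    """Illustrative three-list page replacement: consult the lists in
    order and take the next page record of the first non-empty one."""
    for pages in (first_list, second_list, third_list):
        if pages:
            return pages.pop(0)   # next page record indicator of this list
    raise MemoryError("no replacement page available")
```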
  • Patent number: 8341358
    Abstract: One embodiment of the invention sets forth a mechanism for efficiently writing dirty data from the L2 cache to a DRAM. A dirty data notification, including a memory address of the dirty data, is transmitted by the L2 cache to a frame buffer logic when dirty data is stored in the L2 cache. The frame buffer logic uses a page-stream sorter to organize dirty data notifications based on the bank page associated with the memory addresses included in the dirty data notifications. The page-stream sorter includes multiple sets with entries that may be associated with different bank pages in the DRAM. The frame buffer logic transmits dirty data associated with an entry that has reached a maximum threshold of dirty data notifications to the DRAM. The frame buffer logic also transmits dirty data associated with the oldest entry when the number of entries in a set reaches a maximum threshold.
    Type: Grant
    Filed: September 18, 2009
    Date of Patent: December 25, 2012
    Assignee: NVIDIA Corporation
    Inventors: John H. Edmondson, James Roberts
  • Patent number: 8341350
    Abstract: A method for metadata management in a storage system may include providing a metadata queue of a maximum size; determining whether the metadata for a particular sub-LUN is held in the metadata queue; updating the metadata for the particular sub-LUN when the metadata for the particular sub-LUN is held in the metadata queue; inserting the metadata for the particular sub-LUN at the head of the metadata queue when the metadata queue is not full and the metadata is not held in the metadata queue; replacing an entry in the metadata queue with the metadata for the particular sub-LUN and moving the metadata to the head of the metadata queue when the metadata queue is full and the metadata is not held in the metadata queue; and controlling the number of sub-LUNs in the storage system to manage data accessed with respect to an amount of available data storage.
    Type: Grant
    Filed: February 3, 2011
    Date of Patent: December 25, 2012
    Assignee: LSI Corporation
    Inventors: Martin Jess, Brian McKean
  • Patent number: 8307165
    Abstract: One embodiment of the invention sets forth a mechanism for increasing the number of read commands or write commands transmitted to an activated bank page in the DRAM. Read requests and dirty notifications are organized in a read request sorter or a dirty notification sorter, respectively, and each sorter includes multiple sets with entries that may be associated with different bank pages in the DRAM. Read requests and dirty notifications are stored in read request lists and dirty notification lists, where each list is associated with a specific bank page. When a bank page is activated to process read requests, read commands associated with read requests stored in a particular read request list are transmitted to the bank page. When a bank page is activated to process dirty notifications, write commands associated with dirty notifications stored in a particular dirty notification list are transmitted to the bank page.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: November 6, 2012
    Assignee: Nvidia Corporation
    Inventors: Shane Keil, John H. Edmondson, Sean J. Treichler
  • Patent number: 8301842
    Abstract: An apparatus for allocating entries in a set associative cache memory includes an array that provides a first pseudo-least-recently-used (PLRU) vector in response to a first allocation request from a first functional unit. The first PLRU vector specifies a first entry from a set of the cache memory specified by the first allocation request. The first vector is a tree of bits comprising a plurality of levels. Toggling logic receives the first vector and toggles predetermined bits thereof to generate a second PLRU vector in response to a second allocation request from a second functional unit generated concurrently with the first allocation request and specifying the same set of the cache memory specified by the first allocation request. The second vector specifies a second entry different from the first entry from the same set. The predetermined bits comprise bits of a predetermined one of the levels of the tree.
    Type: Grant
    Filed: July 6, 2010
    Date of Patent: October 30, 2012
    Assignee: VIA Technologies, Inc.
    Inventors: Colin Eddy, Rodney E. Hooker
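The bit-toggling trick above — serving two concurrent allocation requests to the same set by handing the second requester a PLRU vector with one tree level inverted — can be shown on a 4-way tree (3 bits); the bit convention (0 points to the less-recently-used side) is an illustrative assumption:

```python
def plru_way(bits):
    """Walk a 4-way PLRU bit tree (bits = [root, left, right]); each bit
    points toward the less-recently-used side. Returns the way to allocate."""
    if bits[0] == 0:
        return 0 if bits[1] == 0 else 1
    return 2 if bits[2] == 0 else 3

def concurrent_ways(bits, toggle_level=0):
    """Two simultaneous allocation requests to the same set: the second
    request sees the vector with the bits of one predetermined tree level
    toggled, guaranteeing it picks a different way than the first."""
    first = plru_way(bits)
    toggled = list(bits)
    if toggle_level == 0:
        toggled[0] ^= 1                    # flip the root bit
    else:
        toggled[1] ^= 1
        toggled[2] ^= 1                    # flip the level-1 bits
    second = plru_way(toggled)
    return first, second
```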
  • Patent number: 8281087
    Abstract: Provided are a method, system, and program for receiving a request to remove a record. A determination is made as to whether a state associated with the record includes at least one hold state and whether the state associated with the record includes at least a retention period that has not expired. The request to remove the record is denied in response to determining that the state associated with the record includes at least one of at least one hold state and one retention period that has not expired.
    Type: Grant
    Filed: January 7, 2009
    Date of Patent: October 2, 2012
    Assignee: International Business Machines Corporation
    Inventors: Alan Stuart, Toby Lyn Marek, Avishai Haim Hochberg, David Maxwell Cannon, Howard Newton Martin
  • Patent number: 8271750
    Abstract: A data processing system includes a data store having storage locations storing entries which can be used for a variety of purposes, such as operand value prediction, branch prediction, etc. An entry profile store stores profile data for more candidate entries than there are storage locations within the data store. The profile data is used to determine replacement policy for entries within the data store. The profile data can include hash values used to determine whether predictions associated with candidate entries were correct without having to store the full predictions within the profile data.
    Type: Grant
    Filed: January 18, 2008
    Date of Patent: September 18, 2012
    Assignee: ARM Limited
    Inventors: Sami Yehia, Marios Kleanthous
  • Patent number: 8261022
    Abstract: A method and apparatus are disclosed for locking the most recently accessed frames in a cache memory. The most recently accessed frames in a cache memory are likely to be accessed by a task again in the near future. The most recently used frames may be locked at the beginning of a task switch or interrupt to improve the performance of the cache. The list of most recently used frames is updated as a task executes and may be embodied, for example, as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors the number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.
    Type: Grant
    Filed: October 9, 2001
    Date of Patent: September 4, 2012
    Assignee: Agere Systems Inc.
    Inventors: Harry Dwyer, John Susantha Fernando
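The lock-on-task-switch and adaptive-unlock behavior above can be sketched as follows; the class shape, the global (rather than per-frame) miss counter, and the lock count are illustrative simplifications:

```python
from collections import OrderedDict

class MruFrameLocker:
    """Illustrative per-task locking of most-recently-used cache frames,
    with adaptive unlocking after too many misses while frames are locked."""

    def __init__(self, lock_count, miss_threshold):
        self.recent = OrderedDict()        # frame address -> None, MRU at end
        self.lock_count = lock_count
        self.miss_threshold = miss_threshold
        self.locked = set()
        self.misses = 0

    def touch(self, frame):
        # Maintain the running list of most recently used frames.
        self.recent.pop(frame, None)
        self.recent[frame] = None

    def lock_on_task_switch(self):
        # At a task switch or interrupt, lock the N most recently used
        # frames for the outgoing task.
        self.locked = set(list(self.recent)[-self.lock_count:])
        self.misses = 0

    def record_miss(self):
        # Adaptive unlocking: if the running task misses too often,
        # release the locked frames rather than degrade its performance.
        self.misses += 1
        if self.misses > self.miss_threshold:
            self.locked.clear()
```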
  • Patent number: 8214601
    Abstract: The present invention provides a system with a cache that indicates which, if any, of its sections contain data having spent status. The invention also provides a method for identifying cache sections containing data having spent status and then purging without writing back to main memory a cache line having at least one section containing data having spent status. The invention further provides a program that specifies a cache-line section containing data that is to acquire “spent” status. “Spent” data, herein, is useless modified or unmodified data that was formerly at least potentially useful data when it was written to a cache. “Purging” encompasses both invalidating and overwriting.
    Type: Grant
    Filed: July 30, 2004
    Date of Patent: July 3, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Dale Morris, Robert S. Schreiber
  • Patent number: 8200903
    Abstract: Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction.
    Type: Grant
    Filed: August 20, 2008
    Date of Patent: June 12, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Blaine D Gaither
  • Patent number: 8180968
    Abstract: The invention relates to a method for reducing cache flush time of a cache in a computer system. The method includes populating at least one of a plurality of directory entries of a dirty line directory based on modification of the cache to form at least one populated directory entry, and de-populating a pre-determined number of the plurality of directory entries according to a dirty line limiter protocol causing a write-back from the cache to a main memory, where the dirty line limiter protocol is based on a number of the at least one populated directory entry exceeding a pre-defined limit.
    Type: Grant
    Filed: March 28, 2007
    Date of Patent: May 15, 2012
    Assignee: Oracle America, Inc.
    Inventors: Brian W. O'Krafka, Roy S. Moore, Pranay Koka
  • Patent number: 8180969
    Abstract: A cache stores information in each of a plurality of cache lines. Addressing circuitry receives memory addresses for comparison with multiple ways of stored addresses to determine a hit condition representing a match of a stored address and a received address. A pseudo least recently used (PLRU) tree circuit stores one or more states of a PLRU tree and implements a tree having a plurality of levels beginning with a root and indicates one of a plurality of ways in the cache. Each level has one or more nodes. Multiple nodes within a same level are child nodes to a parent node of an immediately higher level. PLRU update circuitry that is coupled to the addressing circuitry and the PLRU tree circuit receives lock information to lock one or more lines of the cache and prevent a PLRU tree state from selecting a locked line.
    Type: Grant
    Filed: January 15, 2008
    Date of Patent: May 15, 2012
    Assignee: Freescale Semiconductor, Inc.
    Inventor: William C. Moyer
  • Patent number: 8176251
    Abstract: The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look-up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device.
    Type: Grant
    Filed: August 5, 2008
    Date of Patent: May 8, 2012
    Assignee: Network Appliance, Inc.
    Inventors: Naveen Bali, Naresh Patel, Yasuhiro Endo
  • Patent number: 8161242
    Abstract: Improving cache performance in a data processing system is provided. A cache controller monitors a counter associated with a cache. The cache controller determines whether the counter indicates that a plurality of non-dedicated cache sets within the cache should operate as spill cache sets or receive cache sets. The cache controller sets the plurality of non-dedicated cache sets to spill an evicted cache line to an associated cache set in another cache in the event of a cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the spill cache sets. The cache controller sets the plurality of non-dedicated cache sets to receive an evicted cache line from another cache set in the event of the cache miss in response to an indication that the plurality of non-dedicated cache sets should operate as the receive cache sets.
    Type: Grant
    Filed: August 1, 2008
    Date of Patent: April 17, 2012
    Assignee: International Business Machines Corporation
    Inventor: Moinuddin K. Qureshi
  • Patent number: 8151058
    Abstract: A vector computer system includes a vector processor configured to issue a vector store instruction which includes a plurality of store requests; a cache memory of a write back system provided between the vector processor and a main memory; and a write allocate determining section configured to generate an allocation control signal which specifies whether the cache memory operates based on a write allocate system or a non-write allocate system. When the vector processor issues the vector store instruction, the write allocate determining section generates the allocation control signal to each of the plurality of store requests based on a write pattern as a pattern of target addresses of the plurality of store requests. The cache memory executes each store request based on one of the write allocate system and the non-write allocate system which is specified based on the allocation control signal.
    Type: Grant
    Filed: October 5, 2009
    Date of Patent: April 3, 2012
    Assignee: NEC Corporation
    Inventor: Koji Kobayashi
  • Publication number: 20120079206
    Abstract: Embodiments of the present invention provide a method, an apparatus, and a proxy server for selecting cache replacement policies, reducing manual participation and switching cache replacement policies automatically. The method includes: obtaining statistical data of multiple cache replacement policies that are running simultaneously; and switching, according to a policy-decision event and the statistical data, the active cache replacement policy to a cache replacement policy that complies with the policy decision requirement. The automatic switching of cache replacement policies lowers the technical requirements on administrators. In addition, during operation of a proxy cache, a cache replacement policy that is applicable to the current scenario and meets a user's performance expectation can be selected automatically, making the solution highly adaptable.
    Type: Application
    Filed: August 11, 2011
    Publication date: March 29, 2012
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yuping ZHAO, Hanyu WEI, Hao WANG, Jian CHEN
  • Patent number: 8145859
    Abstract: Techniques for managing memory usage of a processing system by spilling data from a memory to a persistent store based upon an evict policy are provided. A triggering event is detected. In response to the triggering event and based on the evict policy, it is determined whether data from the memory of the processing system is to be spilled to the persistent storage. The determination is made by comparing a level of free memory of the processing system with a threshold specified by the evict policy. The data is evicted from the memory.
    Type: Grant
    Filed: March 2, 2009
    Date of Patent: March 27, 2012
    Assignee: Oracle International Corporation
    Inventors: Hoyong Park, Namit Jain, Anand Srinivasan, Shailendra Mishra
  • Patent number: 8131946
    Abstract: In one embodiment, a processor may be configured to write ECC granular stores into the data cache, while non-ECC granular stores may be merged with cache data in a memory request buffer. In one embodiment, a processor may be configured to detect that a victim block writeback hits one or more stores in a memory request buffer (or vice versa) and may convert the victim block writeback to a fill. In one embodiment, a processor may speculatively issue stores that are subsequent to a load from a load/store queue, but prevent the update for the stores in response to a snoop hit on the load.
    Type: Grant
    Filed: October 20, 2010
    Date of Patent: March 6, 2012
    Assignee: Apple Inc.
    Inventors: Ramesh Gunna, Sudarshan Kadambi
  • Publication number: 20120047331
    Abstract: Systems and methods for managing a storage device are disclosed. Generally, in a host to which a storage device is operatively coupled, wherein the storage device includes a cache for storing one or more discardable files, a file is identified to be uploaded to an external location. A determination is made whether sufficient free space exists in the cache to pre-stage the file for upload to the external location, and the file is stored in the cache upon determining that sufficient free space exists, wherein pre-staging prepares a file for opportunistic upload in accordance with an uploading policy.
    Type: Application
    Filed: September 30, 2010
    Publication date: February 23, 2012
    Inventors: Joseph R. Meza, Judah Gamliel Hahn, Henry Hutton, Leah Sherry
  • Patent number: 8122197
    Abstract: A method and apparatus for managing coherence between two processors of a two-processor node of a multi-processor computer system. Generally the present invention relates to a software algorithm that simplifies and significantly speeds the management of cache coherence in a message-passing parallel computer, and to hardware apparatus that assists this cache coherence algorithm. The software algorithm uses the opening and closing of put/get windows to coordinate the activities required to achieve cache coherence. The hardware apparatus may be an extension to the hardware address decode that creates, in the physical memory address space of the node, an area of virtual memory that (a) does not actually exist, and (b) is therefore able to respond instantly to read and write requests from the processing elements.
    Type: Grant
    Filed: August 19, 2009
    Date of Patent: February 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk Hoenicke, Martin Ohmacht
  • Patent number: 8117397
    Abstract: A cache memory includes a cache array including a plurality of congruence classes each containing a plurality of cache lines, where each cache line belongs to one of multiple classes which include at least a first class and a second class. The cache memory also includes a cache directory of the cache array that indicates class membership. The cache memory further includes a cache controller that selects a victim cache line for eviction from a congruence class. If the congruence class contains a cache line belonging to the second class, the cache controller preferentially selects as the victim cache line a cache line of the congruence class belonging to the second class based upon access order. If the congruence class contains no cache line belonging to the second class, the cache controller selects as the victim cache line a cache line belonging to the first class based upon access order.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: February 14, 2012
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Thomas L. Jeremiah, William L. McNeil, Piyush C. Patel, William J. Starke, Jeffrey A. Stuecheli
  • Patent number: 8108614
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: January 31, 2012
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
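The SDN-biased replacement above can be sketched as a two-stage victim search; representing lines as (tag, state, lru_age) tuples is an illustrative assumption:

```python
def choose_replacement(lines):
    """Illustrative replacement choice biased toward streaming data that
    is no longer needed. Each line is (tag, state, lru_age); state 'SDN'
    marks a line whose last-use streaming access has occurred. Higher
    lru_age = less recently used."""
    sdn = [line for line in lines if line[1] == "SDN"]
    pool = sdn if sdn else lines            # prefer SDN-state lines first,
    return max(pool, key=lambda line: line[2])[0]  # then fall back to LRU
```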
  • Patent number: 8108613
    Abstract: Provided are a method, system, and article of manufacture, wherein a request to write data to a storage medium is received. The data requested to be written to the storage medium is stored in a cache. A writing of the data is initiated to the storage medium. A periodic determination is made as to whether the stored data in the cache is the same as the data written to the storage medium.
    Type: Grant
    Filed: December 4, 2007
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: William John Durica, M. Amine Hajji, Joseph Smith Hyde, II, Ronald J. Venturi
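The periodic determination in this abstract amounts to re-reading written data from the medium and comparing it against the retained cache copy. A sketch under the assumption that the cache and the medium can be modeled as address-to-data mappings:

```python
def verify_cache(cache, medium):
    """Return the addresses whose cached data no longer matches what
    was written to the storage medium (candidates for rewrite)."""
    return [addr for addr, data in cache.items() if medium.get(addr) != data]
```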
  • Patent number: 8108198
    Abstract: A system and method are disclosed to trace memory in a hardware emulator. In one aspect, a first Random Access Memory is used to store data associated with a user design during emulation. At any desired point in time, the contents of the first Random Access Memory are captured in a second Random Access Memory. After the capturing, the contents of the second Random Access Memory are copied to a visibility system. During the copying, the user design may modify the data in the first Random Access Memory while the captured contents within the second Random Access Memory remain unmodifiable so that the captured contents are not compromised. In another aspect, memories of different sizes are used in the emulator to emulate the user model. Larger memories have their ports monitored to reconstruct the contents of the memories, while smaller memories are captured in a snapshot RAM. Together the two different modes of tracing memory provide visibility to the user of the entire user memory.
    Type: Grant
    Filed: February 21, 2007
    Date of Patent: January 31, 2012
    Assignee: Mentor Graphics Corporation
    Inventors: Peer Schmitt, Philippe Diehl, Charles Selvidge, Cyril Quennesson
  • Patent number: 8095738
    Abstract: A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein.
    Type: Grant
    Filed: June 15, 2009
    Date of Patent: January 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: Michael Thomas Benhase, Lawrence Yiumchee Chiu, Lokesh Mohan Gupta, Yu-Cheng Hsu
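The eviction policy above orders demotion by two properties: device speed first, then read-hit ratio. The abstract only states the ordering "all other variables being equal"; the lexicographic key below is our assumption about how to combine the two:

```python
def eviction_order(entries):
    """entries: list of (name, device_is_fast, read_hit_ratio) tuples.
    Returns names sorted so the first element is demoted first: entries
    backed by fast devices go before slow ones, and within a speed class
    a lower read-hit ratio is demoted sooner."""
    return [e[0] for e in sorted(entries, key=lambda e: (not e[1], e[2]))]
```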
  • Patent number: 8074026
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: May 10, 2006
    Date of Patent: December 6, 2011
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
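The core of the scatter/gather idea above is touching only the useful elements at computed addresses rather than streaming whole contiguous blocks. A toy sketch (the index lists stand in for the hardware's address calculation):

```python
def gather(memory, indices):
    """Fetch only the needed elements (fine-granularity reads)."""
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    """Write values back to scattered addresses in place."""
    for i, v in zip(indices, values):
        memory[i] = v
```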
  • Patent number: 8065488
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Grant
    Filed: October 20, 2010
    Date of Patent: November 22, 2011
    Assignee: Intel Corporation
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin
  • Patent number: 8060700
    Abstract: A system and method for cleaning dirty data in an intermediate cache are disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.
    Type: Grant
    Filed: December 8, 2008
    Date of Patent: November 15, 2011
    Assignee: NVIDIA Corporation
    Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, James Roberts, John H. Edmondson
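The notification sorter described above can be sketched as a map keyed by DRAM bank page, each entry carrying the two counts from the abstract. The threshold value and function names are illustrative assumptions:

```python
THRESHOLD = 4  # assumed flush threshold, not specified in the abstract

def note_dirty(sorter, bank_page, evict_first):
    """Record one dirty-data notification for a DRAM bank page.
    Returns True when the dirty-line count reaches the threshold,
    i.e. when the frame buffer logic should flush this entry."""
    entry = sorter.setdefault(bank_page, {"dirty": 0, "evict_first": 0})
    entry["dirty"] += 1
    if evict_first:
        entry["evict_first"] += 1
    return entry["dirty"] >= THRESHOLD
```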
  • Patent number: 8060689
    Abstract: A method includes configuring a flash memory device including a first memory sector having a primary memory sector correspondence, a second memory sector having an alternate memory sector correspondence, and a third memory sector having a free memory sector correspondence, copying a portion of the primary memory sector to the free memory sector, erasing the primary memory sector, and changing a correspondence of each of the first memory sector, the second memory sector, and the third memory sector.
    Type: Grant
    Filed: May 4, 2010
    Date of Patent: November 15, 2011
    Assignee: Pitney Bowes Inc.
    Inventors: Wesley A. Kirschner, Gary S. Jacobson, John A. Hurd, G. Thomas Atthens, Steven J. Pauly, Richard C. Day, Jr.
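The copy-erase-rotate sequence in this abstract can be sketched with three named roles over a set of sectors. Representing sectors as byte arrays and roles as a dict is our simplification (the abstract copies only "a portion" of the primary; the sketch copies the whole sector):

```python
def rotate_sectors(sectors, roles):
    """sectors: {name: bytearray}; roles: {'primary', 'alternate', 'free'}.
    Copy the primary to the free sector, erase the primary, then swap
    the primary/free correspondences."""
    primary, free = roles["primary"], roles["free"]
    sectors[free] = bytearray(sectors[primary])          # copy live data
    sectors[primary] = bytearray(len(sectors[primary]))  # erase (zeros)
    roles["primary"], roles["free"] = free, primary      # rotate roles
    return roles
```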
  • Patent number: 8051271
    Abstract: Address translation circuitry for translating virtual addresses to physical addresses for a data processor in response to access requests from said data processor targeting virtual addresses is disclosed.
    Type: Grant
    Filed: July 1, 2008
    Date of Patent: November 1, 2011
    Assignee: ARM Limited
    Inventors: Jeremy Piers Davies, David Hennah Mansell, Richard Roy Grisenthwaite
  • Patent number: 8015383
    Abstract: Management of virtual memory allocated by a virtual machine control program to a plurality of virtual machines. Each of the virtual machines has an allocation of virtual private memory divided into working memory, cache memory and swap memory. The virtual machine control program determines that it needs additional virtual memory allocation, and in response, makes respective requests to the virtual machines to convert some of their respective working memory and/or cache memory to swap memory. At another time, the virtual machine control program determines that it needs less virtual memory allocation, and in response, makes respective requests to the virtual machines to convert some of their respective swap memory to working memory and/or cache memory.
    Type: Grant
    Filed: June 27, 2007
    Date of Patent: September 6, 2011
    Assignee: International Business Machines Corporation
    Inventors: Steven Shultz, Xenia Tkatschow
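The conversion requests described above can be sketched as page-count bookkeeping per virtual machine. Preferring to convert cache memory before working memory is our assumption; the abstract leaves the split open:

```python
def convert(vm, amount, to_swap):
    """vm: {'working': n, 'cache': n, 'swap': n} page counts.
    Move `amount` pages toward swap (control program needs memory)
    or back toward working memory (control program needs less)."""
    if to_swap:
        take = min(amount, vm["cache"])            # convert cache pages first
        vm["cache"] -= take
        rest = min(amount - take, vm["working"])   # then working pages
        vm["working"] -= rest
        vm["swap"] += take + rest
    else:
        give = min(amount, vm["swap"])
        vm["swap"] -= give
        vm["working"] += give
    return vm
```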
  • Patent number: 7996875
    Abstract: An adaptive timeshift service is described. In embodiment(s), television content can be distributed from a live content server to television client devices, and the television content that is distributed from the live content server can be recorded at a timeshift server. Recorded television content can then be distributed from the timeshift server when requested by a television client device. An additional timeshift server can be allocated, and both the television content from the live content server and the recorded television content from the timeshift server can be written to a buffer of the additional timeshift server.
    Type: Grant
    Filed: May 20, 2008
    Date of Patent: August 9, 2011
    Assignee: Microsoft Corporation
    Inventors: Terry Q Guo, Hui Wan
  • Patent number: 7996615
    Abstract: A method to associate a storage policy with a cache region is disclosed. In this method, a cache region associated with an application is created. The application runs on virtual machines, where a first virtual machine has a local memory cache that is private to the first virtual machine. The first virtual machine additionally has a shared memory cache that is shared by the first virtual machine and a second virtual machine. Additionally, the cache region is associated with a storage policy. Here, the storage policy specifies that a first copy of an object to be stored in the cache region is to be stored in the local memory cache and that a second copy of the object to be stored in the cache region is to be stored in the shared memory cache.
    Type: Grant
    Filed: July 7, 2010
    Date of Patent: August 9, 2011
    Assignee: SAP AG
    Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio G. Petev
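The storage policy in this abstract prescribes two copies of each stored object: one in the VM-local cache and one in the shared cache. A sketch with plain dicts standing in for the two memory caches (the read-side preference for the local copy is our assumption):

```python
def put_in_region(local_cache, shared_cache, key, obj):
    """Apply the region's storage policy: one copy in the local
    memory cache, a second copy in the shared memory cache."""
    local_cache[key] = obj
    shared_cache[key] = obj

def get_from_region(local_cache, shared_cache, key):
    """Reads prefer the private local copy, falling back to shared."""
    return local_cache.get(key, shared_cache.get(key))
```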
  • Patent number: 7984243
    Abstract: A cache memory according to the present invention includes a W flag setting unit, which modifies order data indicating the access order of each cache entry holding a data unit of the cache so as to reflect the actual access order, and a replace unit, which selects a cache entry for replacement based on the modified order data and replaces that entry.
    Type: Grant
    Filed: November 2, 2004
    Date of Patent: July 19, 2011
    Assignee: Panasonic Corporation
    Inventors: Hazuki Kawai, Ryuta Nakanishi, Tetsuya Tanaka, Shuji Miyasaka
  • Patent number: 7970989
    Abstract: A hard disk cache includes entries to be written to a disk, and also includes ordering information describing the order that they should be written to the disk. Data may be written from the cache to the disk in the order specified by the ordering information. In some situations, data may be written out of order. Further, in some situations, clean data from the cache may be combined with dirty data from the cache when performing a cache flush.
    Type: Grant
    Filed: June 30, 2006
    Date of Patent: June 28, 2011
    Assignee: Intel Corporation
    Inventor: Jeanna N. Matthews
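The flush behavior above can be sketched as: write dirty entries in the recorded order, and pull in clean entries that sit next to a dirty sector so the disk write stays sequential. The sector-adjacency rule below is our assumption about how clean and dirty data get combined:

```python
def build_flush(cache):
    """cache: list of (order, sector, dirty) tuples.
    Returns the sectors to write: dirty entries in their recorded
    order, followed by clean sectors adjacent to a dirty one."""
    dirty = sorted((e for e in cache if e[2]), key=lambda e: e[0])
    dirty_sectors = {e[1] for e in dirty}
    writes = [e[1] for e in dirty]
    for order, sector, is_dirty in cache:
        if not is_dirty and (sector - 1 in dirty_sectors or sector + 1 in dirty_sectors):
            writes.append(sector)   # combine adjacent clean data
    return writes
```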
  • Patent number: 7962499
    Abstract: In an example of an embodiment of the invention, a repeating pattern is identified within stored data comprising a plurality of data files, each data file comprising at least a header section and a data section stored in an unknown format. At least one occurrence of the repeating pattern is identified as a header section of a respective data file, and a data section of the respective data file is identified based, at least in part, on a location of the at least one occurrence of the repeating pattern. The identified data section of the respective data file is backed up. Systems are also disclosed.
    Type: Grant
    Filed: August 16, 2007
    Date of Patent: June 14, 2011
    Assignee: FalconStor, Inc.
    Inventor: Wai Lam
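The pattern step in this abstract can be sketched as: find every occurrence of the repeating byte pattern, treat each as a header, and take the bytes up to the next occurrence as that file's data section. Fixed-length headers are an assumption we add for the sketch:

```python
def split_sections(blob, pattern):
    """Return (header_offset, data_bytes) pairs, one per occurrence
    of the repeating pattern in the stored data."""
    offsets = []
    i = blob.find(pattern)
    while i != -1:
        offsets.append(i)
        i = blob.find(pattern, i + 1)
    sections = []
    for j, off in enumerate(offsets):
        start = off + len(pattern)
        end = offsets[j + 1] if j + 1 < len(offsets) else len(blob)
        sections.append((off, blob[start:end]))
    return sections
```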
  • Patent number: 7958311
    Abstract: Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: June 7, 2011
    Assignee: International Business Machines Corporation
    Inventors: Richard Edward Matick, Jaime H. Moreno, Malcolm Scott Ware
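The selectable criteria above reduce to a min/max choice over per-line frequency state. Representing the state as a simple access counter per line is our simplification of the patent's state information:

```python
def choose_line(lines, mode):
    """lines: {tag: access_count}; mode: 'LFU' or 'MFU'.
    Returns the tag of the line to replace on a miss."""
    if mode == "LFU":
        return min(lines, key=lines.get)
    if mode == "MFU":
        return max(lines, key=lines.get)
    raise ValueError("mode must be 'LFU' or 'MFU'")
```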
  • Patent number: 7949834
    Abstract: According to the methods and apparatus taught herein, processor caching policies are determined using cache policy information associated with a target memory device accessed during a memory operation. According to one embodiment of a processor, the processor comprises at least one cache and a memory management unit. The at least one cache is configured to store information local to the processor. The memory management unit is configured to set one or more cache policies for the at least one cache. The memory management unit sets the one or more cache policies based on cache policy information associated with one or more target memory devices configured to store information used by the processor.
    Type: Grant
    Filed: January 24, 2007
    Date of Patent: May 24, 2011
    Assignee: QUALCOMM Incorporated
    Inventor: Michael William Morrow
  • Publication number: 20110099333
    Abstract: A method and apparatus for efficiently caching streaming and non-streaming data is described herein. Software, such as a compiler, identifies last use streaming instructions/operations that are the last instruction/operation to access streaming data for a number of instructions or an amount of time. As a result of performing an access to a cache line for a last use instruction/operation, the cache line is updated to a streaming data no longer needed (SDN) state. When control logic is to determine a cache line to be replaced, a modified Least Recently Used (LRU) algorithm is biased to select SDN state lines first to replace no longer needed streaming data.
    Type: Application
    Filed: October 20, 2010
    Publication date: April 28, 2011
    Inventors: Eric Sprangle, Anwar Rohillah, Robert Cavin