Combined Replacement Modes Patents (Class 711/134)
-
Patent number: 8898389
Abstract: A mechanism is provided for managing a high speed memory. An index entry indicates a storage unit in the high speed memory. A corresponding non-free index is set for a different type of low speed memory. The indicated storage unit in the high speed memory is assigned to a corresponding low speed memory by including the index entry in the non-free index. The storage unit in the high speed memory is recovered by demoting the index entry from the non-free index. The mechanism acquires a margin performance loss corresponding to a respective non-free index in response to receipt of a demotion request. The mechanism compares the margin performance losses of the respective non-free indexes, selects a non-free index whose margin performance loss satisfies a demotion condition as a demotion index, and selects an index entry from the demotion index to perform the demotion operation.
Type: Grant
Filed: March 8, 2013
Date of Patent: November 25, 2014
Assignee: International Business Machines Corporation
Inventors: Xue D. Gao, Chao G. Li, Yang Liu, Yi Yang
-
Patent number: 8886886
Abstract: Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.
Type: Grant
Filed: September 28, 2012
Date of Patent: November 11, 2014
Assignee: Apple Inc.
Inventors: Sukalpa Biswas, Shinye Shiu, James Wang, Robert Hu
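The walk described above can be sketched in a few lines. This is an illustrative model, not Apple's implementation; the names (`TagEntry`, `sticky_walk`) and the single-pass structure are our assumptions.

```python
# Sketch of a sticky-removal walk over tag memory that absorbs a second
# group-ID release request mid-pass, recording the index at which it
# arrived and wrapping around to finish the second group's clearing.
from dataclasses import dataclass

@dataclass
class TagEntry:
    group_id: int
    sticky: bool = True

def sticky_walk(tags, first_gid, second_release_at=None, second_gid=None):
    """Clear sticky bits for first_gid; if a release for second_gid arrives
    at index `second_release_at`, record that index, match both IDs from
    there on, and wrap around until the recorded index is reached again."""
    active = {first_gid}
    recorded = None
    for i in range(len(tags)):
        if second_release_at is not None and i == second_release_at \
                and second_gid not in active:
            recorded = i              # log progress for the second group ID
            active.add(second_gid)
        if tags[i].group_id in active:
            tags[i].sticky = False
    if recorded is not None:
        # wrap around to the start and finish the second group's pass
        for j in range(recorded):
            if tags[j].group_id == second_gid:
                tags[j].sticky = False
    return tags
```

Note how entries of the second group that sit before the recorded index are only cleared on the wrap-around leg, matching the abstract's description.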
-
Patent number: 8879370
Abstract: An optical disk apparatus which conducts overwriting of data on a rewritable optical disk or conducts write-once recording of data on a write-once optical disk includes a control unit for receiving a recording command which specifies a recording area and orders recording and receiving transfer data, and a collation unit for collating existing data on the optical disk with the transfer data. Upon reception of the recording command and the transfer data by the control unit, the existing data is collated with the transfer data by the collation unit, and overwrite recording of data in places where the transfer data is different from the existing data is conducted on the rewritable optical disk, or data in places where the transfer data is different from the existing data is recorded in an unrecorded area of the write-once optical disk.
Type: Grant
Filed: February 16, 2012
Date of Patent: November 4, 2014
Assignees: Hitachi Consumer Electronics Co., Ltd., Hitachi-LG Data Storage, Inc.
Inventor: Masayuki Kobayashi
-
Patent number: 8868842
Abstract: A WC resource usage is compared with an auto flush (AF) threshold Caf that is smaller than an upper limit Clmt, and when the WC resource usage exceeds the AF threshold Caf, the organizing state of a NAND memory 10 is checked. When the organizing of the NAND memory 10 has proceeded sufficiently, data is flushed from a write cache (WC) 21 to the NAND memory 10 early, so that the response to the subsequent write command is improved.
Type: Grant
Filed: December 28, 2009
Date of Patent: October 21, 2014
Assignee: Kabushiki Kaisha Toshiba
Inventors: Hirokuni Yano, Ryoichi Kato, Toshikatsu Hida
-
Patent number: 8862848
Abstract: A data storage system comprises a controller, a first lower performance storage medium and a second higher performance storage medium. The controller is connected to the storage mediums and is arranged to control I/O access to the storage mediums. The controller is further arranged to store an image on the first storage medium, initiate a copy function from the first storage medium to the second storage medium, direct all I/O access for the image to the second storage medium, periodically age data from the second storage medium to the first storage medium, create a new empty bitmap for each period, and in response to an I/O access for data in the image, update the latest bitmap to indicate that the data has been accessed and update the previous bitmaps to indicate that the data has not been accessed.
Type: Grant
Filed: August 25, 2010
Date of Patent: October 14, 2014
Assignee: International Business Machines Corporation
Inventors: Carlos Francisco Fuente, William James Scales, Barry Douglas Whyte
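The per-period bitmap scheme above can be modeled compactly. This is a toy sketch under assumed names (`TieredImage`, `age_candidates`), not IBM's implementation: each period gets a fresh bitmap, an access marks the block in the newest bitmap and unmarks it in the older ones, and blocks marked in no bitmap become candidates to age back to the slower medium.

```python
# Toy model of per-period access bitmaps for aging data between a fast
# and a slow storage medium.
class TieredImage:
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.bitmaps = [[False] * num_blocks]   # newest bitmap is last

    def new_period(self):
        # create a new empty bitmap for the new period
        self.bitmaps.append([False] * self.num_blocks)

    def access(self, block):
        for bm in self.bitmaps[:-1]:
            bm[block] = False                   # "not accessed" in older maps
        self.bitmaps[-1][block] = True          # "accessed" in the latest map

    def age_candidates(self):
        # blocks with no set bit in any period can move to the slower medium
        return [b for b in range(self.num_blocks)
                if not any(bm[b] for bm in self.bitmaps)]
```

A block touched in any tracked period stays on the fast medium; untouched blocks are returned by `age_candidates()` for the periodic aging pass.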
-
Patent number: 8843721
Abstract: A data storage system comprises a controller, a first lower performance storage medium and a second higher performance storage medium. The controller is connected to the storage mediums and is arranged to control I/O access to the storage mediums. The controller is further arranged to store an image on the first storage medium, initiate a copy function from the first storage medium to the second storage medium, direct all I/O access for the image to the second storage medium, periodically age data from the second storage medium to the first storage medium, create a new empty bitmap for each period, and in response to an I/O access for data in the image, update the latest bitmap to indicate that the data has been accessed and update the previous bitmaps to indicate that the data has not been accessed.
Type: Grant
Filed: March 14, 2013
Date of Patent: September 23, 2014
Assignee: International Business Machines Corporation
Inventors: Carlos Francisco Fuente, William James Scales, Barry Douglas Whyte
-
Patent number: 8832383
Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement ("a replacement entry") based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can "skip" the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
Type: Grant
Filed: May 20, 2013
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
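The "skip" idea admits a short sketch. The names and the per-entry skip counter below are our assumptions, not IBM's implementation: a victim chosen by the base policy is skipped, up to a budget, if it is protected (e.g., a large-page entry) and an unprotected alternative exists.

```python
# Sketch of skip-based protection for valuable cache entries.
def choose_victim(entries, base_policy, is_protected, skip_counts, max_skips=2):
    """entries: list of entry ids; base_policy: fn(entries) -> ranked victims;
    skip_counts: dict tracking how often each protected entry was skipped."""
    ranked = base_policy(entries)
    first = ranked[0]
    if is_protected(first) and skip_counts.get(first, 0) < max_skips:
        for alt in ranked[1:]:
            if not is_protected(alt):
                skip_counts[first] = skip_counts.get(first, 0) + 1
                return alt          # replace the unprotected entry instead
    # no unprotected alternative, or the skip budget is exhausted
    skip_counts.pop(first, None)
    return first
```

Once a protected entry has been skipped `max_skips` times it is evicted anyway, so protection delays but never permanently blocks replacement.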
-
Patent number: 8825951
Abstract: A mechanism is provided for managing a high speed memory. An index entry indicates a storage unit in the high speed memory. A corresponding non-free index is set for a different type of low speed memory. The indicated storage unit in the high speed memory is assigned to a corresponding low speed memory by including the index entry in the non-free index. The storage unit in the high speed memory is recovered by demoting the index entry from the non-free index. The mechanism acquires a margin performance loss corresponding to a respective non-free index in response to receipt of a demotion request. The margin performance loss represents a change in a processor read operation time caused by performing a demotion operation in a corresponding non-free index. The mechanism compares the margin performance losses of the respective non-free indexes and selects a non-free index whose margin performance loss satisfies a demotion condition as a demotion index.
Type: Grant
Filed: March 27, 2012
Date of Patent: September 2, 2014
Assignee: International Business Machines Corporation
Inventors: Xue D. Gao, Chao Guang Li, Yang Liu, Yi Yang
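A hypothetical sketch of the demotion-index selection follows. The function names and the "minimum loss" demotion condition are our assumptions; the patent only requires that the chosen index's margin performance loss satisfy some demotion condition.

```python
# Sketch: pick the non-free index whose demotion would cost the least
# read-time penalty, then demote its least-recently-used entry.
def pick_demotion_index(non_free_indexes, margin_loss):
    """non_free_indexes: dict name -> list of index entries (LRU first).
    margin_loss: fn(name) -> estimated read-time penalty of demoting there."""
    candidates = [n for n, entries in non_free_indexes.items() if entries]
    if not candidates:
        return None, None
    victim_index = min(candidates, key=margin_loss)   # smallest penalty wins
    entry = non_free_indexes[victim_index].pop(0)     # demote its LRU entry
    return victim_index, entry
```

In this model a demotion request frees a high-speed storage unit from whichever low-speed memory's index can best tolerate losing it.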
-
Patent number: 8812791
Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
Type: Grant
Filed: October 8, 2013
Date of Patent: August 19, 2014
Assignee: Google Inc.
Inventors: Timo Burkard, David Presotto
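The duration test can be sketched as follows. The names and the "evict the item with the longest gap" choice are our assumptions for a full cache; the claims only require that a re-requested item be cached when its request gap is shorter than some cached item's gap.

```python
# Minimal sketch of inter-request-gap admission: cache a re-requested
# item only if its gap beats the worst gap currently in the cache.
def maybe_cache(cache, last_seen, item, now):
    """cache: dict item -> gap between its last two requests;
    last_seen: dict item -> time of its last request."""
    prev = last_seen.get(item)
    last_seen[item] = now
    if prev is None:
        return False                      # first request: do not cache
    gap = now - prev
    if not cache:
        cache[item] = gap
        return True
    worst = max(cache, key=cache.get)     # cached item with the longest gap
    if gap < cache[worst]:
        del cache[worst]                  # evict it in favour of this item
        cache[item] = gap
        return True
    return False
```

The effect is a frequency-biased admission filter: one-shot requests never pollute the cache, and hot items displace items requested at longer intervals.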
-
Patent number: 8805885
Abstract: Under the present invention, a hierarchical tree and corresponding Least Recently Used (LRU) list are provided. Both include a predetermined quantity of nodes that are associated with invariant data objects. The nodes of the tree typically include a set of pointers that indicate a position/arrangement of the associated invariant data objects in the LRU list, and a set of pointers that indicate a logical relationship among the other nodes.
Type: Grant
Filed: October 30, 2007
Date of Patent: August 12, 2014
Assignee: International Business Machines Corporation
Inventors: Peter W. Burka, Barry M. Genova
-
Publication number: 20140215160
Abstract: A method of using a buffer within an indexing accelerator during periods of inactivity, comprising flushing indexing specific data located in the buffer, disabling a controller within the indexing accelerator, handing control of the buffer over to a higher level cache, and selecting one of a number of operation modes of the buffer. An indexing accelerator, comprising a controller and a buffer communicatively coupled to the controller, in which, during periods of inactivity, the controller is disabled and a buffer operating mode among a number of operating modes is chosen under which the buffer will be used.
Type: Application
Filed: January 30, 2013
Publication date: July 31, 2014
Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
-
Patent number: 8793355
Abstract: Techniques for directory data resolution are disclosed. In one particular exemplary embodiment, the techniques may be realized as a method for directory data resolution comprising receiving data identifying one or more groups of interest of a directory server, traversing, using a processor, one or more directory entries contained in hierarchical directory data, the traversal starting at a directory entry corresponding to a current group of interest, reading a first directory entry to identify a member contained in the first directory entry, adding, in the event a member is contained in the first directory entry, the current group of interest to a mapping for the member. The method may also include use of caching and recursion.
Type: Grant
Filed: April 27, 2010
Date of Patent: July 29, 2014
Assignee: Symantec Corporation
Inventors: Nathan Moser, Ayman Mobarak, Chad Jamart
-
Patent number: 8769209
Abstract: An apparatus and method for improving cache performance in a computer system having a multi-level cache hierarchy. For example, one embodiment of a method comprises: selecting a first line in a cache at level N for potential eviction; querying a cache at level M in the hierarchy to determine whether the first cache line is resident in the cache at level M, wherein M&lt;N; in response to receiving an indication that the first cache line is not resident at level M, then evicting the first cache line from the cache at level N; in response to receiving an indication that the first cache line is resident at level M, then retaining the first cache line and choosing a second cache line for potential eviction.
Type: Grant
Filed: December 20, 2010
Date of Patent: July 1, 2014
Assignee: Intel Corporation
Inventors: Aamer Jaleel, Simon C. Steely, Jr., Eric R. Borch, Malini K. Bhandaru, Joel S. Emer
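The method above is easy to sketch. The naming is ours, not Intel's: before evicting from the outer (level-N) cache, the inner (level-M) cache is queried, and lines still resident there are retained.

```python
# Sketch of inclusion-aware victim selection: skip eviction candidates
# that are still resident in a smaller, inner cache level.
def select_eviction(candidates, resident_at_level_m):
    """candidates: eviction order from the level-N base policy;
    resident_at_level_m: fn(line) -> bool query to the inner cache."""
    for line in candidates:
        if not resident_at_level_m(line):
            return line               # safe to evict: not hot in the inner level
    return candidates[0]              # all resident: fall back to base choice
```

This avoids the pathology where the outer cache evicts a line the core is actively using through the inner cache, forcing a needless refetch from memory.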
-
Publication number: 20140181387
Abstract: Data caching methods and systems are provided. A method is provided for a hybrid cache system that dynamically changes modes of one or more cache rows of a cache between an un-split mode having a first tag field and a first data field to a split mode having a second tag field, a second data field being smaller than the first data field, and a mapped page field to improve the cache access efficiency of a workflow being executed in a processor. A hybrid cache system is provided in which the cache is configured to operate one or more cache rows in an un-split mode or in a split mode. The system is configured to dynamically change modes of the cache rows from the un-split mode to the split mode to improve the cache access efficiency of a workflow being executed by the processor.
Type: Application
Filed: December 21, 2012
Publication date: June 26, 2014
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: Matthew R. Poremba, Gabriel H. Loh
-
Publication number: 20140164709
Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache including a cache controller (122); and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit is adapted to create a log (200) in the memory prior to running the virtual machine in said first operating mode; the cache controller is adapted to transfer a modified cache line from the cache to the memory and write only the memory address of the transferred modified cache line in the log; and the processor unit is further adapted to update a further image of the virtual machine in a different memory location, e.g. on another computer system, by retrieving the memory addresses stored in the log, retrieve the modified cache lines from the memory addresses and update the further image with said modifications.
Type: Application
Filed: December 11, 2012
Publication date: June 12, 2014
Applicant: International Business Machines Corporation
Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, William J. Starke
-
Publication number: 20140136791
Abstract: A system and method for managing data within a cache is described. In some example embodiments, the system identifies and/or tracks consumers of data located within a cache, and maintains the data within the cache based on determining whether there is an active consumer of the data.
Type: Application
Filed: November 9, 2012
Publication date: May 15, 2014
Applicant: SAP AG
Inventor: Toni Fabijancic
-
Publication number: 20140129778
Abstract: An apparatus for use in a telecommunications system comprises a cache memory shared by multiple clients and a controller for controlling the shared cache memory. A method of controlling the cache operation in a shared cache memory apparatus is also disclosed. The apparatus comprises a cache memory accessible by a plurality of clients and a controller configured to allocate cache lines of the cache memory to each client according to a line configuration. The line configuration comprises, for each client, a maximum allocation of cache lines that each client is permitted to access. The controller is configured to, in response to a memory request from one of the plurality of clients that has reached its maximum allocation of cache lines, allocate a replacement cache line to the client from cache lines already allocated to the client when no free cache lines in the cache are available.
Type: Application
Filed: November 2, 2012
Publication date: May 8, 2014
Applicant: RESEARCH IN MOTION LIMITED
Inventor: Simon John DUGGINS
-
Patent number: 8719511
Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
Type: Grant
Filed: October 8, 2013
Date of Patent: May 6, 2014
Assignee: Google Inc.
Inventors: Timo Burkard, David Presotto
-
Publication number: 20140095799
Abstract: Systems and methods may provide for determining whether a memory access request is error-tolerant, and routing the memory access request to a reliable memory region if the memory access request is not error-tolerant. Moreover, the memory access request may be routed to an unreliable memory region if the memory access request is error-tolerant. In one example, use of the unreliable memory region enables a reduction in the minimum operating voltage level for a die containing the reliable and unreliable memory regions.
Type: Application
Filed: September 29, 2012
Publication date: April 3, 2014
Inventors: Zhen Fang, Shih-Lien Lu, Ravishankar Iyer, Srihari Makineni
-
Publication number: 20140089595
Abstract: Embodiments of the invention describe an apparatus, system and method for utilizing a utility and lifetime based cache replacement policy as described herein. For processors having one or more processor cores and a cache memory accessible via the processor core(s), embodiments of the invention describe a cache controller to determine, for a plurality of cache blocks in the cache memory, an estimated utility and lifetime of the contents of each cache block, the utility of a cache block to indicate a likelihood of use of its contents, the lifetime of a cache block to indicate a duration of use of its contents. Upon receiving a cache access request resulting in a cache miss, said cache controller may select one of the cache blocks to be replaced based, at least in part, on one of the estimated utility or estimated lifetime of the cache block.
Type: Application
Filed: December 23, 2011
Publication date: March 27, 2014
Inventors: Nevin Hyuseinova, Qiong Cai, Serkan Ozdemir, Ayose J. Falcon
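One way to combine the two estimates is a simple product score. The combination rule is our assumption; the publication only says the victim choice is "based, at least in part" on the estimated utility or lifetime.

```python
# Toy victim selection for a utility-and-lifetime replacement policy:
# evict the block with the lowest product of estimated utility
# (likelihood of reuse) and estimated remaining lifetime (duration of use).
def pick_victim(blocks):
    """blocks: list of (tag, utility, lifetime) tuples."""
    return min(blocks, key=lambda b: b[1] * b[2])[0]
```

Under this scoring a block that is both unlikely to be reused and near the end of its useful life is replaced first, while a block strong on either axis survives longer.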
-
Patent number: 8683128
Abstract: A data processing system includes a multi-level cache hierarchy including a lowest level cache, a processor core coupled to the multi-level cache hierarchy, and a memory controller coupled to the lowest level cache and to a memory bus of a system memory. The memory controller includes a physical read queue that buffers data read from the system memory via the memory bus and a physical write queue that buffers data to be written to the system memory via the memory bus. The memory controller grants priority to write operations over read operations on the memory bus based upon a number of dirty cachelines in the lowest level cache memory.
Type: Grant
Filed: May 7, 2010
Date of Patent: March 25, 2014
Assignee: International Business Machines Corporation
Inventors: David M. Daly, Benjiman L. Goodman, Hillery C. Hunter, William J. Starke, Jeffrey A. Stuecheli
-
Patent number: 8677071
Abstract: Techniques are described for controlling processor cache memory within a processor system. Cache occupancy values for each of a plurality of entities executing in the processor system can be calculated. A cache replacement algorithm uses the cache occupancy values when making subsequent cache line replacement decisions. In some variations, entities can have occupancy profiles specifying a maximum cache quota and/or a minimum cache quota which can be adjusted to achieve desired performance criteria. Related methods, systems, and articles are also described.
Type: Grant
Filed: March 25, 2011
Date of Patent: March 18, 2014
Assignee: Virtualmetrix, Inc.
Inventors: Gary Allen Gibson, Valeri Popescu
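Quota-constrained replacement can be sketched as follows. The names and the preference order (shrink over-quota entities first, never shrink below a minimum) are our assumptions about how such occupancy profiles might be applied.

```python
# Sketch: choose which entity should give up a cache line, honoring
# per-entity (min_quota, max_quota) occupancy profiles.
def choose_entity_to_shrink(occupancy, quotas):
    """occupancy: entity -> lines currently held;
    quotas: entity -> (min_quota, max_quota)."""
    # first, shrink whoever is furthest over its maximum quota
    over = [e for e, n in occupancy.items() if n > quotas[e][1]]
    if over:
        return max(over, key=lambda e: occupancy[e] - quotas[e][1])
    # otherwise shrink the largest holder still above its minimum quota
    shrinkable = [e for e, n in occupancy.items() if n > quotas[e][0]]
    return max(shrinkable, key=occupancy.get) if shrinkable else None
```

Returning `None` when every entity sits at its minimum models the case where the replacement algorithm must fall back to some other victim-selection rule.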
-
Patent number: 8671248
Abstract: Memory Access Coloring provides architecture support that allows software to classify memory accesses into different congruence classes by specifying a color for each memory access operation. The color information is received and recorded by the underlying system with appropriate granularity. This allows hardware to monitor color-based cache monitoring information and provide such feedback to the software to enable various runtime optimizations. It also enables enforcement of different memory consistency models for memory regions with different colors at the same time.
Type: Grant
Filed: January 5, 2007
Date of Patent: March 11, 2014
Assignee: International Business Machines Corporation
Inventors: Xiaowei Shen, Robert W. Wisniewski, Orran Krieger
-
Patent number: 8656185
Abstract: A method and apparatus for preventing compromise of data stored in a memory by assuring the deletion of data and minimizing data remanence effects is disclosed. The method comprises the steps of monitoring the memory to detect tampering, and if tampering is detected, generating second signals having second data differing from the first data autonomously from the first processor; providing the generated second signals to the input of the memory; and storing the second data in the memory. Several embodiments are disclosed, including self-powered embodiments and those which use separate, dedicated processors to generate, apply, and verify the zeroization data.
Type: Grant
Filed: July 28, 2005
Date of Patent: February 18, 2014
Assignee: SafeNet, Inc.
Inventors: Michael Masaji Furusawa, Chieu The Nguyen
-
Patent number: 8645627
Abstract: A data processing system includes a multi-level cache hierarchy including a lowest level cache, a processor core coupled to the multi-level cache hierarchy, and a memory controller coupled to the lowest level cache and to a memory bus of a system memory. The memory controller includes a physical read queue that buffers data read from the system memory via the memory bus and a physical write queue that buffers data to be written to the system memory via the memory bus. The memory controller grants priority to write operations over read operations on the memory bus based upon a number of dirty cachelines in the lowest level cache memory.
Type: Grant
Filed: April 16, 2012
Date of Patent: February 4, 2014
Assignee: International Business Machines Corporation
Inventors: David M. Daly, Benjiman L. Goodman, Hillery C. Hunter, William J. Starke, Jeffrey A. Stuecheli
-
Publication number: 20130339623
Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
Type: Application
Filed: January 22, 2013
Publication date: December 19, 2013
Applicant: International Business Machines Corporation
Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Arthur J. O'Neill
-
Publication number: 20130339622
Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
Type: Application
Filed: June 14, 2012
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Ekaterina M. Ambroladze, Michael Blake, Tim Bronson, Garrett Drapala, Pak-kin Mak, Arthur J. O'Neill
-
Patent number: 8607001
Abstract: Embodiments of the present invention provide a method, an apparatus, and a proxy server for selecting cache replacement policies to reduce manual participation and switch cache replacement policies automatically. The method includes: obtaining statistical data of multiple cache replacement policies that are running simultaneously; and switching, according to an event of policy decision for cache replacement policies and the statistical data, an active cache replacement policy to a cache replacement policy that complies with a policy decision requirement. The automatic switching of cache replacement policies lowers the technical requirements on administrators. In addition, in the operation process of a proxy cache, a cache replacement policy that is applicable to a current scenario and meets a performance expectation of a user can be selected automatically, so as to make the technical solution feature good adaptability.
Type: Grant
Filed: August 11, 2011
Date of Patent: December 10, 2013
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yuping Zhao, Hanyu Wei, Hao Wang, Jian Chen
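The switching step can be sketched in a few lines. The structure below (shadow hit ratios plus a hysteresis margin) is our assumption, not Huawei's method: several policies run simultaneously gathering statistics, and on a policy-decision event the best performer becomes the active policy.

```python
# Sketch of automatic cache-replacement-policy switching driven by
# statistics gathered from simultaneously running (shadow) policies.
def switch_policy(active, stats, min_improvement=0.02):
    """stats: policy name -> observed hit ratio from its shadow run.
    min_improvement adds hysteresis so noise does not cause flapping."""
    best = max(stats, key=stats.get)
    if best != active and stats[best] - stats[active] > min_improvement:
        return best        # switch: the shadow policy clearly wins
    return active          # keep the current policy
```

The hysteresis margin is one way to encode the "policy decision requirement" the abstract mentions, so the proxy does not oscillate between policies with near-identical statistics.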
-
Publication number: 20130318305
Abstract: A method for configuring a large hybrid memory subsystem having a large cache size in a computing system where one or more performance metrics of the computing system are expressed as an explicit function of configuration parameters of the memory subsystem and workload parameters of the memory subsystem. The computing system hosts applications that utilize the memory subsystem, and the performance metrics cover the use of the memory subsystem by the applications. A performance goal containing values for the performance metric is identified for the computing system. These values for the performance metrics are used in the explicit function of performance metrics, configuration parameters and workload parameters to calculate values for the configuration parameters that achieve the identified performance goal. The calculated values of the configuration parameters are implemented in the memory subsystem.
Type: Application
Filed: July 30, 2013
Publication date: November 28, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: John Alan Bivens, Parijat Dube, Michael Mi Tsao, Li Zhang
-
Publication number: 20130318302
Abstract: A cache controller includes an entry list determination module and a cache replacement module. The entry list determination module is configured to receive a quality of service (QoS) value of a process, and output a replaceable entry list based on the received QoS value. The cache replacement module is configured to write data in an entry included in the replaceable entry list. The process is one of a plurality of processes, each having a QoS value, and the replaceable entry list is one of a plurality of replaceable entry lists, each including a plurality of entries and each corresponding to one of the QoS values. The total number of entries is allocated among the processes based on the QoS values of the processes.
Type: Application
Filed: March 14, 2013
Publication date: November 28, 2013
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Moon-Gyung KIM
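The QoS partitioning above can be modeled briefly. The names (`build_entry_lists`) and the proportional-share allocation are invented for illustration; the publication only states that the total entries are allocated according to QoS values.

```python
# Sketch of QoS-partitioned replacement: each QoS value gets its own
# replaceable-entry list, and a process's writes may only displace
# entries from the list for its QoS value.
def build_entry_lists(total_entries, qos_shares):
    """qos_shares: QoS value -> fraction of total entries.
    Returns QoS value -> list of entry indices allocated to that level."""
    lists, next_idx = {}, 0
    for qos, share in sorted(qos_shares.items()):
        count = int(total_entries * share)
        lists[qos] = list(range(next_idx, next_idx + count))
        next_idx += count
    return lists

def write(lists, qos, victim_picker=lambda entries: entries[0]):
    # the replacement module may only pick a victim from this QoS's list
    return victim_picker(lists[qos])
```

Because a low-QoS process can never displace entries reserved for a higher QoS level, high-priority working sets are insulated from cache thrashing by low-priority ones.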
-
Patent number: 8583874
Abstract: A method is provided for performing caching in a processing system including at least one data cache. The method includes the steps of: determining whether each of at least a subset of cache entries stored in the data cache comprises data that has been loaded using fetch ahead (FA); associating an identifier with each cache entry in the subset of cache entries, the identifier indicating whether the cache entry comprises data that has been loaded using FA; and implementing a cache replacement policy for controlling replacement of at least a given cache entry in the data cache with a new cache entry as a function of the identifier associated with the given cache entry.
Type: Grant
Filed: December 14, 2010
Date of Patent: November 12, 2013
Assignee: LSI Corporation
Inventors: Leonid Dubrovin, Alexander Rabinovitch
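A plausible use of the FA identifier is sketched below. The "evict unused speculative lines first" preference and the `demanded` flag are our assumptions; the patent only requires that replacement be a function of the FA identifier.

```python
# Sketch of fetch-ahead-aware replacement: prefer to evict lines that
# were loaded speculatively by fetch ahead and never actually demanded.
def pick_victim(entries):
    """entries: list of (tag, loaded_by_fa, demanded) tuples in LRU order."""
    for tag, fa, demanded in entries:
        if fa and not demanded:
            return tag                 # speculative line never used: cheap evict
    return entries[0][0]               # otherwise fall back to plain LRU
```

This biases replacement against prefetched data that turned out to be useless, protecting demand-fetched lines from pollution by the fetch-ahead engine.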
-
Patent number: 8583872
Abstract: A cache memory having a sector function, operating in accordance with a set associative system, and performing a cache operation to replace data in a cache block in the cache way corresponding to a replacement cache way determined upon an occurrence of a cache miss comprises: storing sector ID information in association with each of the cache ways in the cache block specified by a memory access request; determining, upon the occurrence of the cache miss, replacement way candidates, in accordance with sector ID information attached to the memory access request and the stored sector ID information; selecting and outputting a replacement way from the replacement way candidates; and updating the stored sector ID information in association with each of the cache ways in the cache block specified by the memory access request, to the sector ID information attached to the memory access request.
Type: Grant
Filed: August 19, 2008
Date of Patent: November 12, 2013
Assignee: Fujitsu Limited
Inventors: Shuji Yamamura, Mikio Hondou, Iwao Yamazaki, Toshio Yoshida
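The sector-based way selection can be sketched as follows. The fallback to all ways when no sector matches, and the "first candidate" choice, are our assumptions; the patent leaves the selection among candidates to the replacement policy.

```python
# Sketch of sector-aware replacement-way selection in a set-associative
# cache: ways whose stored sector ID matches the request's sector ID form
# the candidate set, and the chosen way's sector ID is updated.
def replacement_way(way_sector_ids, request_sector_id):
    candidates = [w for w, sid in enumerate(way_sector_ids)
                  if sid == request_sector_id]
    if not candidates:
        candidates = list(range(len(way_sector_ids)))  # assumed fallback
    victim = candidates[0]                             # assumed policy choice
    way_sector_ids[victim] = request_sector_id         # update stored sector ID
    return victim
```

Restricting replacement to ways of the same sector lets software partition the ways of each set among sectors, so one sector's misses do not evict another sector's data.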
-
Patent number: 8578097
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: October 24, 2011
Date of Patent: November 5, 2013
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
-
Patent number: 8572327
Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
Type: Grant
Filed: August 19, 2011
Date of Patent: October 29, 2013
Assignee: Google Inc.
Inventors: Timo Burkard, David Presotto
-
Patent number: 8566531
Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
Type: Grant
Filed: August 21, 2009
Date of Patent: October 22, 2013
Assignee: Google Inc.
Inventors: Timo Burkard, David Presotto
-
Patent number: 8566527
Abstract: A system and a method are described, whereby a data cache enables the realization of an efficient design of a usage analyzer for monitoring subscriber access to a communications network. By exploiting the speed advantages of cache memory, as well as adopting innovative data loading and retrieval choices, significant performance improvements in the time required to access the necessary data records can be realized.
Type: Grant
Filed: July 22, 2008
Date of Patent: October 22, 2013
Assignee: Bridgewater Systems Corp.
Inventors: Timothy James Reidel, Li Zou
-
Patent number: 8560765
Abstract: Various embodiments of the present invention provide systems, methods and circuits for use of a memory system. As one example, an electronics system is disclosed that includes a memory bank, a memory access controller circuit, and an encoding circuit. The memory bank includes a plurality of multi-bit memory cells that each is operable to hold at least two bits. The memory access controller circuit is operable to determine a use frequency of a data set maintained in the memory bank. The encoding circuit is operable to encode the data set to yield an encoded output for writing to the memory bank. The encoding level for the data set is selected based at least in part on the use frequency of the data set.
Type: Grant
Filed: March 2, 2010
Date of Patent: October 15, 2013
Assignee: LSI Corporation
Inventor: Robert W. Warren
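The abstract leaves the mapping from use frequency to encoding level implementation-defined. One plausible sketch follows; the threshold, the two levels, and the direction of the mapping (hot data stored at lower density for speed and endurance) are all assumptions for illustration, not taken from the patent.

```python
def select_bits_per_cell(use_frequency, hot_threshold=100):
    """Pick an encoding level for a data set based on its use frequency:
    frequently accessed ('hot') data gets a lower-density encoding
    (fewer bits per multi-bit cell), cold data uses full density.
    Threshold and levels are illustrative assumptions."""
    if use_frequency >= hot_threshold:
        return 1   # SLC-style encoding for hot data
    return 2       # full multi-bit density for cold data
```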
-
Patent number: 8549229
Abstract: Systems and methods for managing a storage device are disclosed. Generally, in a host to which a storage device is operatively coupled, wherein the storage device includes a cache for storing one or more discardable files, a file is identified to be uploaded to an external location. A determination is made whether sufficient free space exists in the cache to pre-stage the file for upload to the external location, and the file is stored in the cache upon determining that sufficient free space exists, wherein pre-staging prepares a file for opportunistically uploading such file in accordance with an uploading policy.
Type: Grant
Filed: September 30, 2010
Date of Patent: October 1, 2013
Assignee: SanDisk IL Ltd.
Inventors: Joseph R. Meza, Judah Gamliel Hahn, Henry Hutton, Leah Sherry
-
Patent number: 8543769
Abstract: A mechanism is provided in a virtual machine monitor for fine grained cache allocation in a shared cache. The mechanism partitions a cache tag into a most significant bit (MSB) portion and a least significant bit (LSB) portion. The MSB portion of the tags is shared among the cache lines in a set. The LSB portion of the tags is private, one per cache line. The mechanism allows software to set the MSB portion of tags in a cache to allocate sets of cache lines. The cache controller determines whether a cache line is locked based on the MSB portion of the tag.
Type: Grant
Filed: July 27, 2009
Date of Patent: September 24, 2013
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
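The tag partition can be sketched as a bit-level split. The field widths below are illustrative assumptions (the patent does not fix them), and the lock check models only the abstract's last sentence: a line counts as locked when its MSB portion matches an MSB value that software has set aside.

```python
TAG_BITS = 16
MSB_BITS = 4                      # assumed split; implementation-defined
LSB_BITS = TAG_BITS - MSB_BITS

def split_tag(tag):
    """Split a cache tag into its shared MSB portion (one per set) and
    its private LSB portion (one per cache line)."""
    return tag >> LSB_BITS, tag & ((1 << LSB_BITS) - 1)

def is_locked(tag, allocated_msbs):
    """A line is locked (protected from replacement) when its tag's MSB
    portion matches an MSB value software has used to allocate lines."""
    msb, _lsb = split_tag(tag)
    return msb in allocated_msbs
```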
-
Patent number: 8514677
Abstract: A method of recording a temporary defect list on a write-once recording medium, a method of reproducing the temporary defect list, an apparatus for recording and/or reproducing the temporary defect list, and the write-once recording medium. The method of recording a temporary defect list for defect management on a write-once recording medium includes recording the temporary defect list, which is created while data is recorded on the write-once recording medium, in at least one cluster of the write-once recording medium, and verifying if a defect is generated in the at least one cluster. Then, the method includes re-recording data originally recorded in a defective cluster in another cluster, and recording pointer information, which indicates a location of the at least one cluster where the temporary defect list is recorded, on the write-once recording medium.
Type: Grant
Filed: December 21, 2007
Date of Patent: August 20, 2013
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sung-hee Hwang, Jung-wan Ko
-
Patent number: 8495293
Abstract: A storage system writes a data element stored in a primary volume to a secondary volume constituting a volume pair with the primary volume in accordance with a selected storage mode, which is a data storage mode selected from a plurality of types of data storage modes. This storage system is provided with a function for switching the above-mentioned selected storage mode from a currently selected data storage mode to a different type of data storage mode.
Type: Grant
Filed: January 4, 2008
Date of Patent: July 23, 2013
Assignee: Hitachi, Ltd.
Inventors: Ai Satoyama, Yoshiaki Eguchi
-
Patent number: 8495300
Abstract: A method and apparatus for repopulating a cache are disclosed. At least a portion of the contents of the cache are stored in a location separate from the cache. Power is removed from the cache and is restored some time later. After power has been restored to the cache, it is repopulated with the portion of the contents of the cache that were stored separately from the cache.
Type: Grant
Filed: March 3, 2010
Date of Patent: July 23, 2013
Assignee: ATI Technologies ULC
Inventors: Philip Ng, Jimshed B. Mirza, Anthony Asaro
-
Patent number: 8478944
Abstract: Most recently accessed frames are locked in a cache memory. The most recently accessed frames are likely to be accessed by a task again in the near future and may be locked at the beginning of a task switch or interrupt to improve cache performance. The list of most recently used frames is updated as a task executes and may be embodied as a list of frame addresses or a flag associated with each frame. The list of most recently used frames may be separately maintained for each task if multiple tasks may interrupt each other. An adaptive frame unlocking mechanism is also disclosed that automatically unlocks frames that may cause a significant performance degradation for a task. The adaptive frame unlocking mechanism monitors a number of times a task experiences a frame miss and unlocks a given frame if the number of frame misses exceeds a predefined threshold.
Type: Grant
Filed: July 27, 2012
Date of Patent: July 2, 2013
Assignee: Agere Systems LLC
Inventors: Harry Dwyer, John S. Fernando
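The adaptive unlocking mechanism can be sketched as follows. The class and method names, and the model of attributing each miss to a specific locked frame, are illustrative assumptions for exposition; the patent describes the mechanism in hardware terms.

```python
class AdaptiveFrameLocker:
    """Sketch: a task's most-recently-used frames are locked in the cache,
    and a frame is automatically unlocked once the miss count attributed
    to it exceeds a predefined threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.locked = set()       # frame addresses currently locked
        self.miss_counts = {}     # frame -> number of misses it caused

    def access(self, frame):
        # Update the MRU set as the task executes (lock on access).
        self.locked.add(frame)

    def record_miss(self, blamed_frame):
        # A frame miss occurred that a locked frame is blamed for;
        # unlock that frame once it has degraded performance too often.
        n = self.miss_counts.get(blamed_frame, 0) + 1
        self.miss_counts[blamed_frame] = n
        if n > self.threshold:
            self.locked.discard(blamed_frame)
```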
-
Patent number: 8473684
Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
Type: Grant
Filed: December 22, 2009
Date of Patent: June 25, 2013
Assignee: International Business Machines Corporation
Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
-
Patent number: 8473686
Abstract: Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction.
Type: Grant
Filed: May 5, 2012
Date of Patent: June 25, 2013
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Blaine D Gaither
-
Patent number: 8473687
Abstract: Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction.
Type: Grant
Filed: May 5, 2012
Date of Patent: June 25, 2013
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Blaine D Gaither
-
Patent number: 8458433
Abstract: A method and apparatus creates and manages persistent memory (PM) in a multi-node computing system. A PM Manager in the service node creates and manages pools of nodes with various sizes of PM. A node manager uses the pools of nodes to load applications to the nodes according to the size of the available PM. The PM Manager can dynamically adjust the size of the PM according to the needs of the applications based on historical use or as determined by a system administrator. The PM Manager works with an operating system kernel on the nodes to provide persistent memory for application data and system metadata. The PM Manager uses the persistent memory to load applications to preserve data from one application to the next. Also, the data preserved in persistent memory may be system metadata such as file system data that will be available to subsequent applications.
Type: Grant
Filed: October 29, 2007
Date of Patent: June 4, 2013
Assignee: International Business Machines Corporation
Inventors: Eric Lawrence Barsness, David L. Darrington, Patrick Joseph McCarthy, Amanda Peters, John Matthew Santosuosso
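The pool-based placement can be sketched as a best-fit lookup. The data layout (a dictionary mapping a PM size to available node IDs) and the smallest-sufficient-pool policy are illustrative assumptions, not details from the patent.

```python
def pick_node(pools, required_pm):
    """Place an application on a node from the smallest pool whose
    persistent-memory size meets the application's requirement.
    pools: dict mapping PM size (MB) -> list of available node IDs."""
    for size in sorted(pools):
        if size >= required_pm and pools[size]:
            return pools[size].pop()
    return None   # no node with enough PM is available
```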
-
Patent number: 8458402
Abstract: Various systems and methods can decide whether information being evicted from a level one (L1) operating system cache should be moved to a level two (L2) operating system cache. The L2 operating system cache can be implemented using a memory technology in which read performance differs from write performance. One method involves detecting that a portion of a file (e.g., a page) is being evicted from a L1 operating system cache. In response to detecting the imminent eviction of the portion of the file, the method determines whether the portion of the file has been read more frequently or written more frequently. Based upon this determination (e.g., in response to determining that the portion of the file has been read more frequently, if the L2 cache provides better read than write performance), the method decides to copy the portion of the file to the L2 operating system cache.
Type: Grant
Filed: August 16, 2010
Date of Patent: June 4, 2013
Assignee: Symantec Corporation
Inventor: Ashish Karnik
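The eviction decision reduces to matching a page's access pattern against the L2 medium's strength. A minimal sketch, with illustrative parameter names:

```python
def should_copy_to_l2(read_count, write_count, l2_reads_faster=True):
    """On eviction from the L1 OS cache, decide whether to copy the page
    to the L2 OS cache: a read-mostly page goes to an L2 whose read
    performance beats its write performance, and vice versa."""
    read_mostly = read_count > write_count
    return read_mostly if l2_reads_faster else not read_mostly
```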
-
Patent number: 8458416
Abstract: Various embodiments of the present invention provide systems and methods for selecting data encoding. As an example, some embodiments of the present invention provide methods that include receiving a data set to be written to a plurality of multi-bit memory cells that are each operable to hold at least two bits. In addition, the methods include determining a characteristic of the data set, and encoding the data set. The level of encoding is selected based at least in part on the characteristic of the data set. In some instances of the aforementioned embodiments, the characteristic of the data set indicates an expected frequency of access of the data set from the plurality of multi-bit memory cells.
Type: Grant
Filed: January 22, 2010
Date of Patent: June 4, 2013
Assignee: LSI Corporation
Inventors: Robert W. Warren, Robb Mankin
-
Publication number: 20130138892
Abstract: A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption.
Type: Application
Filed: November 30, 2011
Publication date: May 30, 2013
Inventors: Gabriel H. Loh, Mark D. Hill