Of The Least Frequently Used Type, E.g., With Individual Count Value, Etc. (epo) Patents (Class 711/E12.071)
-
Patent number: 12039450
Abstract: A method of adaptive batch reuse includes prefetching, from a CPU to a GPU, a first plurality of mini-batches comprising a subset of a training dataset. The GPU trains the neural network for the current epoch by reusing, without discard, the first plurality of mini-batches based on a reuse count value. The GPU also runs a validation set to identify a validation error for the current epoch. If the validation error for the current epoch is less than the validation error of the previous epoch, the reuse count value is incremented for the next epoch; if it is greater, the reuse count value is decremented for the next epoch.
Type: Grant
Filed: May 28, 2019
Date of Patent: July 16, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Abhinav Vishnu
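The claimed control loop is compact enough to sketch. Below is a minimal Python rendering of the increment/decrement rule from the abstract; the clamping bounds and the equal-error case are illustrative assumptions the abstract does not specify.

```python
def adjust_reuse_count(reuse_count, val_err, prev_val_err,
                       min_reuse=1, max_reuse=8):
    """Adapt the per-epoch mini-batch reuse count.

    A lower validation error than last epoch suggests reuse is not
    hurting convergence, so reuse more; a higher error backs off.
    Bounds are illustrative, not from the abstract.
    """
    if val_err < prev_val_err:
        return min(reuse_count + 1, max_reuse)
    if val_err > prev_val_err:
        return max(reuse_count - 1, min_reuse)
    return reuse_count  # unchanged when errors are equal (assumed)
```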
-
Patent number: 11921632
Abstract: Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
Type: Grant
Filed: March 15, 2022
Date of Patent: March 5, 2024
Assignee: Intel Corporation
Inventors: Yao Zu Dong, Kun Tian, Fengguang Wu, Jingqi Liu
-
Patent number: 11893266
Abstract: A method of managing data during execution of an application for use in a system that includes a host memory, a near memory, and a near device associated with the near memory. The application uses a working set of data that is distributed between the far memory and the near memory. The method includes counting a number of times that the near device accesses a unit of the working set of data from the far memory, determining whether the number of times exceeds a dynamically changing access counter threshold, wherein the dynamically changing access counter threshold is calculated dynamically based on a static threshold that is set for the system, and responsive to determining that the number of times exceeds the dynamically changing access counter threshold, migrating the unit of data from the far memory to the near memory.
Type: Grant
Filed: April 17, 2020
Date of Patent: February 6, 2024
Assignee: University of Pittsburgh—Of the Commonwealth System of Higher Education
Inventors: Debashis Ganguly, Rami G. Melhem, Ziyu Zhang, Jun Yang
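A rough sketch of the counting-plus-threshold migration decision. The abstract says only that the dynamic threshold is calculated from a static, system-wide threshold; the pressure-based scaling below is an assumed placeholder for that calculation, and all names are hypothetical.

```python
class MigrationPolicy:
    """Migrate a unit from far to near memory once its access count
    exceeds a dynamically scaled threshold derived from a static base."""

    def __init__(self, static_threshold):
        self.static_threshold = static_threshold
        self.access_counts = {}

    def dynamic_threshold(self, near_mem_pressure):
        # Illustrative scaling: demand more far-memory accesses to justify
        # a migration when near memory is under pressure (0.0 .. 1.0).
        return self.static_threshold * (1.0 + near_mem_pressure)

    def on_far_access(self, unit, near_mem_pressure):
        self.access_counts[unit] = self.access_counts.get(unit, 0) + 1
        if self.access_counts[unit] > self.dynamic_threshold(near_mem_pressure):
            self.access_counts.pop(unit)
            return True   # caller migrates the unit far -> near
        return False
```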
-
Patent number: 11880309
Abstract: The state of cache lines transferred into and out of caches of processing hardware is tracked by monitoring hardware. The method of tracking includes monitoring the processing hardware for cache coherence events on a coherence interconnect between the processing hardware and the monitoring hardware, determining that the state of a cache line has changed, and updating a hierarchical data structure to indicate the change in the state of the cache line. The hierarchical data structure includes a first-level data structure including first bits, and a second-level data structure including second bits, each of the first bits associated with a group of second bits. The step of updating includes setting one of the first bits and one of the second bits in the group corresponding to the first bit that is being set, according to an address of the cache line.
Type: Grant
Filed: June 23, 2021
Date of Patent: January 23, 2024
Assignee: VMware, Inc.
Inventors: Nishchay Dua, Andreas Nowatzyk, Isam Wadih Akkawi, Pratap Subrahmanyam, Venkata Subhash Reddy Peddamallu, Adarsh Seethanadi Nayak
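The two-level structure is essentially a summarized bitmap, which a small sketch makes concrete. This is not VMware's implementation, only a plausible reading of the abstract: each first-level bit summarizes one group of second-level bits, so a scan can skip groups whose summary bit is clear.

```python
class HierarchicalTracker:
    """Two-level bit structure over cache-line indices: a first-level
    summary bit covers each group of second-level per-line bits."""

    def __init__(self, num_lines, group_size=64):
        self.group_size = group_size
        self.l2 = [False] * num_lines                     # one bit per line
        self.l1 = [False] * ((num_lines + group_size - 1) // group_size)

    def note_change(self, line_index):
        self.l2[line_index] = True
        self.l1[line_index // self.group_size] = True     # set summary bit

    def changed_lines(self):
        for g, summary in enumerate(self.l1):
            if not summary:
                continue                                  # skip clean group
            base = g * self.group_size
            for i in range(base, min(base + self.group_size, len(self.l2))):
                if self.l2[i]:
                    yield i
```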
-
Patent number: 11875234
Abstract: Certain example embodiments herein relate to techniques for automatically correcting and completing data in sparse datasets. Records in the dataset are divided into groups with properties having similar values. For each group, one or more properties of the records therein that are to be ignored are identified, based on record distances relative to the records in the group and distances among values for each of the properties of the records in the respective group. The records in the groups are further divided into sub-groups without regard to the properties to be ignored; the sub-groups include a smaller and more cohesive set of records. For each sub-group, predicted values for fields identified as empty but needing to be filled in are determined based on the records therein, and those predicted values are applied. The corrected and completed dataset is provided as output.
Type: Grant
Filed: July 17, 2020
Date of Patent: January 16, 2024
Assignee: SOFTWARE AG
Inventors: Vijay Anand Chidambaram, Ulrich Kalex
-
Patent number: 11700289
Abstract: Methods and systems for analysis of a plurality of channels that provide a remote desktop session are described herein. Channel metrics for each of a plurality of communication channels may be received. Each of the plurality of communication channels may be configured to deliver, to a computing device and via a network, different aspects of a remote desktop session. A plurality of channel scores may be determined for each communication channel of the plurality of communication channels. Based on the plurality of channel scores, an aggregate score may be determined. Based on the aggregate score satisfying a threshold, a notification may be transmitted. For example, an indication of one or more executable scripts predicted to improve a performance of one or more of the plurality of communication channels may be transmitted.
Type: Grant
Filed: July 6, 2021
Date of Patent: July 11, 2023
Assignee: Citrix Systems, Inc.
Inventors: Rahul Gupta, Dhawal Patel, Divya Ranjan, Himanshu Pandey, Pn Prathima, Rupak Das
-
Patent number: 11625335
Abstract: Systems and methods provide for optimizing utilization of an Address Translation Cache (ATC). A network interface controller (NIC) can write information reserving one or more cache lines in a first level of the ATC to a second level of the ATC. The NIC can receive a request for a direct memory access (DMA) to an untranslated address in memory of a host computing system. The NIC can determine that the untranslated address is not cached in the first level of the ATC. The NIC can identify a selected cache line in the first level of the ATC to evict using the request and the second level of the ATC. The NIC can receive a translated address for the untranslated address. The NIC can cache the untranslated address in the selected cache line. The NIC can perform the DMA using the translated address.
Type: Grant
Filed: February 8, 2021
Date of Patent: April 11, 2023
Assignee: Cisco Technology, Inc.
Inventors: Sagar Borikar, Ravikiran Kaidala Lakshman
-
Patent number: 11537617
Abstract: The disclosed embodiments include a method for caching by a data system. The method includes automatically caching a portion of a data object from an external data source to a local cluster of nodes in accordance with a unit of caching. The portion of the data object can be selected for caching based on a frequency of accessing the portion of the data object. The portion of the data object in the cache is mapped to the external data source in accordance with a unit of hashing. The method further includes, responsive to the data system receiving a query for data stored in the external data source, obtaining query results that satisfy the received query by reading the portion of the cached data object instead of reading the data object from the external data source.
Type: Grant
Filed: April 28, 2020
Date of Patent: December 27, 2022
Assignee: Dremio Corporation
Inventors: Jacques Nadeau, Tomer Shiran, Arvind Arun Pande, Thomas W. Fry
-
Patent number: 11513874
Abstract: A method and an apparatus for determining a usage level of a memory device to notify a running application to perform memory reduction operations selected based on the memory usage level are described. An application calls APIs (Application Programming Interfaces) integrated with the application code in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities ranking multiple running applications statically or dynamically. Selecting memory reduction operations and notifying a running application are based on application priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage without using API calls to other software.
Type: Grant
Filed: October 1, 2020
Date of Patent: November 29, 2022
Assignee: APPLE INC.
Inventors: Matthew G. Watson, James Michael Magee
-
Patent number: 11494309
Abstract: A list of a first type of tracks in a cache is generated. A list of a second type of tracks in the cache is generated, wherein I/O operations are completed relatively faster to the first type of tracks than to the second type of tracks. A determination is made as to whether to demote a track from the list of the first type of tracks or from the list of the second type of tracks.
Type: Grant
Filed: April 26, 2021
Date of Patent: November 8, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta
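A toy model of the two-list arrangement. The abstract does not say how the demotion choice is made, so the rule below (demote from the longer list, preferring the fast list on ties) is purely an assumption used to make the sketch runnable.

```python
import collections

class TwoListCache:
    """Fast-completing and slow-completing tracks kept on separate LRU
    lists; a demotion heuristic picks which list gives up its LRU entry."""

    def __init__(self):
        self.fast = collections.OrderedDict()   # first type of tracks
        self.slow = collections.OrderedDict()   # second type of tracks

    def access(self, track, is_fast):
        lst = self.fast if is_fast else self.slow
        lst.pop(track, None)
        lst[track] = True                       # move to the MRU end

    def demote(self):
        # Assumed rule: demote from whichever list is longer, preferring
        # the fast list on ties since its I/O is cheaper to redo.
        victims = self.fast if len(self.fast) >= len(self.slow) else self.slow
        return victims.popitem(last=False)[0] if victims else None
```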
-
Patent number: 11442646
Abstract: Storage devices are capable of identifying zones for sharing parity blocks across zones. Active zones may be segregated across multiple active zones having similar zone properties and grouped so that parity buffers can be shared. By identifying zones for optimal parity sharing, storage devices and systems can: (i) maintain independent parity for all zones during initial zone writes (i.e., during an erased state when data is written directly to pages and not to the zones), (ii) track zone write pointers and frequency of writes in the zones, (iii) segregate zones with higher correlation and group them together, (iv) utilize these groupings placed across various channels so that zones with high correlations, comprising the zones that are written together and at the same rate, share the parity buffers, and (v) load and XOR individual parity buffers for optimal parity sharing across all zones.
Type: Grant
Filed: February 26, 2021
Date of Patent: September 13, 2022
Assignee: Western Digital Technologies Inc.
Inventor: Dinesh Kumar Agarwal
-
Patent number: 11340824
Abstract: Described is a system (and method) for efficient object storage management when backing up data to a cloud-based object storage. The system may be implemented as part of a server (or gateway) that provides a backup service to a client device by acting as an intermediary when backing up data from the client device to a third-party cloud-based object storage. The system may implement various specialized procedures to efficiently store backup data as objects within the object storage. The procedures may include packing client data into objects of a consistent size to improve storage performance. The system may also improve storage performance and conserve storage by analyzing the data stored within an object and reallocating the data as necessary. More particularly, the system may efficiently reallocate data to new objects when the amount of live data within an object falls below a predetermined threshold.
Type: Grant
Filed: January 5, 2021
Date of Patent: May 24, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Sunil Yadav, Ravi Vijayakumar Chitloor, Shelesh Chopra, Amarendra Behera, Tushar Dethe, Jigar Bhanushali, Deependra Singh, Himanshu Arora, Prabhat Kumar Dubey
-
Patent number: 9043570
Abstract: Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned on a group ID basis to each group ID that is assigned to use the system cache. The quota does not reserve space in the system cache; rather, the quota may be used in any way within the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of the system cache, such that the system cache can be oversubscribed. The sticky state can be used to prioritize data retention within the system cache when oversubscription is being used.
Type: Grant
Filed: September 11, 2012
Date of Patent: May 26, 2015
Assignee: Apple Inc.
Inventors: Sukalpa Biswas, Shinye Shiu, James Wang
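The quota mechanics can be sketched in a few lines. The point encoded below, following the abstract, is that a quota caps a group's allocations without reserving space, and that quotas across group IDs may sum to more than the cache size; the class and method names are hypothetical.

```python
class QuotaCache:
    """Per-group-ID quotas over an oversubscribable system cache."""

    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.quota = {}       # group_id -> max lines it may hold
        self.in_use = {}      # group_id -> lines currently held
        self.used_total = 0

    def set_quota(self, group_id, lines):
        self.quota[group_id] = lines     # quotas may sum past cache_size

    def try_allocate(self, group_id):
        held = self.in_use.get(group_id, 0)
        if held >= self.quota.get(group_id, 0):
            return False                 # group at quota: no new allocation
        if self.used_total >= self.cache_size:
            return False                 # cache itself is full; evict first
        self.in_use[group_id] = held + 1
        self.used_total += 1
        return True
```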
-
Patent number: 9003099
Abstract: In a disc device according to the present invention, when a controller 2 evicts a block from a cache memory 4 used as a primary cache, it is determined whether or not the number of readings of data in the block exceeds the specified number of times. Only when the number of readings exceeds the specified number of times is the block written into an SSD 8 used as a secondary cache. When the number of readings is equal to or smaller than the specified number of times, the block is written back into an HDD 7.
Type: Grant
Filed: February 28, 2011
Date of Patent: April 7, 2015
Assignee: NEC Corporation
Inventor: Shun Kurita
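The eviction-time decision is a one-branch policy, sketched below under the assumption that `ssd` and `hdd` objects expose a `write` method (an interface invented here for illustration):

```python
def on_primary_eviction(block, read_count, threshold, ssd, hdd):
    """On eviction from the primary cache, promote hot blocks to the SSD
    secondary cache; write cooler blocks back to the HDD instead."""
    if read_count > threshold:
        ssd.write(block)      # reused often enough to earn SSD space
    else:
        hdd.write(block)      # not reused enough; back to the disk
```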
-
Patent number: 8984240
Abstract: Page faults during partition migration from a source computing system to a destination computing system are reduced by assigning each page used by a process as being hot or cold according to its frequency of use by the process. During a live partition migration, the cold or coldest (least frequently used) pages are copied to the destination server first, followed by the warmer pages, and concluded by copying the hottest (most frequently used) pages. After all dirtied pages have been refreshed, cutover from the instance on the source server to the destination server is made. By transferring the warm and hot pages last (or later) in the migration process, the number of dirtied pages is reduced, thereby reducing page faults subsequent to the cutover.
Type: Grant
Filed: August 30, 2012
Date of Patent: March 17, 2015
Assignee: International Business Machines Corporation
Inventors: Vishal C. Aslot, Adekunle Bello, Brian W. Hart
-
Patent number: 8838903
Abstract: A hierarchical data-storage system has a volatile storage medium, a first non-volatile storage medium, and a controller including a ranking engine tracking data writes to each of the memory mediums. Each medium is associated with a pre-set capacity threshold. Upon the volatile medium reaching its pre-set threshold, the controller identifies one or more blocks of data least frequently written to the volatile medium, copies the data in those blocks to the non-volatile medium, and marks those blocks as available for new data writes. Upon the non-volatile medium reaching its pre-set threshold, the controller identifies one or more blocks of data least frequently written to the non-volatile medium and marks those blocks as available for new data writes from the volatile medium.
Type: Grant
Filed: February 4, 2010
Date of Patent: September 16, 2014
Assignee: Dataram, Inc.
Inventor: Jason Caulkins
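One tier of such a hierarchy might look like the following sketch, where the threshold is expressed as a fraction of capacity and occupancy is tracked as distinct blocks held; both are assumptions, since the abstract says only that each medium has a pre-set threshold.

```python
class WriteRankedTier:
    """One tier: count writes per block and, at a preset capacity
    threshold, free the least-frequently-written blocks after copying
    them to the next (slower) tier."""

    def __init__(self, capacity, threshold, next_tier=None):
        self.capacity, self.threshold = capacity, threshold
        self.write_counts = {}            # block -> writes seen
        self.next_tier = next_tier

    def write(self, block):
        self.write_counts[block] = self.write_counts.get(block, 0) + 1
        if len(self.write_counts) >= self.threshold * self.capacity:
            self._spill()

    def _spill(self, n=1):
        coldest = sorted(self.write_counts, key=self.write_counts.get)[:n]
        for block in coldest:
            if self.next_tier is not None:
                self.next_tier.write(block)   # copy down before freeing
            del self.write_counts[block]      # block now free for new writes
```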
-
Patent number: 8806139
Abstract: A technique is provided for cache management of a cache. The processing circuit determines a miss count and a hit position field during a previous execution of an instruction requesting that a data element be stored in a cache. The miss count and the hit position field are stored for a data element corresponding to an instruction that requests storage of the data element. The processing circuit places the data element in a hierarchical order based on the miss count and/or the hit position field. The hit position field includes a hierarchical position related to the data element in the cache.
Type: Grant
Filed: January 20, 2012
Date of Patent: August 12, 2014
Assignee: International Business Machines Corporation
Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-Lung K. Shum
-
Publication number: 20140115260
Abstract: Implementations described and claimed herein provide a system and methods for prioritizing data in a cache. In one implementation, a priority level, such as critical, high, and normal, is assigned to cached data. The priority level dictates how long the data is cached and, consequently, the order in which the data is evicted from the cache memory. Data assigned a priority level of critical will be resident in cache memory unless heavy memory pressure causes the system to reclaim memory and all data assigned a priority state of high or normal has been evicted. High priority data is cached longer than normal priority data, with normal priority data being evicted first. Accordingly, important data assigned a priority level of critical, such as a deduplication table, is kept resident in cache memory at the expense of other data, regardless of the frequency or recency of use of the data.
Type: Application
Filed: October 18, 2012
Publication date: April 24, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Mark Maybee, Lisa Week
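Because eviction order here ignores recency and frequency entirely, the policy reduces to a sort on priority, as this small sketch shows (the names and the numeric encoding are illustrative, not from the publication):

```python
from collections import namedtuple

Entry = namedtuple("Entry", "name priority")
NORMAL, HIGH, CRITICAL = 0, 1, 2          # higher value = kept longer

def eviction_order(entries):
    """NORMAL data is evicted first, then HIGH; CRITICAL data (e.g. a
    dedup table) survives until nothing else is left to reclaim."""
    return sorted(entries, key=lambda e: e.priority)

cache = [Entry("dedup_table", CRITICAL), Entry("scan_buf", NORMAL),
         Entry("index", HIGH)]
print([e.name for e in eviction_order(cache)])
# ['scan_buf', 'index', 'dedup_table']
```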
-
Publication number: 20140115261
Abstract: Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. The data contained within the level-two cache is managed using a cache list that manages and/or maintains data chunk entries added to the level-two cache based on a temporal access of the data chunk.
Type: Application
Filed: October 18, 2012
Publication date: April 24, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Mark Maybee, Mark J. Musante, Victor Latushkin
-
Publication number: 20140095783
Abstract: Techniques for reducing a number of physical counters are provided. Logical counters may be associated with physical counters. The number of physical counters may be less than the number of logical counters. It may be determined if an association of a logical counter to a physical counter exists already. If not, a new association may be created. The physical counter associated with the logical counter may then be updated.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventor: Steven Glen Jorgensen
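A sketch of on-demand association between a large logical counter space and a smaller physical pool. The exhaustion behavior below is an assumption, since the abstract does not say what happens when no physical counter is free.

```python
class CounterPool:
    """Back many logical counters with a smaller pool of physical
    counters, creating an association only on first update."""

    def __init__(self, num_physical):
        self.free = list(range(num_physical))   # unassigned physical slots
        self.assoc = {}                         # logical id -> physical idx
        self.values = [0] * num_physical

    def update(self, logical_id, delta=1):
        phys = self.assoc.get(logical_id)
        if phys is None:                        # no association yet
            if not self.free:
                raise RuntimeError("physical counter pool exhausted")
            phys = self.free.pop()
            self.assoc[logical_id] = phys       # create the association
        self.values[phys] += delta
```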
-
Publication number: 20140089613
Abstract: A plurality of subgroups is maintained, each with a least recently used (LRU) list of data elements associated with count variables. The LRU lists have a top entry to store a most recently used data element and a bottom entry to store a least recently used data element. If a data element is accessed, the value of its count variable is increased and the accessed data element is moved to the top entry of the LRU list of the subgroup associated with the data element. If the value of the count variable of the accessed data element at the top entry is greater than the value of the count variable of the data element at the bottom entry of the LRU list of a subgroup with a higher priority, the data element at the bottom entry is swapped with the accessed data element at the top entry.
Type: Application
Filed: September 27, 2012
Publication date: March 27, 2014
Applicant: Hewlett-Packard Development Company, L.P.
Inventor: Mykel John Kramer
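The swap rule is easier to follow in code. The sketch below is one possible reading of the abstract, with new elements entering the lowest-priority subgroup by default (an assumption):

```python
from collections import deque

class TieredLRU:
    """Priority-ordered subgroups, each an LRU deque whose left end is the
    top (MRU) entry and right end the bottom (LRU) entry. An accessed
    element whose count exceeds the count of the bottom entry of the
    next-higher subgroup trades places with that entry."""

    def __init__(self, levels):
        self.groups = [deque() for _ in range(levels)]  # 0 = highest priority
        self.counts = {}
        self.level = {}

    def access(self, elem):
        lvl = self.level.get(elem, len(self.groups) - 1)  # default: lowest
        self.counts[elem] = self.counts.get(elem, 0) + 1
        group = self.groups[lvl]
        if elem in group:
            group.remove(elem)
        group.appendleft(elem)                  # move to top of its subgroup
        self.level[elem] = lvl
        if lvl > 0 and self.groups[lvl - 1]:
            upper = self.groups[lvl - 1]
            bottom = upper[-1]
            if self.counts[elem] > self.counts[bottom]:
                upper.pop(); group.popleft()    # remove both swap partners
                upper.append(elem)              # promoted: upper's bottom
                group.appendleft(bottom)        # demoted: lower's top
                self.level[elem], self.level[bottom] = lvl - 1, lvl
```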
-
Publication number: 20140082296
Abstract: Data operations requiring a lock are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed, so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment.
Type: Application
Filed: September 14, 2012
Publication date: March 20, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin John ASH, Michael Thomas BENHASE, Lokesh Mohan GUPTA, Kenneth Wayne TODD, David Blair WHITWORTH
-
Publication number: 20140068176
Abstract: Described embodiments provide a lookup engine that receives lookup requests including a requested key and a speculative add requestor. Iteratively, for each one of the lookup requests, the lookup engine searches each entry of a lookup table for an entry having a key matching the requested key of the lookup request. If the lookup table does not include an entry having a key matching the requested key, the lookup engine sends a miss indication corresponding to the lookup request to the control processor. If the speculative add requestor is set, the lookup engine speculatively adds the requested key to a free entry in the lookup table. Speculatively added keys are searchable in the lookup table for subsequent lookup requests to maintain coherency of the lookup table without creating duplicate key entries, comparing missed keys with each other or stalling the lookup engine to insert missed keys.
Type: Application
Filed: August 31, 2012
Publication date: March 6, 2014
Inventors: Leonid Baryudin, Earl T. Cohen, Kent Wayne Wendorf
-
Publication number: 20140068207
Abstract: Page faults during partition migration from a source computing system to a destination computing system are reduced by assigning each page used by a process as being hot or cold according to its frequency of use by the process. During a live partition migration, the cold or coldest (least frequently used) pages are copied to the destination server first, followed by the warmer pages, and concluded by copying the hottest (most frequently used) pages. After all dirtied pages have been refreshed, cutover from the instance on the source server to the destination server is made. By transferring the warm and hot pages last (or later) in the migration process, the number of dirtied pages is reduced, thereby reducing page faults subsequent to the cutover.
Type: Application
Filed: August 30, 2012
Publication date: March 6, 2014
Applicant: International Business Machines Corporation
Inventors: Vishal C. Aslot, Adekunle Bello, Brian W. Hart
-
Publication number: 20140052926
Abstract: A system for managing memory operations. The system includes a processor executing instructions that cause the processor to read a first memory page from a storage device responsive to a request for the first memory page and store the first memory page to system memory. Based on a pre-established set of association rules, one or more associated memory pages are identified that are related to the first memory page. The associated memory pages are read from the storage device and compressed to generate corresponding compressed associated memory pages. The compressed associated memory pages are also stored to the system memory to enable faster access to the associated memory pages during processing of the first memory page. The compressed associated memory pages are individually decompressed in response to the particular page being required for use during processing.
Type: Application
Filed: August 20, 2012
Publication date: February 20, 2014
Applicant: IBM CORPORATION
Inventors: Saravanan Devendran, Kiran Grover
-
Publication number: 20140052925
Abstract: An information handling system includes a processor and a storage resource communicatively coupled to the processor. The processor is configured to determine if available overprovisioned storage of the storage resource is less than a threshold overprovisioned storage capacity, establish a new stated capacity for the storage resource in response to a determination that the available overprovisioned storage of the storage resource is less than the threshold overprovisioned storage capacity, and communicate to the processor an indication of the new stated capacity.
Type: Application
Filed: August 17, 2012
Publication date: February 20, 2014
Applicant: DELL PRODUCTS L.P.
Inventors: Gary B. Kotzur, Jason P. Gross
-
Publication number: 20140047190
Abstract: In one embodiment, a computer system includes a cache having one or more memories and a metadata service. The metadata service is able to receive requests for data stored in the cache from a first client and from a second client. The metadata service is further able to determine whether the performance of the cache would be improved by relocating the data stored in the cache. The metadata service is further operable to relocate the data stored in the cache when such relocation would improve the performance of the cache.
Type: Application
Filed: August 7, 2012
Publication date: February 13, 2014
Applicant: DELL PRODUCTS L.P.
Inventors: William Price Dawkins, Jason Philip Gross, Noelan Ray Olson
-
Publication number: 20140019689
Abstract: A scheme referred to as a "Region-based cache restoration prefetcher" (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the "cold" cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown.
Type: Application
Filed: July 10, 2012
Publication date: January 16, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Harold W. Cain, III, Vijayalakshmi Srinivasan, Jason Zebchuk
-
Patent number: 8627004
Abstract: A method for data migration between each of a plurality of storage pools in a computing storage environment is provided. Each of the plurality of storage pools is categorized by a metric shared between data segments assigned to any one of the plurality of storage pools. The data segments are prioritized in any one of the plurality of storage pools based on the metric. A discovery is performed for each of the plurality of storage pools, on a predetermined interval, based on the metric, whether a data segment with a highest priority on a child pool is greater than a data segment with a lowest priority on a parent pool. If so, the data segment with the highest priority on the child pool is promoted to the parent pool. A similar discovery process demotes the data segment with the lowest priority on the parent pool to the child pool.
Type: Grant
Filed: January 7, 2010
Date of Patent: January 7, 2014
Assignee: International Business Machines Corporation
Inventor: David Montgomery
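For one parent/child pair, the periodic discovery reduces to comparing the child's hottest segment against the parent's coldest, roughly as below. Treating promotion and demotion as a symmetric exchange in a single pass is an assumption; the pools are modeled as plain lists and `priority` as a caller-supplied function.

```python
def rebalance(parent, child, priority):
    """One discovery interval for a parent/child pool pair: promote the
    child's highest-priority segment if it outranks the parent's
    lowest-priority segment, demoting that segment in exchange."""
    if not parent or not child:
        return
    hottest_child = max(child, key=priority)
    coldest_parent = min(parent, key=priority)
    if priority(hottest_child) > priority(coldest_parent):
        child.remove(hottest_child); parent.append(hottest_child)    # promote
        parent.remove(coldest_parent); child.append(coldest_parent)  # demote
```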
-
Publication number: 20130326168
Abstract: Memory management methods and systems for mobile devices are provided. The memory usage of the device's memory is monitored both by a built-in memory management component of the device's OS and by a user-oriented memory management component. It is determined whether the memory usage is greater than a first threshold or a second threshold, wherein the second threshold is greater than the first threshold. When the memory usage is greater than the first threshold, multi-level memory management is performed by the user-oriented memory management component. When the memory usage is greater than the second threshold, primitive memory management is performed by the built-in memory management component.
Type: Application
Filed: May 31, 2012
Publication date: December 5, 2013
Inventors: Wen-Yen CHANG, Chih-Tsung Wu, Kao-Pin Chen, Ting-Lun Chen
-
Publication number: 20130311724
Abstract: A cache system includes a plurality of first caches at a first level of a cache hierarchy and a second cache, coupled to each of the plurality of first caches, at a second level of the cache hierarchy which is lower than the first level. The second cache enforces a cache line replacement policy in which it selects a cache line for replacement based in part on whether the cache line is present in any of the plurality of first caches and in part on another factor.
Type: Application
Filed: May 17, 2012
Publication date: November 21, 2013
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: William L. Walker, Robert F. Krick, Tarun Nakra, Pramod Subramanyan
-
Publication number: 20130262777
Abstract: A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss.
Type: Application
Filed: March 28, 2012
Publication date: October 3, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
-
Publication number: 20130262778
Abstract: In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core.
Type: Application
Filed: March 28, 2012
Publication date: October 3, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sanjeev Ghai, Guy L. Guthrie, William J. Starke, Jeff A. Stuecheli, Derek E. Williams, Phillip G. Williams
-
Publication number: 20130219125
Abstract: The present invention extends to methods, systems, and computer program products for implementing a cache using multiple page replacement algorithms. An exemplary cache can include two logical portions where the first portion implements the least recently used (LRU) algorithm and the second portion implements the least recently used two (LRU2) algorithm to perform page replacement within the respective portion. By implementing multiple algorithms, a more efficient cache can be implemented where the pages most likely to be accessed again are retained in the cache. Multiple page replacement algorithms can be used in any cache, including an operating system cache for caching pages accessed via buffered I/O, as well as a cache for caching pages accessed via unbuffered I/O such as accesses to virtual disks made by virtual machines.
Type: Application
Filed: February 21, 2012
Publication date: August 22, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Norbert P. Kusters, Andrea D'Amato, Vinod R. Shankar
-
Publication number: 20130219116
Abstract: In one embodiment, a method for managing a composite storage device made up of fast non-volatile storage, such as a solid state device, and slower non-volatile storage, such as a traditional magnetic hard drive, can include maintaining a first data structure, which stores instances of recent access to each unit in a set of units in the fast non-volatile storage device, such as the SSD device, and also maintaining a second data structure that indicates whether or not units in the slower storage device, such as the HDD, have been accessed at least a predetermined number of times. In one embodiment, the second data structure can be a probabilistic hash table, which has a low required memory overhead but is not guaranteed to always provide a correct answer with respect to whether a unit or block in the slower storage device has been referenced recently.
Type: Application
Filed: September 6, 2012
Publication date: August 22, 2013
Inventors: Wenguang Wang, Peter Macko
-
Publication number: 20130219117
Abstract: Approaches to managing a composite, non-volatile data storage device are described. In one embodiment, a method for managing a composite storage device made up of fast non-volatile storage, such as a solid state device, and slower non-volatile storage, such as a traditional magnetic hard drive, can include maintaining a first data structure, which stores instances of recent access to each unit in a set of units in the fast non-volatile storage device, such as the SSD device, and also maintaining a second data structure that indicates whether or not units in the slower storage device, such as the HDD, have been accessed at least a predetermined number of times. In one embodiment, the second data structure can be a queue of Bloom filters.
Type: Application
Filed: September 6, 2012
Publication date: August 22, 2013
Inventors: Peter Macko, Wenguang Wang
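A queue of Bloom filters gives an approximate, low-memory answer to "has this unit been accessed in at least N recent windows?". The sketch below is a simplified variant, not the publication's exact scheme: it counts the windows whose filter contains the unit rather than total accesses, and all sizes are chosen arbitrarily.

```python
import hashlib

class BloomQueue:
    """Approximate recent-access test for HDD units: one small Bloom
    filter per time window, kept in a fixed-length queue."""

    def __init__(self, windows=4, bits=1 << 16, hashes=3, n_hits=2):
        self.queue = [0] * windows        # each filter is an int bitmask
        self.bits, self.hashes, self.n_hits = bits, hashes, n_hits

    def _positions(self, unit):
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{i}:{unit}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.bits

    def record_access(self, unit):
        for p in self._positions(unit):
            self.queue[-1] |= 1 << p      # set bits in the current window

    def rotate(self):
        self.queue = self.queue[1:] + [0]  # age out the oldest window

    def seen_often(self, unit):
        hits = sum(all((f >> p) & 1 for p in self._positions(unit))
                   for f in self.queue)
        return hits >= self.n_hits        # may give false positives
```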
-
Publication number: 20130191599
Abstract: A technique is provided for cache management of a cache. The processing circuit determines a miss count and a hit position field during a previous execution of an instruction requesting that a data element be stored in a cache. The miss count and the hit position field are stored for a data element corresponding to an instruction that requests storage of the data element. The processing circuit places the data element in a hierarchical order based on the miss count and/or the hit position field. The hit position field includes a hierarchical position related to the data element in the cache.
Type: Application
Filed: January 20, 2012
Publication date: July 25, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-Lung K. Shum
-
Publication number: 20130179640
Abstract: In one embodiment, a method for controlling an instruction cache including a least-recently-used bits array, a tag array, and a data array includes looking up, in the least-recently-used bits array, least-recently-used bits for each of a plurality of cacheline sets in the instruction cache, determining a most-recently-used way in a designated cacheline set of the plurality of cacheline sets based on the least-recently-used bits for the designated cacheline set, looking up, in the tag array, tags for one or more ways in the designated cacheline set, looking up, in the data array, data stored in the most-recently-used way in the designated cacheline set, and, if there is a cache hit in the most-recently-used way, retrieving the data stored in the most-recently-used way from the data array.
Type: Application
Filed: January 9, 2012
Publication date: July 11, 2013
Applicant: NVIDIA CORPORATION
Inventors: Aneesh Aggarwal, Ross Segelken, Kevin Koschoreck
-
Publication number: 20130111160
Abstract: Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold available for space reclamation. Thus, data that otherwise may be evicted or demoted, but that meets or exceeds the utility metric threshold, is exempted from space reclamation and is instead maintained in the data storage memory.
Type: Application
Filed: October 31, 2011
Publication date: May 2, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Matthew J. Kalos, Ioannis Koltsidas, Roman A. Pletka
-
Publication number: 20130111146
Abstract: The population of data to be admitted into secondary data storage cache of a data storage system is controlled by determining heat metrics of data of the data storage system. If candidate data is submitted for admission into the secondary cache, data is selected to tentatively be evicted from the secondary cache; candidate data provided to the secondary data storage cache is rejected if its heat metric is less than the heat metric of the tentatively evicted data; and candidate data submitted for admission to the secondary data storage cache is admitted if its heat metric is equal to or greater than the heat metric of the tentatively evicted data.
Type: Application
Filed: October 31, 2011
Publication date: May 2, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Michael T. Benhase, Stephen L. Blinick, Evangelos S. Eleftheriou, Lokesh M. Gupta, Robert Haas, Xiao-Yu Hu, Ioannis Koltsidas, Roman A. Pletka
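The admission test compares the candidate's heat against a tentative victim's, which takes only a few lines. The sketch assumes the cache is already at capacity, is modeled as a plain list, and that `heat` is a caller-supplied function; how heat metrics are computed is outside the abstract.

```python
def consider_admission(candidate, cache, heat):
    """Admission control for a full secondary cache: pick the coolest
    resident entry as a tentative eviction victim, then admit the
    candidate only if it is at least as hot as that victim."""
    victim = min(cache, key=heat)        # tentative eviction choice
    if heat(candidate) >= heat(victim):
        cache.remove(victim)             # eviction becomes real
        cache.append(candidate)
        return True
    return False                         # candidate rejected; victim stays
```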
-
Publication number: 20130086339
Abstract: Method, apparatus, and systems employing novel delayed dictionary update schemes for dictionary-based high-bandwidth lossless compression. A pair of dictionaries having entries that are synchronized and encoded to support compression and decompression operations are implemented via logic at a compressor and decompressor. The compressor/decompressor logic operates in a cooperative manner, including implementing the same dictionary update schemes, resulting in the data in the respective dictionaries being synchronized. The dictionaries are also configured with replaceable entries, and replacement policies are implemented based on matching bytes of data within sets of data being transferred over the link. Various schemes are disclosed for entry replacement, as well as a delayed dictionary update technique. The techniques support line-speed compression and decompression using parallel operations resulting in substantially no latency overhead.
Type: Application
Filed: October 1, 2011
Publication date: April 4, 2013
Inventors: Ilan Pardo, Ido Y. Soffair, Dror Reif, Debendra Das Sharma, Akshay G. Pethe
-
Publication number: 20130046943
Abstract: A row buffer 102 in DRAM 100 stores any data read from a memory array 101 in a specified data length unit. An LLC 206 is cache memory, and extracts and stores a part of the data stored in the row buffer 102 as cache data. When push-out control of the LLC 206 is performed, a MAC 701 predicts, based on the queuing state of an MRQ 203, which DIMM address's data will be stored in the row buffer 102 in the near future. In the MAC 701, each physical address of the cache data in a push-out target range 702 on the LLC 206 is converted into a DIMM address. If a converted address matches the predicted address of the data, the cache data corresponding to the matching addresses is pushed out on a priority basis from the LLC 206.
Type: Application
Filed: July 10, 2012
Publication date: February 21, 2013
Applicant: FUJITSU LIMITED
Inventors: Takatsugu Ono, Takeshi Shimizu
-
Patent number: 8364898
Abstract: A method and a system for utilizing least recently used (LRU) bits and presence bits in selecting cache-lines for eviction from a lower level cache in a processor-memory sub-system. A cache back invalidation (CBI) logic utilizes LRU bits to evict only cache-lines within an LRU group, following a cache miss in the lower level cache. In addition, the CBI logic uses presence bits to (a) indicate whether a cache-line in a lower level cache is also present in a higher level cache and (b) evict only cache-lines in the lower level cache that are not present in a corresponding higher level cache. However, when the lower level cache-line selected for eviction is also present in any higher level cache, CBI logic invalidates the cache-line in the higher level cache. The CBI logic appropriately updates the values of presence bits and LRU bits, following evictions and invalidations.
Type: Grant
Filed: January 23, 2009
Date of Patent: January 29, 2013
Assignee: International Business Machines Corporation
Inventors: Ganesh Balakrishnan, Anil Krishna
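A sketch of the victim-selection order the abstract describes: restrict candidates to the LRU group, prefer lines with no copy in a higher-level cache, and otherwise fall back to evicting with a back-invalidation. The tuple representation of a cache set is invented purely for illustration.

```python
def pick_victim(ways):
    """Select an eviction victim in an inclusive lower-level cache set.

    `ways` is a list of (line, in_lru_group, present_in_upper) tuples.
    Returns (victim_line, needs_back_invalidation).
    """
    candidates = [w for w in ways if w[1]]          # only the LRU group
    for line, _, present_above in candidates:
        if not present_above:
            return line, False                      # no upper copy to kill
    if candidates:
        return candidates[0][0], True               # must back-invalidate
    return None, False                              # no LRU-group line
```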
-
Publication number: 20130024624
Abstract: Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.
Type: Application
Filed: July 22, 2011
Publication date: January 24, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, James L. Hafner
-
Patent number: 8332586
Abstract: The present invention obtains with high precision, in a storage system, the effect of adding or removing cache memory, that is, the resulting change in the cache hit rate and in the performance of the storage system. To achieve this, when executing normal cache control in the operational environment of the storage system, the cache hit rate that would result from a changed cache memory capacity is also obtained. Furthermore, with reference to the obtained cache hit rate, the peak performance of the storage system is obtained, and, with reference to the target performance, the cache memory and the number of disks and other resources that are additionally required are obtained.
Type: Grant
Filed: March 30, 2009
Date of Patent: December 11, 2012
Assignee: Hitachi, Ltd.
Inventors: Masanori Takada, Shuji Nakamura, Kentaro Shimada
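Estimating the hit rate at capacities other than the installed one is classically done with an LRU stack (Mattson) simulation. The abstract does not name its method, so the sketch below is one standard way, offered as an assumption, to obtain "hit rate if capacity were X" from the live access stream.

```python
import collections

class GhostLRU:
    """Mattson stack simulation: replay the access stream through a full
    LRU stack; a hit at stack distance d would be a hit in any cache of
    more than d lines, yielding hit rates for capacities not installed."""

    def __init__(self):
        self.stack = collections.OrderedDict()   # LRU at front, MRU at end
        self.hits_at = collections.Counter()
        self.refs = 0

    def access(self, addr):
        self.refs += 1
        if addr in self.stack:
            keys = list(self.stack)
            depth = len(keys) - 1 - keys.index(addr)   # 0 = MRU position
            self.hits_at[depth] += 1
            self.stack.pop(addr)
        self.stack[addr] = True                        # reinsert at MRU end

    def hit_rate(self, capacity_lines):
        hits = sum(n for d, n in self.hits_at.items() if d < capacity_lines)
        return hits / self.refs if self.refs else 0.0
```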
-
Patent number: 8327076
Abstract: The disclosure is related to data storage systems having multiple caches and to management of cache activity in such systems. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache, and a controller coupled to the volatile memory and the non-volatile memory. The controller can be configured to selectively transfer read data from the first read cache to the second read cache based on a least recently used indicator of the read data, and to selectively transfer write data from the first write cache to the second write cache based on a least recently written indicator of the write data.
Type: Grant
Filed: May 13, 2009
Date of Patent: December 4, 2012
Assignee: Seagate Technology LLC
Inventors: Robert D. Murphy, Robert W. Dixon, Steven S. Williams
-
Publication number: 20120303905
Abstract: In embodiments of the present invention, a file access request sent by an application to a hard disk is obtained, file information of the accessed file is acquired according to the request, the file accessed by the application is fragmented to obtain at least one file fragment, a condition for copying the file fragment from the hard disk to the cache is set, and the file fragment is copied to the cache when the copying condition is met in a storage unit. Compared with a technical solution in the prior art where the file is copied to the cache, utilization efficiency of the cache is effectively improved.
Type: Application
Filed: August 9, 2012
Publication date: November 29, 2012
Inventors: Wei ZHANG, Mingchang WEI, Zhixin CHEN
-
Publication number: 20120303872
Abstract: Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped, and the grouped modified tracks are destaged from the second cache to the storage device.
Type: Application
Filed: April 25, 2012
Publication date: November 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Matthew J. Kalos
-
Publication number: 20120290795
Abstract: System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable.
Type: Application
Filed: July 9, 2012
Publication date: November 15, 2012
Applicant: AT&T MOBILITY II LLC
Inventor: Sangar Dowlatkhah
-
Publication number: 20120272010
Abstract: A process for caching data in a cache memory includes: upon detecting that a first page is in a first or second list, moving the first page to a most recently used (MRU) position in the second list. Upon detecting that the first page is in a first history list, a first target size is updated to a second target size for the first and second lists, the first page is moved from the first history list to the MRU position in the second list, and the first page is fetched to the cache memory. Upon detecting that the first page is in a second history list, the second target size is updated to a third target size for the first and second lists, and the first page is moved from the second history list to the MRU position in the second list.
Type: Application
Filed: July 3, 2012
Publication date: October 25, 2012
Applicant: International Business Machines Corporation
Inventors: James Allen Larkby-Lahet, Prashant Pandey
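The list movements described here follow the shape of an ARC-style policy, and a sketch of just those movements is below. The +/-1 adaptation steps, the cold-miss path, and the omission of eviction handling are all simplifying assumptions beyond what the abstract states.

```python
def on_access(page, t1, t2, b1, b2, target, capacity, fetch):
    """t1/t2 hold resident pages, b1/b2 are their history (ghost) lists,
    and `target` is the adaptive target size for t1. Returns the updated
    target; lists are plain Python lists with index 0 as the MRU end."""
    if page in t1:
        t1.remove(page); t2.insert(0, page)      # hit: MRU of second list
    elif page in t2:
        t2.remove(page); t2.insert(0, page)
    elif page in b1:                             # t1 history hit: grow t1
        target = min(target + 1, capacity)
        b1.remove(page); t2.insert(0, page)
        fetch(page)                              # bring page into the cache
    elif page in b2:                             # t2 history hit: shrink t1
        target = max(target - 1, 0)
        b2.remove(page); t2.insert(0, page)
        fetch(page)
    else:
        t1.insert(0, page)                       # cold miss (assumed path)
        fetch(page)
    return target
```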