Combined Replacement Modes Patents (Class 711/134)
-
Patent number: 10067981
Abstract: A framework for intelligent memory replacement of loaded data blocks by requested data blocks is provided. For example, various factors are taken into account to optimize the selection of loaded data blocks to be discarded from the memory, in favor of the requested data blocks to be loaded into the memory. In some implementations, correlations between the requested data blocks and the loaded data blocks are used to determine which of the loaded data blocks may become candidates to be discarded from memory.
Type: Grant
Filed: November 21, 2014
Date of Patent: September 4, 2018
Assignee: SAP SE
Inventors: Nairu Fan, Tianyu Luwang, Conglun Yao, Wen-Syan Li
-
Patent number: 9996469
Abstract: The invention introduces a method for prefetching data, which contains at least the following steps: receiving a first read request and a second read request from a first LD/ST (Load/Store) queue and a second LD/ST queue, respectively, in parallel; obtaining a first cache-line number and a first offset from the first read request and a second cache-line number and a second offset from the second read request in parallel; obtaining a third cache-line number from a cache-line number register; obtaining a third offset from an offset register; determining whether an offset trend is formed according to the first to third cache-line numbers and the first to third offsets; and directing an L1 (Level-1) data cache to prefetch data of a cache line when the offset trend is formed.
Type: Grant
Filed: December 2, 2016
Date of Patent: June 12, 2018
Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
Inventor: Chen Chen
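The offset-trend test can be sketched as follows. Treating a trend as three same-line accesses with monotonically increasing or decreasing offsets, and prefetching the adjacent line, is an illustrative assumption rather than the patent's exact criterion.

```python
def detect_offset_trend(reqs):
    """Given three (cache_line, offset) pairs ordered oldest-first,
    return the cache line to prefetch if an offset trend is formed,
    else None.  Sketch only; constants and rules are assumptions."""
    (l0, o0), (l1, o1), (l2, o2) = reqs
    # Require all three accesses to fall in the same cache line.
    if not (l0 == l1 == l2):
        return None
    if o0 < o1 < o2:          # ascending toward the line's end
        return l2 + 1         # prefetch the next sequential line
    if o0 > o1 > o2:          # descending toward the line's start
        return l2 - 1         # prefetch the previous line
    return None
```

With byte offsets 0, 8, 16 inside line 7, the sketch would prefetch line 8; mixed lines or non-monotonic offsets yield no prefetch.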
-
Patent number: 9940239
Abstract: A set-associative cache memory includes a bank of counters including a respective one of a plurality of counters for each cache line stored in a plurality of congruence classes of the cache memory. Prior to receiving a memory access request that maps to a particular congruence class of the cache memory, the cache memory pre-selects a first victim cache line stored in a particular entry of a particular congruence class for eviction based on at least a counter value of the victim cache line. In response to receiving a memory access request that maps to the particular congruence class and that misses, the cache memory evicts the pre-selected first victim cache line from the particular entry, installs a new cache line in the particular entry, and pre-selects a second victim cache line from the particular congruence class based on at least a counter value of the second victim cache line.
Type: Grant
Filed: October 7, 2016
Date of Patent: April 10, 2018
Assignee: International Business Machines Corporation
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
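A minimal model of pre-selecting the victim before the miss arrives. Using the lowest counter value as the selection rule is an assumption; the abstract only says selection is "based on at least a counter value".

```python
class CongruenceClass:
    """Sketch of counter-based victim pre-selection (hypothetical
    rule: the entry with the lowest counter wins; ties break by
    entry index)."""
    def __init__(self, lines, counters):
        self.lines = lines          # one tag per entry
        self.counters = counters    # one counter per entry
        self.victim = self._pick()  # pre-selected before any miss

    def _pick(self):
        return min(range(len(self.lines)), key=lambda i: self.counters[i])

    def miss(self, new_tag, new_count=0):
        """On a miss: evict the pre-selected victim, install the new
        line in its entry, then pre-select the next victim."""
        evicted = self.lines[self.victim]
        self.lines[self.victim] = new_tag
        self.counters[self.victim] = new_count
        self.victim = self._pick()
        return evicted
```

The point of pre-selection is that the eviction decision is off the miss path: `_pick` runs after the previous miss completes, not while the new request waits.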
-
Patent number: 9940246
Abstract: In one embodiment, a set-associative cache memory has a plurality of congruence classes each including multiple entries for storing cache lines of data. The cache memory includes a bank of counters, which includes a respective one of a plurality of counters for each cache line stored in the plurality of congruence classes. The cache memory selects victim cache lines for eviction from the cache memory by reference to counter values of counters within the bank of counters. A dynamic distribution of counter values of counters within the bank of counters is determined. In response, the amount by which counter values of counters within the bank of counters are adjusted on a cache miss is itself adjusted based on the dynamic distribution of the counter values.
Type: Grant
Filed: October 7, 2016
Date of Patent: April 10, 2018
Assignee: International Business Machines Corporation
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
-
Patent number: 9934231
Abstract: Implementations described and claimed herein provide a system and methods for prioritizing data in a cache. In one implementation, a priority level, such as critical, high, and normal, is assigned to cached data. The priority level dictates how long the data is cached and consequently, the order in which the data is evicted from the cache memory. Data assigned a priority level of critical will be resident in cache memory unless heavy memory pressure causes the system to reclaim memory and all data assigned a priority state of high or normal has been evicted. High priority data is cached longer than normal priority data, with normal priority data being evicted first. Accordingly, important data assigned a priority level of critical, such as a deduplication table, is kept resident in cache memory at the expense of other data, regardless of the frequency or recency of use of the data.
Type: Grant
Filed: December 22, 2014
Date of Patent: April 3, 2018
Assignee: Oracle International Corporation
Inventors: Mark Maybee, Lisa Week
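The described eviction ordering might be sketched like this. The `heavy_pressure` flag and the insertion-order tie-breaking within a level are illustrative assumptions.

```python
def choose_eviction(entries, heavy_pressure=False):
    """entries: list of (name, priority) pairs, priority one of
    'critical', 'high', 'normal'.  Returns the name to evict next,
    or None if nothing may be evicted.  Hypothetical ordering rule:
    normal first, then high; critical only under heavy pressure
    once everything else is gone."""
    for level in ('normal', 'high'):
        victims = [name for name, prio in entries if prio == level]
        if victims:
            return victims[0]   # evict within a level in insertion order
    if heavy_pressure:          # critical data is reclaimed only here
        critical = [name for name, prio in entries if prio == 'critical']
        if critical:
            return critical[0]
    return None
```

Note that recency and frequency never enter the decision: a critical deduplication table outlives a hot normal-priority buffer, exactly the trade-off the abstract describes.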
-
Patent number: 9921974
Abstract: Provided are a computer program product, system, and method for assigning cache control blocks and cache lists to multiple processors to cache and demote tracks in a storage system. Cache control blocks are assigned to processors. A track added to the cache for one of the processors is assigned one of the cache control blocks assigned to the processor. There are a plurality of lists, one list for each of the processors and the cache control blocks assigned to the processor. A track to add to cache for a request is received from an initiating processor comprising one of the processors. One of the cache control blocks assigned to the initiating processor is allocated for the track to add to the cache. The track to add to the cache is indicated on the list for the initiating processor.
Type: Grant
Filed: August 21, 2015
Date of Patent: March 20, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos
-
Patent number: 9910599
Abstract: A receiver unit receives data write commands for a memory device. A control unit determines a use situation of an RMW cache used in a read-modify-write process by the memory device, on the basis of write sizes, a reception frequency, and the number of received commands. A control unit decides whether or not to execute a read-modify-write process by a storage control apparatus on the basis of the determination result.
Type: Grant
Filed: December 7, 2015
Date of Patent: March 6, 2018
Assignee: FUJITSU LIMITED
Inventors: Tsunemichi Harada, Masatoshi Nakamura, Atsushi Igashira, Hideo Takahashi
-
Patent number: 9819554
Abstract: At a service on a device, for a first property including a first one or more resources: maintaining first invalidation information relating to resources associated with the first property in the memory on the device; and controlling receipt of invalidation information relating to the first property based on an amount of space in the memory used by the invalidation information.
Type: Grant
Filed: November 23, 2013
Date of Patent: November 14, 2017
Assignee: Level 3 Communications, LLC
Inventors: Lewis Robert Varney, Laurence R. Lipstone, William Crowder, Andrew Swart, Christopher Newton
-
Patent number: 9779029
Abstract: Various cache replacement policies are described whose goals are to identify items for eviction from the cache that are not accessed often and to identify items stored in the cache that are regularly accessed that should be maintained longer in the cache. In particular, the cache replacement policies are useful for workloads that have a strong temporal locality, that is, items that are accessed very frequently for a period of time and then quickly decay in terms of further accesses. In one embodiment, a variation on the traditional least recently used caching algorithm uses a reuse period or reuse distance for an accessed item to determine whether the item should be promoted in the cache queue. In one embodiment, a variation on the traditional two queue caching algorithm evicts items from the cache from both an active queue and an inactive queue.
Type: Grant
Filed: November 6, 2012
Date of Patent: October 3, 2017
Assignee: Facebook, Inc.
Inventors: Eitan Frachtenberg, Yuehai Xu
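The reuse-period variation on LRU could look roughly like this sketch, where an item is promoted to the MRU position only if it is re-accessed within a threshold of its previous access. The logical tick clock and the threshold parameter are assumptions.

```python
from collections import OrderedDict

class ReuseAwareLRU:
    """LRU variant (sketch): a hit promotes an item only when its
    reuse period (ticks since its last access) is at most
    `reuse_threshold`; otherwise the item keeps its queue position,
    so one-shot bursts drift toward eviction."""
    def __init__(self, capacity, reuse_threshold):
        self.capacity = capacity
        self.reuse_threshold = reuse_threshold
        self.q = OrderedDict()   # key -> last-access tick, LRU first
        self.tick = 0

    def access(self, key):
        self.tick += 1
        if key in self.q:
            if self.tick - self.q[key] <= self.reuse_threshold:
                self.q.move_to_end(key)   # promote: short reuse period
            self.q[key] = self.tick
            return True                   # hit
        if len(self.q) >= self.capacity:
            self.q.popitem(last=False)    # evict the LRU item
        self.q[key] = self.tick
        return False                      # miss
```

Unlike plain LRU, a hit with a long reuse period does not rescue the item: it stays where it was and can be evicted despite the recent access.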
-
Patent number: 9729603
Abstract: A method comprises associating at least one cache replacement granularity value with a given one of a plurality of content streams comprising a number of segments, receiving a request for a given segment of the given content stream in a network element, identifying a given portion of the given content stream which contains the given segment, updating a value corresponding to the given portion of the given content stream, and determining whether to store the given portion of the given content stream in a memory of the network element based at least in part on the updated value corresponding to the given portion. The at least one cache replacement granularity value represents a given number of segments, the given content stream being separable into one or more portions based at least in part on the at least one cache replacement granularity value.
Type: Grant
Filed: September 27, 2012
Date of Patent: August 8, 2017
Assignee: Alcatel Lucent
Inventors: Andre Beck, Jairo O. Esteban, Steven A. Benno, Volker F. Hilt, Ivica Rimac, Yang Guo
-
Patent number: 9652395
Abstract: In one aspect, a device includes a processor, memory accessible to the processor, and storage accessible to the processor. The storage bears instructions executable by the processor to determine a context associated with the device and at least in part based on the determination, configure a standby portion of the memory.
Type: Grant
Filed: May 8, 2015
Date of Patent: May 16, 2017
Assignee: Lenovo (Singapore) Pte. Ltd.
Inventors: John Carl Mese, Arnold S. Weksler, Rod D. Waltermann, Nathan J. Peterson, Russell Speight VanBlon
-
Patent number: 9594700
Abstract: A method and a system are provided for controlling memory accesses. Memory access requests including at least a first speculative memory access request and a first non-speculative memory access request are received and a memory access request is selected from the memory access requests. A memory access command is generated to process the selected memory access request.
Type: Grant
Filed: April 17, 2013
Date of Patent: March 14, 2017
Assignee: NVIDIA Corporation
Inventor: William J. Dally
-
Patent number: 9535850
Abstract: A method and apparatus are provided in which a host device and a peripheral device are adapted to perform efficient data transfers. The host receives one or more bytes in a memory transfer from the peripheral device, and determines an operation for modifying the memory transfer without reading the one or more bytes. Rather, the modification may be performed based on information in a header accompanying the transferred bytes. The host device modifies the memory transfer based on the determination, and writes the modified memory transfer to memory.
Type: Grant
Filed: January 28, 2015
Date of Patent: January 3, 2017
Assignee: Google Inc.
Inventor: Andrew Gallatin
-
Patent number: 9519588
Abstract: Cache lines of a data cache may be assigned to a specific page type or color. In addition, the computing system may monitor when a cache line assigned to the specific page color is allocated in the cache. As each cache line assigned to a particular page color is allocated, the computing system may compare a respective index associated with each of the cache lines to determine maximum and minimum indices for that page color. These indices define a block of the cache that stores the data assigned to the page color. Thus, when the data of a page color is evicted from the cache, instead of searching the entire cache to locate the cache lines, the computing system uses the maximum and minimum indices as upper and lower bounds to reduce the portion of the cache that is searched.
Type: Grant
Filed: February 24, 2016
Date of Patent: December 13, 2016
Assignee: CISCO TECHNOLOGY, INC.
Inventor: Donald Edward Steiss
-
Patent number: 9513818
Abstract: A tape drive adapted for providing a best access order for files or data sets on a tape loaded into the tape drive. The tape drive includes a processor and memory storing a file location table for the tape. The file location table includes identifiers for a plurality of files on the tape and location information for the plurality of files on the tape. The tape drive includes an order determination module, executed by the processor, processing an order request. The order request, from a host or user, includes a list of the files on the tape from which to generate, based on the location information in the file location table, a reordered list defining an order for accessing the files on the tape. The reordered list or best access order has (or produces via tape drive access) an access time for the files that is minimal or reduced.
Type: Grant
Filed: May 28, 2014
Date of Patent: December 6, 2016
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventor: Bradley Edwin Whitney
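Once file positions are known from the file location table, the simplest reordering is a sort by physical position. The `(wrap, start_block)` location model below is a simplifying assumption, not the drive's actual cost function, which would also account for direction changes and repositioning time.

```python
def best_access_order(requested, location):
    """Reorder `requested` file ids so they are read in ascending tape
    position, avoiding back-and-forth seeks.  `location` maps
    file id -> (wrap, start_block); tuples compare lexicographically,
    so files sort by wrap first, then by block within the wrap."""
    return sorted(requested, key=lambda f: location[f])
```

A single forward pass over the tape is the lower bound for a one-direction model; a real drive's module would refine this with its seek-time profile.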
-
Patent number: 9483310
Abstract: Systems, methods, and software described herein provide accelerated input and output of data in a work process. In one example, a method of operating a support process within a computing system for providing accelerated input and output for a work process includes monitoring for a file mapping attempt initiated by the work process. The method further includes, in response to the file mapping attempt, identifying a first region in memory already allocated to a cache service, and associating the first region in memory with the work process.
Type: Grant
Filed: April 29, 2014
Date of Patent: November 1, 2016
Assignee: BLUEDATA SOFTWARE, INC.
Inventor: Michael J. Moretti
-
Patent number: 9462074
Abstract: An apparatus and method to enhance existing caches in a network to better support streaming media storage and distribution. Helper machines are used inside the network to implement several methods which support streaming media including segmentation of streaming media objects into smaller units, cooperation of Helper machines, and novel placement and replacement policies for segments of media objects.
Type: Grant
Filed: October 7, 2015
Date of Patent: October 4, 2016
Assignee: Sound View Innovations, LLC
Inventors: Katherine H. Guo, Sanjoy Paul, Tze Sing Eugene Ng, Hui Zhang, Markus A. Hofmann
-
Patent number: 9402058
Abstract: In order to stably deliver content data over a network, a content delivery system is provided with: a content retention module for storing content data consisting of hierarchically encoded hierarchical data; a cache retention module for caching content data; a hierarchical score determination module for calculating an access requirement frequency for each piece of cached hierarchical data; a hierarchical arrangement determination module for replacing hierarchical data having an access requirement frequency lower than a fixed value with the hierarchical data stored in the content retention module; and a content delivery module for delivering content data in response to requests from a client device.
Type: Grant
Filed: July 22, 2010
Date of Patent: July 26, 2016
Assignee: NEC CORPORATION
Inventor: Eiji Takahashi
-
Patent number: 9390052
Abstract: Embodiments of a distributed caching system are disclosed that cache data across multiple computing devices on a network. In one embodiment, a first cache system serves as a caching front-end to a distributed cluster of additional cache systems. The first cache system can distribute cache requests to the additional cache systems. The first distributed caching system can also serve as a cache server itself, by storing data on its own internal cache. For example, the first cache system can first attempt to find a requested data item on the internal cache, but, if the lookup results in a cache miss, the first cache system can search the additional cache systems for the data. In some embodiments, the first cache system is configured to multiplex requests to each additional cache system over a single negotiated streaming protocol connection, which allows for network efficiencies and faster detection of failure.
Type: Grant
Filed: December 19, 2012
Date of Patent: July 12, 2016
Assignee: Amazon Technologies, Inc.
Inventors: Vishal Parakh, Antoun Joubran Kanawati
-
Patent number: 9390128
Abstract: A computer system and method is disclosed for storing large volumes of event data. The system receives access event logs including indications of access events to files stored on a set of storage devices. Each indication includes respective values for a plurality of access event attributes. The system uses the indications to store multiple segment files, each corresponding to a respective subset of the indications. Each segment file stores data as multiple tiles, where each tile includes a compressed copy of those access event indications of the segment file that have a shared value for one of the access event attributes. Each tile is stored contiguously within a set of storage devices.
Type: Grant
Filed: April 16, 2010
Date of Patent: July 12, 2016
Assignee: Symantec Corporation
Inventor: Partha Seetala
-
Patent number: 9374289
Abstract: A system comprising: one or more server devices to: set a first network threshold level for determining network congestion in a network; set rate limiting criteria for determining when one or more subscribers will be rate limited; detect an increase in network congestion at a base station above the first network threshold level; identify one or more subscribers meeting one or more of the rate limiting criteria; rate limit network traffic associated with the one or more subscribers; detect a decrease in network congestion at the base station below a second network threshold level; and remove the rate limiting of the network traffic associated with the one or more subscribers.
Type: Grant
Filed: February 28, 2012
Date of Patent: June 21, 2016
Assignees: Verizon Patent and Licensing Inc., Cellco Partnership
Inventors: Lalit R. Kotecha, John F. Macias, Patricia R. Chang, David Chiang
-
Patent number: 9244851
Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
Type: Grant
Filed: January 22, 2013
Date of Patent: January 26, 2016
Assignee: International Business Machines Corporation
Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Arthur J. O'Neill
-
Patent number: 9176879
Abstract: A mechanism for evicting a cache line from a cache memory includes first selecting for eviction a least recently used cache line of a group of invalid cache lines. If all cache lines are valid, selecting for eviction a least recently used cache line of a group of cache lines in which no cache line of the group of cache lines is also stored within a higher level cache memory such as the L1 cache, for example. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, selecting for eviction the least recently used cache line stored in the cache memory.
Type: Grant
Filed: July 19, 2013
Date of Patent: November 3, 2015
Assignee: Apple Inc.
Inventors: Brian P. Lilly, Gerard R. Williams, III, Mahnaz Sadoughi-Yarandi, Perumal R. Subramonium, Hari S. Kannan, Prashant Jain
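The three-tier selection order maps naturally onto a cascade of candidate filters. The dict-based line model and the age-as-LRU encoding (higher age = least recently used) are assumptions for the sketch.

```python
def select_victim(lines):
    """lines: list of dicts with keys 'valid', 'in_l1', 'lru_age'.
    Mirrors the described order:
      1) LRU line among invalid lines;
      2) LRU line not also held in the higher-level (L1) cache;
      3) LRU line overall."""
    def lru_of(candidates):
        candidates = list(candidates)
        if not candidates:
            return None
        return max(candidates, key=lambda i: lines[i]['lru_age'])

    idx = lru_of(i for i, l in enumerate(lines) if not l['valid'])
    if idx is not None:
        return idx
    idx = lru_of(i for i, l in enumerate(lines) if not l['in_l1'])
    if idx is not None:
        return idx
    return lru_of(range(len(lines)))
```

Preferring non-inclusive lines (tier 2) avoids back-invalidating the L1, which is the practical motivation for the middle tier.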
-
Patent number: 9128892
Abstract: One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location that includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions, generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location, and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest.
Type: Grant
Filed: December 10, 2012
Date of Patent: September 8, 2015
Assignee: NETFLIX, INC.
Inventors: David Fullagar, Kenneth W. Florance, Ian Van Hoven
-
Patent number: 9128966
Abstract: Aspects provide a method of determining a storage location for a data item, including providing first and second data storage locations, the first location having an appreciably faster access speed than the second, the data storage locations are primary storage locations providing persistent storage, accessing a score associated with the data item, the score being calculated based on a frequency of access; and selecting only one of the storage locations based on the score with respect to other data scores, wherein the data item is stored in only one of the storage locations at any time, re-calculating the scores, wherein the score is accessed from a score table of data items; and in response to re-calculating of the scores, causing a change in the selection of the data storage location, removing the data item from a current storage location and adding the data item to a newly selected storage location.
Type: Grant
Filed: October 4, 2013
Date of Patent: September 8, 2015
Assignee: International Business Machines Corporation
Inventors: Utz Bacher, Akshay V. Rao, Thomas Spatzier
-
Patent number: 9104599
Abstract: Apparatuses, systems, methods, and computer program products are disclosed for destaging cached data. A method includes caching write data in a nonvolatile solid-state cache by appending the data to a log of the nonvolatile solid-state cache. The log includes a sequential, log-based structure preserved in the nonvolatile solid-state cache. A method includes destaging at least a portion of the data from the nonvolatile solid-state cache to a backing store in a cache log order. The cache log order comprises an order in which the data was appended to the log of the nonvolatile solid-state cache.
Type: Grant
Filed: April 15, 2011
Date of Patent: August 11, 2015
Assignee: Intelligent Intellectual Property Holdings 2 LLC
Inventors: David Atkisson, David Flynn
-
Patent number: 9086972
Abstract: A computer program product, system, and method for managing metadata for caching devices during shutdown and restart procedures. Fragment metadata for each fragment of data from the storage server stored in the cache device is generated. The fragment metadata is written to at least one chunk of storage in the cache device in a metadata directory in the cache device. For each of the at least one chunk in the cache device to which the fragment metadata is written, chunk metadata is generated for the chunk and written to the metadata directory in the cache device. Header metadata having information on access of the storage server is written to the metadata directory in the cache device. The written header metadata, chunk metadata, and fragment metadata are used to validate the metadata directory and the fragment data in the cache device during a restart operation.
Type: Grant
Filed: July 8, 2013
Date of Patent: July 21, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Stephen L. Blinick, Clement L. Dickey, Xiao-Yu Hu, Nikolas Ioannou, Ioannis Koltsidas, Paul H. Muench, Roman Pletka, Sangeetha Seshadri
-
Patent number: 9081501
Abstract: A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enables a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC).
Type: Grant
Filed: January 10, 2011
Date of Patent: July 14, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sameh Asaad, Ralph E. Bellofatto, Michael A. Blocksome, Matthias A. Blumrich, Peter Boyle, Jose R. Brunheroto, Dong Chen, Chen-Yong Cher, George L. Chiu, Norman Christ, Paul W. Coteus, Kristan D. Davis, Gabor J. Dozsa, Alexandre E. Eichenberger, Noel A. Eisley, Matthew R. Ellavsky, Kahn C. Evans, Bruce M. Fleischer, Thomas W. Fox, Alan Gara, Mark E. Giampapa, Thomas M. Gooding, Michael K. Gschwind, John A. Gunnels, Shawn A. Hall, Rudolf A. Haring, Philip Heidelberger, Todd A. Inglett, Brant L. Knudson, Gerard V. Kopcsay, Sameer Kumar, Amith R. Mamidala, James A. Marcella, Mark G. Megerian, Douglas R. Miller, Samuel J. Miller, Adam J. Muff, Michael B. Mundy, John K. O'Brien, Kathryn M. O'Brien, Martin Ohmacht, Jeffrey J. Parker, Ruth J. Poole, Joseph D. Ratterman, Valentina Salapura, David L. Satterfield, Robert M. Senger, Brian Smith, Burkhard Steinmacher-Burow, William M. Stockdell, Craig B. Stunkel, Krishnan Sugavanam, Yutaka Sugawara, Todd E. Takken, Barry M. Trager, James L. Van Oosten, Charles D. Wait, Robert E. Walkup, Alfred T. Watson, Robert W. Wisniewski, Peng Wu
-
Patent number: 9069678
Abstract: A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.
Type: Grant
Filed: July 26, 2011
Date of Patent: June 30, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta, Joseph S. Hyde, II, Lee C. LaFrese
-
Patent number: 9043550
Abstract: A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.
Type: Grant
Filed: November 6, 2013
Date of Patent: May 26, 2015
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Patent number: 9037803
Abstract: In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool are determined. A random number is generated as a decimal value of 0 to 1 for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool.
Type: Grant
Filed: March 6, 2013
Date of Patent: May 19, 2015
Assignee: International Business Machines Corporation
Inventors: Sam S. Lightstone, Adam J. Storm
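The random persistence rule is simple enough to state directly as a sketch; the seeded RNG below exists only to make the example reproducible, and the function name is an assumption.

```python
import random

def scan_evict_candidates(pages, persist_fraction, rng=None):
    """During a table scan, keep each buffered page with probability
    `persist_fraction` (a random draw in [0, 1) below the fraction
    persists the page); the rest become eviction candidates."""
    rng = rng or random.Random()
    keep, candidates = [], []
    for page in pages:
        if rng.random() < persist_fraction:
            keep.append(page)        # persist in the buffer pool
        else:
            candidates.append(page)  # candidate for eviction
    return keep, candidates
```

Randomizing which pages survive avoids the pathological case of a scan slightly larger than the pool, where strict LRU would evict every page just before it is needed again.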
-
Patent number: 9032158
Abstract: A method of identifying a cache line of a cache memory (180) for replacement is disclosed. Each cache line in the cache memory has a stored sequence number and a stored transaction data stream identifying label. A request (e.g., 400) associated with a label identifying a transaction data stream is received. The label corresponds to the stored transaction data stream identifying label of the cache line. The stored sequence number of the cache line is compared with a response sequence number. The response sequence number is associated with the stored transaction data stream identifying label of the cache line. The cache line is identified for replacement based on the comparison.
Type: Grant
Filed: April 26, 2011
Date of Patent: May 12, 2015
Assignee: Canon Kabushiki Kaisha
Inventor: David Charles Ross
-
Patent number: 9032157
Abstract: Disclosed is a computer system (100) comprising a processor unit (110) adapted to run a virtual machine in a first operating mode; a cache (120) accessible to the processor unit, said cache comprising a plurality of cache rows (1210), each cache row comprising a cache line (1214) and an image modification flag (1217) indicating a modification of said cache line caused by the running of the virtual machine; and a memory (140) accessible to the cache controller for storing an image of said virtual machine; wherein the processor unit comprises a replication manager adapted to define a log (200) in the memory prior to running the virtual machine in said first operating mode; and said cache further includes a cache controller (122) adapted to periodically check said image modification flags, write only the memory address of the flagged cache lines in the defined log, and subsequently clear the image modification flags.
Type: Grant
Filed: December 11, 2012
Date of Patent: May 12, 2015
Assignee: International Business Machines Corporation
Inventors: Sanjeev Ghai, Guy L. Guthrie, Geraint North, William J. Starke, Phillip G. Williams
-
Patent number: 9026735
Abstract: Systems and methods are provided for a hardware-implemented multi-buffer. A system includes a buffer memory comprising a shared memory space, where the memory space is shared between a first buffer and a second buffer, and where a dynamic delineation of the memory space between the first buffer and the second buffer is identified by a divider address. A dynamic buffer control circuit includes a control memory that is configured to store the divider address, a first memory utilization metric associated with the first buffer, and a second memory utilization metric associated with the second buffer. A system further includes one or more comparator circuits configured to compare the first memory utilization metric and the second memory utilization metric, where the dynamic buffer control circuit changes the divider address based on the comparison.
Type: Grant
Filed: November 15, 2012
Date of Patent: May 5, 2015
Assignee: Marvell Israel (M.I.S.L.) Ltd.
Inventors: Ruven Torok, Oren Shafrir
-
Patent number: 9021205
Abstract: A mechanism for page replacement for cache memory is disclosed. A method of the disclosure includes referencing an entry of a data structure of a cache in memory to identify a stored value of an eviction counter, the stored value of the eviction counter placed in the entry when a page of a file previously stored in the cache was evicted from the cache, determining a refault distance of the page of the file based on a difference between the stored value of the eviction counter and a current value of the eviction counter, and adjusting a ratio of cache lists maintained by the processing device to track pages in the cache, the adjusting based on the determined refault distance.
Type: Grant
Filed: November 30, 2012
Date of Patent: April 28, 2015
Assignee: Red Hat, Inc.
Inventor: Johannes Weiner
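Refault distance can be tracked with a shadow table of eviction-counter snapshots, as in this sketch; the class and method names are assumptions, and the interpretation of a small distance (the page was evicted too eagerly) follows the abstract's use of the distance to rebalance cache lists.

```python
class RefaultTracker:
    """When a page is evicted, record the eviction counter's current
    value in a shadow entry; when the page faults back in, the
    refault distance is the counter delta, i.e. how many evictions
    happened in between.  A small distance suggests the page was
    evicted while still warm."""
    def __init__(self):
        self.evictions = 0
        self.shadow = {}           # page -> counter value at eviction

    def evict(self, page):
        self.shadow[page] = self.evictions
        self.evictions += 1

    def refault_distance(self, page):
        if page not in self.shadow:
            return None            # cold fault: page was never evicted
        return self.evictions - self.shadow.pop(page)
```

The distance, not wall-clock time, is what matters: it is measured in units of cache churn, so it stays meaningful across fast and slow workloads.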
-
Publication number: 20150106570
Abstract: A cache apparatus stores part of a plurality of accessible data blocks into a cache area. A calculation part calculates, for each pair of data blocks of the plurality of data blocks, an expected value of the number of accesses made after one of the data blocks is accessed until the other of the data blocks is accessed, on the basis of a probability that when each of the plurality of data blocks is accessed, each data block that is likely to be accessed next is accessed next. When a data block is read from outside the cache area, a determination part determines a data block to be discarded from the cache area, on the basis of the expected value of the number of accesses made after the read data block is accessed until each of the plurality of data blocks is accessed.
Type: Application
Filed: September 23, 2014
Publication date: April 16, 2015
Inventor: Toshihiro Shimizu
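The "expected number of accesses until the other block is accessed" reads as a first-passage expectation over the next-access probabilities. This sketch computes it by fixed-point iteration over the standard recurrence E[i] = 1 + Σ_{k≠target} P[i][k]·E[k]; treating the probabilities as a Markov chain is one possible reading of the abstract, not the application's stated method.

```python
def expected_accesses(P, target, iters=10000):
    """P[i][k]: probability that block k is accessed immediately after
    block i.  Returns E, where E[i] is the expected number of accesses
    after block i is accessed until `target` is next accessed
    (counting the access to `target` itself)."""
    n = len(P)
    E = [0.0] * n
    for _ in range(iters):
        # One Jacobi-style sweep of E[i] = 1 + sum_{k != target} P[i][k] * E[k]
        E = [1.0 + sum(P[i][k] * E[k] for k in range(n) if k != target)
             for i in range(n)]
    return E
```

A block with a large expected distance to every likely-next access is the natural discard candidate under this policy, since it is the one the workload will want last.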
-
Patent number: 9009409
Abstract: A method to store objects in a memory cache is disclosed. A request is received from an application to store an object in a memory cache associated with the application. The object is stored in a cache region of the memory cache based on an identification that the object has no potential for storage in a shared memory cache and a determination that the cache region is associated with a storage policy that specifies that objects to be stored in the cache region are to be stored in a local memory cache and that a garbage collector is not to remove objects stored in the cache region from the local memory cache.
Type: Grant
Filed: July 12, 2011
Date of Patent: April 14, 2015
Assignee: SAP SE
Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio G. Petev
-
Patent number: 9003126
Abstract: Techniques and mechanisms for adaptively changing between replacement policies for selecting lines of a cache for eviction. In an embodiment, evaluation logic determines a value of a performance metric for writes to a non-volatile memory. Based on the determined value of the performance metric, a parameter value of a replacement policy is determined. In another embodiment, cache replacement logic selects a line of the cache for data eviction in response to a policy unit providing an indication of the determined parameter value.
Type: Grant
Filed: September 25, 2012
Date of Patent: April 7, 2015
Assignee: Intel Corporation
Inventors: Qiong Cai, Nevin Hyuseinova, Serkan Ozdemir, Ferad Zyulkyarov, Marios Nicolaides, Blas Cuesta
-
Patent number: 9003125
Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
Type: Grant
Filed: June 14, 2012
Date of Patent: April 7, 2015
Assignee: International Business Machines Corporation
Inventors: Ekaterina M. Ambroladze, Michael Blake, Tim Bronson, Garrett Drapala, Pak-kin Mak, Arthur J. O'Neill
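A loose Python model of the lock-plus-marked-bit handoff may make the sequence easier to follow. Real hardware does this atomically in the cache directory; the class and method names here are invented for illustration.

```python
class CongruenceClass:
    """Simplified model of one congruence class with a class-wide lock
    and per-set marked bits."""
    def __init__(self, num_sets):
        self.locked = False
        self.marked = [False] * num_sets   # one marked bit per set

    def begin_miss(self, set_idx):
        # On a miss: lock the whole class, mark the chosen set, then drop
        # the lock. The marked bit alone now keeps other transactions off
        # that set while leaving the rest of the class accessible.
        if self.locked or self.marked[set_idx]:
            return False                   # another transaction owns it
        self.locked = True
        self.marked[set_idx] = True
        self.locked = False                # lock released once the set is marked
        return True

    def complete(self, set_idx):
        # Transaction done: reset the marked bit to unmarked.
        self.marked[set_idx] = False
```

The point of the two-level scheme is that the coarse lock is held only briefly; the fine-grained marked bit carries the protection for the rest of the transaction.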
-
Patent number: 8990504
Abstract: A cache page management method can include paging out a memory page to an input/output controller, paging the memory page from the input/output controller into a real memory, modifying the memory page in the real memory to an updated memory page and purging the memory page paged to the input/output controller.
Type: Grant
Filed: July 11, 2011
Date of Patent: March 24, 2015
Assignee: International Business Machines Corporation
Inventors: Tara Astigarraga, Michael E. Browne, Joseph Demczar, Eric C. Wieder
-
Patent number: 8990524
Abstract: A plurality of subgroups is provided, each with a least recently used (LRU) list of data elements associated with count variables. Each LRU list has a top entry to store the most recently used data element and a bottom entry to store the least recently used data element. When a data element is accessed, the value of its count variable is increased and the accessed data element is moved to the top entry of the LRU list of its subgroup. If the count value of the accessed data element at the top entry is greater than the count value of the data element at the bottom entry of the LRU list of a higher-priority subgroup, the two data elements are swapped.
Type: Grant
Filed: September 27, 2012
Date of Patent: March 24, 2015
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Mykel John Kramer
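A minimal Python sketch of the counted, prioritized LRU subgroups; the OrderedDict modeling and tier layout are illustrative assumptions, not HP's implementation.

```python
from collections import OrderedDict

class TieredLRU:
    """Subgroups ordered by priority, each an LRU list of element -> count.

    Index 0 is the lowest-priority subgroup. The front of each OrderedDict
    models the top (most recently used) entry, the back the bottom entry.
    """
    def __init__(self, num_tiers):
        self.tiers = [OrderedDict() for _ in range(num_tiers)]

    def access(self, tier_idx, elem):
        tier = self.tiers[tier_idx]
        # Increase the count and move the element to the top of its list.
        tier[elem] = tier.get(elem, 0) + 1
        tier.move_to_end(elem, last=False)
        # Compare against the bottom entry of the next higher-priority tier.
        if tier_idx + 1 < len(self.tiers) and self.tiers[tier_idx + 1]:
            upper = self.tiers[tier_idx + 1]
            bottom = next(reversed(upper))
            if tier[elem] > upper[bottom]:
                # Swap: the hot element is promoted, the cold one demoted.
                demoted_count = upper.pop(bottom)
                promoted_count = tier.pop(elem)
                upper[elem] = promoted_count
                upper.move_to_end(elem, last=False)
                tier[bottom] = demoted_count
                tier.move_to_end(bottom, last=False)
```

The swap lets frequently accessed elements migrate into higher-priority subgroups without a global sort.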
-
Patent number: 8984230
Abstract: A method of using a buffer within an indexing accelerator during periods of inactivity, comprising flushing indexing specific data located in the buffer, disabling a controller within the indexing accelerator, handing control of the buffer over to a higher level cache, and selecting one of a number of operation modes of the buffer. An indexing accelerator, comprising a controller and a buffer communicatively coupled to the controller, in which, during periods of inactivity, the controller is disabled and a buffer operating mode among a number of operating modes is chosen under which the buffer will be used.
Type: Grant
Filed: January 30, 2013
Date of Patent: March 17, 2015
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Onur Kocberber, Kevin T. Lim, Parthasarathy Ranganathan
-
Patent number: 8966180
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: March 1, 2013
Date of Patent: February 24, 2015
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
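Stripped of the hardware specifics (address calculation, shuffling, format conversion), the two primitives reduce to indexed reads and writes. A toy Python sketch, purely for illustration:

```python
def gather(memory, indices):
    # Gather: fetch only the useful elements, at fine granularity,
    # instead of streaming whole contiguous blocks.
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    # Scatter: write results back to the same non-contiguous addresses.
    for i, v in zip(indices, values):
        memory[i] = v

mem = [0, 10, 20, 30, 40]
picked = gather(mem, [4, 1])       # only two useful words are touched
scatter(mem, [0, 2], [99, 88])
```

The patent's contribution is doing this efficiently off-chip in hardware; the sketch only fixes the semantics of the two operations.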
-
Patent number: 8954674
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: October 8, 2013
Date of Patent: February 10, 2015
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
-
Patent number: 8954653
Abstract: A data storage system configured to efficiently manage system data, efficiently organize system data, and reduce system data redundancy is disclosed. In one embodiment, the data storage system can maintain memory allocation information configured to track defective allocation units. Memory allocation information can be further configured to provide information for locating the memory allocation units or memory locations in physical memory. Separate information that indicates locations of the data allocation units or memory locations and/or records defective memory locations may not be needed. Hence, redundancy can be reduced, efficiency can be increased, and improved performance can be attained.
Type: Grant
Filed: June 26, 2012
Date of Patent: February 10, 2015
Assignee: Western Digital Technologies, Inc.
Inventors: Jerry Lo, Johnny A. Lam
-
Patent number: 8949541
Abstract: A method for cleaning dirty data in an intermediate cache is disclosed. A dirty data notification, including a memory address and a data class, is transmitted by a level 2 (L2) cache to frame buffer logic when dirty data is stored in the L2 cache. The data classes may include evict first, evict normal and evict last. In one embodiment, data belonging to the evict first data class is raster operations data with little reuse potential. The frame buffer logic uses a notification sorter to organize dirty data notifications, where an entry in the notification sorter stores the DRAM bank page number, a first count of cache lines that have resident dirty data and a second count of cache lines that have resident evict_first dirty data associated with that DRAM bank. The frame buffer logic transmits dirty data associated with an entry when the first count reaches a threshold.
Type: Grant
Filed: November 14, 2011
Date of Patent: February 3, 2015
Assignee: NVIDIA Corporation
Inventors: David B. Glasco, Peter B. Holmqvist, George R. Lynch, Patrick R. Marchand, James Roberts, John H. Edmondson
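The notification sorter can be modeled in a few lines of Python. The two-counter entry layout and the threshold handling are simplified assumptions, not NVIDIA's frame buffer logic.

```python
from collections import defaultdict

class NotificationSorter:
    """Groups dirty-data notifications by DRAM bank page.

    Each entry keeps a count of resident dirty lines and a count of
    resident evict_first dirty lines; once the dirty count reaches the
    threshold, the caller should write that bank page's lines back.
    """
    def __init__(self, threshold):
        self.threshold = threshold
        self.entries = defaultdict(lambda: [0, 0])  # page -> [dirty, evict_first]

    def notify(self, bank_page, data_class):
        entry = self.entries[bank_page]
        entry[0] += 1
        if data_class == "evict_first":
            entry[1] += 1
        if entry[0] >= self.threshold:
            del self.entries[bank_page]   # entry retired; lines are written back
            return True                   # signal the caller to clean this page
        return False
```

Batching write-backs per bank page amortizes DRAM row activation, which is the reason for sorting by bank page rather than flushing line by line.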
-
Patent number: 8943275
Abstract: Systems, methods, and a computer program product are provided for differential storage and eviction of information resources in a browser cache. In an embodiment, the present invention provides differential storage and eviction by storing fetched resources in a memory and assigning, with a processor, a persistence score to each resource. Further embodiments relocate resources from one sub-cache to another based on their persistence scores, and remove resources from the memory based on their persistence scores.
Type: Grant
Filed: March 25, 2013
Date of Patent: January 27, 2015
Assignee: Google Inc.
Inventors: Jim Roskind, Jose Ricardo Vargas Puentes, Ashit Kumar Jain, Evan Martin
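A hypothetical Python sketch of score-based placement across sub-caches; the scoring formula, field names, and thresholds are all invented for illustration and do not come from the patent.

```python
def persistence_score(resource):
    # Hypothetical scoring: favor frequently used, small, reusable resources.
    score = resource["hits"] * 10
    if resource["kind"] in ("css", "js"):
        score += 50                      # critical page resources persist longer
    score -= resource["size_kb"] // 10   # penalize bulk
    return score

def place(resource, thresholds=(100, 25)):
    """Return the index of the sub-cache this resource belongs in,
    or None to remove it from memory entirely."""
    s = persistence_score(resource)
    if s < 0:
        return None                      # evicted outright
    for i, t in enumerate(thresholds):
        if s >= t:
            return i                     # sub-cache 0 is the most persistent
    return len(thresholds)               # lowest-persistence sub-cache
```

Re-running `place` as a resource's hit count changes is what "relocating between sub-caches" amounts to in this model.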
-
Patent number: 8943261
Abstract: The use of heap memory is optimized by extending a cache implementation with a CacheInterface base class. An instance of a ReferenceToCache is attached to the CacheInterface base class. The cache implementation is registered to a garbage collector application. The registration is stored as a reference list in a memory. In response to an unsatisfied cache allocation request, a garbage collection cycle is triggered to check heap occupancy. In response to exceeding a threshold value, the reference list is traversed for caches to be cleaned based upon a defined space constraint value. The caches are cleaned in accordance with the defined space constraint value.
Type: Grant
Filed: October 28, 2011
Date of Patent: January 27, 2015
Assignee: International Business Machines Corporation
Inventors: Avinash Koradhanyamath, Shirish T. S. Kuncolienkar, Ajith Ramanath
-
Patent number: 8930630
Abstract: The present disclosure relates to a cache memory controller for a set-associative cache memory in which two or more blocks are arranged in the same set. The cache memory controller includes a content modification status monitoring unit for monitoring whether the contents of any of the blocks arranged in the same set have been modified, and a cache block replacing unit for replacing a block whose contents have not been modified when some of the blocks in the set have modified contents.
Type: Grant
Filed: September 2, 2009
Date of Patent: January 6, 2015
Assignee: Sejong University Industry Academy Cooperation Foundation
Inventor: Gi Ho Park
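A minimal Python sketch of clean-block-preferred victim selection; the dict layout and the LRU fallback are illustrative assumptions.

```python
def choose_victim(set_blocks):
    """Pick an eviction victim from one cache set, preferring a block
    whose contents have not been modified (clean), since evicting it
    needs no write-back. Falls back to plain LRU if every block is dirty.

    set_blocks: list of dicts with 'tag', 'dirty', 'age' (higher = older).
    """
    clean = [b for b in set_blocks if not b["dirty"]]
    candidates = clean if clean else set_blocks
    return max(candidates, key=lambda b: b["age"])

ways = [
    {"tag": 0x1A, "dirty": True,  "age": 3},   # oldest, but dirty
    {"tag": 0x2B, "dirty": False, "age": 2},   # oldest clean block
    {"tag": 0x3C, "dirty": False, "age": 1},
]
victim = choose_victim(ways)
```

Skipping the dirty block trades a slightly worse LRU decision for avoiding a write-back on the eviction path.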
-
Patent number: 8904116
Abstract: A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache.
Type: Grant
Filed: April 1, 2014
Date of Patent: December 2, 2014
Assignee: Google Inc.
Inventors: Timo Burkard, David Presotto
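The admission test can be sketched in Python. This is a simplified model of the duration comparison, with invented names and times.

```python
def should_cache(item, now, last_request, cached):
    """Decide whether to admit `item` into the cache.

    last_request: item -> time of its previous request, if any.
    cached: items currently resident in the cache.
    """
    if item not in last_request:
        return False                 # never requested before: do not cache
    item_gap = now - last_request[item]
    # Admit if the item's request gap is shorter than the gap of at least
    # one resident item, i.e. it was re-requested sooner than something
    # already in the cache.
    return any(now - last_request[c] > item_gap for c in cached)

history = {"a": 90, "b": 40, "x": 85}
# At time 100, "x" re-arrives: its gap (15) beats "b"'s gap (60), so admit.
decision = should_cache("x", now=100, last_request=history, cached={"a", "b"})
```

Requiring a prior request before admission keeps one-off items from ever polluting the cache, which is the core of the claimed scheme.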