Least Recently Used Patents (Class 711/136)
  • Patent number: 10430188
    Abstract: Executing a Next Instruction Access Intent instruction by a computer. The processor obtains an access intent instruction indicating an access intent. The access intent is associated with an operand of a next sequential instruction. The access intent indicates usage of the operand by one or more instructions subsequent to the next sequential instruction. The computer executes the access intent instruction. The computer obtains the next sequential instruction. The computer executes the next sequential instruction, whose execution comprises, based on the access intent, adjusting one or more cache behaviors for the operand of the next sequential instruction.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: October 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Jacobi, Chung-Lung Kevin Shum, Timothy J Siegel, Gustav E Sittmann, III
  • Patent number: 10409504
    Abstract: Embodiments of the disclosure provide a method, a computer program product, and an apparatus for a soft-switch in a storage system, by setting data in a source of the soft-switch to be read-only and starting a replication process of the data to a destination of the soft-switch in response to a soft-switch request; recording at the source an update operation for the data during the replication process and synchronously recording the update operation into the destination; updating the replicated data at the destination with the synchronously recorded update operation in response to the completion of the replication process; and disabling a data access to the source and enabling a data access to the destination.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Bernie Bo Hu, Bob Biao Yan, Jia Huang, Ming Yue, Adam Yu Zhang
  • Patent number: 10324809
    Abstract: Techniques related to cache recovery for failed database instances are disclosed. A first database instance and a second database instance share a primary persistent storage and a secondary persistent storage. Each database instance stores, in volatile memory, a respective primary cache of a respective set of data stored on the primary persistent storage. Each database instance also stores, in volatile memory, a respective set of header data. Further, each database instance moves the respective set of data from the respective primary cache to a respective secondary cache on the secondary persistent storage. Still further, each database instance stores, on the secondary persistent storage, a respective set of persistent metadata. When the first database instance becomes inoperative, the second database instance retrieves, from the secondary persistent storage, persistent metadata corresponding to data stored in a secondary cache of the first database instance.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: June 18, 2019
    Assignee: Oracle International Corporation
    Inventors: Dungara Ram Choudhary, Yu Kin Ho, Norman Lee, Wilson Wai Shun Chan
  • Patent number: 10320414
    Abstract: This application sets forth methods and apparatus to parallelize data decompression. An example method includes selecting initial starting positions in a compressed data bitstream; adjusting a first one of the initial starting positions to determine a first adjusted starting position by decoding the bitstream starting at a training position in the bitstream, the decoding including traversing the bitstream from the training position as though first data located at the training position is a valid token; outputting first decoded data generated by decoding a first segment of the bitstream starting from the first adjusted starting position; and merging the first decoded data with second decoded data generated by decoding a second segment of the bitstream, the decoding of the second segment starting from a second position in the bitstream and being performed in parallel with the decoding of the first segment, and the second segment preceding the first segment in the bitstream.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: June 11, 2019
    Assignee: Intel Corporation
    Inventors: Vinodh Gopal, James D. Guilford, Sudhir K. Satpathy, Sanu K. Mathew
  • Patent number: 10311228
    Abstract: A data processing system can use a method of fine-grained address space layout randomization to mitigate the system's vulnerability to return oriented programming security exploits. The randomization can occur at the sub-segment level by randomizing clumps of virtual memory pages. The randomized virtual memory can be presented to processes executing on the system. The mapping between memory spaces can be obfuscated using several obfuscation techniques to prevent the reverse engineering of the shuffled virtual memory mapping.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: June 4, 2019
    Assignee: Apple Inc.
    Inventors: Jacques A. Vidrine, Nicholas C. Allegra, Simon P. Cooper, Gregory D. Hughes
  • Patent number: 10282543
    Abstract: Provided are a computer program product, system, and method for determining whether to destage write data in cache to storage based on whether the write data has malicious data. Write data for a storage is cached in a cache. A determination is made as to whether the write data in the cache comprises random data according to a randomness criteria. The write data in the cache is destaged to the storage in response to determining that the write data does not comprise random data according to the randomness criteria. The write data is processed as malicious data after determining that the write data comprises random data according to the randomness criteria.
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: May 7, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Carol S. Mellgren, John G. Thompson
  • Patent number: 10261915
    Abstract: A processor architecture which partitions on-chip data caches to efficiently cache translation entries alongside data, which reduces conflicts between virtual to physical address translation and data accesses. The architecture includes processor cores that include a first level translation lookaside buffer (TLB) and a second level TLB located either internally within each processor core or shared across the processor cores. Furthermore, the architecture includes a second level data cache (e.g., located either internally within each processor core or shared across the processor cores) partitioned to store both data and translation entries. Furthermore, the architecture includes a third level data cache connected to the processor cores, where the third level data cache is partitioned to store both data and translation entries. The third level data cache is shared across the processor cores. The processor architecture can also include a data stack distance profiler and a translation stack distance profiler.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: April 16, 2019
    Assignee: Board of Regents, The University of Texas System
    Inventors: Lizy K. John, Yashwant Marathe, Jee Ho Ryoo, Nagendra Gulur
  • Patent number: 10235247
    Abstract: A computer program product, system, and method for generating coded fragments comprises receiving a request to generate a memory snapshot for a virtual machine (VM), copying the VM's memory to generate a memory snapshot, obtaining information about cache structures within the memory snapshot, invalidating one or more of the cache structures and zeroing out corresponding cache data within the memory snapshot, and storing the memory snapshot to storage.
    Type: Grant
    Filed: September 26, 2016
    Date of Patent: March 19, 2019
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Assaf Natanzon, Philip Derbeko, Moran Zahavy, Maya Bakshi, Anton Pavlinov
  • Patent number: 10229145
    Abstract: A method, a computer system, and/or a computer program product are disclosed. One computer-implemented method for building a hash table includes dividing a hash table into plural blocks and dividing each block into plural sub-blocks. A certain sub-block uses a first pattern of association between a key and a location for storing the key. Another sub-block that belongs to the same block as the certain sub-block uses a second pattern which is different from the first pattern. The method may further include building a hash table by using memory blocks in a Field Programmable Gate Array.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: March 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Raymond H. Rudy, Takanori Ueda
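As a loose illustration of the blocked layout described in the abstract above, here is a toy Python sketch in which sub-blocks of the same block use different key-to-location patterns. The block count, slot count, and the two salted-hash patterns are assumptions for illustration, not the patented FPGA design.

```python
import hashlib

class BlockedHashTable:
    """Toy hash table split into blocks, each block split into sub-blocks.
    Sub-block 0 of a block places a key with one key->slot pattern,
    sub-block 1 with a different pattern (illustrative choices only)."""

    def __init__(self, num_blocks=4, sub_blocks=2, slots_per_sub_block=4):
        self.num_blocks = num_blocks
        self.sub_blocks = sub_blocks
        self.slots = slots_per_sub_block
        # table[block][sub_block][slot] -> key or None
        self.table = [[[None] * self.slots for _ in range(sub_blocks)]
                      for _ in range(num_blocks)]

    def _h(self, key, salt):
        digest = hashlib.blake2b(f"{salt}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big")

    def _slot(self, key, sub_block):
        # The two association patterns differ only in the salt used here.
        return self._h(key, salt=sub_block) % self.slots

    def insert(self, key):
        block = self._h(key, salt="block") % self.num_blocks
        for sb in range(self.sub_blocks):          # try each pattern in turn
            slot = self._slot(key, sb)
            if self.table[block][sb][slot] is None:
                self.table[block][sb][slot] = key
                return True
        return False                               # block full for this key

    def contains(self, key):
        block = self._h(key, salt="block") % self.num_blocks
        return any(self.table[block][sb][self._slot(key, sb)] == key
                   for sb in range(self.sub_blocks))

t = BlockedHashTable()
for k in ["alpha", "beta", "gamma"]:
    t.insert(k)
print(t.contains("beta"), t.contains("delta"))     # True False
```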
  • Patent number: 10191849
    Abstract: A cache is sized using an ordered data structure having data elements that represent different target locations of input-output operations (IOs), and are sorted according to an access recency parameter. The cache sizing method includes continually updating the ordered data structure to arrange the data elements in the order of the access recency parameter as new IOs are issued, and setting a size of the cache based on the access recency parameters of the data elements in the ordered data structure. The ordered data structure includes a plurality of ranked ring buffers, each having a pointer that indicates a start position of the ring buffer. The updating of the ordered data structure in response to a new IO includes updating one position in at least one ring buffer and at least one pointer.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: January 29, 2019
    Assignee: VMware, Inc.
    Inventors: Jorge Guerra Delgado, Wenguang Wang
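The ordered structure in the abstract above is built from ranked ring buffers updated with constant work per IO. The Python sketch below is a much-simplified stand-in (a single ring of recent IO targets, one slot write and one pointer advance per IO, sized from the distinct targets it holds); it is not the patented multi-ring structure.

```python
class RecencyRing:
    """Much-simplified stand-in for ranked ring buffers: a single ring of
    the last `capacity` IO targets, overwritten in place.
    Constant work per IO: one slot write plus one pointer advance."""

    def __init__(self, capacity=8):
        self.slots = [None] * capacity
        self.head = 0                        # pointer to the next slot to overwrite

    def record_io(self, target_lba):
        self.slots[self.head] = target_lba   # update one position ...
        self.head = (self.head + 1) % len(self.slots)   # ... and one pointer

    def suggested_cache_size(self):
        # Size the cache to cover the distinct targets seen recently.
        return len({s for s in self.slots if s is not None})

ring = RecencyRing()
for lba in [10, 20, 10, 30, 10, 40]:
    ring.record_io(lba)
print(ring.suggested_cache_size())           # 4 distinct recent targets
```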
  • Patent number: 10192602
    Abstract: A memory device for storing data is disclosed. The memory device comprises a memory bank comprising a plurality of addressable memory cells configured in a plurality of segments wherein each segment contains N rows per segment, wherein the memory bank comprises a total of B entries, and wherein the memory cells are characterized by having a prescribed word error rate, E. Further, the device comprises a pipeline comprising M pipestages and configured to process write operations of a plurality of data words addressed to a given segment of the memory bank. The device also comprises a cache memory comprising Y number of entries, the cache memory associated with the given segment of the memory bank, and wherein the Y number of entries is based on the M, the N and the prescribed word error rate, E, to prevent overflow of the cache memory.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: January 29, 2019
    Assignee: SPIN TRANSFER TECHNOLOGIES, INC.
    Inventors: Neal Berger, Benjamin Louie, Mourad El-Baraji, Lester Crudele, Daniel Hillman
  • Patent number: 10185496
    Abstract: A reception-side apparatus determines whether or not data duplicating a part of received data from a transmission-side apparatus is stored in the first storage that stores first data which has been received, and notifies, when data duplicating a part of the received data is stored in the first storage, the transmission-side apparatus of prediction information on duplicate reception of the first data. A transmission-side apparatus compares, when the prediction information is received, second data to be transmitted and a part of the first data in a second storage that stores the first data which has been transmitted based on the prediction information, determines whether or not there is a first part of the first data matching the second data in the second storage, and transmits, when there is the first part of the first data in the second storage, outline information on the second data.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: January 22, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Shinichi Sazawa, Hiroaki Kameyama
  • Patent number: 10180901
    Abstract: Aspects of the present disclosure disclose systems and methods for managing space in storage devices. In various aspects, the disclosure is directed to providing a more efficient method for managing free space in the storage system, and related apparatus and methods. In particular, the system provides for freeing blocks of memory that are no longer being used based on the information stored in a file system. More specifically, the system allows for reclaiming of large segments of free blocks at one time by providing information on aggregated blocks that were being freed to the storage devices.
    Type: Grant
    Filed: February 18, 2013
    Date of Patent: January 15, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventor: Eric Carl Taylor
  • Patent number: 10169234
    Abstract: A method and computer processor performs a translation lookaside buffer (TLB) purge with concurrent cache updates. Each cache line contains a virtual address field and a data field. A TLB purge process performs operations for invalidating data in the primary cache memory which do not conform to the current state of the translation lookaside buffer. Whenever the TLB purge process and a cache update process perform a write operation to the primary cache memory concurrently, the write operation by the TLB purge process has no effect on the content of the primary cache memory and the cache update process overwrites a data field in a cache line of the primary cache memory but does not overwrite a virtual address field of said cache line. The translation lookaside buffer purge process is subsequently restored to an earlier state and restarted from the earlier state.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Simon H. Friedmann, Markus Kaltenbach, Dietmar Schmunkamp, Johannes C. Reichart
  • Patent number: 10169233
    Abstract: A method and computer processor performs a translation lookaside buffer (TLB) purge with concurrent cache updates. Each cache line contains a virtual address field and a data field. A TLB purge process performs operations for invalidating data in the primary cache memory which do not conform to the current state of the translation lookaside buffer. Whenever the TLB purge process and a cache update process perform a write operation to the primary cache memory concurrently, the write operation by the TLB purge process has no effect on the content of the primary cache memory and the cache update process overwrites a data field in a cache line of the primary cache memory but does not overwrite a virtual address field of said cache line. The translation lookaside buffer purge process is subsequently restored to an earlier state and restarted from the earlier state.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Simon H. Friedmann, Markus Kaltenbach, Dietmar Schmunkamp, Johannes C. Reichart
  • Patent number: 10096065
    Abstract: Various examples are directed to systems and methods for distributed transactions with extended locks. A transaction node may receive from a coordinator node an instruction to execute an assigned operation on an object. The assigned operation may be part of a distributed transaction. The transaction node may obtain a lock associated with the object and execute the assigned operation. The transaction node may also set a time-to-expiration of a lock timer to an initial value and start the lock timer. When the transaction node determines that the lock timer has expired, it may release the lock.
    Type: Grant
    Filed: January 16, 2015
    Date of Patent: October 9, 2018
    Assignee: Red Hat, Inc.
    Inventor: Mark Little
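A minimal sketch of the lock-timer idea from the abstract above, assuming a single-process setting: the lock is released automatically once its time-to-expiration elapses. The class name and timer handling are illustrative, not Red Hat's implementation (and the sketch is not fully race-free).

```python
import threading
import time

class ExpiringLock:
    """A per-object lock that a transaction node releases automatically
    once its lock timer expires (illustrative sketch only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._timer = None

    def acquire(self, time_to_expiration):
        self._lock.acquire()
        # Start the lock timer; on expiry the lock is released even if the
        # coordinator never confirms the assigned operation.
        self._timer = threading.Timer(time_to_expiration, self._expire)
        self._timer.start()

    def _expire(self):
        if self._lock.locked():
            self._lock.release()

    def release(self):
        if self._timer:
            self._timer.cancel()
        if self._lock.locked():
            self._lock.release()

lock = ExpiringLock()
lock.acquire(time_to_expiration=0.05)   # lock auto-releases after 50 ms
time.sleep(0.1)
print(lock._lock.locked())              # False: the timer released it
```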
  • Patent number: 9990142
    Abstract: A mass storage system for storing mass data generated by a mass data source. The system includes a data buffer coupled to the mass data source, and a file system and command generator. The data buffer caches the mass data. The file system and command generator generates file system data corresponding to the mass data stored in the data buffer. The file system and command generator also automatically configures a SATA host controller so that the SATA host controller will move the cached mass data and the generated file system data to a mass storage device.
    Type: Grant
    Filed: September 4, 2016
    Date of Patent: June 5, 2018
    Assignee: NXP USA, INC.
    Inventors: Shuwei Wu, Bin Feng, Bin Sai
  • Patent number: 9983821
    Abstract: A method of memory deduplication includes identifying hash tables each corresponding to a hash function, and each including physical buckets, each physical bucket including ways and being configured to store data, identifying virtual buckets each including some physical buckets, and each sharing a physical bucket with another virtual bucket, identifying each of the physical buckets having data stored thereon as being assigned to a single virtual bucket, hashing a data line according to a hash function to produce a hash value, determining whether a corresponding virtual bucket has available space for a block of data according to the hash value, sequentially moving data from the corresponding virtual bucket to an adjacent virtual bucket when the corresponding virtual bucket does not have available space until the corresponding virtual bucket has space for the block of data, and storing the block of data in the corresponding virtual bucket.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: May 29, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Frederic Sala, Chaohong Hu, Hongzhong Zheng, Dimin Niu, Mu-Tien Chang
  • Patent number: 9965607
    Abstract: Embodiments may take the form of devices and methods to help expedite matching biometric data in a validation process. One embodiment, for example, may take the form of a method for biometric validation including receiving a biometric input and retrieving metadata of a most recently matched template. The method also includes evaluating the metadata and selecting one or more nodes from the most recently matched template for comparison. Additionally, the method includes comparing the selected one or more nodes with the received biometric input and determining if the selected one or more nodes match with the received biometric input. Also, the method includes validating the received biometric input if the selected one or more nodes match with the received biometric input.
    Type: Grant
    Filed: March 12, 2013
    Date of Patent: May 8, 2018
    Assignee: Apple Inc.
    Inventor: Craig A. Marciniak
  • Patent number: 9952981
    Abstract: A method includes reading memory pages from a non-volatile memory that holds at least first memory pages having a first bit significance and second memory pages having a second bit significance, different from the first bit significance. At least some of the read memory pages are cached in a cache memory. One or more of the cached memory pages are selected for eviction from the cache memory, in accordance with a selection criterion that gives eviction preference to the memory pages of the second bit significance over the memory pages of the first bit significance. The selected memory pages are evicted from the cache memory.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: April 24, 2018
    Assignee: APPLE INC.
    Inventors: Alex Radinski, Tsafrir Kamelo
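The eviction preference in the abstract above can be illustrated in a few lines of Python; the "lsb"/"msb" labels standing in for the first and second bit significance are assumptions.

```python
from collections import OrderedDict

def evict_candidate(cache):
    """Pick a page to evict from `cache`, an OrderedDict of
    page_id -> bit_significance kept in LRU order (oldest first).
    Pages of the second bit significance ("msb" here, an assumption)
    are preferred for eviction over "lsb" pages, per the abstract."""
    for page_id, significance in cache.items():   # oldest first
        if significance == "msb":
            return page_id
    return next(iter(cache))                       # no MSB pages: plain LRU

cache = OrderedDict([("p1", "lsb"), ("p2", "msb"), ("p3", "lsb"), ("p4", "msb")])
print(evict_candidate(cache))   # "p2": the least recently used MSB page
```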
  • Patent number: 9948660
    Abstract: A processor is coupled to a hierarchical memory structure which includes a plurality of levels of cache memories that hierarchically cache data that is read by the processor from a main memory. The processor is integrated within a computer terminal. The processor performs operations that include generating a hierarchical cache latency signature vector by repeating for each of a plurality of buffer sizes, the following: 1) allocating in the main memory a buffer having the buffer size; 2) measuring elapsed time for the processor to read data from buffer addresses that include upper and lower, boundaries of the buffer; and 3) storing the elapsed time and the buffer size as an associated set in the hierarchical cache latency signature vector. The operations further include communicating through a network interface circuit a computer identification message containing computer terminal identification information generated based on the hierarchical cache latency signature vector.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: April 17, 2018
    Assignee: CA, Inc.
    Inventors: Himanshu Ashiya, Atmaram Shetye
  • Patent number: 9928176
    Abstract: A processor applies a transfer policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different transfer policy for data in cache entries that were stored in response to prefetch requests but were not the subject of demand requests. One test region applies a transfer policy under which unused prefetches are transferred to a higher level cache in a cache hierarchy upon eviction from the test region of the cache. The other test region applies a transfer policy under which unused prefetches are replaced without being transferred to a higher level cache (or are transferred to the higher level cache but stored as invalid data) upon eviction from the test region of the cache.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: March 27, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul James Moyer
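A tiny sketch of the selection step implied by the abstract above: each test region reports an access metric for its transfer policy, and the better-performing policy is applied to the rest of the cache. The metric values and policy names below are invented for illustration.

```python
def choose_transfer_policy(metrics):
    """`metrics` maps each test region's transfer policy to its observed
    hit rate; the better one is applied to the non-test portion."""
    return max(metrics, key=metrics.get)

metrics = {
    "forward_unused_prefetch_to_higher_cache": 0.41,   # test region A
    "drop_unused_prefetch_on_eviction": 0.46,          # test region B
}
print(choose_transfer_policy(metrics))   # apply the winner cache-wide
```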
  • Patent number: 9858185
    Abstract: Improved multi-tier data storage is provided using inclusive/exclusive burst buffer caching techniques based on reference counts. An exemplary multi-tier storage system comprises at least first and second storage tiers for storing data, wherein at least one of the first and second storage tiers comprises at least one cache, and wherein the data is retained in the at least one cache as a given cached data item based on a reference count indicating a number of expected requests for the given cached data item. The number of expected requests for the given cached data item in a given cache is based, for example, on a number of nodes serviced by the given cache. A burst buffer appliance is also provided for implementing the cache retention policies described herein.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: January 2, 2018
    Assignee: EMC Corporation
    Inventors: John M. Bent, Sorin Faibish, James M. Pedone, Jr.
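A sketch of reference-count retention as described in the abstract above, assuming one expected request per node served by the cache; it is not EMC's burst buffer code.

```python
class RefCountCache:
    """Keep an item while requests are still expected for it, e.g. one
    expected read per compute node serviced by this cache (sketch only)."""

    def __init__(self):
        self._data = {}        # key -> (value, remaining_expected_reads)

    def put(self, key, value, expected_reads):
        self._data[key] = (value, expected_reads)

    def get(self, key):
        value, remaining = self._data[key]
        remaining -= 1
        if remaining <= 0:
            del self._data[key]            # last expected reader: release it
        else:
            self._data[key] = (value, remaining)
        return value

cache = RefCountCache()
cache.put("checkpoint-0", b"...", expected_reads=2)   # 2 nodes share this cache
cache.get("checkpoint-0")
cache.get("checkpoint-0")
print("checkpoint-0" in cache._data)    # False: all expected requests served
```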
  • Patent number: 9805100
    Abstract: A method includes receiving a first signal and updating a bitmap index responsive to the first signal. The bitmap index includes a plurality of bit strings, where a value stored in a particular location in each of the bit strings indicates whether a corresponding signal associated with a signal source has been received. Updating the bitmap index responsive to the first signal includes updating a first bit of the bitmap index and updating a first metadata value stored in the bitmap index. The method also includes receiving a second signal and updating the bitmap index responsive to the second signal. Updating the bitmap index responsive to the second signal includes updating a second bit of the bitmap index and updating a second metadata value stored in the bitmap index.
    Type: Grant
    Filed: October 3, 2016
    Date of Patent: October 31, 2017
    Assignee: Pilosa Corp.
    Inventors: Travis Turner, Todd Wesley Gruben, Ben Johnson, Cody Stephen Soyland, Higinio O. Maycotte
  • Patent number: 9779031
    Abstract: In one embodiment, a computer-implemented method includes inserting a set of accessed objects into a cache, where the set of accessed objects varies in size. An object includes a set of object components, and responsive to receiving a request to access the object, it is determined that the object does not fit into the cache given the set of accessed objects and a total size of the cache. A heuristic algorithm is applied, by a computer processor, to identify in the set of object components one or more object components for insertion into the cache. The heuristic algorithm considers at least a priority of the object compared to priorities of one or more objects in the set of accessed objects. The one or more object components are inserted into the cache.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: October 3, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Avrilia Floratou, Uday B. Kale, Nimrod Megiddo, Fatma Ozcan, Navneet S. Potti
  • Patent number: 9767033
    Abstract: In the present invention, a base station determines from a communication system whether a first content, which is requested by a mobile terminal, is saved on a cache memory, attributes a predetermined priority ranking to the first content and saves same on the cache memory when the first content is not saved on the cache memory, and updates the priority ranking of the first content on the basis of a predicted popularity of the first content, wherein the predicted popularity is decided on the basis of change in the number of views of a content that corresponds to a category of the first content.
    Type: Grant
    Filed: November 15, 2012
    Date of Patent: September 19, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chul-Ki Lee, Sang-Jun Moon, Yong-Seok Park, Jung-Hwan Lim, Jiangwei Xu
  • Patent number: 9749180
    Abstract: A method and system for autonomously tuning a Lightweight Directory Access Protocol (LDAP) server are disclosed. The method comprises activating a tuning thread when defined conditions are met; and using this thread to initiate automatically a tuning procedure to tune an LDAP server cache, to tune a database buffer pool for the server, and to perform runtime tuning of parameters of the database. Tuning may be initiated upon reaching a specified time, when the cache hit ratio of the server falls below a given threshold, or on issuing the extended operation. The tuning procedure may include a Basic Tuning procedure and an Advanced Tuning procedure. The Basic Tuning procedure comprises static tuning of the server based on the number and size of entries in the database, and the Advanced Tuning procedure is a real-time procedure based on real client search patterns.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: August 29, 2017
    Assignee: International Business Machines Corporation
    Inventors: Chandrajit G. Joshi, Romil J. Shah
  • Patent number: 9727493
    Abstract: Apparatuses and methods for providing data to a configurable storage area are disclosed herein. An example apparatus may include an extended address register including a plurality of configuration bits indicative of an offset and a size, an array having a storage area, a size and offset of the storage area based, at least in part, on the plurality of configuration bits, and a buffer configured to store data, the data including data intended to be stored in the storage area. A memory control unit may be coupled to the buffer and configured to cause the buffer to store the data intended to be stored in the storage area in the storage area of the array responsive, at least in part, to a flush command.
    Type: Grant
    Filed: August 14, 2013
    Date of Patent: August 8, 2017
    Assignee: Micron Technology, Inc.
    Inventors: Graziano Mirichigni, Luca Porzio, Erminio Di Martino, Giacomo Bernardi, Domenico Monteleone, Stefano Zanardi, Chee Weng Tan, Sebastien LeMarie, Andre Klindworth
  • Patent number: 9710174
    Abstract: In semiconductor devices with nonvolatile memory modules embedded therein, a technology is provided which facilitates evaluation of the nonvolatile memory characteristics. An MCU includes a CPU, a flash memory, and an FPCC that controls write or erase operations to the flash memory. The FPCC executes a program used to perform write or other operations to the flash memory, thereby performing write or other operations to the flash memory in accordance with a command issued by the CPU. In the MCU, the FCU is configured to execute test firmware to evaluate the flash memory. In addition, a RAM can be used by both the CPU and FCU.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: July 18, 2017
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Yukiko Take, Shinya Izumi, Tetsuichiro Ichiguchi
  • Patent number: 9652389
    Abstract: A coordinating node maintains globally consistent logical block address (LBA) metadata for a hierarchy of caches, which may be implemented in local and cloud-based storage resources. Associated storage endpoints initially determine a hash associated with each access request, but forward the access request to the coordinating node to determine a unique discriminator for each hash.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: May 16, 2017
    Assignee: ClearSky Data
    Inventors: Lazarus Vekiarides, Daniel Suman, Janice Ann Lacy
  • Patent number: 9632945
    Abstract: A number of sequential fast write (SFW) tracks is metered by providing an adjustable threshold for performing a destage scan that moves the SFW tracks from an SFW least recently used (LRU) list to a destaging wait list (DWL). Priorities are set for the destaging of the SFW tracks from the DWL.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: April 25, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos
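A sketch of the metering step in the abstract above: once the SFW LRU list grows past an adjustable threshold, the oldest tracks move onto a prioritized destaging wait list. The threshold value, the full-stride priority rule, and the field names are assumptions.

```python
from collections import deque
import heapq

def destage_scan(sfw_lru, dwl, threshold):
    """When the SFW LRU list exceeds `threshold`, move the least recently
    used tracks onto the destaging wait list (a priority queue) until the
    list is back at the threshold (illustrative sketch only)."""
    while len(sfw_lru) > threshold:
        track = sfw_lru.popleft()                    # least recently used first
        priority = 0 if track["full_stride"] else 1  # full strides destage first
        heapq.heappush(dwl, (priority, track["id"]))

sfw_lru = deque([{"id": t, "full_stride": t % 2 == 0} for t in range(6)])
dwl = []
destage_scan(sfw_lru, dwl, threshold=3)
print([heapq.heappop(dwl)[1] for _ in range(len(dwl))])   # tracks 0, 2, 1
```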
  • Patent number: 9626518
    Abstract: Avoiding encryption in a deduplication vault. In one example embodiment, a method may include analyzing an allocated plain text block stored in the source storage to determine if the block is already stored in the deduplication storage, in response to the block not being stored, encrypting the allocated plain text block and analyzing the encrypted block to determine if the encrypted block is already stored in the deduplication storage, analyzing a second allocated plain text block stored in the source storage to determine if the block is already stored in the deduplication storage, in response to the block already being stored, avoiding encryption of the second allocated plain text block by not encrypting the second allocated plain text block and instead associating the location of the second allocated plain text block in the source storage with the location of the duplicate block already stored.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: April 18, 2017
    Assignee: STORAGECRAFT TECHNOLOGY CORPORATION
    Inventor: Andrew Lynn Gardner
  • Patent number: 9563590
    Abstract: A system having an arbitrated interface bus and a method of operating the same are provided. The system may include, but is not limited to, one or more registers configured to store data, a plurality of external interfaces configured to receive data access requests for the register(s), an arbitrator communicatively coupled to each of the plurality of external interfaces, and an interface bus communicatively coupled between the arbitrator and the register(s), wherein the arbitrator is configured to arbitrate control of the interface bus between the plurality of external interfaces.
    Type: Grant
    Filed: March 17, 2014
    Date of Patent: February 7, 2017
    Assignee: NXP USA, INC.
    Inventors: Joseph S. Vaccaro, Michael P. Collins
  • Patent number: 9563575
    Abstract: A mechanism for evicting a cache line from a cache memory includes first selecting for eviction a least recently used cache line of a group of invalid cache lines. If all cache lines are valid, selecting for eviction a least recently used cache line of a group of cache lines in which no cache line of the group of cache lines is also stored within a higher level cache memory such as the L1 cache, for example. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, selecting for eviction the least recently used cache line stored in the cache memory.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: February 7, 2017
    Assignee: Apple Inc.
    Inventors: Brian P. Lilly, Gerard R. Williams, III, Mahnaz Sadoughi-Yarandi, Perumal R. Subramonium, Hari S. Kannan, Prashant Jain
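The three-step victim selection in the abstract above translates almost directly into code; the sketch below assumes each cache line carries a valid bit and an "also stored in L1" flag.

```python
def select_victim(cache_lines):
    """`cache_lines` is ordered least recently used first; each line has
    `valid` and `in_l1` (whether a copy also sits in the higher-level L1)."""
    invalid = [line for line in cache_lines if not line["valid"]]
    if invalid:
        return invalid[0]                  # LRU among invalid lines
    non_inclusive = [line for line in cache_lines if not line["in_l1"]]
    if non_inclusive:
        return non_inclusive[0]            # LRU among lines not also in L1
    return cache_lines[0]                  # otherwise plain LRU

lines = [
    {"tag": 0xA, "valid": True, "in_l1": True},    # least recently used
    {"tag": 0xB, "valid": True, "in_l1": False},
    {"tag": 0xC, "valid": True, "in_l1": True},    # most recently used
]
print(hex(select_victim(lines)["tag"]))            # 0xb: LRU non-inclusive line
```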
  • Patent number: 9563565
    Abstract: Apparatuses and methods for providing data from a buffer are disclosed herein. An example apparatus may include an array, a buffer, and a memory control unit. The buffer may be coupled to the array and configured to store data. The data may include data intended to be stored in the storage area. The memory control unit may be coupled to the array and the buffer. The memory control unit may be configured to cause the buffer to store the data responsive, at least in part, to a write command and may further be configured to cause the buffer to store the data intended to be stored in the storage area in the storage area of the array responsive, at least in part, to a flush command.
    Type: Grant
    Filed: August 14, 2013
    Date of Patent: February 7, 2017
    Assignee: Micron Technology, Inc.
    Inventors: Graziano Mirichigni, Luca Porzio, Erminio Di Martino, Giacomo Bernardi, Domenico Monteleone, Stefano Zanardi
  • Patent number: 9552301
    Abstract: A cache includes a cache array and a cache controller. The cache array has a plurality of entries. The cache controller is coupled to the cache array. The cache controller evicts entries from the cache array according to a cache replacement policy. The cache controller evicts a first cache line from the cache array by generating a writeback request for modified data from the first cache line, and subsequently generates a writeback request for modified data from a second cache line if the second cache line is about to satisfy the cache replacement policy and stores data from a common locality as the first cache line.
    Type: Grant
    Filed: July 15, 2013
    Date of Patent: January 24, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Zhe Wang, Junli Gu, Yi Xu
  • Patent number: 9535837
    Abstract: A first cache is provided to cache a first portion of a first block of digital content received over a network connection shared between a first user associated with the first cache and at least one second user. The first cache caches the first portion in response to the first user or the second user(s) requesting the first block. The first cache selects the first portion based on a fullness of the first cache, a number of blocks cached in the first cache, or a cache eviction rule associated with the first cache.
    Type: Grant
    Filed: November 19, 2013
    Date of Patent: January 3, 2017
    Assignee: Alcatel-Lucent USA Inc.
    Inventors: Mohammadali Maddah-Ali, Urs Niesen, Ramtin Pedarsani
  • Patent number: 9535771
    Abstract: A method and an apparatus for determining a usage level of a memory device to notify a running application to perform memory reduction operations selected based on the memory usage level are described. An application calls APIs (Application Programming Interface) integrated with the application codes in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities ranking multiple running applications statically or dynamically. Selecting memory reduction operations and notifying a running application are based on application priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage without using API calls to other software.
    Type: Grant
    Filed: July 23, 2013
    Date of Patent: January 3, 2017
    Assignee: Apple Inc.
    Inventors: Matthew G. Watson, James Michael Magee
  • Patent number: 9519590
    Abstract: A method is used in managing global caches in data storage systems. A cache entry of a global cache of a data storage system is accessed upon receiving a request to perform an I/O operation on a storage object. The cache entry is associated with the storage object. Accessing the cache entry includes holding a reference to the cache entry. A determination is made as to whether the I/O operation is associated with a sequential access. Based on the determination, releasing the reference to the cache entry is delayed.
    Type: Grant
    Filed: June 26, 2012
    Date of Patent: December 13, 2016
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Philippe Armangau, Christopher Seibel
  • Patent number: 9489149
    Abstract: Methods and systems for storing data at a storage device of a storage system are provided. The data is first temporarily stored at a first write cache and an input/output request for a persistence storage device used as a second write cache is generated, when an I/O request size including the received data has reached a threshold value. The data from the first cache is transferred to the persistence storage device and a recovery control block with a location of the data stored at the persistence storage device is updated. An entry is added to a linked list that is used to track valid data stored at the persistence storage device and then the data is transferred from the persistence storage device to the storage device of the storage system.
    Type: Grant
    Filed: June 16, 2014
    Date of Patent: November 8, 2016
    Assignee: NETAPP, INC.
    Inventors: William Patrick Delaney, Joseph Russell Blount, Rodney A. DeKoning
  • Patent number: 9471505
    Abstract: A method for reclaiming space in a journal is disclosed. In one embodiment, such a method includes identifying a plurality of ranks in a storage system. The method creates a destage wait list for each rank, where the destage wait list identifies metadata tracks to destage from a cache to the corresponding rank. The method dispatches one or more threads for each destage wait list. The threads destage metadata tracks identified in the destage wait lists from the cache to the corresponding ranks. In certain embodiments, the method moves metadata tracks to the destage wait lists only if performing such will not cause occupied space in the journal to fall below a low watermark. Once metadata tracks are destaged from the cache, the method releases, from the journal, entries associated with the destaged metadata tracks. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 7, 2015
    Date of Patent: October 18, 2016
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Lokesh M. Gupta, Carol S. Mellgren, Alfred E. Sanchez
  • Patent number: 9471493
    Abstract: A data processing apparatus and corresponding method of data processing are provided. The data processing apparatus comprises a temporary data store configured to store data items retrieved from a memory, wherein the temporary data store selects one of its plural data storage locations in which to store a newly retrieved data item according to a predetermined circular sequence. An index data store is configured to store index items corresponding to the data items stored in the temporary data store, wherein presence of a valid index item in the index data store is indicative of a corresponding data item in the temporary data store. Invalidation control circuitry performs a rolling invalidation process with respect to the index items stored in the index data store, comprising sequentially processing the index items stored in the index data store and selectively marking the index items as invalid according to a predetermined criterion.
    Type: Grant
    Filed: December 2, 2014
    Date of Patent: October 18, 2016
    Assignee: ARM Limited
    Inventors: Erik Persson, Ola Hugosson
  • Patent number: 9465737
    Abstract: A memory system includes a cache module configured to store data. A duplicate removing filter module is separate from the cache module. The duplicate removing filter module is configured to receive read requests and write requests for data blocks to be read from or written to the cache module, selectively generate fingerprints for the data blocks associated with the write requests, selectively store at least one of the fingerprints as stored fingerprints and compare a fingerprint of a write request to the stored fingerprints.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: October 11, 2016
    Assignee: Toshiba Corporation
    Inventors: Sandeep Karmarkar, Paresh Phadke
  • Patent number: 9424202
    Abstract: A database cache manager for controlling a composition of a plurality of cache entries in a data cache is described. Each cache entry is a result of a query carried out on a database of data records, the cache manager being arranged to remove cache entries from the cache based on a cost of removal factor which is comprised of a time cost, the time cost being calculated from the amount of time taken to obtain a query result to which that cache entry is related.
    Type: Grant
    Filed: November 19, 2012
    Date of Patent: August 23, 2016
    Assignee: Smartfocus Holdings Limited
    Inventor: Charles Wells
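A sketch of the cost-of-removal rule in the abstract above, reduced to its time-cost component: the cached query result that was cheapest to compute is removed first. Field names are assumptions.

```python
def pick_entry_to_remove(cache_entries):
    """Remove the cached query result that is cheapest to recompute,
    i.e. the one with the smallest time cost (illustrative sketch)."""
    return min(cache_entries, key=lambda e: e["time_cost_seconds"])

entries = [
    {"query": "SELECT ... GROUP BY region", "time_cost_seconds": 12.4},
    {"query": "SELECT ... WHERE id = 7",    "time_cost_seconds": 0.02},
]
print(pick_entry_to_remove(entries)["query"])   # the cheap point lookup goes first
```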
  • Patent number: 9405695
    Abstract: A system and method for determining an optimal cache size of a computing system is provided. In some embodiments, the method comprises selecting a portion of an address space of a memory structure of the computing system. A workload of data transactions is monitored to identify a transaction of the workload directed to the portion of the address space. An effect of the transaction on a cache of the computing system is determined, and, based on the determined effect of the transaction, an optimal cache size satisfying a performance target is determined. In one such embodiment the determining of the effect of the transaction on a cache of the computing system includes determining whether the effect would include a cache hit for a first cache size and determining whether the effect would include a cache hit for a second cache size different from the first cache size.
    Type: Grant
    Filed: November 5, 2013
    Date of Patent: August 2, 2016
    Assignee: NETAPP, INC.
    Inventors: Koling Chang, Ravikanth Dronamraju, Mark Smith, Naresh Patel
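One way to read the abstract above is as a hit-ratio experiment: replay the monitored transactions against simulated caches of different sizes and keep the smallest size that meets the performance target. The LRU replay below is an illustrative stand-in, not NetApp's method.

```python
from collections import OrderedDict

def hit_ratio(workload, cache_size):
    """Replay `workload` (a list of block addresses) against an LRU cache
    of `cache_size` entries and return the hit ratio."""
    cache, hits = OrderedDict(), 0
    for addr in workload:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)
        else:
            cache[addr] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(workload)

def smallest_size_meeting_target(workload, candidate_sizes, target):
    """The smallest candidate size whose simulated effect on the workload
    satisfies the performance target (sketch of the sizing decision)."""
    for size in sorted(candidate_sizes):
        if hit_ratio(workload, size) >= target:
            return size
    return max(candidate_sizes)

workload = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3] * 10
print(smallest_size_meeting_target(workload, [2, 4, 8], target=0.6))   # 4
```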
  • Patent number: 9384138
    Abstract: A data storage system with a cache organizes cache windows into lists based on the number of cache lines accessed during input/output operations. The lists are maintained in temporal queues with cache windows transferred from prior temporal queues to a current temporal queue. Cache windows are removed from the oldest temporal queue and least accessed cache window list whenever cached data needs to be removed for new hot data.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: July 5, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Vinay Bangalore Shivashankaraiah, Kumaravel Thillai
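A sketch of the eviction choice in the abstract above: scan the temporal queues from oldest to newest and, within each, take a window from the least-accessed list first. The queue-of-dicts representation is an assumption.

```python
from collections import deque

def pick_window_to_evict(temporal_queues):
    """Each temporal queue maps an access count to a list (deque) of cache
    windows; return a window from the oldest queue's least-accessed list."""
    for queue in temporal_queues:                 # oldest temporal queue first
        for access_count in sorted(queue):        # least-accessed list first
            if queue[access_count]:
                return queue[access_count].popleft()
    return None

oldest = {1: deque(["win_a"]), 3: deque(["win_b"])}
current = {2: deque(["win_c"])}
print(pick_window_to_evict([oldest, current]))    # "win_a"
```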
  • Patent number: 9378769
    Abstract: Systems and methods for storing and retrieving data on a magnetic tape accessed by a tape drive having an associated tape drive processor in communication with a host computer having an associated host processor include writing data to at least one partition within a logical volume having an associated number of sections designated by the host computer from a predetermined number of sections associated with the magnetic tape, wherein each partition extends across one section.
    Type: Grant
    Filed: January 31, 2011
    Date of Patent: June 28, 2016
    Assignee: Oracle International Corporation
    Inventors: David G. Hostetter, Ryan P. McCallister
  • Patent number: 9367474
    Abstract: Systems and methods for translating cache hints between different protocols within a SoC. A requesting agent within the SoC generates a first cache hint for a transaction, and the first cache hint is compliant with a first protocol. The first cache hint can be set to a reserved encoding value as defined by the first protocol. Prior to the transaction being sent to the memory subsystem, the first cache hint is translated into a second cache hint. The memory subsystem recognizes cache hints which are compliant with a second protocol, and the second cache hint is compliant with the second protocol.
    Type: Grant
    Filed: June 12, 2013
    Date of Patent: June 14, 2016
    Assignee: Apple Inc.
    Inventors: Shailendra S. Desai, Gurjeet S. Saund, Deniz Balkan, James Wang
  • Patent number: 9361241
    Abstract: Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. The selected tracks are grouped and destaged from the destaging wait list.
    Type: Grant
    Filed: April 3, 2013
    Date of Patent: June 7, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Brian A. Rinaldi
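A sketch of the select-group-destage flow in the abstract above; the rule that groups contiguous track numbers into a single destage is an assumption for illustration.

```python
from collections import deque

def build_destage_groups(lru_tracks, count):
    """Take the `count` least recently used tracks, put them on a destaging
    wait list, and group tracks that are adjacent on disk so each group can
    be destaged as one operation (illustrative grouping rule)."""
    wait_list = sorted(lru_tracks.popleft() for _ in range(count))
    groups, run = [], [wait_list[0]]
    for track in wait_list[1:]:
        if track == run[-1] + 1:
            run.append(track)            # extend the current contiguous run
        else:
            groups.append(run)
            run = [track]
    groups.append(run)
    return groups

lru = deque([17, 3, 4, 42, 5, 9])        # least recently used first
print(build_destage_groups(lru, count=5))   # [[3, 4, 5], [17], [42]]
```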
  • Patent number: 9354989
    Abstract: Region based admission and eviction control can be used for managing resources (e.g., caching resources) shared by competing workloads with different SLOs in hybrid aggregates. A “region” or “phase” refers to different incoming loads of a workload (e.g., different working set sizes, different intensities of the workload, etc.). These regions can be identified and then utilized along with other factors (e.g., incoming loads of other workloads, maximum cache allocation size, service level objectives, and others factors/parameters) in managing cache storage resources.
    Type: Grant
    Filed: October 3, 2011
    Date of Patent: May 31, 2016
    Assignee: NETAPP, INC.
    Inventors: Priya Sehgal, Kaladhar Voruganti, Rajesh Sundaram