Combined Replacement Modes Patents (Class 711/134)
  • Patent number: 11977738
    Abstract: There is provided an apparatus, method and medium. The apparatus comprises a store buffer to store a plurality of store requests, where each of the plurality of store requests identifies a storage address and a data item to be transferred to storage beginning at the storage address, where the data item comprises a predetermined number of bytes. The apparatus is responsive to a memory access instruction indicating a store operation specifying storage of N data items, to determine an address allocation order of N consecutive store requests based on a copy direction hint indicative of whether the memory access instruction is one of a sequence of memory access instructions each identifying one of a sequence of sequentially decreasing addresses, and to allocate the N consecutive store requests to the store buffer in the address allocation order.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: May 7, 2024
    Assignee: Arm Limited
    Inventors: Abhishek Raja, Yasuo Ishii
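Read as plain logic, the allocation-order decision above amounts to reversing the per-instruction address order when the copy-direction hint indicates a descending copy. A minimal Python sketch of that idea (the function name, signature, and exact reversal rule are assumptions for illustration, not Arm's design):

```python
def allocate_stores(base_addr, n, item_bytes, descending_hint):
    """Order the N consecutive store requests of one memory access
    instruction before placing them in the store buffer.

    A backward memcpy issues instructions at sequentially decreasing
    addresses; the hint lets the allocator emit each instruction's
    requests highest-address-first so the buffer sees a monotonic
    address stream. (Illustrative sketch only.)
    """
    addrs = [base_addr + i * item_bytes for i in range(n)]
    return list(reversed(addrs)) if descending_hint else addrs
```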
  • Patent number: 11893283
    Abstract: Apparatuses and methods can be related to generating an asynchronous process topology in a memory device. The topology can be generated based on the results of a number of processes. The processes can be asynchronous given that the processing resources that implement the processes do not use a clock signal to generate the topology.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: February 6, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Glen E. Hush, Richard C. Murphy, Honglin Sun
  • Patent number: 11868692
    Abstract: Address generators for use in verifying an integrated circuit hardware design for an n-way set associative cache. The address generator is configured to generate, from a reverse hashing algorithm matching the hashing algorithm used by the n-way set associative cache, a list of cache set addresses that comprises one or more addresses of the main memory corresponding to each of one or more target sets of the n-way set associative cache. The address generator receives requests for addresses of main memory from a driver to be used to generate stimuli for testing an instantiation of the integrated circuit hardware design for the n-way set associative cache. In response to receiving a request the address generator provides an address from the list of cache set addresses.
    Type: Grant
    Filed: April 2, 2021
    Date of Patent: January 9, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Anthony Wood, Philip Chambers
  • Patent number: 11868221
    Abstract: Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 9, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: John Kelley, Vanchinathan Venkataramani, Paul J. Moyer
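The test-set mechanism above is a form of set dueling: a few sample sets exercise candidate replacement policies, and the non-test sets follow whichever sample performs best once a trigger event fires. A rough Python sketch, with the trigger condition (a miss-count gap) and all names invented for illustration:

```python
class SetDuelingSelector:
    """Track misses per test set (one per candidate policy) and switch
    the policy used by non-test sets when one candidate pulls ahead.
    Illustrative sketch; not AMD's actual trigger logic."""

    def __init__(self, policies, trigger_misses=100):
        self.miss_counts = {p: 0 for p in policies}
        self.trigger = trigger_misses
        self.active_policy = policies[0]  # policy for non-test sets

    def record_test_set_miss(self, policy):
        self.miss_counts[policy] += 1
        # Trigger event: the best candidate's miss count is far below
        # the worst candidate's, so adopt it for the whole cache.
        best = min(self.miss_counts, key=self.miss_counts.get)
        if max(self.miss_counts.values()) - self.miss_counts[best] >= self.trigger:
            self.active_policy = best
            self.miss_counts = {p: 0 for p in self.miss_counts}
```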
  • Patent number: 11868264
    Abstract: One embodiment provides circuitry coupled with cache memory and a memory interface, the circuitry to compress compute data at multiple cache line granularity, and a processing resource coupled with the memory interface and the cache memory. The processing resource is configured to perform a general-purpose compute operation on compute data associated with multiple cache lines of the cache memory. The circuitry is configured to compress the compute data before a write of the compute data via the memory interface to the memory bus, in association with a read of the compute data associated with the multiple cache lines via the memory interface, decompress the compute data, and provide the decompressed compute data to the processing resource.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: January 9, 2024
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
  • Patent number: 11841798
    Abstract: Circuitry comprises processing circuitry to access a hierarchy of at least two levels of cache memory storage; memory circuitry comprising plural storage elements, at least some of the storage elements being selectively operable as cache memory storage in respective different cache functions; and control circuitry to allocate storage elements of the memory circuitry for operation according to a given cache function.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: December 12, 2023
    Assignee: Arm Limited
    Inventor: Daren Croxford
  • Patent number: 11841796
    Abstract: Methods, systems, and devices for scratchpad memory in a cache are described. A device may operate a portion of a volatile memory in a cache mode having non-deterministic latency for satisfying requests from a host device. The device may monitor a register with an output pin that is associated with the portion and indicative of an operating mode of the portion. Based on or in response to monitoring the output pin, the device may determine whether to change the operating mode of the portion from the cache mode to a scratchpad mode having deterministic latency for satisfying requests from the host device.
    Type: Grant
    Filed: January 5, 2022
    Date of Patent: December 12, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Chinnakrishnan Ballapuram, Saira Samar Malik, Taeksang Song
  • Patent number: 11831557
    Abstract: A system and method for soft locking for a networking device in a network, such as a network-on-chip (NoC). Once a soft lock is established, the port and packet are given transmitting priority so long as the port has an available packet or packet parts that can make forward progress in the network. When the soft-lock port's packet parts are not available, the networking device may choose another port and/or another packet. Any arbitration scheme may be used. Once the packet (or all the packet parts) has completed transmission, the soft lock is released.
    Type: Grant
    Filed: June 8, 2022
    Date of Patent: November 28, 2023
    Assignee: ARTERIS, INC.
    Inventors: John Coddington, Benoit de Lescure, Syed Ijlal Ali Shah, Sanjay Despande
  • Patent number: 11768772
    Abstract: In some examples, a system includes a processing entity and a memory to store data arranged in a plurality of bins associated with respective key values of a key. The system includes a cache to store cached data elements for respective accumulators that are updatable to represent occurrences of the respective key values of the key, where each accumulator corresponds to a different bin of the plurality of bins, and each cached data element has a range that is less than a range of a corresponding bin of the plurality of bins. Responsive to a value of a given cached data element as updated by a given accumulator satisfying a criterion, the processing entity is to cause an aggregation of the value of the given cached data element with a bin value in a respective bin.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: September 26, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ryan D. Menhusen, Darel Neal Emmot
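The narrow-accumulator idea above can be sketched as small counters that live in cache and spill into their wide bins whenever they saturate. A minimal Python illustration (the class name, saturation criterion, and sizes are assumptions, not the patent's design):

```python
class BinnedCounter:
    """Narrow cached accumulators that aggregate into wide bins when
    the cached value satisfies the spill criterion. Illustrative only."""

    def __init__(self, num_bins, cached_max=255):
        self.cached = [0] * num_bins   # narrow, cache-resident counters
        self.bins = [0] * num_bins     # wide, memory-resident bin values
        self.cached_max = cached_max   # range of a cached data element

    def record(self, key):
        self.cached[key] += 1
        if self.cached[key] >= self.cached_max:   # criterion satisfied
            self.bins[key] += self.cached[key]    # aggregate into the bin
            self.cached[key] = 0

    def total(self, key):
        return self.bins[key] + self.cached[key]
```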
  • Patent number: 11726699
    Abstract: One embodiment provides a system which facilitates data management. The system receives, by a storage device via read requests from multiple streams, a first plurality of logical block addresses (LBAs) and corresponding stream identifiers. The system assigns a respective LBA to a first queue of a plurality of queues based on the stream identifier corresponding to the LBA. Responsive to determining that a second plurality of LBAs in the first queue are of a sequentially similar pattern: the system retrieves, from a non-volatile memory of the storage device, data associated with the second plurality of LBAs; and the system stores the retrieved data and the second plurality of LBAs in a volatile memory of the storage device while bypassing data-processing operations.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: August 15, 2023
    Assignee: ALIBABA SINGAPORE HOLDING PRIVATE LIMITED
    Inventor: Shu Li
  • Patent number: 11726920
    Abstract: A device includes a cache memory and a memory controller coupled to the cache memory. The memory controller is configured to receive a first read request from a cache controller over an interconnect, the first read request comprising first tag data identifying a first cache line in the cache memory, and determine that the first read request comprises a tag read request. The memory controller is further configured to read second tag data corresponding to the tag read request from the cache memory, compare the second tag data read from the cache memory to the first tag data received from the cache controller with the first read request, and if the second tag data matches the first tag data, initiate an action with respect to the first cache line in the cache memory.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: August 15, 2023
    Assignee: Rambus Inc.
    Inventors: Michael Miller, Dennis Doidge, Collins Williams
  • Patent number: 11715541
    Abstract: A method includes associating each block of a plurality of blocks of a memory device with a corresponding frequency access group of a plurality of frequency access groups based on corresponding access frequencies, and performing scan operations on blocks of each of the plurality of frequency access groups using a scan frequency that is different from scan frequencies of other frequency access groups. A scan operation performed on a frequency access group with a higher access frequency uses a higher scan frequency than a scan operation performed on a frequency access group with a lower access frequency.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: August 1, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Renato C. Padilla, Sampath K. Ratnam, Christopher M. Smitchger, Vamsi Pavan Rayaprolu, Gary F. Besinga, Michael G. Miller, Tawalin Opastrakoon
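In outline, the method above groups blocks by access frequency and scans hotter groups more often. A toy Python sketch with just two groups (the threshold, scan periods, and names are invented; the patent allows any number of groups):

```python
def group_blocks(access_counts, hot_threshold):
    """Assign each block to a frequency access group by access count.
    Illustrative two-group version."""
    hot = [b for b, n in access_counts.items() if n >= hot_threshold]
    cold = [b for b, n in access_counts.items() if n < hot_threshold]
    return {"hot": hot, "cold": cold}

def blocks_due_for_scan(groups, tick, periods=None):
    """Hotter groups get a shorter scan period, i.e. a higher scan
    frequency, as the abstract requires."""
    periods = periods or {"hot": 2, "cold": 8}
    due = []
    for name, blocks in groups.items():
        if tick % periods[name] == 0:
            due.extend(blocks)
    return due
```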
  • Patent number: 11611348
    Abstract: Techniques are provided for implementing a file system format for persistent memory. A node, with persistent memory, receives an operation associated with a file identifier and file system instance information. A list of file system info objects are evaluated to identify a file system info object matching the file system instance information. An inofile, identified by the file system info object as being associated with inodes of files within an instance of the file system targeted by the operation, is traversed to identify an inode matching the file identifier. If the inode has an indicator that the file is tiered into the persistent memory, then the inode it utilized to facilitate execution of the operation upon the persistent memory. Otherwise, the operation is routed to a storage file system tier for execution by a storage file system upon storage associated with the node.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: March 21, 2023
    Assignee: NetApp, Inc.
    Inventors: Ram Kesavan, Matthew Fontaine Curtis-Maury, Abdul Basit, Vinay Devadas, Ananthan Subramanian, Mark Smith
  • Patent number: 11593269
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: February 28, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
  • Patent number: 11586548
    Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: February 21, 2023
    Assignee: Intel Corporation
    Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
  • Patent number: 11544093
    Abstract: Examples herein relate to checkpoint replication and copying of updated checkpoint data. For example, a memory controller coupled to a memory can receive a write request with an associated address to write or update checkpoint data and track updates to checkpoint data based on at least two levels of memory region sizes. A first level is associated with a larger memory region size than a memory region size associated with the second level. In some examples, the first level is a cache-line memory region size and the second level is a page memory region size. Updates to the checkpoint data can be tracked at the second level unless an update was previously tracked at the first level. Reduced amounts of updated checkpoint data can be transmitted during a checkpoint replication by using multiple region size trackers.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 3, 2023
    Assignee: Intel Corporation
    Inventors: Zhe Wang, Andrew V. Anderson, Alaa R. Alameldeen, Andrew M. Rudoff
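One plausible reading of the two-level tracker above: record dirtiness per cache line until a page accumulates enough dirty lines, then record the whole page, so replication transmits only what changed. A Python sketch under that assumption (sizes, the promotion threshold, and all names are invented):

```python
class CheckpointTracker:
    """Multi-granularity dirty tracking for checkpoint replication:
    fine cache-line records per page until a page is mostly dirty,
    then a single page-level record. Illustrative sketch only."""

    LINE, PAGE = 64, 4096
    PROMOTE_AT = 8  # dirty lines before tracking the whole page

    def __init__(self):
        self.page_lines = {}      # page -> set of dirty line offsets
        self.whole_pages = set()  # pages tracked at page granularity

    def record_write(self, addr):
        page = addr // self.PAGE
        if page in self.whole_pages:
            return  # already tracked coarsely
        lines = self.page_lines.setdefault(page, set())
        lines.add((addr % self.PAGE) // self.LINE)
        if len(lines) >= self.PROMOTE_AT:
            self.whole_pages.add(page)
            del self.page_lines[page]

    def bytes_to_replicate(self):
        fine = sum(len(s) for s in self.page_lines.values()) * self.LINE
        return fine + len(self.whole_pages) * self.PAGE
```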
  • Patent number: 11513996
    Abstract: An index associates fingerprints of file segments to container numbers of containers within which the file segments are stored. At a start of migration, a boundary is created identifying a current container number. At least a subset of file segments at a source storage tier are packed into a new container to be written to a destination storage tier. A new container number is generated for the new container. The index is updated to associate fingerprints of the at least subset of file segments to the new container number. A request is received to read a file segment. The index is queried with a fingerprint of the file segment to determine whether the request should be directed to the source or destination storage tier based on a container number of a container within which the file segment is stored.
    Type: Grant
    Filed: July 14, 2021
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Neeraj Bhutani, Ramprasad Chinthekindi, Nitin Madan, Srikanth Srinivasan
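The boundary test above reduces to comparing container numbers: containers numbered below the boundary predate the migration and are read from the source tier, newer ones from the destination. A minimal Python sketch (all names invented for illustration):

```python
class MigrationIndex:
    """Fingerprint -> container-number index with a migration boundary
    that routes reads to the source or destination storage tier.
    Illustrative sketch only."""

    def __init__(self):
        self.index = {}          # fingerprint -> container number
        self.next_container = 0
        self.boundary = None

    def write(self, fingerprint):
        self.index[fingerprint] = self.next_container
        self.next_container += 1

    def start_migration(self):
        self.boundary = self.next_container  # current container number

    def migrate(self, fingerprints):
        # pack segments into a new container on the destination tier
        new = self.next_container
        self.next_container += 1
        for fp in fingerprints:
            self.index[fp] = new

    def tier_for_read(self, fingerprint):
        if self.index[fingerprint] < self.boundary:
            return "source"
        return "destination"
```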
  • Patent number: 11481143
    Abstract: Metadata of extent-based storage systems can be managed. For example, a computing device can store a first metadata object and a second metadata object in a first memory device. The first metadata object can specify locations of a first set of extents corresponding to a first data unit stored in a second memory device. The second metadata object can specify locations of a second set of extents corresponding to a second data unit stored in the second memory device. The computing device can determine that a first size of the first metadata object is smaller than a second size of the second metadata object. The computing device can remove the second metadata object from the first memory device based on determining that the first size is less than the second size.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: October 25, 2022
    Assignee: RED HAT, INC.
    Inventors: Gabriel Zvi BenHanokh, Joshua Durgin
  • Patent number: 11474738
    Abstract: Exemplary methods, apparatuses, and systems include receiving a plurality of read operations directed to a portion of memory accessed by a memory channel. The plurality of read operations are divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations. An aggressor read operation is selected from the current set. A supplemental memory location is selected independently of aggressors and victims in the current set of read operations. A first data integrity scan is performed on a victim of the aggressor read operation and a second data integrity scan is performed on the supplemental memory location.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: October 18, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Saeed Sharifi Tehrani, Ashutosh Malshe, Kishore Kumar Muchherla, Sivagnanam Parthasarathy, Vamsi Pavan Rayaprolu
  • Patent number: 11455122
    Abstract: Provided is a storage system in which a compression rate of randomly written data can be increased and access performance can be improved. A storage controller 22A includes a cache area 203A configured to store data to be read out of or written into a drive 29. The controller 22A groups a plurality of pieces of data stored in the cache area 203A and input into the drive 29 based on a similarity degree among the pieces of data, selects a group, compresses data of the selected group in group units, and stores the compressed data in the drive 29.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: September 27, 2022
    Assignee: HITACHI, LTD.
    Inventors: Nagamasa Mizushima, Tomohiro Yoshihara, Kentaro Shimada
  • Patent number: 11405358
    Abstract: The application includes a data processing device and method. In an embodiment, the data processing device includes a data collection unit, configured to collect data transmitted in a network, and divide the collected data, according to a predetermined feature, into known attack data and unknown attack data. The data processing device further includes a data conversion unit, configured to replace, according to a mapping database, at least a portion of the content included in the unknown attack data with corresponding identification codes. Therefore, the size of data transmitted in the network can be reduced.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: August 2, 2022
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Dai Fei Guo, Xi Feng Liu
  • Patent number: 11372779
    Abstract: A memory page management method is provided. The method includes receiving a state-change notification corresponding to a state-change page, and grouping the state-change page from a list to which the state-change page belongs into a keep list or an adaptive LRU list of an adaptive adjusting list according to the state-change notification; receiving an access command from a CPU to perform an access operation to target page data corresponding to a target page; determining that a cache hit state is a hit state or a miss state according to a target NVM page address corresponding to the target page, and grouping the target page into the adaptive LRU list according to the cache hit state; and searching the adaptive page list according to the target NVM page address to obtain a target DRAM page address to complete the access command corresponding to the target page data.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: June 28, 2022
    Assignees: Industrial Technology Research Institute, National Taiwan University
    Inventors: Che-Wei Tsao, Tei-Wei Kuo, Yuan-Hao Chang, Tzu-Chieh Shen, Shau-Yin Tseng
  • Patent number: 11327843
    Abstract: Provided are an apparatus and method for managing data storage. A first log structured array stores data in a storage device. A second log structured array in the storage device stores metadata for the data in the first log structured array, wherein the second log structured array storing the metadata for the first log structured data storage system is nested within the first log structured array, and wherein the first and second log structured arrays comprise separate instances of log structured arrays. Address space is allocated in the second log structured array for metadata when the allocation of address space is required for metadata for data stored in the first log structured array.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: May 10, 2022
    Assignee: International Business Machines Corporation
    Inventors: Henry Esmond Butterworth, Ian David Judd
  • Patent number: 11301395
    Abstract: A method for characterizing workload sequentiality for cache policy optimization includes maintaining an IO trace data structure having a rolling window of IO traces describing access operations on addresses of a storage volume. A page count data structure is maintained that includes a list of all of the addresses of the storage volume referenced by the IO traces in the IO trace data structure. A list of sequences data structure is maintained that contains a list of all sequences of the addresses of the storage volume that were accessed by the IO traces in the IO trace data structure. A sequence lengths data structure is used to correlate each sequence in the list of sequences data structure with a length of the sequence, and a histogram data structure is used to correlate sequence lengths and a number of how many of sequences of each length are maintained in the sequence lengths data structure.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: April 12, 2022
    Assignee: Dell Products, L.P.
    Inventors: Hugo de Oliveira Barbalho, Vinícius Michel Gottin, Rômulo Teixeira de Abreu Pinho
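The histogram at the end of the pipeline above can be derived from the trace window by run-length analysis of consecutive addresses. A compact Python sketch of that final step alone (the patent's intermediate data structures are omitted):

```python
from collections import Counter

def sequentiality_histogram(lbas):
    """Count, for each sequence length, how many maximal runs of
    consecutive addresses appear in the trace window.
    Illustrative sketch of the histogram step only."""
    hist = Counter()
    run = 1
    for prev, cur in zip(lbas, lbas[1:]):
        if cur == prev + 1:
            run += 1          # extend the current sequence
        else:
            hist[run] += 1    # close it and start a new one
            run = 1
    if lbas:
        hist[run] += 1        # close the final sequence
    return dict(hist)
```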
  • Patent number: 11294829
    Abstract: A method configures a cache to implement a LRU management technique. The cache has N entries divided into B buckets. Each bucket has a number of entries equal to P entries*M vectors, wherein N=B*P*M. Any P entry within any M vector is ordered using an in-vector LRU ordering process. Any entry within any bucket is ordered in LRU within the vectors and buckets. The LRU management technique moves a found entry to a first position within a same M vector, responsive to a lookup for a specified key, and permutes the found entry and a last entry in a previous M vector, responsive to the found entry already being in the first position within a vector and the same one of the M vectors not being a first vector in the bucket in the moving step.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: April 5, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Hiroshi Inoue
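The in-vector move-to-front and cross-vector permute described above can be sketched directly. A small Python illustration for one bucket (class and method names are invented, and a hardware or SIMD implementation would use vector permutes rather than list operations):

```python
class BucketedLRU:
    """One bucket of the vectorized LRU: a list of M vectors of P keys,
    ordered most- to least-recently used. Illustrative sketch only."""

    def __init__(self, vectors):
        self.vectors = vectors  # M lists, each holding P keys

    def touch(self, key):
        """Lookup: move the found key to the front of its vector, or,
        if already at the front of a non-first vector, permute it with
        the last key of the previous vector."""
        for vi, vec in enumerate(self.vectors):
            if key in vec:
                pos = vec.index(key)
                if pos > 0:
                    vec.insert(0, vec.pop(pos))          # in-vector move
                elif vi > 0:
                    prev = self.vectors[vi - 1]
                    prev[-1], vec[0] = vec[0], prev[-1]  # cross-vector swap
                return True
        return False
```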
  • Patent number: 11287274
    Abstract: Technologies are provided for memory management for route optimization algorithms. An example method can include determining a cost surface of a route-based project associated with an area, the cost surface including nodes comprising costs associated with respective locations within the area; determining whether a cache has data of each neighbor of a current node being processed to determine a least-cost path from a start node to an end node; obtaining, from the memory cache, the data of each neighbor; for each particular neighbor that is not a boundary node in the cost surface, determining a projected cost of the particular neighbor based on an accumulated cost of the particular neighbor and an additional cost estimated based on a distance between the particular neighbor and the end node; and based on the projected cost of each particular neighbor, determining the least-cost path from the start node to the end node.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: March 29, 2022
    Assignee: IMERCATUS HOLDINGS LLC
    Inventors: Jose Mejia Robles, Scott L. Gowdish
  • Patent number: 11221957
    Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes receiving a request for an Effective Address to Real Address Translation (ERAT); determining whether there is a permissions miss; changing, in response to determining there is a permission miss, permissions of an ERAT cache entry; and providing a Real Address translation. The method, computer program product, and computer system may optionally include providing a promote checkout request to a memory management unit (MMU).
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Benjamin Herrenschmidt, Robert D. Herzl, Jody Joyner, Jon K. Kriegel, Charles D. Wait
  • Patent number: 11169919
    Abstract: A method for improving cache hit ratios for selected volumes within a storage system includes monitoring I/O to multiple volumes residing on a storage system. The method determines, from the I/O, which particular volumes of the multiple volumes would benefit the most if provided favored status in cache of the storage system, where the favored status provides increased residency time in the cache compared to volumes not having the favored status. The method determines, from the I/O, an amount by which the increased residency time should exceed a residency time of volumes not having the favored status. The method generates an indicator that is representative of the amount and transmits this indicator to the storage system. The storage system, in turn, provides increased residency time to the particular volumes in accordance with the favored status and indicator. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: November 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Beth A. Peterson, Kyler A. Anderson, Kevin J. Ash
  • Patent number: 10970218
    Abstract: The present disclosure includes apparatuses and methods for compute enabled cache. An example apparatus comprises a compute component, a memory and a controller coupled to the memory. The controller configured to operate on a block select and a subrow select as metadata to a cache line to control placement of the cache line in the memory to allow for a compute enabled cache.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: April 6, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Richard C. Murphy
  • Patent number: 10949360
    Abstract: The information processing apparatus is provided with a plurality of arithmetic devices, a memory unit shared by the plurality of arithmetic devices, and a cache device. The cache device divides the memory space of the memory unit into a plurality of regions, and includes a plurality of caches in the same hierarchy, each of which is associated with a respective one of the plurality of regions. Each cache includes a cache core configured to exclusively store data from a respective one of the plurality of regions.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: March 16, 2021
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Seidai Takeda
  • Patent number: 10942866
    Abstract: Disclosed are systems and methods for using a priority cache to store frequently used data items by an application. The priority cache may include multiple cache regions. Each of the cache regions may be associated with a different priority level. When a data item is to be stored in the priority cache, the application may review the context of the data item to determine if the data item may be used again in the near future. Based on that determination, the application may be configured to assign a priority level to the data item. The data item may then be stored in the appropriate cache region according to its assigned priority level.
    Type: Grant
    Filed: March 21, 2014
    Date of Patent: March 9, 2021
    Assignee: EMC IP Holding Company LLC
    Inventor: Dennis Holmes
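The region-per-priority layout above can be sketched as separate fixed-capacity maps, one per priority level, with items placed according to their assigned priority. A minimal Python illustration (the FIFO eviction inside a region is an invented placeholder for whatever policy the real cache uses):

```python
class PriorityCache:
    """Cache with one region per priority level; each item lands in
    the region matching its assigned priority. Illustrative sketch."""

    def __init__(self, region_capacity):
        self.regions = {"low": {}, "mid": {}, "high": {}}
        self.capacity = region_capacity   # per-region capacity

    def put(self, key, value, priority):
        region = self.regions[priority]
        if len(region) >= self.capacity:
            # evict the oldest entry in this region (FIFO placeholder)
            region.pop(next(iter(region)))
        region[key] = value

    def get(self, key):
        for region in self.regions.values():
            if key in region:
                return region[key]
        return None
```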
  • Patent number: 10915444
    Abstract: A processing device in a memory system determines whether a first data block of a plurality of data blocks on the memory component satisfies a first threshold criterion pertaining to a first number of the plurality of data blocks having a lower amount of valid data than a remainder of the plurality of data blocks. Responsive to the first data block satisfying the first threshold criterion, the processing device determines whether the first data block satisfies a second threshold criterion pertaining to a second number of the plurality of data blocks having been written to more recently than the remainder of the plurality of data blocks. Responsive to the first data block satisfying the second threshold criterion, the processing device determines whether a rate of change of an amount of valid data on the first data block satisfies a third threshold criterion.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: February 9, 2021
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Kishore Kumar Muchherla, Sampath K. Ratnam, Ashutosh Malshe, Peter Sean Feeley
  • Patent number: 10901915
    Abstract: Systems, apparatuses, and methods may provide for an eventually-consistent distributed caching mechanism for database systems. As an example, the system may include a recently updated objects (RUO) manager, which may store object identifiers of recently updated objects and RUO time-to-live values of the object identifiers. As servers read objects from the cache or write objects into the cache, the servers may also check the RUO manager to determine if the object has been updated recently enough to be at risk of being stale or outdated. If so, the servers may invalidate the object stored at the cache as it may be stale, which results in eventual consistency across the distributed database system.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 26, 2021
    Assignee: Comcast Cable Communications, LLC
    Inventors: Christopher Orogvany, Mark Perry, Bradley W. Jacobs
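The RUO check above can be sketched as a TTL-stamped set of object identifiers that readers consult before trusting a cached copy. A minimal Python illustration (names and the lazy TTL expiry are assumptions, not Comcast's implementation):

```python
class RecentlyUpdatedObjects:
    """Track object ids of recent writes with a time-to-live; a cached
    copy of any id still present here is treated as possibly stale and
    should be invalidated. Illustrative sketch only."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.updated_at = {}   # object id -> time of last update

    def note_update(self, obj_id, now):
        self.updated_at[obj_id] = now

    def maybe_stale(self, obj_id, now):
        t = self.updated_at.get(obj_id)
        if t is None:
            return False
        if now - t > self.ttl:
            del self.updated_at[obj_id]   # TTL expired, safe again
            return False
        return True
```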
  • Patent number: 10901906
    Abstract: This disclosure provides a method, a computing system and a computer program product for allocating write data in a storage system. The storage system comprises a Non-Volatile Write Cache (NVWC) and a backend storage subsystem, and the write data comprises first data whose addresses are not in the NVWC. The method includes checking fullness of the NVWC, and determining at least one of a write-back mechanism or a write-through mechanism as a write mode for the first data based on the checked fullness.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Gang Lyu, Hui Zhang
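The fullness check above reduces to a watermark comparison: first-write data goes through the write cache (write-back) while it has room, and straight to the backend (write-through) once it is nearly full. A one-function Python sketch (the 0.8 watermark is an invented illustration, not the patent's rule):

```python
def choose_write_mode(nvwc_used, nvwc_capacity, high_watermark=0.8):
    """Pick write-back or write-through for first data based on how
    full the Non-Volatile Write Cache is. Illustrative sketch only."""
    fullness = nvwc_used / nvwc_capacity
    return "write-through" if fullness >= high_watermark else "write-back"
```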
  • Patent number: 10860257
    Abstract: An information processing apparatus includes a RAM; a non-volatile memory storing setting information in which a compression method is set for each of a plurality of RAM disks, the setting information including a plurality of compression methods; and circuitry. The circuitry is configured to create each of the plurality of RAM disks with the compression method mounted, in the RAM, according to the setting information; request writing and reading of data from an application; write the data into a corresponding RAM disk of the plurality of RAM disks corresponding to the application, in response to a writing request of the data from the application; and compress the data in the compression method mounted on the corresponding RAM disk.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: December 8, 2020
    Assignee: Ricoh Company, Ltd.
    Inventors: Daiki Sakurada, Hideaki Yamamoto, Hiroyuki Ishihara, Tomoe Kitaguchi, Ryuta Aoki
  • Patent number: 10831661
    Abstract: Processing simultaneous data requests, regardless of an active request in the same addressable index of a cache. In response to a cache miss in a given congruence class, if the number of other compartments in the class that have an active operation is less than a predetermined threshold, a Do Not Cast Out (DNCO) pending indication is set for each compartment that has an active operation, in order to block access to each of those compartments. If the number of such compartments is not less than the predetermined threshold, another cache miss is blocked from occurring in the compartments of the given congruence class by setting a congruence class block pending indication for the class, in order to block access to each of the other compartments of the class.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Tim Bronson, Robert J. Sonnelitter, III, Deanna P. D. Berger, Chad G. Wilson, Kenneth Douglas Klapproth, Arthur O'Neill, Michael A. Blake, Guy G. Tracy
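The threshold decision in the abstract can be sketched as follows. This is a simplified illustration; the function name and return shape are assumptions, and real hardware would set these indications in cache directory state rather than return them:

```python
def handle_miss(active_compartments, threshold):
    """On a cache miss in a congruence class, decide how to block access.

    active_compartments: set of compartment indices with an active operation.
    Returns which blocking indications to set for the class.
    """
    if len(active_compartments) < threshold:
        # Few active operations: pin only those compartments with DNCO,
        # leaving the rest of the class available for the miss.
        return {"dnco_pending": set(active_compartments), "class_blocked": False}
    # Many active operations: block further misses in the whole class.
    return {"dnco_pending": set(), "class_blocked": True}
```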
  • Patent number: 10713081
    Abstract: Secure and efficient memory sharing for guests is disclosed. For example, a host has a host memory storing first and second guests whose memory access is managed by a hypervisor. A request to map an IOVA of the first guest to the second guest is received, where the IOVA is mapped to a GPA of the first guest, which is mapped to an HPA of the host memory. The HPA is mapped to a second GPA of the second guest, where the hypervisor controls access permissions of the HPA. The second GPA is mapped in a second page table of the second guest to a GVA of the second guest, where a supervisor of the second guest controls access permissions of the second GPA. The hypervisor enables a program executing on the second guest to access contents of the HPA based on the access permissions of the HPA.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: July 14, 2020
    Assignee: RED HAT, INC.
    Inventors: Michael Tsirkin, Stefan Hajnoczi
  • Patent number: 10691598
    Abstract: A method for managing cache memory in a user device is disclosed. The method comprises stagewise exclusion of data from being added to the cache memory as the cache memory fill level increases: for each successive stage of cache memory fill level, data is excluded from being added to the cache memory according to exclusion rules that are increasingly restrictive.
    Type: Grant
    Filed: May 16, 2011
    Date of Patent: June 23, 2020
    Assignee: INTERDIGITAL CE PATENT HOLDINGS
    Inventors: Thierry Quere, Renaud Rigal, Florent Fresnaye
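The staged admission policy above can be sketched with a table of fill thresholds and per-stage limits. The specific stages, thresholds, and size limits here are invented for illustration; the patent only requires that rules grow more restrictive with fill level:

```python
def should_cache(item_size, fill_level,
                 stages=((0.5, None), (0.75, 1 << 20), (0.9, 64 << 10))):
    """Increasingly restrictive admission as the cache fills.

    stages: (fill threshold, max admitted item size in bytes, or None for
    no limit), ordered by threshold. The rule of the highest stage reached
    applies; at a completely full cache nothing is admitted.
    """
    if fill_level >= 1.0:
        return False
    limit = None
    for threshold, max_size in stages:
        if fill_level >= threshold:
            limit = max_size
    return limit is None or item_size <= limit
```

For example, a 2 MB item is admitted at 30% fill, rejected at 80% fill (where the limit has tightened to 1 MB), while a 32 KB item is still admitted at 95% fill.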
  • Patent number: 10601919
    Abstract: In accordance with one aspect of the present description, in response to detection by a storage controller of an operation by a host relating to migration of input/output operations from one host to another, a cache server of the storage controller transmits to a target cache client of the target host a cache map of the source cache of the source host, wherein the cache map identifies locations of a portion of the storage cached in the source cache. In response, the cache client of the target host may populate the target cache of the target host with data from the locations of the portion of the storage, as identified by the cache map transmitted by the cache server, which may reduce cache warming time. Other features or advantages may be realized in addition to or instead of those described herein, depending upon the particular application.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: March 24, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lawrence Y. Chiu, Hyojun Kim, Paul H. Muench, Sangeetha Seshadri
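The cache-warming step on the target host reduces, in essence, to replaying the source's cache map against shared storage. A minimal sketch (the function name and extent representation are assumptions):

```python
def warm_target_cache(cache_map, read_storage):
    """Pre-populate a target host's cache from the source cache's map.

    cache_map: iterable of (lba, length) extents the source cache held.
    read_storage: callable returning the data for an extent from the
    shared backend storage.
    """
    target_cache = {}
    for lba, length in cache_map:
        # Fetch from storage now, so post-migration reads hit the cache
        # instead of paying cold-cache latency.
        target_cache[lba] = read_storage(lba, length)
    return target_cache
```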
  • Patent number: 10573368
    Abstract: In an embodiment, a memory system may include at least two types of DRAM, which differ in at least one characteristic. For example, one DRAM type may be a high density DRAM, while another DRAM type may have lower density but may also have lower latency and higher bandwidth than the first DRAM type. DRAM of the first type may be on one or more first integrated circuits and DRAM of the second type may be on one or more second integrated circuits. In an embodiment, the first and second integrated circuits may be coupled together in a stack. The second integrated circuit may include a physical layer circuit to couple to other circuitry (e.g. an integrated circuit having a memory controller, such as a system on a chip (SOC)), and the physical layer circuit may be shared by the DRAM in the first integrated circuits.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: February 25, 2020
    Assignee: Apple Inc.
    Inventors: Sukalpa Biswas, Farid Nemati
  • Patent number: 10567277
    Abstract: A method and a system are disclosed herein for a co-operative on-path and off-path caching policy for information centric networks (ICN). In an embodiment, a computer-implemented method and system are provided for a cooperative on-path and off-path caching policy for information centric networks, in which the edge routers or on-path routers optimally store the requested ICN contents and are supported by a strategically placed central off-path cache router for an additional level of caching. A heuristic mechanism has also been provided to offload contents from the on-path routers to the off-path central cache router and to store them optimally. The present scheme optimally stores the requested ICN contents either in the on-path edge routers or in the strategically located off-path central cache router, and ensures an optimal formulation resulting in reduced cache duplication, delay and network usage.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: February 18, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Hemant Kumar Rath, Bighnaraj Panigrahi, Anantha Simha
  • Patent number: 10489306
    Abstract: A data processing system incorporates a cache system having a cache memory and a cache controller. The cache controller selects for cache entry eviction using a primary eviction policy. This primary eviction policy may identify a plurality of candidates for eviction with an equal preference for eviction. The cache controller provides a further selection among this plurality of candidates based upon content data read from those candidates themselves as part of the cache access operation which resulted in the cache miss leading to the cache replacement requiring the victim selection. The content data used to steer this second stage of victim selection may include transience specifying data and, for example, in the case of a cache memory comprising a translation lookaside buffer, page size data, type of translation data, memory type data, permission data and the like.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: November 26, 2019
    Assignee: ARM Limited
    Inventors: Guillaume Bolbenes, Jean-Paul Georges Poncelet
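The two-stage victim selection above can be illustrated with a tiny tie-breaker over content-derived fields. The field names (`transient`, `page_size`) echo the examples in the abstract, but the specific ordering rule is an invented illustration:

```python
def select_victim(candidates):
    """Second-stage victim selection among entries the primary eviction
    policy marked as equally preferable.

    candidates: dicts with content data read during the cache access that
    missed. Prefer evicting entries marked transient, then those covering
    smaller pages (cheaper to refetch on a later miss).
    """
    # Python compares tuples lexicographically: transient entries sort
    # first (False < True), then smaller page sizes.
    return min(candidates, key=lambda e: (not e["transient"], e["page_size"]))
```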
  • Patent number: 10474588
    Abstract: According to some embodiments, a backup storage system receives a request from a client for reading a data segment associated with a file object stored in a storage system. In response to the request, the system determines whether a cache hit counter associated with the data segment exceeds a cache hit threshold. The system further determines whether the data segment is associated with a file region of the file object that is frequently accessed. The system writes the data segment into a memory responsive to determining that the cache hit counter exceeds the cache hit threshold and the data segment is associated with the frequently accessed file region. Otherwise, the system writes the data segment into a solid state device (SSD) operating as a cache device.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: November 12, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Rahul B. Ugale, Satish Visvanathan
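The tier-placement decision in the abstract is a simple conjunction of two tests. A sketch, with an illustrative function name and threshold:

```python
def place_segment(hit_count, in_hot_region, hit_threshold=4):
    """Decide which cache tier receives a read data segment.

    Promote to memory only when both conditions hold: the segment's hit
    counter exceeds the threshold AND it belongs to a frequently accessed
    file region. Otherwise the SSD cache tier is used.
    """
    if hit_count > hit_threshold and in_hot_region:
        return "memory"
    return "ssd"
```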
  • Patent number: 10459636
    Abstract: A system and method are described for managing mapping data in a non-volatile memory system having a volatile memory cache smaller than the update table for the mapping data. The system includes multiple mapping layers, for example two mapping layers, including a master mapping table of logical-to-physical mapping entries and an update table of mapping updates, for a non-volatile memory. A processor swaps predetermined-size portions of the update mapping table and master mapping table into and out of the volatile memory cache based on host workload. The update mapping table portions may have a fixed or an adaptive logical range. Additional mapping layers, such as an expanded mapping layer having portions with a logical range greater than the logical range of the update mapping portions, may also be included and may be swapped into and out of the volatile memory with the master and update mapping table portions.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: October 29, 2019
    Assignee: SanDisk Technologies LLC
    Inventors: Marina Frid, Igor Genshaft
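The portion-swapping behavior above can be sketched as a small LRU cache of table portions. This is a simplification (the class name is invented, and a real controller would also handle dirty write-back and workload-adaptive ranges):

```python
from collections import OrderedDict

class MappingCache:
    """Holds a few fixed-size portions of the mapping tables in a volatile
    cache, swapping out the least recently used portion when full."""
    def __init__(self, capacity, load_portion):
        self.capacity = capacity
        self.load_portion = load_portion  # reads a portion from flash
        self.portions = OrderedDict()     # portion_id -> portion data

    def lookup(self, portion_id):
        if portion_id in self.portions:
            self.portions.move_to_end(portion_id)  # mark recently used
        else:
            if len(self.portions) >= self.capacity:
                self.portions.popitem(last=False)  # swap out LRU portion
            self.portions[portion_id] = self.load_portion(portion_id)
        return self.portions[portion_id]
```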
  • Patent number: 10382549
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed data management. One of the methods includes maintaining, by a first member in a distributed data management system having multiple computing members installed on multiple respective computers, a first garbage collection version vector that includes, for each member in the distributed data management system, a garbage collection version that represents a number of garbage collection processes performed by the member on a respective copy of a replicated data region maintained by the member in the data management system. If the first garbage collection version vector is different than a second garbage collection version vector received from a different provider member, a first replication process is performed that is different than a second replication process that is performed when the first garbage collection version vector matches the second garbage collection version vector.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: August 13, 2019
    Assignee: Pivotal Software, Inc.
    Inventors: Sumedh Wale, Neeraj Kumar, Daniel Allen Smith, Jagannathan Ramnarayanan, Suranjan Kumar, Hemant Bhanawat, Anthony M. Baker
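The vector-comparison step in the abstract can be sketched as follows. The strategy names are illustrative; the patent only distinguishes between one replication process when the garbage-collection version vectors match and a different one when they diverge:

```python
def choose_replication(local_gc_vector, provider_gc_vector):
    """Compare per-member garbage-collection version vectors to pick a
    replication process.

    Each vector maps member -> number of GC passes that member has run on
    its copy of the replicated region. Matching vectors mean both sides
    agree on what has been collected, so an incremental process suffices;
    a mismatch requires the more conservative process.
    """
    if local_gc_vector == provider_gc_vector:
        return "incremental-replication"
    return "full-replication"
```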
  • Patent number: 10185666
    Abstract: Several embodiments include a method of operating a cache appliance comprising a primary memory implementing an item-wise cache and a secondary memory implementing a block cache. The cache appliance can emulate item-wise storage and eviction in the block cache by maintaining, in the primary memory, sampling data items from the block cache. The sampled items can enable the cache appliance to represent a spectrum of retention priorities. When storing a pending data item into the block cache, a comparison of the pending data item with the sampled items can enable the cache appliance to identify where to insert a block containing the pending data item. When evicting a block from the block cache, a comparison of a data item in the block with at least one of the sampled items can enable the cache appliance to determine whether to recycle/retain the data item.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: January 22, 2019
    Assignee: Facebook, Inc.
    Inventors: Jana van Greunen, Huapeng Zhou, Linpeng Tang
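The sampled items let the appliance rank an arbitrary item against the block cache's retention spectrum. A sketch of the two decisions the abstract names (insertion position and recycle-on-eviction); the function names, the priority scale, and the rank-to-block mapping are invented for illustration:

```python
import bisect

def insertion_block(sampled_priorities, item_priority, num_blocks):
    """Map an item's retention priority onto a block position.

    sampled_priorities: ascending priorities of items sampled from the
    block cache, standing in for its retention spectrum. Items that rank
    near higher-priority samples land in blocks evicted later.
    """
    rank = bisect.bisect_left(sampled_priorities, item_priority)
    # Scale the rank among samples to a block index in [0, num_blocks - 1].
    return min(num_blocks - 1, rank * num_blocks // (len(sampled_priorities) + 1))

def should_recycle(item_priority, eviction_sample_priority):
    """On block eviction, retain (recycle) items whose priority is still
    above that of the sampled item at the eviction boundary."""
    return item_priority > eviction_sample_priority
```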
  • Patent number: 10152425
    Abstract: A processing system selects entries for eviction at one cache based at least in part on the validity status of corresponding entries at a different cache. The processing system includes a memory hierarchy having at least two caches, a higher level cache and a lower level cache. The lower level cache monitors which locations of the higher level cache have been indicated as invalid and, when selecting an entry of the lower level cache for eviction to the higher level cache, selects the entry based at least in part on whether the selected cache entry will be stored at an invalid cache line of the higher level cache.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: December 11, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Paul James Moyer
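The victim-selection rule above can be sketched in a few lines. The function name and data shapes are assumptions; the point is only the preference order the abstract describes:

```python
def pick_victim(candidates, invalid_higher_lines, target_line_of):
    """Select an entry to evict from the lower level cache, preferring one
    whose destination line in the higher level cache is marked invalid, so
    the eviction fills a free slot instead of displacing valid data there.
    """
    for entry in candidates:
        if target_line_of(entry) in invalid_higher_lines:
            return entry
    return candidates[0]  # fall back to the default policy's choice
```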
  • Patent number: 10108374
    Abstract: A memory controller receives first and second write transactions from a processor and stores write data in a memory. The memory controller includes an address comparison circuit, a buffer, a level control circuit, a command generator, and a control circuit. The address comparison circuit compares second and third addresses and outputs first and second write data when the second and third addresses are consecutive. The buffer stores the first and second write data and outputs buffered data based on a control signal. The level control circuit compares a size of the buffered data with a threshold size and the size of the buffer. The command generator causes a write transaction to be executed based on the comparison results, rather than having the processor initiate the transaction, which reduces the load on the processor, and the buffered write data is stored in the memory.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: October 23, 2018
    Assignee: NXP USA, INC.
    Inventors: Harsimran Singh, Neeraj Chandak, Snehlata Gutgutia, Vivek Singh
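The address-comparison behavior above can be modeled in software: writes to consecutive addresses are merged into one buffered transaction. A sketch (the function name and data shapes are illustrative, and a real controller would also flush when the buffered size crosses its threshold):

```python
def combine_writes(transactions):
    """Merge write transactions with consecutive addresses into one
    buffered transaction, as the address-comparison circuit would.

    transactions: list of (address, data_bytes), in arrival order.
    Returns the combined (address, data) pairs.
    """
    combined = []
    for addr, data in transactions:
        if combined and combined[-1][0] + len(combined[-1][1]) == addr:
            prev_addr, prev_data = combined[-1]
            combined[-1] = (prev_addr, prev_data + data)  # consecutive: merge
        else:
            combined.append((addr, data))  # gap in addresses: new transaction
    return combined
```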
  • Patent number: 10108517
    Abstract: Described are techniques for using resources of a data storage system. The data storage system is configured to have a first data storage configuration including a first extendable resource. The data storage system is configured to execute a virtual machine and one or more applications executing in the context of the virtual machine. One or more metrics are monitored regarding any of data storage system resource utilization and performance. It is determined whether the one or more metrics comply with specified criteria. If the one or more metrics do not comply with the specified criteria, processing is performed that includes providing a notification in connection with migrating any of the virtual machine and data used by the virtual machine from the data storage system.
    Type: Grant
    Filed: June 27, 2011
    Date of Patent: October 23, 2018
    Assignee: EMC IP Holding Company LLC
    Inventors: Oleg Alexandrovich Efremov, Artem Akopovich Zarafyants, Sergey Zaporozhtsev, Mark A. Parenti
  • Patent number: 10073785
    Abstract: In a processing system comprising a cache, a method includes monitoring demand cache accesses for a thread to maintain a first running count of a number of times demand cache accesses for the thread are directed to cachelines that are adjacent in a first direction to cachelines that are targets of a set of sampled cache accesses for the thread. In response to determining the first running count has exceeded a first threshold, the method further includes enabling a first prefetching mode in which a received demand cache access for the thread triggers a prefetch request for a cacheline adjacent in the first direction to a cacheline targeted by the received demand cache access.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: September 11, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: William Evan Jones, III
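The adjacency-counting scheme above can be sketched as a small state machine. The class name, sampling interface, and threshold value are assumptions; the patent describes the running count per direction and the threshold-triggered prefetch mode:

```python
class AdjacencyPrefetcher:
    """Counts how often a thread's demand accesses target the cacheline
    adjacent (in one direction) to a sampled access's target, and enables
    prefetching in that direction once the count exceeds a threshold."""
    def __init__(self, threshold=8, direction=+1):
        self.threshold = threshold
        self.direction = direction
        self.count = 0
        self.sampled_lines = set()
        self.enabled = False

    def sample(self, line):
        """Record the target cacheline of a sampled cache access."""
        self.sampled_lines.add(line)

    def access(self, line):
        """Handle a demand access; returns a line to prefetch, or None."""
        if line - self.direction in self.sampled_lines:
            self.count += 1              # adjacent to a sampled target
            if self.count > self.threshold:
                self.enabled = True      # first prefetching mode is on
        return line + self.direction if self.enabled else None
```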