Combined Replacement Modes Patents (Class 711/134)
-
Patent number: 12242424
Abstract: A memory tier including persistent memory (PMEM) devices is established in nodes of a cluster system having a deduplicated file system. At least a portion of metadata generated by the deduplicated file system is persisted to the memory tier. The portion of metadata includes an index of fingerprints corresponding to data segments stored by the deduplicated file system to a storage pool. A determination is made that an instance of the deduplicated file system has failed. A new instance of the deduplicated file system is started to recover file system services by loading the index of fingerprints from the memory tier.
Type: Grant
Filed: June 10, 2021
Date of Patent: March 4, 2025
Assignee: EMC IP Holding Company LLC
Inventors: Yong Zou, Rahul Ugale
-
Patent number: 12242346
Abstract: Global column repair with local column decoder circuitry and related apparatuses, methods, and computing systems are disclosed. An apparatus includes global column repair circuitry including column address drivers corresponding to respective ones of column planes of a memory array. The column address drivers are configured to, if enabled, drive a received column address signal to local column decoder circuitry local to respective ones of the column planes. The global column repair circuitry also includes match circuitry and data storage elements configured to store defective column addresses corresponding to defective column planes. The match circuitry is configured to compare a received column address indicated by the received column address signal to the defective column addresses and disable a column address driver corresponding to a defective column plane responsive to a determination that the received column address matches a defective column address associated with the defective column plane.
Type: Grant
Filed: October 4, 2022
Date of Patent: March 4, 2025
Assignee: Micron Technology, Inc.
Inventors: Christopher G. Wieduwilt, Fatma Arzum Simsek-Ege
-
Patent number: 12182036
Abstract: Providing content-aware cache replacement and insertion policies in processor-based devices is disclosed. In some aspects, a processor-based device comprises a cache memory device and a cache controller circuit of the cache memory device. The cache controller circuit is configured to determine a plurality of content costs for each of a plurality of cached data values in the cache memory device, based on a plurality of bit values of each of the plurality of cached data values. The cache controller circuit is configured to identify, based on the plurality of content costs, a cached data value of the plurality of cached data values associated with a lowest content cost as a target cached data value. The cache controller circuit is also configured to evict the target cached data value from the cache memory device.
Type: Grant
Filed: February 2, 2023
Date of Patent: December 31, 2024
Assignee: QUALCOMM Incorporated
Inventors: George Patsilaras, Engin Ipek, Goran Goran, Hamza Omar, Bohuslav Rychlik, Jeffrey Gemar, Matthew Severson, Andrew Edmund Turner
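The content-cost eviction idea above can be sketched in a few lines. The popcount cost model and the tag-to-value map used here are illustrative assumptions, not the cost function claimed by the patent:

```python
def content_cost(value: int, width: int = 32) -> int:
    # Hypothetical cost model: cost grows with the number of set bits
    # in the cached value (e.g. bit flips on writeback cost energy).
    return bin(value & ((1 << width) - 1)).count("1")

def choose_victim(cached_values: dict[int, int]) -> int:
    # Evict the cache line whose contents carry the lowest content cost.
    return min(cached_values, key=lambda tag: content_cost(cached_values[tag]))

# The line holding 0x01 (one set bit) is the cheapest content, so it is evicted.
victim = choose_victim({0x10: 0xFF, 0x20: 0x01, 0x30: 0x0F})
```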
-
Patent number: 12182035
Abstract: Systems and methods for improving cache efficiency and utilization are disclosed. In one embodiment, a graphics processor includes processing resources to perform graphics operations and a cache controller of a cache memory that is coupled to the processing resources. The cache controller is configured to set an initial aging policy using an aging field based on age of cache lines within the cache memory and to determine whether a hint or an instruction to indicate a level of aging has been received.
Type: Grant
Filed: March 14, 2020
Date of Patent: December 31, 2024
Assignee: Intel Corporation
Inventors: Altug Koker, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Abhishek Appu, Aravindh Anantaraman, Valentin Andrei, Durgaprasad Bilagi, Varghese George, Brent Insko, Sanjeev Jahagirdar, Scott Janus, Pattabhiraman K, SungYe Kim, Subramaniam Maiyuran, Vasanth Ranganathan, Lakshminarayanan Striramassarma, Xinmin Tian
-
Patent number: 12164417
Abstract: A memory controller configured to control a dynamic random access memory (DRAM) includes a first control circuit and a second control circuit. The first control circuit is configured to store a request received by the memory controller in a first storage circuit, and select a request from all requests stored in the first storage circuit. The second control circuit is configured to store the request selected by the first control circuit in a second storage circuit, reorder requests stored in the second storage circuit, generate a DRAM command, and issue the DRAM command to the DRAM. The first control circuit is configured to select the request based on target banks and target pages of the requests stored in the second storage circuit, and a state of a bank or page of the DRAM.
Type: Grant
Filed: April 12, 2022
Date of Patent: December 10, 2024
Assignee: Canon Kabushiki Kaisha
Inventors: Motohisa Ito, Daisuke Shiraishi
-
Patent number: 12147704
Abstract: A data storage device having a flash translation layer configured to handle file-system defragmentation in a manner that avoids, reduces, and/or optimizes physical data movement in flash memory. In an example embodiment, the memory controller maintains in a volatile memory thereof a lookaside table that supplants pertinent portions of the logical-to-physical table. Entries of the lookaside table are configured to track source and destination addresses of the host defragmentation requests and are logically linked to the corresponding entries of the logical-to-physical table such that end-to-end data protection including the use of logical-address tags to the user data can be supported by logical means and without physical data rearrangement in the flash memory. In some embodiments, physical data rearrangement corresponding to the file-system defragmentation is performed in the flash memory in response to certain trigger events, which can improve the input/output performance of the data-storage device.
Type: Grant
Filed: July 15, 2022
Date of Patent: November 19, 2024
Assignee: Sandisk Technologies, Inc.
Inventors: Judah Gamliel Hahn, Ramanathan Muthiah, Bala Siva Kumar Narala, Narendhiran Chinnaanangur Ravimohan
-
Patent number: 12131060
Abstract: Exemplary methods, apparatuses, and systems include a quick charge loss (QCL) mitigation manager for controlling writing data bits to a memory device. The QCL mitigation manager receives a first set of data bits for programming to memory. The QCL mitigation manager writes a first subset of data bits of the first set of data bits to a first memory block of the memory during a first pass of programming. The QCL mitigation manager writes a second subset of data bits of the first set of data bits to the first memory block during a second pass of programming in response to determining that the threshold delay is satisfied.
Type: Grant
Filed: July 25, 2022
Date of Patent: October 29, 2024
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Kishore Kumar Muchherla, Dung V. Nguyen, Dave Scott Ebsen, Tomoharu Tanaka, James Fitzpatrick, Huai-Yuan Tseng, Akira Goda, Eric N. Lee
-
Patent number: 12099750
Abstract: A data storage device having a flash translation layer configured to handle file-system defragmentation in a manner that avoids, reduces, and/or optimizes physical data movement in flash memory. In an example embodiment, the memory controller maintains in a volatile memory thereof a lookaside table that supplants pertinent portions of the logical-to-physical table. Entries of the lookaside table are configured to track source and destination addresses of the host defragmentation requests and are logically linked to the corresponding entries of the logical-to-physical table such that end-to-end data protection including the use of logical-address tags to the user data can be supported by logical means and without physical data rearrangement in the flash memory. In some embodiments, physical data rearrangement corresponding to the file-system defragmentation is performed in the flash memory in response to certain trigger events, which can improve the input/output performance of the data-storage device.
Type: Grant
Filed: July 15, 2022
Date of Patent: September 24, 2024
Assignee: Sandisk Technologies, Inc.
Inventors: Judah Gamliel Hahn, Ramanathan Muthiah, Bala Siva Kumar Narala, Narendhiran Chinnaanangur Ravimohan
-
Patent number: 12072801
Abstract: An operating method of a storage device, the method including: loading journal data from a non-volatile memory device, identifying a cache allocation flag included in the journal data, and restoring meta data corresponding to the journal data to a storage controller in response to the cache allocation flag. Here, the cache allocation flag is a first flag when the meta data are allocated to a meta cache of the storage controller, and the cache allocation flag is a second flag when the meta data are stored to a meta buffer of the storage controller.
Type: Grant
Filed: October 25, 2022
Date of Patent: August 27, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Tae-Hwan Kim
-
Patent number: 12066944
Abstract: A coherency management device receives requests to read data from or write data to an address in a main memory. On a write, if the data includes zero data, an entry corresponding to the memory address is created in a cache directory if it does not already exist, is set to an invalid state, and indicates that the data includes zero data. The zero data is not written to main memory or a cache. On a read, the cache directory is checked for an entry corresponding to the memory address. If the entry exists in the cache directory, is invalid, and includes an indication that data corresponding to the memory address includes zero data, the coherency management device returns zero data in response to the request without fetching the data from main memory or a cache.
Type: Grant
Filed: December 20, 2019
Date of Patent: August 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte
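The zero-data shortcut described above can be modeled as a small directory in front of a backing store. The dictionary-based "memory" and the entry fields here are illustrative assumptions, not the hardware structures of the patent:

```python
class ZeroAwareDirectory:
    """Cache directory that records all-zero writes without touching memory."""

    def __init__(self, backing: dict[int, bytes]):
        self.backing = backing  # stands in for main memory
        self.entries = {}       # addr -> {"state": str, "zero": bool}

    def write(self, addr: int, data: bytes) -> None:
        if data == b"\x00" * len(data):
            # Record "this address holds zeros" as an invalid directory entry;
            # the zero data itself is never written to memory or a cache.
            self.entries[addr] = {"state": "invalid", "zero": True}
        else:
            self.entries.pop(addr, None)
            self.backing[addr] = data

    def read(self, addr: int, size: int) -> bytes:
        entry = self.entries.get(addr)
        if entry and entry["state"] == "invalid" and entry["zero"]:
            return b"\x00" * size  # synthesize zeros, no memory fetch
        return self.backing[addr]
```

A write of eight zero bytes followed by a read returns zeros even though the backing store was never updated.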
-
Patent number: 12014410
Abstract: Methods and systems for providing content and content recommendations are described. Content recommendations for users in a service area may be based on content requests from users within the same service area. Content recommendations over a period of time may dynamically change based on many factors, including: the number of requests for particular content within one or more time frames, the storage location of requested content relative to requesting users, and whether a request occurs within a time period of rapid shift in use and types of content requested. A content recommendation including one or more content recommendations may be provided to users.
Type: Grant
Filed: January 29, 2013
Date of Patent: June 18, 2024
Assignee: Comcast Cable Communications, LLC
Inventor: Tom Brown
-
Patent number: 12007896
Abstract: Apparatuses, systems, and methods for configuring combined private and shared cache levels in a processor-based system. The processor-based system includes a processor that includes a plurality of processing cores each including execution circuits which are coupled to respective cache(s) and a configurable combined private and shared cache, and which may receive instructions and data on which to perform operations from the cache(s) and the combined private and shared cache. A shared cache portion of each configurable combined private and shared cache can be treated as an independently-assignable portion of the overall shared cache, which is effectively the shared cache portions of all of the processing cores. Each independently-assignable portion of the overall shared cache can be associated with a particular client running on the processor as an example. This approach can provide greater granularity of cache partitioning of a shared cache between particular clients running on a processor.
Type: Grant
Filed: June 7, 2022
Date of Patent: June 11, 2024
Assignee: Ampere Computing LLC
Inventors: Richard James Shannon, Stephan Jean Jourdan, Matthew Robert Erler, Jared Eric Bendt
-
Patent number: 11995033
Abstract: Apparatus and methods receive input descriptive of a retention policy; evaluate one or more datasets against the retention policy to determine one or more deletable data elements in the one or more datasets; and delete the one or more deletable data elements from a data store.
Type: Grant
Filed: September 16, 2020
Date of Patent: May 28, 2024
Assignee: Palantir Technologies Inc.
Inventors: Rahij Ramsharan, Alexis Daboville
-
Patent number: 11977738
Abstract: There is provided an apparatus, method and medium. The apparatus comprises a store buffer to store a plurality of store requests, where each of the plurality of store requests identifies a storage address and a data item to be transferred to storage beginning at the storage address, where the data item comprises a predetermined number of bytes. The apparatus is responsive to a memory access instruction indicating a store operation specifying storage of N data items, to determine an address allocation order of N consecutive store requests based on a copy direction hint indicative of whether the memory access instruction is one of a sequence of memory access instructions each identifying one of a sequence of sequentially decreasing addresses, and to allocate the N consecutive store requests to the store buffer in the address allocation order.
Type: Grant
Filed: September 6, 2022
Date of Patent: May 7, 2024
Assignee: Arm Limited
Inventors: Abhishek Raja, Yasuo Ishii
-
Patent number: 11893283
Abstract: Apparatuses and methods can be related to generating an asynchronous process topology in a memory device. The topology can be generated based on the results of a number of processes. The processes can be asynchronous given that the processing resources that implement the processes do not use a clock signal to generate the topology.
Type: Grant
Filed: June 27, 2022
Date of Patent: February 6, 2024
Assignee: Micron Technology, Inc.
Inventors: Glen E. Hush, Richard C. Murphy, Honglin Sun
-
Patent number: 11868692
Abstract: Address generators for use in verifying an integrated circuit hardware design for an n-way set associative cache. The address generator is configured to generate, from a reverse hashing algorithm matching the hashing algorithm used by the n-way set associative cache, a list of cache set addresses that comprises one or more addresses of the main memory corresponding to each of one or more target sets of the n-way set associative cache. The address generator receives requests for addresses of main memory from a driver to be used to generate stimuli for testing an instantiation of the integrated circuit hardware design for the n-way set associative cache. In response to receiving a request the address generator provides an address from the list of cache set addresses.
Type: Grant
Filed: April 2, 2021
Date of Patent: January 9, 2024
Assignee: Imagination Technologies Limited
Inventors: Anthony Wood, Philip Chambers
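The "reverse hashing" idea above — enumerating main-memory addresses that map to a chosen cache set — is easy to illustrate when the set-index hash is a simple modulo of the line address. The hash, set count, and line size below are assumptions for the sketch; a real verification environment would invert the design's actual hash:

```python
NUM_SETS = 64   # assumed number of sets in the cache under test
LINE_SIZE = 64  # assumed cache line size in bytes

def set_index(addr: int) -> int:
    # Assumed forward hash: the set is the line address modulo NUM_SETS.
    return (addr // LINE_SIZE) % NUM_SETS

def addresses_for_set(target_set: int, count: int) -> list[int]:
    # "Reverse hash": enumerate addresses guaranteed to land in target_set,
    # giving a driver a supply of stimuli that all collide in one set.
    return [(target_set + i * NUM_SETS) * LINE_SIZE for i in range(count)]
```

Every address the generator returns maps back to the requested set, so a test driver can deliberately force conflict misses in that set.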
-
Patent number: 11868221
Abstract: Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: John Kelley, Vanchinathan Venkataramani, Paul J. Moyer
-
Patent number: 11868264
Abstract: One embodiment provides circuitry coupled with cache memory and a memory interface, the circuitry to compress compute data at multiple cache line granularity, and a processing resource coupled with the memory interface and the cache memory. The processing resource is configured to perform a general-purpose compute operation on compute data associated with multiple cache lines of the cache memory. The circuitry is configured to compress the compute data before a write of the compute data via the memory interface to the memory bus, in association with a read of the compute data associated with the multiple cache lines via the memory interface, decompress the compute data, and provide the decompressed compute data to the processing resource.
Type: Grant
Filed: February 13, 2023
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
-
Patent number: 11841796
Abstract: Methods, systems, and devices for scratchpad memory in a cache are described. A device may operate a portion of a volatile memory in a cache mode having non-deterministic latency for satisfying requests from a host device. The device may monitor a register with an output pin that is associated with the portion and indicative of an operating mode of the portion. Based on or in response to monitoring the output pin, the device may determine whether to change the operating mode of the portion from the cache mode to a scratchpad mode having deterministic latency for satisfying requests from the host device.
Type: Grant
Filed: January 5, 2022
Date of Patent: December 12, 2023
Assignee: Micron Technology, Inc.
Inventors: Chinnakrishnan Ballapuram, Saira Samar Malik, Taeksang Song
-
Patent number: 11841798
Abstract: Circuitry comprises processing circuitry to access a hierarchy of at least two levels of cache memory storage; memory circuitry comprising plural storage elements, at least some of the storage elements being selectively operable as cache memory storage in respective different cache functions; and control circuitry to allocate storage elements of the memory circuitry for operation according to a given cache function.
Type: Grant
Filed: August 9, 2021
Date of Patent: December 12, 2023
Assignee: Arm Limited
Inventor: Daren Croxford
-
Patent number: 11831557
Abstract: A system and method for soft locking for a networking device in a network, such as a network-on-chip (NoC). Once a soft lock is established, the port and packet are given transmitting priority so long as the port has an available packet or packet parts that can make forward progress in the network. When the soft lock port's packet parts are not available, the networking device may choose another port and/or another packet. Any arbitration scheme may be used. Once the packet (or all the packet parts) has completed transmission, the soft lock is released.
Type: Grant
Filed: June 8, 2022
Date of Patent: November 28, 2023
Assignee: ARTERIS, INC.
Inventors: John Coddington, Benoit de Lescure, Syed Ijlal Ali Shah, Sanjay Despande
-
Patent number: 11768772
Abstract: In some examples, a system includes a processing entity and a memory to store data arranged in a plurality of bins associated with respective key values of a key. The system includes a cache to store cached data elements for respective accumulators that are updatable to represent occurrences of the respective key values of the key, where each accumulator corresponds to a different bin of the plurality of bins, and each cached data element has a range that is less than a range of a corresponding bin of the plurality of bins. Responsive to a value of a given cached data element as updated by a given accumulator satisfying a criterion, the processing entity is to cause an aggregation of the value of the given cached data element with a bin value in a respective bin.
Type: Grant
Filed: December 15, 2021
Date of Patent: September 26, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Ryan D. Menhusen, Darel Neal Emmot
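The accumulator-in-cache scheme above amounts to keeping a narrow counter per bin in fast storage and flushing it into the wide bin value when it nears its range limit. The class below is a minimal sketch under that reading; the flush threshold and list-backed "memory" are assumptions:

```python
class BinnedCounter:
    """Narrow cached accumulators that aggregate into wide memory-resident bins."""

    def __init__(self, num_bins: int, cache_limit: int):
        self.bins = [0] * num_bins     # wide bin values, conceptually in memory
        self.cache = [0] * num_bins    # narrow per-bin cached accumulators
        self.cache_limit = cache_limit # range of a cached element (< bin range)

    def record(self, key: int) -> None:
        self.cache[key] += 1
        if self.cache[key] >= self.cache_limit:
            # Criterion satisfied: aggregate the cached value into the bin
            # and reset the small accumulator.
            self.bins[key] += self.cache[key]
            self.cache[key] = 0

    def total(self, key: int) -> int:
        return self.bins[key] + self.cache[key]
```

Recording a key ten times with a flush threshold of four leaves eight counts aggregated in the bin and two still pending in the cached accumulator.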
-
Patent number: 11726920
Abstract: A device includes a cache memory and a memory controller coupled to the cache memory. The memory controller is configured to receive a first read request from a cache controller over an interconnect, the first read request comprising first tag data identifying a first cache line in the cache memory, and determine that the first read request comprises a tag read request. The memory controller is further configured to read second tag data corresponding to the tag read request from the cache memory, compare the second tag data read from the cache memory to the first tag data received from the cache controller with the first read request, and if the second tag data matches the first tag data, initiate an action with respect to the first cache line in the cache memory.
Type: Grant
Filed: June 26, 2019
Date of Patent: August 15, 2023
Assignee: Rambus Inc.
Inventors: Michael Miller, Dennis Doidge, Collins Williams
-
Patent number: 11726699
Abstract: One embodiment provides a system which facilitates data management. The system receives, by a storage device via read requests from multiple streams, a first plurality of logical block addresses (LBAs) and corresponding stream identifiers. The system assigns a respective LBA to a first queue of a plurality of queues based on the stream identifier corresponding to the LBA. Responsive to determining that a second plurality of LBAs in the first queue are of a sequentially similar pattern: the system retrieves, from a non-volatile memory of the storage device, data associated with the second plurality of LBAs; and the system stores the retrieved data and the second plurality of LBAs in a volatile memory of the storage device while bypassing data-processing operations.
Type: Grant
Filed: March 30, 2021
Date of Patent: August 15, 2023
Assignee: ALIBABA SINGAPORE HOLDING PRIVATE LIMITED
Inventor: Shu Li
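The per-stream queueing and sequential-pattern detection above can be sketched as follows. Treating "sequentially similar" as strictly consecutive LBAs, and using a fixed detection window, are both assumptions for illustration:

```python
from collections import defaultdict

def is_sequential(lbas: list[int]) -> bool:
    # Assumed reading of "sequentially similar": strictly consecutive LBAs.
    return all(b == a + 1 for a, b in zip(lbas, lbas[1:]))

def route_and_prefetch(requests: list[tuple[str, int]], window: int = 4):
    """Assign each LBA to its stream's queue; when the last `window` LBAs of a
    queue are sequential, schedule a read-ahead of the next LBA."""
    queues = defaultdict(list)
    prefetch = []
    for stream_id, lba in requests:
        queues[stream_id].append(lba)
        tail = queues[stream_id][-window:]
        if len(tail) == window and is_sequential(tail):
            prefetch.append((stream_id, tail[-1] + 1))
    return queues, prefetch
```

Interleaved streams are separated by their identifiers, so the sequential run in one stream is detected even though the combined request order looks random.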
-
Patent number: 11715541
Abstract: A method includes associating each block of a plurality of blocks of a memory device with a corresponding frequency access group of a plurality of frequency access groups based on corresponding access frequencies, and performing scan operations on blocks of each of the plurality of frequency access groups using a scan frequency that is different from scan frequencies of other frequency access groups. A scan operation performed on a frequency access group with a higher access frequency uses a higher scan frequency than a scan operation performed on a frequency access group with a lower access frequency.
Type: Grant
Filed: July 18, 2022
Date of Patent: August 1, 2023
Assignee: Micron Technology, Inc.
Inventors: Renato C. Padilla, Sampath K. Ratnam, Christopher M. Smitchger, Vamsi Pavan Rayaprolu, Gary F. Besinga, Michael G. Miller, Tawalin Opastrakoon
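The grouping step above can be illustrated with a simple threshold-based split. The three group names, the thresholds, and the scan intervals are all hypothetical values, not figures from the patent:

```python
def assign_groups(access_counts: dict[str, int],
                  thresholds: tuple[int, int] = (100, 10)) -> dict[str, list[str]]:
    # Hypothetical split of blocks into hot / warm / cold frequency access
    # groups by raw access count.
    groups = {"hot": [], "warm": [], "cold": []}
    for block, count in access_counts.items():
        if count >= thresholds[0]:
            groups["hot"].append(block)
        elif count >= thresholds[1]:
            groups["warm"].append(block)
        else:
            groups["cold"].append(block)
    return groups

# Hotter groups are scanned more often (illustrative intervals in hours).
SCAN_INTERVAL_HOURS = {"hot": 1, "warm": 12, "cold": 48}
```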
-
Patent number: 11611348
Abstract: Techniques are provided for implementing a file system format for persistent memory. A node, with persistent memory, receives an operation associated with a file identifier and file system instance information. A list of file system info objects are evaluated to identify a file system info object matching the file system instance information. An inofile, identified by the file system info object as being associated with inodes of files within an instance of the file system targeted by the operation, is traversed to identify an inode matching the file identifier. If the inode has an indicator that the file is tiered into the persistent memory, then the inode is utilized to facilitate execution of the operation upon the persistent memory. Otherwise, the operation is routed to a storage file system tier for execution by a storage file system upon storage associated with the node.
Type: Grant
Filed: July 1, 2021
Date of Patent: March 21, 2023
Assignee: NetApp, Inc.
Inventors: Ram Kesavan, Matthew Fontaine Curtis-Maury, Abdul Basit, Vinay Devadas, Ananthan Subramanian, Mark Smith
-
Patent number: 11593269
Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: August 12, 2021
Date of Patent: February 28, 2023
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
-
Patent number: 11586548
Abstract: In an example, an apparatus comprises a plurality of execution units, and a cache memory communicatively coupled to the plurality of execution units, wherein the cache memory is structured into a plurality of sectors, wherein each sector in the plurality of sectors comprises at least two cache lines. Other embodiments are also disclosed and claimed.
Type: Grant
Filed: March 3, 2021
Date of Patent: February 21, 2023
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
-
Patent number: 11544093
Abstract: Examples herein relate to checkpoint replication and copying of updated checkpoint data. For example, a memory controller coupled to a memory can receive a write request with an associated address to write or update checkpoint data and track updates to checkpoint data based on at least two levels of memory region sizes. A first level is associated with a larger memory region size than a memory region size associated with the second level. In some examples, the first level is a cache-line memory region size and the second level is a page memory region size. Updates to the checkpoint data can be tracked at the second level unless an update was previously tracked at the first level. Reduced amounts of updated checkpoint data can be transmitted during a checkpoint replication by using multiple region size trackers.
Type: Grant
Filed: September 27, 2019
Date of Patent: January 3, 2023
Assignee: Intel Corporation
Inventors: Zhe Wang, Andrew V. Anderson, Alaa R. Alameldeen, Andrew M. Rudoff
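The multi-granularity tracking above can be sketched with two dirty sets, one at cache-line and one at page granularity. The sizes, the explicit `level` argument, and the byte-count estimate are assumptions made for this illustration only:

```python
LINE_SIZE = 64    # assumed cache-line region size in bytes
PAGE_SIZE = 4096  # assumed page region size in bytes

class CheckpointTracker:
    """Track dirty checkpoint regions at two granularities so that replication
    can ship cache lines instead of whole pages where possible."""

    def __init__(self):
        self.dirty_lines = set()  # fine-grained (cache-line) tracker
        self.dirty_pages = set()  # coarse-grained (page) tracker

    def record_write(self, addr: int, level: str = "page") -> None:
        line, page = addr // LINE_SIZE, addr // PAGE_SIZE
        if level == "line" or line in self.dirty_lines:
            # Keep tracking at the finer granularity once a region is there.
            self.dirty_lines.add(line)
        else:
            self.dirty_pages.add(page)

    def bytes_to_replicate(self) -> int:
        # Line-level entries replicate far less data than whole pages.
        return len(self.dirty_lines) * LINE_SIZE + len(self.dirty_pages) * PAGE_SIZE
```

One line-tracked write plus one page-tracked write replicates 64 + 4096 bytes rather than two full pages.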
-
Patent number: 11513996
Abstract: An index associates fingerprints of file segments to container numbers of containers within which the file segments are stored. At a start of migration, a boundary is created identifying a current container number. At least a subset of file segments at a source storage tier are packed into a new container to be written to a destination storage tier. A new container number is generated for the new container. The index is updated to associate fingerprints of the at least subset of file segments to the new container number. A request is received to read a file segment. The index is queried with a fingerprint of the file segment to determine whether the request should be directed to the source or destination storage tier based on a container number of a container within which the file segment is stored.
Type: Grant
Filed: July 14, 2021
Date of Patent: November 29, 2022
Assignee: EMC IP Holding Company LLC
Inventors: Neeraj Bhutani, Ramprasad Chinthekindi, Nitin Madan, Srikanth Srinivasan
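The boundary trick above can be modeled directly: any container numbered past the migration-start boundary must live on the destination tier. The dictionary index and string tier labels are assumptions for this sketch:

```python
class MigratingIndex:
    """Fingerprint -> container index with a migration boundary that routes
    reads to the source or destination tier."""

    def __init__(self, index: dict[str, int], boundary: int):
        self.index = index        # fingerprint -> container number
        self.boundary = boundary  # current container number at migration start

    def migrate(self, fingerprints: list[str], new_container: int) -> None:
        # Segments packed into a new (post-boundary) container now resolve
        # to the destination tier via the updated index.
        for fp in fingerprints:
            self.index[fp] = new_container

    def tier_for(self, fingerprint: str) -> str:
        return ("destination" if self.index[fingerprint] > self.boundary
                else "source")
```

After migrating one segment into container 11 (past a boundary of 10), reads of that segment route to the destination tier while unmigrated segments still route to the source.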
-
Patent number: 11481143
Abstract: Metadata of extent-based storage systems can be managed. For example, a computing device can store a first metadata object and a second metadata object in a first memory device. The first metadata object can specify locations of a first set of extents corresponding to a first data unit stored in a second memory device. The second metadata object can specify locations of a second set of extents corresponding to a second data unit stored in the second memory device. The computing device can determine that a first size of the first metadata object is smaller than a second size of the second metadata object. The computing device can remove the second metadata object from the first memory device based on determining that the first size is less than the second size.
Type: Grant
Filed: November 10, 2020
Date of Patent: October 25, 2022
Assignee: RED HAT, INC.
Inventors: Gabriel Zvi BenHanokh, Joshua Durgin
-
Patent number: 11474738
Abstract: Exemplary methods, apparatuses, and systems include receiving a plurality of read operations directed to a portion of memory accessed by a memory channel. The plurality of read operations are divided into a current set of a sequence of read operations and one or more other sets of sequences of read operations. An aggressor read operation is selected from the current set. A supplemental memory location is selected independently of aggressors and victims in the current set of read operations. A first data integrity scan is performed on a victim of the aggressor read operation and a second data integrity scan is performed on the supplemental memory location.
Type: Grant
Filed: April 15, 2021
Date of Patent: October 18, 2022
Assignee: MICRON TECHNOLOGY, INC.
Inventors: Saeed Sharifi Tehrani, Ashutosh Malshe, Kishore Kumar Muchherla, Sivagnanam Parthasarathy, Vamsi Pavan Rayaprolu
-
Patent number: 11455122
Abstract: Provided is a storage system in which a compression rate of randomly written data can be increased and access performance can be improved. A storage controller 22A includes a cache area 203A configured to store data to be read out of or written into a drive 29. The controller 22A groups a plurality of pieces of data stored in the cache area 203A and input into the drive 29 based on a similarity degree among the pieces of data, selects a group, compresses data of the selected group in group units, and stores the compressed data in the drive 29.
Type: Grant
Filed: August 14, 2020
Date of Patent: September 27, 2022
Assignee: HITACHI, LTD.
Inventors: Nagamasa Mizushima, Tomohiro Yoshihara, Kentaro Shimada
-
Patent number: 11405358
Abstract: The application includes a data processing device and method. In an embodiment, the data processing device includes a data collection unit, configured to collect data transmitted in a network, and divide the collected data, according to a predetermined feature, into known attack data and unknown attack data. The data processing device further includes a data conversion unit, configured to replace, according to a mapping database, at least a portion of the content included in the unknown attack data with corresponding identification codes. Therefore, the size of data transmitted in the network can be reduced.
Type: Grant
Filed: March 1, 2017
Date of Patent: August 2, 2022
Assignee: SIEMENS AKTIENGESELLSCHAFT
Inventors: Dai Fei Guo, Xi Feng Liu
-
Patent number: 11372779
Abstract: A memory page management method is provided. The method includes receiving a state-change notification corresponding to a state-change page, and grouping the state-change page from a list to which the state-change page belongs into a keep list or an adaptive LRU list of an adaptive adjusting list according to the state-change notification; receiving an access command from a CPU to perform an access operation to target page data corresponding to a target page; determining that a cache hit state is a hit state or a miss state according to a target NVM page address corresponding to the target page, and grouping the target page into the adaptive LRU list according to the cache hit state; and searching the adaptive page list according to the target NVM page address to obtain a target DRAM page address to complete the access command corresponding to the target page data.
Type: Grant
Filed: May 30, 2019
Date of Patent: June 28, 2022
Assignees: Industrial Technology Research Institute, National Taiwan University
Inventors: Che-Wei Tsao, Tei-Wei Kuo, Yuan-Hao Chang, Tzu-Chieh Shen, Shau-Yin Tseng
-
Patent number: 11327843
Abstract: Provided are an apparatus and method for managing data storage. A first log structured array stores data in a storage device. A second log structured array in the storage device stores metadata for the data in the first log structured array, wherein the second log structured array storing the metadata for the first log structured data storage system is nested within the first log structured array, and wherein the first and second log structured arrays comprise separate instances of log structured arrays. Address space is allocated in the second log structured array for metadata when the allocation of address space is required for metadata for data stored in the first log structured array.
Type: Grant
Filed: August 5, 2019
Date of Patent: May 10, 2022
Assignee: International Business Machines Corporation
Inventors: Henry Esmond Butterworth, Ian David Judd
-
Patent number: 11301395Abstract: A method for characterizing workload sequentiality for cache policy optimization includes maintaining an IO trace data structure having a rolling window of IO traces describing access operations on addresses of a storage volume. A page count data structure is maintained that includes a list of all of the addresses of the storage volume referenced by the IO traces in the IO trace data structure. A list of sequences data structure is maintained that contains a list of all sequences of the addresses of the storage volume that were accessed by the IO traces in the IO trace data structure. A sequence lengths data structure is used to correlate each sequence in the list of sequences data structure with a length of the sequence, and a histogram data structure is used to correlate sequence lengths and a number of how many of sequences of each length are maintained in the sequence lengths data structure.Type: GrantFiled: November 8, 2019Date of Patent: April 12, 2022Assignee: Dell Products, L.P.Inventors: Hugo de Oliveira Barbalho, Vinícius Michel Gottin, Rômulo Teixeira de Abreu Pinho
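The core idea, a histogram of sequence lengths derived from a rolling window of IO traces, can be sketched in a few lines. This collapses the patent's several data structures into one pass and is illustrative only; the function name and `window` parameter are assumptions of the sketch.

```python
from collections import Counter

def sequence_histogram(addresses, window=None):
    """Sketch of characterizing workload sequentiality: find runs of
    consecutive addresses in an IO trace and histogram their lengths."""
    trace = list(addresses) if window is None else list(addresses)[-window:]
    histogram = Counter()
    run = 1
    for prev, cur in zip(trace, trace[1:]):
        if cur == prev + 1:      # address continues the current sequence
            run += 1
        else:                    # sequence broken: record its length
            histogram[run] += 1
            run = 1
    if trace:
        histogram[run] += 1      # record the final sequence
    return dict(histogram)
```

A workload of `[1, 2, 3, 10, 11, 50]` yields one sequence of length 3, one of length 2, and one of length 1, the kind of shape a cache policy optimizer could inspect to decide how aggressively to prefetch.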
-
Patent number: 11294829Abstract: A method configures a cache to implement a LRU management technique. The cache has N entries divided into B buckets. Each bucket has a number of entries equal to P entries*M vectors, wherein N=B*P*M. Any P entry within any M vector is ordered using an in-vector LRU ordering process. Any entry within any bucket is ordered in LRU within the vectors and buckets. The LRU management technique moves a found entry to a first position within a same M vector, responsive to a lookup for a specified key, and permutes the found entry and a last entry in a previous M vector, responsive to the found entry already being in the first position within a vector and the same one of the M vectors not being a first vector in the bucket in the moving step.Type: GrantFiled: April 21, 2020Date of Patent: April 5, 2022Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventor: Hiroshi Inoue
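The two movement rules above (move a hit to the front of its own vector; if already at the front of a non-first vector, permute with the last entry of the previous vector) can be modeled for a single bucket. This is a behavioral sketch with invented names, not the patented circuit- or SIMD-level design:

```python
class BucketVectorLRU:
    """Sketch of LRU ordering split across M vectors of up to P entries in
    one bucket. A hit moves the entry to the front of its vector; a hit
    already at the front of a non-first vector swaps with the last entry
    of the previous vector, so recency propagates gradually forward."""

    def __init__(self, m_vectors, p_entries):
        self.vectors = [[] for _ in range(m_vectors)]
        self.p = p_entries

    def lookup(self, key):
        for vi, vec in enumerate(self.vectors):
            if key in vec:
                pos = vec.index(key)
                if pos > 0:
                    vec.insert(0, vec.pop(pos))            # move to front of vector
                elif vi > 0:
                    prev = self.vectors[vi - 1]
                    vec[0], prev[-1] = prev[-1], vec[0]    # permute across vectors
                return True
        return False

    def insert(self, key):
        # New entries enter at the front; overflow cascades to later
        # vectors, and the bucket's overall LRU entry falls off the end.
        carry = key
        for vec in self.vectors:
            vec.insert(0, carry)
            if len(vec) <= self.p:
                return
            carry = vec.pop()
```

The appeal of the scheme is that a hit touches at most two small vectors rather than reordering the whole bucket, which maps well onto fixed-width vector registers.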
-
Patent number: 11287274Abstract: Technologies are provided for memory management for route optimization algorithms. An example method can include determining a cost surface of a route-based project associated with an area, the cost surface including nodes comprising costs associated with respective locations within the area; determining whether a cache has data of each neighbor of a current node being processed to determine a least-cost path from a start node to an end node; obtaining, from the memory cache, the data of each neighbor; for each particular neighbor that is not a boundary node in the cost surface, determining a projected cost of the particular neighbor based on an accumulated cost of the particular neighbor and an additional cost estimated based on a distance between the particular neighbor and the end node; and based on the projected cost of each particular neighbor, determining the least-cost path from the start node to the end node.Type: GrantFiled: July 20, 2021Date of Patent: March 29, 2022Assignee: IMERCATUS HOLDINGS LLCInventors: Jose Mejia Robles, Scott L. Gowdish
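The projected-cost rule described above (accumulated cost plus a distance-based estimate to the end node) is the shape of an A*-style search over a cost surface. A minimal sketch on a 2D grid, leaving out the patent's memory-cache management and treating out-of-bounds cells as boundary nodes:

```python
import heapq
import math

def least_cost_path(cost, start, end):
    """A*-style least-cost path over a grid cost surface: projected cost =
    accumulated cost + Euclidean distance estimate to the end node.
    `cost` is a 2D list of per-cell traversal costs (start cell is free)."""
    rows, cols = len(cost), len(cost[0])

    def estimate(node):
        return math.hypot(node[0] - end[0], node[1] - end[1])

    accumulated = {start: 0.0}
    came_from = {}
    frontier = [(estimate(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == end:
            path = [node]
            while node in came_from:            # walk back to the start node
                node = came_from[node]
                path.append(node)
            return path[::-1], accumulated[end]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue                        # boundary node: no neighbor here
            new_cost = accumulated[node] + cost[nr][nc]
            if (nr, nc) not in accumulated or new_cost < accumulated[(nr, nc)]:
                accumulated[(nr, nc)] = new_cost
                came_from[(nr, nc)] = node
                heapq.heappush(frontier, (new_cost + estimate((nr, nc)), (nr, nc)))
    return None, math.inf
```

With per-cell costs of at least 1, the Euclidean estimate never overstates the remaining cost, so the first time the end node is popped its accumulated cost is minimal.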
-
Patent number: 11221957Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes receiving a request for an Effective Address to Real Address Translation (ERAT); determining whether there is a permissions miss; changing, in response to determining there is a permission miss, permissions of an ERAT cache entry; and providing a Real Address translation. The method, computer program product, and computer system may optionally include providing a promote checkout request to a memory management unit (MMU).Type: GrantFiled: August 31, 2018Date of Patent: January 11, 2022Assignee: International Business Machines CorporationInventors: Bartholomew Blaner, Jay G. Heaslip, Benjamin Herrenschmidt, Robert D. Herzl, Jody Joyner, Jon K. Kriegel, Charles D. Wait
-
Patent number: 11169919Abstract: A method for improving cache hit ratios for selected volumes within a storage system includes monitoring I/O to multiple volumes residing on a storage system. The method determines, from the I/O, which particular volumes of the multiple volumes would benefit the most if provided favored status in cache of the storage system, where the favored status provides increased residency time in the cache compared to volumes not having the favored status. The method determines, from the I/O, an amount by which the increased residency time should exceed a residency time of volumes not having the favored status. The method generates an indicator that is representative of the amount and transmits this indicator to the storage system. The storage system, in turn, provides increased residency time to the particular volumes in accordance with the favored status and indicator. A corresponding system and computer program product are also disclosed.Type: GrantFiled: May 12, 2019Date of Patent: November 9, 2021Assignee: International Business Machines CorporationInventors: Lokesh M. Gupta, Beth A. Peterson, Kyler A. Anderson, Kevin J. Ash
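The effect of the transmitted indicator, a favored volume's entries staying resident longer than unfavored ones, can be sketched as a simple eviction predicate. The boost-multiplier form is an assumption of this sketch; the patent only says the indicator represents the amount by which residency should be increased:

```python
def should_evict(entry, base_residency, favored_boost):
    """Sketch: an entry is eligible for eviction only once its cache age
    exceeds a minimum residency time; entries from favored volumes get
    that minimum scaled up by an indicator-derived boost factor."""
    minimum = base_residency * (favored_boost if entry['favored'] else 1.0)
    return entry['age'] >= minimum
```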
-
Patent number: 10970218Abstract: The present disclosure includes apparatuses and methods for compute enabled cache. An example apparatus comprises a compute component, a memory and a controller coupled to the memory. The controller configured to operate on a block select and a subrow select as metadata to a cache line to control placement of the cache line in the memory to allow for a compute enabled cache.Type: GrantFiled: August 5, 2019Date of Patent: April 6, 2021Assignee: Micron Technology, Inc.Inventor: Richard C. Murphy
-
Patent number: 10949360Abstract: The information processing apparatus is provided with a plurality of arithmetic devices, a memory unit shared by the plurality of arithmetic devices, and a cache device. The cache device divides the memory space of the memory unit into a plurality of regions, and includes a plurality of caches in the same hierarchy, each of which is associated with a respective one of the plurality of regions. Each cache includes a cache core configured to exclusively store data from a respective one of the plurality of regions.Type: GrantFiled: September 30, 2016Date of Patent: March 16, 2021Assignee: MITSUBISHI ELECTRIC CORPORATIONInventor: Seidai Takeda
-
Patent number: 10942866Abstract: Disclosed are systems and methods for using a priority cache to store frequently used data items by an application. The priority cache may include multiple cache regions. Each of the cache regions may be associated with a different priority level. When a data item is to be stored in the priority cache, the application may review the context of the data item to determine if the data item may be used again in the near future. Based on that determination, the application may be configured to assign a priority level to the data item. The data item may then be stored in the appropriate cache region according to its assigned priority level.Type: GrantFiled: March 21, 2014Date of Patent: March 9, 2021Assignee: EMC IP Holding Company LLCInventor: Dennis Holmes
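A cache divided into per-priority regions, with the application assigning each item a level, can be sketched as a list of LRU maps. The eviction rule (take the LRU victim from the lowest-priority nonempty region) is an illustrative choice, not a detail stated in the abstract:

```python
from collections import OrderedDict

class PriorityCache:
    """Sketch of a priority cache: one LRU region per priority level.
    When over capacity, the victim comes from the lowest-priority region
    that holds anything, so high-priority items survive longest."""

    def __init__(self, capacity, levels=3):
        self.capacity = capacity
        self.regions = [OrderedDict() for _ in range(levels)]  # 0 = lowest

    def put(self, key, value, priority):
        region = self.regions[priority]
        region[key] = value
        region.move_to_end(key)
        if sum(len(r) for r in self.regions) > self.capacity:
            for r in self.regions:          # evict from lowest nonempty region
                if r:
                    r.popitem(last=False)
                    break

    def get(self, key):
        for r in self.regions:
            if key in r:
                r.move_to_end(key)          # refresh recency within its region
                return r[key]
        return None
```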
-
Patent number: 10915444Abstract: A processing device in a memory system determines whether a first data block of a plurality of data blocks on the memory component satisfies a first threshold criterion pertaining to a first number of the plurality of data blocks having a lower amount of valid data than a remainder of the plurality of data blocks. Responsive to the first data block satisfying the first threshold criterion, the processing device determines whether the first data block satisfies a second threshold criterion pertaining to a second number of the plurality of data blocks having been written to more recently than the remainder of the plurality of data blocks. Responsive to the first data block satisfying the second threshold criterion, the processing device determines whether a rate of change of an amount of valid data on the first data block satisfies a third threshold criterion.Type: GrantFiled: December 27, 2018Date of Patent: February 9, 2021Assignee: MICRON TECHNOLOGY, INC.Inventors: Kishore Kumar Muchherla, Sampath K. Ratnam, Ashutosh Malshe, Peter Sean Feeley
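One plausible reading of the three staged criteria (low valid data, not recently written, and a slowly changing valid-data count) is a garbage-collection candidate filter like the following. The threshold fractions, field names, and the interpretation of the second criterion as excluding hot blocks are all assumptions of this sketch:

```python
def pick_gc_candidate(blocks, valid_frac, age_frac, rate_limit):
    """Sketch of staged block selection for garbage collection. A block
    qualifies if (1) it is among the `valid_frac` of blocks with the
    least valid data, (2) it is NOT among the `age_frac` written most
    recently, and (3) its valid-data count is changing slowly.
    Each block: dict with 'id', 'valid', 'write_seq', 'valid_rate'."""
    n = len(blocks)
    by_valid = sorted(blocks, key=lambda b: b['valid'])
    low_valid = {b['id'] for b in by_valid[:max(1, int(n * valid_frac))]}
    by_recency = sorted(blocks, key=lambda b: b['write_seq'], reverse=True)
    recently_written = {b['id'] for b in by_recency[:max(1, int(n * age_frac))]}
    for b in by_valid:  # prefer the block with the least valid data
        if (b['id'] in low_valid
                and b['id'] not in recently_written
                and abs(b['valid_rate']) <= rate_limit):
            return b['id']
    return None
```

The staging matters: reclaiming a recently written block whose valid count is still dropping wastes write cycles copying data that would soon be invalidated anyway.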
-
Patent number: 10901906Abstract: This disclosure provides a method, a computing system and a computer program product for allocating write data in a storage system. The storage system comprises a Non-Volatile Write Cache (NVWC) and a backend storage subsystem, and the write data comprises first data whose addresses are not in the NVWC. The method includes checking fullness of the NVWC, and determining at least one of a write-back mechanism or a write-through mechanism as a write mode for the first data based on the checked fullness.Type: GrantFiled: August 7, 2018Date of Patent: January 26, 2021Assignee: International Business Machines CorporationInventors: Gang Lyu, Hui Zhang
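The core decision, choosing write-back or write-through for new data based on NVWC fullness, can be sketched in one function. The high-watermark value and name are illustrative; the patent abstract does not specify the threshold:

```python
def choose_write_mode(nvwc_used, nvwc_capacity, high_watermark=0.8):
    """Sketch: below an (assumed) high watermark, absorb new writes in
    the non-volatile write cache (write-back); above it, send them
    straight to the backend (write-through) so the cache can drain."""
    fullness = nvwc_used / nvwc_capacity
    return 'write-back' if fullness < high_watermark else 'write-through'
```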
-
Patent number: 10901915Abstract: Systems, apparatuses, and methods may provide for an eventually-consistent distributed caching mechanism for database systems. As an example, the system may include a recently updated objects (RUO) manager, which may store object identifiers of recently updated objects and RUO time-to-live values of the object identifiers. As servers read objects from the cache or write objects into the cache, the servers may also check the RUO manager to determine if the object has been updated recently enough to be at risk of being stale or outdated. If so, the servers may invalidate the object stored at the cache as it may be stale, which results in eventual consistency across the distributed database system.Type: GrantFiled: June 28, 2018Date of Patent: January 26, 2021Assignee: Comcast Cable Communications, LLCInventors: Christopher Orogvany, Mark Perry, Bradley W. Jacobs
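The RUO manager's role, tracking which object IDs were updated recently enough that a cached copy may be stale, can be sketched as a TTL map. Class and method names here are invented for illustration:

```python
import time

class RUOManager:
    """Sketch of a recently-updated-objects tracker: each update records
    a timestamp, and an object read within its TTL window is treated as
    at risk of being stale, so the server should invalidate its cached copy."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock eases testing
        self.updated_at = {}

    def record_update(self, object_id):
        self.updated_at[object_id] = self.clock()

    def is_risky(self, object_id):
        """True if the object was updated recently enough to be stale."""
        ts = self.updated_at.get(object_id)
        return ts is not None and (self.clock() - ts) < self.ttl
```

Invalidating only objects inside the TTL window is what makes the scheme eventually consistent: once the window passes, every replica converges on serving the cached copy again.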
-
Patent number: 10860257Abstract: An information processing apparatus includes a RAM; a non-volatile memory storing setting information in which a compression method is set for each of a plurality of RAM disks, the setting information including a plurality of compression methods; and circuitry. The circuitry is configured to create each of the plurality of RAM disks with the compression method mounted, in the RAM, according to the setting information; request writing and reading of data from an application; write the data into a corresponding RAM disk of the plurality of RAM disks corresponding to the application, in response to a writing request of the data from the application; and compress the data in the compression method mounted on the corresponding RAM disk.Type: GrantFiled: September 25, 2018Date of Patent: December 8, 2020Assignee: Ricoh Company, Ltd.Inventors: Daiki Sakurada, Hideaki Yamamoto, Hiroyuki Ishihara, Tomoe Kitaguchi, Ryuta Aoki
-
Patent number: 10831661Abstract: Processing simultaneous data requests regardless of active request in the same addressable index of a cache. In response to the cache miss in the given congruence class, if the number of other compartments in the given congruence class that have an active operation is less than a predetermined threshold, setting a Do Not Cast Out (DNCO) pending indication for each of the compartments that have an active operation in order to block access to each of the other compartments that have active operations and, if the number of other compartments in the given congruence class that have an active operation is not less than a predetermined threshold, blocking another cache miss from occurring in the compartments of the given congruence class by setting a congruence class block pending indication for the given congruence class in order to block access to each of the other compartments of the given congruence class.Type: GrantFiled: April 10, 2019Date of Patent: November 10, 2020Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Ekaterina M. Ambroladze, Tim Bronson, Robert J. Sonnelitter, III, Deanna P. D. Berger, Chad G. Wilson, Kenneth Douglas Klapproth, Arthur O'Neill, Michael A. Blake, Guy G. Tracy
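The threshold-gated choice between the two blocking modes can be sketched as a pure decision function. The representation (a list of per-compartment activity flags and a dict result) is invented for illustration; the real mechanism is hardware pending-indication bits:

```python
def block_policy(active_compartments, threshold):
    """Sketch of the two blocking modes on a cache miss in a congruence
    class: with few active compartments, set per-compartment DNCO
    (Do Not Cast Out) pending bits on just those compartments; with
    many, set a single block-pending bit for the whole class."""
    active = [i for i, is_active in enumerate(active_compartments) if is_active]
    if len(active) < threshold:
        return {'dnco_pending': set(active), 'class_blocked': False}
    return {'dnco_pending': set(), 'class_blocked': True}
```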
-
Patent number: 10713081Abstract: Secure and efficient memory sharing for guests is disclosed. For example, a host has a host memory storing first and second guests whose memory access is managed by a hypervisor. A request to map an IOVA of the first guest to the second guest is received, where the IOVA is mapped to a GPA of the first guest, which is mapped to an HPA of the host memory. The HPA is mapped to a second GPA of the second guest, where the hypervisor controls access permissions of the HPA. The second GPA is mapped in a second page table of the second guest to a GVA of the second guest, where a supervisor of the second guest controls access permissions of the second GPA. The hypervisor enables a program executing on the second guest to access contents of the HPA based on the access permissions of the HPA.Type: GrantFiled: August 30, 2018Date of Patent: July 14, 2020Assignee: RED HAT, INC.Inventors: Michael Tsirkin, Stefan Hajnoczi