Cache Flushing Patents (Class 711/135)
  • Patent number: 11544199
    Abstract: A cache memory is disclosed. The cache memory includes an instruction memory portion having a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a tag memory portion having a plurality of tag memory locations configured to store tag data encoding a plurality of RAM memory address ranges the CPU instructions are stored in. The instruction memory portion includes a single memory circuit having an instruction memory array and a plurality of instruction peripheral circuits communicatively connected with the instruction memory array. The tag memory portion includes a plurality of tag memory circuits, where each of the tag memory circuits includes a tag memory array, and a plurality of tag peripheral circuits communicatively connected with the tag memory array.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 3, 2023
    Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
    Inventors: Bassam S. Kamand, Waleed Younis, Ramon Zuniga, Jagadish Rongali
  • Patent number: 11513737
    Abstract: Data overflows can be prevented in edge computing systems. For example, an edge computing system (ECS) can include a memory buffer for storing incoming data from client devices. The ECS can also include a local storage device. The ECS can determine that an amount of available storage space in the local storage device is less than a predefined threshold amount. Based on determining that the amount of available storage space is less than the predefined threshold amount, the ECS can prevent the incoming data from being retrieved from the memory buffer. And based on determining that the amount of available storage space is greater than or equal to the predefined threshold amount, the ECS can retrieve the incoming data from the memory buffer and store the incoming data in the local storage device. This may prevent data overflows associated with the local storage device.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: November 29, 2022
    Assignee: RED HAT, INC.
    Inventors: Yehuda Sadeh-Weinraub, Huamin Chen, Ricardo Noriega De Soto
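    A minimal Python sketch of the threshold-gated buffering described in patent 11513737 above. The class, the capacity/threshold numbers, and the drain() interface are illustrative assumptions, not taken from the patent.

      from collections import deque

      class EdgeBuffer:
          """Toy model: incoming client data lands in a memory buffer and is only
          drained to the local storage device while enough free space remains."""

          def __init__(self, storage_capacity, threshold):
              self.buffer = deque()            # memory buffer for incoming data
              self.storage = []                # stand-in for the local storage device
              self.storage_capacity = storage_capacity
              self.threshold = threshold       # predefined free-space threshold

          def ingest(self, record):
              self.buffer.append(record)       # incoming data always enters the buffer

          def available_space(self):
              return self.storage_capacity - sum(len(r) for r in self.storage)

          def drain(self):
              # Retrieve from the buffer only while available space is at or above the
              # threshold; otherwise leave data buffered to prevent an overflow.
              while self.buffer and self.available_space() >= self.threshold:
                  self.storage.append(self.buffer.popleft())

      buf = EdgeBuffer(storage_capacity=1024, threshold=800)
      buf.ingest(b"x" * 300)
      buf.ingest(b"y" * 600)
      buf.drain()
      print(len(buf.storage), len(buf.buffer))   # -> 1 1 (second record stays buffered)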
  • Patent number: 11513798
    Abstract: A system and method are provided for simplifying load acquire and store release semantics that are used in reduced instruction set computing (RISC). Translating the semantics into micro-operations, or low-level instructions used to implement complex machine instructions, can avoid having to implement complicated new memory operations. Using one or more data memory barrier operations in conjunction with load and store operations can provide sufficient ordering as a data memory barrier ensures that prior instructions are performed and completed before subsequent instructions are executed.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: November 29, 2022
    Assignee: Ampere Computing LLC
    Inventors: Matthew Ashcraft, Christopher Nelson
  • Patent number: 11507508
    Abstract: The present disclosure generally relates to improving write cache utilization by recommending a time to initiate a data flush operation or predicting when a new write command will arrive. The recommending can be based upon considerations such as a hard time limit for data caching, rewarding for filling the cache, and penalizing for holding data for too long. The predicting can be based on tracking write command arrivals and then, based upon the tracking, predicting an estimated arrival time for the next write command. Based upon the recommendation or predicting, the write cache can be flushed or the data can remain in the write cache to thus more efficiently utilize the write cache without violating a hard stop time limit.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: November 22, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Ariel Navon
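    A hedged sketch of the kind of flush recommendation patent 11507508 above describes: a hard time limit, a reward for filling the cache, and a prediction of the next write arrival from tracked inter-arrival times. The scoring rule and the numbers below are illustrative assumptions, not the patented method.

      import statistics

      def recommend_flush(cache_fill, oldest_age, arrival_history, hard_limit=1.0):
          """Heuristic: flush when the oldest cached write nears the hard time limit,
          or when the predicted wait for the next write would push it past that
          limit; otherwise keep filling the cache."""
          if oldest_age >= hard_limit:                 # hard stop: never hold data longer
              return True
          gaps = [b - a for a, b in zip(arrival_history, arrival_history[1:])]
          expected_gap = statistics.mean(gaps) if gaps else 0.0
          if oldest_age + expected_gap >= hard_limit:  # waiting would violate the limit
              return True
          return cache_fill >= 0.9                     # reward for filling the cache

      # Cache 40% full, oldest entry held 0.8 s, writes arriving roughly 0.25 s apart:
      print(recommend_flush(0.4, 0.8, [0.0, 0.25, 0.5, 0.75]))   # -> True (0.8 + 0.25 >= 1.0)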
  • Patent number: 11507500
    Abstract: A storage system includes a host including a processor and a memory unit, and a storage device including a controller and a non-volatile memory unit. The processor is configured to output a write command, write data, and size information of the write data, to the storage device, the write command that is output not including a write address. The controller is configured to determine a physical write location of the non-volatile memory unit in which the write data are to be written, based on the write command and the size information, write the write data in the physical write location of the non-volatile memory unit, and output the physical write location to the host. The processor is further configured to generate, in the memory unit, mapping information between an identifier of the write data and the physical write location.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: November 22, 2022
    Assignee: KIOXIA CORPORATION
    Inventor: Daisuke Hashimoto
  • Patent number: 11494485
    Abstract: A uniform enclave interface is provided for creating and operating enclaves across multiple different types of backends and system configurations. For instance, an enclave manager may be created in an untrusted environment of a host computing device. The enclave manager may include instructions for creating one or more enclaves. An enclave may be generated in memory of the host computing device using the enclave manager. One or more enclave clients of the enclave may be generated by the enclave manager such that the enclave clients are configured to provide one or more entry points into the enclave. One or more trusted application instances may be created in the enclave.

    Type: Grant
    Filed: July 18, 2018
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Matthew Gingell, Peter Gonda, Alexander Thomas Cope, Sergey Karamov, Keith Moyer, Uday Savagaonkar, Chong Cai
  • Patent number: 11494301
    Abstract: A storage system in one embodiment comprises storage nodes, an address space, address mapping sub-journals and write cache data sub-journals. Each address mapping sub-journal corresponds to a slice of the address space, is under control of one of the storage nodes and comprises update information corresponding to updates to an address mapping data structure. Each write cache data sub-journal is under control of one of the storage nodes and comprises data pages to be later destaged to the address space. A given storage node is configured to store write cache metadata in a given address mapping sub-journal that is under control of the given storage node. The write cache metadata corresponds to a given data page stored in a given write cache data sub-journal that is also under control of the given storage node.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: November 8, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vladimir Shveidel, Lior Kamran
  • Patent number: 11481325
    Abstract: A system for managing a virtual machine is provided. The system includes a processor configured to initiate a session for accessing a virtual machine by accessing an operating system image from a system disk and monitor read and write requests generated during the session. The processor is further configured to write any requested information to at least one of a memory cache and a write back cache located separately from the system disk and read the operating system image content from at least one of the system disk and a host cache operably coupled between the system disk and the at least one processor. Upon completion of the computing session, the processor is configured to clear the memory cache, clear the write back cache, and reboot the virtual machine using the operating system image stored on the system disk or stored in the host cache.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: October 25, 2022
    Assignee: Citrix Systems, Inc.
    Inventors: Yuhua Lu, Graham MacDonald, Lanyue Xu, Roger Cruz
  • Patent number: 11474941
    Abstract: A computer-implemented method, according to one approach, includes: receiving a stream of incoming I/O requests, all of which are satisfied using one or more buffers in a primary cache. However, in response to determining that the available capacity of the one or more buffers in the primary cache is outside a predetermined range: one or more buffers in the secondary cache are allocated. These one or more buffers in the secondary cache are used to satisfy at least some of the incoming I/O requests, while the one or more buffers in the primary cache are used to satisfy a remainder of the incoming I/O requests. Moreover, in response to determining that the available capacity of the one or more buffers in the primary cache is not outside the predetermined range: the one or more buffers in the primary cache are again used to satisfy all of the incoming I/O requests.
    Type: Grant
    Filed: March 9, 2020
    Date of Patent: October 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Beth Ann Peterson, Kevin J. Ash, Lokesh Mohan Gupta, Warren Keith Stanley, Roger G. Hathorn
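    An illustrative Python sketch of the primary/secondary buffer routing in patent 11474941 above; the 20%-90% range, the alternating diversion policy, and the function names are assumptions for demonstration only.

      def route_requests(requests, primary_free_ratio, low=0.2, high=0.9):
          """While the primary cache's available buffer capacity is within the
          predetermined range, every request is satisfied from the primary cache;
          once it falls outside the range, secondary-cache buffers are allocated and
          some requests are diverted there while the rest stay on the primary."""
          primary, secondary = [], []
          outside_range = not (low <= primary_free_ratio <= high)
          for i, req in enumerate(requests):
              if outside_range and i % 2 == 0:      # divert a share of the stream (toy policy)
                  secondary.append(req)
              else:
                  primary.append(req)
          return primary, secondary

      print(route_requests(["r1", "r2", "r3", "r4"], primary_free_ratio=0.1))
      # -> (['r2', 'r4'], ['r1', 'r3'])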
  • Patent number: 11455251
    Abstract: A system-on-chip with runtime global push to persistence includes a data processor having a cache, an external memory interface, and a microsequencer. The external memory interface is coupled to the cache and is adapted to be coupled to an external memory. The cache provides data to the external memory interface for storage in the external memory. The microsequencer is coupled to the data processor. In response to a trigger signal, the microsequencer causes the cache to flush the data by sending the data to the external memory interface for transmission to the external memory.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: September 27, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alexander J. Branover, Kevin M. Lepak, William A. Moyes
  • Patent number: 11425223
    Abstract: A computer-implemented method, operable with a content delivery network (CDN), uses late binding of caching policies: by a caching node in the CDN, in response to a request for content, determining if the content is cached locally. When it is determined that said content is cached locally, then: determining a current cache policy associated with the content; and then determining, based on said current cache policy associated with the content, whether it is acceptable to serve the content that is cached locally; based on said determining, when it is not acceptable to serve the content that is cached locally, obtaining a new version of the content and then serving the new version of the content, otherwise when it is acceptable to serve the content that is cached locally, serving the content that is cached locally.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: August 23, 2022
    Assignee: Level 3 Communications, LLC
    Inventors: Christopher Newton, William Crowder
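    A toy Python sketch of the late-binding idea in patent 11425223 above: the cached copy is served only if the policy fetched at request time still allows it. The policy shape (a max_age field) and the two callbacks are illustrative assumptions.

      import time

      class LateBindingCache:
          """Toy caching node: whether cached content may be served is decided at
          request time, using the policy that is current at that moment."""

          def __init__(self, fetch_policy, fetch_origin):
              self.store = {}                  # key -> (content, cached_at)
              self.fetch_policy = fetch_policy
              self.fetch_origin = fetch_origin

          def get(self, key):
              entry = self.store.get(key)
              if entry is not None:
                  content, cached_at = entry
                  policy = self.fetch_policy(key)              # late binding: policy read now
                  if time.time() - cached_at <= policy["max_age"]:
                      return content                           # acceptable to serve from cache
              content = self.fetch_origin(key)                 # otherwise fetch a new version
              self.store[key] = (content, time.time())
              return content

      cache = LateBindingCache(fetch_policy=lambda k: {"max_age": 60},
                               fetch_origin=lambda k: f"body-of-{k}")
      print(cache.get("/index.html"))   # miss -> fetched, then cached
      print(cache.get("/index.html"))   # hit  -> served only if the *current* policy allows it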
  • Patent number: 11422811
    Abstract: A processor includes a global register to store a value of an interrupted block count. A processor core, communicably coupled to the global register, may, upon execution of an instruction to flush blocks of a cache that are associated with a security domain: flush the blocks of the cache sequentially according to a flush loop of the cache; and in response to detection of a system interrupt: store a value of a current cache block count to the global register as the interrupted block count; and stop execution of the instruction to pause the flush of the blocks of the cache. After handling of the interrupt, the instruction may be called again to restart the flush of the cache.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: August 23, 2022
    Assignee: Intel Corporation
    Inventors: Gideon Gerzon, Dror Caspi, Arie Aharon, Ido Ouziel
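    A simplified Python model of the interruptible flush loop in patent 11422811 above, with a saved block count standing in for the global register; the data structures and the interrupt hook are illustrative assumptions.

      def flush_domain_blocks(blocks, domain, start, interrupt_at=None):
          """Flush cache blocks belonging to a security domain sequentially; if an
          'interrupt' arrives, save the current block count (the stand-in for the
          global register) and stop so the flush can be restarted from that count."""
          for count in range(start, len(blocks)):
              if interrupt_at is not None and count == interrupt_at:
                  return count, False                 # interrupted: save resume point
              if blocks[count]["domain"] == domain:
                  blocks[count]["flushed"] = True     # flush this cache block
          return len(blocks), True                    # finished the whole cache

      cache = [{"domain": d, "flushed": False} for d in ("A", "B", "A", "A")]
      saved, done = flush_domain_blocks(cache, "A", start=0, interrupt_at=2)  # interrupt mid-loop
      if not done:                                                            # re-issue the instruction
          saved, done = flush_domain_blocks(cache, "A", start=saved)
      print(done, [b["flushed"] for b in cache])   # -> True [True, False, True, True]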
  • Patent number: 11422895
    Abstract: A memory system may include: a nonvolatile memory device including a plurality of memory blocks, each of which includes a plurality of pages, and among which a subset of memory blocks are managed as a system area and the remaining memory blocks are managed as a normal area; and a controller configured to store system data, used to control the nonvolatile memory device, in the system area, and to store boot data, used in a host, and normal data, updated in a control operation for the nonvolatile memory device, in the normal area. The controller may perform a checkpoint operation each time storage of N boot data among the boot data is completed, and may perform the checkpoint operation each time the control operation for the nonvolatile memory device is completed, 'N' being a natural number.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: August 23, 2022
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 11409445
    Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for managing a storage system. A method for managing a storage system is provided. The method includes storing a data block to be backed up into a local storage device of the storage system; determining whether the data block includes periodically rewritten data based on historical operation information of the storage system, the historical operation information being associated with storage operations and removal operations by the storage system on historical data; and if it is determined that the data block does not include periodically rewritten data, storing the data block into a remote storage device of the storage system, and removing the data block from the local storage device.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: August 9, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Yi Wang, Qingxiao Zheng, Qianyun Cheng
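    A hedged sketch of the tiering decision in patent 11409445 above. The patent abstract does not spell out how periodic rewrites are detected, so the evenly-spaced-timestamps test below is purely an illustrative assumption.

      def is_periodically_rewritten(write_times, tolerance=0.1, min_writes=3):
          """A block counts as periodically rewritten if its historical write
          timestamps are roughly evenly spaced (illustrative rule only)."""
          if len(write_times) < min_writes:
              return False
          gaps = [b - a for a, b in zip(write_times, write_times[1:])]
          mean_gap = sum(gaps) / len(gaps)
          return all(abs(g - mean_gap) <= tolerance * mean_gap for g in gaps)

      def place_block(block_id, write_history, local, remote):
          # Blocks that look periodically rewritten stay local; others are tiered to
          # remote storage and removed from the local device.
          if is_periodically_rewritten(write_history):
              local.add(block_id)
          else:
              remote.add(block_id)
              local.discard(block_id)

      local, remote = {"b1", "b2"}, set()
      place_block("b1", [0, 60, 120, 180], local, remote)   # evenly spaced -> stays local
      place_block("b2", [0, 5, 300], local, remote)         # irregular -> moved to remote
      print(sorted(local), sorted(remote))                  # -> ['b1'] ['b2']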
  • Patent number: 11397677
    Abstract: One embodiment can provide an apparatus. The apparatus can include a persistent flush (PF) cache and a PF-tracking logic coupled to the PF cache. The PF-tracking logic is to: in response to receiving, from a media controller, an acknowledgment to a write request, determine whether the PF cache includes an entry corresponding to the media controller; in response to the PF cache not including the entry corresponding to the media controller, allocate an entry in the PF cache for the media controller; in response to receiving a persistence checkpoint, identify a media controller from a plurality of media controllers based on entries stored in the PF cache; issue a persistent flush request to the identified media controller to persist write requests received by the identified media controller; and remove an entry corresponding to the identified media controller from the PF cache subsequent to issuing the persistent flush request.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: July 26, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Derek A. Sherlock, Gregg B. Lesartre
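    A minimal Python model of the persistent-flush (PF) tracking in patent 11397677 above: one PF-cache entry per media controller that has acknowledged a write, flushed and cleared at a persistence checkpoint. The class and the flush_persist() interface are assumptions.

      class PersistentFlushTracker:
          """Toy PF cache: tracks which media controllers have acknowledged writes
          since the last persistence checkpoint."""

          def __init__(self):
              self.pf_cache = {}                       # controller id -> controller

          def on_write_ack(self, controller_id, controller):
              # Allocate an entry only if this controller is not already tracked.
              self.pf_cache.setdefault(controller_id, controller)

          def on_checkpoint(self):
              # Issue a persistent-flush request to every tracked controller, then
              # drop its entry so the next epoch starts clean.
              for cid in list(self.pf_cache):
                  self.pf_cache.pop(cid).flush_persist()

      class FakeController:
          def __init__(self, name):
              self.name = name
          def flush_persist(self):
              print(f"persistent flush -> {self.name}")

      tracker = PersistentFlushTracker()
      tracker.on_write_ack("mc0", FakeController("mc0"))
      tracker.on_write_ack("mc0", FakeController("mc0"))   # no duplicate entry
      tracker.on_write_ack("mc1", FakeController("mc1"))
      tracker.on_checkpoint()    # flushes mc0 and mc1, then the PF cache is empty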
  • Patent number: 11372757
    Abstract: Tracking repeated reads to guide dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) and a central ordering point circuit (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation. The COP's selection is based on protocol preference indicators generated by the PEs using repeat-read indicators that each PE maintains to track whether a coherence granule was repeatedly read by the PE (e.g., as a result of polling reads, or as a result of re-reading the coherence granule after it was evicted from a cache due to an invalidating snoop). After selecting the cache coherence protocol, the COP sends a response message to the PEs indicating the selected cache coherence protocol.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Neal Magill, Eric Francis Robinson, Derek Bachand, Jason Panavich, Michael B. Mitchell, Michael P. Wilson
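    An illustrative sketch of the store-by-store protocol selection in patent 11372757 above; the majority-vote rule and the repeat-read threshold are assumptions chosen for demonstration, not the patent's actual decision logic.

      def pe_preference(repeat_reads, threshold=2):
          # A PE that keeps re-reading a granule (e.g. polling) would rather receive
          # updated data than an invalidation that just forces another miss.
          return "update" if repeat_reads >= threshold else "invalidate"

      def select_protocol(preferences):
          # Central ordering point: tally the per-PE preference indicators for this
          # store and pick the cache coherence protocol to announce in its response.
          updates = sum(1 for p in preferences if p == "update")
          return "write-update" if updates > len(preferences) - updates else "write-invalidate"

      votes = [pe_preference(r) for r in (5, 3, 4, 1)]   # three polling PEs, one cold PE
      print(select_protocol(votes))                       # -> write-update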
  • Patent number: 11343348
    Abstract: This patent document describes technology for providing real-time messaging and entity update services in a distributed proxy server network, such as a CDN. Uses include distributing real-time notifications about updates to data stored in and delivered by the network, with both high efficiency and locality of latency. The technology can be integrated into conventional caching proxy servers providing HTTP services, thereby leveraging their existing footprint in the Internet, their existing overlay network topologies and architectures, and their integration with existing traffic management components.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: May 24, 2022
    Assignee: Akamai Technologies, Inc.
    Inventors: Matthew J. Stevens, Michael G. Merideth, Nil Alexandrov, Andrew F. Champagne, Brendan Coyle, Timothy Glynn, Mark A. Roman, Xin Xu
  • Patent number: 11310550
    Abstract: Techniques and mechanisms described herein facilitate the storage of digital media recordings. According to various embodiments, a system is provided comprising a processor, a storage device, Random Access Memory (RAM), an archive writer, and a recording writer. The archive writer is configured to retrieve a plurality of small multimedia segments (SMSs) in RAM and write the plurality of SMSs into an archive container file in RAM. The single archive container file may correspond to a singular multimedia file when complete. New SMSs retrieved from RAM are appended into the archive container file if the new SMSs also correspond to the singular multimedia file. The recording writer is configured to flush the archive container file to be stored as a digital media recording on the storage device once enough SMSs have been appended by the archive writer to the archive container file to complete the singular multimedia file.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: April 19, 2022
    Assignee: TIVO CORPORATION
    Inventors: Do Hyun Chung, Ren L. Long, Dan Dennedy
  • Patent number: 11301380
    Abstract: Exemplary methods, apparatuses, and systems include identifying that a first cache line from a first cache is subject to an operation that copies data from the first cache to a non-volatile memory. A first portion of the first cache line stores clean data and a second portion of the first cache line stores dirty data. A redundant copy of the dirty data is stored in a second cache line of the first cache. In response to identifying that the first cache line is subject to the operation, metadata associated with the redundant copy of the dirty data is used to copy the dirty data to a non-volatile memory while omitting the clean data.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: April 12, 2022
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Robert M. Walker, Ashay Narsale
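    A small Python sketch of the dirty-only flush in patent 11301380 above: metadata kept with the redundant copy locates the dirty bytes so only they are written to non-volatile memory. The (offset, data) metadata layout is an illustrative assumption.

      def flush_dirty_only(redundant_copy, nvm, line_address):
          # Copy only the dirty portion, located via the redundant copy's metadata;
          # the clean bytes of the cache line are never written to non-volatile memory.
          offset, dirty = redundant_copy["offset"], redundant_copy["data"]
          nvm[line_address + offset: line_address + offset + len(dirty)] = dirty

      line = bytearray(b"CLEANxxDIRTYxxCLEAN")                  # mixed clean/dirty cache line
      copy_meta = {"offset": 7, "data": bytearray(b"DIRTY")}    # redundant copy of the dirty bytes
      nvm = bytearray(b"-" * 32)                                # stand-in for non-volatile memory
      flush_dirty_only(copy_meta, nvm, line_address=4)
      print(nvm)   # only the five dirty bytes were copied out; clean data stayed put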
  • Patent number: 11288191
    Abstract: An apparatus to facilitate memory flushing is disclosed. The apparatus comprises a cache memory, one or more processing resources, tracker hardware to dispatch workloads for execution at the processing resources and to monitor the workloads to track completion of the execution, range based flush (RBF) hardware to process RBF commands and generate a flush indication to flush data from the cache memory and a flush controller to receive the flush indication and perform a flush operation to discard data from the cache memory at an address range provided in the flush indication.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Hema Chand Nalluri, Aditya Navale, Altug Koker, Brandon Fliflet, Jeffery S. Boles, James Valerio, Vasanth Ranganathan, Anirban Kundu, Pattabhiraman K
  • Patent number: 11288206
    Abstract: Embodiments of this disclosure provide techniques to support memory paging between trust domains (TDs) in computer systems. In one embodiment, a processing device including a memory controller and a memory paging circuit is provided. The memory paging circuit is to insert a transportable page into a memory location associated with a trust domain (TD), where the transportable page comprises encrypted contents of a first memory page of the TD. The memory paging circuit is further to create a third memory page associated with the TD by binding the transportable page to the TD, where binding the transportable page to the TD comprises re-encrypting contents of the transportable page based on a key associated with the TD and a physical address of the memory location. The memory paging circuit is further to access contents of the third memory page by decrypting the contents of the third memory page using the key associated with the TD.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 29, 2022
    Assignee: Intel Corporation
    Inventors: Hormuzd M. Khosravi, Baiju Patel, Ravi Sahita, Barry Huntley
  • Patent number: 11287986
    Abstract: Apparatus and methods are disclosed, including a controller circuit, a volatile memory, a non-volatile memory, and a reset circuit, where the reset circuit is configured to receive a reset signal from a host device and actuate a timer circuit, and where the timer circuit is configured to cause a storage device to reset after a threshold time period. The reset circuit is further configured to actuate the controller circuit to write data stored in the volatile memory to the non-volatile memory before the storage device is reset.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: March 29, 2022
    Assignee: Micron Technology, Inc.
    Inventor: David Aaron Palmer
  • Patent number: 11281593
    Abstract: Provided are a computer program product, system, and method for using insertion points to determine locations in a cache list at which to indicate tracks in a shared cache accessed by a plurality of processors. A plurality of insertion points to a cache list for the shared cache having a least recently used (LRU) end and a most recently used (MRU) end identify tracks in the cache list. For each processor, of a plurality of processors, for which indication of tracks accessed by the processor is received, a determination is made of insertion points of the provided insertion points at which to indicate the tracks for which indication is received. The tracks are indicated at positions in the cache list with respect to the determined insertion points.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
  • Patent number: 11265395
    Abstract: Systems, methods, and software for operating a content delivery system to purge cached content are provided herein. In one example, purge messages are transferred for delivery to content delivery nodes (CDNs) in the content delivery system. The CDNs receive the messages, purge content associated with the messages, and compile purge summaries based on the messages. The CDNs further periodically transfer the purge summaries to one another to compare the messages received, and gather purge information for purge messages that may have been inadvertently missed by the CDNs.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: March 1, 2022
    Assignee: Fastly, Inc.
    Inventors: Bruce Spang, Tyler B. McMullen
  • Patent number: 11231855
    Abstract: A storage controller is configured to perform a full stride destage, a strip destage, and an individual track destage. A machine learning module receives a plurality of inputs corresponding to a plurality of factors that affect performance of data transfer operations and preservation of drive life in the storage controller. In response to receiving the inputs, the machine learning module generates a first output, a second output, and a third output that indicate a preference measure for the full stride destage, the strip destage, and the individual track destage respectively.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: January 25, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh Mohan Gupta, Clint A. Hardy, Karl Allen Nielsen, Brian Anthony Rinaldi
  • Patent number: 11226744
    Abstract: A first score corresponding to a full stride destage, a second score corresponding to a strip destage, and a third score corresponding to an individual track destage are computed, wherein the first score, the second score, and the third score are computed for a group of Input/Output (I/O) operations based on a first metric and a second metric, wherein the first metric is configured to affect a performance of data transfers, and wherein the second metric is configured to affect a drive life. A determination is made of a type of destage to perform based on the first score, the second score, and the third score.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: January 18, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Clint A. Hardy, Lokesh Mohan Gupta, Karl Allen Nielsen, Brian Anthony Rinaldi
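    A hedged sketch of the scoring idea shared by patents 11231855 and 11226744 above: each destage type gets a score built from a performance metric and a drive-life metric, and the best-scoring type is chosen. The weights and formulas below are invented for illustration.

      def choose_destage(stride_fullness, drive_wear, w_perf=0.6, w_life=0.4):
          """Score the three destage types from a performance metric (how much of
          the stride is modified) and a drive-life metric (current wear)."""
          scores = {
              # Full-stride destage is best for performance when the stride is nearly
              # full, but it costs the most program/erase activity.
              "full_stride": w_perf * stride_fullness + w_life * (1 - drive_wear),
              # Strip destage is a middle ground.
              "strip": w_perf * 0.6 + w_life * 0.6,
              # Individual-track destage writes the least, sparing a worn drive.
              "track": w_perf * (1 - stride_fullness) + w_life * drive_wear,
          }
          return max(scores, key=scores.get), scores

      best, scores = choose_destage(stride_fullness=0.9, drive_wear=0.2)
      print(best)   # -> 'full_stride' for a nearly full stride on a lightly worn drive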
  • Patent number: 11204878
    Abstract: An apparatus is provided that includes a memory hierarchy comprising a plurality of caches and a memory. Prefetch circuitry acquires data from the memory hierarchy before the data is explicitly requested by processing circuitry configured to execute a stream of instructions. Writeback circuitry causes the data to be written back from a higher level cache of the memory hierarchy to a lower level cache of the memory hierarchy and tracking circuitry tracks a proportion of entries that are stored in the lower level cache of the memory hierarchy having been written back from the higher level cache of the memory hierarchy, that are subsequently explicitly requested by the processing circuitry in response to one of the instructions.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: December 21, 2021
    Assignee: Arm Limited
    Inventors: Joseph Michael Pusdesris, Chris Abernathy
  • Patent number: 11204696
    Abstract: Memory devices including a hybrid cache, methods of operating a memory device, and associated electronic systems including a memory device having a hybrid cache, are disclosed. The hybrid cache includes a dynamic cache that may include x-level cell (XLC) blocks of non-volatile memory cells, which may include multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc., shared between the dynamic cache and a main memory. The hybrid cache includes a static cache including single-level cell (SLC) blocks of non-volatile memory cells. The memory device further includes a memory controller configured to disable at least one of the static cache and the dynamic cache based on a workload of the hybrid cache relative to a Total Bytes Written (TBW) Spec for the memory device. The cache may be disabled based on, for example, program/erase (PE) cycles of one or more portions of the memory device or the workload exceeding a threshold, which may define one or more switch points.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: December 21, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Kishore K. Muchherla, Ashutosh Malshe, Sampath K. Ratnam, Peter Feeley, Michael G. Miller, Christopher S. Hale, Renato C. Padilla
  • Patent number: 11200174
    Abstract: Provided are a computer program product, system, and method for considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage. One of a plurality of densities for one of a plurality of groups of tracks is incremented in response to determining at least one of that the group is not ready to destage and that one of the tracks in the group in the cache transitions to being ready to destage. A determination is made of a group frequency indicating a frequency at which tracks in the group are modified. At least one of the density and the group frequency is used for each of the groups to determine whether to destage the group. The tracks in the group in the cache are destaged to the storage in response to determining to destage the group.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: December 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Lokesh M. Gupta
  • Patent number: 11200178
    Abstract: An operation method of a memory system includes: searching for a valid physical address in memory map segments stored in the memory system, based on a read request from a host, a logical address corresponding to the read request, and a physical address corresponding to the logical address and performing a read operation corresponding to the read request; caching some of the memory map segments in the host as host map segments based on a read count threshold indicating the number of receptions of the read request for the logical address; and adjusting the read count threshold based on a miss count indicating the number of receptions of the read request with no physical address, and a provision count indicating the number of times the memory map segment is cached in the host.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: December 14, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
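    An illustrative Python model of the read-count-threshold policy in patents 11200178 and 11144478 (listed further below); the exact threshold-adjustment rule is not given in the abstract, so the one below is an assumption.

      from collections import defaultdict

      class HostMapCachingPolicy:
          """Offer a map segment to the host once its logical address has been read
          `threshold` times; nudge the threshold using miss and provision counts."""

          def __init__(self, threshold=4):
              self.threshold = threshold          # read count threshold
              self.read_counts = defaultdict(int)
              self.miss_count = 0                 # reads arriving without a valid physical address
              self.provision_count = 0            # map segments already cached in the host

          def on_read(self, logical_addr, host_gave_physical):
              if not host_gave_physical:
                  self.miss_count += 1
              self.read_counts[logical_addr] += 1
              if self.read_counts[logical_addr] == self.threshold:
                  self.provision_count += 1
                  return True                     # offer this address's map segment to the host
              return False

          def adjust_threshold(self):
              # Many misses despite provisioned segments -> raise the bar;
              # fewer misses than provisions -> be more generous (assumed rule).
              if self.miss_count > self.provision_count:
                  self.threshold += 1
              elif self.miss_count < self.provision_count:
                  self.threshold = max(1, self.threshold - 1)

      policy = HostMapCachingPolicy(threshold=2)
      print([policy.on_read(0x10, False) for _ in range(2)])   # -> [False, True]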
  • Patent number: 11194730
    Abstract: A method for depopulating data from cache includes receiving a command to depopulate the cache of selected data. The command has an application identifier as a parameter. The application identifier is associated with an application that previously accessed the data. The method searches the cache for data elements that are marked with the application identifier and removes the data elements from the cache. In certain embodiments, the data elements are marked with a first application identifier associated with an application that staged the data elements into the cache, and a second application identifier associated with an application that last accessed the data elements. In certain embodiments, removing the data elements from the cache comprises only removing the data elements from the cache if the application identifier matches one or more of the first application identifier and the second application identifier. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 9, 2020
    Date of Patent: December 7, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kyler A. Anderson, Beth A. Peterson
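    A minimal sketch of the depopulate-by-application command in patent 11194730 above; the dict-based cache and the staged_by/last_accessed_by field names are illustrative assumptions.

      def depopulate(cache, app_id):
          """Remove every cached data element whose staging or last-access
          application identifier matches app_id; keep the rest."""
          survivors = {}
          for key, element in cache.items():
              if app_id in (element["staged_by"], element["last_accessed_by"]):
                  continue                          # evict: this application touched the element
              survivors[key] = element
          return survivors

      cache = {
          "t1": {"staged_by": "db2",   "last_accessed_by": "db2"},
          "t2": {"staged_by": "batch", "last_accessed_by": "db2"},
          "t3": {"staged_by": "batch", "last_accessed_by": "batch"},
      }
      print(sorted(depopulate(cache, "db2")))   # -> ['t3'] : only data db2 never touched remains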
  • Patent number: 11188473
    Abstract: A memory device includes a page cache comprising a cache register, a memory array configured with a plurality of memory planes, and control logic, operatively coupled with the memory array. The control logic receives, from a requestor, a first cache read command requesting first data from the memory array spread across the plurality of memory planes, and returns, to the requestor, data associated with a first subset of the plurality of memory planes and pertaining to a previous read command, while concurrently copying data associated with a second subset of the plurality of memory planes and pertaining to the previous read command into the cache register.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: November 30, 2021
    Assignee: MICRON TECHNOLOGY, INC.
    Inventors: Eric N. Lee, Yoav Weinberg
  • Patent number: 11188465
    Abstract: A cache memory is disclosed. The cache memory includes a plurality of ways, each way including an instruction memory portion, where the instruction memory portion includes a plurality of instruction memory locations configured to store instruction data encoding a plurality of CPU instructions. The cache memory also includes a controller configured to determine that each of a predetermined number of cache memory hit conditions have occurred, and a replacement policy circuit configured to identify one of the plurality of ways as having experienced a fewest quantity of hits of the predetermined number of cache memory hit conditions, where the controller is further configured to determine that a cache memory miss condition has occurred, and, in response to the miss condition, to cause instruction data retrieved from a RAM memory to be written to the instruction memory portion of the way identified by the replacement policy circuit.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: November 30, 2021
    Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
    Inventor: Bassam S Kamand
  • Patent number: 11188458
    Abstract: The memory controller controls at least one memory device including a plurality of stream storage areas. The memory controller comprises a buffer, a write history manager, a write controller, and a garbage collection controller. The buffer stores write data. The write history manager stores write count values for each of the plurality of stream storage areas and generates write history information indicating a write operation frequency for each of the plurality of stream storage areas based on the write count values. The write controller controls the at least one memory device to store the write data provided from the buffer. The garbage collection controller controls the at least one memory device to perform a garbage collection operation on a target stream storage area selected from among the plurality of stream storage areas based on the write history information.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: November 30, 2021
    Assignee: SK hynix Inc.
    Inventors: Hee Chan Shin, Yong Seok Oh, Ju Hyun Kim, Jin Yeong Kim
  • Patent number: 11178246
    Abstract: The disclosed embodiments disclose techniques for managing cloud-based storage using a time-series database. A distributed cloud data management system (DCDMS) manages objects stored in a cloud storage system. The DCDMS leverages a distributed time-series database to track objects accessed via the DCDMS. During operation, the DCDMS receives a request to access an object using a path identifier and an object identifier. The DCDMS determines from the path identifier that the request is associated with one of its supported extended capabilities, and uses the previously tracked object operations that are stored in the time-series database to determine the actual target bucket in the cloud storage system that contains the requested object; the target bucket that contains the object may be different from the bucket identified in the path identifier that is received. The object identifier is then used to access the requested object from the target bucket to service the request.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: November 16, 2021
    Assignee: Panzura, LLC
    Inventors: Jian Xing, Qian Zhang, Pu Paul Zhang
  • Patent number: 11163656
    Abstract: Techniques for implementing high availability for persistent memory are provided. In one embodiment, a first computer system can detect an alternating current (AC) power loss/cycle event and, in response to the event, can save data in a persistent memory of the first computer system to a memory or storage device that is remote from the first computer system and is accessible by a second computer system. The first computer system can then generate a signal for the second computer system subsequently to initiating or completing the save process, thereby allowing the second computer system to restore the saved data from the memory or storage device into its own persistent memory.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: November 2, 2021
    Assignee: VMware, Inc.
    Inventors: Pratap Subrahmanyam, Rajesh Venkatasubramanian, Kiran Tati, Qasim Ali
  • Patent number: 11157410
    Abstract: One embodiment includes a system comprising a repository configured to store objects, an object cache configured to cache objects retrieved from the repository by a node, a memory configured to store a broadcast cache invalidation queue accessible by a plurality of nodes and an invalidation status, a processor coupled to the memory and a computer readable medium storing computer-executable instructions.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: October 26, 2021
    Assignee: OPEN TEXT SA ULC
    Inventors: Michael Gerard Jaskiewicz, Sarah Barnes Atlas, Mukesh Chowdhary, Lloyd Douglas Forrest
  • Patent number: 11153231
    Abstract: An apparatus and method are provided for processing flush requests within a packet network. The apparatus comprises a requester device within the packet network arranged to receive a flush request generated by a remote agent requesting that one or more data items be flushed to a point of persistence. The requester device translates the flush request into a packet-based flush command conforming to a packet protocol of the packet network. A completer device within the packet network that is coupled to a persistence domain incorporating the point of persistence is arranged to detect receipt of the packet-based flush command, and then trigger a flush operation within the persistence domain to flush the one or more data items to the point of persistence. This provides a fast, hardware-based, mechanism for performing a flush operation within a persistence domain without needing to trigger software in the persistence domain to handle the flush to the point of persistence.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: October 19, 2021
    Assignee: Arm Limited
    Inventors: Tessil Thomas, Andrew Joseph Rushing
  • Patent number: 11144478
    Abstract: An operation method of a memory system includes: searching for a valid physical address in memory map segments stored in the memory system, based on a read request from a host, a logical address corresponding to the read request, and a physical address corresponding to the logical address and performing a read operation corresponding to the read request; caching some of the memory map segments in the host as host map segments based on a read count threshold indicating the number of receptions of the read request for the logical address; and adjusting the read count threshold based on a miss count indicating the number of receptions of the read request with no physical address, and a provision count indicating the number of times the memory map segment is cached in the host.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: October 12, 2021
    Assignee: SK hynix Inc.
    Inventor: Eu-Joon Byun
  • Patent number: 11138122
    Abstract: A method, computer program product, and computer system for identifying a first node that has written a first page of a plurality of pages to be flushed. A second node that has written a second page of the plurality of pages to be flushed may be identified. It may be determined whether the first page of the plurality of pages is to be flushed by one of the first node and the second node and whether the second page of the plurality of pages is to be flushed by one of the first node and the second node based upon, at least in part, one or more factors. The first node may allocate the first page of the plurality of pages and the second page of the plurality of pages to be flushed in parallel by one of the first node and the second node based upon, at least in part, the one or more factors.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: October 5, 2021
    Assignee: EMC IP Holding Company, LLC
    Inventors: Socheavy D. Heng, Steven A. Morley
  • Patent number: 11119945
    Abstract: A system for handling electronic information having a virtually tagged cache having a directory and a plurality of entries containing data, the directory containing multiple entries, each entry configured to comprise at least a virtual address and one of a plurality of context tags, wherein each context tag is an encoding for one of a plurality of layers of address space; a context tag table having a plurality of entries, each entry configured to map one of the plurality of context tags to one of the plurality of layers of address space; and a scratch register containing a current context tag for a current layer of address space on which the processor is operating. The virtually tagged cache is configured to preserve information in the virtually tagged cache when performing a context switch in the system.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: September 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jake Truelove, David Campbell, Bryan Lloyd
  • Patent number: 11101929
    Abstract: A method for execution by a computing device includes, receiving, from a requesting device, a request for a data segment of a data object that is or is to be stored in storage units of a content delivery network. The method further includes determining whether the data segment is stored in a cache memory of the content delivery network or in the storage units. When stored in the cache memory, the method includes retrieving the cached data segment, and sending it to the requesting device. When stored in the storage units, the method includes, sending read requests regarding the data segment to the storage units, receiving, in response to the read requests, at least a decode threshold number of encoded data slices, decoding the at least the decode threshold number of encoded data slices to reproduce the data segment, and sending the data segment to the requesting device.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: August 24, 2021
    Assignee: PURE STORAGE, INC.
    Inventors: S. Christopher Gladwin, Timothy W. Markison, Greg Dhuse, Thomas Franklin Shirley, Jr., Wesley Leggette, Jason K. Resch, Gary W. Grube
  • Patent number: 11074004
    Abstract: An embodiment of a semiconductor apparatus may include technology to segregate a persistent storage media into two or more segments, and collect telemetry information on a per segment-basis, wherein a segment granularity is smaller than a namespace granularity. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: July 27, 2021
    Assignee: Intel Corporation
    Inventors: Jason Casmira, Jawad Khan, David Minturn
  • Patent number: 11074194
    Abstract: Managing direct memory access (DMA) by: defining a translate control entity (TCE) cache flag for cache memory addresses, receiving a DMA TCE related request, checking the TCE cache flag status, and completing the TCE related request according to the TCE cache flag status.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Sakethan Reddy Kotta, Eric Norman Lais, Rama Krishna Hazari, Kumaraswamy Sripathy
  • Patent number: 11036875
    Abstract: Techniques for instantiating an enclave from dependent enclave images are presented. The techniques include identifying a first set of dependent enclave indicators from a primary enclave image, identifying a first dependent enclave image corresponding to one of the first set of dependent enclave indicators, creating a secure enclave container, and copying at least a portion of the primary enclave image and at least a portion of the first dependent enclave image into the secure enclave container.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: June 15, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Manuel Costa
  • Patent number: 11023375
    Abstract: Described is a data cache implementing hybrid writebacks and writethroughs. A processing system includes a memory, a memory controller, and a processor. The processor includes a data cache including cache lines, a write buffer, and a store queue. The store queue writes data to a hit cache line and an allocated entry in the write buffer when the hit cache line is initially in at least a shared coherence state, resulting in the hit cache line being in a shared coherence state with data and the allocated entry being in a modified coherence state with data. The write buffer requests and the memory controller upgrades the hit cache line to a modified coherence state with data based on tracked coherence states. The write buffer retires the data upon upgrade. The data cache writes the data back to memory for a defined event.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: June 1, 2021
    Assignee: SiFive, Inc.
    Inventors: John Ingalls, Wesley Waylon Terpstra, Henry Cook
  • Patent number: 11010100
    Abstract: An asynchronous storage system may perform asynchronous writing of data from different sets of received non-consecutive synchronous write requests based on a dynamic write threshold that varies according to parameters of the storage device and/or synchronous write request patterns. The asynchronous writing may include coalescing data from a set of non-consecutive write requests in a plurality of received write requests that contain different data for a particular file, issuing a single asynchronous write request with the data that is coalesced from each write request of the set of non-consecutive write requests to the storage device instead of each write request of the set of non-consecutive write requests, and writing the data that is coalesced from each write request of the set of non-consecutive write requests to the storage device with a single write operation that is executed in response to the single asynchronous write request.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: May 18, 2021
    Assignee: Open Drives LLC
    Inventors: Scot Gray, Sean Lee
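    A toy Python model of the coalescing behaviour in patent 11010100 above: buffered synchronous writes for a file are issued to the device as one asynchronous write once a dynamic threshold is crossed. The class and the threshold handling are illustrative assumptions.

      class CoalescingWriter:
          """Buffer (offset, data) write requests and issue them to the device as a
          single write once their combined size reaches the dynamic threshold."""

          def __init__(self, device_write, threshold=8192):
              self.device_write = device_write     # callable(list of (offset, data)) -> None
              self.threshold = threshold           # dynamic write threshold, in bytes
              self.pending = []                    # buffered (offset, data) pairs

          def submit(self, offset, data):
              self.pending.append((offset, data))
              if sum(len(d) for _, d in self.pending) >= self.threshold:
                  self.flush()

          def flush(self):
              if self.pending:
                  self.device_write(self.pending)  # one asynchronous write for many requests
                  self.pending = []

      writes = []
      w = CoalescingWriter(device_write=writes.append, threshold=10)
      w.submit(0, b"hello")        # buffered: 5 bytes, below threshold
      w.submit(100, b"world!")     # 11 bytes total -> coalesced into one device write
      print(len(writes), writes[0])   # -> 1 [(0, b'hello'), (100, b'world!')]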
  • Patent number: 10990537
    Abstract: A memory system and method for storing data in one or more storage chips includes: one or more memory cards each having a plurality of storage chips, and each chip having a plurality of dies having a plurality of memory cells; a memory controller comprising a translation module, the translation module further comprising: a logical to virtual translation table (LVT) having a plurality of entries, each entry in the LVT configured to map a logical address to a virtual block address (VBA), where the VBA corresponds to a group of the memory cells on the one or more memory cards, wherein each entry in the LVT further includes a write wear level count to track the number of writing operations to the VBA, and a read wear level count to track the number of read operations for the VBA mapped to that LVT entry.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: April 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Daniel Frank Moertl, Damir Anthony Jamsek, Andrew Kenneth Martin, Charalampos Pozidis, Robert Edward Galbraith, Jeremy T. Ekman, Abby Harrison, Gerald Mark Grabowski, Steven Norgaard
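    A small Python sketch of the logical-to-virtual translation table (LVT) with per-entry wear counts described in patent 10990537 above; the class names and the flat dictionary are illustrative assumptions, not the patent's structure.

      from dataclasses import dataclass

      @dataclass
      class LvtEntry:
          vba: int                 # virtual block address backing this logical address
          write_wear: int = 0      # number of write operations to the mapped VBA
          read_wear: int = 0       # number of read operations through this entry

      class TranslationModule:
          def __init__(self):
              self.lvt = {}                      # logical address -> LvtEntry

          def write(self, logical, vba):
              entry = self.lvt.setdefault(logical, LvtEntry(vba))
              entry.vba = vba                    # (re)map, e.g. after relocating the data
              entry.write_wear += 1

          def read(self, logical):
              entry = self.lvt[logical]
              entry.read_wear += 1
              return entry.vba

      tm = TranslationModule()
      tm.write(0x100, vba=7)
      tm.write(0x100, vba=9)          # rewrite relocates the block to a new VBA
      tm.read(0x100)
      print(tm.lvt[0x100])            # -> LvtEntry(vba=9, write_wear=2, read_wear=1)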
  • Patent number: 10972539
    Abstract: This application relates to apparatus and methods for communication with and management of datacenters, such as cloud datacenters employing multiple servers. A control server may identify a plurality of datacenters from which to request block storage status. The control server may identify a user request to execute multiple requests to obtain the block storage status from the plurality of datacenters. Based on the user request, the control server may generate the plurality of requests. The control server may transmit the plurality of requests to the plurality of datacenters. The control server may determine if a response to the requests is received. The response may include block storage status data identifying whether a service managing storage blocks for the datacenter is operational. The control server may also provide the block storage status for display.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: April 6, 2021
    Assignee: Walmart Apollo, LLC
    Inventors: Gerald Bothello, Surajit Roy, Giridhar Bhujanga
  • Patent number: 10970222
    Abstract: An indication to perform an eviction operation on a cache line in a cache can be received. A determination can be made as to whether at least one sector of the cache line is associated with invalid data. In response to determining that at least one sector of the cache line is associated with invalid data, a read operation can be performed to retrieve valid data associated with the at least one sector. The at least one sector of the cache line that is associated with the invalid data can be modified based on the valid data. Furthermore, the eviction operation can be performed on the cache line with the modified at least one sector.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: April 6, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Dhawal Bavishi, Robert M. Walker
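    An illustrative sketch of the eviction path in patent 10970222 above: invalid sectors are repaired from valid data before the line is evicted. The sector layout and the read_valid_sector callback are assumptions.

      def evict_line(cache_line, read_valid_sector):
          """Before eviction, any sector marked invalid is repaired by reading its
          valid data from the backing store, so the evicted line is fully valid."""
          repaired = []
          for index, (data, valid) in enumerate(cache_line):
              if not valid:
                  data = read_valid_sector(index)      # fetch valid data for this sector
              repaired.append(data)
          return repaired                              # now safe to write back / evict

      line = [("AAAA", True), ("????", False), ("CCCC", True)]   # sector 1 holds invalid data
      evicted = evict_line(line, read_valid_sector=lambda i: "BBBB")
      print(evicted)    # -> ['AAAA', 'BBBB', 'CCCC']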