Abstract: An apparatus in an illustrative embodiment comprises at least one processing device comprising a processor and a memory, with the processor coupled to the memory. The at least one processing device is configured to receive in a storage system, from a host device, information that identifies (i) a particular virtual machine implemented by the host device and (ii) a key specific to the virtual machine, to utilize at least a portion of the received information to obtain in the storage system the key specific to the virtual machine from a key management server external to the storage system, to store the obtained key in the storage system in association with one or more parts of the received information, and to utilize the obtained key to process input-output operations that are received in the storage system from the host device and that are identified as being associated with the virtual machine.
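A minimal sketch of this flow in Python, assuming a hypothetical `KmsClient.fetch_key()` interface for the external key management server and a toy XOR transform standing in for a real cipher; none of these names come from the patent.

```python
class KmsClient:
    """Hypothetical stand-in for the external key management server."""
    def __init__(self, keys: dict):
        self._keys = keys                      # key id -> key bytes

    def fetch_key(self, key_id: str) -> bytes:
        return self._keys[key_id]


class StorageSystem:
    def __init__(self, kms: KmsClient):
        self.kms = kms
        self.vm_keys = {}                      # VM id -> key, stored in
                                               # association with the received info

    def register_vm(self, vm_id: str, key_id: str):
        """Host sends (VM id, key id); the storage system pulls the key from the KMS."""
        self.vm_keys[vm_id] = self.kms.fetch_key(key_id)

    def process_io(self, vm_id: str, data: bytes) -> bytes:
        """An I/O identified as belonging to a VM is processed with that VM's
        key (toy XOR transform here, purely for illustration)."""
        key = self.vm_keys[vm_id]
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```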
Abstract: A storage system and method for automatic data phasing are disclosed. In one embodiment, a storage system is configured to receive, from a host, data to be written in a memory of the storage system and an indication of an expected lifespan of the data, and to determine whether to perform a garbage collection operation on the data based on the expected lifespan. Other embodiments are provided.
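One plausible reading of the decision, sketched below: data that the host says will expire before the garbage-collection horizon is not worth copying forward. The `gc_horizon` parameter is an assumption, not the patent's terminology.

```python
def should_relocate(written_at: float, expected_lifespan: float,
                    now: float, gc_horizon: float) -> bool:
    """Relocate a still-valid block during garbage collection only if the
    host's lifespan hint says it will outlive the GC horizon; otherwise
    copying it forward is wasted write amplification, since it is about to
    expire anyway."""
    expires_at = written_at + expected_lifespan
    return expires_at > now + gc_horizon
```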
Abstract: Data caching may include storing data associated with DRAM transaction requests in data storage structures organized in a manner corresponding to the DRAM bank, bank group and rank organization. Data may be selected for transfer to the DRAM by selecting among the data storage structures.
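A sketch of the organization, assuming one FIFO per (rank, bank group, bank) tuple and a hypothetical `ready_banks` set describing which banks can currently accept a transfer; the selection policy shown is illustrative.

```python
from collections import defaultdict, deque

class DataCache:
    """Pending DRAM transaction data, organized to mirror rank / bank group /
    bank structure so the scheduler can pick transfers that suit DRAM timing."""
    def __init__(self):
        self.queues = defaultdict(deque)   # (rank, bank_group, bank) -> FIFO

    def enqueue(self, rank: int, bank_group: int, bank: int,
                addr: int, data: bytes):
        self.queues[(rank, bank_group, bank)].append((addr, data))

    def select_for_transfer(self, ready_banks: set):
        # Illustrative policy: take the oldest entry from any structure whose
        # target bank is currently able to accept a transfer.
        for key, q in self.queues.items():
            if key in ready_banks and q:
                return key, q.popleft()
        return None
```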
Abstract: A system and method are provided for simplifying load acquire and store release semantics that are used in reduced instruction set computing (RISC). Translating the semantics into micro-operations, or low-level instructions used to implement complex machine instructions, can avoid having to implement complicated new memory operations. Using one or more data memory barrier operations in conjunction with load and store operations can provide sufficient ordering as a data memory barrier ensures that prior instructions are performed and completed before subsequent instructions are executed.
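A sketch of the translation described above, modeled as a function from an architectural instruction to a micro-op sequence; the micro-op names and exact barrier placement are illustrative, not the patent's encoding.

```python
def expand_to_micro_ops(insn: str, addr: str) -> list:
    """One plausible expansion: a load-acquire becomes a plain load followed
    by a data memory barrier (DMB), so later accesses cannot complete before
    it; a store-release becomes a DMB followed by a plain store, so earlier
    accesses complete first."""
    if insn == "load_acquire":
        return [f"ld {addr}", "dmb"]
    if insn == "store_release":
        return ["dmb", f"st {addr}"]
    return [f"{insn} {addr}"]
```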
Abstract: A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
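A sketch of the structure, with illustrative field widths for the three address portions; the table answers "which cache lines hold data from this address region" without scanning every line.

```python
def split_pa(pa: int):
    """Split a physical address into (first, second, third) portions.
    Field widths here are illustrative, not taken from the patent."""
    third = pa & 0x3F              # third portion: offset within a line (6 bits)
    second = (pa >> 6) & 0x3FF     # second portion: cache line index (10 bits)
    first = pa >> 16               # first portion: remaining high bits
    return first, second, third

class IndexedCache:
    def __init__(self):
        self.lines = {}            # second portion -> stored line
        self.tables = {}           # first portion -> set of second portions

    def fill(self, pa: int, data: bytes):
        first, second, _ = split_pa(pa)
        self.lines[second] = (first, data)
        self.tables.setdefault(first, set()).add(second)
        # (Bookkeeping for the evicted line's table entry is omitted.)

    def lines_for_region(self, first: int) -> set:
        """Line indices holding data from this address region, answered from
        the table rather than by scanning every cache line."""
        return self.tables.get(first, set())
```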
Abstract: Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for data processing. According to an exemplary implementation of the present disclosure, a method for data processing includes: determining a type of target data associated with an access request, the type including at least one of: computation data type, recovery data type, and hot data type; selecting, based on the type, a target access mode of a storage device associated with the target data from a direct access mode and a block device mode; and causing the storage device to access the target data in the target access mode. As a result, quality of service for storage devices can be managed effectively.
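A sketch of the mode selection, assuming a particular mapping from data type to access mode; the real policy is a design choice the abstract leaves open.

```python
from enum import Enum, auto

class DataType(Enum):
    COMPUTATION = auto()
    RECOVERY = auto()
    HOT = auto()

class AccessMode(Enum):
    DIRECT_ACCESS = auto()     # byte-addressable, low-latency path
    BLOCK_DEVICE = auto()      # conventional block I/O path

def select_access_mode(data_type: DataType) -> AccessMode:
    # Illustrative policy: latency-sensitive computation and hot data take the
    # direct path; bulk recovery data takes the block path.
    if data_type in (DataType.COMPUTATION, DataType.HOT):
        return AccessMode.DIRECT_ACCESS
    return AccessMode.BLOCK_DEVICE
```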
Abstract: A method includes receiving, by a processing device, signaling indicative of a power cycle (PC) to a memory device (MD), the signaling having a first signal indicative of a power-on operation and a second signal indicative of a power-off operation, and determining an average power on time (APOT) of the MD based, at least in part, on a quantity of power cycles (n) to the MD over a predetermined time interval (PTI) and, for each PC over the PTI, the amount of time between receipt of the first signal and the second signal. The sum of these amounts of time over the PTI provides a total power on time (T) for the MD, and the APOT is equal to T/n. When the APOT is less than a threshold APOT value, the method includes determining a frequency at which to perform media scan operations and performing media scan operations involving the MD at the determined frequency.
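The arithmetic, sketched below with an illustrative scan-frequency policy (the abstract does not specify how the frequency is derived from the threshold comparison):

```python
def average_power_on_time(cycles: list) -> float:
    """cycles: (power_on_ts, power_off_ts) pairs observed within the PTI.
    T is the summed power-on time across cycles; APOT = T / n."""
    n = len(cycles)
    T = sum(off - on for on, off in cycles)
    return T / n

def media_scan_frequency(apot: float, threshold: float,
                         base_freq: float) -> float:
    # Illustrative policy: devices powered only briefly per cycle are scanned
    # more often, so scans still cover the media in the available on-time.
    if apot < threshold:
        return base_freq * (threshold / apot)
    return base_freq
```

For example, power-on sessions of 2 s, 4 s, and 6 s within the interval give T = 12 s and n = 3, so APOT = 4 s; against a threshold of 8 s, the illustrative policy doubles the scan frequency.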
Abstract: Various implementations described herein relate to systems and methods for managing metadata for an atomic write operation, including determining metadata for data, queuing the metadata in an atomic list, moving the metadata from the atomic list to write lookup lists based on logical information of the data in response to determining that an atomic commit has occurred, and determining one of the metadata pages of a non-volatile memory for each of the write lookup lists based on the logical information.
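A sketch of the two-stage metadata path, assuming write lookup lists are keyed by a metadata-page number derived from the LBA; the `lbas_per_metadata_page` granularity is an assumption.

```python
from collections import defaultdict

class AtomicWriteTracker:
    def __init__(self, lbas_per_metadata_page: int = 1024):
        self.atomic_list = []                    # staged metadata, pre-commit
        self.write_lookup = defaultdict(list)    # metadata page -> entries
        self.lbas_per_page = lbas_per_metadata_page

    def stage(self, lba: int, phys: int):
        """Queue metadata in the atomic list; invisible until commit."""
        self.atomic_list.append((lba, phys))

    def commit(self):
        """On atomic commit, route each entry to a write lookup list chosen by
        its logical information, which also selects the metadata page to update."""
        for lba, phys in self.atomic_list:
            page = lba // self.lbas_per_page
            self.write_lookup[page].append((lba, phys))
        self.atomic_list.clear()
```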
Abstract: Techniques for improving the read performance of a log-structured file system (LFS)-based storage system that supports copy-on-write (COW) snapshotting are provided. In one set of embodiments, the storage system can implement an intermediate map for each storage object in the system that is keyed by a composite key consisting of a snapshot identifier (major key) and a logical block address (LBA) (minor key). With this approach, contiguous LBAs of a storage object or its snapshots will map to contiguous <Snapshot ID, LBA>-to-physical block address (PBA) mappings in the storage object's intermediate map, resulting in good spatial locality for those LBAs and robust read performance.
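A dependency-free sketch of the intermediate map using a sorted key list; a production LFS would use a B-tree, but the locality argument is the same: all keys for one snapshot sort together, so a sequential read walks one contiguous run of the index.

```python
import bisect

class IntermediateMap:
    """(snapshot id, LBA) -> PBA, kept sorted on the composite key."""
    def __init__(self):
        self.keys = []    # sorted list of (snap_id, lba) tuples
        self.pbas = {}    # (snap_id, lba) -> pba

    def insert(self, snap_id: int, lba: int, pba: int):
        key = (snap_id, lba)
        if key not in self.pbas:
            bisect.insort(self.keys, key)
        self.pbas[key] = pba

    def range_lookup(self, snap_id: int, lba_start: int, lba_end: int):
        """A sequential read of [lba_start, lba_end) within one snapshot walks
        a single contiguous run of the index -- the spatial-locality win."""
        lo = bisect.bisect_left(self.keys, (snap_id, lba_start))
        hi = bisect.bisect_left(self.keys, (snap_id, lba_end))
        return [(k[1], self.pbas[k]) for k in self.keys[lo:hi]]
```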
Type:
Grant
Filed:
May 24, 2021
Date of Patent:
September 27, 2022
Assignee:
VMware, Inc.
Inventors:
Abhay Kumar Jain, Sriram Patil, Wenguang Wang, Enning Xiang, Asit A. Desai
Abstract: Techniques are provided for persistent hole reservation. For example, hole reservation flags of operations targeting a first storage object of a first node are replicated into replication operations targeting a second storage object of a second node during a transition operation to transition the first storage object and the second storage object from an asynchronous replication state to a synchronous replication state. In another example, the second storage object is grown to a size of a replication punch hole operation that failed due to targeting a file block number greater than an end of size of the second storage object.
Type:
Grant
Filed:
November 30, 2020
Date of Patent:
September 20, 2022
Assignee:
NetApp Inc.
Inventors:
Krishna Murthy Chandraiah Setty Narasingarayanapeta, Rakesh Bhargava M. R.
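A sketch of the second example in the abstract above, assuming a hypothetical secondary-object interface with `size()`, `grow()`, and `punch_hole()` operations:

```python
def replicate_punch_hole(secondary, offset_fbn: int, length_fbn: int,
                         block_size: int) -> None:
    """If the replicated punch-hole targets file block numbers beyond the
    secondary object's end of size, grow the object to cover the hole, then
    punch it. `secondary` and its methods are placeholders, not the patent's API."""
    end_fbn = secondary.size() // block_size
    if offset_fbn + length_fbn > end_fbn:
        secondary.grow((offset_fbn + length_fbn) * block_size)
    secondary.punch_hole(offset_fbn * block_size, length_fbn * block_size)
```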
Abstract: A host server in a server cluster has a memory allocator that creates a dedicated host application data cache in storage class memory. A background routine destages host application data from the dedicated cache in accordance with a destaging plan. For example, a newly written extent may be destaged based on aging. All extents may be flushed from the dedicated cache following host server reboot. All extents associated with a particular production volume may be flushed from the dedicated cache in response to a sync message from a storage array.
Type:
Grant
Filed:
October 10, 2019
Date of Patent:
September 20, 2022
Assignee:
EMC IP Holding Company LLC
Inventors:
Arieh Don, Adnan Sahin, Owen Martin, Peter Blok, Philip Derbeko
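A sketch of the destaging plan from the abstract above, with a hypothetical `storage_array.write()` interface; the aging policy shown is the simplest possible one.

```python
import time

class DedicatedCache:
    """Host-side application data cache in storage class memory."""
    def __init__(self, max_age_s: float):
        self.extents = {}           # (volume, extent id) -> (data, written_at)
        self.max_age_s = max_age_s

    def write(self, volume: str, extent_id: int, data: bytes):
        self.extents[(volume, extent_id)] = (data, time.time())

    def destage_aged(self, storage_array):
        """Background routine: destage extents older than the aging threshold."""
        now = time.time()
        for key, (data, ts) in list(self.extents.items()):
            if now - ts >= self.max_age_s:
                storage_array.write(key[0], key[1], data)
                del self.extents[key]

    def flush_volume(self, volume: str, storage_array):
        """Sync message from the array: flush every extent of one production volume."""
        for key in [k for k in self.extents if k[0] == volume]:
            data, _ = self.extents.pop(key)
            storage_array.write(key[0], key[1], data)
```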
Abstract: A semiconductor device includes a plurality of cores, each including an instruction execution circuit and a first cache, and a second cache shared by the plurality of cores. In each of the cores, the number of completed instructions of each type executed by the instruction execution circuit is counted, and an execution frequency for each type of instruction is calculated. Based on the execution frequencies, a cache line size preferable for use in the core's first cache is selected. Based on the preferable cache line sizes selected for the cores, a cache line size to be used in the first caches and the second cache is determined.
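A sketch of the two-level selection, assuming a given mapping from instruction type to preferred line size and a majority vote across cores; both policies are illustrative, as the abstract does not specify them.

```python
def core_line_size(insn_counts: dict, preferred_size: dict) -> int:
    """Per core: weight each instruction type's preferred line size by that
    type's execution frequency and pick the highest-scoring size."""
    total = sum(insn_counts.values())
    scores = {}
    for itype, count in insn_counts.items():
        size = preferred_size[itype]
        scores[size] = scores.get(size, 0.0) + count / total
    return max(scores, key=scores.get)

def chip_line_size(per_core_sizes: list) -> int:
    """Across cores: reconcile per-core choices into one size for the first
    caches and the shared second cache, here by majority vote."""
    return max(set(per_core_sizes), key=per_core_sizes.count)
```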
Abstract: A cache memory includes a first cache area corresponding to even addresses, and a second cache area corresponding to odd addresses, wherein each of the first and second cache areas includes a plurality of cache sets, and each cache set includes a data set field suitable for storing data corresponding to an address among the even and odd addresses, and a pair field suitable for storing information on the location of the data whose address is adjacent to the address of the stored data.
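A sketch of the parity-split layout. Note that adjacent even/odd addresses share a set index here, so the pair field is simply the neighbor's set number; the index arithmetic is illustrative.

```python
class ParitySplitCache:
    def __init__(self, num_sets: int):
        self.even = [None] * num_sets      # first cache area: even addresses
        self.odd = [None] * num_sets       # second cache area: odd addresses
        self.num_sets = num_sets

    def _locate(self, addr: int):
        area = self.even if addr % 2 == 0 else self.odd
        return area, (addr // 2) % self.num_sets

    def insert(self, addr: int, data: bytes):
        area, set_idx = self._locate(addr)
        # Pair field: where the adjacent address (addr ^ 1) lives in the
        # opposite area, so both halves of a pair can be found in one lookup.
        _, pair_set = self._locate(addr ^ 1)
        area[set_idx] = {"addr": addr, "data": data, "pair_set": pair_set}
```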
Abstract: A processing-in-memory device includes: a memory; a register configured to store offset information; and an internal processor configured to: receive an instruction and a reference physical address of the memory from a memory controller, determine an offset physical address of the memory based on the offset information, determine a target physical address of the memory based on the reference physical address and the offset physical address, and perform the instruction by accessing the target physical address.
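A sketch of the address resolution, with the offset held in an internal register and applied to each reference address from the memory controller; the instruction set shown is a placeholder.

```python
class ProcessingInMemory:
    def __init__(self, size: int):
        self.memory = bytearray(size)
        self.offset_register = 0       # register holding offset information

    def set_offset(self, offset: int):
        self.offset_register = offset

    def execute(self, opcode: str, reference_pa: int, operand: int = 0):
        """Target address = reference address from the memory controller plus
        the offset derived from the register's offset information."""
        target_pa = reference_pa + self.offset_register
        if opcode == "read":
            return self.memory[target_pa]
        if opcode == "add":
            self.memory[target_pa] = (self.memory[target_pa] + operand) & 0xFF
```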
Abstract: One example method includes identifying a group of asset backups to be performed, where each asset backup is associated with a respective asset and has an associated backup time and RPO (recovery point objective). An asset backup is selected to run first based on its start deadline relative to the respective start deadlines of the other asset backups, where each start deadline falls within a time slot. A stream is then selected for the chosen asset backup from a group of streams, namely the stream with the lowest value of first available time slot. Finally, the asset is backed up at a backup server by running the selected asset backup on the selected stream, beginning at the time when that stream becomes available.
Type:
Grant
Filed:
May 28, 2020
Date of Patent:
September 6, 2022
Assignee:
EMC IP HOLDING COMPANY LLC
Inventors:
Tiago Salviano Calmon, Hugo de Oliveira Barbalho, Eduardo Vera Sousa
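A greedy sketch of the scheduling policy from the abstract above: order backups by start deadline, then bind each to the stream with the lowest first-available time. The `duration` field is an assumption used to advance stream availability.

```python
def schedule_backups(backups: list, stream_free_at: dict) -> list:
    """backups: dicts with 'name', 'start_deadline', and 'duration' (seconds).
    stream_free_at: stream id -> first-available time. The earliest start
    deadline runs first; each backup takes the stream with the lowest
    availability time and begins when that stream becomes available."""
    plan = []
    for b in sorted(backups, key=lambda b: b["start_deadline"]):
        stream = min(stream_free_at, key=stream_free_at.get)
        start = stream_free_at[stream]
        plan.append((b["name"], stream, start))
        stream_free_at[stream] = start + b["duration"]
    return plan
```

With two streams and three backups, this interleaves the first two backups across the streams and starts the third on whichever stream frees up first.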
Abstract: A data storage device includes a storage, a buffer memory, and a controller. The controller is configured to control at least one of an input of data to and an output of data from the storage in response to a request transmitted from a host device. The controller is also configured to receive write data transmitted from the host device and cached in the buffer memory, encrypt the write data, and store the encrypted write data in the storage. The controller is further configured to receive read data read from the storage and cached in the buffer memory, decrypt the read data, and provide the decrypted read data to the host device.
Type:
Grant
Filed:
July 16, 2019
Date of Patent:
August 23, 2022
Assignee:
SK hynix Inc.
Inventors:
Hyung Min Kim, Do Hun Kim, Jae Han Park
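A sketch of the data path from the abstract above. The XOR keystream is a dependency-free stand-in for the controller's real cipher (typically AES-based in such devices); everything else mirrors the cache-encrypt-store and cache-decrypt-return flow.

```python
def keystream(buf: bytes, key: bytes) -> bytes:
    # Toy XOR stand-in for the controller's real cipher; illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(buf))

class Controller:
    def __init__(self, key: bytes):
        self.key = key
        self.buffer = {}       # buffer memory: LBA -> cached data
        self.storage = {}      # backing storage: LBA -> ciphertext

    def write(self, lba: int, data: bytes):
        self.buffer[lba] = data                          # cache host write data
        self.storage[lba] = keystream(data, self.key)    # encrypt, then store

    def read(self, lba: int) -> bytes:
        ciphertext = self.storage[lba]
        self.buffer[lba] = ciphertext                    # cache read data
        return keystream(ciphertext, self.key)           # decrypt for the host
```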
Abstract: There is provided a data processing apparatus comprising table circuitry to store a table that indicates, for a program counter value of an instruction that performs a memory access operation at a memory address, one or more offsets of the memory address and an associated confidence for each of the one or more offsets. Prefetch circuitry prefetches data based on each of the offsets in dependence on the associated confidence. Each of the offsets of the memory address is dynamically determined.
Type:
Grant
Filed:
January 15, 2020
Date of Patent:
August 16, 2022
Assignee:
Arm Limited
Inventors:
Joseph Michael Pusdesris, Alexander Cole Shulyak
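A sketch of the table and its training loop, assuming saturating confidence counters and a fixed prefetch threshold; the abstract specifies neither, only that offsets are dynamically determined and prefetching depends on confidence.

```python
class OffsetPrefetcher:
    THRESHOLD = 4      # minimum confidence to prefetch (illustrative)
    MAX_CONF = 15      # saturating counter ceiling (illustrative)

    def __init__(self):
        self.table = {}    # program counter value -> {offset: confidence}

    def on_access(self, pc: int, addr: int, recent_addrs: list) -> list:
        entry = self.table.setdefault(pc, {})
        # Dynamically determine offsets from distances to this PC's recent
        # accesses, bumping a saturating confidence counter per offset.
        for prev in recent_addrs:
            off = addr - prev
            if off != 0:
                entry[off] = min(entry.get(off, 0) + 1, self.MAX_CONF)
        # Prefetch along every offset whose confidence clears the threshold.
        return [addr + off for off, conf in entry.items()
                if conf >= self.THRESHOLD]
```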
Abstract: Systems and techniques are provided for performing snapshot and backup copy operations for individual virtual machines in shared storage. The system can include one or more shared physical computer storage devices, communicatively coupled to a hypervisor, that store a plurality of virtual machines. A plurality of storage volumes can be provided in the one or more shared physical computer storage devices, with each storage volume uniquely corresponding to one of the virtual machines. The system can issue a command to the hypervisor to perform a snapshot or backup copy operation in accordance with a particular information management policy.
Abstract: Techniques are provided for dynamic snapshot scheduling. In an example, a dynamic snapshot scheduler can analyze historical data about storage system resources. The dynamic snapshot scheduler can use this historical data to predict how the storage system resources will be used in the future. Based on this prediction, the dynamic snapshot scheduler can schedule snapshot activities for one or more times that are relatively unlikely to experience system resource contention. The dynamic snapshot scheduler can then initiate snapshot activities at those scheduled times.
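A naive sketch of the prediction step: treat each hour's historical mean utilization as its forecast and schedule snapshot activity in the quietest hour. The patent's predictive model is unspecified; this is the simplest stand-in.

```python
def pick_snapshot_hour(hourly_load: dict) -> int:
    """hourly_load: hour of day -> list of historical utilization samples.
    Forecast each hour as its historical mean and schedule snapshot
    activities in the hour least likely to contend for system resources."""
    forecast = {h: sum(s) / len(s) for h, s in hourly_load.items() if s}
    return min(forecast, key=forecast.get)
```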
Abstract: An information processing device includes a receiver that receives a plurality of pieces of control information from another device, and a controller that adds, in order of reception, the plurality of pieces of control information received from the other device to a predetermined storage area, and adds, to a first buffer area related to first control information stored in the predetermined storage area, partial data of first input data corresponding to the first control information, in accordance with the order of reception of the partial data.
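A sketch of the two ordered structures: a reception-ordered queue of control information and a per-control-information buffer that accumulates partial data in arrival order. The identifiers are placeholders.

```python
from collections import deque

class InfoReceiver:
    def __init__(self):
        self.control_queue = deque()   # control info in order of reception
        self.buffers = {}              # control info id -> buffered partial data

    def on_control(self, ctrl_id: int):
        """Add received control information to the storage area, in order."""
        self.control_queue.append(ctrl_id)
        self.buffers[ctrl_id] = []

    def on_partial_data(self, ctrl_id: int, chunk: bytes):
        """Append partial input data to the buffer tied to its control
        information, preserving the order in which the pieces arrived."""
        self.buffers[ctrl_id].append(chunk)
```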