Abstract: An operation method of a storage device including first and second physical functions respectively corresponding to first and second hosts includes receiving performance information from each of the first and second hosts, setting a first weight value corresponding to the first physical function and a second weight value corresponding to the second physical function, based on the received performance information, selecting one of a first submission queue, a second submission queue, a third submission queue, and a fourth submission queue based on an aggregated value table, the first and second submission queues being managed by the first host and the third and fourth submission queues being managed by the second host, processing a command from the selected submission queue, and updating the aggregated value table based on a weight value corresponding to the processed command from among the first and second weight values and input/output (I/O) information of the processed command.
Type:
Grant
Filed:
December 21, 2020
Date of Patent:
August 9, 2022
Assignee:
SAMSUNG ELECTRONICS CO., LTD.
Inventors:
Myung Hyun Jo, Youngwook Kim, Jinwoo Kim, Jaeyong Jeong
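The weighted arbitration in the entry above lends itself to a small sketch. The following is a minimal, hypothetical model, not the patented implementation: each submission queue carries a weight derived from its host's performance information, the arbiter picks the non-empty queue with the lowest entry in the aggregated value table, and that entry then grows by the weight times the I/O size of the processed command. All identifiers and the lowest-value selection rule are assumptions.

```python
# Minimal sketch; identifiers and the lowest-aggregated-value rule are assumptions,
# not the patented implementation.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class SubmissionQueue:
    name: str
    weight: int                                      # set from the host's performance information
    commands: deque = field(default_factory=deque)   # pending commands (here, just I/O sizes)

def select_queue(aggregated: dict, queues: list) -> SubmissionQueue:
    """Pick the non-empty queue with the lowest aggregated value."""
    candidates = [q for q in queues if q.commands]
    return min(candidates, key=lambda q: aggregated[q.name])

def process_one(aggregated: dict, queues: list) -> None:
    q = select_queue(aggregated, queues)
    io_size = q.commands.popleft()             # I/O information of the processed command
    aggregated[q.name] += q.weight * io_size   # update the aggregated value table

# One queue per host for brevity; weights come from per-host performance information.
sq1 = SubmissionQueue("host1_sq1", weight=1, commands=deque([4, 4]))
sq3 = SubmissionQueue("host2_sq1", weight=3, commands=deque([4, 4]))
table = {q.name: 0 for q in (sq1, sq3)}
while sq1.commands or sq3.commands:
    process_one(table, [sq1, sq3])
print(table)   # -> {'host1_sq1': 8, 'host2_sq1': 24}
```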
Abstract: A host interface layer in a storage device is described. The host interface layer may include an arbitrator to select a first submission queue (SQ) from a set including at least the first SQ and a second SQ. The first SQ may be associated with a first Quality of Service (QoS) level, and the second SQ may be associated with a second QoS level. A command fetcher may retrieve an input/output (I/O) request from the first SQ. A command parser may place the I/O request in a first command queue from a set including at least the first command queue and a second command queue. The arbitrator may be configured to select the first SQ based at least in part on a first weight associated with the first SQ and a second weight associated with the second SQ.
Type:
Grant
Filed:
March 1, 2021
Date of Patent:
August 9, 2022
Inventors:
Ramzi Ammari, Rajinikanth Pandurangan, Changho Choi, Zongwang Li
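A compact sketch of the arbitrate/fetch/parse pipeline described in the entry above, under assumed names: the arbitrator services two submission queues in proportion to their QoS weights (a simple weighted round robin, chosen here purely for illustration), the command fetcher pops an I/O request, and the command parser routes it to a per-QoS command queue.

```python
# Illustrative pipeline; the weighted round-robin scheme and identifiers are
# assumptions, not the patented design.
from collections import deque
from itertools import cycle

class HostInterfaceLayer:
    def __init__(self, sq_weights: dict):
        self.weights = dict(sq_weights)
        self.sqs = {name: deque() for name in self.weights}   # submission queues
        self.cqs = {name: deque() for name in self.weights}   # per-QoS command queues
        # Expand weights into a repeating service pattern, e.g. {"high": 3, "low": 1}
        # yields high, high, high, low, high, high, high, low, ...
        pattern = [name for name, w in self.weights.items() for _ in range(w)]
        self._arbitration = cycle(pattern)

    def arbitrate(self):
        """Select the next SQ to service, skipping empty ones."""
        for _ in range(sum(self.weights.values())):
            name = next(self._arbitration)
            if self.sqs[name]:
                return name
        return None

    def service_one(self) -> None:
        name = self.arbitrate()
        if name is None:
            return
        request = self.sqs[name].popleft()   # command fetcher retrieves the I/O request
        self.cqs[name].append(request)       # command parser places it in a command queue

hil = HostInterfaceLayer({"qos_high": 3, "qos_low": 1})
hil.sqs["qos_high"].extend(["read A", "read B"])
hil.sqs["qos_low"].append("write C")
for _ in range(3):
    hil.service_one()
```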
Abstract: A key-value storage architecture with data compression is shown. A computing unit is configured to estimate the average compression rate factor of a non-volatile memory. The computing unit is further configured to estimate storage space consumption of the non-volatile memory based on the average compression rate factor, and programming of the non-volatile memory is prohibited if the storage space consumption exceeds a predefined threshold. The average compression rate factor is dynamically updated, and is a weighted result of compression rate factors of several storage units of the non-volatile memory.
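A worked sketch of the estimate described above, with assumed numbers: the average compression rate factor is taken as a size-weighted mean over storage units, and programming is refused once the estimated physical consumption crosses a threshold fraction of capacity.

```python
# Illustrative arithmetic; variable names and the size-weighted mean are assumptions.
def average_compression_rate(units) -> float:
    """Weighted average of per-unit compression rate factors.

    `units` is a list of (bytes_written, compression_rate_factor) pairs, one per
    storage unit of the non-volatile memory; larger units weigh more heavily.
    """
    total = sum(size for size, _ in units)
    return sum(size * rate for size, rate in units) / total

def may_program(logical_bytes: int, units, capacity: int, threshold: float = 0.95) -> bool:
    """Estimate physical consumption and prohibit programming above the threshold."""
    estimated = logical_bytes * average_compression_rate(units)
    return (estimated / capacity) <= threshold

units = [(1 << 30, 0.5), (2 << 30, 0.7)]   # two storage units with different ratios
print(may_program(logical_bytes=100 << 30, units=units, capacity=64 << 30))   # -> False
```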
Abstract: A method and system for managing a buffer device in a storage system. The method includes determining a first priority for a first queue included in the buffer device, the first queue comprising at least one data page associated with a first storage device in the storage system; in at least one round, in response to the first priority not satisfying a first predetermined condition, updating the first priority according to a first updating rule, the first updating rule bringing the updated first priority closer to the first predetermined condition than the first priority; and in response to the first priority satisfying the first predetermined condition, flushing data in a data page in the first queue to the first storage device.
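A minimal sketch of the round-based priority update, assuming a linear update rule and a numeric threshold; both are illustrative stand-ins for the "first updating rule" and "first predetermined condition".

```python
# Minimal sketch; the linear update rule and numeric threshold stand in for the
# "first updating rule" and "first predetermined condition".
def flush_when_ready(queue_pages, priority, threshold=100, step=10, flush=print):
    """Raise the queue's priority each round until the flush condition holds, then
    flush the queue's data pages to the first storage device."""
    while priority < threshold:       # condition not yet satisfied
        priority += step              # updating rule: move closer each round
    for page in queue_pages:
        flush(page)                   # flush data pages to the storage device
    return priority

flush_when_ready(["page-0", "page-1"], priority=40)
```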
Abstract: Systems, apparatuses, and methods related to managing memory objects are discussed. An example method can include monitoring a first characteristic set for each of a plurality of memory objects written to a first memory device or a second memory device; monitoring a second characteristic set for each of the plurality of memory objects; monitoring a performance characteristic set for the first memory device and the second memory device, wherein the first memory device and the second memory device comprise different types of memory media; and writing each of the plurality of memory objects in a particular respective location of the first memory device or the second memory device based, at least in part, upon the first characteristic set, the second characteristic set, and the performance characteristic set.
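One way to picture the placement decision is a small heuristic like the sketch below; the particular characteristics used (access count, size, device latency) and the scoring rule are assumptions, not the claimed method.

```python
# Small illustrative heuristic; the characteristics and scoring rule are assumptions,
# not the claimed method.
def place(obj: dict, devices: list) -> dict:
    """Send hot (frequently accessed, large) objects to the lower-latency medium and
    everything else to the higher-latency one."""
    hotness = obj["access_count"] * obj["size_bytes"]          # object characteristic sets
    by_speed = sorted(devices, key=lambda d: d["latency_us"])  # device performance characteristics
    return by_speed[0] if hotness > 1_000_000 else by_speed[-1]

devices = [{"name": "dram", "latency_us": 0.1}, {"name": "nand", "latency_us": 80}]
print(place({"access_count": 5_000, "size_bytes": 4_096}, devices)["name"])   # -> dram
```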
Abstract: A system and method for prolonging lifespans of storage drives. The method includes determining an expected expiration time for each of a plurality of blocks, wherein each block includes data of a respective file, wherein the expected expiration time of each block is determined based on a file type of the respective file; and writing a portion of data to at least one block of the plurality of blocks based on the expected expiration time for each block.
Type:
Grant
Filed:
November 3, 2020
Date of Patent:
July 26, 2022
Assignee:
Vast Data Ltd.
Inventors:
Renen Hallak, Vladimir Zdornov, Yogev Vaknin, Asaf Levy, Alex Turin
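A sketch of expiration-based placement along the lines of the entry above, assuming fixed per-file-type lifetimes and a rule that co-locates incoming data with blocks expiring at a similar time; the lifetime table and the selection rule are illustrative.

```python
# Illustrative sketch; the per-file-type lifetimes and the nearest-expiration rule
# are assumed values, not the claimed method.
EXPECTED_LIFETIME_S = {"log": 3_600, "tmp": 600, "archive": 10 * 365 * 24 * 3_600}

def expected_expiration(now_s: int, file_type: str) -> int:
    return now_s + EXPECTED_LIFETIME_S.get(file_type, 24 * 3_600)

def pick_block(blocks: list, file_type: str, now_s: int) -> dict:
    """Prefer the block whose expected expiration is closest to the incoming data's,
    so whole blocks tend to expire together and can be erased without relocation."""
    target = expected_expiration(now_s, file_type)
    return min(blocks, key=lambda b: abs(b["expected_expiration"] - target))

blocks = [{"id": 0, "expected_expiration": 1_000}, {"id": 1, "expected_expiration": 900_000}]
print(pick_block(blocks, "tmp", now_s=0)["id"])   # -> 0
```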
Abstract: A method of operating a memory is provided. The method includes, in response to an access of a block of memory, updating a first queue to identify the accessed block in response to a determination that the block is not already identified in the first queue and a determination that the block is not already identified in a second queue, and updating the second queue to identify the accessed block of memory in response to a determination that the block is already identified in the first queue. The method further includes scanning the second queue to identify, as a read setup candidate, each block of the memory that is identified as present in the second queue longer than a threshold, and performing a read setup operation on a block of memory that has been identified as the read setup candidate.
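The two-queue tracking can be sketched roughly as follows, assuming set/dict structures, an explicit promotion from the first queue to the second, and a numeric age threshold; these details are illustrative, not the claimed design.

```python
# Illustrative sketch; set/dict queues, explicit promotion, and the numeric age
# threshold are assumptions, not the claimed design.
class ReadSetupTracker:
    def __init__(self, age_threshold: int):
        self.first = set()       # blocks accessed once
        self.second = {}         # block -> time it entered the second queue
        self.age_threshold = age_threshold

    def on_access(self, block: str, now: int) -> None:
        if block in self.first:                  # already in the first queue: promote it
            self.first.discard(block)
            self.second.setdefault(block, now)
        elif block not in self.second:           # not tracked in either queue yet
            self.first.add(block)

    def read_setup_candidates(self, now: int) -> list:
        """Blocks resident in the second queue longer than the threshold."""
        return [b for b, t in self.second.items() if now - t > self.age_threshold]

tracker = ReadSetupTracker(age_threshold=5)
tracker.on_access("blk7", now=0)
tracker.on_access("blk7", now=1)                 # second access moves it to the second queue
print(tracker.read_setup_candidates(now=10))     # -> ['blk7']
```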
Abstract: Representative embodiments set forth herein disclose techniques for implementing improved links between paths of one or more file systems. According to some embodiments, techniques are disclosed for establishing a system volume and a data volume within a container. According to other embodiments, techniques are disclosed for establishing a link from a source path of a system volume within a container to a target path of a data volume within the container. According to yet other embodiments, techniques are disclosed for determining whether to allow a file system operation on a data volume of a container based on at least determining whether a target path is associated with a reference to a source path.
Type:
Grant
Filed:
May 20, 2020
Date of Patent:
July 12, 2022
Inventors:
Vivek Verma, Damien P. Sorresso, Pavel Sokolov, Pierre-Olivier J. Martel, Eric B. Tamura, Yoni Baron
Abstract: A storage system that supports multiple RAID levels presents storage objects with front-end tracks corresponding to back-end tracks on non-volatile drives and accesses the drives using a single type of back-end allocation unit that is larger than a back-end track. When the number of members of a protection group of a RAID level does not align with the back-end allocation unit, multiple back-end tracks are grouped and accessed using a single IO. The number of back-end tracks in a group is selected to align with the back-end allocation unit size. If the front-end tracks are variable size, then front-end tracks may be destaged into a smaller number of grouped back-end tracks in a single IO.
Type:
Grant
Filed:
April 8, 2021
Date of Patent:
June 28, 2022
Assignee:
Dell Products L.P.
Inventors:
Peng Wu, Rong Yu, Jiahui Wang, Lixin Pang
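The track-grouping rule in the entry above amounts to an alignment calculation. A hedged sketch with assumed sizes: pick the smallest group of back-end tracks whose combined size is a whole multiple of the back-end allocation unit.

```python
# Illustrative alignment arithmetic; the sizes and the lcm-based rule are assumptions.
from math import lcm

def tracks_per_group(back_end_track_bytes: int, allocation_unit_bytes: int) -> int:
    """Smallest number of back-end tracks whose combined size is a whole number of
    back-end allocation units, so a group can be accessed with a single IO."""
    return lcm(back_end_track_bytes, allocation_unit_bytes) // back_end_track_bytes

# e.g. 56 KiB back-end tracks against a 128 KiB allocation unit -> groups of 16 tracks
print(tracks_per_group(56 * 1024, 128 * 1024))
```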
Abstract: A method for executing a hard disk operation command, a hard disk, and a storage medium are provided. After an operation command is received, a target LUN in an idle state is determined; a target physical block that is to be accessed when the operation command is executed is determined from the target LUN; the operation command is stored in a processing waiting queue corresponding to a flash memory chip to which the target physical block belongs; and a working state of the target LUN is changed to a non-idle state when a quantity of operation commands that wait to be processed in a processing waiting queue respectively corresponding to each flash memory chip in the target LUN is greater than a preset threshold.
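A rough sketch of the queuing step, assuming a LUN object with one waiting queue per flash memory chip, a simple modulo mapping from physical block to chip, and the reading that the LUN leaves the idle state once every chip's queue exceeds the threshold; all of these are assumptions.

```python
# Rough sketch; the block-to-chip mapping and the threshold interpretation are
# assumptions made for illustration.
from collections import deque

class Lun:
    def __init__(self, chip_count: int, threshold: int):
        self.idle = True
        self.waiting = [deque() for _ in range(chip_count)]   # one queue per flash chip
        self.threshold = threshold

    def enqueue(self, command: str, target_physical_block: int) -> None:
        chip = target_physical_block % len(self.waiting)      # chip owning the target block
        self.waiting[chip].append(command)
        # Change the LUN's working state to non-idle once the waiting queue of each
        # flash memory chip holds more commands than the preset threshold.
        if all(len(q) > self.threshold for q in self.waiting):
            self.idle = False

lun = Lun(chip_count=2, threshold=1)
for block in (0, 1, 2, 3):
    lun.enqueue(f"write blk{block}", block)
print(lun.idle)   # -> False once both chip queues hold more than one command
```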
Abstract: Data units are stored in private caches in nodes of a multiprocessor system, each node containing at least one processor (CPU), at least one cache private to the node and at least one cache location buffer (CLB) private to the node. In each CLB location information values are stored, each location information value indicating a location associated with a respective data unit, wherein each location information value stored in a given CLB indicates the location to be either a location within the private cache disposed in the same node as the given CLB, to be a location in one of the other nodes, or to be a location in a main memory. Coherence of values of the data units is maintained using a cache coherence protocol. The location information values stored in the CLBs are updated by the cache coherence protocol in accordance with movements of their respective data units.
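The per-node cache location buffer can be pictured as a map from data unit to a location information value, as in the sketch below; the representation of a location (level tag plus node id) and the update hook are assumptions, not the claimed protocol.

```python
# Illustrative sketch of a per-node cache location buffer (CLB); the location
# representation and the coherence hook are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocationInfo:
    level: str                  # "private_cache", "remote_node", or "main_memory"
    node: Optional[int] = None  # which node holds the data unit, when applicable

class Node:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.clb = {}           # data unit id -> LocationInfo

    def record_move(self, data_unit: str, new_location: LocationInfo) -> None:
        """Coherence-protocol hook: keep the CLB in step as data units move."""
        self.clb[data_unit] = new_location

n0 = Node(0)
n0.record_move("unit42", LocationInfo("private_cache", node=0))
n0.record_move("unit42", LocationInfo("remote_node", node=1))   # unit migrated away
print(n0.clb["unit42"].level)                                   # -> remote_node
```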
Abstract: An operating method of a data storage system comprising a processor and multiple storage devices, the operating method comprising: a first storage operation of selecting a first storage device, a second storage device, and a third storage device among the multiple storage devices and transmitting and storing data generated by the processor in the first storage device and the second storage device, a second storage operation of transmitting, to the third storage device, the data stored in the second storage device and compressing and storing the data in the third storage device, a first access operation of accessing the data in the first storage device, by the processor, after the first storage operation is completed, and a second access operation of accessing the data in the second storage device after failure of the first access operation.
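A toy sketch of the store-and-fallback flow, assuming dictionary-backed "devices" and zlib as a stand-in for the compression step.

```python
# Toy sketch; dictionary-backed "devices" and zlib stand in for real storage and
# the compression step.
import zlib

def store(data: bytes, dev1: dict, dev2: dict, dev3: dict) -> None:
    dev1["copy"] = data                          # first storage operation: two plain copies
    dev2["copy"] = data
    dev3["copy"] = zlib.compress(data)           # second storage operation: compressed copy

def access(dev1: dict, dev2: dict) -> bytes:
    try:
        return dev1["copy"]                      # first access operation
    except KeyError:                             # fall back after the first access fails
        return dev2["copy"]                      # second access operation

d1, d2, d3 = {}, {}, {}
store(b"payload", d1, d2, d3)
del d1["copy"]                                   # simulate failure of the first access
print(access(d1, d2))                            # -> b'payload'
```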
Abstract: An example memory sub-system includes a memory device and a processing device, operatively coupled to the memory device. The processing device is configured to receive a read command specifying an identifier of a logical block and a page number; translate the identifier of the logical block into a physical address of a physical block stored on the memory device, wherein the physical address comprises an identifier of a memory device die; identify, based on block family metadata associated with the memory device, a block family associated with the physical block and the page number; determine a threshold voltage offset associated with the block family and the memory device die; compute a modified threshold voltage by applying the threshold voltage offset to a base read level voltage associated with the memory device die; and read, using the modified threshold voltage, data from a physical page identified by the page number within the physical block.
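The read path described above reduces to a few table lookups plus one addition, sketched below with assumed mapping structures and millivolt values; the real logical-to-physical translation and block family metadata are device-internal.

```python
# Illustrative sketch; the mapping tables and millivolt values are assumptions.
BASE_READ_LEVEL_MV = {0: 1_000, 1: 980}                    # per-die base read level
BIN_OFFSET_MV = {("familyA", 0): -40, ("familyA", 1): -35} # per (family, die) offset

def effective_read_voltage(logical_block: int, page: int, l2p: dict, families: dict) -> int:
    """Translate the logical block to a physical address, look up the block family for
    that block and page, then apply the family's offset to the die's base read level."""
    die, physical_block = l2p[logical_block]              # logical-to-physical translation
    family = families[(physical_block, page)]             # block family metadata lookup
    return BASE_READ_LEVEL_MV[die] + BIN_OFFSET_MV[(family, die)]

l2p = {7: (0, 123)}
families = {(123, 4): "familyA"}
print(effective_read_voltage(7, 4, l2p, families))        # -> 960
```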
Abstract: A system can include a memory device and a processing device to perform operations that include identifying one or more first voltage offset bins of the memory device, each of the first voltage offset bins satisfying a first age threshold criterion, identifying one or more second voltage offset bins of the memory device, each of the second voltage offset bins satisfying a second age threshold criterion, identifying a first block family associated with one of the first voltage offset bins, and performing a first scan of a first block of the first block family by: identifying, based on determined values of a first data state metric, a first identified voltage offset bin, and identifying one or more values of a second data state metric in scan metadata generated by a second scan, and identifying, based on the one or more values of the second data state metric, a second identified voltage offset bin.
Type:
Grant
Filed:
November 16, 2020
Date of Patent:
May 24, 2022
Assignee:
Micron Technology, Inc.
Inventors:
Vamsi Pavan Rayaprolu, Shane Nowell, Michael Sheperek
Abstract: Techniques for issuing efficient writes to an erasure coded storage object in a distributed storage system are provided. In one set of embodiments, a node of the system can receive a write request for updating a logical data block of the storage object, write data/metadata for the block to a record in a data log of a metadata object of the storage object (where the metadata object is stored on a performance storage tier), place the block data in a free slot of an in-memory bank, and determine whether the in-memory bank has become full. If the in-memory bank is full, the node can further allocate a segment in a capacity object of the storage object for holding contents of the in-memory bank (where the capacity object is stored on a capacity storage tier), and write the in-memory bank contents via a full stripe write to the allocated segment.
Type:
Grant
Filed:
April 7, 2020
Date of Patent:
May 17, 2022
Assignee:
VMware, Inc.
Inventors:
Wenguang Wang, Vamsi Gunturu, Eric Knauft, Pascal Renauld
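A simplified sketch of the write path in the entry above, with an assumed bank size, log record shape, and callbacks for segment allocation and full stripe writes; it is illustrative, not VMware's implementation.

```python
# Simplified sketch; the bank size, log record shape, and callbacks are assumptions.
class ECWritePath:
    def __init__(self, bank_slots, allocate_segment, full_stripe_write, data_log):
        self.bank = []                                # in-memory bank of block data
        self.bank_slots = bank_slots
        self.allocate_segment = allocate_segment      # allocates space in the capacity object
        self.full_stripe_write = full_stripe_write    # full stripe write to a segment
        self.data_log = data_log                      # data log of the metadata object

    def write_block(self, lba, data):
        self.data_log.append({"lba": lba, "data": data})   # performance-tier log record
        self.bank.append((lba, data))                      # place block in a free bank slot
        if len(self.bank) == self.bank_slots:              # bank full: drain to capacity tier
            segment = self.allocate_segment()
            self.full_stripe_write(segment, self.bank)
            self.bank = []

log = []
path = ECWritePath(bank_slots=4,
                   allocate_segment=lambda: "segment-0",
                   full_stripe_write=lambda seg, bank: print(f"stripe -> {seg}: {len(bank)} blocks"),
                   data_log=log)
for lba in range(4):
    path.write_block(lba, b"...")
```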
Abstract: Techniques for supporting large segments when issuing writes to an erasure coded storage object in a distributed storage system are provided. In one set of embodiments, a node of the system can pre-allocate a segment of space in a capacity object of the storage object, receive a write request for updating a logical data block of the storage object, write data/metadata for the block to a record in a data log of a metadata object of the storage object, place the block in an in-memory bank, and determine whether the in-memory bank has become full. If so, the node can compute/fill-in one or more parity blocks for each stripe of the storage object in the in-memory bank and write, based on a next sub-segment pointer pointing to a free sub-segment of the pre-allocated segment, the contents of the in-memory bank via a full stripe write to the free sub-segment.
Abstract: Various implementations described herein relate to systems and methods for a solid state drive (SSD) that include requesting power credits while performing a program or erase operation for a flash memory of the SSD. In response to determining that the requested power credits are rejected, the program or erase operation is suspended and its power credits are released. A read operation may then be performed in response to suspending the program or erase operation and releasing its power credits.
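The control flow can be sketched in a few lines, assuming callback-style hooks for the power-credit request, suspend, release, and read steps; the API shape is an assumption.

```python
# Sketch of the suspend-on-rejection flow; the callback-style API is an assumption.
def handle_read_during_program(request_credits, release_credits,
                               suspend_program_or_erase, perform_read,
                               credits_needed: int) -> None:
    """Request power credits for the in-flight program/erase; if they are rejected,
    suspend the operation, release its credits, and service the read instead."""
    if not request_credits(credits_needed):
        suspend_program_or_erase()
        release_credits()
        perform_read()
```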
Abstract: Examples of the present disclosure provide apparatuses and methods related to performing a sort operation in a memory. An example apparatus might include a first group of memory cells coupled to a first sense line, a second group of memory cells coupled to a second sense line, and a controller configured to control sensing circuitry to sort a first element stored in the first group of memory cells and a second element stored in the second group of memory cells by performing an operation without transferring data via an input/output (I/O) line.
Abstract: Data processing methods, data processing apparatuses, and storage media are provided. The method is applicable to a data processing system. The data processing system includes a storage device and a programmable device. Data is transmitted between the storage device and the programmable device via a bus. A controller and an accelerator are deployed on the programmable device. The controller is enabled with at least two kinds of data format conversion functions. The method includes: the controller obtaining first data; the controller performing data format conversion on the first data to obtain second data in a target data format; and the controller storing the second data to the storage device and/or sending the second data to the accelerator.
Type:
Grant
Filed:
November 13, 2020
Date of Patent:
April 26, 2022
Assignee:
Beijing Sensetime Technology Development Co., Ltd.
Abstract: A memory controller for controlling a memory device for storing data, the memory controller comprising: a request transmitter for providing a program suspend request for suspending a program operation when the memory device receives a read request from a host while the memory device is performing the program operation; and a command controller for generating and outputting a program suspend command, based on the program suspend request, and outputting a cache read command or a normal read command, based on a number of commands corresponding to a request received from the host, which are queued in a command queue.
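A hypothetical sketch of the read-type decision, assuming the choice between a cache read and a normal read hinges on how many queued host commands could overlap data transfers; the rule and the threshold are assumptions, not the claimed logic.

```python
# Hypothetical decision logic; the queue-depth rule and the threshold are assumptions.
def issue_read(queued_command_count: int, issue_program_suspend, issue_cache_read,
               issue_normal_read, cache_read_min_depth: int = 2) -> None:
    """On a host read arriving mid-program, issue a program suspend command, then
    pick a cache read when enough host commands are queued to overlap transfers,
    otherwise fall back to a normal read."""
    issue_program_suspend()
    if queued_command_count >= cache_read_min_depth:
        issue_cache_read()
    else:
        issue_normal_read()
```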