Abstract: A memory system includes a memory device including dies, each of the dies including planes, each of the planes including blocks, and each of the blocks including pages; and a controller suitable for controlling the memory device, the controller comprising: a memory including a mapping table which includes map chunks generated by dividing map data into chunks each of a unit size; a pattern determination engine suitable for determining a pattern for each of the map chunks received from the memory; and a compression engine suitable for determining whether to compress each map chunk, based on the pattern determination results from the pattern determination engine, and compressing those map chunks for which compression was determined.
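The divide-into-chunks / determine-pattern / selectively-compress flow described above can be illustrated with a minimal sketch. The `looks_compressible` heuristic, the 16-byte chunk size, and the use of `zlib` are illustrative assumptions, not the patent's actual pattern determination or compression engines:

```python
import zlib

CHUNK = 16  # hypothetical unit size of one map chunk, in bytes

def looks_compressible(chunk: bytes) -> bool:
    # Stand-in pattern determination: chunks dominated by repeated
    # byte values (e.g. runs of identical mappings) are flagged.
    return len(set(chunk)) <= len(chunk) // 2

def pack_mapping_table(map_data: bytes):
    # Divide the map data into unit-size chunks, then compress only
    # those chunks whose pattern was determined to be compressible.
    entries = []
    for i in range(0, len(map_data), CHUNK):
        chunk = map_data[i:i + CHUNK]
        if looks_compressible(chunk):
            entries.append(("z", zlib.compress(chunk)))
        else:
            entries.append(("raw", chunk))
    return entries
```

A chunk of repeated zero bytes would be compressed, while a chunk of sixteen distinct byte values would be stored raw.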
Abstract: A system and method for managing data includes identifying, in response to a storage request from a tenant system, a first data protection pool based on a data classification analysis performed on data associated with the storage request and initiating storage of data associated with the storage request in a first storage system associated with the first data protection pool. A pattern matching model and data sampled from the tenant system may be used to identify data characteristics, which may include data type, data retention, data sensitivity, and data location. At least some data characteristics may be obtained using a plugin to a tenant system on which the data associated with the storage request is stored.
Abstract: Methods, systems, and machine-readable storage media for multi-tier data recovery utilizing a series of progressively more complex detection and decoding modes based on data from additional pages or wordlines. In one aspect, read data is obtained from at least one cell comprising a given page of a flash memory, and reliability values are generated for the cell from the read data. The reliability values are utilized to decode the read data for the given page. If the decoding of the read data fails, a series of successive decoding steps is performed, with each successive decoding step utilizing additional read data to generate reliability values for the decoding. In one example, reads of one or more additional pages in the same wordline are performed. In a second example, several read retries (soft reads) of the same wordline are performed. In a third example, one or more additional neighboring wordlines are read.
Type: Grant
Filed: December 21, 2017
Date of Patent: February 23, 2021
Assignee: Seagate Technology LLC
Inventors: Erich F. Haratsch, AbdelHakim S. Alhussien
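The progressive recovery tiers in the abstract above (same-wordline page reads, soft-read retries, then neighboring wordlines) amount to a retry ladder that only escalates when decoding fails. A minimal sketch, where the `decode` stub and the accumulated-evidence list are hypothetical stand-ins for the real soft-decision decoder and reliability values:

```python
def decode(reliability_values):
    # Stand-in decoder: succeeds once enough read evidence accumulates.
    return sum(reliability_values) >= 3

def recover_page(initial_read):
    evidence = [initial_read]
    if decode(evidence):
        return "hard-read decode"
    # Tier 1: read the other pages sharing the same wordline.
    evidence.append(1)  # stubbed extra-page read
    if decode(evidence):
        return "same-wordline pages"
    # Tier 2: soft reads (read retries) of the same wordline.
    evidence.append(1)  # stubbed read retry
    if decode(evidence):
        return "soft reads"
    # Tier 3: read neighboring wordlines to refine reliability values.
    evidence.append(1)  # stubbed neighbor read
    if decode(evidence):
        return "neighbor wordlines"
    return "uncorrectable"
```

Each tier adds read data only when the cheaper mode fails, mirroring the "progressively more complex" ordering of the abstract.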
Abstract: An arithmetic processor includes a request generation circuit which generates an information request including a request address. A translation buffer associates a virtual address of a page with a physical address (PA). A page-table buffer associates data in a page table in a level other than the last level with a PA of the data, and stores the associated data and address. A controller circuit obtains, from the request address, a PA of data in a page table to be accessed when the request address is not stored in the translation buffer. The controller circuit searches in the page-table buffer for the data when the page table to be accessed is in a level other than the last level. The controller circuit obtains the data from a memory, such as a cache memory or a main memory, when the page table to be accessed is in the last level, and registers the data in the translation buffer. The translation buffer may output an erase signal to invalidate all entries in the page-table buffer.
Abstract: Methods, devices and computer program products for data backup are disclosed. The method includes receiving, from a destination node, a workload of a backup job, the workload being determined by the destination node in response to a request for the backup job from a source node, and determining a hardware configuration to be allocated to a proxy virtual machine deployed in a plurality of virtual machines on the source node based on the workload, the proxy virtual machine including a backup application for performing data backup for the plurality of virtual machines. The method further includes transmitting an indication of the hardware configuration to the proxy virtual machine to enable the backup application to perform the backup job using the hardware configuration. The workload may comprise a data change rate of the source node and a backup rate of the proxy virtual machine that are predicted based on history data stored at the destination node.
Abstract: Method and apparatus for managing data in a data storage system. A storage array controller device is coupled to a plurality of storage devices by an external data path, with the storage devices used for non-volatile memory (NVM) storage of user data from a host. A copy back operation is initiated by issuing a copy back transfer command that identifies a selected data set stored in a source device and a unique identifier (ID) value that identifies a destination device. A peer-to-peer connection is established over the external data path in response to the copy back transfer command so that the selected data set is transferred from the source device to the destination device while bypassing the storage array controller device. Normal data transfers can be carried out between the storage array controller and the respective source and destination devices during the copy back operation.
Abstract: An apparatus includes a first processor that generates first control signals to control a first circuit to perform memory operations on memory cells. A first number of first physical signal lines delivers the first control signals to a conversion circuit. A second number of second physical signal lines delivers converted control signals to the first circuit. The conversion circuit is coupled by the first number of first physical signal lines to the first processor and by the second number of second physical signal lines to the first circuit. The conversion circuit converts the first control signals to the converted control signals, and outputs the converted control signals to the first circuit. The first number of first physical signal lines is less than the second number of second physical signal lines to reduce the first number of first physical signal lines coupled between the first processor and the first circuit.
Type: Grant
Filed: June 8, 2018
Date of Patent: February 2, 2021
Assignee: SanDisk Technologies LLC
Inventors: Tai-Yuan Tseng, Hiroyuki Mizukoshi, Chi-Lin Hsu, Yan Li
Abstract: A method of compressing data in a mass storage medium of a computer system running an operating system (OS), such as Windows®, is disclosed. The computer system comprises a central processing unit (CPU), random access memory (RAM), and a non-transitory mass storage medium. The method includes accepting an operator indication of a desired degree of data compression, selecting a predefined compression method corresponding to the operator indication and the version of the operating system in use, and designating a selected predefined set of files and directories stored on the mass storage medium as uncompressible.
Abstract: A computing device includes an interface configured to interface and communicate with a dispersed storage network (DSN), a memory that stores operational instructions, and processing circuitry operably coupled to the interface and to the memory. The processing circuitry is configured to execute the operational instructions to perform various operations and functions. The computing device detects at least one available memory device within a storage unit (SU). The computing device identifies storage capacities of each of the memory devices within the SU and identifies a DSN address range associated with the SU. The computing device maps the DSN address range to each of the memory devices within the SU based on the storage capacities to generate a memory mapping of the memory devices within the SU. The computing device then facilitates redistribution of some encoded data slices (EDS) from a first memory device to the at least one available memory device within the SU.
Type: Grant
Filed: January 24, 2019
Date of Patent: January 26, 2021
Assignee: Pure Storage, Inc.
Inventors: Manish Motwani, Joseph M. Kaczmarek, Jason K. Resch
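Mapping the SU's DSN address range across memory devices in proportion to their capacities, as described above, can be sketched as a simple proportional split. The tuple layout and integer rounding choice are illustrative assumptions:

```python
def map_address_range(range_start, range_end, capacities):
    # Split the SU's DSN address range across its memory devices in
    # proportion to each device's storage capacity.
    total = sum(capacities)
    span = range_end - range_start
    mapping, cursor = [], range_start
    for device, cap in enumerate(capacities):
        share = span * cap // total
        mapping.append((device, cursor, cursor + share))
        cursor += share
    # Assign any integer-rounding remainder to the last device.
    device, lo, _ = mapping[-1]
    mapping[-1] = (device, lo, range_end)
    return mapping
```

A device with twice the capacity receives twice the address span, which then drives the redistribution of slices toward newly available devices.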
Abstract: Disclosed herein are techniques for balancing and reducing the number of write operations performed to each physical memory page of a storage-class memory. In one embodiment, a method includes tracking a count of write operations performed to each physical memory page or subpage of the storage-class memory using a memory management unit, a memory controller, a hypervisor, or an operating system, and selectively allocating physical memory pages of the storage-class memory with the least counts of write operations to a virtual machine or an operating system process using a ranking of the physical memory pages of the storage-class memory determined based at least partially on the count of write operations performed to each physical memory page or subpage of the storage-class memory.
Type: Grant
Filed: February 28, 2017
Date of Patent: January 26, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Nafea Bshara, Thomas A. Volpe, Adi Habusha
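The track-then-rank scheme in the abstract above (count writes per page, then allocate the least-written pages) can be sketched in a few lines. The in-memory counter dictionary stands in for whatever the MMU, memory controller, hypervisor, or OS actually maintains:

```python
from heapq import nsmallest

write_counts = {}  # physical page -> tracked count of write operations

def record_write(page):
    # Tracking hook: bumped on every write to the page (or subpage).
    write_counts[page] = write_counts.get(page, 0) + 1

def allocate_pages(n):
    # Rank pages by write count and hand the n least-written pages
    # to the requesting virtual machine or OS process.
    return nsmallest(n, write_counts, key=write_counts.get)
```

Allocating from the bottom of the ranking balances wear across the storage-class memory over time.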
Abstract: A data synchronization method, system, and apparatus are provided. The method includes receiving a request including data to be uploaded from a client, responding to the request after the data is successfully obtained from the client, and storing the obtained data. For data whose size is less than a threshold value, a synchronization request is sent to a standby server to request that the standby server store the data. Otherwise, a second-type work log, including information indicating the data that has not been synchronized, is generated and stored. Data whose size is greater than or equal to the threshold value is thus not synchronized immediately, but is recorded in the work log. In some cases, data whose size is less than the threshold value but fails to be synchronized is also recorded in the work log. Synchronization of this data may subsequently be completed according to the work log, so that synchronizing it does not delay synchronization of other data.
Type: Grant
Filed: July 20, 2018
Date of Patent: January 19, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Ling Zhou, Zheng Chen, Jun Ming Yan, Cheng Wu, Feng Bo Jiang, Li Zhang, Fang Zhou Chen
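The threshold-plus-work-log flow described above can be sketched as follows. The 4 KiB threshold, the `sync_to_standby` stub, and the in-memory log are illustrative assumptions, not the system's actual protocol:

```python
THRESHOLD = 4096   # hypothetical size cutoff, in bytes
store = {}         # primary storage on the receiving server
work_log = []      # keys of data still awaiting synchronization

def sync_to_standby(key, data):
    # Stand-in for the synchronization request to the standby server;
    # here, small transfers are assumed to succeed.
    return len(data) < THRESHOLD

def handle_upload(key, data):
    store[key] = data                      # store the obtained data
    if len(data) < THRESHOLD:
        if not sync_to_standby(key, data):
            work_log.append(key)           # small but failed: log it
    else:
        work_log.append(key)               # large: defer via work log
```

A background task can later drain `work_log` so that deferred synchronization never blocks the small, fast path.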
Abstract: A wireless communication device (UE) may include random access memory and associated software configured to selectively place different memory banks into either an active power on mode, retention mode, or power off mode. The selective placement of memory banks into different modes may be performed based on a variety of factors including software module voting information, a current power mode of the memory banks, one or more software program(s) and/or data currently stored on the memory banks, and a counter that counts an amount of time during which a memory bank is not accessed. The placement of memory banks into different modes may be controlled by a memory controller coupled to the memory banks.
Abstract: Managing input/output (‘I/O’) queues in a data storage system, including: receiving, by a host that is coupled to a plurality of storage devices via a storage network, a plurality of I/O operations to be serviced by a target storage device; determining, for each of a plurality of paths between the host and the target storage device, a data transfer maximum associated with the path; determining, for one or more of the plurality of paths, a cumulative amount of data to be transferred by I/O operations pending on the path; and selecting a target path for transmitting one or more of the plurality of I/O operations to the target storage device in dependence upon the cumulative amount of data to be transferred by I/O operations pending on the path and the data transfer maximum associated with the path.
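The path selection described above, which weighs each path's cumulative pending data against its data transfer maximum, can be sketched as a headroom comparison. The tuple representation of a path is an illustrative assumption:

```python
def select_target_path(paths):
    # paths: list of (pending_bytes, data_transfer_max) per path.
    # Pick the path with the most remaining headroom under its data
    # transfer maximum for the next I/O operation.
    headroom = [limit - pending for pending, limit in paths]
    return max(range(len(paths)), key=headroom.__getitem__)
```

A nearly saturated path loses to an idle one even if the idle path's absolute maximum is lower.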
Abstract: A processor is described. The processor includes a network. A plurality of processing cores are coupled to the network. The processor includes a transmitter circuit coupled to the network. The transmitter circuit is to transmit output data generated by one of the processing cores into the network. The transmitter circuit includes control logic circuitry to cause the transmitter circuit to send a request for transmission of a second packet of output data prior to completion of the transmitter circuit's transmission of an earlier first packet of output data.
Type: Grant
Filed: May 15, 2017
Date of Patent: December 22, 2020
Assignee: Google LLC
Inventors: Jason Redgrave, Albert Meixner, Qiuling Zhu, Ji Kim, Artem Vasilyev, Ofer Shacham
Abstract: Attributing consumed storage capacity among entities storing data in a storage array includes: identifying a data object stored in the storage array and shared by a plurality of entities, where the data object occupies an amount of storage capacity of the storage array; and attributing to each entity a fractional portion of the amount of storage capacity occupied by the data object.
Type: Grant
Filed: May 1, 2019
Date of Patent: December 15, 2020
Assignee: Pure Storage, Inc.
Inventors: Jianting Cao, Martin Harriman, John Hayes, Cary Sandvig
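The fractional attribution in the abstract above reduces to dividing the shared object's occupied capacity among the entities that share it. A minimal sketch, assuming an equal split (the abstract does not specify the weighting):

```python
def attribute_capacity(object_size, entities):
    # Charge each entity sharing the object an equal fraction of the
    # capacity the object actually occupies in the array.
    share = object_size / len(entities)
    return {entity: share for entity in entities}
```

For a 300-unit deduplicated object shared by three tenants, each tenant is charged 100 units rather than the full 300.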
Abstract: Methods, systems, and computer readable media for intelligent fetching of storage device commands from submission queues are disclosed. The controller may implement a hierarchical scheme comprising first-level arbitration(s) between submission queues of each of a plurality of input/output virtualization (IOV) functions, and a second-level arbitration between the respective IOV functions. Alternatively, or in addition, the controller may implement a flat arbitration scheme, which may comprise selecting submission queue(s) from one or more groups, each group comprising submission queues of each of the plurality of IOV functions. In some embodiments, the controller implements a credit-based arbitration scheme. The arbitration scheme(s) may be modified in accordance with command statistics and/or current resource availability.
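The hierarchical arbitration described above can be sketched with nested round-robin loops. This toy model round-robins across IOV functions (second level) and, within each function, serves the first non-empty queue (a simplification of the first-level arbitration; real controllers may use weighted or credit-based schemes):

```python
def fetch_commands(functions):
    # functions: one list of submission queues per IOV function,
    # each queue a FIFO list of commands.
    fetched = []
    while any(q for f in functions for q in f):
        for f in functions:            # second-level: across functions
            for q in f:                # first-level: within a function
                if q:
                    fetched.append(q.pop(0))
                    break              # one command per function per pass
    return fetched
```

Interleaving one command per function per pass keeps any single function from starving the others.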
Abstract: Provided is a method and apparatus for processing instructions using processing-in-memory (PIM). A PIM management apparatus includes: a PIM directory comprising a reader-writer lock for a memory address that an instruction accesses; and a locality tracer configured to determine locality of the memory address that the instruction accesses and to determine whether the object that executes the instruction is a PIM.
Abstract: A memory is disclosed comprising a first memory portion, a second memory portion, and an interface, wherein the memory portions are electrically isolated from each other and the interface is capable of receiving a row command and a column command in the time it takes to cycle the memory once. By interleaving access requests (comprising row commands and column commands) to the different portions of the memory, and by properly timing these access requests, it is possible to achieve full data bus utilization in the memory without increasing data granularity.
Abstract: Unevenly distributed storage across a mesh fabric storage system may include receiving storage operations from one or more client devices and/or applications contemporaneously with receiving availability messaging from a set of multiple storage devices that may be of the same or different types. One or more of the storage operations may be assigned to a storage device that has signaled its readiness to perform the one or more storage operations via an issued availability message. Each storage device thereby performs a subset of the collective set of storage operations, with the uneven distribution allocating load directly commensurate with the performance of each storage device. Stored data may be moved between storage devices using a similar availability-driven methodology so as to reallocate capacity usage while still providing the fastest available storage performance as the data is written.
Abstract: This application sets forth techniques for managing the allocation of memory storage space in a non-volatile memory to improve the operation of a camera application. A camera application monitors an amount of available memory storage space in the non-volatile memory. Responsive to various triggering events, the camera application compares the amount of available memory storage space to a threshold value. When the amount of available memory storage space is less than the threshold value, the camera application transmits a request to a background service to free additional memory storage space within a temporary data store associated with one or more applications installed on the computing device. The temporary data store provides a location for local data to improve the efficiency of the applications, which can be exploited by the camera application to free up memory to avoid a low-memory condition that could prevent the camera application from performing certain operations.
Type: Grant
Filed: September 20, 2018
Date of Patent: December 1, 2020
Assignee: Apple Inc.
Inventors: Kazuhisa Yanagihara, Benjamin P. Englert, Cameron S. Birse, Susan M. Grady
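The threshold check at the heart of the abstract above is a simple comparison made on each triggering event. A minimal sketch, where the 500 MiB cutoff and the `request_purge` callback to the background service are illustrative assumptions:

```python
THRESHOLD = 500 * 1024 * 1024  # hypothetical low-space cutoff, in bytes

def on_trigger(available_bytes, request_purge):
    # On a triggering event, compare free space against the threshold
    # and, if below it, ask the background service to free space from
    # the apps' temporary data stores. Returns bytes freed.
    if available_bytes < THRESHOLD:
        return request_purge()
    return 0
```

Purging only on demand preserves the temporary data stores (and the efficiency they provide) until the camera application actually risks a low-memory condition.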