Patents Examined by Sheng-Jen Tsai
  • Patent number: 11315028
    Abstract: A method of increasing the accuracy of predicting future IO operations on a storage system includes creating a snapshot of a production volume, linking the snapshot to a thin device, mounting the thin device in a cloud tethering subsystem, and tagging the thin device to identify the thin device as being used by the cloud tethering subsystem. When data read operations are issued by the cloud tethering subsystem on the tagged thin device, the data read operations are executed by a front-end adapter of the storage system to forward data associated with the data read operations to a cloud repository. The cache manager, however, does not use information about data read operations on tagged thin devices in connection with predicting future IO operations on the cache, so that movement of snapshots to the cloud repository does not skew the algorithms being used by the cache manager to perform cache management.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: April 26, 2022
    Assignee: Dell Products, L.P.
    Inventors: Deepak Vokaliga, Rong Yu
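    Illustrative sketch: a minimal Python model of the idea of excluding reads on tagged thin devices from the cache's prediction statistics; the names (CacheManager, tag_device, read_counts) are hypothetical and not taken from the patent.

      # Toy cache manager: reads from a tagged thin device (used for cloud movement)
      # are served normally but do not train the future-IO predictor.
      from collections import defaultdict

      class CacheManager:
          def __init__(self):
              self.read_counts = defaultdict(int)   # per-extent stats driving prediction
              self.tagged_devices = set()           # thin devices used by cloud tethering

          def tag_device(self, device_id):
              self.tagged_devices.add(device_id)

          def on_read(self, device_id, extent):
              data = self._fetch(device_id, extent)             # always serve the read
              if device_id not in self.tagged_devices:
                  self.read_counts[(device_id, extent)] += 1    # only untagged reads train the predictor
              return data

          def _fetch(self, device_id, extent):
              return f"data:{device_id}:{extent}"               # placeholder for a real back-end read

      mgr = CacheManager()
      mgr.tag_device("thin-snap-01")
      mgr.on_read("thin-snap-01", 7)    # cloud-tether read: served, but not counted
      mgr.on_read("prod-vol", 7)        # host read: counted for future-IO prediction
      print(dict(mgr.read_counts))      # {('prod-vol', 7): 1}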
  • Patent number: 11256446
    Abstract: A host device is configured to communicate over a network with a storage system comprising a plurality of storage devices. The host device comprises a multi-path input-output (MPIO) driver configured to control delivery of input-output (IO) operations from the host device to the storage system over a plurality of paths through the network. The MPIO driver is further configured to identify whether given ones of a plurality of initiators associated with the paths comprise given ones of a plurality of virtual initiator instances, and to identify given ones of a plurality of physical initiator components corresponding to the given ones of the virtual initiator instances. The MPIO driver is also configured to detect a failure of an IO operation over a first path, and to select a second path for retrying the IO operation based on the identification of the physical initiator components corresponding to the virtual initiator instances.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: February 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Rimpesh Patel, Amit Pundalik Anchi
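    Illustrative sketch: a toy retry-path selection that prefers a path whose virtual initiator maps to a different physical initiator component; the PHYS_BY_VIRT mapping and all names are hypothetical.

      # Toy path selection: after a failure on path_a, prefer a retry path whose
      # virtual initiator maps to a *different* physical initiator component.
      from dataclasses import dataclass

      @dataclass
      class Path:
          name: str
          virtual_initiator: str

      # Hypothetical mapping from virtual initiator instances to physical initiators.
      PHYS_BY_VIRT = {"vhba0": "hba0", "vhba1": "hba0", "vhba2": "hba1"}

      def pick_retry_path(failed: Path, candidates: list) -> Path:
          failed_phys = PHYS_BY_VIRT[failed.virtual_initiator]
          for p in candidates:
              if PHYS_BY_VIRT[p.virtual_initiator] != failed_phys:
                  return p                      # different physical component: likely avoids the fault
          return candidates[0]                  # fall back if every path shares the same physical initiator

      paths = [Path("path_b", "vhba1"), Path("path_c", "vhba2")]
      print(pick_retry_path(Path("path_a", "vhba0"), paths).name)   # path_c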
  • Patent number: 11243882
    Abstract: A scheme for managing an identifier pool in computer memory is provided. A memory pool array is created within a computer memory, wherein the array reserves locations in the computer memory for use by a software application. Each location in the array represents an identifier. A start identifier and end identifier are specified for the memory pool array, wherein the start identifier specifies a starting location of the array and the end identifier specifies an end location of the array. The memory pool array is initialized by creating an in-array linked list pool of identifiers for use by the software application. An identifier is allocated from the memory pool array for use by the application and released back to the memory pool array after use by the application, wherein allocation and release are managed by a set of pool control variables in the in-array linked list pool.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: February 8, 2022
    Assignee: International Business Machines Corporation
    Inventor: Joseph Liu
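    Illustrative sketch: a minimal in-array free-list identifier pool in Python; the IdPool class and its fields are hypothetical stand-ins for the pool control variables described above.

      # Toy in-array free list: the pool array itself stores the "next free" links,
      # so allocate/release are O(1) with no per-identifier heap objects.
      class IdPool:
          FREE_END = -1

          def __init__(self, start_id, end_id):
              self.start = start_id
              # next_free[i] links slot i to the next free slot; head points at the first free slot.
              self.next_free = list(range(1, end_id - start_id + 1)) + [self.FREE_END]
              self.head = 0

          def allocate(self):
              if self.head == self.FREE_END:
                  raise RuntimeError("pool exhausted")
              slot = self.head
              self.head = self.next_free[slot]
              return self.start + slot          # slot position maps directly to an identifier

          def release(self, ident):
              slot = ident - self.start
              self.next_free[slot] = self.head  # push the slot back onto the free list
              self.head = slot

      pool = IdPool(100, 103)
      a, b = pool.allocate(), pool.allocate()   # 100, 101
      pool.release(a)
      print(pool.allocate())                    # 100 again (LIFO reuse)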
  • Patent number: 11243718
    Abstract: A data storage apparatus may include a first memory device comprising a first area in which write data from a host device are stored and a second area, a second memory device into which the write data stored in the first memory device are copied, a storage device, and a controller. The controller is configured to control data input/output for the first memory device, the second memory device and the storage device, wherein the controller comprises a cache manager configured to evict eviction target data from the second memory device by: storing the eviction target data into the storage device, and storing the eviction target data into the second area of the first memory device.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: February 8, 2022
    Assignee: SK hynix Inc.
    Inventor: Da Eun Song
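    Illustrative sketch: a toy Python model of the eviction flow, in which evicted data is written to the storage device and also kept in the first memory device's second area; the class and attribute names are hypothetical.

      # Toy cache manager: evicted entries are written to backing storage *and*
      # kept in a reserved second area of the first (faster) memory device.
      class CacheManagerSketch:
          def __init__(self):
              self.first_mem_area1 = {}   # first area: write data from the host
              self.first_mem_area2 = {}   # second area: holds recently evicted data
              self.second_mem = {}        # memory being evicted from
              self.storage = {}           # persistent storage device

          def write(self, key, value):
              self.first_mem_area1[key] = value
              self.second_mem[key] = value          # copy of the write data

          def evict(self, key):
              victim = self.second_mem.pop(key)
              self.storage[key] = victim            # persist the eviction target
              self.first_mem_area2[key] = victim    # keep a fast copy for re-reads

      c = CacheManagerSketch()
      c.write("blk9", b"payload")
      c.evict("blk9")
      print("blk9" in c.storage, "blk9" in c.first_mem_area2)   # True True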
  • Patent number: 11237963
    Abstract: Shared filesystem metadata caching is disclosed. For example, a system includes a guest with a storage controller (SC) and a metadata cache on a host with a filesystem daemon (FSD), and a host memory storing a registration table (RT). The SC receives a first metadata request associated with a file stored in the host memory. A first version identifier (VID) of metadata associated with the file is retrieved from the metadata cache and validated against a corresponding second VID in the RT. Upon determining the first VID matches the second VID, the SC responds to the first metadata request based on the metadata. Upon determining the first VID fails to match the second VID, the SC requests the FSD to update the metadata. The first VID is updated to match the second VID and the SC responds to the first metadata request based on the updated metadata.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: February 1, 2022
    Assignee: Red Hat, Inc.
    Inventors: Miklos Szeredi, Stefan Hajnoczi, Vivek Goyal, David Alan Gilbert
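    Illustrative sketch: a minimal Python model of validating a cached version identifier against a host registration table before answering a metadata request; the MetadataCache class and the daemon_fetch callback are hypothetical.

      # Toy version check: cached metadata is used only when its version identifier
      # (VID) still matches the host registration table; otherwise the filesystem
      # daemon is asked to refresh it.
      class MetadataCache:
          def __init__(self, registration_table, daemon_fetch):
              self.rt = registration_table          # host-side: file -> current VID
              self.fetch = daemon_fetch             # stand-in for a request to the FS daemon
              self.cache = {}                       # file -> (vid, metadata)

          def get_metadata(self, path):
              host_vid = self.rt[path]
              cached = self.cache.get(path)
              if cached and cached[0] == host_vid:
                  return cached[1]                  # cached VID matches: respond from the cache
              meta = self.fetch(path)               # stale or missing: ask the daemon to update
              self.cache[path] = (host_vid, meta)   # record the new VID alongside the metadata
              return meta

      rt = {"/shared/a.txt": 3}
      mc = MetadataCache(rt, daemon_fetch=lambda p: {"size": 42, "path": p})
      mc.get_metadata("/shared/a.txt")              # miss: fetched from the daemon
      rt["/shared/a.txt"] = 4                       # host bumps the VID after a change
      print(mc.get_metadata("/shared/a.txt"))       # re-fetched because the VIDs no longer match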
  • Patent number: 11221778
    Abstract: Preparing data for deduplication including, in response to receiving a request to transfer data from a source storage system to a target storage system, accessing, by the source storage system, a compressed data block; generating, by the source storage system, a padded compressed data block by padding the compressed data block to conform to a fixed block size, wherein the fixed block size is greater than a size of the compressed data block; and sending, by the source storage system, the padded compressed data block to the target storage system.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: January 11, 2022
    Assignee: Pure Storage, Inc.
    Inventors: Ethan Miller, John Colgrove
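    Illustrative sketch: a minimal Python example of padding a compressed block to a fixed size before sending it; the 4096-byte FIXED_BLOCK_SIZE and use of zlib are assumptions for illustration, not details from the patent.

      # Toy padding step: a compressed block is padded up to a fixed block size so the
      # target system can deduplicate on aligned, equally sized units.
      import zlib

      FIXED_BLOCK_SIZE = 4096   # hypothetical fixed size; it only needs to exceed
                                # the compressed block's size

      def pad_compressed_block(raw: bytes) -> bytes:
          compressed = zlib.compress(raw)
          if len(compressed) > FIXED_BLOCK_SIZE:
              raise ValueError("compressed block exceeds the fixed block size")
          return compressed + b"\x00" * (FIXED_BLOCK_SIZE - len(compressed))

      padded = pad_compressed_block(b"A" * 10000)
      print(len(padded))   # 4096, regardless of the compressed length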
  • Patent number: 11216381
    Abstract: A data storage device includes a memory device and a memory controller. The memory controller selects a predetermined memory block to receive data and accordingly records multiple logical addresses in a first mapping table. The first mapping table records which logical page the data stored in each physical page of the predetermined memory block is directed to. When the predetermined memory block is full, the memory controller edits a second mapping table and a third mapping table according to the first mapping table. The second mapping table corresponds to multiple logical pages and records in which memory block and which physical page the data of each logical page is stored. The third mapping table corresponds to the physical pages of the predetermined memory block and indicates whether each physical page is a valid page or an invalid page.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: January 4, 2022
    Assignee: Silicon Motion, Inc.
    Inventor: Kuan-Yu Ke
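    Illustrative sketch: a toy Python model of the three tables, in which a per-block physical-to-logical table is folded into a logical-to-physical table and a validity table once the block is full; the table names, block size, and fold-on-full policy are hypothetical.

      # Toy flash-translation bookkeeping: a per-block P2L table is folded into a
      # global L2P table and a validity table when the block fills up.
      BLOCK_SIZE = 4                          # pages per memory block (hypothetical)

      p2l = {}                                # first table: physical page -> logical page (open block)
      l2p = {}                                # second table: logical page -> (block, physical page)
      valid = {}                              # third table: (block, physical page) -> valid?

      def write(block, phys_page, logical_page):
          p2l[phys_page] = logical_page
          if len(p2l) == BLOCK_SIZE:          # block is full: fold P2L into the other two tables
              for pp, lp in p2l.items():
                  old = l2p.get(lp)
                  if old is not None:
                      valid[old] = False      # an older copy of this logical page becomes invalid
                  l2p[lp] = (block, pp)
                  valid[(block, pp)] = True
              p2l.clear()

      for pp, lp in enumerate([10, 11, 10, 12]):    # logical page 10 is rewritten inside the block
          write(block=0, phys_page=pp, logical_page=lp)
      print(l2p[10], valid[(0, 0)], valid[(0, 2)])  # (0, 2) False True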
  • Patent number: 11216203
    Abstract: A method and a reallocation component for managing a reallocation of information from a source memory sled to a target memory sled. The source and target memory sleds comprise a respective table indicating a status for each page of the source and target memory sleds, respectively. The reallocation component initiates, for each respective source page whose status indicates that the respective source page is initialized, reallocation of the respective content allocated on each respective source page of the source memory sled to a respective target page of the target memory sled. The reallocation component sets, for each respective source page whose status indicates that the respective source page is uninitialized, the respective target status for the respective target page to indicate uninitialized, while refraining from reallocating the respective content allocated on each respective source page whose status indicates that the respective source page is uninitialized.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: January 4, 2022
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Chakri Padala, Ganapathy Raman Madanagopal, Daniel Turull, Vinay Yadhav, Joao Monteiro Soares
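    Illustrative sketch: a small Python model of the reallocation loop, copying only pages whose status is initialized and merely mirroring the status of uninitialized pages; the list-based page and status tables are hypothetical simplifications.

      # Toy sled reallocation: only pages marked initialized are copied; uninitialized
      # source pages are just recorded as "uninitialized" in the target's status table.
      def reallocate(source_pages, source_status, target_pages, target_status):
          for page, initialized in enumerate(source_status):
              if initialized:
                  target_pages[page] = source_pages[page]   # move the content
                  target_status[page] = True
              else:
                  target_status[page] = False               # no copy: only the status is set

      src = ["dataA", None, "dataC"]
      src_status = [True, False, True]
      dst = [None, None, None]
      dst_status = [False, False, False]
      reallocate(src, src_status, dst, dst_status)
      print(dst, dst_status)   # ['dataA', None, 'dataC'] [True, False, True]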
  • Patent number: 11210026
    Abstract: Disclosed are a digital device and a method for controlling the same. The digital device includes a first memory, a second memory used as a swap space for page data in the first memory, and a controller that controls the page data to be swapped out and written in the second memory, and controls the page data written in the second memory to be swapped into the first memory, wherein the controller prevents a write operation of the page data into the second memory, based on a state of the second memory associated with write of the page data, and allows a read-only operation of the page data written in the second memory.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: December 28, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Gunho Lee, Sungho Bae
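    Illustrative sketch: a toy Python swap controller that blocks swap-out writes to the second memory based on that memory's state while still allowing read-only swap-in of pages already written there; the "high wear" trigger is a hypothetical example of such a state.

      # Toy swap controller: swap-out (writes) to the second memory is blocked when
      # that memory reports a state unsuitable for writes, while swap-in (reads) of
      # pages already written there remains allowed.
      class SwapController:
          def __init__(self):
              self.second_mem = {}          # page id -> page data (swap space)
              self.write_blocked = False    # derived from the second memory's state

          def update_state(self, wear_high: bool):
              self.write_blocked = wear_high   # hypothetical policy: block writes under high wear

          def swap_out(self, page_id, data):
              if self.write_blocked:
                  return False                 # write operation prevented
              self.second_mem[page_id] = data
              return True

          def swap_in(self, page_id):
              return self.second_mem[page_id]  # read-only access is still permitted

      sc = SwapController()
      sc.swap_out(1, b"cold page")
      sc.update_state(wear_high=True)
      print(sc.swap_out(2, b"another page"), sc.swap_in(1))   # False b'cold page'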
  • Patent number: 11194707
    Abstract: Systems and methods of building massively parallel computing systems using low power computing complexes in accordance with embodiments of the invention are disclosed. A massively parallel computing system in accordance with one embodiment of the invention includes at least one Solid State Blade configured to communicate via a high performance network fabric. In addition, each Solid State Blade includes a processor configured to communicate with a plurality of low power computing complexes interconnected by a router, and each low power computing complex includes at least one general processing core, an accelerator, an I/O interface, and cache memory and is configured to communicate with non-volatile solid state memory.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: December 7, 2021
    Assignee: California Institute of Technology
    Inventor: Mark A. Stalzer
  • Patent number: 11194510
    Abstract: A storage device and a method of operating a storage device including a non-volatile memory. The method includes selecting a first task from among a plurality of tasks queued in a task queue of the storage device; determining whether a mode of the first task is identical to a mode of a previously-executed task; and determining an execution order of the first task according to a result of the determination. The modes and the region addresses of tasks may be utilized to group tasks to permit interleaving of programming data.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: December 7, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyun-chul Park, Young-pil Song, Sang-won Jung
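    Illustrative sketch: a toy Python scheduler that promotes a queued task whose mode matches the previously executed task's mode, so same-mode tasks run consecutively; the dict-based task representation is hypothetical.

      # Toy scheduler: a queued task is pulled forward when its mode matches the mode
      # of the previously executed task, grouping same-mode tasks for interleaving.
      from collections import deque

      def next_task(queue: deque, prev_mode):
          for i, task in enumerate(queue):
              if task["mode"] == prev_mode:       # same mode as the last executed task
                  del queue[i]
                  return task                     # execute it ahead of its queue position
          return queue.popleft()                  # otherwise keep the original order

      q = deque([
          {"id": 1, "mode": "read"},
          {"id": 2, "mode": "write"},
          {"id": 3, "mode": "write"},
      ])
      t = next_task(q, prev_mode="write")
      print(t["id"])          # 2: the write task is pulled ahead of the read task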
  • Patent number: 11194736
    Abstract: A memory controller may include a map cache configured to store one or more of a plurality of map data sub-segments respectively corresponding to a plurality of sub-areas included in each of the plurality of areas, and a map data manager configured to generate information about a map data sub-segment to be provided to a host and which is determined based on a read count for the memory device, and generate information about a map data segment to be deleted from the host and which is determined based on the read count for the memory device and a memory of the host.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: December 7, 2021
    Assignee: SK hynix Inc.
    Inventors: Hye Mi Kang, Eu Joon Byun
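    Illustrative sketch: a small Python routine that decides, from read counts, which map sub-segments to provide to the host and which host-cached segments to delete when the host memory budget is exceeded; the threshold, budget, and segment names are hypothetical.

      # Toy host-map management: frequently read sub-segments are offered to the
      # host's map cache; the coldest host-held segments are nominated for deletion
      # when the host-side memory budget is exceeded.
      def plan_host_map(read_counts, host_segments, host_budget, hot_threshold=100):
          to_provide = [seg for seg, count in read_counts.items()
                        if count >= hot_threshold and seg not in host_segments]
          to_delete = []
          if len(host_segments) + len(to_provide) > host_budget:
              coldest = sorted(host_segments, key=lambda s: read_counts.get(s, 0))
              excess = len(host_segments) + len(to_provide) - host_budget
              to_delete = coldest[:excess]        # least-read segments leave the host first
          return to_provide, to_delete

      counts = {"seg0": 500, "seg1": 3, "seg2": 250}
      provide, delete = plan_host_map(counts, host_segments={"seg1"}, host_budget=2)
      print(provide, delete)   # ['seg0', 'seg2'] ['seg1']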
  • Patent number: 11182077
    Abstract: A method for determining when to load read I/O operations into an SSD cache medium for a physical storage medium of a data storage system can include maintaining an SSD filter bitmap with a plurality of bits, where each of the bits corresponds to a respective data block of the physical storage medium. The method can also include initially setting each of the bits to a first predetermined value, receiving a first read I/O operation directed to a particular data block of the physical storage medium and, in response to receiving the first read I/O operation, setting a bit corresponding to the particular data block to a second predetermined value. The method can further include receiving a second read I/O operation directed to the particular data block and, in response to receiving the second I/O operation, loading data for the particular data block into the SSD cache medium.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: November 23, 2021
    Assignee: AmZetta Technologies, LLC
    Inventors: Paresh Chatterjee, Srikumar Subramanian, Narayanaswami Ganapathy, Senthilkumar Ramasamy
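    Illustrative sketch: a minimal Python version of the two-touch filter bitmap, where the first read only sets the block's bit and the second read loads the block into the SSD cache; the class name and bytearray representation are hypothetical.

      # Toy two-touch admission filter: the first read of a block only flips its bit;
      # the second read actually loads the block into the SSD cache.
      class SsdReadFilter:
          def __init__(self, num_blocks):
              self.bitmap = bytearray(num_blocks)   # 0 = first predetermined value
              self.ssd_cache = {}

          def on_read(self, block, data):
              if self.bitmap[block] == 0:
                  self.bitmap[block] = 1            # remember that the block was read once
              elif block not in self.ssd_cache:
                  self.ssd_cache[block] = data      # second read: worth caching on the SSD

      f = SsdReadFilter(num_blocks=1024)
      f.on_read(42, b"block 42")
      print(42 in f.ssd_cache)    # False: one touch is not enough
      f.on_read(42, b"block 42")
      print(42 in f.ssd_cache)    # True: cached on the second read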
  • Patent number: 11137926
    Abstract: The disclosed computer-implemented method for automatic storage tiering may include (1) receiving characteristics of previous accesses to storage system objects stored in a data storage system including multiple storage tiers, (2) generating, based on the characteristics of previous accesses to the storage system objects, a model that predicts characteristics of future accesses to the storage system objects, (3) selecting, based on the model, a next storage tier of the multiple storage tiers for each of the storage system objects, and (4) relocating at least some of the storage system objects from a current storage tier to the next storage tier selected for each of the at least some of the storage system objects. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: October 5, 2021
    Assignee: Veritas Technologies LLC
    Inventors: Niranjan Pendharkar, Anindya Banerjee, Naveen Ramachandrappa, Ramya Mula
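    Illustrative sketch: a toy Python tiering pass in which a stand-in "model" (a moving average of past access counts) predicts future accesses and each object is assigned the fastest tier it qualifies for; the tier names and thresholds are hypothetical.

      # Toy tier selection: a placeholder model predicts next-period accesses from
      # access history, and each object is relocated to the tier it qualifies for.
      TIERS = [("ssd", 1000), ("hdd", 10), ("archive", 0)]   # (tier, minimum predicted accesses)

      def predict_next_accesses(history):
          return sum(history[-3:]) / min(len(history), 3)    # stand-in for a learned model

      def choose_tier(history):
          predicted = predict_next_accesses(history)
          for tier, threshold in TIERS:
              if predicted >= threshold:
                  return tier
          return TIERS[-1][0]

      objects = {"db.log": [4000, 3500, 5000], "old.tar": [0, 1, 0]}
      moves = {name: choose_tier(hist) for name, hist in objects.items()}
      print(moves)   # {'db.log': 'ssd', 'old.tar': 'archive'}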
  • Patent number: 11132139
    Abstract: System and methods for selectively or automatically migrating resources between storage operation cells are provided. In accordance with one aspect of the invention, a management component within the storage operation system may monitor system operation and migrate components from one storage operation cell to another to facilitate failover recovery, promote load balancing within the system and improve overall system performance as further described herein. Another aspect of the invention may involve performing certain predictive analyses on system operation to reveal trends and tendencies within the system. Such information may be used as the basis for potentially migrating components from one storage operation cell to another to improve system performance and reduce or eliminate resource exhaustion or congestion conditions.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: September 28, 2021
    Assignee: Commvault Systems, Inc.
    Inventors: Srinivas Kavuri, Marcus S. Muller
  • Patent number: 11132145
    Abstract: Disclosed herein are techniques for reducing write amplification when processing write commands directed to a non-volatile memory. According to some embodiments, the method can include the steps of (1) receiving a first plurality of write commands and a second plurality of write commands, where the first plurality of write commands and the second plurality of write commands are separated by a fence command, (2) caching the first plurality of write commands, the second plurality of write commands, and the fence command, and (3) in accordance with the fence command, and in response to identifying that at least one condition is satisfied: (i) issuing the first plurality of write commands to the non-volatile memory, (ii) issuing the second plurality of write commands to the non-volatile memory, and (iii) updating log information to reflect that the first plurality of write commands precede the second plurality of write commands.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: September 28, 2021
    Assignee: Apple Inc.
    Inventors: Yuhua Liu, Andrew W. Vogan, Matthew J. Byom, Alexander Paley
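    Illustrative sketch: a small Python model of caching writes around a fence and flushing the pre-fence group before the post-fence group while logging the ordering; the grouping scheme and flush condition are hypothetical simplifications.

      # Toy fence handling: writes on both sides of a fence are cached, and when a
      # flush condition is met the pre-fence group is issued (and logged) before the
      # post-fence group, preserving the required ordering.
      class FencedWriteCache:
          def __init__(self):
              self.groups = [[]]      # each fence starts a new group of cached writes
              self.nvm = []           # stand-in for the non-volatile memory
              self.log = []           # ordering log

          def write(self, cmd):
              self.groups[-1].append(cmd)

          def fence(self):
              self.groups.append([])

          def flush_if(self, condition: bool):
              if not condition:
                  return
              for group in self.groups:           # earlier groups are issued first
                  self.nvm.extend(group)
                  self.log.append([c["lba"] for c in group])
              self.groups = [[]]

      c = FencedWriteCache()
      c.write({"lba": 1, "data": b"x"})
      c.fence()
      c.write({"lba": 9, "data": b"y"})
      c.flush_if(condition=True)
      print(c.log)   # [[1], [9]]: pre-fence writes are recorded before post-fence writes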
  • Patent number: 11113231
    Abstract: In a processing in memory (PIM) method using a memory device, m*n multiplicand arrangement bits are stored in m*n memory cells by copying and arranging m multiplicand bits of a multiplicand value and m*n multiplier arrangement bits are stored in m*n read-write unit circuits corresponding to the m*n memory cells by copying and arranging n multiplier bits of a multiplier value. The m*n multiplicand arrangement bits stored in the m*n memory cells are selectively read based on the m*n multiplier arrangement bits stored in the m*n read-write unit circuits, and m*n multiplication bits are stored in the m*n read-write unit circuits based on the selectively read m*n multiplicand arrangement bits. A multiplication value of the multiplicand value and the multiplier value is determined based on the m*n multiplication bits stored in the m*n read-write unit circuits.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: September 7, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Youngsun Song
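    Illustrative sketch: a bit-level Python model of the arrangement-and-select multiply, replicating the m multiplicand bits n times, replicating each multiplier bit m times, ANDing the two arrangements (the selective read), and summing the m*n multiplication bits with positional weights; the function name and the final weighted sum are illustrative, not circuit details from the patent.

      # Toy model of the in-memory multiply via bit arrangement and selective read.
      def pim_multiply(multiplicand: int, multiplier: int, m: int, n: int) -> int:
          a = [(multiplicand >> i) & 1 for i in range(m)]      # m multiplicand bits
          b = [(multiplier >> j) & 1 for j in range(n)]        # n multiplier bits
          a_arr = [a[i] for j in range(n) for i in range(m)]   # m*n multiplicand arrangement bits
          b_arr = [b[j] for j in range(n) for i in range(m)]   # m*n multiplier arrangement bits
          mult_bits = [x & y for x, y in zip(a_arr, b_arr)]    # selective read -> multiplication bits
          total = 0
          for j in range(n):
              for i in range(m):
                  total += mult_bits[j * m + i] << (i + j)     # weight each bit by its position
          return total

      print(pim_multiply(13, 11, m=4, n=4))   # 143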
  • Patent number: 11106374
    Abstract: A method is used in managing inline data de-duplication in storage systems. The method receives a request to write data at a logical address of a file in a file system of a storage system. The method determines whether the data can be de-duplicated to matching data residing on the storage system in a compressed format. Based on the determination, the method uses a block mapping pointer associated with the matching data to de-duplicate the data. The block mapping pointer includes a block mapping of a set of compressed data extents and information regarding location of the matching data within the set of compressed data extents.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: August 31, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Philippe Armangau, Christopher Seibel, Bruce Caram, Alexei Karaban
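    Illustrative sketch: a toy Python inline-dedup lookup in which matching data is detected by fingerprint and the file's logical address is pointed at the existing block mapping, which also records where the match sits among the compressed extents; the fingerprint index, zlib compression, and pointer layout are assumptions for illustration.

      # Toy inline dedup: if the written data already exists in compressed form, the
      # logical address reuses the existing block mapping pointer instead of storing
      # a second copy.
      import hashlib, zlib

      compressed_extents = []    # stored compressed data extents
      fingerprint_index = {}     # sha256 of the data -> block mapping pointer
      file_mapping = {}          # (file, logical address) -> block mapping pointer

      def write(file, logical_addr, data: bytes):
          digest = hashlib.sha256(data).hexdigest()
          pointer = fingerprint_index.get(digest)
          if pointer is None:                                  # new data: compress and store it
              compressed_extents.append(zlib.compress(data))
              pointer = {"extent": len(compressed_extents) - 1, "offset": 0}
              fingerprint_index[digest] = pointer
          file_mapping[(file, logical_addr)] = pointer         # deduplicated: share the pointer

      write("f1", 0, b"hello world" * 100)
      write("f2", 8, b"hello world" * 100)
      print(file_mapping[("f1", 0)] is file_mapping[("f2", 8)], len(compressed_extents))  # True 1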
  • Patent number: 11106592
    Abstract: The present invention is directed to a system and method which employ two memory access paths: 1) a cache-access path in which block data is fetched from main memory for loading to a cache, and 2) a direct-access path in which individually-addressed data is fetched from main memory. The system may comprise one or more processor cores that utilize the cache-access path for accessing data. The system may further comprise at least one heterogeneous functional unit that is operable to utilize the direct-access path for accessing data. In certain embodiments, the one or more processor cores, cache, and the at least one heterogeneous functional unit may be included on a common semiconductor die (e.g., as part of an integrated circuit). Embodiments of the present invention enable improved system performance by selectively employing the cache-access path for certain instructions while selectively employing the direct-access path for other instructions.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: August 31, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Steven J. Wallach, Tony M. Brewer
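    Illustrative sketch: a toy Python model of the two access paths, where a cache-access path fetches whole blocks from main memory into a cache and a direct-access path fetches individually addressed data; the block size and dictionary-based memory are hypothetical.

      # Toy model of the two paths: cores fetch whole blocks through the cache path,
      # while a heterogeneous functional unit fetches individual addresses directly.
      BLOCK = 8                                     # words per cache block (hypothetical)
      main_memory = {addr: addr * 10 for addr in range(64)}
      cache = {}                                    # block number -> list of words

      def cache_access(addr):
          blk = addr // BLOCK
          if blk not in cache:                      # miss: fetch the whole block from main memory
              cache[blk] = [main_memory[blk * BLOCK + i] for i in range(BLOCK)]
          return cache[blk][addr % BLOCK]

      def direct_access(addr):
          return main_memory[addr]                  # individually addressed, bypasses the cache

      print(cache_access(19), direct_access(19), len(cache))   # 190 190 1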
  • Patent number: 11093170
    Abstract: Techniques are provided for splitting a computer dataset between multiple storage locations based on a workload footprint analysis of that dataset. As a computer accesses data storage, its input/output (I/O) access can be monitored, as well as a working dataset of that dataset. The I/O access patterns can be used to determine an application of the computer that is generating the I/O. The application and the working dataset can be used to determine a split for the dataset across multiple storage locations. The dataset can then be split according to the determined split.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: August 17, 2021
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Alexey Fomin, Yuri Zagrebin, Nickolay Dalmatov
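    Illustrative sketch: a small Python routine that derives a working set from monitored I/O offsets and splits the dataset so frequently accessed extents go to fast storage and the rest to capacity storage; the extent size, hot fraction, and tier names are hypothetical.

      # Toy split decision: monitored I/O offsets define the working set, and the
      # dataset is split between fast and capacity storage accordingly.
      from collections import Counter

      EXTENT = 1024                                     # bytes per extent (hypothetical granularity)

      def plan_split(io_offsets, dataset_extents, hot_fraction=0.2):
          counts = Counter(off // EXTENT for off in io_offsets)      # working set from monitored I/O
          hot_quota = max(1, int(len(dataset_extents) * hot_fraction))
          hot = {ext for ext, _ in counts.most_common(hot_quota)}
          return {ext: ("fast" if ext in hot else "capacity") for ext in dataset_extents}

      offsets = [100, 200, 1500, 1600, 1700, 9000]      # observed I/O, concentrated in extent 1
      split = plan_split(offsets, dataset_extents=range(10))
      print(split[1], split[5])                         # fast capacity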