Patents by Inventor Wenlin CUI

Wenlin CUI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12293100
    Abstract: This application discloses a data writing method. A network controller performs erasure code encoding on original data and writes a third quantity of the resulting target data blocks into a storage node. The network controller then reads a first quantity of those target data blocks back from the storage node and decodes them. The target data blocks include a first quantity of original data blocks and a second quantity of check data blocks, both ends of each target data block carry the same version information, and the third quantity is greater than the first quantity.
    Type: Grant
    Filed: January 17, 2024
    Date of Patent: May 6, 2025
    Assignees: Huawei Technologies Co., Ltd., Tsinghua University
    Inventors: Jiwu Shu, Youyou Lu, Jian Gao, Xiaodong Tan, Wenlin Cui
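The write-then-read flow this abstract describes can be sketched in a few lines. This is a minimal illustration only: the single XOR check block, the one-byte version framing, and every helper name are assumptions, not the patented encoding.

```python
VERSION = 1  # version tag stored at both ends of each target data block

def frame(payload, version=VERSION):
    # Both ends of a target data block carry the same version information.
    return bytes([version]) + payload + bytes([version])

def unframe(block):
    if block[0] != block[-1]:
        raise ValueError("version mismatch: torn or partially written block")
    return block[1:-1]

def xor_pair(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(originals):
    # k original data blocks plus one XOR check block: the "third quantity"
    # written (k + 1) exceeds the "first quantity" (k) needed to decode.
    parity = originals[0]
    for blk in originals[1:]:
        parity = xor_pair(parity, blk)
    return [frame(b) for b in originals] + [frame(parity)]

# Write k + 1 = 3 framed blocks; later, any k = 2 intact blocks decode.
originals = [b"abcd", b"wxyz"]
written = encode(originals)
# Suppose the second original block is unreadable: recover it from the
# first original block and the check block.
recovered = xor_pair(unframe(written[0]), unframe(written[2]))
assert recovered == b"wxyz"
```

Reading back a first quantity of blocks and checking the version tags at both ends lets the reader detect a block that was only partially overwritten by a concurrent write.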
  • Patent number: 12223171
    Abstract: In a metadata processing method, a network interface card in a storage device receives an input/output (I/O) request, where the I/O request is a data read request or a data write request; the network interface card executes a metadata processing task corresponding to the I/O request; and when determining that the metadata processing task fails to be executed, the network interface card requests a CPU in the storage device to execute the metadata processing task.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: February 11, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Chen Wang, Meng Gao, Wenlin Cui, Siwei Luo, Ren Ren
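The NIC fast path with CPU fallback described above can be sketched as follows; the class, the lookup-table failure mode, and all names are illustrative assumptions, not the patented design.

```python
class NicMetadataOffload:
    """Fast-path metadata processing on the NIC; on failure the task is
    handed to the storage device's CPU."""

    def __init__(self, nic_table, cpu_lookup):
        self.nic_table = nic_table      # partial metadata cached on the NIC
        self.cpu_lookup = cpu_lookup    # full metadata path on the CPU

    def handle_io(self, volume, lba):
        try:
            return ("nic", self.nic_table[(volume, lba)])
        except KeyError:
            # The NIC determined its metadata task failed; it requests the
            # CPU in the storage device to execute the task instead.
            return ("cpu", self.cpu_lookup(volume, lba))

full_map = {("vol0", 0): "disk0:0", ("vol0", 1): "disk1:8"}
card = NicMetadataOffload({("vol0", 0): "disk0:0"},
                          lambda v, l: full_map[(v, l)])
assert card.handle_io("vol0", 0) == ("nic", "disk0:0")   # NIC fast path
assert card.handle_io("vol0", 1) == ("cpu", "disk1:8")   # CPU fallback
```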
  • Patent number: 12216929
    Abstract: A storage system includes multiple storage nodes. Each storage node includes a first storage device of a first type and a second storage device of a second type, and a performance level of the first storage device is higher than that of the second storage device. A global cache includes a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier is for storing data with a high access frequency, and the second tier is for storing data with a low access frequency. A management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier.
    Type: Grant
    Filed: December 3, 2023
    Date of Patent: February 4, 2025
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
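The frequency-driven demotion described above can be sketched as a two-tier cache with a periodic sweep; the class, the threshold value, and the reset-per-sweep counting policy are assumptions for illustration.

```python
class TieredGlobalCache:
    """First tier = fast devices, second tier = slower devices; a sweep by
    the management node demotes data whose access frequency fell below a
    threshold since the last sweep."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.tier1, self.tier2, self.freq = {}, {}, {}

    def put(self, key, value):
        self.tier1[key] = value          # new data starts in the fast tier
        self.freq[key] = 0

    def get(self, key):
        self.freq[key] = self.freq.get(key, 0) + 1
        return self.tier1.get(key) or self.tier2.get(key)

    def sweep(self):
        # Management-node role: migrate cold data from the first tier to
        # the second tier, then restart the frequency window.
        for key in list(self.tier1):
            if self.freq[key] < self.threshold:
                self.tier2[key] = self.tier1.pop(key)
            self.freq[key] = 0

cache = TieredGlobalCache(threshold=2)
cache.put("hot", "A"); cache.put("cold", "B")
cache.get("hot"); cache.get("hot"); cache.get("cold")
cache.sweep()
assert "hot" in cache.tier1 and "cold" in cache.tier2
assert cache.get("cold") == "B"          # still readable from tier 2
```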
  • Publication number: 20250021261
    Abstract: A converged system includes a computing node and a storage node. The computing node is connected to the storage node through a network to build a storage-compute decoupled architecture. A storage medium of the computing node and a storage medium of the storage node form a global memory pool through unified addressing, namely, a global storage medium shared by the computing node and the storage node. When a read/write operation is performed in the system, request data to be processed is obtained, and a memory operation on that data is performed on the global memory pool according to a memory operation instruction.
    Type: Application
    Filed: September 30, 2024
    Publication date: January 16, 2025
    Inventors: Hongwei Sun, Guangcheng Li, Huawei Liu, Jun You, Wenlin Cui
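Unified addressing over media contributed by both node types can be sketched as one flat address space partitioned into per-node regions; the contiguous-slice layout and the names here are assumptions, not the patented address encoding.

```python
class GlobalMemoryPool:
    """Storage media contributed by compute and storage nodes form one
    flat, globally shared address space."""

    def __init__(self, contributions):
        # contributions: [(node_name, bytes_contributed), ...]
        self.regions, base = [], 0
        for node, size in contributions:
            self.regions.append((base, base + size, node))
            base += size
        self.size = base

    def resolve(self, global_addr):
        # Translate a pool address into (owning node, local offset).
        for start, end, node in self.regions:
            if start <= global_addr < end:
                return node, global_addr - start
        raise ValueError("address outside the global memory pool")

pool = GlobalMemoryPool([("compute0", 1024), ("storage0", 4096)])
assert pool.resolve(100) == ("compute0", 100)
assert pool.resolve(1024 + 7) == ("storage0", 7)
assert pool.size == 5120
```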
  • Publication number: 20240354267
    Abstract: A data storage system includes a host, an adapter card, and a storage node. The host establishes a communication connection to the adapter card through a bus, and the storage node establishes a communication connection to the adapter card through a network. The storage node is configured to store data that the host requests to write into a first memory space. The first memory space is a storage space that is provided by the adapter card for the host and that supports memory semantic access. The adapter card writes the data into a second memory space of the storage node, where the adapter card includes a first correspondence between a physical address of the second memory space and an address of the first memory space.
    Type: Application
    Filed: June 28, 2024
    Publication date: October 24, 2024
    Inventors: Yue Zhao, Wenlin Cui, Siwei Luo
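The first/second memory-space correspondence held by the adapter card can be sketched with a translation table; the page-granular mapping and all names are illustrative assumptions, not the claimed mechanism.

```python
class AdapterCard:
    """Exposes a memory-semantic window (the first memory space) to the
    host and forwards stores to mapped physical addresses in the storage
    node's second memory space."""

    def __init__(self):
        self.first_to_second = {}   # first-space page -> (node, physical page)
        self.node_memory = {}       # stands in for the storage node's memory

    def map_page(self, first_page, node, phys_page):
        # The "first correspondence" between the two address spaces.
        self.first_to_second[first_page] = (node, phys_page)

    def host_store(self, first_page, data):
        # A host write into the first memory space is persisted in the
        # second memory space through the correspondence table.
        node, phys = self.first_to_second[first_page]
        self.node_memory[(node, phys)] = data

    def host_load(self, first_page):
        node, phys = self.first_to_second[first_page]
        return self.node_memory[(node, phys)]

adapter = AdapterCard()
adapter.map_page(0, "storage0", 0x40)
adapter.host_store(0, b"payload")
assert adapter.host_load(0) == b"payload"
assert adapter.node_memory[("storage0", 0x40)] == b"payload"
```

From the host's perspective the write is an ordinary memory-semantic store over the bus; the network hop to the storage node is hidden behind the card's table.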
  • Publication number: 20240211136
    Abstract: A service system and a memory management method and apparatus are provided. The service system includes a plurality of service nodes. A memory of at least one of the plurality of service nodes is divided into a local resource and a global resource. The local resource is used to provide memory storage space for a local service node, the global resource of the at least one service node forms a memory pool, and the memory pool is used to provide memory storage space for the plurality of service nodes. When a specific condition is satisfied, at least a part of space in the local resource is transferred to the memory pool.
    Type: Application
    Filed: March 8, 2024
    Publication date: June 27, 2024
    Inventors: Ren Ren, Chuan Liu, Yue Zhao, Wenlin Cui
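The local-to-global transfer can be sketched as a rebalance step on one service node; the trigger condition (surplus beyond a headroom) and the page-count accounting are assumptions, since the abstract only says "when a specific condition is satisfied".

```python
class MemoryPool:
    """Cluster-wide pool formed from the global resource of each node."""
    def __init__(self):
        self.capacity = 0

class ServiceNode:
    """One node's memory, split into a local resource and a share
    contributed to the global memory pool."""

    def __init__(self, local_pages, pool):
        self.local_pages = local_pages
        self.pool = pool

    def rebalance(self, pages_in_use, headroom=4):
        # Condition satisfied: the local resource has surplus beyond the
        # required headroom, so transfer part of it into the memory pool.
        surplus = self.local_pages - pages_in_use - headroom
        if surplus > 0:
            self.local_pages -= surplus
            self.pool.capacity += surplus
            return surplus
        return 0

mem_pool = MemoryPool()
node = ServiceNode(local_pages=64, pool=mem_pool)
moved = node.rebalance(pages_in_use=20)   # 64 - 20 - 4 = 40 pages freed
assert moved == 40 and node.local_pages == 24 and mem_pool.capacity == 40
```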
  • Publication number: 20240152290
    Abstract: This application discloses a data writing method. A network controller performs erasure code encoding on original data and writes a third quantity of the resulting target data blocks into a storage node. The network controller then reads a first quantity of those target data blocks back from the storage node and decodes them. The target data blocks include a first quantity of original data blocks and a second quantity of check data blocks, both ends of each target data block carry the same version information, and the third quantity is greater than the first quantity.
    Type: Application
    Filed: January 17, 2024
    Publication date: May 9, 2024
    Inventors: Jiwu Shu, Youyou Lu, Jian Gao, Xiaodong Tan, Wenlin Cui
  • Publication number: 20240094936
    Abstract: A storage system includes multiple storage nodes. Each storage node includes a first storage device of a first type and a second storage device of a second type, and a performance level of the first storage device is higher than that of the second storage device. A global cache includes a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier is for storing data with a high access frequency, and the second tier is for storing data with a low access frequency. A management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier.
    Type: Application
    Filed: December 3, 2023
    Publication date: March 21, 2024
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
  • Patent number: 11899939
    Abstract: A read/write request processing method and server are provided. In this method, terminals are grouped, and each terminal group is assigned a different service duration, so that within any service duration the server processes only read/write requests sent by terminals in the group corresponding to that duration. According to the application, the cache area of the server's network interface card stores only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: February 13, 2024
    Assignees: Huawei Technologies Co., Ltd., Tsinghua University
    Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
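The time-sliced admission described above can be sketched as round-robin grouping plus a slot schedule; the grouping rule, slot length, and function names are illustrative assumptions.

```python
def group_terminals(terminals, num_groups):
    # Round-robin grouping; each group is assigned its own service slot.
    return [terminals[i::num_groups] for i in range(num_groups)]

def active_group(groups, now_ms, slot_ms):
    # The server admits requests only from the group whose service
    # duration covers the current time, bounding how many QPs and WQEs
    # the NIC cache must hold at once.
    return groups[(now_ms // slot_ms) % len(groups)]

groups = group_terminals(["t0", "t1", "t2", "t3", "t4", "t5"], 3)
assert groups == [["t0", "t3"], ["t1", "t4"], ["t2", "t5"]]
assert active_group(groups, now_ms=0, slot_ms=10) == ["t0", "t3"]
assert active_group(groups, now_ms=25, slot_ms=10) == ["t2", "t5"]
```

Because only one group's connections are live per slot, the NIC cache holds roughly 1/num_groups of the total QP/WQE state at any moment.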
  • Patent number: 11861204
    Abstract: A storage system includes a management node and multiple storage nodes. Each storage node includes a first storage device of a first type (e.g., DRAM) and a second storage device of a second type (e.g., SCM), and a performance level of the first storage device is higher than that of the second storage device. The management node creates a global cache including a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier is for storing data with a high access frequency, and the second tier is for storing data with a low access frequency. The management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier of the global cache.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: January 2, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
  • Publication number: 20230124520
    Abstract: Task execution methods and devices are provided. In an implementation, a method comprises: obtaining, by a central processing unit of a storage device, a data processing task, dividing, by the central processing unit, the data processing task into subtasks, and allocating, by the central processing unit, a first subtask in the subtasks to a first dedicated processor based on attributes of the subtasks, wherein the first dedicated processor is one of a plurality of dedicated processors of the storage device.
    Type: Application
    Filed: December 16, 2022
    Publication date: April 20, 2023
    Inventors: Kan Zhong, Wenlin Cui
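The divide-and-allocate step described above can be sketched with an attribute-to-processor table; the engine names and the attribute vocabulary are assumptions, not taken from the application.

```python
# Illustrative attribute -> dedicated-processor routing table.
DISPATCH = {"compress": "compression_engine",
            "encrypt": "crypto_engine",
            "checksum": "dma_engine"}

def divide(task_chunks):
    # The central processing unit splits one data processing task into
    # subtasks, each carrying an attribute describing its work.
    return [{"op": op, "data": chunk} for op, chunk in task_chunks]

def allocate(subtasks, fallback="cpu"):
    # Each subtask is routed to a dedicated processor based on its
    # attribute; anything unrecognized stays on the CPU.
    return [(DISPATCH.get(s["op"], fallback), s) for s in subtasks]

subtasks = divide([("compress", b"aa"), ("encrypt", b"bb"), ("scan", b"cc")])
targets = [t for t, _ in allocate(subtasks)]
assert targets == ["compression_engine", "crypto_engine", "cpu"]
```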
  • Publication number: 20230105067
    Abstract: In a metadata processing method, a network interface card in a storage device receives an input/output (I/O) request, where the I/O request is a data read request or a data write request; the network interface card executes a metadata processing task corresponding to the I/O request; and when determining that the metadata processing task fails to be executed, the network interface card requests a CPU in the storage device to execute the metadata processing task.
    Type: Application
    Filed: December 9, 2022
    Publication date: April 6, 2023
    Inventors: Chen Wang, Meng Gao, Wenlin Cui, Siwei Luo, Ren Ren
  • Publication number: 20220057954
    Abstract: A storage system includes a management node and multiple storage nodes. Each storage node includes a first storage device of a first type (e.g., DRAM) and a second storage device of a second type (e.g., SCM), and a performance level of the first storage device is higher than that of the second storage device. The management node creates a global cache including a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier is for storing data with a high access frequency, and the second tier is for storing data with a low access frequency. The management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier of the global cache.
    Type: Application
    Filed: October 26, 2021
    Publication date: February 24, 2022
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
  • Publication number: 20210334011
    Abstract: A read/write request processing method and server are provided. In this method, terminals are grouped, and each terminal group is assigned a different service duration, so that within any service duration the server processes only read/write requests sent by terminals in the group corresponding to that duration. According to the application, the cache area of the server's network interface card stores only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
    Type: Application
    Filed: July 9, 2021
    Publication date: October 28, 2021
    Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
  • Patent number: 8909868
    Abstract: A method and a system for controlling quality of service of a storage system are provided. The method includes: collecting information about the processing capabilities of the hard disks in the storage system and deriving each hard disk's processing capability from that information; dividing a cache into multiple cache tiers according to the processing capabilities of the hard disks; and writing, for a cache tier in which dirty data reaches a preset threshold, the data in that cache tier into at least one hard disk corresponding to the tier. The method prevents the hard disks from preempting one another's page resources in the cache.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: December 9, 2014
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wenlin Cui, Qiyao Wang, Mingquan Zhou, Tan Shu, Honglei Wang
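The per-tier flush policy described above can be sketched with one cache tier per class of disk capability; the threshold values and names are assumptions for illustration.

```python
class CacheTier:
    """One cache tier per class of hard-disk processing capability; each
    tier flushes to its own disk(s) when its dirty data reaches a preset
    threshold, so a slow disk cannot hold the whole cache's pages."""

    def __init__(self, disk, dirty_threshold):
        self.disk = disk                    # stands in for the backing disk(s)
        self.dirty_threshold = dirty_threshold
        self.dirty_pages = []

    def write(self, page):
        self.dirty_pages.append(page)
        if len(self.dirty_pages) >= self.dirty_threshold:
            self.flush()

    def flush(self):
        # Dirty data reached the preset threshold: write this tier's
        # pages to the hard disk(s) corresponding to the tier.
        self.disk.extend(self.dirty_pages)
        self.dirty_pages.clear()

fast_disk, slow_disk = [], []
fast_tier = CacheTier(fast_disk, dirty_threshold=2)   # capable disk: small batches
slow_tier = CacheTier(slow_disk, dirty_threshold=4)   # weaker disk: larger batches
for p in range(4):
    fast_tier.write(p)
    slow_tier.write(p)
assert fast_disk == [0, 1, 2, 3] and fast_tier.dirty_pages == []
assert slow_disk == [0, 1, 2, 3] and slow_tier.dirty_pages == []
```

Because each tier's dirty pages drain only to that tier's own disks, a slow disk's backlog cannot pin pages that faster disks could otherwise reuse.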
  • Publication number: 20140258609
    Abstract: A method and a system for controlling quality of service of a storage system are provided. The method includes: collecting information about the processing capabilities of the hard disks in the storage system and deriving each hard disk's processing capability from that information; dividing a cache into multiple cache tiers according to the processing capabilities of the hard disks; and writing, for a cache tier in which dirty data reaches a preset threshold, the data in that cache tier into at least one hard disk corresponding to the tier. The method prevents the hard disks from preempting one another's page resources in the cache.
    Type: Application
    Filed: May 21, 2014
    Publication date: September 11, 2014
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Wenlin Cui, Qiyao Wang, Mingquan Zhou, Tan Shu, Honglei Wang