Patents by Inventor Wenlin CUI
Wenlin CUI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12293100
Abstract: This application discloses a data writing method. A network controller performs erasure code encoding on original data, and writes a third quantity of target data blocks of a plurality of obtained target data blocks into a storage node. The network controller reads a first quantity of target data blocks of the third quantity of target data blocks from the storage node, and decodes the read target data blocks. The plurality of target data blocks include a first quantity of original data blocks and a second quantity of check data blocks, the two ends of each target data block carry the same version information, and the third quantity is greater than the first quantity.
Type: Grant
Filed: January 17, 2024
Date of Patent: May 6, 2025
Assignees: Huawei Technologies Co., Ltd., Tsinghua University
Inventors: Jiwu Shu, Youyou Lu, Jian Gao, Xiaodong Tan, Wenlin Cui
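A minimal Python sketch of the flow the abstract describes: encode k original blocks into k + m target blocks, stamp identical version information at both ends of each block so a torn write is detectable, then recover the originals from any k of the written blocks. The single-XOR-parity scheme (m = 1) and all names here are illustrative, not taken from the patent.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """k original data blocks -> k + 1 target blocks (1 XOR check block)."""
    return list(data_blocks) + [reduce(xor, data_blocks)]

def decode(blocks_by_index, k):
    """Recover the k original blocks from any k of the k + 1 target blocks."""
    if all(i in blocks_by_index for i in range(k)):
        return [blocks_by_index[i] for i in range(k)]
    missing = next(i for i in range(k) if i not in blocks_by_index)
    rebuilt = reduce(xor, blocks_by_index.values())  # XOR of parity + rest
    recovered = dict(blocks_by_index)
    recovered[missing] = rebuilt
    return [recovered[i] for i in range(k)]

def frame(payload: bytes, version: int) -> bytes:
    """Stamp identical version info at both ends of a target data block."""
    v = version.to_bytes(4, "big")
    return v + payload + v

def unframe(raw: bytes):
    head, tail = raw[:4], raw[-4:]
    if head != tail:
        raise ValueError("torn write: version differs between block ends")
    return raw[4:-4], int.from_bytes(head, "big")
```

Matching the version fields at both block ends lets the reader reject a partially written block without a separate checksum pass.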
-
Patent number: 12223171
Abstract: A metadata processing method includes: a network interface card in a storage device receives an input/output (I/O) request, where the I/O request is a data read request or a data write request; the network interface card executes a metadata processing task corresponding to the I/O request; and when determining that the metadata processing task fails to be executed, the network interface card requests a CPU in the storage device to execute the metadata processing task.
Type: Grant
Filed: December 9, 2022
Date of Patent: February 11, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Chen Wang, Meng Gao, Wenlin Cui, Siwei Luo, Ren Ren
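The fast-path/slow-path split in this abstract can be sketched as follows: the NIC tries the metadata task itself and escalates to the device CPU only on failure. Treating "metadata missing from the NIC's partial cache" as the failure case is an assumption for illustration; the class and method names are likewise illustrative.

```python
class Nic:
    """Network interface card holding a partial metadata cache."""

    def __init__(self, metadata_cache):
        self.cache = metadata_cache

    def handle(self, key):
        # The metadata processing task succeeds only if the NIC has the entry.
        return self.cache.get(key)

class Cpu:
    """Storage-device CPU with the authoritative metadata tables."""

    def __init__(self, full_metadata):
        self.metadata = full_metadata

    def handle(self, key):
        return self.metadata[key]

def process_io(nic, cpu, key):
    """NIC executes the metadata task; on failure it requests the CPU."""
    result = nic.handle(key)
    return result if result is not None else cpu.handle(key)
```

The common case never touches the CPU, which is the point of offloading metadata processing to the NIC.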
-
Patent number: 12216929
Abstract: A storage system includes multiple storage nodes. Each storage node includes a first storage device of a first type and a second storage device of a second type, and a performance level of the first storage device is higher than that of the second storage device. A global cache includes a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier stores data with a high access frequency, and the second tier stores data with a low access frequency. A management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier.
Type: Grant
Filed: December 3, 2023
Date of Patent: February 4, 2025
Assignee: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
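A compact sketch of the tiering policy in this abstract: data starts in the fast first tier, a management-node role tracks access frequency, and anything below the threshold at the end of a monitoring window is demoted to the second tier. The single-process model and all names are illustrative assumptions, not the patented implementation.

```python
class GlobalCache:
    def __init__(self, threshold):
        self.first_tier = {}    # fast devices (e.g. DRAM) across the nodes
        self.second_tier = {}   # slower devices (e.g. SCM) across the nodes
        self.freq = {}
        self.threshold = threshold

    def put(self, key, value):
        self.first_tier[key] = value   # new data enters the fast tier
        self.freq[key] = 0

    def access(self, key):
        self.freq[key] = self.freq.get(key, 0) + 1
        if key in self.first_tier:
            return self.first_tier[key]
        return self.second_tier.get(key)

    def monitor(self):
        """Management-node role: demote first-tier data whose access
        frequency over the last window fell below the threshold."""
        for key in list(self.first_tier):
            if self.freq.get(key, 0) < self.threshold:
                self.second_tier[key] = self.first_tier.pop(key)
        self.freq = {k: 0 for k in self.freq}  # start a new window
```

A real system would also promote second-tier data that heats up again; the abstract only specifies the demotion direction.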
-
Publication number: 20250021261
Abstract: A converged system includes a computing node and a storage node. The computing node is connected to the storage node through a network to build a storage-computing decoupled architecture. A storage medium of the computing node and a storage medium of the storage node form a global memory pool through unified addressing, namely, a global storage medium shared by the computing node and the storage node. When a read/write operation is performed in the system, processing request data is obtained, and a memory operation on the processing request data is performed on the global memory pool based on a memory operation instruction.
Type: Application
Filed: September 30, 2024
Publication date: January 16, 2025
Inventors: Hongwei Sun, Guangcheng Li, Huawei Liu, Jun You, Wenlin Cui
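Unified addressing, as described here, can be sketched as a single global address range into which each node's medium is appended, so reads and writes use one address space regardless of which node physically backs it. The contiguous-range layout and the `register`/`read`/`write` names are illustrative assumptions.

```python
class GlobalMemoryPool:
    def __init__(self):
        self.regions = []    # (global_base, size, node_name, backing store)
        self.next_base = 0

    def register(self, node, size):
        """Unified addressing: append a node's medium to the global range."""
        base = self.next_base
        self.regions.append((base, size, node, bytearray(size)))
        self.next_base += size
        return base

    def _locate(self, addr):
        for base, size, node, mem in self.regions:
            if base <= addr < base + size:
                return mem, addr - base
        raise ValueError("address outside global memory pool")

    def write(self, addr, data):
        mem, off = self._locate(addr)
        mem[off:off + len(data)] = data

    def read(self, addr, n):
        mem, off = self._locate(addr)
        return bytes(mem[off:off + n])
```

The caller never names a node: the pool resolves a global address to (medium, local offset), which is what makes the media a single shared storage medium from the workload's point of view.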
-
Publication number: 20240354267
Abstract: A data storage system includes a host, an adapter card, and a storage node. The host establishes a communication connection to the adapter card through a bus, and the storage node establishes a communication connection to the adapter card through a network. The storage node is configured to store data that the host requests to write into a first memory space. The first memory space is a storage space that is provided by the adapter card for the host and that supports memory semantic access. The adapter card writes the data into a second memory space of the storage node, and maintains a first correspondence between a physical address of the second memory space and an address of the first memory space.
Type: Application
Filed: June 28, 2024
Publication date: October 24, 2024
Inventors: Yue Zhao, Wenlin Cui, Siwei Luo
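The core of this design is the address correspondence the adapter card keeps: host stores into the first memory space are translated to physical addresses in the storage node's second memory space. A page-granular mapping is an illustrative assumption here, as are all class and method names.

```python
class StorageNode:
    """Holds the second memory space, reached over the network."""

    def __init__(self, size):
        self.mem = bytearray(size)

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data

    def read(self, addr, n):
        return bytes(self.mem[addr:addr + n])

class AdapterCard:
    """Provides the first memory space to the host and keeps the
    correspondence to physical addresses in the second memory space."""

    def __init__(self, node, page=4096):
        self.node = node
        self.page = page
        self.mapping = {}   # first-space page -> second-space page

    def map_page(self, host_page, node_page):
        self.mapping[host_page] = node_page

    def host_write(self, host_addr, data):
        page, off = divmod(host_addr, self.page)
        self.node.write(self.mapping[page] * self.page + off, data)

    def host_read(self, host_addr, n):
        page, off = divmod(host_addr, self.page)
        return self.node.read(self.mapping[page] * self.page + off, n)
```

Because the translation lives on the adapter card, the host issues plain memory-semantic loads and stores over the bus and never sees the network behind them.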
-
Publication number: 20240211136
Abstract: A service system and a memory management method and apparatus are provided. The service system includes a plurality of service nodes. A memory of at least one of the plurality of service nodes is divided into a local resource and a global resource. The local resource provides memory storage space for the local service node, the global resources of the at least one service node form a memory pool, and the memory pool provides memory storage space for the plurality of service nodes. When a specific condition is satisfied, at least a part of the space in the local resource is transferred to the memory pool.
Type: Application
Filed: March 8, 2024
Publication date: June 27, 2024
Inventors: Ren Ren, Chuan Liu, Yue Zhao, Wenlin Cui
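The local/global split and the transfer step read naturally as a page-accounting exercise, sketched below. Page-granular accounting and the `reclaim_local` name are assumptions for illustration; the abstract does not specify the triggering condition, so the caller decides when to transfer.

```python
class ServiceNode:
    def __init__(self, name, local_pages, global_pages):
        self.name = name
        self.local = local_pages          # serves only this node
        self.global_share = global_pages  # contributed to the shared pool

class MemoryPool:
    """Memory pool formed by the global resources of the service nodes."""

    def __init__(self, nodes):
        self.nodes = nodes

    def capacity(self):
        return sum(n.global_share for n in self.nodes)

    def reclaim_local(self, node, pages):
        """When the condition is satisfied (e.g. low local pressure),
        transfer part of the node's local resource into the pool."""
        pages = min(pages, node.local)
        node.local -= pages
        node.global_share += pages
        return pages
```

Shrinking a node's local resource grows the pool every other node can allocate from, which is the elasticity the abstract is after.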
-
Publication number: 20240152290
Abstract: This application discloses a data writing method. A network controller performs erasure code encoding on original data, and writes a third quantity of target data blocks of a plurality of obtained target data blocks into a storage node. The network controller reads a first quantity of target data blocks of the third quantity of target data blocks from the storage node, and decodes the read target data blocks. The plurality of target data blocks include a first quantity of original data blocks and a second quantity of check data blocks, the two ends of each target data block carry the same version information, and the third quantity is greater than the first quantity.
Type: Application
Filed: January 17, 2024
Publication date: May 9, 2024
Inventors: Jiwu Shu, Youyou Lu, Jian Gao, Xiaodong Tan, Wenlin Cui
-
Publication number: 20240094936
Abstract: A storage system includes multiple storage nodes. Each storage node includes a first storage device of a first type and a second storage device of a second type, and a performance level of the first storage device is higher than that of the second storage device. A global cache includes a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier stores data with a high access frequency, and the second tier stores data with a low access frequency. A management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier.
Type: Application
Filed: December 3, 2023
Publication date: March 21, 2024
Applicant: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
-
Patent number: 11899939
Abstract: A read/write request processing method and server are provided. In this method, terminals are grouped, and different service durations are assigned to the terminal groups, so that a server processes, within any service duration, only a read/write request sent by a terminal in the terminal group corresponding to that service duration. According to this application, a cache area of a network interface card of the server stores only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
Type: Grant
Filed: July 9, 2021
Date of Patent: February 13, 2024
Assignees: Huawei Technologies Co., Ltd., Tsinghua University
Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
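The scheduling idea here is time slicing by terminal group: at any moment only one group's service duration is active, so the NIC cache only needs the QP/WQE state of that group's terminals. A fixed round-robin rotation with equal slot lengths is an illustrative simplification; the patent assigns the groups different service durations.

```python
def serving_group(num_groups, t, slot):
    """Index of the terminal group whose service duration covers time t."""
    return int(t // slot) % num_groups

class Server:
    def __init__(self, groups, slot):
        self.groups = groups   # groups[i] is the set of terminal ids in group i
        self.slot = slot       # length of each group's service duration

    def admits(self, terminal, t):
        """Process a read/write request only if the sender's group is
        currently in its service duration; requests from other groups
        wait, bounding the QP/WQE state the NIC cache must hold."""
        active = self.groups[serving_group(len(self.groups), t, self.slot)]
        return terminal in active
```

Bounding the set of simultaneously served terminals is what prevents any one terminal's queue state from crowding the others out of the NIC cache.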
-
Patent number: 11861204
Abstract: A storage system includes a management node and multiple storage nodes. Each storage node includes a first storage device of a first type (e.g., DRAM) and a second storage device of a second type (e.g., SCM), and a performance level of the first storage device is higher than that of the second storage device. The management node creates a global cache including a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier stores data with a high access frequency, and the second tier stores data with a low access frequency. The management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier of the global cache.
Type: Grant
Filed: October 26, 2021
Date of Patent: January 2, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
-
Publication number: 20230124520
Abstract: Task execution methods and devices are provided. In an implementation, a method comprises: obtaining, by a central processing unit of a storage device, a data processing task; dividing, by the central processing unit, the data processing task into subtasks; and allocating, by the central processing unit, a first subtask in the subtasks to a first dedicated processor based on attributes of the subtasks, where the first dedicated processor is one of a plurality of dedicated processors of the storage device.
Type: Application
Filed: December 16, 2022
Publication date: April 20, 2023
Inventors: Kan Zhong, Wenlin Cui
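The divide-then-allocate step can be sketched as a dispatch table keyed by subtask attribute. The attribute names, processor names, and the tuple task shape are illustrative assumptions; a real CPU would also split by data range and balance load across processors of the same kind.

```python
def divide(task):
    """Split a data processing task into attributed subtasks
    (illustrative: here the task is already a list of (attr, payload))."""
    return [{"attr": op, "work": payload} for op, payload in task]

def allocate(subtasks, processors):
    """Assign each subtask to the dedicated processor matching its attribute."""
    plan = {name: [] for name in processors.values()}
    for st in subtasks:
        plan[processors[st["attr"]]].append(st["work"])
    return plan
```

Routing by attribute keeps, say, compression subtasks on a compression engine and checksum subtasks on a checksum engine, instead of running everything on the general-purpose CPU.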
-
Publication number: 20230105067
Abstract: A metadata processing method includes: a network interface card in a storage device receives an input/output (I/O) request, where the I/O request is a data read request or a data write request; the network interface card executes a metadata processing task corresponding to the I/O request; and when determining that the metadata processing task fails to be executed, the network interface card requests a CPU in the storage device to execute the metadata processing task.
Type: Application
Filed: December 9, 2022
Publication date: April 6, 2023
Inventors: Chen Wang, Meng Gao, Wenlin Cui, Siwei Luo, Ren Ren
-
Publication number: 20220057954
Abstract: A storage system includes a management node and multiple storage nodes. Each storage node includes a first storage device of a first type (e.g., DRAM) and a second storage device of a second type (e.g., SCM), and a performance level of the first storage device is higher than that of the second storage device. The management node creates a global cache including a first tier comprising the first storage device in each storage node, and a second tier comprising the second storage device in each storage node. The first tier stores data with a high access frequency, and the second tier stores data with a low access frequency. The management node monitors an access frequency of target data stored in the first tier. When the access frequency of the target data is lower than a threshold, the management node instructs the first storage node to migrate the target data from the first tier to the second tier of the global cache.
Type: Application
Filed: October 26, 2021
Publication date: February 24, 2022
Applicant: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Keji Huang, Peng Zhang, Siwei Luo
-
Publication number: 20210334011
Abstract: A read/write request processing method and server are provided. In this method, terminals are grouped, and different service durations are assigned to the terminal groups, so that a server processes, within any service duration, only a read/write request sent by a terminal in the terminal group corresponding to that service duration. According to this application, a cache area of a network interface card of the server stores only limited quantities of queue pairs (QPs) and work queue elements (WQEs), thereby preventing uneven resource distribution in the cache area of the network interface card.
Type: Application
Filed: July 9, 2021
Publication date: October 28, 2021
Inventors: Jiwu Shu, Youmin Chen, Youyou Lu, Wenlin Cui
-
Patent number: 8909868
Abstract: A method and a system for controlling quality of service of a storage system are provided. The method includes: collecting information about the processing capabilities of the hard disks in the storage system and obtaining the processing capabilities of the hard disks according to that information; dividing a cache into multiple cache tiers according to the processing capabilities of the hard disks; and writing, for a cache tier in which dirty data reaches a preset threshold, the data in the cache tier into at least one hard disk corresponding to the cache tier. The method avoids preemption of page resources in the cache.
Type: Grant
Filed: May 21, 2014
Date of Patent: December 9, 2014
Assignee: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Qiyao Wang, Mingquan Zhou, Tan Shu, Honglei Wang
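The per-tier writeback rule in this abstract can be sketched as follows: each cache tier gets a page budget sized to its disks' processing capability and flushes only to its own disks once its dirty ratio crosses the threshold. The ratio-based trigger and all names are illustrative assumptions.

```python
class CacheTier:
    """One cache tier sized to the processing capability of its hard disks."""

    def __init__(self, disks, page_budget, threshold):
        self.disks = disks              # hard disks backing this tier
        self.page_budget = page_budget  # pages granted from the shared cache
        self.threshold = threshold      # dirty ratio that triggers writeback
        self.dirty = []

    def write(self, page):
        """Buffer a dirty page; flush to this tier's own disks once the
        dirty ratio reaches the threshold, so slow disks cannot preempt
        page resources that belong to a faster tier."""
        self.dirty.append(page)
        if len(self.dirty) / self.page_budget >= self.threshold:
            return self.flush()
        return []

    def flush(self):
        flushed, self.dirty = self.dirty, []
        return flushed
```

Because each tier drains into its own disks at its own pace, a slow disk's backlog stays inside its tier's budget instead of starving the whole cache of pages.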
-
Publication number: 20140258609
Abstract: A method and a system for controlling quality of service of a storage system are provided. The method includes: collecting information about the processing capabilities of the hard disks in the storage system and obtaining the processing capabilities of the hard disks according to that information; dividing a cache into multiple cache tiers according to the processing capabilities of the hard disks; and writing, for a cache tier in which dirty data reaches a preset threshold, the data in the cache tier into at least one hard disk corresponding to the cache tier. The method avoids preemption of page resources in the cache.
Type: Application
Filed: May 21, 2014
Publication date: September 11, 2014
Applicant: Huawei Technologies Co., Ltd.
Inventors: Wenlin Cui, Qiyao Wang, Mingquan Zhou, Tan Shu, Honglei Wang