Patents by Inventor Guo-Fu Tseng
Guo-Fu Tseng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12282390
Abstract: Distributed journaling for write operations to RAID systems is disclosed, including: receiving a new write operation to a plurality of storage devices associated with a redundant array of independent disks (RAID) group, wherein the plurality of storage devices comprises a main data storage and a non-volatile journal storage; writing a record of the new write operation to the non-volatile journal storage; after the record of the new write operation is written to the non-volatile journal storage, writing new data associated with the new write operation to the main data storage; and after the new data associated with the new write operation is written to the main data storage, invalidating the record of the new write operation in the non-volatile journal storage, wherein upon restarting the plurality of storage devices associated with the RAID group, the non-volatile journal storage is checked and valid records of one or more write operations included in the non-volatile journal storage are written to the main data storage.
Type: Grant
Filed: July 26, 2024
Date of Patent: April 22, 2025
Assignee: GRAID Technology Inc.
Inventors: Guo-Fu Tseng, Jin-Jhang Lee, Bo-Yi Sung, Po-Ting Liu, Cheng-Yue Chang
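
The abstract describes a journal-first write order with replay on restart. The following is a minimal Python sketch of that sequence; all class and method names (JournalRecord, JournaledRaidWriter, replay_on_restart) are illustrative stand-ins, not taken from the patent, and in-memory containers stand in for real non-volatile media.

```python
# Illustrative sketch of the journaled write sequence described in the
# abstract above. All names are hypothetical; a real implementation would
# operate at the block layer on actual non-volatile journal media.
from dataclasses import dataclass


@dataclass
class JournalRecord:
    lba: int          # logical block address of the write
    data: bytes       # payload to be committed to main data storage
    valid: bool = True


class JournaledRaidWriter:
    def __init__(self):
        self.journal: list[JournalRecord] = []    # stands in for non-volatile journal storage
        self.main_storage: dict[int, bytes] = {}  # stands in for main data storage

    def write(self, lba: int, data: bytes) -> None:
        # 1. Record the new write in the journal first.
        record = JournalRecord(lba, data)
        self.journal.append(record)
        # 2. Only after the journal record is durable, write to main storage.
        self.main_storage[lba] = data
        # 3. Once main storage holds the data, invalidate the journal record.
        record.valid = False

    def replay_on_restart(self) -> None:
        # On restart, any still-valid record marks a write that may not have
        # reached main storage; re-apply it, then invalidate it.
        for record in self.journal:
            if record.valid:
                self.main_storage[record.lba] = record.data
                record.valid = False


if __name__ == "__main__":
    w = JournaledRaidWriter()
    w.write(0, b"block-0")
    w.replay_on_restart()   # no-op here: the record was already invalidated
    print(w.main_storage)
```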
-
Patent number: 12079521
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Grant
Filed: February 24, 2023
Date of Patent: September 3, 2024
Assignee: GRAID Technology Inc.
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
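
The abstract separates the control path (the controller computes parity and issues commands) from the data path (the drives pull write payloads directly from host memory). Below is a minimal Python sketch of that split under simplifying assumptions; every name is hypothetical, the "drives" are in-process stand-ins, and the sketch lets the parity step read host memory directly, which simplifies the patented data path.

```python
# Illustrative sketch: the controller-side code computes parity and tells each
# drive where its chunk lives in host memory; the user-data chunks themselves
# are fetched by the drives, not forwarded by the controller.
from functools import reduce


def xor_parity(chunks: list[bytes]) -> bytes:
    """Byte-wise XOR across equal-length chunks (RAID-5 style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))


class Drive:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def write_from_host_memory(self, lba: int, host_mem: memoryview,
                               offset: int, length: int) -> None:
        # The drive pulls the payload straight out of host memory; the
        # controller only told it *where* the data lives.
        self.blocks[lba] = bytes(host_mem[offset:offset + length])


def raid_write(host_mem: memoryview, chunk_size: int,
               data_drives: list[Drive], parity_drive: Drive, lba: int) -> None:
    chunks = [bytes(host_mem[i * chunk_size:(i + 1) * chunk_size])
              for i in range(len(data_drives))]
    # Controller-side work: parity only; user data is not forwarded by it.
    parity_drive.blocks[lba] = xor_parity(chunks)
    # Data drives are pointed at host memory and fetch their chunks directly.
    for i, drive in enumerate(data_drives):
        drive.write_from_host_memory(lba, host_mem, i * chunk_size, chunk_size)


if __name__ == "__main__":
    buf = memoryview(bytearray(b"AAAABBBBCCCC"))   # three 4-byte chunks
    data = [Drive("d0"), Drive("d1"), Drive("d2")]
    parity = Drive("p")
    raid_write(buf, 4, data, parity, lba=0)
    print(parity.blocks[0])
```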
-
Publication number: 20230221899
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Application
Filed: February 24, 2023
Publication date: July 13, 2023
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
-
Patent number: 11662955
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Grant
Filed: September 27, 2021
Date of Patent: May 30, 2023
Assignee: GRAID Technology Inc.
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
-
Publication number: 20230104509
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Application
Filed: September 27, 2021
Publication date: April 6, 2023
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
-
Patent number: 11561871
Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 24, 2023
Assignee: GRAID Technology Inc.
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
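
The abstract describes a CPU-side application exposing a virtual SSD interface built from a RAID configuration, with a massively parallel processor carrying out the data exchange against the member SSDs. The Python sketch below models only that division of labor; the thread pool is purely a stand-in for the MPP, and all class names and the RAID configuration keys are hypothetical.

```python
# Illustrative sketch: a virtual SSD interface (CPU side) hands commands to a
# backend that performs the per-SSD data exchange in parallel. A thread pool
# stands in for the massively parallel processor; names are hypothetical.
from concurrent.futures import ThreadPoolExecutor


class SSD:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}


class MppBackend:
    """Stand-in for the massively parallel processor executing data exchange."""

    def __init__(self, ssds: list[SSD]):
        self.ssds = ssds
        self.pool = ThreadPoolExecutor(max_workers=len(ssds))

    def write_stripe(self, lba: int, chunks: list[bytes]) -> None:
        # One chunk per member SSD, written in parallel.
        futures = [self.pool.submit(ssd.blocks.__setitem__, lba, chunk)
                   for ssd, chunk in zip(self.ssds, chunks)]
        for f in futures:
            f.result()


class VirtualSSD:
    """CPU-side virtual SSD interface generated from a RAID configuration."""

    def __init__(self, raid_config: dict, backend: MppBackend):
        self.chunk_size = raid_config["chunk_size"]
        self.backend = backend

    def write(self, lba: int, data: bytes) -> None:
        n = len(self.backend.ssds)
        chunks = [data[i * self.chunk_size:(i + 1) * self.chunk_size] for i in range(n)]
        self.backend.write_stripe(lba, chunks)   # command handed off to the MPP stand-in


if __name__ == "__main__":
    ssds = [SSD(f"ssd{i}") for i in range(4)]
    vssd = VirtualSSD({"chunk_size": 4}, MppBackend(ssds))
    vssd.write(0, b"AAAABBBBCCCCDDDD")
    print([s.blocks[0] for s in ssds])
```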
-
Patent number: 11416403
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Grant
Filed: February 4, 2021
Date of Patent: August 16, 2022
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
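
The abstract's key point is that user data reaches the storage device layer immediately while the corresponding metadata only enters a pipeline whose first stage is a cache, so each object write completes without an extra metadata write. The Python sketch below illustrates that idea under stated assumptions; all names (StorageDeviceLayer, MetadataCacheModule, the flush threshold) are hypothetical and the batching policy is invented for illustration.

```python
# Illustrative sketch: user data goes to the device layer right away, metadata
# is held in a cache stage of the pipeline and flushed in batches, so the
# request completes without per-write metadata amplification. Names and the
# flush policy are hypothetical.
class StorageDeviceLayer:
    def __init__(self):
        self.user_data: dict[str, bytes] = {}
        self.metadata_writes = 0   # counts metadata flushes into the device layer

    def write_user_data(self, obj_id: str, data: bytes) -> None:
        self.user_data[obj_id] = data

    def write_metadata(self, batch: dict[str, dict]) -> None:
        self.metadata_writes += 1


class MetadataCacheModule:
    """First pipeline stage: caches metadata outside the storage device layer."""

    def __init__(self, device_layer: StorageDeviceLayer, flush_threshold: int = 64):
        self.device_layer = device_layer
        self.cache: dict[str, dict] = {}
        self.flush_threshold = flush_threshold

    def put(self, obj_id: str, meta: dict) -> None:
        self.cache[obj_id] = meta
        # Metadata is only written back in batches, not once per user write.
        if len(self.cache) >= self.flush_threshold:
            self.device_layer.write_metadata(self.cache)
            self.cache = {}


class StorageServerHost:
    def __init__(self):
        self.device_layer = StorageDeviceLayer()
        self.metadata_pipeline = MetadataCacheModule(self.device_layer)

    def handle_object_write(self, obj_id: str, data: bytes) -> None:
        self.device_layer.write_user_data(obj_id, data)          # user data to the device layer
        self.metadata_pipeline.put(obj_id, {"len": len(data)})   # metadata into the pipeline
        # The request is considered complete here; no per-write metadata flush.


if __name__ == "__main__":
    host = StorageServerHost()
    for i in range(10):
        host.handle_object_write(f"obj-{i}", b"payload")
    print(host.device_layer.metadata_writes)   # 0: metadata is still cached
```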
-
Patent number: 11288197
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Grant
Filed: November 29, 2020
Date of Patent: March 29, 2022
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
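
The abstract describes choosing one pool out of several fixed-size buffer pools, allocating a buffer from it to serve the pipeline, and writing the metadata into that buffer. Below is a minimal Python sketch of that allocation step; the pool sizes, selection rule (smallest pool that fits), and all names are assumptions made for illustration.

```python
# Illustrative sketch: several pools of fixed-size buffers; the host selects
# the smallest pool whose buffers can hold the metadata, allocates a buffer,
# and writes the metadata into it. Sizes, names, and the selection rule are
# hypothetical.
class FixedSizeBufferPool:
    def __init__(self, buffer_size: int, count: int):
        self.buffer_size = buffer_size
        self.free = [bytearray(buffer_size) for _ in range(count)]

    def allocate(self) -> bytearray:
        return self.free.pop()

    def release(self, buf: bytearray) -> None:
        self.free.append(buf)


class BufferManager:
    def __init__(self):
        # A plurality of fixed-size buffer pools (sizes are illustrative).
        self.pools = [FixedSizeBufferPool(size, count=8) for size in (512, 4096, 65536)]

    def select_pool(self, needed: int) -> FixedSizeBufferPool:
        # Pick the smallest pool whose fixed buffer size can hold the metadata.
        for pool in self.pools:
            if pool.buffer_size >= needed and pool.free:
                return pool
        raise MemoryError("no fixed-size buffer pool can satisfy this request")

    def write_metadata(self, metadata: bytes) -> bytearray:
        pool = self.select_pool(len(metadata))
        buf = pool.allocate()              # this buffer now serves the pipeline
        buf[:len(metadata)] = metadata     # metadata written into the allocated buffer
        return buf


if __name__ == "__main__":
    mgr = BufferManager()
    buf = mgr.write_metadata(b'{"obj": "obj-0", "len": 7}')
    print(len(buf))   # 512: the smallest pool whose buffers fit
```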
-
Publication number: 20210191642
Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
Type: Application
Filed: August 10, 2020
Publication date: June 24, 2021
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
-
Publication number: 20210157729
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Application
Filed: February 4, 2021
Publication date: May 27, 2021
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Patent number: 10963385
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Grant
Filed: September 25, 2019
Date of Patent: March 30, 2021
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20210081321
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Application
Filed: November 29, 2020
Publication date: March 18, 2021
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Patent number: 10884933
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Grant
Filed: September 25, 2019
Date of Patent: January 5, 2021
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20200233609
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Application
Filed: September 25, 2019
Publication date: July 23, 2020
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20200233805
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the fixed size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Application
Filed: September 25, 2019
Publication date: July 23, 2020
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu