Patents by Inventor Cheng-Yue Chang
Cheng-Yue Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230221899
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Application
Filed: February 24, 2023
Publication date: July 13, 2023
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
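The abstract above describes a write path in which the RAID controller computes parity but never touches the user data itself. The following is a minimal illustrative sketch of that idea, not the patented implementation; all names (`write_stripe`, `dma_read_and_store`) are hypothetical:

```python
# Illustrative sketch of a RAID-5-style write where the controller
# computes parity, while each storage device fetches its data block
# directly from host memory (DMA), bypassing the controller.

def xor_parity(blocks):
    """Compute byte-wise XOR parity across equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def write_stripe(data_blocks, data_devices, parity_device, host_addrs):
    # The controller computes and stores only the parity information
    # on the first subset of devices (here: one parity device) ...
    parity_device.write(xor_parity(data_blocks))
    # ... while each data device pulls its block straight from host
    # memory, so the user data never passes through the controller.
    for device, addr in zip(data_devices, host_addrs):
        device.dma_read_and_store(addr)
```

XOR parity is the classic choice here because any single lost block can be reconstructed by XOR-ing the surviving blocks with the parity.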
-
Patent number: 11662955
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Grant
Filed: September 27, 2021
Date of Patent: May 30, 2023
Assignee: GRAID Technology Inc.
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
-
Publication number: 20230104509
Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
Type: Application
Filed: September 27, 2021
Publication date: April 6, 2023
Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
-
Patent number: 11561871
Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 24, 2023
Assignee: GRAID Technology Inc.
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
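The abstract above separates the command path (CPU-generated virtual SSD interface) from the data path (MPP exchanging data with the member SSDs). A highly simplified sketch of that split follows; the class and method names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: a virtual SSD interface generated from a RAID
# configuration relays commands to a massively parallel processor,
# which performs the actual data exchange with every member SSD.

class MassivelyParallelProcessor:
    def execute(self, command, ssds):
        # Stand-in for parallel per-SSD work (striping, parity, etc.);
        # here it just fans the command out to each member SSD.
        return [f"{command}->{ssd}" for ssd in ssds]

class VirtualSSDInterface:
    def __init__(self, raid_config, mpp):
        self.ssds = raid_config["members"]  # from the RAID configuration
        self.mpp = mpp

    def submit(self, command):
        # The CPU-side interface only relays commands; data movement
        # is delegated entirely to the MPP.
        return self.mpp.execute(command, self.ssds)
```

The design point being illustrated: the CPU never becomes a data-path bottleneck, because per-drive transfers are offloaded to the parallel processor.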
-
Patent number: 11416403
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Grant
Filed: February 4, 2021
Date of Patent: August 16, 2022
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
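The key idea in the abstract above is that metadata for a write is absorbed by a pipeline-level cache module instead of being flushed down to the storage device layer, avoiding metadata write amplification. A minimal sketch under that assumption (all identifiers are invented for illustration):

```python
# Sketch: user data goes down to the storage device layer, while the
# per-object metadata is cached in a pipeline cache module outside
# that layer, so completing the write triggers no extra metadata I/O.

class MetadataCacheModule:
    def __init__(self):
        self.cache = {}
        self.flushes_to_storage = 0  # flushes would cause write amplification

    def put(self, key, metadata):
        self.cache[key] = metadata   # absorbed by the cache; no flush

def handle_object_write(cache, object_id, user_data, storage_layer):
    # User data is written into the storage device layer ...
    storage_layer.append((object_id, user_data))
    # ... but the corresponding metadata only enters the cache module.
    cache.put(object_id, {"len": len(user_data)})
    return "completed"
```

In the patented design the cache module is a hardware pipeline stage; this sketch only models the data flow, not the hardware.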
-
Patent number: 11288197
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed-size buffer pool from a plurality of fixed-size buffer pools; utilizing the host device to allocate a buffer from the fixed-size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Grant
Filed: November 29, 2020
Date of Patent: March 29, 2022
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
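The buffer-management step described above (pick a pool from several fixed-size pools, then allocate a buffer from it) can be sketched as follows. This is a hedged illustration with invented names and a best-fit selection policy the abstract does not specify:

```python
# Sketch: several pools of fixed-size buffers; a pool is selected for
# the metadata being written, and one buffer is allocated from it to
# serve as a pipeline buffering module.

class FixedSizeBufferPool:
    def __init__(self, buffer_size, count):
        self.buffer_size = buffer_size
        self.free = [bytearray(buffer_size) for _ in range(count)]

    def allocate(self):
        return self.free.pop()  # hand out a pre-sized buffer, no malloc

def select_pool(pools, needed_bytes):
    # Assumed policy: smallest pool whose buffers can hold the metadata.
    return min((p for p in pools if p.buffer_size >= needed_bytes),
               key=lambda p: p.buffer_size)

pools = [FixedSizeBufferPool(size, 4) for size in (64, 256, 1024)]
buf = select_pool(pools, 100).allocate()  # 100 bytes fit a 256-byte buffer
```

Fixed-size pools avoid fragmentation and make allocation O(1), which is presumably why the method uses a plurality of them rather than a general allocator.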
-
Patent number: 11249670
Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than that of the second portion, and the access speed of the first storage device is higher than that of the second storage device.
Type: Grant
Filed: July 21, 2020
Date of Patent: February 15, 2022
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
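The tiering policy in the abstract above (keep a small head of cold data on the fast device, move the larger remainder to the slow device) can be sketched as below. The coldness heuristic and the 4 KiB split point are assumptions for illustration; the abstract does not define them:

```python
# Sketch of the described policy: when data is judged cold, the small
# first portion stays on the fast storage device (for a quick first
# response) and the larger second portion migrates to slow storage.

def is_cold(last_access_ts, now, threshold_secs=30 * 24 * 3600):
    # Assumed heuristic: cold if untouched for more than ~30 days.
    return (now - last_access_ts) > threshold_secs

def tier(data, fast_store, slow_store, head_bytes=4096):
    # First (smaller) portion remains on the fast device ...
    head, tail = data[:head_bytes], data[head_bytes:]
    fast_store["head"] = head
    # ... second (larger) portion moves to the slower, cheaper device.
    slow_store["tail"] = tail
```

Keeping a small head on fast storage lets the server begin serving a read immediately while the tail is fetched from the slower tier.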
-
Patent number: 11073988
Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with physical storage of at least another device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
Type: Grant
Filed: December 12, 2019
Date of Patent: July 27, 2021
Assignee: SILICON MOTION TECHNOLOGY (HONG KONG) LTD.
Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong
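The access chain in the abstract above runs client VM → virtual storage controller VM → controller VM → clustered physical storage. A highly simplified sketch of that chain follows; every class and method name is an illustrative assumption:

```python
# Sketch: the controller VM clusters local and peer physical storage
# and defines a virtual storage pool over it; a virtual storage
# controller VM interfaces client VMs with that pool.

class ControllerVM:
    def __init__(self, local_storage, peer_storage):
        # Physical storage of this device and of peer devices is
        # clustered together via the controller VM.
        self.cluster = [local_storage] + peer_storage
        self.pool = {}  # virtual storage pool defined in the cluster

    def write(self, key, value):
        self.pool[key] = value

class VirtualStorageControllerVM:
    """Interfaces a client VM with the virtual storage pool."""
    def __init__(self, controller_vm):
        self.controller = controller_vm

    def access(self, key, value):
        # Client I/O reaches the pool only through this VM and then
        # the controller VM, never the physical devices directly.
        self.controller.write(key, value)
        return self.controller.pool[key]
```

The indirection lets the cluster grow or rebalance underneath without the client VMs noticing.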
-
Publication number: 20210191642
Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
Type: Application
Filed: August 10, 2020
Publication date: June 24, 2021
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
-
Publication number: 20210157729
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Application
Filed: February 4, 2021
Publication date: May 27, 2021
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Patent number: 10963385
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Grant
Filed: September 25, 2019
Date of Patent: March 30, 2021
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20210081321
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed-size buffer pool from a plurality of fixed-size buffer pools; utilizing the host device to allocate a buffer from the fixed-size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Application
Filed: November 29, 2020
Publication date: March 18, 2021
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Patent number: 10884933
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed-size buffer pool from a plurality of fixed-size buffer pools; utilizing the host device to allocate a buffer from the fixed-size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Grant
Filed: September 25, 2019
Date of Patent: January 5, 2021
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20200348877
Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than that of the second portion, and the access speed of the first storage device is higher than that of the second storage device.
Type: Application
Filed: July 21, 2020
Publication date: November 5, 2020
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
-
Patent number: 10789005
Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than that of the second portion, and the access speed of the first storage device is higher than that of the second storage device.
Type: Grant
Filed: January 29, 2019
Date of Patent: September 29, 2020
Assignee: Silicon Motion Technology (Hong Kong) Limited
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
-
Publication number: 20200233609
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
Type: Application
Filed: September 25, 2019
Publication date: July 23, 2020
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20200233805
Abstract: A method for performing pipeline-based accessing management in a storage server and associated apparatus are provided. The method includes: in response to a request of writing user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and start processing an object write command corresponding to the request of writing the user data with a pipeline architecture of the storage server; utilizing the host device to select a fixed-size buffer pool from a plurality of fixed-size buffer pools; utilizing the host device to allocate a buffer from the fixed-size buffer pool to be a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
Type: Application
Filed: September 25, 2019
Publication date: July 23, 2020
Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
-
Publication number: 20200225865
Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than that of the second portion, and the access speed of the first storage device is higher than that of the second storage device.
Type: Application
Filed: January 29, 2019
Publication date: July 16, 2020
Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
-
Publication number: 20200117364
Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with physical storage of at least another device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
Type: Application
Filed: December 12, 2019
Publication date: April 16, 2020
Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong
-
Patent number: 10558359
Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with physical storage of at least another device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
Type: Grant
Filed: November 10, 2014
Date of Patent: February 11, 2020
Assignee: SILICON MOTION TECHNOLOGY (HONG KONG) LTD.
Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong