Patents by Inventor Cheng-Yue Chang

Cheng-Yue Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
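Short illustrative code sketches of the distinct inventions covered by these filings appear after the listing below.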

  • Publication number: 20230221899
    Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
    Type: Application
    Filed: February 24, 2023
    Publication date: July 13, 2023
    Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
  • Patent number: 11662955
    Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: May 30, 2023
    Assignee: GRAID Technology Inc.
    Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
  • Publication number: 20230104509
    Abstract: Direct memory access data path for RAID storage is disclosed, including: receiving, at a Redundant Array of Independent Disks (RAID) controller, a request to write data to be distributed among a plurality of storage devices; computing parity information based at least in part on the data associated with the request; causing the parity information to be stored on a first subset of the plurality of storage devices; and causing the data associated with the request to be stored on a second subset of the plurality of storage devices, wherein the plurality of storage devices is configured to obtain the data associated with the request directly from a memory that is remote to the RAID controller, and wherein the data associated with the request does not pass through the RAID controller.
    Type: Application
    Filed: September 27, 2021
    Publication date: April 6, 2023
    Inventors: Guo-Fu Tseng, Tsung-Lin Yu, Cheng-Yue Chang
  • Patent number: 11561871
    Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: January 24, 2023
    Assignee: GRAID Technology Inc.
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
  • Patent number: 11416403
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: August 16, 2022
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Patent number: 11288197
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the selected fixed size buffer pool to serve as a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
    Type: Grant
    Filed: November 29, 2020
    Date of Patent: March 29, 2022
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Patent number: 11249670
    Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device, while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: February 15, 2022
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
  • Patent number: 11073988
    Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with the physical storage of at least one other device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: July 27, 2021
    Assignee: SILICON MOTION TECHNOLOGY (HONG KONG) LTD.
    Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong
  • Publication number: 20210191642
    Abstract: A data transmission and protection system includes a plurality of solid-state drives (SSDs), a storage medium, a central processing unit (CPU) and a massively parallel processor (MPP). The storage medium stores an application program and a redundant array of independent disks (RAID) configuration. The CPU is coupled to the storage medium and configured to execute the application program to generate a virtual SSD interface for the plurality of SSDs according to the RAID configuration. The MPP is coupled to the virtual SSD interface and the plurality of SSDs. The MPP is configured to execute data exchange with the plurality of SSDs in response to a command received from the virtual SSD interface.
    Type: Application
    Filed: August 10, 2020
    Publication date: June 24, 2021
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Guo-Fu Tseng
  • Publication number: 20210157729
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
    Type: Application
    Filed: February 4, 2021
    Publication date: May 27, 2021
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Patent number: 10963385
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: March 30, 2021
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Publication number: 20210081321
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the selected fixed size buffer pool to serve as a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
    Type: Application
    Filed: November 29, 2020
    Publication date: March 18, 2021
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Patent number: 10884933
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the selected fixed size buffer pool to serve as a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: January 5, 2021
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Publication number: 20200348877
    Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device, while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
    Type: Application
    Filed: July 21, 2020
    Publication date: November 5, 2020
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
  • Patent number: 10789005
    Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device, while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: September 29, 2020
    Assignee: Silicon Motion Technology (Hong Kong) Limited
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
  • Publication number: 20200233609
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to input metadata corresponding to the user data into at least one pipeline within the pipeline architecture; and utilizing the host device to cache the metadata with a first cache module of the pipeline, for controlling the storage server to complete the request without generating write amplification of the metadata, wherein the first cache module is a hardware pipeline module outside the storage device layer.
    Type: Application
    Filed: September 25, 2019
    Publication date: July 23, 2020
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Publication number: 20200233805
    Abstract: A method for performing pipeline-based accessing management in a storage server and an associated apparatus are provided. The method includes: in response to a request to write user data into the storage server, utilizing a host device within the storage server to write the user data into a storage device layer of the storage server and to start processing an object write command corresponding to the request with a pipeline architecture of the storage server; utilizing the host device to select a fixed size buffer pool from a plurality of fixed size buffer pools; utilizing the host device to allocate a buffer from the selected fixed size buffer pool to serve as a pipeline module of at least one pipeline within the pipeline architecture, for performing buffering for the at least one pipeline; and utilizing the host device to write metadata corresponding to the user data into the allocated buffer.
    Type: Application
    Filed: September 25, 2019
    Publication date: July 23, 2020
    Inventors: Guo-Fu Tseng, Cheng-Yue Chang, Kuan-Kai Chiu
  • Publication number: 20200225865
    Abstract: The present invention provides a server which includes a network interface, a processor and a first storage device, wherein the processor is arranged for communicating with an electronic device via the network interface, and the first storage device stores data. In the operations of the server, the processor determines whether the data is cold data; and when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device, while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than the data amount of the second portion, and the access speed of the first storage device is higher than the access speed of the second storage device.
    Type: Application
    Filed: January 29, 2019
    Publication date: July 16, 2020
    Inventors: Tsung-Lin Yu, Cheng-Yue Chang, Po-Hsun Yen
  • Publication number: 20200117364
    Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with the physical storage of at least one other device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
    Type: Application
    Filed: December 12, 2019
    Publication date: April 16, 2020
    Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong
  • Patent number: 10558359
    Abstract: A device and a method for virtual storage are provided. The device includes a physical processor, a hypervisor and a physical storage. The hypervisor is executed on the physical processor and configured to create at least one client virtual machine and a controller virtual machine. The physical storage is clustered with the physical storage of at least one other device via the controller virtual machine to form a storage cluster. The controller virtual machine is further configured to define a virtual storage pool in the storage cluster and create at least one virtual storage controller virtual machine to interface the at least one client virtual machine with the virtual storage pool so that the at least one client virtual machine accesses the virtual storage pool via the at least one virtual storage controller virtual machine and the controller virtual machine. The method is applied to the device to implement the operations.
    Type: Grant
    Filed: November 10, 2014
    Date of Patent: February 11, 2020
    Assignee: SILICON MOTION TECHNOLOGY (HONG KONG) LTD.
    Inventors: Cheng-Yue Chang, Jian-Ying Chen, Yung-Hua Chu, Kuan-Kai Chiu, Po-Hsun Yen, Tsung-Lin Yu, Ming-Xun Zhong
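
Illustrative code sketches

The sketches below are orientation aids only: they are not taken from the patent documents, and every class, function, constant, and parameter name in them is invented for illustration. Python is used throughout.

The direct memory access data path entries (publication 20230221899, patent 11662955, publication 20230104509) turn on computing parity from the write data and storing parity and data on different subsets of drives, while the data itself reaches the drives from a memory remote to the RAID controller without passing through the controller. A minimal sketch of the parity-and-placement step, assuming equal-size chunks and in-memory stand-ins for the drives (the direct memory transfer itself is not modeled):

    # Illustrative sketch only: RAID-5-style parity computation and chunk placement.
    # Device names are hypothetical; a real implementation would have the drives pull
    # the data directly from host memory rather than from the RAID controller.

    def xor_parity(chunks):
        """XOR equal-length data chunks together to form the parity chunk."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    def write_stripe(data, devices, chunk_size):
        """Split data into chunks, compute parity, and assign chunks to devices.

        The last device receives the parity (the "first subset" of the claim);
        the remaining devices receive the data (the "second subset").
        """
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        assert len(chunks) == len(devices) - 1, "one data chunk per data device"
        placement = dict(zip(devices[:-1], chunks))
        placement[devices[-1]] = xor_parity(chunks)
        return placement

    if __name__ == "__main__":
        stripe = write_stripe(b"abcdefgh", ["nvme0", "nvme1", "nvme2"], chunk_size=4)
        for dev, chunk in stripe.items():
            print(dev, chunk.hex())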
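
Patent 11561871 and publication 20210191642 describe a CPU that executes an application program to expose a virtual SSD interface according to a RAID configuration, while a massively parallel processor (MPP) exchanges data with the physical SSDs in response to commands received from that interface. A minimal sketch of the command hand-off, assuming a RAID-0-style striping scheme and plain dictionaries standing in for the SSDs:

    # Illustrative sketch only: a virtual SSD interface forwarding write commands to
    # an "MPP" that stripes the payload across the physical SSDs. The striping scheme
    # and all names are assumptions for illustration.

    class MPP:
        def __init__(self, ssds):
            self.ssds = ssds                              # dicts standing in for SSDs

        def exchange(self, command):
            """Stripe the payload of a write command across the SSDs."""
            data, stripe = command["data"], command["stripe_size"]
            for offset in range(0, len(data), stripe):
                ssd = self.ssds[(offset // stripe) % len(self.ssds)]
                ssd[command["lba"] + offset] = data[offset:offset + stripe]

    class VirtualSSDInterface:
        def __init__(self, mpp, raid_config):
            self.mpp = mpp
            self.raid_config = raid_config

        def write(self, lba, data):
            self.mpp.exchange({"lba": lba, "data": data,
                               "stripe_size": self.raid_config["stripe_size"]})

    if __name__ == "__main__":
        mpp = MPP(ssds=[{}, {}])
        vssd = VirtualSSDInterface(mpp, raid_config={"stripe_size": 4})
        vssd.write(0, b"abcdefgh")
        print(mpp.ssds)                                   # [{0: b'abcd'}, {4: b'efgh'}]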
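
Patents 11416403 and 10963385 (with publications 20210157729 and 20200233609) describe caching the metadata of an object write in a cache module that sits outside the storage device layer, so the request can complete without generating write amplification of the metadata. A minimal sketch of that idea, with an in-memory cache object and a write counter standing in for device-layer writes:

    # Illustrative sketch only: user data goes to the storage device layer, while the
    # object metadata only reaches a cache module outside that layer, so completing
    # the write costs a single device-layer write. All names are hypothetical.

    class MetadataCacheModule:
        """Stands in for a hardware pipeline cache module outside the storage device layer."""
        def __init__(self):
            self.entries = {}

        def put(self, object_id, metadata):
            self.entries[object_id] = metadata

        def get(self, object_id):
            return self.entries.get(object_id)

    class StorageDeviceLayer:
        def __init__(self):
            self.writes = 0
            self.blobs = {}

        def write(self, object_id, payload):
            self.writes += 1
            self.blobs[object_id] = payload

    def handle_object_write(object_id, user_data, devices, cache):
        devices.write(object_id, user_data)               # user data reaches the device layer
        cache.put(object_id, {"length": len(user_data)})  # metadata stays in the cache module

    if __name__ == "__main__":
        devices, cache = StorageDeviceLayer(), MetadataCacheModule()
        handle_object_write("obj-1", b"payload", devices, cache)
        print(devices.writes, cache.get("obj-1"))         # 1 {'length': 7}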
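
Patents 11288197 and 10884933 (with publications 20210081321 and 20200233805) describe selecting a fixed size buffer pool from several pools and allocating a buffer from it to act as a pipeline module that holds the metadata. A minimal sketch of such an allocator, with arbitrary pool sizes and a hypothetical metadata record:

    # Illustrative sketch only: choose the smallest fixed-size pool that fits the
    # metadata, take a buffer from it, and write the metadata into that buffer.
    # Pool sizes and the metadata format are arbitrary illustrative choices.

    class FixedSizeBufferPool:
        def __init__(self, buffer_size, count):
            self.buffer_size = buffer_size
            self.free = [bytearray(buffer_size) for _ in range(count)]

        def allocate(self):
            return self.free.pop() if self.free else None

        def release(self, buf):
            self.free.append(buf)

    def select_pool(pools, needed):
        """Pick the smallest pool whose buffers can hold `needed` bytes."""
        candidates = [p for p in pools if p.buffer_size >= needed]
        return min(candidates, key=lambda p: p.buffer_size) if candidates else None

    if __name__ == "__main__":
        pools = [FixedSizeBufferPool(512, 8), FixedSizeBufferPool(4096, 4)]
        metadata = b'{"object": "obj-1", "length": 4096}'    # hypothetical metadata record
        pool = select_pool(pools, len(metadata))
        buf = pool.allocate()
        buf[:len(metadata)] = metadata                        # metadata written into the buffer
        print(pool.buffer_size, bytes(buf[:len(metadata)]))   # 512 and the record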
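
Patents 11249670 and 10789005 (with publications 20200348877 and 20200225865) describe detecting cold data and moving the larger second portion of it to a slower second storage device while a smaller first portion stays on the faster first device. A minimal sketch of that split, where the 30-day age threshold and the 1:9 keep ratio are illustrative choices, not values from the patents:

    import time

    # Illustrative sketch only: cold-data detection and two-tier placement.
    # COLD_AGE_SECONDS and keep_ratio are arbitrary illustrative values.

    COLD_AGE_SECONDS = 30 * 24 * 3600

    def is_cold(last_access_ts, now=None):
        """Treat data as cold when it has not been accessed for COLD_AGE_SECONDS."""
        now = time.time() if now is None else now
        return (now - last_access_ts) > COLD_AGE_SECONDS

    def tier_cold_data(data, fast_tier, slow_tier, keep_ratio=0.1):
        """Keep a small first portion on the fast device and move the rest to the slow one."""
        split = int(len(data) * keep_ratio)
        fast_tier[:] = data[:split]          # smaller first portion stays on fast storage
        slow_tier[:] = data[split:]          # larger second portion moves to slow storage

    if __name__ == "__main__":
        fast, slow = bytearray(), bytearray()
        payload = bytes(range(100))
        if is_cold(last_access_ts=0.0):      # an epoch timestamp is clearly older than 30 days
            tier_cold_data(payload, fast, slow)
        print(len(fast), len(slow))          # 10 90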
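
Patents 11073988 and 10558359 (with publication 20200117364) describe a controller virtual machine that clusters local physical storage with that of other devices, defines a virtual storage pool in the resulting cluster, and creates virtual storage controller virtual machines through which client virtual machines access the pool. A minimal object-model sketch of that access path, with the hypervisor and all virtual-machine mechanics omitted and every name invented for illustration:

    # Illustrative sketch only: client VM -> virtual storage controller VM ->
    # controller VM -> storage cluster. A real system would run these as virtual
    # machines created by a hypervisor; here they are plain objects.

    class StorageCluster:
        """Stands in for physical storage clustered across several devices."""
        def __init__(self):
            self.blocks = {}

        def write(self, block_id, data):
            self.blocks[block_id] = data

        def read(self, block_id):
            return self.blocks.get(block_id)

    class ControllerVM:
        def __init__(self, cluster):
            self.pool = cluster                            # the virtual storage pool it defines

        def spawn_virtual_controller(self):
            return VirtualStorageControllerVM(self)

    class VirtualStorageControllerVM:
        """Interfaces a client VM with the virtual storage pool via the controller VM."""
        def __init__(self, controller):
            self.controller = controller

        def write(self, block_id, data):
            self.controller.pool.write(block_id, data)

        def read(self, block_id):
            return self.controller.pool.read(block_id)

    if __name__ == "__main__":
        controller = ControllerVM(StorageCluster())
        vsc = controller.spawn_virtual_controller()        # handed to a client VM
        vsc.write("blk-0", b"hello")
        print(vsc.read("blk-0"))                           # b'hello'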