Patents by Inventor Jerry Chi-Yuan CHOU

Jerry Chi-Yuan CHOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230145177
    Abstract: A federated learning method and a federated learning system based on a mediation process are provided. The federated learning method includes: dividing a plurality of client devices into a plurality of mediator groups and generating a plurality of mediator modules; configuring a server device to broadcast initial model weight data to the plurality of mediator modules; configuring the plurality of mediator modules to execute a sequential training process for the plurality of mediator groups to train a target model and generate trained model weight data; configuring the server device to execute a weighted federated averaging algorithm to generate global model weight data; and configuring the server device to set the target model with the global model weight data to generate a global target model.
    Type: Application
    Filed: November 25, 2021
    Publication date: May 11, 2023
    Inventors: PING-FENG WANG, CHIUN-SHENG HSU, JERRY CHI-YUAN CHOU
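The broadcast, sequential training, and weighted-averaging flow described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: the function names, the scalar "model weight", and the half-step local update rule are all assumptions made for the example.

```python
def weighted_fedavg(weights_list, sample_counts):
    """Weighted federated averaging: each mediator's trained weights
    contribute in proportion to the samples its group trained on."""
    total = sum(sample_counts)
    return sum(w * n / total for w, n in zip(weights_list, sample_counts))

def sequential_mediator_training(initial_weight, mediator_groups, local_train):
    """Within each mediator group, clients train one after another,
    each starting from the weights the previous client produced."""
    trained = []
    for group in mediator_groups:
        w = initial_weight  # weights broadcast by the server
        for client_data in group:
            w = local_train(w, client_data)
        trained.append(w)
    return trained

# Toy demo: the "model" is a single scalar weight, and local training
# nudges it halfway toward the mean of the client's data.
def local_train(w, data):
    return w + 0.5 * (sum(data) / len(data) - w)

groups = [[[1.0, 3.0], [2.0]],  # mediator group 1: two clients
          [[10.0]]]             # mediator group 2: one client
trained = sequential_mediator_training(0.0, groups, local_train)
global_weight = weighted_fedavg(trained, sample_counts=[3, 1])
```

The server would then set the target model with `global_weight` to obtain the global target model; the weighting by sample count is one common choice for the averaging step.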
  • Publication number: 20220101195
    Abstract: A machine learning system includes a host device and several client devices. The client devices each receive a host model from the host device and include a first and a second client device. The first and second client devices store a first and a second parameter set, respectively, and perform training on the received host models according to the first and second parameter sets, respectively, to generate a first and a second training result. If the host device has received the first training result corresponding to an m-th training round but has not received the second training result corresponding to an n-th training round, and the difference between m and n is not higher than a threshold value, the host device updates the host model according to the first training result without using the second training result.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 31, 2022
    Inventors: Ping-Feng WANG, Jerry Chi-Yuan CHOU, Keng-Jui HSU, Chiun-Sheng HSU
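The staleness-bounded update rule in the abstract, where the host proceeds without a straggler as long as the round gap stays within a threshold, can be sketched as below. The class, the additive update, and the scalar model are hypothetical illustrations, not the system's actual interfaces.

```python
class Host:
    """Host that applies a client's round-m update without waiting,
    provided no other client lags by more than `threshold` rounds."""

    def __init__(self, model, threshold):
        self.model = model
        self.threshold = threshold
        self.last_round = {}  # client id -> last round whose result arrived

    def on_result(self, client_id, round_no, update, all_clients):
        self.last_round[client_id] = round_no
        # Round of the slowest client (0 if it has sent nothing yet).
        slowest = min(self.last_round.get(c, 0) for c in all_clients)
        if round_no - slowest <= self.threshold:
            self.model = self.model + update  # apply without waiting
            return True
        return False  # gap too large: hold this update for the straggler

host = Host(model=0.0, threshold=2)
host.last_round = {"b": 2}  # client b last reported round 2
applied = host.on_result("a", 3, 1.0, ["a", "b"])   # gap 1 <= 2: applied
stale = host.on_result("a", 6, 1.0, ["a", "b"])     # gap 4 > 2: deferred
```

The threshold trades off freshness against waiting: a larger value lets fast clients advance further ahead of slow ones before the host blocks.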
  • Patent number: 10460241
    Abstract: A server and a cloud computing resource optimization method thereof for big data cloud computing architecture are provided. The server runs a dynamic scaling system to perform the following operations: receiving a task message; executing a profiling procedure to generate a profile based on a to-be-executed task recorded in the task message; executing a classifying procedure to determine a task classification of the to-be-executed task; executing a prediction procedure to obtain a plurality of predicted execution times corresponding to a plurality of computing node numbers, a computing node type and a system parameter of the to-be-executed task; executing an optimization procedure to determine a practical computing node number of the to-be-executed task; and transmitting an optimization output message to a management server to make the management server allocate at least one data computing system to execute a program file of the to-be-executed task.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: October 29, 2019
    Assignee: Institute For Information Industry
    Inventors: Jerry Chi-Yuan Chou, Shih-Yu Lu, Chen-Chun Chen, Chan-Yi Lin, Hsin-Tse Lu
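The prediction and optimization procedures in the abstract can be sketched as a two-step pipeline: predict an execution time per candidate node count, then pick the smallest count that meets a target. The scaling model (serial time divided by nodes, plus a per-node overhead), the deadline objective, and all names here are assumptions for illustration only.

```python
def predict_times(node_counts, profile):
    """Hypothetical prediction step: estimate execution time for each
    candidate node count, assuming near-linear speedup plus a
    coordination overhead that grows with the node count."""
    base = profile["serial_time"]
    overhead = profile["per_node_overhead"]
    return {n: base / n + overhead * n for n in node_counts}

def optimize_nodes(predicted, deadline):
    """Optimization step: choose the smallest node count whose predicted
    time meets the deadline; fall back to the fastest option otherwise."""
    feasible = [n for n in sorted(predicted) if predicted[n] <= deadline]
    if feasible:
        return feasible[0]
    return min(predicted, key=predicted.get)

profile = {"serial_time": 100.0, "per_node_overhead": 1.0}
times = predict_times([1, 2, 4, 8], profile)
chosen = optimize_nodes(times, deadline=30.0)  # practical node number
```

Preferring the smallest feasible count reflects the resource-optimization goal: meet the performance target without over-allocating nodes.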
  • Publication number: 20180144251
    Abstract: A server and a cloud computing resource optimization method thereof for cloud big data computing architecture are provided. The server runs a dynamic scaling system to perform the following operations: receiving a task message; executing a profiling procedure to generate a profile based on a to-be-executed task recorded in the task message; executing a classifying procedure to determine a task classification of the to-be-executed task; executing a prediction procedure to obtain a plurality of predicted execution times corresponding to a plurality of computing node numbers, a computing node type and a system parameter of the to-be-executed task; executing an optimization procedure to determine a practical computing node number of the to-be-executed task; and transmitting an optimization output message to a management server to make the management server allocate at least one data computing system to execute a program file of the to-be-executed task.
    Type: Application
    Filed: December 7, 2016
    Publication date: May 24, 2018
    Inventors: Jerry Chi-Yuan CHOU, Shih-Yu LU, Chen-Chun CHEN, Chan-Yi LIN, Hsin-Tse LU