Patents by Inventor Zhanghao Hu
Zhanghao Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240211813
Abstract: A method includes receiving a set of training data and selecting a first machine learning platform based on a first optimization function that metrics past machine learning platforms used for training on the set of training data. The method also includes selecting a first algorithm supported by the first machine learning platform based on a second optimization function that metrics past algorithms used for training on the set of training data. Further, the method includes determining one or more hyperparameters supported by the first algorithm based on a third optimization function that metrics past combinations of hyperparameters from the set of hyperparameters used for training on the set of training data. The method also includes training a machine learning model on the set of training data using the first machine learning platform, the first algorithm, and the one or more hyperparameters.
Type: Application
Filed: December 29, 2023
Publication date: June 27, 2024
Inventors: Lichao Liu, Xuyao Hao, Zhanghao Hu
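This abstract describes a staged search over a history of prior runs rather than a concrete implementation. As a rough illustration of that idea, the Python sketch below scores platforms, then algorithms, then hyperparameter combinations against hypothetical past runs on the same training data. The run-history format, the averaged-AUC scoring rule, and every name in the sketch are assumptions, not details from the patent.

```python
# Illustrative sketch only. The run-history records, the scoring rule,
# and all names below are assumptions; the patent does not disclose them.
from statistics import mean

# Hypothetical log of past training runs on this same set of training data.
past_runs = [
    {"platform": "spark-ml", "algorithm": "gbt", "hyperparameters": {"depth": 6}, "auc": 0.81},
    {"platform": "sklearn", "algorithm": "random_forest", "hyperparameters": {"n_estimators": 200}, "auc": 0.84},
    {"platform": "sklearn", "algorithm": "gbt", "hyperparameters": {"depth": 4}, "auc": 0.86},
]

def score_choices(runs, key):
    """Stand-in for an 'optimization function': average the past metric per choice."""
    grouped = {}
    for run in runs:
        grouped.setdefault(run[key], []).append(run["auc"])
    return {choice: mean(values) for choice, values in grouped.items()}

# Stage 1: select the platform whose past runs on this data scored best.
platform_scores = score_choices(past_runs, "platform")
platform = max(platform_scores, key=platform_scores.get)

# Stage 2: among runs on that platform, select the best-scoring algorithm.
platform_runs = [r for r in past_runs if r["platform"] == platform]
algorithm_scores = score_choices(platform_runs, "algorithm")
algorithm = max(algorithm_scores, key=algorithm_scores.get)

# Stage 3: reuse the best past hyperparameter combination for that algorithm.
algorithm_runs = [r for r in platform_runs if r["algorithm"] == algorithm]
hyperparameters = max(algorithm_runs, key=lambda r: r["auc"])["hyperparameters"]

print(platform, algorithm, hyperparameters)
# A real system would now train a model with the selected platform, algorithm,
# and hyperparameters, and append the outcome to past_runs for future selections.
```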
-
Patent number: 11900231
Abstract: A method includes receiving a set of training data and selecting a first machine learning platform based on a first optimization function that metrics past machine learning platforms used for training on the set of training data. The method also includes selecting a first algorithm supported by the first machine learning platform based on a second optimization function that metrics past algorithms used for training on the set of training data. Further, the method includes determining one or more hyperparameters supported by the first algorithm based on a third optimization function that metrics past combinations of hyperparameters from the set of hyperparameters used for training on the set of training data. The method also includes training a machine learning model on the set of training data using the first machine learning platform, the first algorithm, and the one or more hyperparameters.
Type: Grant
Filed: December 31, 2019
Date of Patent: February 13, 2024
Assignee: PAYPAL, INC.
Inventors: Lichao Liu, Xuyao Hao, Zhanghao Hu
-
Patent number: 11893465
Abstract: Methods and systems are presented for generating a machine learning model using enhanced gradient boosting techniques. The machine learning model is configured to receive inputs corresponding to a set of features and to produce an output based on the inputs. The machine learning model includes multiple layers, wherein each layer includes multiple models. To generate the machine learning model, multiple models are built and trained in parallel for each layer of the machine learning model. The multiple models use different subsets of features to produce corresponding output values. After a layer is built and trained, a collective error may be determined for the layer based on the output values from the different models in the layer. An additional layer of models may be added to the machine learning model to reduce the collective error of a previous layer.
Type: Grant
Filed: January 27, 2023
Date of Patent: February 6, 2024
Assignee: PayPal, Inc.
Inventors: Zhanghao Hu, Fangbo Tu, Xuyao Hao, Yanzan Zhou
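The abstract describes the layered structure only in general terms. The sketch below is one loose interpretation: each layer fits several small regressors (sequentially here, though a real system could fit them in parallel) on different feature subsets, their averaged output is folded into the running prediction, and the remaining mean squared error stands in for the "collective error" that the next layer tries to reduce. The synthetic data, model type, feature-subset size, and fixed number of layers are all assumptions.

```python
# Illustrative sketch only: a layer-by-layer boosting loop loosely following the
# abstract. Feature-subset choice, the regressor type, and the fixed layer count
# are assumptions, not the patented method.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = 2.0 * X[:, 0] + np.sin(X[:, 3]) + rng.normal(scale=0.1, size=500)

prediction = np.zeros_like(y)
models_per_layer, n_layers = 4, 3
layers = []

for _ in range(n_layers):
    residual = y - prediction                      # error left over from previous layers
    layer = []
    for _ in range(models_per_layer):              # these fits could run in parallel
        feats = rng.choice(X.shape[1], size=6, replace=False)  # different feature subset per model
        model = DecisionTreeRegressor(max_depth=3).fit(X[:, feats], residual)
        layer.append((feats, model))
    # Combine the layer's outputs and fold them into the running prediction.
    layer_output = np.mean([m.predict(X[:, f]) for f, m in layer], axis=0)
    prediction += layer_output
    layers.append(layer)
    collective_error = np.mean((y - prediction) ** 2)
    print(f"collective error after layer {len(layers)}: {collective_error:.4f}")
```

Unlike standard gradient boosting, where each stage adds a single tree, the structure described in the abstract adds a whole layer of models per stage, each model seeing a different subset of features.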
-
Publication number: 20230186164
Abstract: Methods and systems are presented for generating a machine learning model using enhanced gradient boosting techniques. The machine learning model is configured to receive inputs corresponding to a set of features and to produce an output based on the inputs. The machine learning model includes multiple layers, wherein each layer includes multiple models. To generate the machine learning model, multiple models are built and trained in parallel for each layer of the machine learning model. The multiple models use different subsets of features to produce corresponding output values. After a layer is built and trained, a collective error may be determined for the layer based on the output values from the different models in the layer. An additional layer of models may be added to the machine learning model to reduce the collective error of a previous layer.
Type: Application
Filed: January 27, 2023
Publication date: June 15, 2023
Inventors: Zhanghao Hu, Fangbo Tu, Xuyao Hao, Yanzan Zhou
-
Patent number: 11615347
Abstract: A method includes training a first machine learning model based on a set of training data and, based on the training, determining a first performance metric corresponding to the first machine learning model. The method also includes determining one or more past performance metrics corresponding to one or more machine learning models that were previously trained based on the set of training data. Based on the first performance metric and the one or more past performance metrics, the method includes automatically selecting a second machine learning model to train based on the set of training data.
Type: Grant
Filed: December 31, 2019
Date of Patent: March 28, 2023
Assignee: PAYPAL, INC.
Inventors: Lichao Liu, Xuyao Hao, Zhanghao Hu
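As a loose illustration of metric-driven model selection, the sketch below records the metric of the model that was just trained and then picks the next model family to try. The candidate list and the "explore an untried family first, otherwise refine the best-performing one" policy are assumptions, not anything stated in the patent.

```python
# Illustrative sketch only. The candidate families and the explore-then-refine
# selection policy are assumptions.
def select_next_model(current_model, current_metric, history, candidates):
    """Record the metric just observed, then pick the next model family to train."""
    history.setdefault(current_model, []).append(current_metric)
    untried = [c for c in candidates if c not in history]
    if untried:
        return untried[0]  # no past metrics for this family yet, so explore it
    # Otherwise keep iterating on whichever family has the best past metric.
    return max(history, key=lambda name: max(history[name]))

past_metrics = {"logistic_regression": [0.78, 0.79]}
candidates = ["logistic_regression", "random_forest", "gradient_boosting"]

print(select_next_model("random_forest", 0.84, past_metrics, candidates))
# -> "gradient_boosting" (the only family not yet tried on this data)
```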
-
Patent number: 11568317
Abstract: Methods and systems are presented for generating a machine learning model using enhanced gradient boosting techniques. The machine learning model is configured to receive inputs corresponding to a set of features and to produce an output based on the inputs. The machine learning model includes multiple layers, wherein each layer includes multiple models. To generate the machine learning model, multiple models are built and trained in parallel for each layer of the machine learning model. The multiple models use different subsets of features to produce corresponding output values. After a layer is built and trained, a collective error may be determined for the layer based on the output values from the different models in the layer. An additional layer of models may be added to the machine learning model to reduce the collective error of a previous layer.
Type: Grant
Filed: May 21, 2020
Date of Patent: January 31, 2023
Assignee: PayPal, Inc.
Inventors: Zhanghao Hu, Fangbo Tu, Xuyao Hao, Yanzan Zhou
-
Publication number: 20210365832
Abstract: Methods and systems are presented for generating a machine learning model using enhanced gradient boosting techniques. The machine learning model is configured to receive inputs corresponding to a set of features and to produce an output based on the inputs. The machine learning model includes multiple layers, wherein each layer includes multiple models. To generate the machine learning model, multiple models are built and trained in parallel for each layer of the machine learning model. The multiple models use different subsets of features to produce corresponding output values. After a layer is built and trained, a collective error may be determined for the layer based on the output values from the different models in the layer. An additional layer of models may be added to the machine learning model to reduce the collective error of a previous layer.
Type: Application
Filed: May 21, 2020
Publication date: November 25, 2021
Inventors: Zhanghao Hu, Fangbo Tu, Xuyao Hao, Yanzan Zhou
-
Publication number: 20210201206
Abstract: A method includes training a first machine learning model based on a set of training data and, based on the training, determining a first performance metric corresponding to the first machine learning model. The method also includes determining one or more past performance metrics corresponding to one or more machine learning models that were previously trained based on the set of training data. Based on the first performance metric and the one or more past performance metrics, the method includes automatically selecting a second machine learning model to train based on the set of training data.
Type: Application
Filed: December 31, 2019
Publication date: July 1, 2021
Inventors: Lichao Liu, Xuyao Hao, Zhanghao Hu
-
Publication number: 20210201207
Abstract: A method includes receiving a set of training data and selecting a first machine learning platform based on a first optimization function that metrics past machine learning platforms used for training on the set of training data. The method also includes selecting a first algorithm supported by the first machine learning platform based on a second optimization function that metrics past algorithms used for training on the set of training data. Further, the method includes determining one or more hyperparameters supported by the first algorithm based on a third optimization function that metrics past combinations of hyperparameters from the set of hyperparameters used for training on the set of training data. The method also includes training a machine learning model on the set of training data using the first machine learning platform, the first algorithm, and the one or more hyperparameters.
Type: Application
Filed: December 31, 2019
Publication date: July 1, 2021
Inventors: Lichao Liu, Xuyao Hao, Zhanghao Hu
-
Publication number: 20200311614
Abstract: Machine learning often uses ensemble classifiers, such as random forest or gradient boosting tree classifiers, to solve problems. One issue with such classifiers is that they may be prone to data overfitting, which can cause the classifier to perform relatively worse when dealing with data outside of its training set. One technique for avoiding overfitting is using random dropout on decision trees in the ensemble classifier (e.g. drop three percent of all decision trees to create a final classifier). However, random dropout can be improved upon. Penalty-based dropout can assess the performance of individual trees using a validation data set (which may be separate from the training set). Instead of dropping trees at random, some of the worst-performing trees can be dropped, leading to better overall performance.
Type: Application
Filed: March 29, 2019
Publication date: October 1, 2020
Inventors: Lichao Liu, Zhanghao Hu
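The abstract is specific about the idea (score each tree on a validation set and drop the worst performers instead of a random subset) but not about an implementation. The sketch below applies that idea to a scikit-learn random forest; the accuracy-based per-tree score, the three percent drop rate (borrowed from the abstract's example), and majority voting over the surviving trees are otherwise assumptions.

```python
# Illustrative sketch only: penalty-based dropout on a random forest, judged on
# a held-out validation set, in place of random dropout. The per-tree accuracy
# score and the 3% drop rate are assumptions drawn from the abstract's example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# Score every tree individually on the validation set.
tree_scores = np.array([tree.score(X_val, y_val) for tree in forest.estimators_])

# Penalty-based dropout: drop the worst-scoring 3% of trees and keep the rest.
n_drop = int(0.03 * len(forest.estimators_))
keep = np.argsort(tree_scores)[n_drop:]
kept_trees = [forest.estimators_[i] for i in keep]

# Majority vote over the surviving trees forms the final classifier.
votes = np.stack([t.predict(X_val) for t in kept_trees])
pruned_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("full forest:", forest.score(X_val, y_val),
      "pruned forest:", (pruned_pred == y_val).mean())
```

In a real setting the validation set used for the per-tree penalty would be kept separate from both the training set and any final test set, as the abstract notes.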