Patents by Inventor Beichen MA

Beichen MA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037410
    Abstract: A method for model aggregation in federated learning (FL), a server, a device, and a storage medium are provided, relating to the field of artificial intelligence (AI) technologies such as machine learning. A specific implementation solution involves: acquiring a data non-identically-and-independently-distributed (Non-IID) degree value for each of a plurality of edge devices participating in FL; acquiring local models uploaded by the edge devices; and performing aggregation based on the data Non-IID degree values and the uploaded local models to obtain a global model.
    Type: Application
    Filed: February 13, 2023
    Publication date: February 1, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ji LIU, Beichen MA, Dejing DOU
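The aggregation step this abstract describes can be sketched as follows. The inverse weighting by Non-IID degree is an assumption made for illustration; the abstract does not disclose the actual weighting formula, and `aggregate` is a hypothetical helper name:

```python
import numpy as np

def aggregate(local_models, non_iid_degrees):
    """Combine local model weights into a global model.

    Devices whose data is closer to IID (lower Non-IID degree value)
    receive larger aggregation weights; the inverse weighting below is
    illustrative, not the patented rule.
    """
    degrees = np.asarray(non_iid_degrees, dtype=float)
    weights = 1.0 / (1.0 + degrees)   # hypothetical inverse weighting
    weights /= weights.sum()          # normalize so weights sum to 1
    return sum(w * m for w, m in zip(weights, local_models))
```

With equal Non-IID degrees this reduces to a plain FedAvg-style mean of the uploaded models.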
  • Publication number: 20230206075
    Abstract: A method for distributing network layers in a neural network model includes: acquiring a to-be-processed neural network model and a computing device set; generating a target number of distribution schemes according to the network layers in the to-be-processed neural network model and the computing devices in the computing device set, each distribution scheme including correspondences between the network layers and the computing devices; combining, according to the device types of the computing devices, the network layers corresponding to the same device type in each distribution scheme into one stage, to obtain a combination result for each distribution scheme; obtaining an adaptive value of each distribution scheme according to its combination result; and determining a target distribution scheme from the distribution schemes according to their adaptive values, taking the target distribution scheme as the distribution result of the network layers in the to-be-processed neural network model.
    Type: Application
    Filed: November 21, 2022
    Publication date: June 29, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ji LIU, Zhihua WU, Danlei FENG, Minxu ZHANG, Xinxuan WU, Xuefeng YAO, Beichen MA, Dejing DOU, Dianhai YU, Yanjun MA
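The stage-combination and selection steps above can be sketched like this. Representing a scheme as a list of `(layer, device_type)` pairs, and scoring schemes by their negative stage count, are illustrative choices; the patent does not disclose its adaptive-value function, and the function names are hypothetical:

```python
from itertools import groupby

def merge_into_stages(scheme):
    """Combine consecutive layers assigned to the same device type into
    one stage. `scheme` is a list of (layer_name, device_type) pairs."""
    return [[layer for layer, _ in run]
            for _, run in groupby(scheme, key=lambda pair: pair[1])]

def adaptive_value(scheme):
    # Fewer stages mean fewer cross-device handoffs; using the negative
    # stage count as the adaptive value is an illustrative stand-in.
    return -len(merge_into_stages(scheme))

def pick_target_scheme(schemes):
    """Choose the distribution scheme with the best adaptive value."""
    return max(schemes, key=adaptive_value)
```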
  • Publication number: 20220391780
    Abstract: The present disclosure provides a method of federated learning. A specific implementation solution includes: determining, for a current learning period, a target device for each task of at least one learning task to be performed from a plurality of candidate devices, according to resource information of the candidate devices; transmitting a global model for each task to the target device for that task, so that the target device trains the global model; and updating, in response to receiving trained models from all target devices for a task, the global model for that task according to the trained models, so as to complete the current learning period. The present disclosure further provides an electronic device and a storage medium.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 8, 2022
    Inventors: Ji LIU, Beichen MA, Chendi ZHOU, Juncheng JIA, Dejing DOU, Shilei JI, Yuan LIAO
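The per-period device selection described above can be sketched as a simple greedy assignment. Summarizing each device's "plurality of resource information" as a single score, and assigning at most one task per device, are assumptions for illustration; `select_targets` is a hypothetical helper name:

```python
def select_targets(tasks, resources):
    """Pick a target device for each learning task in the current period.

    `resources` maps device name -> a scalar resource score (an
    illustrative stand-in for the richer resource information in the
    abstract). Each device serves at most one task in this sketch, so
    there must be at least as many devices as tasks.
    """
    available = dict(resources)
    targets = {}
    for task in tasks:
        best = max(available, key=available.get)  # greedy: most resources first
        targets[task] = best
        available.pop(best)
    return targets
```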
  • Publication number: 20220374775
    Abstract: A method for multi-task scheduling, a device and a storage medium are provided. The method may include: initializing a list of candidate scheduling schemes, the candidate scheduling scheme being used to allocate a terminal device for training to each machine learning task in a plurality of machine learning tasks; perturbing, for each candidate scheduling scheme in the list of candidate scheduling schemes, the candidate scheduling scheme to generate a new scheduling scheme; determining whether to replace the candidate scheduling scheme with the new scheduling scheme based on a fitness value of the candidate scheduling scheme and a fitness value of the new scheduling scheme, to generate a new scheduling scheme list; and determining a target scheduling scheme, based on the fitness value of each new scheduling scheme in the new scheduling scheme list.
    Type: Application
    Filed: July 18, 2022
    Publication date: November 24, 2022
    Inventors: Ji LIU, Beichen MA, Jingbo ZHOU, Ruipu ZHOU, Dejing DOU
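One round of the perturb-and-compare loop in this abstract can be sketched as local search over scheduling schemes. The greedy acceptance rule (keep the perturbed scheme only when its fitness is at least as good) is an assumption; the patent may use a different replacement criterion, and the function names are hypothetical:

```python
import random

def improve_schemes(schemes, fitness, perturb, rng=random.Random(0)):
    """Perturb each candidate scheduling scheme and keep whichever of the
    pair (original vs. perturbed) has the better fitness value, producing
    a new scheduling scheme list."""
    new_list = []
    for scheme in schemes:
        candidate = perturb(scheme, rng)
        # Greedy acceptance: replace only if fitness does not get worse.
        new_list.append(candidate if fitness(candidate) >= fitness(scheme) else scheme)
    return new_list
```

The target scheduling scheme would then be the fittest member of the final list, e.g. `max(new_list, key=fitness)`.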
  • Publication number: 20220374776
    Abstract: The present disclosure provides a method and apparatus for federated learning, relating to technical fields such as big data and deep learning. A specific implementation is: generating, for each task in a plurality of different tasks trained simultaneously, a global model for the task; receiving resource information of each available terminal in a current available terminal set; selecting a target terminal corresponding to each task from the current available terminal set, based on the resource information and the global model; and training the global model using the target terminal until the trained global model for each task meets a preset condition.
    Type: Application
    Filed: July 19, 2022
    Publication date: November 24, 2022
    Inventors: Ji LIU, Beichen MA, Chendi ZHOU, Jingbo ZHOU, Ruipu ZHOU, Dejing DOU
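The outer training loop in this last abstract, where each task's global model is repeatedly trained on a selected terminal until a preset condition holds, can be sketched as below. The selection, training, and stopping rules are passed in as callables because the abstract does not disclose them; all names here are hypothetical:

```python
def train_until_converged(tasks, init_models, pick_terminal, train_on, converged):
    """Train one global model per task until every model meets the
    caller-supplied preset condition.

    pick_terminal(task, model) -> chosen terminal for this round
    train_on(terminal, model)  -> updated model after local training
    converged(model)           -> True once the preset condition is met
    """
    models = dict(init_models)
    while not all(converged(models[t]) for t in tasks):
        for t in tasks:
            terminal = pick_terminal(t, models[t])
            models[t] = train_on(terminal, models[t])
    return models
```

With a toy "model" that is just a counter incremented by each training round and a condition of reaching 3, the loop runs three rounds and stops.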