Patents by Inventor Yunfeng Shao

Yunfeng Shao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250225405
    Abstract: This application discloses an action prediction method and a related device, and provides a new manner of action prediction. In one example method, after obtaining state information indicating that a first agent and a second agent are in a first state, the first agent may process the state information by using a generative flow model to obtain occurrence probabilities of N actions of the first agent. An ith action in the N actions is used to enable the first agent and the second agent to enter an ith second state from the first state. In this way, the first agent completes action prediction for the first state.
    Type: Application
    Filed: March 27, 2025
    Publication date: July 10, 2025
    Inventors: Yinchuan LI, Yunfeng SHAO
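The flow-based prediction described above can be illustrated with a minimal sketch: a flow function assigns a non-negative score F(s, a) to each of the N candidate actions, and normalizing those scores yields the occurrence probabilities. The closed-form toy flow function and scalar state encoding below are purely illustrative assumptions; the patent does not specify the model's form.

```python
import numpy as np

def action_probabilities(state, flow_fn, n_actions):
    """Score each of N candidate actions with a flow value F(s, a),
    then normalize the non-negative flows into probabilities P(a | s)."""
    flows = np.array([flow_fn(state, a) for a in range(n_actions)])
    return flows / flows.sum()

# Toy flow function standing in for a trained generative flow model:
# actions "closer" to the encoded state receive larger flow.
def toy_flow(state, action):
    return np.exp(-abs(state - action))

probs = action_probabilities(2.0, toy_flow, 5)
```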
  • Publication number: 20250225442
    Abstract: A method for training a generative flow network is provided, applied to the field of artificial intelligence technologies. During training of the generative flow network, for any state of an agent, a plurality of first actions performed in the state and a plurality of second actions that can be transferred to the state are selected from a continuous action space by sampling. Predicted values corresponding to the plurality of first actions and the plurality of second actions are then output by using the generative flow network, and a loss function used to update the generative flow network is obtained through calculation. In this solution, a plurality of actions obtained through sampling approximately represent the continuous action space, and the generative flow network is trained on that basis.
    Type: Application
    Filed: March 25, 2025
    Publication date: July 10, 2025
    Inventors: Yinchuan LI, Yunfeng SHAO
  • Publication number: 20250156726
    Abstract: A federated learning method includes a central node that separately sends a first model to at least one central edge device, receives at least one second model, and aggregates the at least one second model to obtain a fourth model. The at least one central edge device is in one-to-one correspondence with at least one edge device group. The second model is obtained by aggregating a third model respectively obtained by each edge device in at least one edge device group. The third model is obtained by one edge device in collaboration with at least one terminal device in a coverage area through learning the first model based on local data. The edge devices are grouped into edge device groups, and a central edge device in one edge device group sends the first model to each edge device in the edge device group.
    Type: Application
    Filed: January 15, 2025
    Publication date: May 15, 2025
    Inventors: Yunfeng Shao, Bingshuai Li, Jiaxun Lu, Zhenzhe Zheng, Fan Wu, Dahai Hu
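The two-level aggregation described above (edge devices in a group, then central edge devices, then the central node) can be sketched with plain parameter averaging. Uniform FedAvg-style averaging is an assumption here; the abstract does not fix the aggregation rule.

```python
import numpy as np

def aggregate(models, weights=None):
    """Weighted average of model parameter vectors (FedAvg-style)."""
    stacked = np.stack(models)
    if weights is None:
        weights = np.ones(len(models)) / len(models)
    return np.average(stacked, axis=0, weights=weights)

# Two edge device groups; each edge device holds a locally trained "third model".
group_a = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
group_b = [np.array([5.0, 6.0])]

# Each central edge device aggregates its group into a "second model" ...
second_models = [aggregate(group_a), aggregate(group_b)]
# ... and the central node aggregates the second models into the "fourth model".
fourth_model = aggregate(second_models)
```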
  • Publication number: 20250094822
    Abstract: This application discloses a tensor-based continual learning method and apparatus. The method includes: obtaining input data; and inputting the input data into a first neural network to obtain a data processing result. After training of an ith task ends, the neural network includes A tensor cores, the A tensor cores are divided into B tensor layers, and each of the B tensor layers includes data of all of the A tensor cores in a same dimension. In training of an (i+1)th task, C tensor cores and/or D tensor layers are added to the first neural network, and parameters in the C tensor cores and/or parameters at the D tensor layers are updated. According to this application, an anti-forgetting capability of a model can be effectively improved, and an increase in a scale of the model is small, to effectively reduce storage and communication overheads.
    Type: Application
    Filed: November 29, 2024
    Publication date: March 20, 2025
    Inventors: Yinchuan Li, Yunfeng Shao
  • Publication number: 20250068921
    Abstract: A causality determining method relates to the field of artificial intelligence. The method includes: obtaining first information that is obtained by predicting a plurality of variables by a generative flow model and that indicates causality between the plurality of variables; and predicting second information of the plurality of variables based on the first information and by using the generative flow model, where the second information indicates that first causality exists between a first variable and a second variable in the plurality of variables, and the first information indicates that the first causality does not exist between the first variable and the second variable. This reduces computing capability overheads and improves a convergence speed of the model.
    Type: Application
    Filed: November 12, 2024
    Publication date: February 27, 2025
    Inventors: Yinchuan LI, Yunfeng SHAO, Wenqian LI
  • Publication number: 20240394556
    Abstract: A machine learning model training method, a service data processing method, and an apparatus are provided, which are applied to the artificial intelligence field. In a training phase, a cloud server sends a machine learning submodel to an edge server. The edge server performs federated learning with client devices in a management domain of the edge server based on the obtained machine learning submodel, to obtain a trained machine learning submodel, and sends the trained machine learning submodel to the cloud server. The cloud server fuses obtained different trained machine learning submodels, to obtain a machine learning model. According to this application, training efficiency of the machine learning model can be improved. In an inference phase, the client device processes service data by using the trained machine learning submodel. According to this application, prediction efficiency of the machine learning model can be improved.
    Type: Application
    Filed: August 5, 2024
    Publication date: November 28, 2024
    Inventors: Yunfeng Shao, Kaiyang Guo, Jun Wu
  • Publication number: 20240362361
    Abstract: This disclosure provides a user data processing system. A first data processing device in the system generates a first intermediate result, and sends a third intermediate result to a second data processing device. The third intermediate result is obtained from the first intermediate result based on a parameter of a first machine learning model and target historical user data obtained by the first data processing device, and an identifier of the target historical user data is the same as an identifier of historical user data of the second data processing device. The first data processing device further receives a second intermediate result, and updates the parameter of the first machine learning model based on the first intermediate result and the second intermediate result. The second data processing device further updates a parameter of a second machine learning model based on the received third intermediate result and the second intermediate result.
    Type: Application
    Filed: July 4, 2024
    Publication date: October 31, 2024
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yunfeng Shao, Bingshuai Li
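The exchange of intermediate results between parties whose records share the same identifiers is the pattern of vertical federated learning. A minimal unencrypted sketch, assuming a logistic-regression model split across two parties (the patent's actual models and any protection of the intermediate results are abstracted away):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two data processing devices hold DIFFERENT features for the SAME user IDs.
x1, x2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 2))
y = np.array([0.0, 1.0, 1.0, 0.0])
w1, w2 = np.zeros(3), np.zeros(2)

# Each device computes an intermediate result from its own model and data.
z1, z2 = x1 @ w1, x2 @ w2                   # first / second intermediate results
pred = 1.0 / (1.0 + np.exp(-(z1 + z2)))     # joint prediction after exchange

# Each device updates only its own parameters using the exchanged results.
err = pred - y
w1 -= 0.1 * x1.T @ err / len(y)
w2 -= 0.1 * x2.T @ err / len(y)
```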
  • Publication number: 20240255949
    Abstract: An electronic data processor, a reference frame converter, or a transformer is configured to transform or convert the second position in the PPP reference frame into a transformed second position in a universal reference frame. The electronic data processor is configured to assign an identifier to the first position observation and the second position observation, where the identifier indicates whether the observations are associated with a guidance path or a boundary of a field or work area.
    Type: Application
    Filed: August 1, 2023
    Publication date: August 1, 2024
    Inventors: Liwen Dai, Yunfeng Shao, Marlon W. Bright
  • Publication number: 20240211816
    Abstract: A method includes a server delivering a random quantization instruction to a plurality of terminals. The plurality of terminals perform random quantization on training update data based on the random quantization instruction and upload, to the server, training update data on which random quantization has been performed. After aggregating the training update data on which random quantization has been performed, the server may eliminate an additional quantization error introduced by random quantization.
    Type: Application
    Filed: March 6, 2024
    Publication date: June 27, 2024
    Inventors: Yinchuan Li, Yunfeng Shao, Jun Wu
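Eliminating the extra quantization error after aggregation works when the random quantizer is unbiased: each terminal stochastically rounds its update, and the rounding errors average out once the server aggregates many uploads. A sketch, assuming stochastic rounding to a uniform grid (one common choice; the specific quantizer is not given in the abstract):

```python
import numpy as np

def random_quantize(x, levels, rng):
    """Stochastic rounding to a uniform grid: unbiased in expectation,
    so quantization errors cancel when the server averages many values."""
    scaled = x * levels
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part.
    up = rng.random(x.shape) < (scaled - low)
    return (low + up) / levels

rng = np.random.default_rng(0)
update = np.full(10_000, 0.3)                 # identical "training update" values
quantized = random_quantize(update, levels=4, rng=rng)
mean_error = abs(float(quantized.mean()) - 0.3)  # shrinks as values aggregate
```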
  • Publication number: 20240086720
    Abstract: This application provides a federated learning method, apparatus, and system in which a server retrains a received model during federated learning, implementing a degree of depersonalization processing to obtain a model with higher output precision. The method includes: First, a first server receives information about at least one first model sent by at least one downstream device, where the at least one downstream device may include another server or a client connected to the first server; the first server trains the at least one first model to obtain at least one trained first model; and then the first server aggregates the at least one trained first model, and updates a locally stored second model by using an aggregation result, to obtain an updated second model.
    Type: Application
    Filed: November 24, 2023
    Publication date: March 14, 2024
    Inventors: Yinchuan LI, Yunfeng SHAO, Li QIAN
  • Publication number: 20230353347
    Abstract: A first apparatus provides a second apparatus with encrypted label distribution information for the first node, so that the second apparatus calculates an intermediate parameter of a segmentation policy of the second apparatus side based on the encrypted label distribution information, and therefore a gain of the segmentation policy of the second apparatus side can be obtained. A preferred segmentation policy of the first node can also be obtained based on the gain of the segmentation policy of the second apparatus side and a gain of a segmentation policy of the first apparatus side. The encrypted label distribution information includes label data and distribution information, and is in a ciphertext state. The encrypted label distribution information can be used to determine the gain of the segmentation policy without leaking a distribution status of a sample set on the first node.
    Type: Application
    Filed: June 29, 2023
    Publication date: November 2, 2023
    Inventors: Yunfeng Shao, Bingshuai Li, Haibo Tian
  • Publication number: 20230342669
    Abstract: Embodiments of this application provide a machine learning model update method, applied to the field of artificial intelligence. The method includes: A first apparatus generates a first intermediate result based on a first data subset. The first apparatus receives an encrypted second intermediate result sent by a second apparatus, where the second intermediate result is generated based on a second data subset corresponding to the second apparatus. The first apparatus obtains a first gradient of a first model, where the first gradient of the first model is generated based on the first intermediate result and the encrypted second intermediate result. After being decrypted by using a second private key, the first gradient of the first model is for updating the first model, where the second private key is a decryption key generated by the second apparatus for homomorphic encryption.
    Type: Application
    Filed: June 29, 2023
    Publication date: October 26, 2023
    Inventors: Yunfeng Shao, Bingshuai Li, Jun Wu, Haibo Tian
  • Publication number: 20230325722
    Abstract: This application discloses a model training method, and relates to the field of artificial intelligence. The method provided in this application is applicable to a machine learning system. The machine learning system includes a server and at least two client side devices. The method includes: A first client side device receives a first shared model sent by the server; outputs a first prediction result for a data set through the first shared model; obtains a first loss value based on the first prediction result; outputs a second prediction result for the data set through a first private model of the first client side device; obtains a second loss value based on the second prediction result; and performs second combination processing on the first loss value and the second loss value to obtain a third loss value, where the third loss value is used to update the first private model.
    Type: Application
    Filed: June 2, 2023
    Publication date: October 12, 2023
    Inventors: De-Chuan ZHAN, Xinchun LI, Shaoming SONG, Yunfeng SHAO, Bingshuai LI, Li QIAN
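The combination of the shared-model loss and the private-model loss into a third loss can be sketched as a weighted mix; the mean-squared-error loss and the 0.5 weighting below are illustrative assumptions, since the abstract does not specify the loss or the combination rule:

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between predictions and targets."""
    return float(np.mean((pred - target) ** 2))

# One client's labels, plus predictions from the shared and private models.
target = np.array([1.0, 0.0, 1.0])
shared_pred = np.array([0.8, 0.2, 0.9])     # first prediction result
private_pred = np.array([0.6, 0.4, 0.7])    # second prediction result

first_loss = mse(shared_pred, target)       # from the first shared model
second_loss = mse(private_pred, target)     # from the first private model

# "Second combination processing": a weighted mix whose result (the third
# loss) drives the private model's update.
alpha = 0.5
third_loss = alpha * first_loss + (1 - alpha) * second_loss
```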
  • Publication number: 20230237333
    Abstract: A machine learning model training method is applied to a first client, a plurality of clients are communicatively connected to a server, the server stores a plurality of modules, and the plurality of modules are configured to construct at least two machine learning models. The method includes: obtaining a first machine learning model, where at least one first machine learning model is selected based on a data feature of a first training data set stored in the first client; performing a training operation on the at least one first machine learning model by using the first training data set, to obtain at least one trained first machine learning model; and sending at least one updated module to the server, where the updated module is used by the server to update weight parameters of the stored modules.
    Type: Application
    Filed: March 17, 2023
    Publication date: July 27, 2023
    Inventors: Yunfeng SHAO, Shaoming SONG, Wenpeng LI, Kaiyang GUO, Li QIAN
  • Patent number: 11665100
    Abstract: This application provides a data stream identification method and apparatus and belongs to the field of Internet technologies. The method includes: obtaining packet transmission attribute information of N consecutive packets in a target data stream; generating feature images of the packet transmission attribute information of the N consecutive packets based on the packet transmission attribute information of the N consecutive packets; and inputting the feature images into a pre-trained image classification model, to obtain a target application identifier corresponding to the target data stream. According to this application, accuracy of identifying an application identifier corresponding to a data stream can be improved.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: May 30, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ke He, Zhitang Chen, Yunfeng Shao
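Generating feature images from packet transmission attributes can be sketched as packing the per-packet attribute rows of N consecutive packets into a fixed-size 2-D array; the 8×8 image size and the max normalization are assumptions for illustration, not the patent's specification:

```python
import numpy as np

def feature_image(packets, side=8):
    """Arrange per-packet transmission attributes (e.g. size, inter-arrival
    time, direction) of N consecutive packets into a 2-D feature image that
    an image classification model can consume."""
    attrs = np.array(packets, dtype=float)        # shape (N, n_attrs)
    flat = attrs.flatten()
    padded = np.zeros(side * side)
    padded[: len(flat)] = flat[: side * side]
    # Normalize to [0, 1] so the "pixels" share a common scale.
    if padded.max() > 0:
        padded = padded / padded.max()
    return padded.reshape(side, side)

# 20 packets x 3 attributes -> one 8x8 single-channel feature image.
img = feature_image([[1500, 0.01, 1]] * 20)
```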
  • Publication number: 20230116117
    Abstract: A method includes: A second node sends a prior distribution of a parameter in a federated model to at least one first node. After receiving the prior distribution of the parameter in the federated model, the at least one first node performs training based on the prior distribution of the parameter in the federated model and local training data of the first node, to obtain a posterior distribution of a parameter in a local model of the first node. After the local training ends, the at least one first node feeds back the posterior distribution of the parameter in the local model to the second node, so that the second node updates the prior distribution of the parameter in the federated model based on the posterior distribution of the parameter in the local model of the at least one first node.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 13, 2023
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yunfeng Shao, Kaiyang Guo, Vincent Moens, Jun Wang, Chunchun Yang
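One natural instantiation of updating the federated model's prior from the clients' posteriors treats each parameter as Gaussian and takes a precision-weighted average. This is an assumed choice for illustration; the abstract does not commit to a specific update rule.

```python
import numpy as np

def aggregate_posteriors(posteriors):
    """Combine first nodes' Gaussian posteriors, given as (mean, variance)
    pairs, into an updated prior via a precision-weighted average."""
    means = np.array([m for m, _ in posteriors])
    precisions = np.array([1.0 / v for _, v in posteriors])
    new_var = 1.0 / precisions.sum()
    new_mean = new_var * (precisions * means).sum()
    return float(new_mean), float(new_var)

# Two first nodes report posteriors for one scalar parameter;
# the second node fuses them into the new prior.
prior = aggregate_posteriors([(1.0, 0.5), (3.0, 0.5)])
```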
  • Publication number: 20230082173
    Abstract: The technology of this application relates to a training method that includes a first terminal obtaining a to-be-trained first machine learning model from the server. The first terminal is any one of a plurality of terminals. The first terminal trains the first machine learning model by using local data stored by the first terminal, to obtain trained model parameters. The first terminal determines, based on a collaboration relationship, a first collaborative terminal corresponding to the first terminal, and sends a part or all of the trained model parameters of the first terminal to the server by using the first collaborative terminal. The collaboration relationship is delivered by the server to the first terminal. The foregoing manner can improve security of data exchange between the server and the terminal.
    Type: Application
    Filed: November 18, 2022
    Publication date: March 16, 2023
    Inventors: Gang LI, Yunfeng SHAO, Lei ZHANG
  • Patent number: 11455511
    Abstract: A ground environment detection method and apparatus are disclosed, where the method includes: scanning a ground environment by using laser sounding signals having different operating wavelengths, receiving a reflected signal that is reflected back by the ground environment, determining scanning spot information of each scanning spot of the ground environment based on the reflected signal, determining space coordinate information and a laser reflection feature of each scanning spot based on each piece of scanning spot information, partitioning the ground environment into sub-regions having different laser reflection features, and determining a ground environment type of each sub-region.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 27, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Tongtong Cao, Yunfeng Shao, Jun Yao
  • Patent number: 11386350
    Abstract: A method and apparatus are applied to a machine learning system that includes at least one parameter collection group and at least one parameter delivery group. Each parameter collection group corresponds to at least one parameter delivery group. The method includes: when any parameter collection group meets an intra-group combination condition, combining model parameters of M nodes in the parameter collection group to obtain a first model parameter of the parameter collection group, where the smallest quantity s of combination nodes in the parameter collection group ≤ M ≤ the total quantity of nodes included in the parameter collection group; and sending the first model parameter of the parameter collection group to N nodes in a parameter delivery group corresponding to the parameter collection group, where 1 ≤ N ≤ the total quantity of nodes included in that parameter delivery group.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: July 12, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yunfeng Shao, Jun Xu, Masood Mortazavi
  • Patent number: 11373116
    Abstract: Embodiments of the present invention provide a model parameter fusion method and apparatus, which relate to the field of machine learning and are intended to reduce the data transmission amount and implement dynamic adjustment of computing resources during model parameter fusion. The method includes: dividing, by an ith node, a model parameter of the ith node into N blocks, where the ith node is any node of N nodes that participate in a fusion, and 1 ≤ i ≤ N ≤ M; receiving, by the ith node, ith model parameter blocks respectively sent by other nodes of the N nodes than the ith node; fusing, by the ith node, an ith model parameter block of the ith node and the ith model parameter blocks respectively sent by the other nodes, so as to obtain the ith general model parameter block; and distributing, by the ith node, the ith general model parameter block to the other nodes of the N nodes.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: June 28, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Jun Xu, Yunfeng Shao, Xiao Yang, Zheng Yan
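The block-wise scheme above resembles a reduce-scatter: node i collects everyone's ith block, fuses it, and redistributes the result, so nodes exchange only blocks rather than full models. A sketch with uniform averaging as the (assumed) fusion rule:

```python
import numpy as np

def block_fusion(node_params):
    """Each of N nodes splits its parameter vector into N blocks; node i
    fuses (averages) block i from all nodes and distributes the general
    block, so only blocks travel on the network rather than full models."""
    n = len(node_params)
    blocks = [np.array_split(p, n) for p in node_params]    # blocks[node][i]
    fused = [np.mean([blocks[j][i] for j in range(n)], axis=0)
             for i in range(n)]                             # node i fuses block i
    return np.concatenate(fused)

# Two nodes, each holding a 4-element model parameter split into 2 blocks.
params = [np.array([0.0, 2.0, 4.0, 6.0]), np.array([2.0, 4.0, 6.0, 8.0])]
merged = block_fusion(params)
```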