Patents by Inventor Weituo HAO

Weituo HAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250078814
    Abstract: The present disclosure provides a multi-modal encoder processing method and apparatus, a computer device and a storage medium. The method includes: acquiring a pair of mask samples to be processed, the pair of mask samples including a text sample and an audio sample associated with each other, at least one of the text sample and the audio sample being masked; based on a multi-modal encoder, generating a text encoding feature of the text sample and generating an audio encoding feature of the audio sample, a linear spectrum feature of the audio sample being fused into the text encoding feature, and a linear word feature of the text sample being fused into the audio encoding feature; and predicting the masked information according to the text encoding feature and the audio encoding feature, and correcting the multi-modal encoder based on the accuracy of the predicted mask information.
    Type: Application
    Filed: August 29, 2024
    Publication date: March 6, 2025
    Inventors: Dong Guo, Zihao He, Weituo Hao, Xuchen Song, Zongyu Yin, Jingsong Gao, Wei Tsung Lu, Junyu Dai
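
A minimal sketch of the kind of masked cross-modal objective this abstract (20250078814) describes, not the patented implementation: each modality's encoding is fused with a linear projection of the other modality before the masked positions are predicted. The module names, dimensions, fusion-by-addition, and mask token below are illustrative assumptions, and only the text side is masked in this toy version.

import torch
import torch.nn as nn

class MultiModalEncoderSketch(nn.Module):
    def __init__(self, text_vocab, spec_bins, dim=256):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, dim)
        self.audio_proj = nn.Linear(spec_bins, dim)    # audio frames -> model dim
        self.spec_to_text = nn.Linear(spec_bins, dim)  # "linear spectrum feature" fused into the text encoding
        self.word_to_audio = nn.Linear(dim, dim)       # "linear word feature" fused into the audio encoding
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.audio_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.text_head = nn.Linear(dim, text_vocab)    # predicts the masked text tokens

    def forward(self, text_ids, spec):
        # Fuse a pooled linear spectrum feature into every text position.
        spec_feat = self.spec_to_text(spec).mean(dim=1, keepdim=True)
        text_feat = self.text_enc(self.text_emb(text_ids) + spec_feat)
        # Fuse a pooled linear word feature into every audio frame.
        word_feat = self.word_to_audio(self.text_emb(text_ids).mean(dim=1, keepdim=True))
        audio_feat = self.audio_enc(self.audio_proj(spec) + word_feat)
        return self.text_head(text_feat), audio_feat

model = MultiModalEncoderSketch(text_vocab=1000, spec_bins=80)
text_ids = torch.randint(1, 1000, (2, 16))
spec = torch.randn(2, 100, 80)

# Mask a random subset of text positions (token id 0 stands in for the mask token).
mask = torch.rand(text_ids.shape) < 0.15
logits, _ = model(text_ids.masked_fill(mask, 0), spec)

# Loss only on the masked positions; its gradient is what "corrects" the encoder.
loss = nn.functional.cross_entropy(logits[mask], text_ids[mask])
loss.backward()
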
  • Patent number: 12236370
    Abstract: Methods and devices are provided for performing federated learning. A global model is distributed from a server to a plurality of client devices. At each of the plurality of client devices: model inversion is performed on the global model to generate synthetic data; the global model is trained on an augmented dataset of collected data and the synthetic data to generate a respective client model; and the respective client model is transmitted to the server. At the server: client models are received from the plurality of client devices, where each client model is received from a respective client device of the plurality of client devices; model inversion is performed on each client model to generate a synthetic dataset; the client models are averaged to generate an averaged model; and the averaged model is trained using the synthetic dataset to generate an updated model.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: February 25, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mostafa El-Khamy, Weituo Hao, Jungwon Lee
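
A hypothetical client-side sketch of the procedure in this abstract (patent 12236370): invert the received global model to synthesize data, then train a copy of the model on the local data augmented with those synthetic samples. The input shapes, optimizer settings, and function names are assumptions, not the patented method; the server-side counterpart is sketched after the corresponding application publication 20220058507 below.

import copy
import torch
import torch.nn as nn

def model_inversion(model, num_classes, n_samples=32, steps=100, lr=0.1):
    # Optimize random inputs so the model assigns them randomly chosen labels.
    x = torch.randn(n_samples, 1, 28, 28, requires_grad=True)
    y = torch.randint(0, num_classes, (n_samples,))
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return x.detach(), y

def client_update(global_model, local_data, num_classes, epochs=1):
    # Train a copy of the global model on local data augmented with the synthetic data.
    syn_x, syn_y = model_inversion(global_model, num_classes)
    client_model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(client_model.parameters(), lr=0.01)
    augmented = list(local_data) + list(zip(syn_x, syn_y))
    for _ in range(epochs):
        for x, y in augmented:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(client_model(x.unsqueeze(0)),
                                               torch.as_tensor(y).view(1))
            loss.backward()
            opt.step()
    return client_model  # transmitted back to the server for aggregation

# Example with a tiny classifier and two assumed local samples.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
local = [(torch.randn(1, 28, 28), torch.tensor(3)), (torch.randn(1, 28, 28), torch.tensor(7))]
updated_client_model = client_update(net, local, num_classes=10)
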
  • Publication number: 20250054474
    Abstract: Embodiments of the present disclosure provide an audio processing method and apparatus, an electronic device and a storage medium, wherein the method comprises: obtaining first music data and a processing instruction in text form associated with the first music data; extracting, by a music processing model, a first chord progression feature and an audio feature of the first music data, and a text feature of the processing instruction; processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data; wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold.
    Type: Application
    Filed: August 7, 2024
    Publication date: February 13, 2025
    Inventors: Junyu DAI, Bing HAN, Xuchen SONG, Weituo HAO, Xinyan HE, Dong GUO
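
A rough sketch, under assumed architecture choices, of the flow this abstract (20250054474) describes: extract a chord-progression feature, an audio feature, and a text feature of the instruction, condition generation on all three, and check that the output's chord progression stays close to the input's. The encoders, the fusion by concatenation, and the 0.9 threshold are illustrative, not drawn from the application.

import torch
import torch.nn as nn

class MusicEditSketch(nn.Module):
    def __init__(self, dim=128, text_vocab=1000, spec_bins=80):
        super().__init__()
        self.chord_enc = nn.GRU(spec_bins, dim, batch_first=True)   # first chord progression feature
        self.audio_enc = nn.GRU(spec_bins, dim, batch_first=True)   # audio feature
        self.text_enc = nn.EmbeddingBag(text_vocab, dim)            # text feature of the instruction
        self.decoder = nn.Linear(3 * dim, spec_bins)                # emits edited audio frames

    def forward(self, audio, instruction_ids):
        _, chord = self.chord_enc(audio)
        audio_seq, _ = self.audio_enc(audio)
        text = self.text_enc(instruction_ids)
        # Condition every output frame on the chord, audio, and text features.
        cond = torch.cat([chord.squeeze(0).unsqueeze(1).expand_as(audio_seq),
                          audio_seq,
                          text.unsqueeze(1).expand(-1, audio_seq.size(1), -1)], dim=-1)
        return self.decoder(cond)   # second music data (frame features)

def chord_similarity(model, a, b):
    # Cosine similarity between the chord-progression features of two clips.
    _, ca = model.chord_enc(a)
    _, cb = model.chord_enc(b)
    return nn.functional.cosine_similarity(ca.flatten(1), cb.flatten(1)).mean()

model = MusicEditSketch()
first_music = torch.randn(1, 200, 80)            # first music data (assumed spectrogram frames)
instruction = torch.randint(0, 1000, (1, 8))     # processing instruction in text form (token ids)
second_music = model(first_music, instruction)
print(chord_similarity(model, first_music, second_music) > 0.9)   # assumed similarity threshold
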
  • Publication number: 20220058507
    Abstract: Methods and devices are provided for performing federated learning. A global model is distributed from a server to a plurality of client devices. At each of the plurality of client devices: model inversion is performed on the global model to generate synthetic data; the global model is trained on an augmented dataset of collected data and the synthetic data to generate a respective client model; and the respective client model is transmitted to the server. At the server: client models are received from the plurality of client devices, where each client model is received from a respective client device of the plurality of client devices; model inversion is performed on each client model to generate a synthetic dataset; the client models are averaged to generate an averaged model; and the averaged model is trained using the synthetic dataset to generate an updated model.
    Type: Application
    Filed: February 19, 2021
    Publication date: February 24, 2022
    Inventors: Mostafa El-Khamy, Weituo Hao, Jungwon Lee
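
This publication shares its abstract with granted patent 12236370 above; complementing the client-side sketch given there, below is a hypothetical server-side step: average the received client models, invert each one to build a synthetic dataset, and fine-tune the averaged model on it. Here invert_fn stands in for a model-inversion routine like the one sketched earlier, and all names and settings are assumptions.

import copy
import torch
import torch.nn as nn

def average_models(client_models):
    # Parameter-wise average of the received client models (FedAvg-style).
    avg = copy.deepcopy(client_models[0])
    with torch.no_grad():
        for name, p in avg.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in client_models])
            p.copy_(stacked.mean(dim=0))
    return avg

def server_update(client_models, num_classes, invert_fn, epochs=1):
    # Invert each client model into a synthetic batch, then train the averaged model on them.
    synthetic = [invert_fn(m, num_classes) for m in client_models]
    avg_model = average_models(client_models)
    opt = torch.optim.SGD(avg_model.parameters(), lr=0.01)
    for _ in range(epochs):
        for x, y in synthetic:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(avg_model(x), y)
            loss.backward()
            opt.step()
    return avg_model   # the updated global model distributed in the next round
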
  • Publication number: 20210374608
    Abstract: A federated machine-learning system includes a global server and client devices. The server receives updates of weight factor dictionaries and factor strengths vectors from the clients, and generates a globally updated weight factor dictionary and a globally updated factor strengths vector. A client device selects a group of parameters from a global group of parameters, and trains a model using a dataset of the client device and the group of selected parameters. The client device sends to the server a client-updated weight factor dictionary and a client-updated factor strengths vector. The client device receives the globally updated weight factor dictionary and the globally updated factor strengths vector, and retrains the model using the dataset of the client device, the group of parameters selected by the client device, and the globally updated weight factor dictionary and the globally updated factor strengths vector.
    Type: Application
    Filed: January 13, 2021
    Publication date: December 2, 2021
    Inventors: Mostafa EL-KHAMY, Jungwon LEE, Weituo HAO, Lawrence CARIN, Nikhil MEHTA, Kevin J. LIANG
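
A deliberately simplified, linear-model sketch of the factorized scheme this abstract (20210374608) describes: each client composes its weights as a strength-weighted sum of dictionary factors it selects from the global group, updates the selected factors and strengths on its local data, and the server averages the returned dictionaries and strength vectors. The gradient updates, the averaging rule, and all names below are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def compose_weights(dictionary, strengths, selected):
    # W is the strength-weighted sum of the client-selected weight factors.
    return sum(strengths[k] * dictionary[k] for k in selected)

def client_round(global_dict, global_strengths, local_x, local_y, selected, lr=0.01, steps=100):
    D, s = global_dict.copy(), global_strengths.copy()
    scale = 1.0 / len(local_x)
    for _ in range(steps):
        W = compose_weights(D, s, selected)
        err = local_x @ W - local_y                 # residual of a simple linear model on local data
        for k in selected:                          # gradient steps on the selected strengths and factors
            s[k] -= lr * scale * float(np.sum(err * (local_x @ D[k])))
            D[k] -= lr * scale * s[k] * (local_x.T @ err)
    return D, s                                     # client-updated dictionary and factor strengths vector

def server_aggregate(client_dicts, client_strengths):
    # Globally updated weight factor dictionary and factor strengths vector (simple averaging).
    return np.mean(client_dicts, axis=0), np.mean(client_strengths, axis=0)

# One round with two clients, each selecting its own group of factors from the global group.
num_factors, dim = 4, 3
global_D = rng.normal(size=(num_factors, dim, 1))
global_s = np.ones(num_factors)
results = []
for selected in ([0, 1], [1, 2, 3]):
    x = rng.normal(size=(20, dim))
    y = x @ rng.normal(size=(dim, 1))
    results.append(client_round(global_D, global_s, x, y, selected))
new_D, new_s = server_aggregate([D for D, _ in results], [s for _, s in results])
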