Patents by Inventor Boyan ZHOU

Boyan ZHOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160925
    Abstract: A method, apparatus, device, and medium are provided for determining an update gradient for a contrastive learning model. In the method, a gradient factor of a first type for the contrastive learning model is determined based on a first group of training data and a second group of training data for training the model. The gradient factor of the first type is not used for backpropagation during the training process. In a first stage of the training process, a gradient factor of a second type associated with the first group of training data is determined based on the contrastive learning model; the gradient factor of the second type is used for backpropagation during the training process. A gradient for updating the contrastive learning model is then obtained based on the gradient factor of the first type and the gradient factor of the second type associated with the first group of training data.
    Type: Application
    Filed: September 22, 2023
    Publication date: May 16, 2024
    Inventors: Hao Wu, Yu Guo, Quan Cui, Boyan Zhou, Cheng Yang
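
The abstract distinguishes a gradient factor that is excluded from backpropagation from one that carries the gradient. One illustrative reading (an assumption here, not the patent's implementation) is the standard InfoNCE decomposition, where the softmax similarity weights are treated as constants (first type) and the gradient flows only through the embeddings of the first group (second type). The function name `infonce_update_gradient` is hypothetical:

```python
import numpy as np

def infonce_update_gradient(z1, z2, tau=0.1):
    """Sketch: split the contrastive update gradient into two factors.

    z1, z2: (N, D) L2-normalised embeddings of two groups of training data.
    The softmax weights p are computed from both groups but treated as
    constants (no backprop through them); the embeddings z2 supply the
    factor through which the gradient actually flows.
    """
    logits = z1 @ z2.T / tau                      # (N, N) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # factor of the first type (detached)
    y = np.eye(len(z1))                           # positives on the diagonal
    grad_z1 = (p - y) @ z2 / tau                  # combined update gradient w.r.t. z1
    return grad_z1
```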
  • Publication number: 20240152760
    Abstract: A method of training and applying a contrastive learning model. The method includes obtaining a sample set and label information for training the contrastive learning model, the sample set including a plurality of first samples of a first modality and a plurality of second samples of a second modality, the label information indicating a correlation between samples of the plurality of first samples and samples of the plurality of second samples; determining whether sample mixing is to be performed on the first modality or the second modality; in accordance with a determination that sample mixing is to be performed on the first modality, generating at least one first mixed sample of the first modality by mixing at least one pair of first samples among the plurality of first samples; and training the contrastive learning model at least based on the at least one first mixed sample and first mixed label information.
    Type: Application
    Filed: September 22, 2023
    Publication date: May 9, 2024
    Inventors: Hao Wu, Quan Cui, Boyan Zhou, Cheng Yang
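
A minimal sketch of the mixing step, assuming a mixup-style convex combination of sample pairs and of their correlation labels (the Beta-distributed mixing coefficient and the function name `mix_modality` are illustrative assumptions, not details from the patent):

```python
import numpy as np

def mix_modality(samples, alpha=0.2, rng=None):
    """Generate mixed samples of one modality plus mixed label information.

    samples: (N, D) array of samples from the modality chosen for mixing.
    Returns mixed samples and an (N, N) mixed-label matrix in which row i
    is correlated with targets i and perm[i] in the other modality.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(samples)
    perm = rng.permutation(n)                      # pairing of samples to mix
    lam = rng.beta(alpha, alpha)                   # mixing coefficient
    mixed = lam * samples + (1 - lam) * samples[perm]
    labels = lam * np.eye(n) + (1 - lam) * np.eye(n)[perm]
    return mixed, labels
```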
  • Publication number: 20240144007
    Abstract: A method of contrastive learning comprises: determining, based on a model construction criterion, a first encoder for a first modality and a second encoder for a second modality; constructing a first contrastive learning model, the first contrastive learning model comprising the first encoder and a third encoder for the second modality, and a model capacity of the third encoder being greater than a model capacity of the second encoder; performing pre-training of the first contrastive learning model based on a first training dataset for the first modality and the second modality; and providing the pre-trained first encoder in the pre-trained first contrastive learning model for a downstream task. Because only the model capacity of one encoder is increased in the pre-training stage, model performance may be improved without increasing model training overhead during downstream task fine-tuning and model running overhead during model application.
    Type: Application
    Filed: September 22, 2023
    Publication date: May 2, 2024
    Inventors: Hao Wu, Boyan Zhou, Quan Cui, Cheng Yang
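
The asymmetry described above can be sketched with toy encoders whose capacity is just a hidden width: the first-modality encoder that will serve the downstream task is paired, during pre-training only, with a third encoder larger than the baseline second encoder. All class and variable names below are illustrative assumptions:

```python
import numpy as np

class MLPEncoder:
    """Toy two-layer encoder; capacity is controlled by the hidden width."""
    def __init__(self, d_in, d_hidden, d_out, rng):
        self.w1 = rng.normal(size=(d_in, d_hidden)) / np.sqrt(d_in)
        self.w2 = rng.normal(size=(d_hidden, d_out)) / np.sqrt(d_hidden)

    def n_params(self):
        return self.w1.size + self.w2.size

    def __call__(self, x):
        return np.tanh(x @ self.w1) @ self.w2

rng = np.random.default_rng(0)
# First encoder (first modality): kept and fine-tuned for the downstream task.
first_enc = MLPEncoder(64, 128, 32, rng)
# Second encoder (second modality): the baseline capacity from the criterion.
second_enc = MLPEncoder(64, 128, 32, rng)
# Third encoder (second modality): greater capacity, used in pre-training only,
# so downstream fine-tuning and deployment cost is unchanged.
third_enc = MLPEncoder(64, 512, 32, rng)
```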
  • Publication number: 20240144100
    Abstract: Methods, apparatuses, a device, and a medium for training a contrastive learning model are provided. In a method, a plurality of sample sets for training the contrastive learning model are obtained, the plurality of sample sets comprising a first sample set and a second sample set. A first target sample set is selected from the first sample set and the second sample set according to a predetermined rule. A first set of samples is determined based on the first target sample set according to a predefined batch size. The contrastive learning model is trained using the first set of samples. In this way, on the one hand, performance degradation of the contrastive learning model due to sample set bias may be avoided; on the other hand, a forgetting problem in the training process may be alleviated.
    Type: Application
    Filed: October 27, 2023
    Publication date: May 2, 2024
    Inventors: Hao Wu, Boyan Zhou, Quan Cui, Cheng Yang
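
A sketch of the batching scheme above: one sample set is selected per step and the whole batch is drawn from it alone, so no batch mixes sets. The patent leaves the "predetermined rule" unspecified; size-proportional selection is used here purely as an assumption, and `draw_batch` is a hypothetical name:

```python
import numpy as np

def draw_batch(sample_sets, batch_size, rng):
    """Select one target sample set, then draw a full batch from it only.

    sample_sets: list of (N_k, D) arrays. Selection is proportional to set
    size (an assumed rule); the returned batch never mixes sample sets.
    """
    sizes = np.array([len(s) for s in sample_sets], dtype=float)
    k = rng.choice(len(sample_sets), p=sizes / sizes.sum())  # target set
    idx = rng.choice(len(sample_sets[k]), size=batch_size, replace=False)
    return sample_sets[k][idx], k
```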
  • Publication number: 20220092306
    Abstract: A cloud platform-based garlic crop recognition method that couples active and passive remote sensing images includes: firstly, obtaining an optical satellite remote sensing image based on phenological characteristics of garlic, and constructing a decision tree model for optical image recognition of the garlic by combining geographic coordinate information of the garlic, so as to obtain an optical distribution diagram of the garlic; secondly, obtaining radar image characteristics of the garlic and winter wheat based on a synthetic aperture radar satellite, and constructing a decision tree model for radar image recognition of the garlic by combining the geographic coordinate information of the garlic, so as to obtain a radar distribution diagram of the garlic; and finally, coupling the optical distribution diagram of the garlic with the radar distribution diagram of the garlic, i.e., selecting the intersection of the two distribution diagrams to complete remote sensing recognition mapping of the garlic.
    Type: Application
    Filed: November 22, 2021
    Publication date: March 24, 2022
    Applicant: Henan University
    Inventors: Haifeng TIAN, Yaochen QIN, Wei SHEN, Boyan ZHOU, Yongjiu WANG
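
The coupling step above can be sketched as the per-pixel intersection of two masks, one from an optical rule and one from a radar rule. The single-threshold rules and the threshold values below are illustrative assumptions standing in for the patent's decision-tree models:

```python
import numpy as np

def garlic_map(ndvi, vh_backscatter, ndvi_thresh=0.4, vh_thresh=-15.0):
    """Couple optical and radar classifications of garlic pixels.

    ndvi: optical vegetation-index array (stand-in for the optical
    decision tree); vh_backscatter: SAR backscatter array in dB
    (stand-in for the radar decision tree). Thresholds are illustrative.
    Returns the intersection of the two boolean distribution maps.
    """
    optical_mask = ndvi > ndvi_thresh          # optical distribution diagram
    radar_mask = vh_backscatter > vh_thresh    # radar distribution diagram
    return optical_mask & radar_mask           # final coupled garlic map
```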