Patents by Inventor Yixing XU

Yixing XU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240005164
    Abstract: A neural network training method includes performing, in a forward propagation process, binarization processing on a target weight by using a binarization function, and using data obtained through the binarization processing as a weight of a first neural network layer in a neural network; and calculating, in a backward propagation process, a gradient of a loss function with respect to the target weight by using a gradient of a fitting function as a gradient of the binarization function.
    Type: Application
    Filed: July 31, 2023
    Publication date: January 4, 2024
    Inventors: Yixing Xu, Kai Han, Yehui Tang, Yunhe Wang, Chunjing Xu
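The forward/backward split this abstract describes can be sketched as follows. This is not code from the patent: the sign-based binarization and the tanh fitting function are illustrative assumptions, written in NumPy:

```python
import numpy as np

def binarize(w):
    """Forward pass: binarization function (sign) applied to the target weight."""
    return np.where(w >= 0, 1.0, -1.0)

def fitting_grad(w, beta=1.0):
    """Backward pass: gradient of a smooth fitting function (here an
    illustrative tanh(beta * w) surrogate), used in place of the
    almost-everywhere-zero gradient of the sign function."""
    return beta * (1.0 - np.tanh(beta * w) ** 2)

w = np.array([-0.7, 0.1, 0.4])          # real-valued target weight
w_bin = binarize(w)                     # weight used by the layer
upstream = np.array([0.5, -1.0, 2.0])   # dL/d(w_bin) from backpropagation
grad_w = upstream * fitting_grad(w)     # dL/dw via the surrogate gradient
```

In the backward pass, the gradient of the loss with respect to the real-valued weight flows through the smooth surrogate rather than the non-differentiable binarization function.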
  • Publication number: 20230401446
    Abstract: Embodiments of this application disclose a convolutional neural network pruning processing method, a data processing method, and a device, which may be applied to the field of artificial intelligence. The convolutional neural network pruning processing method includes: performing sparse training on a convolutional neural network by using a constructed objective loss function, where the objective loss function may include three sub-loss functions.
    Type: Application
    Filed: August 25, 2023
    Publication date: December 14, 2023
    Inventors: Yehui TANG, Yixing XU, Yunhe WANG, Chunjing XU
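A minimal sketch of an objective loss composed of three sub-loss functions, as the abstract describes. The specific choice of penalties (a task loss, an L1 sparsity term on per-channel scaling factors, and a sparsity-budget term) is an illustrative assumption, not taken from the patent:

```python
import numpy as np

def pruning_objective(task_loss, gammas, target_sparsity,
                      lambda1=1e-4, lambda2=1e-2):
    """Illustrative objective with three sub-losses: the task loss,
    an L1 penalty pushing per-channel scaling factors `gammas` toward
    zero (channels near zero can be pruned), and a penalty keeping the
    fraction of pruned channels near a target budget."""
    sparsity_loss = lambda1 * np.sum(np.abs(gammas))
    current_sparsity = np.mean(np.abs(gammas) < 1e-3)
    budget_loss = lambda2 * (current_sparsity - target_sparsity) ** 2
    return task_loss + sparsity_loss + budget_loss
```

During sparse training the network weights and the scaling factors are optimized jointly against this objective; channels whose factors collapse to zero are removed afterward.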
  • Publication number: 20230153615
    Abstract: The technology of this application relates to a neural network distillation method in the field of artificial intelligence. The method includes processing to-be-processed data by using a first neural network and a second neural network to obtain a first target output and a second target output, where the first target output is obtained by performing kernel function-based transformation on an output of a first neural network layer, and the second target output is obtained by performing kernel function-based transformation on an output of a second neural network layer. The method further includes performing knowledge distillation on the first neural network based on a target loss constructed by using the first target output and the second target output.
    Type: Application
    Filed: December 28, 2022
    Publication date: May 18, 2023
    Inventors: Yixing XU, Xinghao CHEN, Yunhe WANG, Chunjing XU
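The kernel function-based transformation and the target loss built from the two transformed outputs can be sketched as below. The RBF kernel against a shared set of anchor vectors is an illustrative assumption; the patent does not specify this kernel:

```python
import numpy as np

def kernel_transform(feats, anchors, gamma=1.0):
    """Kernel function-based transformation of a layer output:
    RBF similarity of each feature vector to shared anchor vectors."""
    d2 = ((feats[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def distill_loss(student_feats, teacher_feats, anchors):
    """Target loss constructed from the two kernel-transformed outputs."""
    s = kernel_transform(student_feats, anchors)   # first target output
    t = kernel_transform(teacher_feats, anchors)   # second target output
    return np.mean((s - t) ** 2)
```

Matching the outputs in kernel space, rather than matching raw features directly, lets the student mimic the teacher's similarity structure.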
  • Publication number: 20220327363
    Abstract: A neural network training method in the artificial intelligence field includes: inputting training data into a neural network; determining a first input space of a second target layer in the neural network based on a first output space of a first target layer in the neural network; and inputting a feature vector in the first input space into the second target layer. When a feature vector in the first input space is input into the second target layer, the neural network's capability of fitting random noise is lower than it would be if a feature vector in the first output space were input into the second target layer directly. This application helps avoid the overfitting that can occur when the neural network processes an image, text, or speech.
    Type: Application
    Filed: June 23, 2022
    Publication date: October 13, 2022
    Inventors: Yixing Xu, Yehui Tang, Li Qian, Yunhe Wang, Chunjing Xu
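One plausible reading of "determining a first input space ... based on a first output space" is a projection that shrinks the space seen by the second target layer, reducing the capacity available for fitting random noise. The PCA-style projection below is an illustrative assumption, not the patent's construction:

```python
import numpy as np

def restrict_input_space(outputs, k):
    """Project the first target layer's outputs onto their top-k
    principal directions, yielding a lower-capacity first input space
    for the second target layer (illustrative interpretation)."""
    mean = outputs.mean(0)
    centered = outputs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                              # top-k directions
    return centered @ basis.T @ basis + mean    # projected feature vectors

feats = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0],
                  [1.0, 1.0, 1.0]])
restricted = restrict_input_space(feats, k=1)
```

Because the projected features span at most k directions, the downstream layer has strictly less freedom to memorize noise than it would with the raw outputs.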
  • Publication number: 20220180199
    Abstract: This application provides a neural network model compression method in the field of artificial intelligence. The method includes: obtaining, by a server, a first neural network model and training data of the first neural network model that are uploaded by user equipment; obtaining a PU classifier based on the training data of the first neural network model and unlabeled data stored in the server; selecting, by using the PU classifier, extended data from the unlabeled data stored in the server, where the extended data has a property and distribution similar to those of the training data of the first neural network model; and training a second neural network model by using a knowledge distillation (KD) method based on the extended data, where the first neural network model is used as a teacher network model and the second neural network model is used as a student network model.
    Type: Application
    Filed: February 25, 2022
    Publication date: June 9, 2022
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yixing XU, Hanting CHEN, Kai HAN, Yunhe WANG, Chunjing XU
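The selection step can be sketched as follows. A real PU (positive-unlabeled) classifier would be trained on the uploaded training data (positives) versus the server's unlabeled pool; this toy stand-in instead scores unlabeled samples by proximity to the positives' centroid, which is purely an illustrative assumption:

```python
import numpy as np

def select_extended_data(positive, unlabeled, threshold=0.5):
    """Toy stand-in for the PU classifier: score each unlabeled sample by
    similarity to the positive samples' centroid and keep high scorers as
    extended data with a similar distribution (illustrative only)."""
    centroid = positive.mean(0)
    dist = np.linalg.norm(unlabeled - centroid, axis=1)
    scores = np.exp(-dist)               # closer -> higher score
    return unlabeled[scores >= threshold]
```

The selected extended data then drives standard knowledge distillation, with the uploaded first model as the teacher and the compressed second model as the student.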
  • Publication number: 20210312261
    Abstract: The present application discloses a neural network search method in the field of artificial intelligence. The neural network search method includes: obtaining a feature tensor of each of a plurality of neural networks, where the feature tensor of each neural network is used to represent a computing capability of the neural network; inputting the feature tensor of each of the plurality of neural networks into an accuracy prediction model for calculation, to obtain the accuracy of each neural network, where the accuracy prediction model is obtained through training based on a ranking-based loss function; and determining the neural network with the highest predicted accuracy as a target neural network. Embodiments of the present application help improve the accuracy of network structures found through search.
    Type: Application
    Filed: April 1, 2021
    Publication date: October 7, 2021
    Inventors: Yixing XU, Kai HAN, Yunhe WANG, Chunjing XU, Qi TIAN
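A ranking-based loss function for the accuracy prediction model can be sketched as a pairwise hinge loss: the predictor is penalized whenever two architectures' predicted scores disagree with the order of their true accuracies. This particular formulation is an illustrative assumption, not necessarily the patent's:

```python
def ranking_loss(pred, true, margin=0.1):
    """Pairwise ranking loss for the accuracy predictor: for every pair
    whose true accuracies differ, require the predicted scores to be
    ordered the same way with at least `margin` separation."""
    loss, pairs = 0.0, 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if true[i] > true[j]:
                loss += max(0.0, margin - (pred[i] - pred[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

Because only the ordering matters for picking the best architecture, a ranking loss can outperform plain regression of absolute accuracy values; the network whose predicted score is highest is then returned as the target network.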