Patents by Inventor Jialiang JIANG

Jialiang JIANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11734521
    Abstract: A method includes: acquiring a bidirectional translation model to be trained and training data, the training data including a source corpus and a target corpus corresponding to the source corpus; training the bidirectional translation model for N cycles, each cycle including a forward translation process that translates the source corpus into a pseudo target corpus and a reverse translation process that translates the pseudo target corpus into a pseudo source corpus, N being a positive integer greater than 1; acquiring a forward translation similarity and a reverse translation similarity; and determining, when the sum of the forward translation similarity and the reverse translation similarity converges, that training of the bidirectional translation model is complete, where the trained bidirectional translation model is then used to perform translation.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 22, 2023
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Jialiang Jiang, Xiang Li, Jianwei Cui
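    The cyclic training described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the translation steps, the similarity function, and the convergence test on the similarity sum are all stand-ins chosen for clarity.

    ```python
    def train_bidirectional(source_corpus, target_corpus, forward_step, reverse_step,
                            similarity, max_cycles=100, tol=1e-4):
        """Run up to max_cycles of forward/reverse translation; stop when the
        sum of forward and reverse similarities stops changing (converges)."""
        prev_total = None
        for cycle in range(max_cycles):
            pseudo_target = forward_step(source_corpus)         # source -> pseudo target
            pseudo_source = reverse_step(pseudo_target)         # pseudo target -> pseudo source
            fwd_sim = similarity(pseudo_target, target_corpus)  # forward translation similarity
            rev_sim = similarity(pseudo_source, source_corpus)  # reverse translation similarity
            total = fwd_sim + rev_sim
            if prev_total is not None and abs(total - prev_total) < tol:
                return cycle + 1, total                         # training considered complete
            prev_total = total
        return max_cycles, prev_total

    # Toy demo with identity "translators": the similarity sum is constant,
    # so convergence is detected on the second cycle.
    src = ["hello world"]
    tgt = ["bonjour monde"]
    sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / len(a)
    cycles, total = train_bidirectional(src, tgt, lambda s: s, lambda t: t, sim)
    ```

    In a real system the forward and reverse steps would be the two directions of the translation model being updated each cycle, and the similarity would be a corpus-level score such as BLEU.
    
    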
  • Patent number: 11556761
    Abstract: A method for compressing a neural network model includes: obtaining a first trained teacher model and a second trained teacher model based on N training samples, N being a positive integer greater than 1; for each of the N training samples, determining a first guide component of the first teacher model and a second guide component of the second teacher model, respectively, and determining, according to the first guide component and the second guide component, a sub-optimization target corresponding to the training sample and configured to optimize a student model; determining a joint optimization target based on each of the N training samples and the sub-optimization target corresponding to the training sample; and training the student model based on the joint optimization target.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: January 17, 2023
    Assignee: Beijing Xiaomi Intelligent Technology Co., Ltd.
    Inventors: Xiang Li, Yuhui Sun, Jingwei Li, Jialiang Jiang
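    The dual-teacher distillation scheme above amounts to summing per-sample sub-targets into one joint target. A rough, hypothetical sketch, assuming scalar models and a simple squared-error sub-target blending the two teachers' guide components (the blend weight `alpha` and all function names are illustrative, not from the patent):

    ```python
    def joint_optimization_target(inputs, teacher_a, teacher_b, student, alpha=0.5):
        """Sum per-sample sub-optimization targets: each sub-target measures the
        student's distance to a blend of the two teachers' guide components."""
        total = 0.0
        for x in inputs:
            g1 = teacher_a(x)  # first guide component
            g2 = teacher_b(x)  # second guide component
            sub = (student(x) - (alpha * g1 + (1 - alpha) * g2)) ** 2
            total += sub       # joint target aggregates all sub-targets
        return total

    # Toy demo: the student already matches the blended teacher guidance,
    # so the joint optimization target is zero.
    loss = joint_optimization_target([1.0, 2.0, 3.0],
                                     teacher_a=lambda x: 2 * x,
                                     teacher_b=lambda x: 0.0,
                                     student=lambda x: x)
    ```

    In practice the guide components would be teacher logits or soft labels, and the joint target would be minimized by gradient descent on the student's parameters.
    
    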
  • Patent number: 11556723
    Abstract: A method for compressing a neural network model includes: obtaining a set of training samples including a plurality of pairs of training samples, each pair of the training samples including source data and target data corresponding to the source data; training an original teacher model by using the source data as an input and using the target data as verification data; training intermediate teacher models based on the set of training samples and the original teacher model, one or more intermediate teacher models forming a set of teacher models; training multiple candidate student models based on the set of training samples, the original teacher model, and the set of teacher models, the multiple candidate student models forming a set of student models; and selecting a candidate student model of the multiple candidate student models as a target student model according to training results of the multiple candidate student models.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: January 17, 2023
    Assignee: Beijing Xiaomi Intelligent Technology Co., Ltd.
    Inventors: Xiang Li, Yuhui Sun, Jialiang Jiang, Jianwei Cui
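    The staged pipeline above (original teacher, then intermediate teachers, then candidate students, then selection) can be outlined as follows. This is a hypothetical orchestration sketch: `train_fn` and `evaluate_fn` are stand-ins for real training and scoring routines, and the signatures are invented for illustration.

    ```python
    def select_student(train_fn, evaluate_fn, samples, student_configs, n_intermediate=2):
        """Train an original teacher, a chain of intermediate teachers, and a set
        of candidate students, then pick the best-scoring candidate."""
        original_teacher = train_fn(samples, guides=[])                # step 1: original teacher
        teachers = [original_teacher]
        for _ in range(n_intermediate):                                # step 2: intermediate teachers,
            teachers.append(train_fn(samples, guides=list(teachers)))  # each guided by those before it
        candidates = [train_fn(samples, guides=list(teachers), config=cfg)
                      for cfg in student_configs]                      # step 3: candidate students
        # step 4: select the target student by its training result
        return max(candidates, key=lambda m: evaluate_fn(m, samples))

    def toy_train(samples, guides, config=None):
        # stand-in "training": model quality grows with guidance and capacity
        return (config or 0) + len(guides)

    best = select_student(toy_train, lambda m, s: m, [], [1, 2, 3])
    ```

    The intermediate teachers act as stepping stones between a large teacher and small students, a role similar to teacher-assistant distillation.
    
    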
  • Publication number: 20210174019
    Abstract: A method includes: acquiring a bidirectional translation model to be trained and training data, the training data including a source corpus and a target corpus corresponding to the source corpus; training the bidirectional translation model for N cycles, each cycle including a forward translation process that translates the source corpus into a pseudo target corpus and a reverse translation process that translates the pseudo target corpus into a pseudo source corpus, N being a positive integer greater than 1; acquiring a forward translation similarity and a reverse translation similarity; and determining, when the sum of the forward translation similarity and the reverse translation similarity converges, that training of the bidirectional translation model is complete, where the trained bidirectional translation model is then used to perform translation.
    Type: Application
    Filed: May 8, 2020
    Publication date: June 10, 2021
    Applicant: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Jialiang JIANG, Xiang LI, Jianwei CUI
  • Publication number: 20210158126
    Abstract: A method for compressing a neural network model includes: obtaining a first trained teacher model and a second trained teacher model based on N training samples, N being a positive integer greater than 1; for each of the N training samples, determining a first guide component of the first teacher model and a second guide component of the second teacher model, respectively, and determining, according to the first guide component and the second guide component, a sub-optimization target corresponding to the training sample and configured to optimize a student model; determining a joint optimization target based on each of the N training samples and the sub-optimization target corresponding to the training sample; and training the student model based on the joint optimization target.
    Type: Application
    Filed: March 24, 2020
    Publication date: May 27, 2021
    Inventors: Xiang LI, Yuhui SUN, Jingwei LI, Jialiang JIANG
  • Publication number: 20210124881
    Abstract: A method for compressing a neural network model includes: obtaining a set of training samples including a plurality of pairs of training samples, each pair of the training samples including source data and target data corresponding to the source data; training an original teacher model by using the source data as an input and using the target data as verification data; training intermediate teacher models based on the set of training samples and the original teacher model, one or more intermediate teacher models forming a set of teacher models; training multiple candidate student models based on the set of training samples, the original teacher model, and the set of teacher models, the multiple candidate student models forming a set of student models; and selecting a candidate student model of the multiple candidate student models as a target student model according to training results of the multiple candidate student models.
    Type: Application
    Filed: February 7, 2020
    Publication date: April 29, 2021
    Inventors: Xiang LI, Yuhui SUN, Jialiang JIANG, Jianwei CUI