Patents by Inventor Dongling Xiao

Dongling Xiao has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11562150
    Abstract: The present disclosure proposes a language generation method and apparatus. The method includes: performing encoding processing on an input sequence by using a preset encoder to generate a hidden state vector corresponding to the input sequence; in response to a granularity category of a second target segment being a phrase, decoding a first target segment vector, the hidden state vector, and a position vector corresponding to the second target segment by using N decoders to generate N second target segments; determining a loss value based on differences between respective N second target segments and a second target annotated segment; and performing parameter updating on the preset encoder, a preset classifier, and the N decoders based on the loss value to generate an updated language generation model for performing language generation.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: January 24, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
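As an illustration only: below is a minimal sketch of the training step this abstract describes, assuming PyTorch, assuming two decoders, and inventing all module names and shapes. The granularity classifier is stubbed in but unused in this step, and the position vector for the second target segment is omitted; none of this is taken from the patent itself.

```python
# Hedged sketch of the multi-decoder training step from the abstract above.
# PyTorch, the module layout, and all hyperparameters are assumptions for
# illustration; the patent does not specify this implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGranularityModel(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_decoders=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
        # "Preset classifier": predicts the granularity category of a segment
        # (e.g. 0 = word, 1 = phrase); shown as a module, unused in this step.
        self.classifier = nn.Linear(d_model, 2)
        # N decoders, each proposing a candidate second target segment.
        self.decoders = nn.ModuleList(
            nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), 2)
            for _ in range(n_decoders))
        self.proj = nn.Linear(d_model, vocab_size)

def training_step(model, opt, input_ids, first_segment_ids, second_segment_gold):
    # 1. Encode the input sequence into hidden state vectors.
    hidden = model.encoder(model.embed(input_ids))
    # 2. Granularity is assumed to be "phrase", so all N decoders run, each
    #    conditioned on the first target segment vector and the hidden states.
    #    (second_segment_gold is assumed aligned in length with the input.)
    tgt = model.embed(first_segment_ids)
    loss = sum(
        F.cross_entropy(model.proj(dec(tgt, hidden)).flatten(0, 1),
                        second_segment_gold.flatten())
        for dec in model.decoders)
    # 3. One summed loss over the N candidate segments updates encoder,
    #    classifier, and all N decoders together.
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The point the sketch preserves is that a single loss computed across the N candidate second target segments drives one parameter update over the encoder, the classifier, and all decoders at once.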
  • Patent number: 11461549
    Abstract: The present disclosure discloses a method and an apparatus for generating a text based on a semantic representation and relates to the field of natural language processing (NLP) technologies. The method for generating the text includes: obtaining an input text, the input text comprising a source text; obtaining a placeholder of an ith word to be predicted in a target text; obtaining a vector representation of the ith word to be predicted, in which the vector representation of the ith word to be predicted is obtained by calculating the placeholder of the ith word to be predicted, the source text and 1st to (i−1)th predicted words by employing a self-attention mechanism; and generating an ith predicted word based on the vector representation of the ith word to be predicted, to obtain the target text.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 4, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
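Read concretely, the decoding loop above is a placeholder-based variant of autoregressive generation. Below is a hypothetical sketch: at step i a placeholder token is appended, self-attention runs over the source text, the 1st to (i−1)th predicted words, and the placeholder, and the hidden state at the placeholder position is projected to the vocabulary. The token ids, module sizes, and greedy argmax decoding are my assumptions, not the patent's implementation.

```python
# Hypothetical sketch of placeholder-based decoding as described above.
import torch
import torch.nn as nn

VOCAB, D_MODEL = 1000, 128
PLACEHOLDER_ID = 3          # assumed id of the [MASK]-style placeholder token
embed = nn.Embedding(VOCAB, D_MODEL)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True), 2)
proj = nn.Linear(D_MODEL, VOCAB)

@torch.no_grad()
def generate(source_ids, max_len=10):
    predicted = []
    for _ in range(max_len):
        # The sequence self-attention sees: the source text, the 1st..(i-1)th
        # predicted words, and the placeholder of the i-th word to be predicted.
        seq = source_ids + predicted + [PLACEHOLDER_ID]
        h = encoder(embed(torch.tensor([seq])))
        # The vector representation of the i-th word is the hidden state at
        # the placeholder position; project it to the vocabulary and pick.
        predicted.append(proj(h[0, -1]).argmax().item())
    return predicted

print(generate([5, 42, 7]))  # untrained weights, so the output is arbitrary
```

In practice the loop would stop at an end-of-sequence token rather than a fixed max_len; that is omitted here to keep the sketch minimal.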
  • Publication number: 20220300697
    Abstract: A method for generating a target object is provided. A first discrete encoded sequence corresponding to an original object is generated by performing discrete encoding on the original object. The original object is of an image type, a text type, or a text-image-combined type. A second discrete encoded sequence is obtained by inputting the first discrete encoded sequence into a generative model. A target object is generated based on the second discrete encoded sequence. The target object is of an image type or a text type. When the original object is of the image type, the target object is of the text type. When the original object is of the text type, the target object is of the image type.
    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: Yukun LI, Han ZHANG, Weichong YIN, Dongling XIAO, Yu SUN, Hao TIAN
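The pipeline in this abstract reduces to three calls once both modalities share a discrete code vocabulary. The sketch below invents the interface names (DiscreteCodec, GenerativeModel, translate_modality) purely for illustration; for image-to-text, the source codec would be a VQ-style image tokenizer and the target codec a text tokenizer, with the roles swapped for text-to-image.

```python
# Hypothetical interfaces for the discrete-encoding pipeline described above.
from typing import Any, Protocol, Sequence

class DiscreteCodec(Protocol):
    """Assumed interface: encodes an object (image or text) to integer codes
    and decodes codes back to an object in its own modality."""
    def encode(self, obj: Any) -> Sequence[int]: ...
    def decode(self, codes: Sequence[int]) -> Any: ...

class GenerativeModel(Protocol):
    def generate(self, codes: Sequence[int]) -> Sequence[int]: ...

def translate_modality(original: Any, source_codec: DiscreteCodec,
                       model: GenerativeModel,
                       target_codec: DiscreteCodec) -> Any:
    # First discrete encoded sequence: discrete encoding of the original object.
    first_codes = source_codec.encode(original)
    # Second discrete encoded sequence, produced by the generative model.
    second_codes = model.generate(first_codes)
    # Target object: text when the original was an image, and vice versa.
    return target_codec.decode(second_codes)
```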
  • Publication number: 20220129768
    Abstract: The present disclosure provides a method and apparatus for training a model. The method can include: acquiring at least one paragraph text, each paragraph text comprising a plurality of fine-grained samples; processing a fine-grained sample in each paragraph text to obtain a coarse-grained sample; annotating the coarse-grained sample in each paragraph text and obscuring one coarse-grained sample using a mask of one fine-grained sample to obtain a training sample set, wherein the training sample set comprises a plurality of annotated texts, and each annotated text comprises at least one of a fine-grained sample or an annotated coarse-grained sample; and training a fine-grained model using the training sample set to obtain a trained fine-grained model, the fine-grained model being used to learn content of a previous fine grain size and predict content of an adjacent coarse grain size.
    Type: Application
    Filed: January 3, 2022
    Publication date: April 28, 2022
    Inventors: Dongling XIAO, Yukun LI, Han ZHANG, Yu SUN, Hao TIAN, Hua WU, Haifeng WANG
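The distinctive step in this abstract is obscuring a whole coarse-grained sample behind a single fine-grained mask. A toy sketch follows, with random span selection and a literal "[MASK]" string as my own assumptions; the patent's actual sampling and annotation scheme is not specified here.

```python
# Toy sketch of the coarse-grained masking step described above: a chosen
# multi-token (coarse-grained) span is replaced by one fine-grained [MASK],
# and the replaced span is kept as the annotation target.
import random

MASK = "[MASK]"

def mask_coarse_grained(tokens, spans, mask_ratio=0.15, seed=None):
    """tokens: fine-grained samples; spans: (start, end) coarse-grained units."""
    rng = random.Random(seed)
    chosen = [s for s in spans if rng.random() < mask_ratio]
    masked, targets, i = [], [], 0
    for start, end in sorted(chosen):
        masked.extend(tokens[i:start])
        masked.append(MASK)                  # one mask for the whole span
        targets.append(tokens[start:end])    # annotated coarse-grained sample
        i = end
    masked.extend(tokens[i:])
    return masked, targets

# Example: the phrase "natural language processing" becomes one [MASK].
toks = ["models", "for", "natural", "language", "processing", "tasks"]
print(mask_coarse_grained(toks, [(2, 5)], mask_ratio=1.0))
# (['models', 'for', '[MASK]', 'tasks'],
#  [['natural', 'language', 'processing']])
```

Hiding a three-token phrase behind one mask forces the model to predict an adjacent coarse-grained unit from fine-grained context, which is the training signal the abstract describes.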
  • Publication number: 20210232775
    Abstract: The present disclosure proposes a language generation method and apparatus. The method includes: performing encoding processing on an input sequence by using a preset encoder to generate a hidden state vector corresponding to the input sequence; in response to a granularity category of a second target segment being a phrase, decoding a first target segment vector, the hidden state vector, and a position vector corresponding to the second target segment by using N decoders to generate N second target segments; determining a loss value based on differences between respective N second target segments and a second target annotated segment; and performing parameter updating on the preset encoder, a preset classifier, and the N decoders based on the loss value to generate an updated language generation model for performing language generation.
    Type: Application
    Filed: September 24, 2020
    Publication date: July 29, 2021
    Inventors: Han ZHANG, Dongling XIAO, Yukun LI, Yu SUN, Hao TIAN, Hua WU, Haifeng WANG
  • Publication number: 20210232765
    Abstract: The present disclosure discloses a method and an apparatus for generating a text based on a semantic representation and relates to the field of natural language processing (NLP) technologies. The method for generating the text includes: obtaining an input text, the input text comprising a source text; obtaining a placeholder of an ith word to be predicted in a target text; obtaining a vector representation of the ith word to be predicted, in which the vector representation of the ith word to be predicted is obtained by calculating the placeholder of the ith word to be predicted, the source text and 1st to (i−1)th predicted words by employing a self-attention mechanism; and generating an ith predicted word based on the vector representation of the ith word to be predicted, to obtain the target text.
    Type: Application
    Filed: August 10, 2020
    Publication date: July 29, 2021
    Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang