Patents by Inventor Ruiqing ZHANG

Ruiqing ZHANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12282746
    Abstract: A display method, an electronic device, and a storage medium are provided, which relate to the fields of natural language processing and display. The display method includes: acquiring content to be displayed; extracting a target term from the content using a term extraction rule; acquiring annotation information for at least one target term in response to extraction of the at least one target term; and displaying the annotation information for the at least one target term together with the content.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: April 22, 2025
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Haifeng Wang, Zhanyi Liu, Zhongjun He, Hua Wu, Zhi Li, Xing Wan, Jingxuan Zhao, Ruiqing Zhang, Chuanqiang Zhang, Fengtao Huang, Hanbing Song, Wei Di, Shuangshuang Cui, Yongzheng Xin
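
A rough illustrative sketch of the workflow in the abstract above (not the patented implementation): extract target terms from content with a simple rule, look up annotation information, and display both together. The GLOSSARY table, the uppercase-token regex rule, and the function names are assumptions made purely for illustration.

```python
import re

# Hypothetical glossary mapping technical terms to annotation information;
# in the described method the annotations could come from any lookup source.
GLOSSARY = {
    "BLEU": "Bilingual Evaluation Understudy, a machine translation quality metric.",
    "NMT": "Neural Machine Translation.",
}

def extract_terms(content: str) -> list[str]:
    """Toy term extraction rule: uppercase tokens that are known to the glossary."""
    candidates = re.findall(r"\b[A-Z]{2,}\b", content)
    return [t for t in candidates if t in GLOSSARY]

def display_with_annotations(content: str) -> None:
    """Display the content followed by annotation information for each extracted term."""
    terms = extract_terms(content)
    print(content)
    for term in terms:
        print(f"  [{term}] {GLOSSARY[term]}")

if __name__ == "__main__":
    display_with_annotations("The NMT system is scored with BLEU on the test set.")
```
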
  • Patent number: 12277387
    Abstract: A text processing method is provided. The method includes: a first probability value of each candidate character of a plurality of candidate characters corresponding to a target position is determined based on character feature information corresponding to the target position in a text fragment to be processed, wherein the character feature information is determined based on a context at the target position in the text fragment to be processed; a second probability value of each candidate character of the plurality of candidate characters is determined based on a character string including the candidate character and at least one character in at least one position in the text fragment to be processed adjacent to the target position; and a correction character at the target position is determined based on the first probability value and the second probability value of each candidate character of the plurality of candidate characters.
    Type: Grant
    Filed: November 16, 2022
    Date of Patent: April 15, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing Zhang, Zhongjun He, Zhi Li, Hua Wu
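
A minimal sketch of the two-probability idea in the abstract above, assuming a toy context model for the first probability and bigram counts over adjacent-character strings for the second. The scores, the combination weight alpha, and all names are illustrative stand-ins, not the patented models.

```python
from math import log

def context_probability(left: str, right: str, candidate: str) -> float:
    """Stand-in for a context model scoring a candidate character at the target position."""
    toy_scores = {("天", "很蓝", "空"): 0.9, ("天", "很蓝", "工"): 0.1}
    return toy_scores.get((left, right, candidate), 1e-6)

# Toy counts of character strings formed with characters adjacent to the target position.
BIGRAM_COUNTS = {"天空": 120, "天工": 3, "空很": 15, "工很": 1}

def string_probability(left_char: str, candidate: str, right_char: str) -> float:
    """Second probability: based on strings the candidate forms with adjacent characters."""
    total = sum(BIGRAM_COUNTS.values())
    left_bigram = BIGRAM_COUNTS.get(left_char + candidate, 1) / total
    right_bigram = BIGRAM_COUNTS.get(candidate + right_char, 1) / total
    return left_bigram * right_bigram

def correct_character(left: str, right: str, candidates: list[str], alpha: float = 0.5) -> str:
    """Pick the candidate maximizing a weighted combination of both probabilities."""
    def score(c: str) -> float:
        return alpha * log(context_probability(left, right, c)) + \
               (1 - alpha) * log(string_probability(left[-1], c, right[0]))
    return max(candidates, key=score)

if __name__ == "__main__":
    print(correct_character("天", "很蓝", ["空", "工"]))  # expected: 空
```
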
  • Patent number: 12265790
    Abstract: Disclosed are a method for correcting a text, an electronic device and a storage medium. The method includes: acquiring a text to be corrected; acquiring a phonetic symbol sequence of the text to be corrected; and obtaining a corrected text by inputting the text to be corrected and the phonetic symbol sequence into a text correction model, in which the text correction model obtains the corrected text by: detecting an error word in the text to be corrected, determining a phonetic symbol corresponding to the error word in the phonetic symbol sequence, adding a phonetic feature corresponding to the phonetic symbol after the error word to obtain a phonetic symbol text, and correcting the error word and the phonetic feature in the phonetic symbol text to obtain the corrected text.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: April 1, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing Zhang, Zhongjun He, Hua Wu
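
A minimal sketch of the preprocessing step described above: detect an error word, look up its phonetic symbol (pinyin here), and splice that symbol in right after the error word before correction. The PINYIN table and the toy error detector are hypothetical stand-ins; the correction model itself is not shown.

```python
# Toy pinyin table; a real system would use a full phonetic lexicon.
PINYIN = {"帐": "zhang", "账": "zhang", "户": "hu"}

def detect_error_positions(text: str) -> list[int]:
    """Stand-in error detector: flags characters a real model would mark as wrong."""
    return [i for i, ch in enumerate(text) if ch == "帐"]  # toy rule

def build_phonetic_text(text: str, phonetic_symbols: list[str]) -> str:
    """Insert the phonetic symbol of each detected error character right after it."""
    error_positions = set(detect_error_positions(text))
    pieces = []
    for i, ch in enumerate(text):
        pieces.append(ch)
        if i in error_positions:
            pieces.append(f"({phonetic_symbols[i]})")
    return "".join(pieces)

if __name__ == "__main__":
    text = "请核对帐户余额"
    symbols = [PINYIN.get(ch, "") for ch in text]
    print(build_phonetic_text(text, symbols))  # 请核对帐(zhang)户余额
```
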
  • Publication number: 20250094460
    Abstract: A query answering method, an electronic device, a storage medium, and an intelligent agent are provided, which relate to a field of artificial intelligence technology, and in particular to fields of large model, intelligent search and information processing technology. The method includes: inputting, in response to a retrieval content set retrieved based on a query, the query, the retrieval content set and prompt information for answer generation into the large model, so that the large model performs operations of: processing, based on a current task in the prompt information and the query, a current text corresponding to the retrieval content set to obtain a processed text, where the current task is determined based on a task execution order in the prompt information; and obtaining, in a case of determining that the processed text meets a preset condition, an answer to the query based on the processed text.
    Type: Application
    Filed: December 5, 2024
    Publication date: March 20, 2025
    Inventors: Haifeng WANG, Hua WU, Hao TIAN, Jing LIU, Ruiqing ZHANG, Yan CHEN, Yu RAN
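
A minimal sketch of the control flow described in this abstract, with the large model replaced by a placeholder call_large_model function. The task list, the task execution order, and the "preset condition" (a simple non-empty check here) are illustrative assumptions, not the prompt or condition used by the invention.

```python
from typing import Callable

def call_large_model(instruction: str, text: str) -> str:
    """Stand-in for a large-model call; a real system would invoke an LLM API here."""
    return f"[{instruction}] {text}"

def answer_query(query: str, retrieval_contents: list[str],
                 task_order: list[str],
                 condition: Callable[[str], bool]) -> str:
    """Process the retrieved contents task by task, then generate the answer."""
    current_text = "\n".join(retrieval_contents)
    for task in task_order:                      # tasks run in the order given by the prompt
        current_text = call_large_model(f"{task} for query: {query}", current_text)
        if not condition(current_text):          # stop if the processed text no longer qualifies
            return "No reliable answer found."
    return call_large_model(f"answer the query: {query}", current_text)

if __name__ == "__main__":
    print(answer_query(
        "When was the application filed?",
        ["Doc 1: filed December 5, 2024.", "Doc 2: unrelated."],
        task_order=["filter irrelevant passages", "summarize the remaining passages"],
        condition=lambda text: len(text) > 0,
    ))
```
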
  • Patent number: 12236203
    Abstract: A translation method, a model training method, apparatuses, electronic devices and storage mediums, which relate to the field of artificial intelligence technologies, such as machine learning technologies, information processing technologies, are disclosed. In an implementation, a weight for each translation model in at least two pre-trained translation models translating a to-be-translated specified sentence is acquired based on the specified sentence and a pre-trained weighting model; and the specified sentence is translated using the at least two translation models based on the weight for each translation model translating the specified sentence.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: February 25, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing Zhang, Xiyang Wang, Hui Liu, Zhongjun He, Zhi Li, Hua Wu
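
A minimal sketch of sentence-level weighted combination, assuming each translation model exposes scored candidate translations and a weighting model returns one weight per model for the input sentence. The models, weights, and candidates are toy stand-ins; the patented method applies the weights during translation rather than by rescoring finished candidates as done here.

```python
def weighting_model(sentence: str, num_models: int) -> list[float]:
    """Stand-in for the pre-trained weighting model: one weight per translation model."""
    return [0.7, 0.3][:num_models]  # toy: favour the first model for this sentence

def model_a(sentence: str) -> dict[str, float]:
    return {"The weather is nice today.": 0.8, "Today weather is nice.": 0.2}

def model_b(sentence: str) -> dict[str, float]:
    return {"The weather is nice today.": 0.5, "It is nice weather today.": 0.5}

def ensemble_translate(sentence: str) -> str:
    """Combine the candidates of all models using the per-model weights."""
    models = [model_a, model_b]
    weights = weighting_model(sentence, len(models))
    combined: dict[str, float] = {}
    for weight, model in zip(weights, models):
        for candidate, score in model(sentence).items():
            combined[candidate] = combined.get(candidate, 0.0) + weight * score
    return max(combined, key=combined.get)

if __name__ == "__main__":
    print(ensemble_translate("今天天气很好"))
```
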
  • Publication number: 20250054494
    Abstract: A method for training a speech translation model includes: obtaining a trained first text translation model and a speech recognition model, and constructing a candidate speech translation model to be trained based on the first text translation model and the speech recognition model; obtaining at least one of a first sample source language speech or a first sample source language text to obtain a training sample of the candidate speech translation model; and training the candidate speech translation model based on the training sample until the training is completed, and obtaining a trained target speech translation model.
    Type: Application
    Filed: October 29, 2024
    Publication date: February 13, 2025
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Pengzhi Gao, Ruiqing Zhang, Zhongjun He, Hua Wu
  • Patent number: 12210956
    Abstract: The present disclosure provides a translation method and apparatus, an electronic device, and a non-transitory storage medium. An implementation includes: determining an encoded feature of a sentence to be translated by an encoding module; determining, by a graph network module, a knowledge fusion feature of the sentence to be translated based on a preset graph network, wherein the preset graph network is constructed based on a polysemous word in a source language corresponding to the sentence to be translated and a plurality of translated words corresponding to the polysemous word in a target language; determining, by a decoding network, a translated sentence corresponding to the sentence to be translated based on the encoded feature and the knowledge fusion feature.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: January 28, 2025
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Ruiqing Zhang, Hui Liu, Zhongjun He, Zhi Li, Hua Wu
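
A minimal sketch of the knowledge side of this method: a small graph linking polysemous source-language words to their candidate target-language translations, queried for the words of a sentence. The real invention encodes such a graph with a graph network module and fuses the result with encoder features; the POLYSEMY_GRAPH data below is purely illustrative.

```python
# Toy graph: each polysemous source word points to its possible target translations.
POLYSEMY_GRAPH = {
    "bank": ["银行", "河岸"],
    "spring": ["春天", "弹簧", "泉水"],
}

def knowledge_for_sentence(tokens: list[str]) -> dict[str, list[str]]:
    """Collect, for each polysemous word in the sentence, its candidate translations."""
    return {tok: POLYSEMY_GRAPH[tok] for tok in tokens if tok in POLYSEMY_GRAPH}

if __name__ == "__main__":
    sentence = "the bank near the spring".split()
    print(knowledge_for_sentence(sentence))
    # {'bank': ['银行', '河岸'], 'spring': ['春天', '弹簧', '泉水']}
```
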
  • Patent number: 12197882
    Abstract: A translation method, an electronic device and a storage medium, which relate to the field of artificial intelligence technologies, such as machine learning technologies, information processing technologies, are disclosed. An implementation includes: acquiring an intermediate translation result generated by each of multiple pre-trained translation models for a to-be-translated specified sentence in a same iteration of a translation process, so as to obtain multiple intermediate translation results; acquiring a co-occurrence word based on the multiple intermediate translation results; and acquiring a target translation result of the specified sentence based on the co-occurrence word.
    Type: Grant
    Filed: August 10, 2022
    Date of Patent: January 14, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing Zhang, Xiyang Wang, Zhongjun He, Zhi Li, Hua Wu
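
A minimal sketch of the co-occurrence step, assuming each model's intermediate translation result for the same iteration is available as a token list: words produced by every model are collected as reliable anchors. The tokenisation and example sentences are illustrative; how the co-occurrence words feed the final target translation is not shown.

```python
def co_occurrence_words(intermediate_results: list[list[str]]) -> set[str]:
    """Return words that appear in every model's intermediate translation result."""
    common = set(intermediate_results[0])
    for result in intermediate_results[1:]:
        common &= set(result)
    return common

if __name__ == "__main__":
    results = [
        "the cat sits on the mat".split(),
        "a cat sits on a mat".split(),
        "the cat sat on the mat".split(),
    ]
    print(co_occurrence_words(results))  # {'cat', 'on', 'mat'}
```
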
  • Publication number: 20240220812
    Abstract: A method for training a machine translation (MT) model and an electronic device are provided. The technical solution includes: obtaining an original training sample configured to train an MT model; generating at least one adversarial training sample of the MT model based on the original training sample; training the MT model based on the original training sample and the adversarial training sample.
    Type: Application
    Filed: September 14, 2021
    Publication date: July 4, 2024
    Applicant: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Ruiqing ZHANG, Xiyang WANG, Zhongjun HE, Zhi LI, Hua WU
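
A minimal sketch of deriving an adversarial training sample from an original sample by perturbing the source sentence while keeping the reference translation unchanged. The adjacent-character swap used here is an assumed perturbation chosen only for illustration; the publication does not specify this particular operation.

```python
import random

def perturb(source: str, seed: int = 0) -> str:
    """Swap one random pair of adjacent characters in the source sentence."""
    rng = random.Random(seed)
    chars = list(source)
    if len(chars) < 2:
        return source
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def make_adversarial_sample(sample: tuple[str, str]) -> tuple[str, str]:
    """Keep the target sentence, perturb only the source sentence."""
    source, target = sample
    return perturb(source), target

if __name__ == "__main__":
    original = ("今天天气很好", "The weather is nice today.")
    print(make_adversarial_sample(original))
```
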
  • Publication number: 20230342561
    Abstract: A machine translation method includes: obtaining first target language text by performing first translation on source language text using an initial NMT model; identifying an untranslated part in the source language text based on the source language text and the first target language text; obtaining an adjusted NMT model by increasing an attention weight corresponding to the untranslated part in the initial NMT model; and obtaining second target language text by performing second translation on the source language text using the adjusted NMT model.
    Type: Application
    Filed: March 16, 2023
    Publication date: October 26, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing ZHANG, Hui LIU, Zhongjun HE, Zhi LI, Hua WU
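
A minimal sketch of the coverage check implied by this abstract: find source words with no counterpart in the first-pass translation, using a toy bilingual LEXICON. In the described method, the positions found this way would have their attention weights increased inside the NMT model before the second translation pass, which is not reproduced here.

```python
# Toy bilingual lexicon; a real system would use learned alignments or a large dictionary.
LEXICON = {"猫": ["cat"], "坐": ["sits", "sat"], "垫子": ["mat"], "红色": ["red"]}

def untranslated_positions(source_tokens: list[str], target_text: str) -> list[int]:
    """Indices of source tokens whose known translations are all absent from the output."""
    target_words = set(target_text.lower().split())
    missing = []
    for i, tok in enumerate(source_tokens):
        candidates = LEXICON.get(tok, [])
        if candidates and not any(c in target_words for c in candidates):
            missing.append(i)
    return missing

if __name__ == "__main__":
    source = ["红色", "猫", "坐", "垫子"]
    first_pass = "the cat sits on the mat"
    print(untranslated_positions(source, first_pass))  # [0] -> "红色" was dropped
```
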
  • Publication number: 20230342560
    Abstract: A text translation method is described that includes acquiring initial text. Thereafter, first text is determined in the initial text; and second text is determined according to the first text, where the second text is used for describing the first text. Additionally, the initial text is translated to obtain initial translation text, and the second text is translated to obtain description translation text. Thereafter, the initial translation text is updated according to the description translation text to obtain target translation text of the initial text.
    Type: Application
    Filed: March 14, 2023
    Publication date: October 26, 2023
    Inventors: Ruiqing ZHANG, Hui LIU, Xiyang WANG, Zhongjun HE, Zhi LI, Hua WU
  • Patent number: 11704498
    Abstract: A method and apparatus for training models in machine translation, an electronic device and a storage medium are disclosed, which relate to the field of natural language processing technologies and the field of deep learning technologies. An implementation includes mining similar target sentences of a group of samples based on a parallel corpus using a machine translation model and a semantic similarity model, and creating a first training sample set; training the machine translation model with the first training sample set; mining a negative sample of each sample in the group of samples based on the parallel corpus using the machine translation model and the semantic similarity model, and creating a second training sample set; and training the semantic similarity model with the second training sample set.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: July 18, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Zhi Li, Hua Wu
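
A minimal sketch of the alternating procedure, with every model call replaced by a placeholder so that only the loop structure remains: mine similar target sentences to build the first training set, train the machine translation model, mine negative samples with both models, then train the semantic similarity model. All functions and data below are illustrative stand-ins.

```python
def mine_similar_targets(parallel_corpus, mt_model, sim_model):
    """Placeholder: find additional target sentences similar to each reference."""
    return [(src, tgt) for src, tgt in parallel_corpus]   # toy: reuse the corpus

def mine_negatives(parallel_corpus, mt_model, sim_model):
    """Placeholder: pick a hard negative target sentence for each sample."""
    return [(src, tgt, "a wrong translation") for src, tgt in parallel_corpus]

def train(model, data):
    """Placeholder training step; returns the (unchanged) model."""
    return model

def alternate_training(parallel_corpus, mt_model, sim_model, rounds: int = 2):
    for _ in range(rounds):
        first_set = mine_similar_targets(parallel_corpus, mt_model, sim_model)
        mt_model = train(mt_model, first_set)          # train the MT model first
        second_set = mine_negatives(parallel_corpus, mt_model, sim_model)
        sim_model = train(sim_model, second_set)       # then the similarity model
    return mt_model, sim_model

if __name__ == "__main__":
    corpus = [("今天天气很好", "The weather is nice today.")]
    print(alternate_training(corpus, mt_model="MT", sim_model="SIM"))
```
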
  • Publication number: 20230209932
    Abstract: Provided are a display panel and a manufacturing method thereof, and a display device. The display panel includes a base substrate and a plurality of sub-pixels. At least one sub-pixel includes a light-emitting element; a first transistor; a capacitor including a first electrode and a second electrode plate; a second transistor including a second active layer including a third and a fourth electrode area, and a second channel area; and a third transistor including a third active layer, and a third gate electrode connected to a reset line, the third active layer including a fifth electrode area, a sixth electrode area directly connected to an initialization line via a via hole, and a third channel area. An orthographic projection of the initialization line on the base substrate is located between orthographic projections of the reset line and the gate line on the base substrate.
    Type: Application
    Filed: October 22, 2021
    Publication date: June 29, 2023
    Applicants: ORDOS YUANSHENG OPTOELECTRONICS CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Xu WANG, Zihua LI, Lu CAI, Qiang GUO, Wenqiang JIN, Chunbo LI, Qiang WANG, Xudong WANG, Ruiqing ZHANG
  • Publication number: 20230196026
    Abstract: A method for evaluating a text content, which may include: after splitting a to-be-evaluated text into a plurality of clauses arranged in sequence according to punctuation information of the to-be-evaluated text, determining a first clause of the plurality of clauses as an actual tune name; then, determining actual prosodic information based on a Chinese phonetic alphabet text of a third clause to a last clause, in response to the number of clauses from the third clause to the last clause whose numbers of Chinese characters satisfy the character count requirements of the clauses corresponding to the actual tune name exceeding a number threshold; and finally, in response to the actual prosodic information being consistent with standard prosodic information of the actual tune name, evaluating the to-be-evaluated text as a Ci-poetry text.
    Type: Application
    Filed: February 14, 2023
    Publication date: June 22, 2023
    Inventors: Xiyang WANG, Ruiqing ZHANG, Zhongjun HE, Zhi LI, Hua WU
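
A minimal sketch of the clause-splitting and character-count check described above. The TUNE_CHAR_COUNTS entry, the punctuation-based splitter, and the count threshold are toy assumptions; real Ci tune patterns and the prosodic (tonal) comparison step are omitted.

```python
import re

# Toy table: required character counts for the clauses that are checked
# (from the third clause onward), keyed by tune name.
TUNE_CHAR_COUNTS = {"如梦令": [6, 5, 6, 2, 2, 6]}

def split_clauses(text: str) -> list[str]:
    """Split the text into clauses on Chinese punctuation."""
    return [c for c in re.split(r"[，。、；！？·\s]+", text) if c]

def looks_like_ci(text: str, threshold: int = 5) -> bool:
    """Check whether enough clauses match the character counts of the tune name."""
    clauses = split_clauses(text)
    if len(clauses) < 3:
        return False
    tune_name = clauses[0]                         # first clause is taken as the tune name
    required = TUNE_CHAR_COUNTS.get(tune_name)
    if required is None:
        return False
    body = clauses[2:]                             # third clause to the last clause
    matches = sum(1 for clause, need in zip(body, required) if len(clause) == need)
    return matches >= threshold

if __name__ == "__main__":
    text = "如梦令·昨夜雨疏风骤，浓睡不消残酒。试问卷帘人，却道海棠依旧。知否，知否，应是绿肥红瘦。"
    print(looks_like_ci(text))  # True
```
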
  • Publication number: 20230153543
    Abstract: A translation method, a model training method, apparatuses, electronic devices and storage mediums, which relate to the field of artificial intelligence technologies, such as machine learning technologies, information processing technologies, are disclosed. In an implementation, a weight for each translation model in at least two pre-trained translation models translating a to-be-translated specified sentence is acquired based on the specified sentence and a pre-trained weighting model; and the specified sentence is translated using the at least two translation models based on the weight for each translation model translating the specified sentence.
    Type: Application
    Filed: September 23, 2022
    Publication date: May 18, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing ZHANG, Xiyang WANG, Hui LIU, Zhongjun HE, Zhi LI, Hua WU
  • Publication number: 20230153548
    Abstract: A translation method, an electronic device and a storage medium, which relate to the field of artificial intelligence technologies, such as machine learning technologies, information processing technologies, are disclosed. An implementation includes: acquiring an intermediate translation result generated by each of multiple pre-trained translation models for a to-be-translated specified sentence in a same iteration of a translation process, so as to obtain multiple intermediate translation results; acquiring a co-occurrence word based on the multiple intermediate translation results; and acquiring a target translation result of the specified sentence based on the co-occurrence word.
    Type: Application
    Filed: August 10, 2022
    Publication date: May 18, 2023
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Ruiqing ZHANG, Xiyang WANG, Zhongjun HE, Zhi LI, Hua WU
  • Publication number: 20230140997
    Abstract: A method and apparatus for selecting a sample corpus used to optimize a translation model, an electronic device, a computer readable storage medium, and a computer program product are provided. The method includes: after acquiring a first corpus, translating the first corpus by using a to-be-optimized translation model to acquire a second corpus in a different language, then translating the second corpus by using the to-be-optimized translation model to acquire a third corpus, then determining a difficulty level of the first corpus based on a similarity between the first corpus and the third corpus, and finally determining the first corpus as a sample corpus used to perform optimization training on the to-be-optimized translation model in response to the difficulty level satisfying a difficulty level threshold.
    Type: Application
    Filed: December 27, 2022
    Publication date: May 11, 2023
    Inventors: Ruiqing ZHANG, Xiyang WANG, Zhongjun HE, Zhi LI, Hua WU
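
A minimal sketch of the selection criterion: round-trip the first corpus through the to-be-optimized model and keep it as a training sample when the round trip drifts, i.e. when the similarity between the first and third corpus is low. The toy translation table, the token-overlap similarity, and the assumption that low similarity marks a hard (kept) sample are all illustrative.

```python
def toy_translate(text: str, direction: str) -> str:
    """Stand-in for the to-be-optimized translation model."""
    table = {
        ("今天 天气 很 好", "zh-en"): "today the weather is good",
        ("today the weather is good", "en-zh"): "今天 的 天气 好",
    }
    return table.get((text, direction), text)

def similarity(a: str, b: str) -> float:
    """Token-overlap similarity between the first corpus and the third corpus."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(len(sa | sb), 1)

def is_hard_sample(first_corpus: str, threshold: float = 0.8) -> bool:
    second = toy_translate(first_corpus, "zh-en")       # translate into the other language
    third = toy_translate(second, "en-zh")              # translate back
    return similarity(first_corpus, third) < threshold  # low similarity -> hard -> keep

if __name__ == "__main__":
    print(is_hard_sample("今天 天气 很 好"))  # True: the round trip drifted
```
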
  • Publication number: 20230095352
    Abstract: The present disclosure provides a translation method and apparatus, an electronic device, and a non-transitory storage medium. An implementation includes: determining an encoded feature of a sentence to be translated by an encoding module; determining, by a graph network module, a knowledge fusion feature of the sentence to be translated based on a preset graph network, wherein the preset graph network is constructed based on a polysemous word in a source language corresponding to the sentence to be translated and a plurality of translated words corresponding to the polysemous word in a target language; determining, by a decoding network, a translated sentence corresponding to the sentence to be translated based on the encoded feature and the knowledge fusion feature.
    Type: Application
    Filed: February 5, 2022
    Publication date: March 30, 2023
    Inventors: Ruiqing ZHANG, Hui LIU, Zhongjun HE, Zhi LI, Hua WU
  • Publication number: 20230101401
    Abstract: A text processing method is provided. The method includes: a first probability value of each candidate character of a plurality of candidate characters corresponding to a target position is determined based on character feature information corresponding to the target position in a text fragment to be processed, wherein the character feature information is determined based on a context at the target position in the text fragment to be processed; a second probability value of each candidate character of the plurality of candidate characters is determined based on a character string including the candidate character and at least one character in at least one position in the text fragment to be processed adjacent to the target position; and a correction character at the target position is determined based on the first probability value and the second probability value of each candidate character of the plurality of candidate characters.
    Type: Application
    Filed: November 16, 2022
    Publication date: March 30, 2023
    Inventors: Ruiqing ZHANG, Zhongjun HE, Zhi LI, Hua WU
  • Publication number: 20230090625
    Abstract: Disclosed are a method for correcting a text, an electronic device and a storage medium. The method includes: acquiring a text to be corrected; acquiring a phonetic symbol sequence of the text to be corrected; and obtaining a corrected text by inputting the text to be corrected and the phonetic symbol sequence into a text correction model, in which the text correction model obtains the corrected text by: detecting an error word in the text to be corrected, determining a phonetic symbol corresponding to the error word in the phonetic symbol sequence, adding a phonetic feature corresponding to the phonetic symbol after the error word to obtain a phonetic symbol text, and correcting the error word and the phonetic feature in the phonetic symbol text to obtain the corrected text.
    Type: Application
    Filed: November 7, 2022
    Publication date: March 23, 2023
    Inventors: Ruiqing ZHANG, Zhongjun HE, Hua WU