Patents by Inventor Can GAO

Can GAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12277401
Abstract: The present disclosure discloses a method and apparatus for acquiring a pre-trained model, and relates to natural language processing and deep learning technologies in the field of artificial intelligence technologies. An implementation includes: acquiring training data, the training data including a single-modal language material and a multi-modal language material, and the multi-modal language material including a language material pair formed by a first-modal language material and a second-modal language material; and performing a multi-task training operation on a pre-trained model using the training data, the multi-task including at least one cross-modal contrastive learning task and at least one single-modal learning task; the pre-trained language model obtained in the present disclosure may learn from different forms of language materials, i.e. …
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: April 15, 2025
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Guocheng Niu, Wei Li, Can Gao, Xinyan Xiao, Hua Wu
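The abstract above describes joint training with a cross-modal contrastive learning task over (first-modal, second-modal) language-material pairs. The patent does not disclose implementation details; the sketch below is a generic InfoNCE-style cross-modal contrastive loss in NumPy, where aligned pairs in a batch are positives and all other pairings are negatives. All function names and the temperature value are illustrative assumptions, not the patented method.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_modal_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """InfoNCE-style loss: the i-th text and i-th image form a positive pair;
    every other pairing in the batch serves as a negative."""
    t = l2_normalize(np.asarray(text_emb, dtype=float))
    v = l2_normalize(np.asarray(image_emb, dtype=float))
    logits = t @ v.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(t))              # correct match is the diagonal

    def xent(lg):
        # numerically stable softmax cross-entropy against the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # symmetric: text-to-image retrieval plus image-to-text retrieval
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a multi-task setup like the one claimed, a loss of this shape for the paired data would be summed with a standard single-modal objective (e.g. masked language modeling) computed on the single-modal material.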
  • Publication number: 20240423011
Abstract: A planar light-emitting transistor device capable of surface light source emission, and a preparation method therefor and an application thereof are provided. The planar light-emitting transistor device has a charge buffer layer inserted between a semiconductor charge transport layer and a light-emitting unit, such that a prepared planar light-emitting transistor can realize stable surface light source emission, thereby effectively overcoming the defect of the emergent light of a traditional planar light-emitting transistor being linear or strip-shaped. The planar light-emitting transistor device capable of surface light source emission has a high integration level, can realize stable surface light source emission, effectively improves the aperture ratio of the transistor device, offers good gate tunability, high operating stability, and broad tunability, and can be easily miniaturized.
    Type: Application
    Filed: October 27, 2022
    Publication date: December 19, 2024
    Inventors: Huanli DONG, Zhagen MIAO, Haikuo GAO, Wenping HU, Can GAO, Man ZHAO
  • Publication number: 20240400604
    Abstract: A nucleoside triphosphate photoaffinity probe, method for preparing same, and applications thereof. The nucleoside triphosphate photoaffinity probe is a novel small molecule activity probe for detecting nucleoside triphosphate-binding proteins and is based on the structure of nucleoside triphosphates (GTP and ATP) connected to a smaller photoaffinity side chain modification. This probe can effectively label nucleoside triphosphate-binding proteins in cell lysates for high-throughput proteomics analysis, identify and analyze the binding sites of the probe, and can also be used to analyze the action sites of nucleoside triphosphate competitive inhibitors, thus having significant practical application value.
    Type: Application
    Filed: June 3, 2024
    Publication date: December 5, 2024
    Applicant: SHANDONG UNIVERSITY
    Inventors: Rong CAI, Can GAO, Mengxuan LI, Zhiming WANG, Jing TAN, Wenwen LI, Jing XU
  • Patent number: 11537792
Abstract: The present disclosure provides a pre-training method for a sentiment analysis model and an electronic device, which relates to a field of artificial intelligence technologies. The method includes: based on a given seed sentiment dictionary, performing sentiment knowledge detection on a training corpus in a training corpus set, and determining a detection sentiment word and a detection word pair of the training corpus; according to preset mask processing rules, performing a mask process on the training corpus to generate a masked corpus; performing encoding and decoding on the masked corpus by using a preset encoder and decoder to predict the detection sentiment word and the detection word pair of the training corpus; and updating the preset encoder and decoder according to a difference between the prediction sentiment word and the detection sentiment word, and a difference between the prediction word pair and the detection word pair.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: December 27, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Can Gao, Hao Liu, Bolei He, Xinyan Xiao, Hao Tian
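The abstract above describes detecting sentiment words via a seed dictionary and masking them so the model must recover sentimental knowledge during pre-training. The patent does not publish code; the following is a minimal hypothetical sketch of the detection-and-masking step, with a toy seed dictionary and function name chosen for illustration.

```python
# Toy seed sentiment dictionary; a real one would be much larger.
SEED_SENTIMENT = {"good", "great", "terrible", "awful"}

def detect_and_mask(tokens, seed=SEED_SENTIMENT, mask_token="[MASK]"):
    """Replace each detected sentiment word with mask_token and record it
    (by position) as a prediction target for the encoder-decoder."""
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if tok.lower() in seed:
            masked.append(mask_token)
            targets[i] = tok  # ground truth the decoder must recover
        else:
            masked.append(tok)
    return masked, targets
```

For example, `detect_and_mask("the movie was great".split())` masks position 3 and records `"great"` as the target; the training signal in the claim is the difference between the decoder's prediction at that position and the recorded word.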
  • Publication number: 20220327809
    Abstract: A method for training a model based on multi-modal data joint learning, includes: obtaining multi-modal data; in which the multi-modal data include at least one type of single-modal data and at least one type of Pair multi-modal data; inputting the single-modal data and the Pair multi-modal data into a decoupling attention Transformer network model to generate respectively Token semantic representation features and cross-modal semantic representation features; and training the decoupling attention Transformer network model based on the Token semantic representation features and the cross-modal semantic representation features.
    Type: Application
    Filed: June 27, 2022
    Publication date: October 13, 2022
    Applicant: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang
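The abstract above describes a "decoupling attention" Transformer that handles both single-modal data and paired multi-modal data. The publication does not specify the mechanism in this listing; one plausible reading, sketched below as an assumption, is an attention mask that always permits intra-modal attention but enables cross-modal attention only for paired examples. The function name and token layout are illustrative.

```python
import numpy as np

def decoupled_attention_mask(n_text, n_image, paired):
    """Boolean attention mask over concatenated [text; image] tokens.
    Intra-modal attention is always allowed; cross-modal attention is
    enabled only when the sample is a paired (multi-modal) example."""
    n = n_text + n_image
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_text, :n_text] = True      # text tokens attend to text tokens
    mask[n_text:, n_text:] = True      # image tokens attend to image tokens
    if paired:
        mask[:n_text, n_text:] = True  # text attends to image
        mask[n_text:, :n_text] = True  # image attends to text
    return mask
```

Under this reading, single-modal batches train the modality-specific (Token-level) representations, while paired batches additionally exercise the cross-modal pathway, matching the two kinds of semantic representation features named in the abstract.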
  • Publication number: 20220292269
Abstract: The present disclosure discloses a method and apparatus for acquiring a pre-trained model, and relates to natural language processing and deep learning technologies in the field of artificial intelligence technologies. An implementation includes: acquiring training data, the training data including a single-modal language material and a multi-modal language material, and the multi-modal language material including a language material pair formed by a first-modal language material and a second-modal language material; and performing a multi-task training operation on a pre-trained model using the training data, the multi-task including at least one cross-modal contrastive learning task and at least one single-modal learning task; the pre-trained language model obtained in the present disclosure may learn from different forms of language materials, i.e. …
    Type: Application
    Filed: October 15, 2021
    Publication date: September 15, 2022
    Applicant: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Guocheng NIU, Wei LI, Can GAO, Xinyan XIAO, Hua WU
  • Publication number: 20210200949
Abstract: The present disclosure provides a pre-training method for a sentiment analysis model and an electronic device, which relates to a field of artificial intelligence technologies. The method includes: based on a given seed sentiment dictionary, performing sentiment knowledge detection on a training corpus in a training corpus set, and determining a detection sentiment word and a detection word pair of the training corpus; according to preset mask processing rules, performing a mask process on the training corpus to generate a masked corpus; performing encoding and decoding on the masked corpus by using a preset encoder and decoder to predict the detection sentiment word and the detection word pair of the training corpus; and updating the preset encoder and decoder according to a difference between the prediction sentiment word and the detection sentiment word, and a difference between the prediction word pair and the detection word pair.
    Type: Application
    Filed: July 21, 2020
    Publication date: July 1, 2021
    Inventors: Can GAO, Hao LIU, Bolei HE, Xinyan XIAO, Hao TIAN