Patents by Inventor Yonggang Deng

Yonggang Deng has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240158406
    Abstract: The present application relates to a piperazine-based compound and a composition thereof, and the use thereof in the manufacture of an anti-tumor medicament.
    Type: Application
    Filed: November 21, 2023
    Publication date: May 16, 2024
    Inventors: Yonggang WEI, Yuqin ZHU, Hongzhu CHU, Fei YE, Wutong DENG, Wei LIU, Yi SUN
  • Publication number: 20240152719
    Abstract: An information management method, device, system, and medium. The information management method, applied to a first terminal, includes: sending an information acquisition request to a server, so that the server responds to the request by sending visitor information to the first terminal; receiving the visitor information sent by the server; generating card information according to the visitor information; and sending the card information to an electronic card, so that the electronic card displays the card information.
    Type: Application
    Filed: June 18, 2021
    Publication date: May 9, 2024
    Applicants: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Shuo Li, Junpeng Han, Jinmiao Tang, Yanjun Liu, Qingqing Ma, Yinan Gao, Yin Yuan, Tianjiao Wang, Hui Sun, Yonggang Xie, Guangquan Wang, Liguang Deng, Zixi Qi
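The claimed first-terminal flow can be sketched as plain code; the classes, field names, and stub behavior below are hypothetical illustrations, not taken from the patent.

```python
# Illustrative sketch of the abstract's three steps: request visitor info,
# generate card info, send it to an electronic card. All names are invented.

class Server:
    """Stub server that answers an information acquisition request."""
    def __init__(self, visitors):
        self.visitors = visitors

    def handle(self, request):
        # Respond to the acquisition request with the visitor's record.
        return self.visitors.get(request["visitor_id"], {})

class ElectronicCard:
    """Stub electronic card that displays whatever it receives."""
    def __init__(self):
        self.shown = None

    def display(self, card_info):
        self.shown = card_info

def first_terminal_flow(server, card, visitor_id):
    # 1. Send an information acquisition request to the server.
    visitor_info = server.handle({"visitor_id": visitor_id})
    # 2. Generate card information according to the visitor information.
    card_info = {"name": visitor_info.get("name", ""),
                 "host": visitor_info.get("host", "")}
    # 3. Send the card information to the electronic card for display.
    card.display(card_info)
    return card_info
```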
  • Patent number: 10970278
    Abstract: A server computing device, including memory storing a knowledge graph. The server computing device may further include a processor configured to receive a natural language input and generate a tokenized utterance based on the natural language input. Based on the tokenized utterance, the processor may select a predefined intention indicating a target ontology entity type of the natural language input. The processor may identify at least one input ontology entity token included in the tokenized utterance and may identify at least one relation between the predefined intention and the input ontology entity token. Based on the predefined intention, the at least one input ontology entity token, and the relation, the processor may generate a structured query. Based on the structured query and the knowledge graph, the processor may output an output ontology entity token having the target ontology entity type.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: April 6, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rui Yan, Yonggang Deng, Junyi Chai, Maochen Guan, Yujie He, Bing Li
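The query pipeline this abstract describes (intent selection, entity identification, structured query, graph lookup) can be sketched minimally; the toy graph, the intent table, and every name below are invented for illustration, not the patent's implementation.

```python
# Knowledge graph: (subject, relation, object) triples with typed entities.
TRIPLES = [("Seattle", "located_in", "Washington"),
           ("Redmond", "located_in", "Washington")]
ENTITY_TYPE = {"Seattle": "city", "Redmond": "city", "Washington": "state"}

# A predefined intention maps a trigger word to a target ontology entity
# type and the relation connecting input entities to the answer.
INTENTS = {"where": ("state", "located_in")}

def answer(natural_language_input):
    # Generate a tokenized utterance from the natural language input.
    tokens = natural_language_input.lower().strip("?").split()
    # Select a predefined intention indicating the target ontology entity type.
    target_type, relation = next(INTENTS[t] for t in tokens if t in INTENTS)
    # Identify input ontology entity tokens in the tokenized utterance.
    entities = [t.capitalize() for t in tokens if t.capitalize() in ENTITY_TYPE]
    # Structured query: (entity, relation, ?x) where ?x has the target type.
    for subj, rel, obj in TRIPLES:
        if subj in entities and rel == relation and ENTITY_TYPE[obj] == target_type:
            return obj  # output ontology entity token of the target type
    return None
```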
  • Patent number: 10916237
    Abstract: A server computing device, including memory storing a knowledge graph including a plurality of ontology entities connected by a plurality of edges. The server computing device may further include a processor configured to generate a glossary file based on the knowledge graph. The glossary file may include a plurality of ontology entities included in the knowledge graph. The processor may receive a plurality of utterance templates. Each utterance template may include an utterance and a predefined intention. For each utterance template, the processor may generate one or more utterance template copies in which one or more ontology entities included in the utterance are replaced with one or more utterance template fields. The processor may generate a plurality of training utterances at least in part by filling the one or more utterance template fields of the one or more utterance template copies with respective ontology entities included in the glossary file.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: February 9, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rui Yan, Yonggang Deng, Junyi Chai, Maochen Guan, Yujie He, Bing Li
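The template-expansion step at the core of this abstract can be sketched as follows; the glossary contents and the `{type}` field syntax are invented stand-ins for whatever the patented system actually uses.

```python
import itertools
import re

# Glossary: ontology entities grouped by type, as extracted from a knowledge graph.
GLOSSARY = {"city": ["Seattle", "Redmond"], "product": ["Widget", "Gadget"]}

def expand(template, intent):
    """Generate training utterances by filling each {type} field in an
    utterance template copy with every glossary entity of that type."""
    fields = re.findall(r"\{(\w+)\}", template)
    utterances = []
    for combo in itertools.product(*(GLOSSARY[f] for f in fields)):
        text = template
        for field, entity in zip(fields, combo):
            text = text.replace("{" + field + "}", entity, 1)
        utterances.append((text, intent))  # each utterance keeps its intention
    return utterances
```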
  • Patent number: 10867132
    Abstract: A server computing device, including memory storing a knowledge graph including a plurality of ontology entities. The server computing device may further include a processor configured to receive a tokenized utterance including a plurality of words and one or more metadata tokens. The processor may extract a respective word embedding vector from each word included in the tokenized utterance. Based on a glossary file, the processor may determine a respective ontology entity type of each word included in the tokenized utterance. The processor may extract a character embedding vector from each character included in the tokenized utterance. Based on the plurality of word embedding vectors, the plurality of respective ontology entity types of the words, and the plurality of character embedding vectors, the processor may determine a predefined intention of the tokenized utterance using at least one recurrent neural network. The predefined intention may indicate a target ontology entity type.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rui Yan, Junyi Chai, Maochen Guan, Yujie He, Bing Li, Yonggang Deng
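The three feature streams the abstract lists (word embedding vectors, per-word ontology entity types, and character embedding vectors) feeding a recurrent layer can be sketched schematically. The dimensions, the hashing-based stand-in for learned embeddings, and the untrained tanh recurrence below are all invented for illustration.

```python
import math

GLOSSARY = {"seattle": "city", "widget": "product"}
TYPES = ["none", "city", "product"]

def embed(token, dim=8):
    # Deterministic stand-in for a learned embedding table.
    return [math.sin(hash(token) % 1000 + i) for i in range(dim)]

def features(tokenized_utterance):
    feats = []
    for word in tokenized_utterance:
        word_vec = embed(word)
        # One-hot ontology entity type, looked up in the glossary file.
        type_vec = [1.0 if t == GLOSSARY.get(word, "none") else 0.0
                    for t in TYPES]
        # Mean of character embeddings as a cheap character-level feature.
        char_vecs = [embed(c, 4) for c in word]
        char_vec = [sum(col) / len(word) for col in zip(*char_vecs)]
        feats.append(word_vec + type_vec + char_vec)
    return feats

def rnn_state(feats, dim=8):
    # Minimal untrained tanh recurrence standing in for the recurrent
    # neural network that would predict the predefined intention.
    h = [0.0] * dim
    for x in feats:
        h = [math.tanh(h[i] * 0.5 + sum(x) / len(x)) for i in range(dim)]
    return h
```

In the patented system the final state would be classified into a predefined intention indicating the target ontology entity type; here it is left as a raw state vector.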
  • Publication number: 20200312300
    Abstract: A server computing device, including memory storing a knowledge graph including a plurality of ontology entities connected by a plurality of edges. The server computing device may further include a processor configured to generate a glossary file based on the knowledge graph. The glossary file may include a plurality of ontology entities included in the knowledge graph. The processor may receive a plurality of utterance templates. Each utterance template may include an utterance and a predefined intention. For each utterance template, the processor may generate one or more utterance template copies in which one or more ontology entities included in the utterance are replaced with one or more utterance template fields. The processor may generate a plurality of training utterances at least in part by filling the one or more utterance template fields of the one or more utterance template copies with respective ontology entities included in the glossary file.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rui YAN, Yonggang DENG, Junyi CHAI, Maochen GUAN, Yujie HE, Bing LI
  • Publication number: 20200311199
    Abstract: A server computing device, including memory storing a knowledge graph including a plurality of ontology entities. The server computing device may further include a processor configured to receive a tokenized utterance including a plurality of words and one or more metadata tokens. The processor may extract a respective word embedding vector from each word included in the tokenized utterance. Based on a glossary file, the processor may determine a respective ontology entity type of each word included in the tokenized utterance. The processor may extract a character embedding vector from each character included in the tokenized utterance. Based on the plurality of word embedding vectors, the plurality of respective ontology entity types of the words, and the plurality of character embedding vectors, the processor may determine a predefined intention of the tokenized utterance using at least one recurrent neural network. The predefined intention may indicate a target ontology entity type.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rui YAN, Junyi CHAI, Maochen GUAN, Yujie HE, Bing LI, Yonggang DENG
  • Publication number: 20200311070
    Abstract: A server computing device, including memory storing a knowledge graph. The server computing device may further include a processor configured to receive a natural language input and generate a tokenized utterance based on the natural language input. Based on the tokenized utterance, the processor may select a predefined intention indicating a target ontology entity type of the natural language input. The processor may identify at least one input ontology entity token included in the tokenized utterance and may identify at least one relation between the predefined intention and the input ontology entity token. Based on the predefined intention, the at least one input ontology entity token, and the relation, the processor may generate a structured query. Based on the structured query and the knowledge graph, the processor may output an output ontology entity token having the target ontology entity type.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 1, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Rui YAN, Yonggang DENG, Junyi CHAI, Maochen GUAN, Yujie HE, Bing LI
  • Patent number: 8566076
    Abstract: A system and method for speech translation includes a bridge module connected between a first component and a second component. The bridge module includes a transformation model configured to receive an original hypothesis output from a first component. The transformation model has one or more transformation features configured to transform the original hypothesis into a new hypothesis that is more easily translated by the second component.
    Type: Grant
    Filed: May 28, 2008
    Date of Patent: October 22, 2013
    Assignee: International Business Machines Corporation
    Inventors: Yonggang Deng, Yuqing Gao, Bing Xiang
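The bridge idea (a transformation model that rewrites a recognizer's hypothesis into one the translator handles better) can be sketched with two toy transformation features; the filler list and contraction table are invented examples of what such features might encode, not the patent's actual features.

```python
FILLERS = {"uh", "um", "er"}                       # disfluencies to drop
CONTRACTIONS = {"gonna": "going to", "wanna": "want to"}

def bridge(original_hypothesis):
    """Transform the first component's original hypothesis into a new
    hypothesis that is easier for the second component to translate."""
    out = []
    for tok in original_hypothesis.split():
        if tok in FILLERS:                         # feature 1: remove fillers
            continue
        out.append(CONTRACTIONS.get(tok, tok))     # feature 2: expand contractions
    return " ".join(out)
```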
  • Patent number: 8229729
    Abstract: A system and method for training a statistical machine translation model and decoding or translating using the same is disclosed. A source word versus target word co-occurrence matrix is created to define word pairs. Dimensionality of the matrix may be reduced. Word pairs are mapped as vectors into continuous space where the word pairs are vectors of continuous real numbers and not discrete entities in the continuous space. A machine translation parametric model is trained using an acoustic model training method based on word pair vectors in the continuous space.
    Type: Grant
    Filed: March 25, 2008
    Date of Patent: July 24, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ruhi Sarikaya, Yonggang Deng, Brian Edward Doorenbos Kingsbury, Yuqing Gao
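The word-pair representation step can be sketched in miniature: build the source-versus-target co-occurrence matrix and map each word pair to a vector of continuous real numbers. The vocabularies and counts are toy data, and the abstract's optional dimensionality reduction is omitted here.

```python
SRC = ["chat", "chien"]   # toy source vocabulary
TGT = ["cat", "dog"]      # toy target vocabulary

# Co-occurrence counts over aligned sentence pairs (invented).
COOC = {("chat", "cat"): 5, ("chien", "dog"): 4}

def cooc_matrix():
    """Source-word versus target-word co-occurrence matrix."""
    return [[float(COOC.get((s, t), 0)) for t in TGT] for s in SRC]

def pair_vector(src_word, tgt_word):
    """Map a word pair into continuous space: its row and column profiles,
    so the pair is a vector of real numbers rather than a discrete entity."""
    m = cooc_matrix()
    i, j = SRC.index(src_word), TGT.index(tgt_word)
    row = m[i]                                 # source word's profile
    col = [m[r][j] for r in range(len(SRC))]   # target word's profile
    return row + col
```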
  • Publication number: 20090299724
    Abstract: A system and method for speech translation includes a bridge module connected between a first component and a second component. The bridge module includes a transformation model configured to receive an original hypothesis output from a first component. The transformation model has one or more transformation features configured to transform the original hypothesis into a new hypothesis that is more easily translated by the second component.
    Type: Application
    Filed: May 28, 2008
    Publication date: December 3, 2009
    Inventors: Yonggang Deng, Yuqing Gao, Bing Xiang
  • Publication number: 20090248394
    Abstract: A system and method for training a statistical machine translation model and decoding or translating using the same is disclosed. A source word versus target word co-occurrence matrix is created to define word pairs. Dimensionality of the matrix may be reduced. Word pairs are mapped as vectors into continuous space where the word pairs are vectors of continuous real numbers and not discrete entities in the continuous space. A machine translation parametric model is trained using an acoustic model training method based on word pair vectors in the continuous space.
    Type: Application
    Filed: March 25, 2008
    Publication date: October 1, 2009
    Inventors: Ruhi Sarikaya, Yonggang Deng, Brian Edward Doorenbos Kingsbury, Yuqing Gao
  • Patent number: 7117153
    Abstract: A method of modeling a speech recognition system includes decoding a speech signal produced from a training text to produce a sequence of predicted speech units. The training text comprises a sequence of actual speech units that is used with the sequence of predicted speech units to form a confusion model. In further embodiments, the confusion model is used to decode a text to identify an error rate that would be expected if the speech recognition system decoded speech based on the text.
    Type: Grant
    Filed: February 13, 2003
    Date of Patent: October 3, 2006
    Assignee: Microsoft Corporation
    Inventors: Milind Mahajan, Yonggang Deng, Alejandro Acero, Asela J. R. Gunawardana, Ciprian Chelba
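The confusion-model idea (align actual speech units with the units the recognizer predicted, then use the resulting confusion statistics to predict an error rate on new text) can be sketched as follows; the unit-level alignment and the simple per-unit error estimate are invented simplifications.

```python
def build_confusion_model(actual_units, predicted_units):
    """Count how often each actual speech unit is decoded as each
    predicted unit (assumes a one-to-one alignment, for simplicity)."""
    model = {}
    for a, p in zip(actual_units, predicted_units):
        model.setdefault(a, {}).setdefault(p, 0)
        model[a][p] += 1
    return model

def expected_error_rate(model, text_units):
    """Error rate the recognizer would be expected to produce if it
    decoded speech based on this text."""
    errors = 0.0
    for u in text_units:
        counts = model.get(u, {})
        total = sum(counts.values())
        if total:
            errors += 1.0 - counts.get(u, 0) / total  # P(decoded != u)
        # units unseen in training contribute no evidence in this sketch
    return errors / len(text_units)
```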
  • Patent number: 7103544
    Abstract: A method of modeling a speech recognition system includes decoding a speech signal produced from a training text to produce a sequence of predicted speech units. The training text comprises a sequence of actual speech units that is used with the sequence of predicted speech units to form a confusion model. In further embodiments, the confusion model is used to decode a text to identify an error rate that would be expected if the speech recognition system decoded speech based on the text.
    Type: Grant
    Filed: June 6, 2005
    Date of Patent: September 5, 2006
    Assignee: Microsoft Corporation
    Inventors: Milind Mahajan, Yonggang Deng, Alejandro Acero, Asela J. R. Gunawardana, Ciprian Chelba
  • Publication number: 20050228670
    Abstract: A method of modeling a speech recognition system includes decoding a speech signal produced from a training text to produce a sequence of predicted speech units. The training text comprises a sequence of actual speech units that is used with the sequence of predicted speech units to form a confusion model. In further embodiments, the confusion model is used to decode a text to identify an error rate that would be expected if the speech recognition system decoded speech based on the text.
    Type: Application
    Filed: June 6, 2005
    Publication date: October 13, 2005
    Applicant: Microsoft Corporation
    Inventors: Milind Mahajan, Yonggang Deng, Alejandro Acero, Asela Gunawardana, Ciprian Chelba
  • Publication number: 20040162730
    Abstract: A method of modeling a speech recognition system includes decoding a speech signal produced from a training text to produce a sequence of predicted speech units. The training text comprises a sequence of actual speech units that is used with the sequence of predicted speech units to form a confusion model. In further embodiments, the confusion model is used to decode a text to identify an error rate that would be expected if the speech recognition system decoded speech based on the text.
    Type: Application
    Filed: February 13, 2003
    Publication date: August 19, 2004
    Applicant: Microsoft Corporation
    Inventors: Milind Mahajan, Yonggang Deng, Alejandro Acero, Asela J.R. Gunawardana, Ciprian Chelba
  • Patent number: D1007489
    Type: Grant
    Filed: April 12, 2023
    Date of Patent: December 12, 2023
    Assignee: SHENZHEN LANHE TECHNOLOGIES CO., LTD.
    Inventors: Ziguo Huang, Yonggang Deng