Patents by Inventor Duyu Tang
Duyu Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11947914
Abstract: In embodiments of the present disclosure, there is provided an approach for fact checking based on semantic graphs. According to embodiments of the present disclosure, after obtaining a text to be fact checked, a plurality of evidence sentences related to the text are retrieved from an evidence database. Then, semantic graphs of the text and the evidence sentences are constructed based on semantic analysis, and the veracity of a statement in the text can be determined based on the semantic graphs. Embodiments of the present disclosure propose a graph-based reasoning approach for fact checking and use the constructed semantic graphs to facilitate verification of the truthfulness of the text, thereby improving the accuracy of fact checking.
Type: Grant
Filed: June 30, 2020
Date of Patent: April 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Duyu Tang, Nan Duan, Ming Zhou, Jiun-Hung Chen, Pengcheng Wang, Ying Qiao
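As an illustration of the graph-based reasoning idea in the abstract above, the sketch below reduces both the claim and the evidence to toy semantic graphs (sets of subject-relation-object triples) and judges veracity by how the claim's triples align with the evidence graph. The triple representation and the three verdict labels are simplifications chosen for this sketch; the patented approach builds its graphs by learned semantic analysis and reasons over them with a neural model.

```python
# Toy graph-based fact checking: a claim is SUPPORTED when all of its
# triples appear in the evidence graph, REFUTED when the evidence holds
# a conflicting object for the same subject and relation, and
# NOT ENOUGH INFO otherwise.
from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def verdict(claim_graph: Set[Triple], evidence_graph: Set[Triple]) -> str:
    supported = contradicted = 0
    for subj, rel, obj in claim_graph:
        if (subj, rel, obj) in evidence_graph:
            supported += 1
        elif any(s == subj and r == rel and o != obj
                 for s, r, o in evidence_graph):
            contradicted += 1  # same subject/relation, different object
    if contradicted:
        return "REFUTED"
    if supported == len(claim_graph):
        return "SUPPORTED"
    return "NOT ENOUGH INFO"

claim = {("earth", "orbits", "sun")}
evidence = {("earth", "orbits", "sun"), ("mars", "orbits", "sun")}
print(verdict(claim, evidence))  # SUPPORTED
```

A real system would extract these graphs from retrieved evidence sentences rather than receive them ready-made, and the match/contradict decision would be learned rather than rule-based.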
-
Publication number: 20230359443
Abstract: A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
Type: Application
Filed: May 24, 2023
Publication date: November 9, 2023
Inventors: Colin Bruce Clement, Shuai Lu, Neelakantan Sundaresan, Alexey Svyatkovskiy, Duyu Tang
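The feature construction in the abstract above, a file-level context concatenated with a local context to form the model input, can be sketched as follows. The model call here is a stand-in (a canned token iterator), not the neural transformer the filing describes; the function and parameter names are hypothetical.

```python
# Sketch of line completion from a file-level context (global, class,
# and method signatures) plus a local context (the code being typed).
# The model predicts one code element at a time until end-of-line.
from typing import Callable, List

def build_input(file_level_context: List[str], local_context: str) -> str:
    # Concatenate file-level context above the local context.
    return "\n".join(file_level_context) + "\n" + local_context

def complete_line(model: Callable[[str], str],
                  file_level_context: List[str],
                  local_context: str,
                  max_tokens: int = 16) -> str:
    prompt = build_input(file_level_context, local_context)
    tokens: List[str] = []
    for _ in range(max_tokens):
        tok = model(prompt + "".join(tokens))  # predict next code element
        if tok == "<eol>":                     # stop at end of line
            break
        tokens.append(tok)
    return "".join(tokens)

# Toy "model": emits a fixed completion for the demo.
canned = iter(["a", " + ", "b", "<eol>"])
print(complete_line(lambda prompt: next(canned),
                    ["def add(a, b):"], "    return "))  # a + b
```

The point of the two-part context is that signatures elsewhere in the file constrain what a plausible completion of the current line looks like, even in a language the model was not trained on.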
-
Patent number: 11693630
Abstract: A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
Type: Grant
Filed: November 1, 2022
Date of Patent: July 4, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Colin Bruce Clement, Shuai Lu, Neelakantan Sundaresan, Alexey Svyatkovskiy, Duyu Tang
-
Publication number: 20230048186
Abstract: A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
Type: Application
Filed: November 1, 2022
Publication date: February 16, 2023
Inventors: Colin Bruce Clement, Shuai Lu, Neelakantan Sundaresan, Alexey Svyatkovskiy, Duyu Tang
-
Patent number: 11544474
Abstract: Implementations of the subject matter described herein provide a solution for generating text from structured data. In this solution, the structured data is converted into its representation, where the structured data comprises a plurality of cells and the representation of the structured data comprises a plurality of representations of the plurality of cells. A natural language sentence associated with the structured data may be determined based on the representation of the structured data, thereby implementing the function of converting the structured data into text.
Type: Grant
Filed: December 6, 2018
Date of Patent: January 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nan Duan, Yuanhua Lv, Ming Zhou, Duyu Tang
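The cell-wise representation described above can be illustrated with a toy pipeline: each cell is paired with its column header to form a representation, and a sentence is generated from those pairs. The template generator below is a placeholder for the learned decoder the patent describes, and all names here are invented for the sketch.

```python
# Toy table-to-text: structured data -> per-cell representations ->
# natural language sentence.
from typing import List, Tuple

def encode_cells(headers: List[str], row: List[str]) -> List[Tuple[str, str]]:
    # One (attribute, value) representation per cell.
    return list(zip(headers, row))

def generate_sentence(subject: str, cells: List[Tuple[str, str]]) -> str:
    # Template-based stand-in for a learned sentence decoder.
    parts = [f"{attr.lower()} {val}" for attr, val in cells]
    return f"{subject} has " + ", ".join(parts) + "."

headers = ["Population", "Area (km2)"]
row = ["67 million", "643,801"]
print(generate_sentence("France", encode_cells(headers, row)))
# France has population 67 million, area (km2) 643,801.
```

In the patented solution the per-cell representations are learned embeddings that capture row and column structure, and the sentence is produced by a neural generator rather than a template.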
-
Patent number: 11513774
Abstract: A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
Type: Grant
Filed: January 3, 2021
Date of Patent: November 29, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Colin Bruce Clement, Shuai Lu, Neelakantan Sundaresan, Alexey Svyatkovskiy, Duyu Tang
-
Publication number: 20220214863
Abstract: A neural transformer model with attention is trained to predict candidates to complete a line of source code with a zero-inference capability. The model is trained on an unsupervised training dataset that includes features from source code written in multiple programming languages. The features include a file-level context and a local context, where the file-level context includes a global context, a class context, a function context, and/or a method context for each class, function and/or method of the source code programs used in the training dataset. The local context includes method bodies, function bodies, and/or stand-alone code of main method routines. From these features, the model is able to learn to predict an ordered sequence of code elements that complete a line of source code in a programming language seen and not seen during training.
Type: Application
Filed: January 3, 2021
Publication date: July 7, 2022
Inventors: Colin Bruce Clement, Shuai Lu, Neelakantan Sundaresan, Alexey Svyatkovskiy, Duyu Tang
-
Patent number: 11327971
Abstract: In embodiments of the present disclosure, there is provided an assertion-based question answering approach. After a question and the related passage are obtained, an assertion answer to the question is determined based on the content of the passage; the assertion answer has a predetermined structure and represents a complete semantic meaning. Then, the assertion answer to the question may be output to the user. In the embodiments of the present disclosure, the question and the relevant passage are used as input, and a semi-structured assertion answer is output. The assertion answer according to embodiments of the present disclosure can provide richer semantic content than the traditional short answer and a more concise expression than the traditional long answer, thereby ensuring the accuracy of the answer while improving the user experience.
Type: Grant
Filed: December 6, 2018
Date of Patent: May 10, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Duyu Tang, Nan Duan, Ming Zhou, Wendi Wang, Daxin Jiang, Shujie Liu, Linjun Shou, Ming Gong
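To make the "semi-structured assertion answer" format concrete, the sketch below returns a subject-predicate-object triple instead of a short span or a full sentence. The keyword-overlap retrieval and naive triple split are stand-ins invented for this sketch; the patented approach determines the assertion with a learned reader over the passage.

```python
# Toy assertion-based QA: pick the passage sentence that best overlaps
# the question, and return it as a (subject, predicate, object) triple,
# i.e. an answer with a predetermined structure and complete semantics.
from typing import Optional, Tuple

def assertion_answer(question: str, passage: str) -> Optional[Tuple[str, str, str]]:
    q_words = set(question.lower().rstrip("?").split()) - {"who", "what", "when", "where", "did"}
    best, best_overlap = None, 0
    for sentence in passage.split(". "):
        words = sentence.rstrip(".").split()
        overlap = len(q_words & {w.lower() for w in words})
        if overlap > best_overlap and len(words) >= 3:
            # Naive triple split: first word, second word, remainder.
            best = (words[0], words[1], " ".join(words[2:]))
            best_overlap = overlap
    return best

passage = "Edison invented the phonograph. He was born in Ohio."
print(assertion_answer("Who invented the phonograph?", passage))
# ('Edison', 'invented', 'the phonograph')
```

The triple is richer than the short answer "Edison" yet more concise than returning the whole supporting sentence, which is the trade-off the abstract describes.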
-
Publication number: 20210406475
Abstract: In embodiments of the present disclosure, there is provided an approach for fact checking based on semantic graphs. According to embodiments of the present disclosure, after obtaining a text to be fact checked, a plurality of evidence sentences related to the text are retrieved from an evidence database. Then, semantic graphs of the text and the evidence sentences are constructed based on semantic analysis, and the veracity of a statement in the text can be determined based on the semantic graphs. Embodiments of the present disclosure propose a graph-based reasoning approach for fact checking and use the constructed semantic graphs to facilitate verification of the truthfulness of the text, thereby improving the accuracy of fact checking.
Type: Application
Filed: June 30, 2020
Publication date: December 30, 2021
Inventors: Duyu Tang, Nan Duan, Ming Zhou, Jiun-Hung Chen, Pengcheng Wang, Ying Qiao
-
Publication number: 20210319344
Abstract: In accordance with implementations of the present disclosure, there is provided a solution for answering a question in a natural language conversation. In this solution, a question in a natural language conversation is received and converted into a logical representation corresponding to the semantics of the question, the logical representation including a first sequence of actions executable on a knowledge base. An answer to the question is derived by executing the first sequence of actions on the knowledge base. This solution can accurately understand the semantics of a question in a multi-round conversation and convert it into a sequence of actions executable on a large-scale knowledge base. In this way, the solution can effectively improve the accuracy and efficiency of a natural language question answering system.
Type: Application
Filed: June 20, 2019
Publication date: October 14, 2021
Inventors: Duyu Tang, Nan Duan, Ming Zhou
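The question-to-actions pipeline described above can be sketched in miniature: a question is parsed into a logical representation (a sequence of actions), and the answer is derived by executing those actions against a knowledge base. The rule-based parser, the tiny dictionary knowledge base, and the action names are all invented for this sketch; the filing describes a learned semantic parser that also tracks context across conversation rounds.

```python
# Toy conversational QA: parse a question into a sequence of actions,
# then execute them on a knowledge base to derive the answer.
from typing import List, Tuple

KB = {
    ("france", "capital"): "Paris",
    ("paris", "population"): "2.1 million",
}

def parse(question: str) -> List[Tuple[str, str]]:
    # Logical representation: [("find", entity), ("relate", relation)].
    words = question.lower().rstrip("?").split()
    relation = words[words.index("the") + 1]  # word after "the"
    entity = words[-1]                        # last word
    return [("find", entity), ("relate", relation)]

def execute(actions: List[Tuple[str, str]]) -> str:
    current = None
    for op, arg in actions:
        if op == "find":
            current = arg          # locate the entity
        elif op == "relate":
            current = KB[(current, arg)]  # follow the relation
    return current

print(execute(parse("What is the capital of France?")))  # Paris
```

In a multi-round conversation, a follow-up such as "And its population?" would resolve "its" to the current entity before building the next action sequence; that context tracking is omitted here.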
-
Publication number: 20200356729
Abstract: Implementations of the subject matter described herein provide a solution for generating text from structured data. In this solution, the structured data is converted into its representation, where the structured data comprises a plurality of cells and the representation of the structured data comprises a plurality of representations of the plurality of cells. A natural language sentence associated with the structured data may be determined based on the representation of the structured data, thereby implementing the function of converting the structured data into text.
Type: Application
Filed: December 6, 2018
Publication date: November 12, 2020
Inventors: Nan Duan, Yuanhua Lv, Ming Zhou, Duyu Tang
-
Publication number: 20200356556
Abstract: In embodiments of the present disclosure, there is provided an assertion-based question answering approach. After a question and the related passage are obtained, an assertion answer to the question is determined based on the content of the passage; the assertion answer has a predetermined structure and represents a complete semantic meaning. Then, the assertion answer to the question may be output to the user. In the embodiments of the present disclosure, the question and the relevant passage are used as input, and a semi-structured assertion answer is output. The assertion answer according to embodiments of the present disclosure can provide richer semantic content than the traditional short answer and a more concise expression than the traditional long answer, thereby ensuring the accuracy of the answer while improving the user experience.
Type: Application
Filed: December 6, 2018
Publication date: November 12, 2020
Inventors: Duyu Tang, Nan Duan, Ming Zhou, Wendi Wang, Daxin Jiang, Shujie Liu, Linjun Shou, Ming Gong