Patents by Inventor Chenguang Zhu
Chenguang Zhu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250165803
Abstract: A federated artificial intelligence system executes machine learning models of a model chain in order of increasing computational complexity to determine a lowest computational complexity model to use to serve a quality response to a user request. A first machine learning model of the model chain performs an inference operation to produce first output based on the user request. A scoring machine learning model determines that the first output fails to meet a threshold. Based on such determination, a second machine learning model of the model chain performs a second inference operation to produce second output based on the user request, in which the second machine learning model has a higher computational complexity than the first machine learning model. The scoring machine learning model determines that the second output meets the threshold, and, based on such determination, the second output is transmitted in response to the user request.
Type: Application
Filed: November 21, 2023
Publication date: May 22, 2025
Inventors: Xuedong Huang, Chenguang Zhu
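The abstract describes a cascade: try models from cheapest to most complex and stop at the first output the scorer accepts. A minimal Python sketch of that control flow, with hypothetical stub models and a stand-in scorer (nothing here comes from the patent's implementation):

```python
# Cascaded model chain: try models cheapest-first; a scoring model decides
# when an output is good enough to return.
from typing import Callable, List

def serve_request(
    request: str,
    model_chain: List[Callable[[str], str]],   # ordered cheapest -> most complex
    score: Callable[[str, str], float],        # scorer: (request, output) -> quality
    threshold: float,
) -> str:
    """Return the first output whose quality score meets the threshold."""
    output = ""
    for model in model_chain:
        output = model(request)                # inference with the current model
        if score(request, output) >= threshold:
            return output                      # cheapest acceptable answer wins
    return output                              # fall back to the largest model

# Toy usage with stubs standing in for small/large LLMs and a learned scorer.
small = lambda req: "short answer"
large = lambda req: "detailed, higher-quality answer"
quality = lambda req, out: len(out) / 30.0
print(serve_request("explain X", [small, large], quality, threshold=0.9))
```

The design choice the abstract highlights is that the loop returns the cheapest acceptable answer rather than always invoking the largest model.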
-
Publication number: 20250111147
Abstract: Systems and methods are provided for implementing automatic prompt optimization using textual gradients. In various embodiments, a feedback prompt, input into a large language model (“LLM”), is used to generate textual gradients that criticize a current prompt. The feedback prompt includes the current prompt and predictions that are incorrect compared with corresponding labels associated with minibatch data processed by the LLM using the current prompt. The textual gradients and current prompt are used in an editing prompt to the LLM to obtain a set of optimized prompts, which may be expanded using a paraphrasing prompt that is input into the LLM to generate a set of paraphrased prompts. A selection algorithm is used to select one or more optimized prompts from the set of optimized prompts and/or the set of paraphrased prompts, and the process is repeated with the selected one or more optimized prompts replacing the current prompt.
Type: Application
Filed: September 29, 2023
Publication date: April 3, 2025
Applicant: Microsoft Technology Licensing, LLC
Inventors: Reid Allen PRYZANT, Jerry Zheng LI, Dan ITER, Yin Tat LEE, Chenguang ZHU, Nanshan ZENG, Anup Shirgaonkar
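The optimization loop in this abstract maps naturally to code. The sketch below assumes a generic `llm(prompt) -> str` completion function; the feedback, editing, and paraphrasing prompt wordings and the toy `llm` stub are illustrative assumptions, not the patent's text:

```python
import random

def optimize_prompt(llm, prompt, minibatches, labels_for, rounds=3, beam=4):
    for _ in range(rounds):
        batch = random.choice(minibatches)
        # Collect predictions that disagree with their labels.
        preds = [(x, llm(prompt + "\nInput: " + x), labels_for[x]) for x in batch]
        errors = [(x, p, y) for x, p, y in preds if p != y]
        if not errors:
            break
        # Feedback prompt -> textual "gradient" criticizing the current prompt.
        gradient = llm(
            "Prompt:\n" + prompt + "\nWrong predictions:\n" +
            "\n".join(f"{x} -> {p} (expected {y})" for x, p, y in errors) +
            "\nExplain what is wrong with this prompt.")
        # Editing prompt applies the gradient to produce candidate prompts.
        candidates = [llm("Rewrite the prompt to fix this critique.\n"
                          f"Prompt:\n{prompt}\nCritique:\n{gradient}")
                      for _ in range(beam)]
        # Paraphrasing prompt expands the candidate pool.
        candidates += [llm("Paraphrase, keeping the meaning:\n" + c)
                       for c in candidates]
        # Selection: keep the candidate with the fewest minibatch errors.
        def n_errors(p):
            return sum(llm(p + "\nInput: " + x) != labels_for[x] for x in batch)
        prompt = min(candidates, key=n_errors)
    return prompt

# Toy stub so the sketch executes end to end.
llm = lambda p: "yes"
print(optimize_prompt(llm, "Classify:", [["a", "b"]], {"a": "yes", "b": "no"}))
```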
-
Publication number: 20250111133
Abstract: Generally discussed herein are devices, systems, and methods for transcript summarization. A method can include receiving, from a user through a user interface, a segmentation granularity value indicating a number of events in the transcript to be included in a summary, extracting, by a ranker model and from the transcript, a number of hints equal to the number of events, generating, by a summarizer model that includes a re-trained language model, respective summaries, one for each event, of a portion of the transcript corresponding to the event, and providing the respective summaries as an overall summary of the transcript.
Type: Application
Filed: March 25, 2022
Publication date: April 3, 2025
Inventors: Chenguang ZHU, Yang LIU, Nanshan ZENG, Xuedong HUANG, Ming ZHONG, Yuantao WANG, Wei XIONG
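As a rough illustration of the granularity-controlled pipeline, the sketch below stubs the ranker and summarizer with trivial heuristics; the stub `ranker`, `summarizer`, and the hint-as-event-start convention are assumptions, since the patent's models are learned:

```python
def summarize_transcript(utterances, k, ranker, summarizer):
    """k = user-chosen number of events; ranker returns indices of k 'hint'
    utterances; summarizer condenses each event's span of the transcript."""
    hints = sorted(ranker(utterances, k))          # k event anchors
    bounds = hints + [len(utterances)]             # each hint starts an event
    summaries = [summarizer(utterances[bounds[i]:bounds[i + 1]])
                 for i in range(k)]
    return "\n".join(summaries)                    # overall summary

# Toy stubs: rank by utterance length, summarize by truncation.
ranker = lambda utts, k: sorted(range(len(utts)), key=lambda i: -len(utts[i]))[:k]
summarizer = lambda span: span[0][:40] + "..."
utts = ["we opened with the quarterly numbers and reviewed revenue",
        "ok", "next, hiring plans for the new team were discussed", "thanks"]
print(summarize_transcript(utts, 2, ranker, summarizer))
```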
-
Patent number: 12243513
Abstract: A speech module is jointly trained with a knowledge module by transforming a first knowledge graph into an acoustic knowledge graph. The knowledge module is trained on the acoustic knowledge graph. Then, the knowledge module is integrated with the speech module to generate an integrated knowledge-speech module. In some instances, the speech module included in the integrated knowledge-speech module is aligned with a language module to generate an optimized speech model configured to leverage acoustic information and acoustic-based knowledge information, along with language information.
Type: Grant
Filed: May 18, 2021
Date of Patent: March 4, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Nanshan Zeng
-
Publication number: 20250061277
Abstract: The disclosure herein describes using a deep learning model to identify topic segments of a communication transcript. A communication transcript including a set of utterances is obtained. The set of utterances is divided into a plurality of utterance windows, wherein each utterance window of the plurality of utterance windows includes a different subset of utterances of the set of utterances, and wherein each utterance of the set of utterances is included in at least one utterance window of the plurality of utterance windows. For each utterance window of the plurality of utterance windows, each utterance in the utterance window is classified as a topic boundary or a non-boundary using a deep learning model. Topic segments of the communication transcript are identified based on utterances of the set of utterances that are classified as topic boundaries. A communication transcript summary is generated using the communication transcript and the identified topic segments.
Type: Application
Filed: December 15, 2021
Publication date: February 20, 2025
Inventors: Chenguang ZHU, Yang LIU, David Peace HUNG, Nanshan ZENG
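A minimal sketch of window-based boundary classification, assuming non-overlapping windows and a keyword stub in place of the trained deep learning model (the patent also covers windows that overlap, so each utterance may be classified more than once):

```python
def segment(utterances, window_size, classify_window):
    votes = [0] * len(utterances)
    # Slide a window so every utterance appears in at least one window.
    for start in range(0, len(utterances), window_size):
        window = utterances[start:start + window_size]
        labels = classify_window(window)       # 1 = topic boundary, 0 = not
        for offset, label in enumerate(labels):
            votes[start + offset] |= label
    # Boundaries cut the transcript into topic segments.
    boundaries = [i for i, v in enumerate(votes) if v] + [len(utterances)]
    segments, prev = [], 0
    for b in boundaries:
        if b > prev:
            segments.append(utterances[prev:b])
        prev = b
    return segments

# Toy classifier: an utterance starting with "so," signals a new topic.
clf = lambda win: [1 if u.lower().startswith("so,") else 0 for u in win]
utts = ["hello everyone", "so, budget first", "numbers look fine",
        "so, next topic is hiring", "two open roles"]
print(segment(utts, 3, clf))
```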
-
Publication number: 20240346254
Abstract: The techniques described herein enhance the operations of natural language generation systems through training and/or augmentation by a large language model. In a first example, the large language model can execute training operations by processing a training dataset to produce a natural language output. The natural language generation system can analyze the training dataset and the natural language output to generate a natural language output mimicking the output of the large language model. The large language model can then evaluate the output of the natural language generation system to iteratively adjust and improve the quality of natural language outputs. In a second example, the large language model can augment a small language model in executing natural language tasks. This is accomplished by retrieving external information using the large language model to generate an augmentation input that provides context and a language framework to the small language model to enhance overall outputs.
Type: Application
Filed: April 12, 2023
Publication date: October 17, 2024
Inventors: Yang LIU, Yichong XU, Dan ITER, Chenguang ZHU, Nanshan ZENG, Shuohang WANG, Hiteshi SHARMA
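A toy rendering of the first example: a teacher LLM produces reference outputs, the smaller system is trained to mimic them, and the teacher then grades the result. All names, the memorizing "student", and the training stub are hypothetical:

```python
import statistics

def teacher_grade(teacher, x, y):
    # Stand-in for a grading prompt sent to the teacher LLM.
    return 1.0 if teacher(x) == y else 0.0

def distill(teacher, student, train_fn, dataset, rounds=3):
    for _ in range(rounds):
        references = [(x, teacher(x)) for x in dataset]   # teacher outputs
        train_fn(student, references)                     # student mimics them
        mean_score = statistics.mean(
            teacher_grade(teacher, x, student(x)) for x in dataset)
        if mean_score > 0.9:                              # teacher signs off
            break
    return student

class EchoStudent:                      # trivial "small model": memorization
    def __init__(self): self.memory = {}
    def __call__(self, x): return self.memory.get(x, "")

teacher = lambda x: x.upper()           # stand-in for the large LLM
train = lambda student, refs: student.memory.update(dict(refs))
student = distill(teacher, EchoStudent(), train, ["hi", "bye"])
print(student("hi"))                    # -> HI
```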
-
Publication number: 20240340193
Abstract: Systems and methods are provided for processing electronic content and generating corresponding output. Electronic content is received from a meeting, including recognizable speech content. This content is then summarized into real-time summary output by processing and encoding the meeting content while selectively alternating between unidirectional attention and bidirectional attention applied to the meeting content.
Type: Application
Filed: April 10, 2023
Publication date: October 10, 2024
Inventors: Chenguang ZHU, Xuedong HUANG, Zong Zong YUAN, Wei XIONG, Nanshan ZENG, Yuantao WANG
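One way to picture "selectively alternating" attention is as a mask that is causal (unidirectional) everywhere except inside designated chunks, where attention is bidirectional. The numpy sketch below is an illustrative assumption about how such a mask could be built, not the patent's encoder:

```python
import numpy as np

def alternating_mask(n_tokens, chunk, bidirectional_chunks):
    """Causal everywhere, except inside chunks flagged as bidirectional,
    where tokens may also attend to later tokens of the same chunk."""
    mask = np.tril(np.ones((n_tokens, n_tokens), dtype=bool))  # causal base
    for c in bidirectional_chunks:
        lo, hi = c * chunk, min((c + 1) * chunk, n_tokens)
        mask[lo:hi, lo:hi] = True          # full attention within the chunk
    return mask

# 8 tokens, chunk size 4; the second chunk gets bidirectional attention.
print(alternating_mask(8, 4, bidirectional_chunks=[1]).astype(int))
```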
-
Publication number: 20240330165
Abstract: Systems and methods are provided for implementing quality assurance for digital technologies using language model (“LM”)-based artificial intelligence (“AI”) and/or machine learning (“ML”) systems. In various embodiments, a first prompt is provided to an LM actor or attacker to cause the LM actor or attacker to generate interaction content for interacting with test software. Responses from the test software are then evaluated by an LM evaluator to produce evaluation results. In some examples, a second prompt is generated that includes the responses from the test software along with the evaluation criteria for the test software. When the second prompt is provided to the LM evaluator, the LM evaluator generates the evaluation results.
Type: Application
Filed: April 3, 2023
Publication date: October 3, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Reid Allen PRYZANT, Yin Tat LEE, Chenguang ZHU, Sebastien BUBECK, Ronen ELDAN, Yuwei FANG, Dan ITER, Yichong XU, Yuanzhi LI, Yi ZHANG, Lijuan QIN, Nanshan ZENG, Xuedong HUANG
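A compact sketch of one actor/evaluator round, with a stub `llm` and stub software under test; the prompt texts are illustrative, not the patent's wording:

```python
def qa_round(llm, software_under_test, task_description, criteria):
    # First prompt: the LM "actor" produces interaction content.
    probe = llm(f"You are testing this software: {task_description}\n"
                "Produce one challenging input for it.")
    response = software_under_test(probe)
    # Second prompt: the LM "evaluator" judges the response against criteria.
    verdict = llm(f"Evaluation criteria: {criteria}\n"
                  f"Input: {probe}\nSoftware response: {response}\n"
                  "Does the response meet the criteria? Answer PASS or FAIL.")
    return probe, response, verdict

# Toy stubs so the round executes.
llm = lambda p: "PASS" if "criteria" in p.lower() else "what is 2+2?"
sut = lambda q: "4"
print(qa_round(llm, sut, "a calculator chatbot", "answers must be correct"))
```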
-
Patent number: 11990132
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Grant
Filed: February 28, 2023
Date of Patent: May 21, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Yu Shi, William Isaac Hinthorn, Nanshan Zeng, Ruochen Xu, Liyang Lu, Xuedong Huang
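For illustration, a regex-based sketch of the kinds of edits described (filler removal, punctuation repair, capitalization). The patent's post-processing is model-driven, so these heuristics are stand-ins:

```python
import re

FILLERS = re.compile(r"\b(um+|uh+|you know|i mean)\b[,]?\s*", re.IGNORECASE)

def post_process(transcript: str) -> str:
    text = FILLERS.sub("", transcript)               # drop disfluencies
    text = re.sub(r"\s+([.,?!])", r"\1", text)       # fix spaced punctuation
    text = re.sub(r"\s{2,}", " ", text).strip()      # collapse whitespace
    # Capitalize sentence starts mangled by the ASR output.
    return re.sub(r"(^|[.?!]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)

raw = "um, so the q3 numbers , uh you know, look strong . we should hire"
print(post_process(raw))  # -> "So the q3 numbers, look strong. We should hire"
```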
-
Patent number: 11875787
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-semantically-conditioned generative model that has been pretrained based at least on a first training data set having unlabeled training examples and semantically conditioned based at least on a second training data set having dialog act-labeled utterances. The method or technique can also include inputting dialog acts into the semantically-conditioned generative model and obtaining synthetic utterances that are output by the semantically-conditioned generative model. The method or technique can also include outputting the synthetic utterances.
Type: Grant
Filed: October 11, 2022
Date of Patent: January 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
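Generating labeled synthetic utterances from dialog acts reduces to a simple loop once the conditioned generator exists. In the sketch below a template lambda stands in for the tuned model; the names and dialog-act syntax are hypothetical:

```python
def synthesize_corpus(generator, dialog_acts, n_per_act=2):
    corpus = []
    for act in dialog_acts:
        for _ in range(n_per_act):
            utterance = generator(act)       # model conditioned on the act
            corpus.append({"dialog_act": act, "utterance": utterance})
    return corpus

# Toy template generator standing in for the semantically conditioned model.
gen = lambda act: f"sure, {act.split('(')[0]} coming right up"
acts = ["inform(price=cheap)", "request(area)"]
for example in synthesize_corpus(gen, acts, n_per_act=1):
    print(example)
```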
-
Publication number: 20230376789
Abstract: Systems and techniques are provided for facilitating the automatic discovery and application of rules for refining the training of pretrained models, such as natural language processing models. Weak symbolic rules are automatically generated from the identification and processing of sparse labeled data by the pretrained model(s). Once the weak rules are generated, they are integrated into the model(s) via an attention mechanism to supplement the direct training performed by the sparse labeled data and to thereby boost a supervision signal generated by the sparse labeled data on any newly processed unlabeled data in the intended runtime environment(s) where the models are applied.
Type: Application
Filed: June 10, 2022
Publication date: November 23, 2023
Inventors: Reid Allen PRYZANT, Chenguang ZHU, Ziyi YANG, Yichong XU, Nanshan ZENG
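A toy version of rule/model combination: keyword rules vote on a label, and an attention-like weight decides how much the votes override the model logits. The weighting scheme and example rules are illustrative assumptions, not the patent's mechanism:

```python
import numpy as np

def rule_votes(text, rules):
    """Each rule is (keyword, label); returns a vote vector over 2 classes."""
    votes = np.zeros(2)
    for keyword, label in rules:
        if keyword in text:
            votes[label] += 1.0
    return votes

def predict(text, model_logits, rules):
    votes = rule_votes(text, rules)
    # Soft attention on the rule channel grows with rule agreement;
    # with no firing rules, alpha = 0 and the model decides alone.
    alpha = votes.max() / (votes.max() + 1.0)
    combined = (1 - alpha) * np.asarray(model_logits) + alpha * votes
    return int(np.argmax(combined))

rules = [("refund", 1), ("thanks", 0)]   # weak rules mined from sparse labels
print(predict("i want a refund now", model_logits=[0.6, 0.4], rules=rules))  # -> 1
```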
-
Patent number: 11798529
Abstract: A language module is jointly trained with a knowledge module for natural language understanding by aligning a first knowledge graph with a second knowledge graph. The knowledge module is trained on the aligned knowledge graphs. Then, the knowledge module is integrated with the language module to generate an integrated knowledge-language module.
Type: Grant
Filed: May 18, 2021
Date of Patent: October 24, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Nanshan Zeng
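The alignment step can be pictured as renaming shared entities so the two graphs form one training graph for the knowledge module. A minimal sketch, assuming an entity mapping is already known (the triples, names, and simple union/rename policy are illustrative):

```python
def align_graphs(kg_a, kg_b, entity_map):
    """kg_*: sets of (head, relation, tail) triples.
    entity_map: kg_b entity name -> canonical kg_a entity name."""
    rename = lambda e: entity_map.get(e, e)
    merged = set(kg_a)
    merged |= {(rename(h), r, rename(t)) for h, r, t in kg_b}
    return merged

kg_a = {("aspirin", "treats", "headache")}
kg_b = {("acetylsalicylic_acid", "interacts_with", "warfarin")}
aligned = align_graphs(kg_a, kg_b, {"acetylsalicylic_acid": "aspirin"})
print(sorted(aligned))   # one graph over a shared entity vocabulary
```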
-
Publication number: 20230229960
Abstract: Some disclosed systems are configured to obtain a knowledge module configured to receive one or more knowledge inputs corresponding to one or more different modalities and generate a set of knowledge embeddings to be integrated with a set of multi-modal embeddings generated by a multi-modal main model. The systems receive a knowledge input at the knowledge module, identify a knowledge type associated with the knowledge input, and extract a knowledge unit from the knowledge input. The systems select a representation model that corresponds to the knowledge type and select a grounding type configured to ground the knowledge unit into the representation model. The systems then ground the knowledge unit into the representation model according to the grounding type.
Type: Application
Filed: January 19, 2022
Publication date: July 20, 2023
Inventors: Chenguang ZHU, Lu YUAN, Yao QIAN, Yu SHI, Nanshan ZENG, Xuedong David HUANG
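The type-to-representation-to-grounding selection reads like a dispatch table. A sketch under that assumption; the registry contents and grounding names are invented for illustration:

```python
REGISTRY = {
    # knowledge type -> (representation model, grounding type)
    "triple": (lambda u: f"kg-embed({u})",   "graph_grounding"),
    "text":   (lambda u: f"text-embed({u})", "span_grounding"),
    "image":  (lambda u: f"vis-embed({u})",  "region_grounding"),
}

def ground(knowledge_input, detect_type, extract_unit):
    ktype = detect_type(knowledge_input)        # identify the knowledge type
    unit = extract_unit(knowledge_input)        # extract the knowledge unit
    represent, grounding = REGISTRY[ktype]      # select model + grounding type
    return {"embedding": represent(unit), "grounding": grounding}

detect = lambda k: k["type"]
extract = lambda k: k["payload"]
print(ground({"type": "triple", "payload": ("aspirin", "treats", "headache")},
             detect, extract))
```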
-
Publication number: 20230205985
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Application
Filed: February 28, 2023
Publication date: June 29, 2023
Inventors: Chenguang ZHU, Yu SHI, William Isaac HINTHORN, Nanshan ZENG, Ruochen XU, Liyang LU, Xuedong HUANG
-
Patent number: 11615799
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Grant
Filed: May 29, 2020
Date of Patent: March 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Yu Shi, William Isaac Hinthorn, Nanshan Zeng, Ruochen Xu, Liyang Lu, Xuedong Huang
-
Publication number: 20230076095
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-adapted generative model that has been tuned using one or more task-specific seed examples. The method or technique can also include inputting dialog acts into the task-adapted generative model and obtaining synthetic utterances that are output by the task-adapted generative model. The method or technique can also include populating a synthetic training corpus with synthetic training examples that include the synthetic utterances. The synthetic training corpus may be suitable for training a natural language understanding model.
Type: Application
Filed: October 11, 2022
Publication date: March 9, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
-
Patent number: 11508360
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-adapted generative model that has been tuned using one or more task-specific seed examples. The method or technique can also include inputting dialog acts into the task-adapted generative model and obtaining synthetic utterances that are output by the task-adapted generative model. The method or technique can also include populating a synthetic training corpus with synthetic training examples that include the synthetic utterances. The synthetic training corpus may be suitable for training a natural language understanding model.
Type: Grant
Filed: September 15, 2020
Date of Patent: November 22, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
-
Patent number: 11403304
Abstract: According to one or more embodiments, operations may include gathering a set of machine learning (ML) projects from one or more repositories of ML projects based on filtering criteria. The operations may also include ensuring executability of ML pipelines in the set of ML projects. In addition, the operations may include identifying irrelevant portions of the ML pipelines in the set of ML projects. Moreover, the operations may include generating quality features for the set of ML projects. In addition, the operations may include generating diversity features for the set of ML projects. Moreover, the operations may include selecting a subset of ML projects from the set of ML projects based on the quality features and the diversity features. In addition, the operations may include storing the subset of ML projects in a corpus of ML projects that may be adapted for use in new ML projects.
Type: Grant
Filed: September 2, 2020
Date of Patent: August 2, 2022
Assignee: FUJITSU LIMITED
Inventors: Ripon K. Saha, Mukul R. Prasad, Chenguang Zhu
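Selecting a subset on quality and diversity features can be done greedily: each pick trades a project's quality score against its feature distance from projects already chosen. An illustrative sketch (the scores, features, distance, and trade-off weight are assumptions, not the patent's algorithm):

```python
def select_projects(projects, quality, features, k, trade_off=0.5):
    chosen = []
    while len(chosen) < k and len(chosen) < len(projects):
        def gain(p):
            if p in chosen:
                return float("-inf")
            # Diversity = distance to the nearest already-chosen project.
            diversity = min((distance(features[p], features[c]) for c in chosen),
                            default=1.0)
            return (1 - trade_off) * quality[p] + trade_off * diversity
        chosen.append(max(projects, key=gain))
    return chosen

# Hamming-style distance over binary feature vectors.
distance = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)

projects = ["p1", "p2", "p3"]
quality = {"p1": 0.9, "p2": 0.8, "p3": 0.3}
features = {"p1": (1, 0, 1), "p2": (1, 0, 1), "p3": (0, 1, 0)}
print(select_projects(projects, quality, features, k=2))  # -> ['p1', 'p3']
```

Note that the near-duplicate p2 loses to the lower-quality but more diverse p3, which is the behavior the quality/diversity trade-off is meant to produce.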
-
Publication number: 20220230625
Abstract: A language module is jointly trained with a knowledge module for natural language understanding by aligning a first knowledge graph with a second knowledge graph. The knowledge module is trained on the aligned knowledge graphs. Then, the knowledge module is integrated with the language module to generate an integrated knowledge-language module.
Type: Application
Filed: May 18, 2021
Publication date: July 21, 2022
Inventors: Chenguang ZHU, Nanshan ZENG
-
Publication number: 20220230628
Abstract: A system is provided for generating an optimized speech model by training a knowledge module on a knowledge graph. A language module is trained on unlabeled text data and a speech module is trained on unlabeled acoustic data. The knowledge module is integrated with the language module to perform semantic analysis using knowledge-graph based information. The speech module is then aligned to the language module of the integrated knowledge-language module. The speech module is then configured as an optimized speech model configured to leverage acoustic and language information in natural language processing tasks.
Type: Application
Filed: May 18, 2021
Publication date: July 21, 2022
Inventors: Chenguang ZHU, Nanshan ZENG
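The alignment stage can be imagined as fitting a projection that pulls speech embeddings toward the knowledge-language embeddings of the same utterances. A numpy sketch under that assumption; the dimensions, mean-squared-error objective, and plain gradient descent are illustrative, not the patent's training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
speech_emb = rng.normal(size=(4, 8))     # speech module outputs (4 utterances)
language_emb = rng.normal(size=(4, 8))   # integrated knowledge-language outputs

W = np.eye(8)                            # learnable alignment projection
for step in range(200):
    aligned = speech_emb @ W
    grad = 2 * speech_emb.T @ (aligned - language_emb) / len(speech_emb)
    W -= 0.05 * grad                     # gradient step on the MSE loss
loss = np.mean((speech_emb @ W - language_emb) ** 2)
print(f"alignment loss after training: {loss:.4f}")
```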