Patents by Inventor Nanshan Zeng
Nanshan Zeng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11990132
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Grant
Filed: February 28, 2023
Date of Patent: May 21, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Yu Shi, William Isaac Hinthorn, Nanshan Zeng, Ruochen Xu, Liyang Lu, Xuedong Huang
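The post-processing stage this abstract describes can be sketched as a small text-cleanup pass. The filler list, regex rules, and function name below are illustrative stand-ins, not the patented method:

```python
import re

# Hypothetical filler words that appear in both the audio and the raw
# transcription and are omitted during post-processing.
FILLERS = {"um", "uh", "you know"}

def post_process(raw_transcript: str) -> str:
    """Clean a raw ASR transcript before generating a summary or template."""
    text = raw_transcript.strip()
    # Remove filler words (case-insensitive, whole words only).
    for filler in FILLERS:
        text = re.sub(rf"\b{re.escape(filler)}\b[,]?\s*", "", text,
                      flags=re.IGNORECASE)
    # Collapse repeated whitespace left behind by the deletions.
    text = re.sub(r"\s+", " ", text).strip()
    # Capitalize sentence starts and ensure terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]
    sentences = [s[0].upper() + s[1:] for s in sentences]
    result = " ".join(sentences)
    if result and result[-1] not in ".?!":
        result += "."
    return result
```

A real system would also repair grammar and apply formatting templates; this sketch only shows the shape of the modify-then-emit pipeline.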
-
Publication number: 20240062018
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model and various training data are obtained, and the training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model. The trained model performs both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. Attention is applied discretely to segmented chunks of encoded data during processing to improve the efficiency of applying attention by the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
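One ingredient named in this abstract, replaced token detection, can be illustrated with a small data-generation routine: a corruptor swaps some tokens, and the per-token target says whether each position was replaced. The function name and probabilities are invented for illustration:

```python
import random

def make_rtd_example(tokens, vocab, replace_prob=0.3, seed=0):
    """Return (corrupted_tokens, labels) where labels[i] == 1 iff token i
    was replaced, as in a replaced-token-detection objective."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            # Draw a replacement that differs from the original token.
            choices = [v for v in vocab if v != tok]
            corrupted.append(rng.choice(choices))
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```

The discriminator is then trained to recover `labels` from `corrupted`; corrupted span reconstruction and disentangled attention are separate components not shown here.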
-
Publication number: 20240062020
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model and various training data are obtained, and the training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model. The trained model performs both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. Attention is applied discretely to segmented chunks of encoded data during processing to improve the efficiency of applying attention by the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
-
Patent number: 11875796
Abstract: A computer implemented method includes receiving information streams on a meeting server from a set of multiple distributed devices included in a meeting, receiving audio signals representative of speech by at least two users in at least two of the information streams, receiving at least one video signal of at least one user in the information streams, associating a specific user with speech in the received audio signals as a function of the received audio and video signals, and generating a transcript of the meeting with an indication of the specific user associated with the speech.
Type: Grant
Filed: April 30, 2019
Date of Patent: January 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lijuan Qin, Nanshan Zeng, Dimitrios Basile Dimitriadis, Zhuo Chen, Andreas Stolcke, Takuya Yoshioka, William Isaac Hinthorn, Xuedong Huang
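The "associating a specific user with speech as a function of the received audio and video signals" step can be sketched as a score-fusion decision. The per-user scores, user names, and fusion weight below are invented for illustration:

```python
def attribute_speaker(audio_scores, video_scores, audio_weight=0.6):
    """Combine per-user audio (voice) and video (face/lip) match scores
    and return the user with the highest combined score."""
    combined = {}
    for user in audio_scores:
        combined[user] = (audio_weight * audio_scores[user]
                          + (1 - audio_weight) * video_scores.get(user, 0.0))
    return max(combined, key=combined.get)
```

The winning user is then attached to that speech segment when the meeting transcript is generated.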
-
Patent number: 11875787
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-semantically-conditioned generative model that has been pretrained based at least on a first training data set having unlabeled training examples and semantically conditioned based at least on a second training data set having dialog act-labeled utterances. The method or technique can also include inputting dialog acts into the semantically-conditioned generative model and obtaining synthetic utterances that are output by the semantically-conditioned generative model. The method or technique can also include outputting the synthetic utterances.
Type: Grant
Filed: October 11, 2022
Date of Patent: January 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
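The dialog-acts-in, synthetic-utterances-out data flow this abstract describes can be sketched with a template lookup standing in for the generative model. The acts, slots, and template strings are invented for illustration:

```python
# Toy stand-in for a semantically-conditioned generator: dialog acts map to
# utterance templates. A real model would generate free-form text.
TEMPLATES = {
    ("inform", "weather"): "The weather today is {value}.",
    ("request", "city"): "Which city are you asking about?",
}

def generate_utterance(act, slot, value=None):
    template = TEMPLATES[(act, slot)]
    return template.format(value=value) if value else template

def build_synthetic_corpus(dialog_acts):
    """Pair each dialog act with a synthetic utterance it conditions."""
    return [(act, generate_utterance(*act)) for act in dialog_acts]
```

The resulting (dialog act, utterance) pairs are the synthetic output the method emits.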
-
Publication number: 20230376789
Abstract: Systems and techniques are provided for facilitating the automatic discovery and application of rules for refining the training of pretrained models, such as natural language processing models. Weak symbolic rules are automatically generated from the identification and processing of sparse labeled data by the pretrained model(s). Once the weak rules are generated, they are integrated into the model(s) via an attention mechanism to supplement the direct training performed by the sparse labeled data and to thereby boost a supervision signal generated by the sparse labeled data on any newly processed unlabeled data in the intended runtime environment(s) where the models are applied.
Type: Application
Filed: June 10, 2022
Publication date: November 23, 2023
Inventors: Reid Allen PRYZANT, Chenguang ZHU, Ziyi YANG, Yichong XU, Nanshan ZENG
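The idea of weak symbolic rules supplementing sparse gold labels can be sketched as a labeling fallback. The keyword rules and label names are invented, and a simple priority fallback stands in for the patent's attention-based integration:

```python
# Hypothetical weak rules: each pairs a predicate over the text with a label.
WEAK_RULES = [
    (lambda text: "refund" in text.lower(), "billing"),
    (lambda text: "password" in text.lower(), "account"),
]

def label_example(text, gold_label=None):
    """Prefer a sparse gold label; otherwise fall back to the first
    matching weak rule, marking the provenance of the label."""
    if gold_label is not None:
        return gold_label, "gold"
    for matches, label in WEAK_RULES:
        if matches(text):
            return label, "weak"
    return None, "unlabeled"
```

The weak labels extend supervision onto otherwise-unlabeled runtime data, which is the boost the abstract refers to.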
-
Publication number: 20230368782
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically-aware contextual representations, and quantized latent representations are aligned with phoneme labels to generate phonetically-aware latent speech representations.
Type: Application
Filed: July 3, 2023
Publication date: November 16, 2023
Inventors: Yao QIAN, Yu WU, Kenichi KUMATANI, Shujie LIU, Furu WEI, Nanshan ZENG, Xuedong David HUANG, Chengyi WANG
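The quantizer step named in this abstract can be sketched as nearest-codebook assignment: each latent vector is snapped to the index of its closest codebook entry. The codebook and distance metric below are illustrative stand-ins:

```python
def quantize(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry
    (squared Euclidean distance), producing quantized representations."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sq_dist(vec, codebook[i]))
            for vec in latents]
```

In the described system these quantized indices (and the contextual representations) are then aligned with phoneme labels to make the representations phonetically aware.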
-
Patent number: 11798529
Abstract: A language module is jointly trained with a knowledge module for natural language understanding by aligning a first knowledge graph with a second knowledge graph. The knowledge module is trained on the aligned knowledge graphs. Then, the knowledge module is integrated with the language module to generate an integrated knowledge-language module.
Type: Grant
Filed: May 18, 2021
Date of Patent: October 24, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Nanshan Zeng
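The graph-alignment step this abstract mentions can be sketched over triple sets: entities in the second graph are mapped through an alias table onto the first graph's vocabulary, then the triples are merged. The alias table and graphs are invented for illustration:

```python
def align_graphs(graph_a, graph_b, alias_map):
    """Merge two knowledge graphs (sets of (head, relation, tail) triples),
    rewriting graph_b entities to graph_a names via alias_map."""
    def canon(entity):
        return alias_map.get(entity, entity)
    merged = set(graph_a)
    for head, relation, tail in graph_b:
        merged.add((canon(head), relation, canon(tail)))
    return merged
```

The knowledge module is then trained on this merged graph before being integrated with the language module.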
-
Patent number: 11735171
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically-aware contextual representations, and quantized latent representations are aligned with phoneme labels to generate phonetically-aware latent speech representations.
Type: Grant
Filed: May 14, 2021
Date of Patent: August 22, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yao Qian, Yu Wu, Kenichi Kumatani, Shujie Liu, Furu Wei, Nanshan Zeng, Xuedong David Huang, Chengyi Wang
-
Publication number: 20230229960
Abstract: Some disclosed systems are configured to obtain a knowledge module configured to receive one or more knowledge inputs corresponding to one or more different modalities and generate a set of knowledge embeddings to be integrated with a set of multi-modal embeddings generated by a multi-modal main model. The systems receive a knowledge input at the knowledge module, identify a knowledge type associated with the knowledge input, and extract a knowledge unit from the knowledge input. The systems select a representation model that corresponds to the knowledge type and a grounding type configured to ground the knowledge unit into the representation model. The systems then ground the knowledge unit into the representation model according to the grounding type.
Type: Application
Filed: January 19, 2022
Publication date: July 20, 2023
Inventors: Chenguang ZHU, Lu YUAN, Yao QIAN, Yu SHI, Nanshan ZENG, Xuedong David HUANG
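The select-by-knowledge-type dispatch this abstract describes can be sketched as a type-keyed table of representation models. The type names and model stand-ins are invented for illustration:

```python
# Hypothetical representation models, one per knowledge type. Real models
# would produce embeddings; tagged tuples stand in for them here.
REPRESENTATION_MODELS = {
    "text": lambda unit: ("text-embedding", unit.lower()),
    "image": lambda unit: ("image-embedding", unit),
}

def ground(knowledge_input):
    """Identify the knowledge type, pick the matching representation model,
    and ground the extracted unit into it."""
    kind, unit = knowledge_input["type"], knowledge_input["unit"]
    model = REPRESENTATION_MODELS[kind]   # select model by knowledge type
    return model(unit)                    # ground the unit into the model
```

The grounded result is what gets integrated with the multi-modal main model's embeddings.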
-
Publication number: 20230205985
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Application
Filed: February 28, 2023
Publication date: June 29, 2023
Inventors: Chenguang ZHU, Yu SHI, William Isaac HINTHORN, Nanshan ZENG, Ruochen XU, Liyang LU, Xuedong HUANG
-
Patent number: 11615799
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Grant
Filed: May 29, 2020
Date of Patent: March 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Yu Shi, William Isaac Hinthorn, Nanshan Zeng, Ruochen Xu, Liyang Lu, Xuedong Huang
-
Publication number: 20230076095
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-adapted generative model that has been tuned using one or more task-specific seed examples. The method or technique can also include inputting dialog acts into the task-adapted generative model and obtaining synthetic utterances that are output by the task-adapted generative model. The method or technique can also include populating a synthetic training corpus with synthetic training examples that include the synthetic utterances. The synthetic training corpus may be suitable for training a natural language understanding model.
Type: Application
Filed: October 11, 2022
Publication date: March 9, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
-
Patent number: 11545156
Abstract: Attributes of electronic content from a meeting are identified and evaluated to determine whether sub-portions of the electronic content should or should not be attributed to a user profile. Upon determining that the sub-portion should be attributed to a user profile, attributes of the sub-portion of electronic content are compared to attributes of stored user profiles. A probability that the sub-portion corresponds to at least one stored user profile is calculated. Based on the calculated probability, the sub-portion is attributed to a stored user profile or a guest user profile.
Type: Grant
Filed: May 27, 2020
Date of Patent: January 3, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nanshan Zeng, Wei Xiong, Lingfeng Wu, Jun Zhang, Shayin Jing
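The compare-then-threshold flow this abstract describes can be sketched with attribute-overlap as a stand-in probability: match a segment's attributes against each stored profile, and fall back to a guest profile when no match is likely enough. Names, attributes, and the scoring rule are invented:

```python
def attribute_segment(segment_attrs, profiles, threshold=0.5):
    """Return the best-matching stored profile name, or 'guest' when the
    best match probability falls below the threshold."""
    def match_prob(attrs, profile_attrs):
        # Jaccard overlap of attribute sets as a toy match probability.
        shared = set(attrs) & set(profile_attrs)
        return len(shared) / max(len(set(attrs) | set(profile_attrs)), 1)
    best_name, best_prob = "guest", 0.0
    for name, profile_attrs in profiles.items():
        prob = match_prob(segment_attrs, profile_attrs)
        if prob > best_prob:
            best_name, best_prob = name, prob
    return best_name if best_prob >= threshold else "guest"
```

The calculated probability drives the attribution either to a stored profile or to the guest profile, as in the abstract.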
-
Patent number: 11508360
Abstract: This document relates to machine learning. One example includes a method or technique that can be performed on a computing device. The method or technique can include obtaining a task-adapted generative model that has been tuned using one or more task-specific seed examples. The method or technique can also include inputting dialog acts into the task-adapted generative model and obtaining synthetic utterances that are output by the task-adapted generative model. The method or technique can also include populating a synthetic training corpus with synthetic training examples that include the synthetic utterances. The synthetic training corpus may be suitable for training a natural language understanding model.
Type: Grant
Filed: September 15, 2020
Date of Patent: November 22, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Nanshan Zeng, Jianfeng Gao
-
Publication number: 20220366898
Abstract: Systems and methods are provided for training a machine learning model to learn speech representations. Labeled speech data, or both labeled and unlabeled data sets, are applied to a feature extractor of a machine learning model to generate latent speech representations. The latent speech representations are applied to a quantizer to generate quantized latent speech representations and to a transformer context network to generate contextual representations. Each contextual representation is aligned with a phoneme label to generate phonetically-aware contextual representations, and quantized latent representations are aligned with phoneme labels to generate phonetically-aware latent speech representations.
Type: Application
Filed: May 14, 2021
Publication date: November 17, 2022
Inventors: Yao QIAN, Yu WU, Kenichi KUMATANI, Shujie LIU, Furu WEI, Nanshan ZENG, Xuedong David HUANG, Chengyi WANG
-
Patent number: 11488599
Abstract: The present disclosure provides a method and apparatus for processing a message. A statement sentence message and a message processing parameter associated with a user's session message are obtained. One or more first statement sentence nodes that semantically match the statement sentence message are determined in a knowledge map. One or more second statement sentence nodes corresponding to the message processing parameter are obtained from the knowledge map, based on the node relationship properties of the first statement sentence nodes. A response is generated based at least in part on statement sentences of the one or more second statement sentence nodes, and the generated response is provided to the user.
Type: Grant
Filed: April 6, 2019
Date of Patent: November 1, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ling Chen, Yu Shi, Yining Chen, Nanshan Zeng, Dong Li
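The lookup this abstract describes can be sketched as: match the user's statement to a node in a knowledge map, then follow the relationship property selected by the message processing parameter to build a response. The map contents, parameter names, and matching rule (exact lowercase match instead of semantic matching) are invented for illustration:

```python
# Toy knowledge map: statement nodes keyed by text, with relationship
# properties ("cause", "advice") pointing at related statement sentences.
KNOWLEDGE_MAP = {
    "the roads are icy": {"cause": "temperatures fell below freezing",
                          "advice": "drive slowly and brake gently"},
}

def respond(statement, processing_parameter):
    """Generate a response from the node relationship named by the
    message processing parameter."""
    node = KNOWLEDGE_MAP.get(statement.lower())
    if node is None or processing_parameter not in node:
        return "I don't have an answer for that."
    return node[processing_parameter].capitalize() + "."
```

A real system would use semantic matching rather than exact string lookup to find the first statement sentence nodes.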
-
Patent number: 11468895
Abstract: A computer implemented method includes receiving audio streams at a meeting server from two distributed devices that are streaming audio captured during an ad-hoc meeting between at least two users, comparing the received audio streams to determine that the received audio streams are representative of sound from the ad-hoc meeting, generating a meeting instance to process the audio streams in response to the comparing determining that the audio streams are representative of sound from the ad-hoc meeting, and processing the received audio streams to generate a transcript of the ad-hoc meeting.
Type: Grant
Filed: April 30, 2019
Date of Patent: October 11, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
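The stream-comparison step this abstract describes can be sketched by correlating the energy envelopes of two device streams: if the envelopes track each other closely, the devices are likely hearing the same meeting. The envelope representation and threshold are illustrative; real systems compare far richer acoustic features:

```python
def same_meeting(envelope_a, envelope_b, threshold=0.9):
    """Decide whether two equal-length audio energy envelopes are
    representative of the same meeting via Pearson correlation."""
    n = len(envelope_a)
    mean_a = sum(envelope_a) / n
    mean_b = sum(envelope_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(envelope_a, envelope_b))
    var_a = sum((a - mean_a) ** 2 for a in envelope_a)
    var_b = sum((b - mean_b) ** 2 for b in envelope_b)
    if var_a == 0 or var_b == 0:
        return False  # a flat envelope carries no evidence
    return cov / (var_a * var_b) ** 0.5 >= threshold
```

A positive decision is what triggers creating the meeting instance that joins the streams for transcription.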
-
Publication number: 20220230625
Abstract: A language module is jointly trained with a knowledge module for natural language understanding by aligning a first knowledge graph with a second knowledge graph. The knowledge module is trained on the aligned knowledge graphs. Then, the knowledge module is integrated with the language module to generate an integrated knowledge-language module.
Type: Application
Filed: May 18, 2021
Publication date: July 21, 2022
Inventors: Chenguang ZHU, Nanshan ZENG
-
Publication number: 20220230629
Abstract: A speech module is jointly trained with a knowledge module by transforming a first knowledge graph into an acoustic knowledge graph. The knowledge module is trained on the acoustic knowledge graph. Then, the knowledge module is integrated with the speech module to generate an integrated knowledge-speech module. In some instances, the speech module included in the integrated knowledge-speech module is aligned with a language module to generate an optimized speech model configured to leverage acoustic information and acoustic-based knowledge information, along with language information.
Type: Application
Filed: May 18, 2021
Publication date: July 21, 2022
Inventors: Chenguang ZHU, Nanshan ZENG
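The "transforming a first knowledge graph into an acoustic knowledge graph" step can be sketched as rewriting each entity in a triple into a phonetic rendering. The letter-to-sound table below is a toy stand-in for a real grapheme-to-phoneme model:

```python
# Toy letter-to-phone mapping: vowels get phone symbols, consonants are
# kept as uppercase letters. A real system would use a G2P model.
TOY_PHONES = {"a": "AH", "e": "EH", "i": "IH", "o": "OW", "u": "UH"}

def to_acoustic_triple(triple):
    """Rewrite a (head, relation, tail) triple with toy phone sequences
    for the head and tail entities, keeping the relation as text."""
    def phones(word):
        return " ".join(TOY_PHONES.get(ch, ch.upper()) for ch in word.lower())
    head, relation, tail = triple
    return (phones(head), relation, phones(tail))
```

Applying this to every triple yields the acoustic knowledge graph the knowledge module is trained on.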