Patents by Inventor Xuedong Huang
Xuedong Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11947699
Abstract: Embodiments are provided for securing data access to machine learning training data at a plurality of distributed computing devices. Electronic content including original data that corresponds to a preferred data security level is divided into a plurality of microsegments. The plurality of microsegments is restrictively distributed to a plurality of computing devices, which apply transcription labels to the microsegments. The labeled microsegments are reconstructed into training data, which is then used to train a machine learning model while improving the data security of the original data that the reconstructed microsegments carry into the training data.
Type: Grant
Filed: April 30, 2021
Date of Patent: April 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Hemant Malhotra, Xuedong Huang, Li Jiang, Ivo Jose Garcia Dos Santos, Dong Li, Shuangyu Chang
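The abstract describes a segment, distribute, label, reconstruct pipeline. A minimal sketch of that flow follows; the fixed-size character segments and round-robin distribution policy are assumptions for illustration, not details from the patent, whose scope is defined by its claims.

```python
from typing import Dict, List, Tuple

def microsegment(original: str, size: int) -> List[Tuple[int, str]]:
    """Divide original content into microsegments keyed by their offset."""
    return [(i, original[i:i + size]) for i in range(0, len(original), size)]

def distribute(segments: List[Tuple[int, str]],
               n_devices: int) -> Dict[int, List[Tuple[int, str]]]:
    """Restrictively distribute segments round-robin so that no single
    labeling device ever holds the complete original content."""
    assignment: Dict[int, List[Tuple[int, str]]] = {d: [] for d in range(n_devices)}
    for i, seg in enumerate(segments):
        assignment[i % n_devices].append(seg)
    return assignment

def reconstruct(labeled: List[Tuple[int, str]]) -> str:
    """Reassemble (labeled) microsegments, ordered by offset, into training data."""
    return "".join(text for _, text in sorted(labeled))

segments = microsegment("the quick brown fox jumps", size=6)
assigned = distribute(segments, n_devices=3)
# Each device would label its own segments; here we simply merge them back.
labeled = [seg for segs in assigned.values() for seg in segs]
training_data = reconstruct(labeled)
```

The point of the round-robin split is visible in the invariant that every device sees strictly fewer segments than the whole.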
-
Publication number: 20240073622
Abstract: A sound generator provided in the present disclosure includes a frame, a magnetic circuit unit, and a first vibration unit and a second vibration unit arranged on two sides of the magnetic circuit unit. The magnetic circuit unit includes a first central magnetic yoke in the middle, a central magnet fixed to the first central magnetic yoke, a magnetic component arranged around the central magnet and fixed to the frame, and a connecting portion connecting the first central magnetic yoke to the magnetic component. The central magnet includes a first magnet portion fixed to the first central magnetic yoke and a second magnet portion fixed to the side of the first magnet portion away from the first central magnetic yoke. A projection area of the first magnet portion along a vibrating direction is greater than a projection area of the second magnet portion along the vibrating direction.
Type: Application
Filed: January 16, 2023
Publication date: February 29, 2024
Inventors: Xuedong Lv, Xiaoqiong Feng, Kun Yang, Zhen Huang, Yi Shao
-
Publication number: 20240073615
Abstract: The present disclosure discloses a sound device that includes a frame, a magnet system, and a first vibration system and a second vibration system arranged on two sides of the magnet system. The magnet system includes a first central yoke, a central magnet mounted on the first central yoke, a side yoke surrounding the central magnet and fixed to the frame, and a connection portion connecting the first central yoke and the side yoke. The side yoke includes a first side yoke fixed to the frame and a second side yoke bending and extending from an edge of the first side yoke towards the central magnet; the connection portion connects the first central yoke and the second side yoke. The sound device of the present disclosure offers stronger magnetic performance and better suitability for miniaturization.
Type: Application
Filed: December 2, 2022
Publication date: February 29, 2024
Inventors: Xuedong Lv, Xiaoqiong Feng, Kun Yang, Zhen Huang, Yi Shao
-
Publication number: 20240062018
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model is obtained, and various training data is obtained and used for training. The training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model. The resulting model is trained to perform both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. During processing, attention is applied discretely to segmented chunks of encoded data, which improves the efficiency of applying attention within the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
-
Publication number: 20240062020
Abstract: Systems and methods are provided for training and using a novel unified language foundation model. An encoder-decoder natural language model is obtained, and various training data is obtained and used for training. The training process integrates a combination of replaced token detection, corrupted span reconstruction, and disentangled attention methodologies to produce a unified encoder-decoder model. The resulting model is trained to perform both natural language understanding (NLU) tasks and natural language generation (NLG) tasks. During processing, attention is applied discretely to segmented chunks of encoded data, which improves the efficiency of applying attention within the model.
Type: Application
Filed: October 20, 2022
Publication date: February 22, 2024
Inventors: Pengcheng HE, Jianfeng GAO, Nanshan ZENG, Xuedong HUANG, Wei XIONG, Baolin PENG
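The chunked attention these two related publications mention can be pictured with a toy implementation: attention computed independently within fixed-size chunks of the sequence rather than across the whole sequence, so cost grows linearly in the number of chunks. This is a generic sketch of the idea, not the disclosed model; the chunk size and tensor shapes are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def chunked_self_attention(q, k, v, chunk=4):
    """Apply self-attention independently within fixed-size chunks,
    so no token attends outside its own chunk."""
    n, d = q.shape
    out = np.zeros_like(v)
    for s in range(0, n, chunk):
        block = slice(s, s + chunk)
        scores = q[block] @ k[block].T / np.sqrt(d)
        out[block] = softmax(scores) @ v[block]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))          # 8 tokens, 16-dim embeddings
y = chunked_self_attention(x, x, x, chunk=4)
```

A direct consequence of the chunking, and an easy sanity check, is that perturbing tokens in one chunk leaves the output of every other chunk unchanged.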
-
Patent number: 11875796
Abstract: A computer implemented method includes receiving information streams on a meeting server from a set of multiple distributed devices included in a meeting, receiving audio signals representative of speech by at least two users in at least two of the information streams, receiving at least one video signal of at least one user in the information streams, associating a specific user with speech in the received audio signals as a function of the received audio and video signals, and generating a transcript of the meeting with an indication of the specific user associated with the speech.
Type: Grant
Filed: April 30, 2019
Date of Patent: January 16, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lijuan Qin, Nanshan Zeng, Dimitrios Basile Dimitriadis, Zhuo Chen, Andreas Stolcke, Takuya Yoshioka, William Isaac Hinthorn, Xuedong Huang
-
Publication number: 20230402038
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Application
Filed: May 15, 2023
Publication date: December 14, 2023
Inventors: Adi DIAMANT, Xuedong HUANG, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Amit Kumar AGARWAL
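The attribution step can be pictured as matching recognized text spans against per-participant activity intervals (for example, when a participant's face is seen speaking). The most-overlap rule below is a hypothetical stand-in for the attribution machine, invented purely to make the idea concrete.

```python
from typing import Dict, List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds

def overlap(a: Interval, b: Interval) -> float:
    """Length of the intersection of two time intervals (0 if disjoint)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def attribute(text_spans: List[Tuple[Interval, str]],
              activity: Dict[str, List[Interval]]) -> List[Tuple[str, str]]:
    """Attribute each recognized text span to the participant whose
    observed (face/voice) activity overlaps it the most."""
    transcript = []
    for span, text in text_spans:
        best = max(activity,
                   key=lambda person: sum(overlap(span, iv)
                                          for iv in activity[person]))
        transcript.append((best, text))
    return transcript

spans = [((0.0, 2.0), "good morning"), ((2.5, 4.0), "thanks, let's begin")]
activity = {"participant-1": [(0.0, 2.1)], "participant-2": [(2.4, 4.2)]}
transcript = attribute(spans, activity)
```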
-
Publication number: 20230205985
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Application
Filed: February 28, 2023
Publication date: June 29, 2023
Inventors: Chenguang ZHU, Yu SHI, William Isaac HINTHORN, Nanshan ZENG, Ruochen XU, Liyang LU, Xuedong HUANG
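As a rough illustration of the kind of post-processing the abstract describes (the publication does not specify concrete rules), a pass might drop filler words the ASR transcribed verbatim and then repair capitalization. The filler list and regexes here are invented for the example.

```python
import re

# Hypothetical filler words to omit from the transcription, with any
# trailing comma/space they drag along.
FILLERS = re.compile(r"\b(um|uh|you know)\b[, ]*", flags=re.IGNORECASE)

def post_process(transcript: str) -> str:
    """Strip fillers, normalize whitespace, and capitalize sentence starts."""
    text = FILLERS.sub("", transcript)
    text = re.sub(r"\s+", " ", text).strip()
    # Capitalize the first letter of the text and of each new sentence.
    text = re.sub(r"(^|[.!?]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)
    return text

raw = "um, the budget is, uh approved. we ship friday"
clean = post_process(raw)
```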
-
Patent number: 11688399
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Grant
Filed: December 8, 2020
Date of Patent: June 27, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adi Diamant, Karen Master Ben-Dor, Eyal Krupka, Raz Halaly, Yoni Smolin, Ilya Gurvich, Aviv Hurvitz, Lijuan Qin, Wei Xiong, Shixiong Zhang, Lingfeng Wu, Xiong Xiao, Ido Leichter, Moshe David, Xuedong Huang, Amit Kumar Agarwal
-
Patent number: 11687736
Abstract: Systems and methods may be used to provide transcription and translation services. A method may include initializing a plurality of user devices with respective language output selections in a translation group by receiving a shared identifier from the plurality of user devices, and transcribing an audio stream to transcribed text. The method may include translating the transcribed text to one or more of the respective language output selections when an original language of the transcribed text differs from the one or more of the respective language output selections. The method may include sending, to a user device in the translation group, the transcribed text, including translated text in a language corresponding to the respective language output selection for that user device. In an example, the method may include customizing the transcription or the translation, such as to a particular topic, location, user, or the like.
Type: Grant
Filed: October 23, 2020
Date of Patent: June 27, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: William D. Lewis, Ivo José Garcia Dos Santos, Tanvi Surti, Arul A. Menezes, Olivier Nano, Christian Wendt, Xuedong Huang
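The group mechanics described in this abstract (join with a shared identifier and a language selection; translate only when the source language differs) can be sketched as below. The `TranslationGroup` class and the stub `translate` function are illustrative assumptions, not the patented implementation.

```python
from typing import Dict

def translate(text: str, source: str, target: str) -> str:
    """Stub translator: pass text through when languages match,
    otherwise tag it with the target language (a real service
    would call an actual translation model here)."""
    return text if source == target else f"[{target}] {text}"

class TranslationGroup:
    """Devices join with a shared identifier and a language output
    selection; transcribed text is routed to each device, translated
    only when its selection differs from the source language."""

    def __init__(self, shared_id: str):
        self.shared_id = shared_id
        self.devices: Dict[str, str] = {}  # device id -> language selection

    def join(self, device_id: str, language: str) -> None:
        self.devices[device_id] = language

    def broadcast(self, transcribed: str, source_lang: str) -> Dict[str, str]:
        return {dev: translate(transcribed, source_lang, lang)
                for dev, lang in self.devices.items()}

group = TranslationGroup("meeting-42")
group.join("alice-phone", "en")
group.join("bjorn-laptop", "de")
out = group.broadcast("hello everyone", source_lang="en")
```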
-
Publication number: 20230153451
Abstract: Embodiments are provided for securing data access to machine learning training data at a plurality of distributed computing devices. Electronic content including original data that corresponds to a preferred data security level is divided into a plurality of microsegments. The plurality of microsegments is restrictively distributed to a plurality of computing devices, which apply transcription labels to the microsegments. The labeled microsegments are reconstructed into training data, which is then used to train a machine learning model while improving the data security of the original data that the reconstructed microsegments carry into the training data.
Type: Application
Filed: April 30, 2021
Publication date: May 18, 2023
Inventors: Hemant MALHOTRA, Xuedong HUANG, Li JIANG, Ivo Jose GARCIA DOS SANTOS, Dong LI, Shuangyu CHANG
-
Publication number: 20230116052
Abstract: Examples of array geometry agnostic multi-channel personalized speech enhancement (PSE) extract speaker embeddings, which represent acoustic characteristics of one or more target speakers, from target speaker enrollment data. Spatial features (e.g., inter-channel phase difference) are extracted from input audio captured by a microphone array. The input audio includes a mixture of speech data of the target speaker(s) and one or more interfering speaker(s). The input audio, the extracted speaker embeddings, and the extracted spatial features are provided to a trained geometry-agnostic PSE model. Output data is produced, which comprises estimated clean speech data of the target speaker(s) that has a reduction (or elimination) of speech data of the interfering speaker(s), without the trained PSE model requiring geometry information for the microphone array.
Type: Application
Filed: December 17, 2021
Publication date: April 13, 2023
Inventors: Sefik Emre ESKIMEZ, Takuya YOSHIOKA, Huaming WANG, Hassan TAHERIAN, Zhuo CHEN, Xuedong HUANG
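The inter-channel phase difference (IPD) feature named in the abstract is computed directly from the channel spectra: the per-frequency phase of one channel relative to a reference channel. A minimal sketch follows, using a pure tone with a two-sample delay between two simulated microphone signals; the frame length and signals are arbitrary choices for the example.

```python
import numpy as np

def inter_channel_phase_difference(ref: np.ndarray, other: np.ndarray) -> np.ndarray:
    """IPD per frequency bin: phase of `other`'s spectrum relative to `ref`'s."""
    spec_ref = np.fft.rfft(ref)
    spec_other = np.fft.rfft(other)
    return np.angle(spec_other * np.conj(spec_ref))

fs = 16000                      # sample rate in Hz
t = np.arange(1024) / fs        # one analysis frame
tone = np.sin(2 * np.pi * 1000 * t)
delayed = np.sin(2 * np.pi * 1000 * (t - 2 / fs))  # same tone, 2-sample delay
ipd = inter_channel_phase_difference(tone, delayed)
```

At 1 kHz a 2-sample delay at 16 kHz corresponds to a phase lag of 2π·1000·(2/16000) = π/4, which shows up (with a negative sign, since the second channel lags) in the 1 kHz bin of the IPD.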
-
Patent number: 11615799
Abstract: A transcription of audio speech included in electronic content associated with a meeting is created by an ASR model trained on speech-to-text data. The transcription is post-processed by modifying text included in the transcription, for example, by modifying punctuation, grammar, or formatting introduced by the ASR model and by changing or omitting one or more words that were included in both the audio speech and the transcription. After the transcription is post-processed, output based on the post-processed transcription is generated in the form of a meeting summary and/or template.
Type: Grant
Filed: May 29, 2020
Date of Patent: March 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chenguang Zhu, Yu Shi, William Isaac Hinthorn, Nanshan Zeng, Ruochen Xu, Liyang Lu, Xuedong Huang
-
Patent number: 11468895
Abstract: A computer implemented method includes receiving audio streams at a meeting server from two distributed devices that are streaming audio captured during an ad-hoc meeting between at least two users, comparing the received audio streams to determine that the received audio streams are representative of sound from the ad-hoc meeting, generating a meeting instance to process the audio streams in response to the comparing determining that the audio streams are representative of sound from the ad-hoc meeting, and processing the received audio streams to generate a transcript of the ad-hoc meeting.
Type: Grant
Filed: April 30, 2019
Date of Patent: October 11, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
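The patent does not say how the server compares the streams; one plausible heuristic, used purely for illustration here, is to check whether the peak normalized cross-correlation between the two recordings exceeds a threshold, since two devices in the same room capture strongly correlated audio at some small relative lag.

```python
import numpy as np

def streams_match(a: np.ndarray, b: np.ndarray, threshold: float = 0.5) -> bool:
    """Heuristic: a peak normalized cross-correlation above `threshold`
    suggests both devices captured the same acoustic scene."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(np.abs(corr).max()) >= threshold

rng = np.random.default_rng(1)
room = rng.normal(size=4000)                            # shared sound field
mic1 = room + 0.1 * rng.normal(size=4000)               # device 1 capture
mic2 = np.roll(room, 5) + 0.1 * rng.normal(size=4000)   # device 2, slight delay
unrelated = rng.normal(size=4000)                       # a different room
```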
-
Publication number: 20220310058
Abstract: Systems are configured for generating text-to-speech data in a personalized voice by training a neural text-to-speech machine learning model on natural speech data collected from a particular user, validating the identity of the user from which data is collected, and authorizing requests from users to use the personalized voice in generating new speech data. The systems are further configured to train a machine learning model as a neural text-to-speech model with generated personalized speech data.
Type: Application
Filed: November 3, 2020
Publication date: September 29, 2022
Inventors: Sheng ZHAO, Li JIANG, Xuedong HUANG, Lijuan QIN, Lei HE, Binggong DING, Bo YAN, Chunling MA, Raunak OBEROI
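The control flow here (validate the voice owner's identity at enrollment, then authorize each request before synthesizing in the personalized voice) can be sketched as a small gating class. Everything below is a hypothetical scaffold: the class, the string "audio" stand-in for synthesis, and the trivial identity check all replace real speaker verification and TTS components.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class PersonalizedVoice:
    """Gate use of a personalized neural voice behind identity
    validation at enrollment and per-request authorization."""
    owner: str
    validated: bool = False
    authorized_users: Set[str] = field(default_factory=set)

    def enroll(self, claimed_identity: str) -> None:
        # Stand-in for real speaker verification of the enrollment audio.
        self.validated = claimed_identity == self.owner
        if self.validated:
            self.authorized_users.add(self.owner)

    def authorize(self, user: str) -> None:
        if not self.validated:
            raise PermissionError("voice identity not validated")
        self.authorized_users.add(user)

    def synthesize(self, user: str, text: str) -> str:
        if user not in self.authorized_users:
            raise PermissionError(f"{user} may not use {self.owner}'s voice")
        return f"<audio in {self.owner}'s voice: {text!r}>"

voice = PersonalizedVoice(owner="alice")
voice.enroll("alice")
voice.authorize("assistant-app")
clip = voice.synthesize("assistant-app", "Meeting moved to 3 pm.")
```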
-
Publication number: 20220230642
Abstract: A computer implemented method processes audio streams recorded during a meeting by a plurality of distributed devices.
Type: Application
Filed: April 4, 2022
Publication date: July 21, 2022
Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan ZENG, Lijuan QIN, William Isaac Hinthorn, Xuedong HUANG
-
Publication number: 20220180869
Abstract: Systems, methods, and computer-readable storage devices are disclosed for generating smart notes for a meeting based on participant actions and machine learning. One method includes: receiving meeting data from a plurality of participant devices participating in an online meeting; continuously generating text data based on the received audio data from each participant device of the plurality of participant devices; and iteratively performing the following steps until receiving meeting data for the meeting has ended, the steps including: receiving an indication that a predefined action has occurred on a first participant device; generating a participant segment of the meeting data for at least the first participant device, spanning from a first predetermined time before the predefined action occurred to when the predefined action occurred; determining whether the receiving of meeting data for the meeting has ended; and generating a summary of the meeting.
Type: Application
Filed: November 18, 2021
Publication date: June 9, 2022
Inventors: Heiko Rahmel, Li-Juan Qin, Xuedong Huang, Wei Xiong
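The key data operation here is extracting a participant segment: the slice of meeting data from a predetermined time before a predefined action (say, a "take note" tap) up to the action itself. A minimal sketch, with the 30-second lookback and the `Utterance` record invented for the example:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    start: float  # seconds from meeting start
    text: str

def participant_segment(meeting: List[Utterance], action_time: float,
                        lookback: float = 30.0) -> List[Utterance]:
    """Return the slice of meeting data from `lookback` seconds before
    the predefined action up to the moment the action occurred."""
    return [u for u in meeting
            if action_time - lookback <= u.start <= action_time]

meeting = [Utterance(5, "agenda"), Utterance(40, "decision"), Utterance(70, "wrap-up")]
segment = participant_segment(meeting, action_time=45.0, lookback=30.0)
```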
-
Patent number: 11322148
Abstract: A computer implemented method processes audio streams recorded during a meeting by a plurality of distributed devices.
Type: Grant
Filed: April 30, 2019
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
-
Publication number: 20220036178
Abstract: The disclosure herein describes training a global model based on a plurality of data sets. The global model is applied to each data set of the plurality of data sets and a plurality of gradients is generated based on that application. At least one gradient quality metric is determined for each gradient of the plurality of gradients. Based on the determined gradient quality metrics of the plurality of gradients, a plurality of weight factors is calculated. The plurality of gradients is transformed into a plurality of weighted gradients based on the calculated plurality of weight factors and a global gradient is generated based on the plurality of weighted gradients. The global model is updated based on the global gradient, wherein the updated global model, when applied to a data set, performs a task based on the data set and provides model output based on performing the task.
Type: Application
Filed: July 31, 2020
Publication date: February 3, 2022
Inventors: Dimitrios B. DIMITRIADIS, Kenichi KUMATANI, Robert Peter GMYR, Masaki ITAGAKI, Yashesh GAUR, Nanshan ZENG, Xuedong HUANG
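The quality-weighted aggregation the abstract describes (per-data-set gradients, a quality metric per gradient, weight factors from the metrics, weighted sum into a global gradient) can be sketched concretely. The specific quality metric below, the inverse L2 norm so that large-magnitude outlier gradients are down-weighted, is an assumption chosen for illustration; the publication only requires some gradient quality metric.

```python
import math
from typing import List

def weighted_global_gradient(gradients: List[List[float]]) -> List[float]:
    """Combine per-data-set gradients into one global gradient.
    Quality metric (an assumption here): inverse L2 norm, so a
    large-magnitude outlier gradient contributes less."""
    norms = [math.sqrt(sum(g * g for g in grad)) for grad in gradients]
    raw = [1.0 / (n + 1e-12) for n in norms]       # quality metric per gradient
    total = sum(raw)
    weights = [w / total for w in raw]             # normalized weight factors
    dim = len(gradients[0])
    return [sum(w * grad[i] for w, grad in zip(weights, gradients))
            for i in range(dim)]

grads = [[0.1, -0.2], [0.1, -0.2], [10.0, 10.0]]   # third data set looks like an outlier
global_grad = weighted_global_gradient(grads)
```

A plain average of these gradients would give (0.1 + 0.1 + 10.0) / 3 = 3.4 in the first coordinate; the weighting keeps the outlier from dominating.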
-
Publication number: 20210407516
Abstract: A computer implemented method includes receiving audio signals representative of speech via multiple audio streams transmitted from corresponding multiple distributed devices, performing, via a neural network model, continuous speech separation for one or more of the received audio signals having overlapped speech, and providing the separated speech on a fixed number of separate output audio channels.
Type: Application
Filed: September 13, 2021
Publication date: December 30, 2021
Inventors: Takuya Yoshioka, Andreas Stolcke, Zhuo Chen, Dimitrios Basile Dimitriadis, Nanshan Zeng, Lijuan Qin, William Isaac Hinthorn, Xuedong Huang
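The "fixed number of output channels" constraint means separated speech must be routed so that overlapping segments never share a channel, however many people speak over the course of the meeting. The greedy first-free-channel policy below is an assumption for illustration, not the disclosed neural separation method.

```python
from typing import List, Tuple

def assign_channels(segments: List[Tuple[float, float, str]],
                    n_channels: int = 2) -> List[Tuple[int, float, float, str]]:
    """Route separated speech segments (start, end, speaker) onto a fixed
    number of output channels; temporally overlapping segments must land
    on different channels."""
    channel_free_at = [0.0] * n_channels  # when each channel next becomes free
    routed: List[Tuple[int, float, float, str]] = []
    for start, end, speaker in sorted(segments):
        for ch in range(n_channels):
            if channel_free_at[ch] <= start:   # first channel free at `start`
                channel_free_at[ch] = end
                routed.append((ch, start, end, speaker))
                break
        else:
            raise ValueError("more simultaneous speakers than output channels")
    return routed

# Speakers A and B overlap between 1.0 s and 2.0 s.
segments = [(0.0, 2.0, "A"), (1.0, 3.0, "B"), (2.5, 4.0, "A")]
routed = assign_channels(segments, n_channels=2)
```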