Patents by Inventor Dongyan Huang
Dongyan Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11996112
Abstract: The present disclosure discloses a voice conversion method. The method includes: obtaining a to-be-converted voice, and extracting acoustic features of the to-be-converted voice; obtaining a source vector corresponding to the to-be-converted voice from a source vector pool, and selecting a target vector corresponding to the target voice from a target vector pool; obtaining acoustic features of the target voice output by a voice conversion model by using the acoustic features of the to-be-converted voice, the source vector corresponding to the to-be-converted voice, and the target vector corresponding to the target voice as an input of the voice conversion model; and obtaining the target voice by converting the acoustic features of the target voice using a vocoder. In addition, a voice conversion apparatus and a storage medium are also provided.
Type: Grant
Filed: October 30, 2020
Date of Patent: May 28, 2024
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Ruotong Wang, Zhichao Tang, Dongyan Huang, Jiebin Xie, Zhiyuan Zhao, Yang Liu, Youjun Xiong
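The flow this abstract describes — look up speaker embeddings in source/target vector pools, feed them with the acoustic features into a conversion model, then vocode — can be sketched roughly as below. Every function and pool here is an illustrative stand-in, not the patented implementation:

```python
# Toy sketch of the vector-pool lookup and conversion pipeline.
# All names (pools, model, vocoder) are hypothetical stand-ins.

source_pool = {"spk_a": [0.1, 0.2], "spk_b": [0.3, 0.1]}
target_pool = {"spk_x": [0.5, 0.4], "spk_y": [0.2, 0.9]}

def extract_acoustic_features(waveform):
    # Stand-in "feature extraction": frame-wise means of the samples.
    frame = 4
    return [sum(waveform[i:i + frame]) / frame
            for i in range(0, len(waveform) - frame + 1, frame)]

def conversion_model(features, src_vec, tgt_vec):
    # Stand-in model: shift each feature by the embedding difference.
    shift = sum(t - s for s, t in zip(src_vec, tgt_vec))
    return [f + shift for f in features]

def vocoder(features):
    # Stand-in vocoder: upsample features back to "samples".
    return [f for f in features for _ in range(4)]

feats = extract_acoustic_features([0.0] * 8)
converted = conversion_model(feats, source_pool["spk_a"], target_pool["spk_x"])
speech = vocoder(converted)
```

The key structural point the abstract makes is that the model conditions on both a source and a target speaker vector, which is what the `conversion_model` signature mirrors.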
-
Patent number: 11941366
Abstract: The present disclosure discloses a context-based multi-turn dialogue method.
Type: Grant
Filed: November 23, 2020
Date of Patent: March 26, 2024
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Chi Shao, Dongyan Huang, Wan Ding, Youjun Xiong
-
Publication number: 20230386116
Abstract: A method for generating a talking head video includes: obtaining a text and an image containing a face of a user; determining a phoneme sequence that corresponds to the text and includes one or more phonemes; determining acoustic features corresponding to the text according to the phoneme sequence, and obtaining synthesized speech corresponding to the text according to the acoustic features; determining a first mouth movement sequence corresponding to the text according to the phoneme sequence, and determining a second mouth movement sequence corresponding to the text according to the acoustic features; creating a facial action video corresponding to the user according to the first mouth movement sequence, the second mouth movement sequence and the image; and processing the synthesized speech and the facial action video synchronously to obtain a talking head video corresponding to the user.
Type: Application
Filed: May 26, 2023
Publication date: November 30, 2023
Inventors: Wan Ding, Dongyan Huang, Linhuang Yan, Zhiyong Yang
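The stages here — text to phonemes, phonemes to acoustics, then two mouth-movement sequences (one phoneme-driven, one acoustics-driven) fused per frame — can be sketched as follows. Every function body, and the tiny lexicon, is a toy stand-in for the models the abstract names:

```python
# Illustrative sketch of the talking-head pipeline stages; all
# values and mappings here are hypothetical stand-ins.

LEXICON = {"hello": ["HH", "AH", "L", "OW"]}  # hypothetical G2P table

def text_to_phonemes(text):
    return [p for word in text.lower().split() for p in LEXICON.get(word, [])]

def phonemes_to_acoustics(phonemes):
    # Stand-in acoustic model: one "frame" per phoneme.
    return [len(p) * 0.1 for p in phonemes]

def mouth_from_phonemes(phonemes):
    # First mouth-movement sequence, driven by phoneme identity.
    return [0.8 if p in ("AH", "OW") else 0.2 for p in phonemes]

def mouth_from_acoustics(acoustics):
    # Second mouth-movement sequence, driven by acoustic energy.
    return [min(1.0, a) for a in acoustics]

phones = text_to_phonemes("hello")
acoustics = phonemes_to_acoustics(phones)
seq1 = mouth_from_phonemes(phones)
seq2 = mouth_from_acoustics(acoustics)
# Fuse the two sequences frame-by-frame (simple average here).
fused = [(a + b) / 2 for a, b in zip(seq1, seq2)]
```

The two-sequence design lets phoneme identity and acoustic evidence each constrain the mouth shape before the frames are rendered against the user's face image.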
-
Patent number: 11763796
Abstract: A computer-implemented method for speech synthesis, a computer device, and a non-transitory computer readable storage medium are provided. The method includes: obtaining a speech text to be synthesized; obtaining a Mel spectrum corresponding to the speech text to be synthesized according to the speech text to be synthesized; inputting the Mel spectrum into a complex neural network, and obtaining a complex spectrum corresponding to the speech text to be synthesized, wherein the complex spectrum comprises real component information and imaginary component information; and obtaining a synthetic speech corresponding to the speech text to be synthesized, according to the complex spectrum. The method can efficiently and simply complete speech synthesis.
Type: Grant
Filed: December 10, 2020
Date of Patent: September 19, 2023
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Dongyan Huang, Leyuan Sheng, Youjun Xiong
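The core idea — map a Mel-like representation to a complex spectrum (real and imaginary parts), then invert that spectrum to a waveform — can be sketched as below. The "complex network" is a hypothetical stand-in; only the inverse DFT is standard:

```python
import cmath

# Sketch: a stand-in "complex network" maps a Mel-like vector to a
# complex spectrum, and an inverse DFT turns that spectrum into a
# waveform, mirroring the abstract's final step.

def fake_complex_network(mel):
    # Hypothetical mapping: real part from the Mel values, small imaginary part.
    return [complex(m, 0.1 * m) for m in mel]

def inverse_dft(spectrum):
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

mel = [1.0, 0.0, 0.0, 0.0]
spec = fake_complex_network(mel)
wave = inverse_dft(spec)
```

Predicting both components of the complex spectrum sidesteps the separate phase-reconstruction step a magnitude-only vocoder would need.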
-
Publication number: 20230206895
Abstract: A speech synthesis method includes: obtaining an acoustic feature sequence of a text to be processed; processing the acoustic feature sequence in parallel by using a non-autoregressive computing model to obtain first audio information of the text to be processed, wherein the first audio information comprises audio corresponding to each segment; processing the acoustic feature sequence and the first audio information by using an autoregressive computing model to obtain a residual value corresponding to each segment; and obtaining second audio information corresponding to an i-th segment based on the first audio information corresponding to the i-th segment and the residual values corresponding to the first to the (i-1)-th segments, wherein a synthesized audio of the text to be processed comprises each of the second audio information, i = 1, 2, …, n, where n is the total number of segments.
Type: Application
Filed: December 28, 2022
Publication date: June 29, 2023
Inventors: Wan Ding, Dongyan Huang, Zhiyuan Zhao, Zhiyong Yang
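The two-pass scheme — a fast parallel pass produces first audio per segment, an autoregressive pass produces per-segment residuals, and segment i's second audio combines its first audio with the residuals of segments 1 to i-1 — can be sketched as below. Both "models" are toy stand-ins:

```python
# Sketch of the non-autoregressive + autoregressive-residual idea.
# nar_pass and ar_residuals are hypothetical stand-ins for the models.

def nar_pass(features):
    # Parallel stand-in: each segment's first audio from its own feature.
    return [f * 2.0 for f in features]

def ar_residuals(features, first_audio):
    # Autoregressive stand-in: each residual depends on the previous segment.
    residuals = []
    prev = 0.0
    for f, a in zip(features, first_audio):
        residuals.append(0.1 * (a - prev))
        prev = a
    return residuals

def second_audio(first_audio, residuals):
    # Segment i's second audio = its first audio + residuals of segments < i.
    return [a + sum(residuals[:i]) for i, a in enumerate(first_audio)]

features = [1.0, 2.0, 3.0]
first = nar_pass(features)
res = ar_residuals(features, first)
final = second_audio(first, res)
```

The design point is that the slow autoregressive model only has to predict small corrections, while the bulk of the audio comes from the parallel pass.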
-
Patent number: 11645474
Abstract: A computer-implemented method for text conversion, a computer device, and a non-transitory computer readable storage medium are provided. The method includes: obtaining a text to be converted; performing a non-standard word recognition on the text to be converted, to determine whether the text to be converted includes a non-standard word; recognizing the non-standard word in the text to be converted by using an eXtreme Gradient Boosting model in response to the text to be converted including the non-standard word; and obtaining a target converted text corresponding to the text to be converted, according to a recognition result outputted by the eXtreme Gradient Boosting model. The method has a faster recognition speed and a higher recognition accuracy compared with deep learning models.
Type: Grant
Filed: December 24, 2020
Date of Patent: May 9, 2023
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Zhongfa Feng, Dongyan Huang, Youjun Xiong
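The pipeline — detect non-standard words (NSWs) such as digits or abbreviations, classify each one into an expansion category, then convert the text accordingly — can be sketched as below. A rule stub stands in for the eXtreme Gradient Boosting classifier, and the pattern and labels are assumptions:

```python
import re

# Sketch of NSW detection + classification for text normalization.
# classify_nsw is a rule stub standing in for the gradient-boosted model.

NSW_PATTERN = re.compile(r"\d+%?|[A-Z]{2,}")  # hypothetical NSW detector

def find_nsws(text):
    return NSW_PATTERN.findall(text)

def classify_nsw(token):
    # Stand-in for the classifier's label output.
    if token.endswith("%"):
        return "percentage"
    if token.isdigit():
        return "cardinal"
    return "abbreviation"

def label_nsws(text):
    return {t: classify_nsw(t) for t in find_nsws(text)}

labels = label_nsws("The GDP grew 3% in 2020")
```

In the patented method the category label would drive how each token is expanded ("3%" → "three percent"), which is the step a TTS front end needs before synthesis.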
-
Patent number: 11417316
Abstract: The present disclosure provides a speech synthesis method as well as an apparatus and a computer readable storage medium using the same. The method includes: obtaining a to-be-synthesized text, and extracting to-be-processed Mel spectrum features of the to-be-synthesized text through a preset speech feature extraction algorithm; inputting the to-be-processed Mel spectrum features into a preset ResUnet network model to obtain first intermediate features; performing an average pooling and a first down sampling on the to-be-processed Mel spectrum features to obtain second intermediate features; taking the second intermediate features and the first intermediate features output by the ResUnet network model as an input to perform a deconvolution and a first up sampling so as to obtain target Mel spectrum features corresponding to the to-be-processed Mel spectrum features; and converting the target Mel spectrum features into a target speech corresponding to the to-be-synthesized text.
Type: Grant
Filed: December 8, 2020
Date of Patent: August 16, 2022
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Dongyan Huang, Leyuan Sheng, Youjun Xiong
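The two-branch refinement described here — one branch through a ResUnet, one through average pooling/downsampling, then fusion and upsampling back to target Mel features — can be sketched on 1-D toy data. Every function is a stand-in, not the patented network:

```python
# Sketch of the two-branch Mel refinement; all "layers" are toy stand-ins.

def avg_pool_down(features, stride=2):
    # Average pooling + downsampling branch (second intermediate features).
    return [sum(features[i:i + stride]) / stride
            for i in range(0, len(features), stride)]

def fake_resunet(features):
    # Stand-in for the ResUnet branch (first intermediate features).
    return [f + 0.1 for f in features]

def upsample(features, factor=2):
    # Stand-in for the deconvolution / upsampling step.
    return [f for f in features for _ in range(factor)]

mel = [0.2, 0.4, 0.6, 0.8]
branch1 = fake_resunet(mel)
branch2 = avg_pool_down(mel)
# Fuse the upsampled pooled branch with the ResUnet branch.
fused = [a + b for a, b in zip(upsample(branch2), branch1)]
```

The pooled branch supplies a smoothed, coarse view of the spectrum that the upsampling step merges back with the ResUnet's detailed features.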
-
Patent number: 11367456
Abstract: The present disclosure provides a streaming voice conversion method as well as an apparatus and a computer readable storage medium using the same. The method includes: obtaining to-be-converted voice data; partitioning the to-be-converted voice data, in order of data obtaining time, into a plurality of to-be-converted partition voices, where each to-be-converted partition voice carries a partition mark; performing a voice conversion on each of the to-be-converted partition voices to obtain a converted partition voice, where the converted partition voice carries a partition mark; performing a partition restoration on each of the converted partition voices to obtain a restored partition voice, where the restored partition voice carries a partition mark; and outputting each of the restored partition voices according to the partition mark carried by the restored partition voice. In this manner, the response time is shortened, and the conversion speed is improved.
Type: Grant
Filed: December 3, 2020
Date of Patent: June 21, 2022
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Jiebin Xie, Ruotong Wang, Dongyan Huang, Zhichao Tang, Yang Liu, Youjun Xiong
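The streaming flow — partition incoming samples in arrival order, tag each partition with a mark, convert per partition, then emit the restored partitions ordered by their marks — can be sketched as below, with a trivial stub in place of the actual voice conversion:

```python
# Sketch of partition-marked streaming conversion; convert_partition
# is a stand-in for the real voice conversion step.

def partition(samples, size):
    # Split in arrival order; each partition carries a mark.
    return [{"mark": i, "data": samples[start:start + size]}
            for i, start in enumerate(range(0, len(samples), size))]

def convert_partition(part):
    # Stand-in conversion on one partition; the mark is preserved.
    return {"mark": part["mark"], "data": [x * 2 for x in part["data"]]}

def restore_and_output(parts):
    # The marks let output be reassembled even if partitions finish
    # out of order.
    ordered = sorted(parts, key=lambda p: p["mark"])
    return [x for p in ordered for x in p["data"]]

stream = [1, 2, 3, 4, 5]
parts = [convert_partition(p) for p in partition(stream, 2)]
out = restore_and_output(reversed(parts))  # out-of-order input, ordered output
```

Because each small partition is converted as soon as it arrives, the first audio can be emitted long before the full utterance is available, which is the latency win the abstract claims.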
-
Publication number: 20220189454
Abstract: A computer-implemented method for speech synthesis, a computer device, and a non-transitory computer readable storage medium are provided. The method includes: obtaining a speech text to be synthesized; obtaining a Mel spectrum corresponding to the speech text to be synthesized according to the speech text to be synthesized; inputting the Mel spectrum into a complex neural network, and obtaining a complex spectrum corresponding to the speech text to be synthesized, wherein the complex spectrum comprises real component information and imaginary component information; and obtaining a synthetic speech corresponding to the speech text to be synthesized, according to the complex spectrum. The method can efficiently and simply complete speech synthesis.
Type: Application
Filed: December 10, 2020
Publication date: June 16, 2022
Inventors: Dongyan Huang, Leyuan Sheng, Youjun Xiong
-
Patent number: 11282503
Abstract: The present disclosure discloses a voice conversion training method. The method includes: forming a first training data set including a plurality of training voice data groups; selecting two of the training voice data groups from the first training data set to input into a voice conversion neural network for training; forming a second training data set including the first training data set and a first source speaker voice data group; inputting one of the training voice data groups selected from the first training data set and the first source speaker voice data group into the network for training; forming a third training data set including a second source speaker voice data group and a personalized voice data group that are parallel corpora with respect to each other; and inputting the second source speaker voice data group and the personalized voice data group into the network for training.
Type: Grant
Filed: November 12, 2020
Date of Patent: March 22, 2022
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Ruotong Wang, Dongyan Huang, Xian Li, Jiebin Xie, Zhichao Tang, Wan Ding, Yang Liu, Bai Li, Youjun Xiong
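The staged curriculum this abstract outlines — pre-train on pairs from a many-speaker set, adapt with the source speaker's data, then fine-tune on parallel source/personalized corpora — can be sketched as a three-stage loop. Training is reduced to a stub that records which data groups each stage consumes; all names are hypothetical:

```python
# Sketch of the three training stages; train_step is a stub that only
# logs which voice data groups are fed to the network.

def train_step(network_log, group_a, group_b):
    network_log.append((group_a, group_b))

log = []
first_set = ["spk1", "spk2", "spk3"]  # training voice data groups

# Stage 1: a pair of groups drawn from the first training data set.
train_step(log, first_set[0], first_set[1])

# Stage 2: a first-set group paired with the source speaker's data.
second_set = first_set + ["source_v1"]
train_step(log, second_set[0], "source_v1")

# Stage 3: the parallel source / personalized corpora.
train_step(log, "source_v2", "personalized")
```

The progression moves the network from generic many-to-many conversion toward the one personalized target voice, with each stage reusing the weights of the previous one.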
-
Publication number: 20210200961
Abstract: The present disclosure discloses a context-based multi-turn dialogue method.
Type: Application
Filed: November 23, 2020
Publication date: July 1, 2021
Inventors: Chi Shao, Dongyan Huang, Wan Ding, Youjun Xiong
-
Publication number: 20210200962
Abstract: A computer-implemented method for text conversion, a computer device, and a non-transitory computer readable storage medium are provided. The method includes: obtaining a text to be converted; performing a non-standard word recognition on the text to be converted, to determine whether the text to be converted includes a non-standard word; recognizing the non-standard word in the text to be converted by using an eXtreme Gradient Boosting model in response to the text to be converted including the non-standard word; and obtaining a target converted text corresponding to the text to be converted, according to a recognition result outputted by the eXtreme Gradient Boosting model. The method has a faster recognition speed and a higher recognition accuracy compared with deep learning models.
Type: Application
Filed: December 24, 2020
Publication date: July 1, 2021
Inventors: Zhongfa Feng, Dongyan Huang, Youjun Xiong
-
Publication number: 20210201925
Abstract: The present disclosure provides a streaming voice conversion method as well as an apparatus and a computer readable storage medium using the same. The method includes: obtaining to-be-converted voice data; partitioning the to-be-converted voice data, in order of data obtaining time, into a plurality of to-be-converted partition voices, where each to-be-converted partition voice carries a partition mark; performing a voice conversion on each of the to-be-converted partition voices to obtain a converted partition voice, where the converted partition voice carries a partition mark; performing a partition restoration on each of the converted partition voices to obtain a restored partition voice, where the restored partition voice carries a partition mark; and outputting each of the restored partition voices according to the partition mark carried by the restored partition voice. In this manner, the response time is shortened, and the conversion speed is improved.
Type: Application
Filed: December 3, 2020
Publication date: July 1, 2021
Inventors: Jiebin Xie, Ruotong Wang, Dongyan Huang, Zhichao Tang, Yang Liu, Youjun Xiong
-
Publication number: 20210201890
Abstract: The present disclosure discloses a voice conversion training method. The method includes: forming a first training data set including a plurality of training voice data groups; selecting two of the training voice data groups from the first training data set to input into a voice conversion neural network for training; forming a second training data set including the first training data set and a first source speaker voice data group; inputting one of the training voice data groups selected from the first training data set and the first source speaker voice data group into the network for training; forming a third training data set including a second source speaker voice data group and a personalized voice data group that are parallel corpora with respect to each other; and inputting the second source speaker voice data group and the personalized voice data group into the network for training.
Type: Application
Filed: November 12, 2020
Publication date: July 1, 2021
Inventors: Ruotong Wang, Dongyan Huang, Xian Li, Jiebin Xie, Zhichao Tang, Wan Ding, Yang Liu, Bai Li, Youjun Xiong
-
Publication number: 20210193113
Abstract: The present disclosure provides a speech synthesis method as well as an apparatus and a computer readable storage medium using the same. The method includes: obtaining a to-be-synthesized text, and extracting to-be-processed Mel spectrum features of the to-be-synthesized text through a preset speech feature extraction algorithm; inputting the to-be-processed Mel spectrum features into a preset ResUnet network model to obtain first intermediate features; performing an average pooling and a first down sampling on the to-be-processed Mel spectrum features to obtain second intermediate features; taking the second intermediate features and the first intermediate features output by the ResUnet network model as an input to perform a deconvolution and a first up sampling so as to obtain target Mel spectrum features corresponding to the to-be-processed Mel spectrum features; and converting the target Mel spectrum features into a target speech corresponding to the to-be-synthesized text.
Type: Application
Filed: December 8, 2020
Publication date: June 24, 2021
Inventors: Dongyan Huang, Leyuan Sheng, Youjun Xiong
-
Publication number: 20210193160
Abstract: The present disclosure discloses a voice conversion method. The method includes: obtaining a to-be-converted voice, and extracting acoustic features of the to-be-converted voice; obtaining a source vector corresponding to the to-be-converted voice from a source vector pool, and selecting a target vector corresponding to the target voice from a target vector pool; obtaining acoustic features of the target voice output by a voice conversion model by using the acoustic features of the to-be-converted voice, the source vector corresponding to the to-be-converted voice, and the target vector corresponding to the target voice as an input of the voice conversion model; and obtaining the target voice by converting the acoustic features of the target voice using a vocoder. In addition, a voice conversion apparatus and a storage medium are also provided.
Type: Application
Filed: October 30, 2020
Publication date: June 24, 2021
Inventors: Ruotong Wang, Zhichao Tang, Dongyan Huang, Jiebin Xie, Zhiyuan Zhao, Yang Liu, Youjun Xiong