Patents by Inventor Yonghui Wu
Yonghui Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11580952
Abstract: A method includes receiving an input text sequence to be synthesized into speech in a first language and obtaining a speaker embedding, the speaker embedding specifying specific voice characteristics of a target speaker for synthesizing the input text sequence into speech that clones a voice of the target speaker. The target speaker includes a native speaker of a second language different than the first language. The method also includes generating, using a text-to-speech (TTS) model, an output audio feature representation of the input text by processing the input text sequence and the speaker embedding. The output audio feature representation includes the voice characteristics of the target speaker specified by the speaker embedding.
Type: Grant
Filed: April 22, 2020
Date of Patent: February 14, 2023
Assignee: Google LLC
Inventors: Yu Zhang, Ron J. Weiss, Byungha Chun, Yonghui Wu, Zhifeng Chen, Russell John Wyatt Skerry-Ryan, Ye Jia, Andrew M. Rosenberg, Bhuvana Ramabhadran
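The cross-lingual cloning step is essentially a conditioning operation: the speaker embedding is attached to the encoded text before decoding. Below is a minimal NumPy sketch of one plausible scheme, broadcasting the embedding across timesteps and concatenating; the shapes and the concatenation mechanism are assumptions, not the patent's specified design.

```python
import numpy as np

def condition_on_speaker(encoder_out, speaker_emb):
    """Broadcast a speaker embedding across text timesteps and concatenate it,
    so the decoder sees the target voice at every step. Shapes are hypothetical."""
    timesteps = encoder_out.shape[0]
    tiled = np.tile(speaker_emb, (timesteps, 1))              # (T, D_spk)
    return np.concatenate([encoder_out, tiled], axis=-1)      # (T, D_enc + D_spk)

encoder_out = np.random.randn(50, 256)   # encoded first-language text, 50 steps
speaker_emb = np.random.randn(64)        # embedding of a second-language native speaker
print(condition_on_speaker(encoder_out, speaker_emb).shape)   # (50, 320)
```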
-
Patent number: 11556381
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing machine learning workloads, e.g., computations for training a neural network or computing an inference using a neural network, across multiple hardware accelerators.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 17, 2023
Assignee: Google LLC
Inventors: Jeffrey Adgate Dean, Sudip Roy, Michael Acheson Isard, Aakanksha Chowdhery, Brennan Saeta, Chandramohan Amyangot Thekkath, Daniel William Hurt, Hyeontaek Lim, Laurent El Shafey, Parker Edward Schuh, Paul Ronald Barham, Ruoming Pang, Ryan Sepassi, Sanjay Ghemawat, Yonghui Wu
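As a toy illustration of distributing a workload across accelerators, the sketch below splits a model's layers into contiguous pipeline stages, one per device. This is a generic partitioning heuristic for illustration only, not the scheduling machinery the claims cover.

```python
def partition_layers(layers, num_devices):
    """Give each accelerator a contiguous slice of layers (pipeline parallelism)."""
    per_device = -(-len(layers) // num_devices)   # ceiling division
    return [layers[i * per_device:(i + 1) * per_device] for i in range(num_devices)]

layers = [f"layer_{i}" for i in range(10)]
for device, shard in enumerate(partition_layers(layers, 4)):
    print(f"accelerator {device}: {shard}")
```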
-
Publication number: 20220357985
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing machine learning workloads, e.g., computations for training a neural network or computing an inference using a neural network, across multiple hardware accelerators.
Type: Application
Filed: May 6, 2022
Publication date: November 10, 2022
Inventors: Jeffrey Adgate Dean, Sudip Roy, Michael Acheson Isard, Aakanksha Chowdhery, Brennan Saeta, Chandramohan Amyangot Thekkath, Daniel William Hurt, Hyeontaek Lim, Laurent El Shafey, Parker Edward Schuh, Paul Ronald Barham, Ruoming Pang, Ryan Sepassi, Sanjay Ghemawat, Yonghui Wu
-
Publication number: 20220351713
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
Type: Application
Filed: July 19, 2022
Publication date: November 3, 2022
Applicant: Google LLC
Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick An Phu Nguyen
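A common recipe for the speaker-vector step, assumed here since the abstract says only that the encoder is trained to distinguish speakers, is to average the encoder's per-frame embeddings and L2-normalize, yielding a fixed-size "d-vector":

```python
import numpy as np

def speaker_vector(frame_embeddings):
    """Collapse per-frame speaker-encoder outputs (T, D) into one unit-norm vector."""
    mean = frame_embeddings.mean(axis=0)
    return mean / np.linalg.norm(mean)

frames = np.random.randn(120, 256)   # hypothetical encoder outputs for one utterance
vec = speaker_vector(frames)
print(vec.shape, round(float(np.linalg.norm(vec)), 3))   # (256,) 1.0
```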
-
Patent number: 11488575
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech synthesis. The methods, systems, and apparatus include actions of obtaining an audio representation of speech of a target speaker, obtaining input text for which speech is to be synthesized in a voice of the target speaker, generating a speaker vector by providing the audio representation to a speaker encoder engine that is trained to distinguish speakers from one another, generating an audio representation of the input text spoken in the voice of the target speaker by providing the input text and the speaker vector to a spectrogram generation engine that is trained using voices of reference speakers to generate audio representations, and providing the audio representation of the input text spoken in the voice of the target speaker for output.
Type: Grant
Filed: May 17, 2019
Date of Patent: November 1, 2022
Assignee: Google LLC
Inventors: Ye Jia, Zhifeng Chen, Yonghui Wu, Jonathan Shen, Ruoming Pang, Ron J. Weiss, Ignacio Lopez Moreno, Fei Ren, Yu Zhang, Quan Wang, Patrick Nguyen
-
Patent number: 11475706
Abstract: Provided are a fingerprint identification device, a fingerprint identification method, and an electronic device, which can improve the security of fingerprint identification. The fingerprint identification device includes an optical fingerprint sensor including a plurality of pixel units, and at least two filter units disposed above at least two of the plurality of pixel units, where each filter unit corresponds to one pixel unit and the at least two filter units comprise filter units in at least two colors.
Type: Grant
Filed: May 21, 2021
Date of Patent: October 18, 2022
Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
Inventors: Shunzhan Li, Xiang Cheng, Qin Gu, Yonghui Wu
-
Patent number: 11475874
Abstract: A method of generating diverse and natural text-to-speech (TTS) samples includes receiving a text and generating a speech sample based on the text using a TTS model. A training process trains the TTS model to generate the speech sample by receiving training samples. Each training sample includes a spectrogram and a training text corresponding to the spectrogram. For each training sample, the training process identifies speech units associated with the training text. For each speech unit, the training process generates a speech embedding, aligns the speech embedding with a portion of the spectrogram, extracts a latent feature from the aligned portion of the spectrogram, and assigns a quantized embedding to the latent feature. The training process generates the speech sample by decoding a concatenation of the speech embeddings and the quantized embeddings for the speech units associated with the training text corresponding to the spectrogram.
Type: Grant
Filed: January 29, 2021
Date of Patent: October 18, 2022
Assignee: Google LLC
Inventors: Yu Zhang, Bhuvana Ramabhadran, Andrew Rosenberg, Yonghui Wu, Byungha Chun, Ron Weiss, Yuan Cao
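The "assigns a quantized embedding to the latent feature" step reads like standard vector quantization: snap each latent vector to its nearest entry in a learned codebook. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def quantize(latents, codebook):
    """Replace each latent vector (N, D) with its nearest codebook entry (K, D)."""
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    return codebook[dists.argmin(axis=1)]

latents = np.random.randn(6, 16)      # latent features for six speech units
codebook = np.random.randn(32, 16)    # hypothetical 32-entry codebook
print(quantize(latents, codebook).shape)   # (6, 16)
```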
-
Publication number: 20220329600
Abstract: A network device provides a LAN GUI to a client device. The network device receives a request for access by the client device to the LAN GUI. The network device analyzes a LAN GUI access whitelist and determines whether the client device is in the LAN GUI access whitelist. The client device is granted access to the LAN GUI without receiving a password from the client device when the client device is determined to be in the LAN GUI access whitelist. An address entry page may be presented to add the MAC address of the client device to the LAN GUI access whitelist, and a password page may be presented to display the LAN GUI password. When the client device is not in the LAN GUI access whitelist, a login page is presented for entering the password to obtain access to the LAN GUI.
Type: Application
Filed: July 21, 2020
Publication date: October 13, 2022
Inventor: Yonghui WU
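The access decision itself is a simple MAC-address lookup. A hedged sketch of the control flow described in the abstract; the page names and the MAC normalization are assumptions:

```python
def lan_gui_route(client_mac, whitelist):
    """Return which page the network device should serve for this client."""
    mac = client_mac.lower().replace("-", ":")     # assumed normalization
    if mac in whitelist:
        return "lan_gui"                           # whitelisted: no password needed
    return "login_page"                            # otherwise: ask for the password

whitelist = {"aa:bb:cc:dd:ee:ff"}
print(lan_gui_route("AA-BB-CC-DD-EE-FF", whitelist))   # lan_gui
print(lan_gui_route("11:22:33:44:55:66", whitelist))   # login_page
```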
-
Patent number: 11468244
Abstract: A method of transcribing speech using a multilingual end-to-end (E2E) speech recognition model includes receiving audio data for an utterance spoken in a particular native language, obtaining a language vector identifying the particular language, and processing, using the multilingual E2E speech recognition model, the language vector and acoustic features derived from the audio data to generate a transcription for the utterance. The multilingual E2E speech recognition model includes a plurality of language-specific adaptor modules that include one or more adaptor modules specific to the particular native language and one or more other adaptor modules specific to at least one other native language different than the particular native language. The method also includes providing the transcription for output.
Type: Grant
Filed: March 30, 2020
Date of Patent: October 11, 2022
Assignee: Google LLC
Inventors: Anjuli Patricia Kannan, Tara N. Sainath, Yonghui Wu, Ankur Bapna, Arindrima Datta
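One plausible reading of the language-specific adaptor modules, consistent with the adapter literature but not taken from the claim language, is a residual bottleneck selected per language. A NumPy sketch with invented dimensions and language codes:

```python
import numpy as np

def adapter(x, w_down, w_up):
    """Residual bottleneck adapter: x + up(relu(down(x)))."""
    return x + np.maximum(x @ w_down, 0.0) @ w_up

rng = np.random.default_rng(0)
adapters = {lang: (rng.normal(size=(256, 32)), rng.normal(size=(32, 256)))
            for lang in ("en-US", "hi-IN", "ta-IN")}     # hypothetical language set
features = rng.normal(size=(10, 256))                    # encoder features, 10 frames
out = adapter(features, *adapters["hi-IN"])              # route through one language's adapter
print(out.shape)   # (10, 256)
```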
-
Publication number: 20220310059
Abstract: A method includes receiving a text input including a sequence of words represented as an input encoder embedding. The input encoder embedding includes a plurality of tokens, with the plurality of tokens including a first set of grapheme tokens representing the text input as respective graphemes and a second set of phoneme tokens representing the text input as respective phonemes. The method also includes, for each respective phoneme token of the second set of phoneme tokens: identifying a respective word of the sequence of words corresponding to the respective phoneme token and determining a respective grapheme token representing the respective word of the sequence of words corresponding to the respective phoneme token. The method also includes generating an output encoder embedding based on a relationship between each respective phoneme token and the corresponding grapheme token determined to represent a same respective word as the respective phoneme token.
Type: Application
Filed: December 10, 2021
Publication date: September 29, 2022
Applicant: Google LLC
Inventors: Ye Jia, Byungha Chun, Yu Zhang, Jonathan Shen, Yonghui Wu
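The phoneme-to-grapheme correspondence can be illustrated with explicit word indices: if each phoneme token records which word produced it, the matching grapheme token is a lookup. A toy sketch; the bookkeeping scheme and token spellings are invented for illustration:

```python
# Each phoneme token carries the index of its source word (an assumed scheme);
# the grapheme token for that word is then a dictionary lookup.
grapheme_tokens = {0: "G<hello>", 1: "G<world>"}
phoneme_tokens = [("HH", 0), ("AH", 0), ("L", 0), ("OW", 0),
                  ("W", 1), ("ER", 1), ("L", 1), ("D", 1)]

pairs = [(phoneme, grapheme_tokens[word_idx]) for phoneme, word_idx in phoneme_tokens]
print(pairs[:2])   # [('HH', 'G<hello>'), ('AH', 'G<hello>')]
```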
-
Publication number: 20220310072
Abstract: Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transformer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen attend spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.
Type: Application
Filed: June 3, 2020
Publication date: September 29, 2022
Inventors: Tara N. Sainath, Ruoming Pang, David Rybach, Yanzhang He, Rohit Prabhavalkar, Wei Li, Mirkó Visontai, Qiao Liang, Trevor Strohman, Yonghui Wu, Ian C. McGraw, Chung-Cheng Chiu
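The two-pass control flow is easy to sketch with stand-in components: a shared encoder feeds a streaming first-pass decoder, whose hypotheses the second-pass decoder revises once the utterance ends. The functions below are placeholders, not the RNN-T or LAS internals:

```python
def two_pass_asr(audio_frames, encoder, rnnt_decode, las_decode):
    """Shared encoder -> streaming first pass -> deliberation second pass."""
    encodings = [encoder(frame) for frame in audio_frames]   # shared between passes
    streaming_hyp = rnnt_decode(encodings)                   # shown to the user live
    final_hyp = las_decode(encodings, streaming_hyp)         # revised after end of speech
    return streaming_hyp, final_hyp

# Toy stand-ins just to make the flow runnable:
frames = ["f0", "f1", "f2"]
stream, final = two_pass_asr(
    frames,
    encoder=lambda f: f.upper(),
    rnnt_decode=lambda enc: "helo wold",
    las_decode=lambda enc, hyp: hyp.replace("helo", "hello").replace("wold", "world"),
)
print(stream, "->", final)   # helo wold -> hello world
```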
-
Publication number: 20220301543
Abstract: A method for training a non-autoregressive TTS model includes obtaining a sequence representation of an encoded text sequence concatenated with a variational embedding. The method also includes using a duration model network to predict a phoneme duration for each phoneme represented by the encoded text sequence. Based on the predicted phoneme durations, the method also includes learning an interval representation and an auxiliary attention context representation. The method also includes upsampling, using the interval representation and the auxiliary attention context representation, the sequence representation into an upsampled output specifying a number of frames. The method also includes generating, based on the upsampled output, one or more predicted mel-frequency spectrogram sequences for the encoded text sequence.
Type: Application
Filed: May 21, 2021
Publication date: September 22, 2022
Applicant: Google LLC
Inventors: Isaac Elias, Byungha Chun, Jonathan Shen, Ye Jia, Yu Zhang, Yonghui Wu
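The duration-based upsampling step maps one encoding per phoneme to one encoding per spectrogram frame. A minimal NumPy sketch of that step alone; the interval and auxiliary attention-context machinery from the abstract is omitted:

```python
import numpy as np

def upsample(phoneme_encodings, durations):
    """Repeat each phoneme encoding for its predicted number of output frames."""
    return np.repeat(phoneme_encodings, durations, axis=0)

encodings = np.random.randn(3, 8)          # 3 phonemes, 8-dim encodings
frames = upsample(encodings, [2, 5, 3])    # predicted durations in frames
print(frames.shape)                        # (10, 8): ready for spectrogram decoding
```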
-
Publication number: 20220292141
Abstract: A quick application startup method and a related apparatus are provided. The method includes: An electronic device requests an acceleration script of one or more quick applications from an application server. A first operation for a target quick application is detected. In response to the first operation, the electronic device requests an application package of the target quick application from the application server. An acceleration script of the target quick application is included in the acceleration script of the one or more quick applications. In response to the first operation, the electronic device runs the acceleration script of the target quick application to obtain a first URL, and obtains first data based on the first URL. The electronic device may generate and display a first screen of the target quick application based on the first data.
Type: Application
Filed: August 29, 2020
Publication date: September 15, 2022
Inventors: Litao Yu, Yonghui Wu, Fei Sun, Guoqiang Li
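The speed-up comes from overlap: the acceleration script yields the first screen's URL immediately, so the data fetch can run in parallel with the package download. A hypothetical sketch of that scheduling; all names and URLs here are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    return f"<payload from {url}>"        # stand-in for a real HTTP GET

def launch_quick_app(app_id, acceleration_scripts):
    first_url = acceleration_scripts[app_id]()          # script computes the first URL
    with ThreadPoolExecutor(max_workers=2) as pool:
        package = pool.submit(fetch, f"https://app-server.example/pkg/{app_id}")
        first_data = pool.submit(fetch, first_url)      # fetched before the app is up
        return package.result(), first_data.result()    # render first screen from these

print(launch_quick_app("news", {"news": lambda: "https://news.example/home.json"}))
```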
-
Publication number: 20220254065
Abstract: Provided are a camera calibration method, a camera calibration apparatus, and an electronic device. The method includes: obtaining a calibration board image, where the calibration board image includes a plurality of annular patterns; obtaining an inner edge and an outer edge of each annular pattern in the calibration board image; determining image coordinates of a center point of each annular pattern according to the inner edge and the outer edge of each annular pattern; and determining internal and external parameters of a camera according to the image coordinates and corresponding world coordinates of the center point of each annular pattern. This improves the accuracy of camera calibration.
Type: Application
Filed: January 31, 2022
Publication date: August 11, 2022
Applicant: Shenzhen Goodix Technology Co., Ltd.
Inventors: Yonghui WU, Jinzhu YAO
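Using both edges makes the center estimate robust to bias on either contour alone. A minimal sketch of the center step, assuming the edge points have already been extracted; the intrinsic/extrinsic solve would then go to a standard routine such as OpenCV's calibrateCamera:

```python
import numpy as np

def ring_center(inner_edge, outer_edge):
    """Average the centroids of the inner and outer edge point sets (N, 2)."""
    return (inner_edge.mean(axis=0) + outer_edge.mean(axis=0)) / 2.0

# Synthetic ring centered at (40, 60) with radii 10 and 20:
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
inner = np.stack([40 + 10 * np.cos(theta), 60 + 10 * np.sin(theta)], axis=1)
outer = np.stack([40 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)], axis=1)
print(ring_center(inner, outer))   # ~[40. 60.]
```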
-
Publication number: 20220246132
Abstract: A method of generating diverse and natural text-to-speech (TTS) samples includes receiving a text and generating a speech sample based on the text using a TTS model. A training process trains the TTS model to generate the speech sample by receiving training samples. Each training sample includes a spectrogram and a training text corresponding to the spectrogram. For each training sample, the training process identifies speech units associated with the training text. For each speech unit, the training process generates a speech embedding, aligns the speech embedding with a portion of the spectrogram, extracts a latent feature from the aligned portion of the spectrogram, and assigns a quantized embedding to the latent feature. The training process generates the speech sample by decoding a concatenation of the speech embeddings and the quantized embeddings for the speech units associated with the training text corresponding to the spectrogram.
Type: Application
Filed: January 29, 2021
Publication date: August 4, 2022
Applicant: Google LLC
Inventors: Yu Zhang, Bhuvana Ramabhadran, Andrew Rosenberg, Yonghui Wu, Byungha Chun, Ron Weiss, Yuan Cao
-
Publication number: 20220207321
Abstract: Systems and methods can utilize a conformer model to process a data set for various data processing tasks, including, but not limited to, speech recognition, sound separation, protein synthesis determination, video or other image set analysis, and natural language processing. The conformer model can use feed-forward blocks, a self-attention block, and a convolution block to process data to learn global interactions and relative-offset-based local correlations of the input data.
Type: Application
Filed: December 31, 2020
Publication date: June 30, 2022
Inventors: Anmol Gulati, Ruoming Pang, Niki Parmar, Jiahui Yu, Wei Han, Chung-Cheng Chiu, Yu Zhang, Yonghui Wu, Shibo Wang, Weikeng Qin, Zhengdong Zhang
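The published Conformer block has a characteristic macaron ordering: half-step feed-forward, self-attention, convolution, half-step feed-forward, final LayerNorm. A condensed PyTorch sketch of that ordering; dimensions and kernel size are placeholders, and refinements such as relative positional encoding and dropout are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvModule(nn.Module):
    def __init__(self, d, kernel):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.pw1 = nn.Conv1d(d, 2 * d, 1)                                 # pointwise, pre-GLU
        self.dw = nn.Conv1d(d, d, kernel, padding=kernel // 2, groups=d)  # depthwise
        self.bn = nn.BatchNorm1d(d)
        self.pw2 = nn.Conv1d(d, d, 1)

    def forward(self, x):                        # x: (batch, time, d)
        y = self.norm(x).transpose(1, 2)         # Conv1d wants (batch, d, time)
        y = F.glu(self.pw1(y), dim=1)            # gated linear unit
        y = self.pw2(F.silu(self.bn(self.dw(y))))
        return y.transpose(1, 2)

class ConformerBlock(nn.Module):
    def __init__(self, d=144, heads=4, kernel=31):
        super().__init__()
        ff = lambda: nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                   nn.SiLU(), nn.Linear(4 * d, d))
        self.ff1, self.ff2 = ff(), ff()
        self.attn_norm = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv = ConvModule(d, kernel)
        self.final_norm = nn.LayerNorm(d)

    def forward(self, x):                        # macaron: FFN/2, MHSA, Conv, FFN/2
        x = x + 0.5 * self.ff1(x)
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a)[0]
        x = x + self.conv(x)
        x = x + 0.5 * self.ff2(x)
        return self.final_norm(x)

print(ConformerBlock()(torch.randn(2, 50, 144)).shape)   # torch.Size([2, 50, 144])
```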
-
Patent number: 11335321
Abstract: A method of building a text-to-speech (TTS) system from a small amount of speech data includes receiving a first plurality of recorded speech samples from an assortment of speakers and a second plurality of recorded speech samples from a target speaker where the assortment of speakers does not include the target speaker. The method further includes training a TTS model using the first plurality of recorded speech samples from the assortment of speakers. Here, the trained TTS model is configured to output synthetic speech as an audible representation of a text input. The method also includes re-training the trained TTS model using the second plurality of recorded speech samples from the target speaker combined with the first plurality of recorded speech samples from the assortment of speakers. Here, the re-trained TTS model is configured to output synthetic speech resembling speaking characteristics of the target speaker.
Type: Grant
Filed: August 28, 2020
Date of Patent: May 17, 2022
Assignee: Google LLC
Inventors: Ye Jia, Byungha Chun, Yusuke Oda, Norman Casagrande, Tejas Iyer, Fan Luo, Russell John Wyatt Skerry-Ryan, Jonathan Shen, Yonghui Wu, Yu Zhang
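During the re-training stage, a common tactic, assumed here since the abstract says only that the two sample sets are combined, is to over-sample the target speaker inside each batch so the few target recordings are not swamped:

```python
import random

def mixed_batches(assorted, target, batch_size=4, target_fraction=0.5, steps=3):
    """Yield re-training batches that mix target-speaker samples into every batch
    (the mixing ratio is a hypothetical choice, not from the patent)."""
    n_target = max(1, int(batch_size * target_fraction))
    for _ in range(steps):
        yield random.sample(target, n_target) + random.sample(assorted, batch_size - n_target)

assorted = [f"assorted_{i}" for i in range(100)]   # many speakers
target = [f"target_{i}" for i in range(5)]         # small target-speaker set
for batch in mixed_batches(assorted, target):
    print(batch)
```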
-
Patent number: 11335333
Abstract: A method includes obtaining audio data for a long-form utterance and segmenting the audio data for the long-form utterance into a plurality of overlapping segments. The method also includes, for each overlapping segment of the plurality of overlapping segments: providing features indicative of acoustic characteristics of the long-form utterance represented by the corresponding overlapping segment as input to an encoder neural network; processing an output of the encoder neural network using an attender neural network to generate a context vector; and generating word elements using the context vector and a decoder neural network. The method also includes generating a transcription for the long-form utterance by merging the word elements from the plurality of overlapping segments and providing the transcription as an output of the automated speech recognition system.
Type: Grant
Filed: December 17, 2019
Date of Patent: May 17, 2022
Assignee: Google LLC
Inventors: Wei Han, Chung-Cheng Chiu, Yu Zhang, Yonghui Wu, Patrick Nguyen, Sergey Kishchenko
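Overlap lets neighboring segments share context so their word elements can be merged consistently at the seams. A small sketch of the segmentation step; the segment length and overlap here are arbitrary:

```python
def overlapping_segments(frames, seg_len=8, overlap=2):
    """Slice a long utterance into segments that share `overlap` frames."""
    step = seg_len - overlap
    return [frames[i:i + seg_len] for i in range(0, max(len(frames) - overlap, 1), step)]

frames = list(range(20))                 # 20 acoustic frames of a long-form utterance
for seg in overlapping_segments(frames):
    print(seg)                           # [0..7], [6..13], [12..19]
```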
-
Publication number: 20220130374
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer-readable media, for speech recognition using multi-dialect and multilingual models. In some implementations, audio data indicating audio characteristics of an utterance is received. Input features determined based on the audio data are provided to a speech recognition model that has been trained to output scores indicating the likelihood of linguistic units for each of multiple different languages or dialects. The speech recognition model can be one that has been trained using cluster adaptive training. Output generated by the speech recognition model in response to the input features is received. A transcription of the utterance generated based on the output of the speech recognition model is provided.
Type: Application
Filed: January 10, 2022
Publication date: April 28, 2022
Applicant: Google LLC
Inventors: Zhifeng Chen, Bo Li, Eugene Weinstein, Yonghui Wu, Pedro J. Moreno Mengibar, Ron J. Weiss, Khe Chai Sim, Tara N. Sainath, Patrick An Phu Nguyen
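Cluster adaptive training can be summarized in one line: the effective weights are an interpolation of basis ("cluster") weights, with the mixing vector chosen per language or dialect. A NumPy sketch under that standard reading; the mixing coefficients are invented:

```python
import numpy as np

def cluster_adapted_weights(bases, mixing):
    """Interpolate basis weight matrices with per-language mixing coefficients."""
    return sum(m * w for m, w in zip(mixing, bases))

rng = np.random.default_rng(1)
bases = [rng.normal(size=(8, 8)) for _ in range(3)]      # three cluster bases
w_en = cluster_adapted_weights(bases, [0.7, 0.2, 0.1])   # hypothetical English mix
w_hi = cluster_adapted_weights(bases, [0.1, 0.3, 0.6])   # hypothetical Hindi mix
print(w_en.shape, w_hi.shape)   # (8, 8) (8, 8)
```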
-
Publication number: 20220122586
Abstract: A computer-implemented method of training a streaming speech recognition model that includes receiving, as input to the streaming speech recognition model, a sequence of acoustic frames. The streaming speech recognition model is configured to learn an alignment probability between the sequence of acoustic frames and an output sequence of vocabulary tokens. The vocabulary tokens include a plurality of label tokens and a blank token. At each output step, the method includes determining a first probability of emitting one of the label tokens and determining a second probability of emitting the blank token. The method also includes generating the alignment probability at a sequence level based on the first probability and the second probability. The method also includes applying a tuning parameter to the alignment probability at the sequence level to maximize the first probability of emitting one of the label tokens.
Type: Application
Filed: September 9, 2021
Publication date: April 21, 2022
Applicant: Google LLC
Inventors: Jiahui Yu, Chung-cheng Chiu, Bo Li, Shuo-yiin Chang, Tara Sainath, Wei Han, Anmol Gulati, Yanzhang He, Arun Narayanan, Yonghui Wu, Ruoming Pang
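The tuning parameter that "maximizes the first probability of emitting one of the label tokens" resembles a latency regularizer: the label-emission term gets extra weight relative to the blank. A deliberately simplified per-step sketch; the loss in the application is defined at the sequence level over full alignments, which this stand-in does not capture:

```python
import numpy as np

def regularized_step_loss(label_logprob, blank_logprob, lam=0.01):
    """Per-step stand-in: base alignment term plus lam times the label term,
    nudging the model toward emitting labels instead of blanks."""
    base = np.logaddexp(label_logprob, blank_logprob)   # log(P(label) + P(blank))
    return float(-(base + lam * label_logprob))

print(regularized_step_loss(np.log(0.4), np.log(0.6)))   # higher loss: blank favored
print(regularized_step_loss(np.log(0.6), np.log(0.4)))   # lower loss: label favored
```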