Patents by Inventor Vincent Ping Leung Wan

Vincent Ping Leung Wan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11790274
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to generate embeddings of inputs to the machine learning model, the machine learning model having an encoder that generates the embeddings from the inputs and a decoder that generates outputs from the generated embeddings, wherein the embedding is partitioned into a sequence of embedding partitions that each includes one or more dimensions of the embedding, the operations comprising: for a first embedding partition in the sequence of embedding partitions: performing initial training to train the encoder and a decoder replica corresponding to the first embedding partition; for each particular embedding partition that is after the first embedding partition in the sequence of embedding partitions: performing incremental training to train the encoder and a decoder replica corresponding to the particular partition.
    Type: Grant
    Filed: October 26, 2022
    Date of Patent: October 17, 2023
    Assignee: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
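    Illustrative sketch: a minimal PyTorch rendering of the training schedule this abstract describes, with initial training of the encoder and the first partition's decoder replica, followed by incremental training for each later partition. All names, sizes, and the reconstruction loss are assumptions; the patent publishes no reference code.
      import torch
      import torch.nn as nn

      EMBED_DIM, N_PARTITIONS, IN_DIM = 8, 4, 16
      PART = EMBED_DIM // N_PARTITIONS  # dimensions per embedding partition

      encoder = nn.Linear(IN_DIM, EMBED_DIM)
      # One decoder replica per partition; replica k reconstructs from partitions 0..k.
      replicas = [nn.Linear((k + 1) * PART, IN_DIM) for k in range(N_PARTITIONS)]

      def train_replica(k, steps=100):
          """Jointly train the encoder and decoder replica k on a toy reconstruction loss."""
          opt = torch.optim.Adam(list(encoder.parameters()) + list(replicas[k].parameters()))
          for _ in range(steps):
              x = torch.randn(32, IN_DIM)          # random data standing in for real inputs
              z = encoder(x)[:, : (k + 1) * PART]  # keep only the leading partitions
              loss = ((replicas[k](z) - x) ** 2).mean()
              opt.zero_grad(); loss.backward(); opt.step()

      train_replica(0)                  # initial training: first embedding partition
      for k in range(1, N_PARTITIONS):  # incremental training for each later partition
          train_replica(k)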
  • Patent number: 11664011
    Abstract: A method of providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word and selecting a mel spectral embedding for the text utterance. Each word has at least one syllable and each syllable has at least one phoneme. For each phoneme, the method further includes using the selected mel spectral embedding to: (i) predict a duration of the corresponding phoneme based on corresponding linguistic features associated with the word that includes the corresponding phoneme and corresponding linguistic features associated with the syllable that includes the corresponding phoneme; and (ii) generate a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
    Type: Grant
    Filed: February 9, 2022
    Date of Patent: May 30, 2023
    Assignee: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
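    Illustrative sketch: the two per-phoneme steps from the abstract, predicting a frame count from linguistic features plus the utterance-level mel spectral embedding, then emitting that many fixed-length mel frames. The module shapes and the positional input are invented for illustration, not taken from the patent.
      import torch
      import torch.nn as nn

      FEAT_DIM, EMBED_DIM, N_MELS = 10, 16, 80

      duration_net = nn.Linear(FEAT_DIM + EMBED_DIM, 1)        # (i) duration in frames
      frame_net = nn.Linear(FEAT_DIM + EMBED_DIM + 1, N_MELS)  # (ii) one mel frame at a time

      def phoneme_to_frames(linguistic_feats, mel_embedding):
          cond = torch.cat([linguistic_feats, mel_embedding])
          n_frames = max(1, int(duration_net(cond).relu().item()))
          frames = []
          for t in range(n_frames):
              pos = torch.tensor([t / n_frames])  # frame position within the phoneme
              frames.append(frame_net(torch.cat([cond, pos])))
          return torch.stack(frames)              # (n_frames, N_MELS) fixed-length frames

      mels = phoneme_to_frames(torch.randn(FEAT_DIM), torch.randn(EMBED_DIM))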
  • Publication number: 20230060886
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to generate embeddings of inputs to the machine learning model, the machine learning model having an encoder that generates the embeddings from the inputs and a decoder that generates outputs from the generated embeddings, wherein the embedding is partitioned into a sequence of embedding partitions that each includes one or more dimensions of the embedding, the operations comprising: for a first embedding partition in the sequence of embedding partitions: performing initial training to train the encoder and a decoder replica corresponding to the first embedding partition; for each particular embedding partition that is after the first embedding partition in the sequence of embedding partitions: performing incremental training to train the encoder and a decoder replica corresponding to the particular partition.
    Type: Application
    Filed: October 26, 2022
    Publication date: March 2, 2023
    Applicant: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Patent number: 11494695
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to generate embeddings of inputs to the machine learning model, the machine learning model having an encoder that generates the embeddings from the inputs and a decoder that generates outputs from the generated embeddings, wherein the embedding is partitioned into a sequence of embedding partitions that each includes one or more dimensions of the embedding, the operations comprising: for a first embedding partition in the sequence of embedding partitions: performing initial training to train the encoder and a decoder replica corresponding to the first embedding partition; for each particular embedding partition that is after the first embedding partition in the sequence of embedding partitions: performing incremental training to train the encoder and a decoder replica corresponding to the particular partition.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 8, 2022
    Assignee: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Publication number: 20220172705
    Abstract: A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word, and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
    Type: Application
    Filed: February 9, 2022
    Publication date: June 2, 2022
    Applicant: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Patent number: 11264010
    Abstract: A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word, and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: March 1, 2022
    Assignee: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Patent number: 11144597
    Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about the subject …
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: October 12, 2021
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
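    Illustrative sketch: the claimed architecture reduced to its three parts, a personality storage section, a dialogue section that answers queries from it, and a talking head generation section. Class names and the keyword retrieval are invented stand-ins for illustration.
      class PersonalityStore:
          """Content created by or about the subject, held for retrieval."""
          def __init__(self, documents):
              self.documents = documents
          def retrieve(self, query):
              # naive substring match standing in for a real retrieval model
              return next((d for d in self.documents if query.lower() in d.lower()),
                          "I don't have an answer for that.")

      class DialogueSection:
          def __init__(self, store):
              self.store = store
          def respond(self, query):
              return self.store.retrieve(query)

      class TalkingHeadGenerator:
          def render(self, text):
              # in the real system this drives the subject's face and voice
              print(f"[talking head says] {text}")

      store = PersonalityStore(["My favourite colour is blue."])
      head = TalkingHeadGenerator()
      head.render(DialogueSection(store).respond("favourite colour"))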
  • Publication number: 20210097427
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a machine learning model to generate embeddings of inputs to the machine learning model, the machine learning model having an encoder that generates the embeddings from the inputs and a decoder that generates outputs from the generated embeddings, wherein the embedding is partitioned into a sequence of embedding partitions that each includes one or more dimensions of the embedding, the operations comprising: for a first embedding partition in the sequence of embedding partitions: performing initial training to train the encoder and a decoder replica corresponding to the first embedding partition; for each particular embedding partition that is after the first embedding partition in the sequence of embedding partitions: performing incremental training to train the encoder and a decoder replica corresponding to the particular partition.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 1, 2021
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Publication number: 20200074985
    Abstract: A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word, and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Applicant: Google LLC
    Inventors: Robert Andrew James Clark, Chun-an Chan, Vincent Ping Leung Wan
  • Patent number: 10373604
    Abstract: An acoustic model is adapted, relating acoustic units to speech vectors. The acoustic model comprises a set of acoustic model parameters related to a given speech factor. The acoustic model parameters enable the acoustic model to output speech vectors with different values of the speech factor. The method comprises inputting a sample of speech, corrupted by noise, with a first value of the speech factor; determining values of the set of acoustic model parameters which enable the acoustic model to output speech with said first value of the speech factor; and employing said determined values of the set of speech factor parameters in said acoustic model. The acoustic model parameters are obtained by obtaining corrupted speech factor parameters using the sample of speech, and mapping the corrupted speech factor parameters to clean acoustic model parameters using noise characterization parameters characterizing the noise.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: August 6, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kayoko Yanagisawa
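    Illustrative sketch: the adaptation idea in toy form, characterising noise from non-speech frames, estimating corrupted speech factor parameters from the noisy sample, and mapping them toward clean values. The linear subtraction is an assumed mapping chosen for simplicity; the patent does not commit to it.
      import numpy as np

      def characterize_noise(noisy, speech_mask):
          """Estimate a noise vector from frames the mask marks as non-speech."""
          return noisy[~speech_mask].mean(axis=0)

      def map_to_clean(corrupted_params, noise_params, scale=1.0):
          """Map corrupted parameters toward clean ones using the noise characterization."""
          return corrupted_params - scale * noise_params

      rng = np.random.default_rng(0)
      noisy_frames = rng.normal(size=(100, 4)) + 0.5  # speech features plus constant noise
      mask = rng.random(100) < 0.6                    # True where speech is present
      noise = characterize_noise(noisy_frames, mask)
      corrupted = noisy_frames[mask].mean(axis=0)     # crude speech factor estimate
      clean = map_to_clean(corrupted, noise)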
  • Patent number: 10249289
    Abstract: Methods, systems, and computer-readable media for text-to-speech synthesis using an autoencoder. In some implementations, data indicating a text for text-to-speech synthesis is obtained. Data indicating a linguistic unit of the text is provided as input to an encoder. The encoder is configured to output speech unit representations indicative of acoustic characteristics based on linguistic information. A speech unit representation that the encoder outputs is received. A speech unit is selected to represent the linguistic unit, the speech unit being selected from among a collection of speech units based on the speech unit representation output by the encoder. Audio data for a synthesized utterance of the text that includes the selected speech unit is provided.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: April 2, 2019
    Assignee: Google LLC
    Inventors: Byung Ha Chun, Javier Gonzalvo, Chun-an Chan, Ioannis Agiomyrgiannakis, Vincent Ping Leung Wan, Robert Andrew James Clark, Jakub Vit
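    Illustrative sketch: the selection step only, where the encoder maps a linguistic unit to a speech unit representation and the nearest unit in a database is chosen. The linear encoder, the Euclidean distance, and all sizes are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      W = rng.normal(size=(12, 6))         # stand-in for a trained encoder

      def encode(linguistic_unit):
          return linguistic_unit @ W       # speech unit representation

      unit_db = rng.normal(size=(500, 6))  # embeddings of recorded speech units

      def select_unit(linguistic_unit):
          target = encode(linguistic_unit)
          dists = np.linalg.norm(unit_db - target, axis=1)
          return int(dists.argmin())       # index of the selected speech unit

      idx = select_unit(rng.normal(size=12))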
  • Publication number: 20180268806
    Abstract: Methods, systems, and computer-readable media for text-to-speech synthesis using an autoencoder. In some implementations, data indicating a text for text-to-speech synthesis is obtained. Data indicating a linguistic unit of the text is provided as input to an encoder. The encoder is configured to output speech unit representations indicative of acoustic characteristics based on linguistic information. A speech unit representation that the encoder outputs is received. A speech unit is selected to represent the linguistic unit, the speech unit being selected from among a collection of speech units based on the speech unit representation output by the encoder. Audio data for a synthesized utterance of the text that includes the selected speech unit is provided.
    Type: Application
    Filed: July 13, 2017
    Publication date: September 20, 2018
    Inventors: Byung Ha Chun, Javier Gonzalvo, Chun-an Chan, Ioannis Agiomyrgiannakis, Vincent Ping Leung Wan, Robert Andrew James Clark, Jakub Vit
  • Publication number: 20180203946
    Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about the subject …
    Type: Application
    Filed: March 16, 2018
    Publication date: July 19, 2018
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
  • Patent number: 9959657
    Abstract: A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, said method comprising: providing an input related to the speech which is to be output by the movement of the lips; dividing said input into a sequence of acoustic units; selecting expression characteristics for the inputted text; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector, said image vector comprising a plurality of parameters which define a face of said head; and outputting said sequence of image vectors as video such that the mouth of said head moves to mime the speech associated with the input text with the selected expression, wherein a parameter of a predetermined type of each probability distribution in said selected expression is expressed as a weighted sum of parameters …
    Type: Grant
    Filed: January 29, 2014
    Date of Patent: May 1, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Bjorn Stenger, Robert Anderson, Roberto Cipolla
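    Illustrative sketch: the core claim that a parameter of each probability distribution is a weighted sum of shared component parameters, with the weights chosen by the selected expression. The two expressions, the weights, and the shapes are invented.
      import numpy as np

      N_COMPONENTS, PARAM_DIM = 3, 5
      components = np.random.randn(N_COMPONENTS, PARAM_DIM)  # shared parameter bank

      expression_weights = {
          "neutral": np.array([1.0, 0.0, 0.0]),
          "happy":   np.array([0.6, 0.4, 0.0]),
      }

      def distribution_mean(expression):
          w = expression_weights[expression]
          return w @ components        # weighted sum gives one distribution's parameter

      mu = distribution_mean("happy")  # would drive the image-vector statistical model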
  • Patent number: 9959368
    Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about the subject …
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: May 1, 2018
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
  • Publication number: 20170221479
    Abstract: A method of adapting an acoustic model relating acoustic units to speech vectors, wherein said acoustic model comprises a set of speech factor parameters related to a given speech factor and which enable the acoustic model to output speech vectors with different values of the speech factor, the method comprising: inputting a sample of speech with a first value of the speech factor; determining values of the set of speech factor parameters which enable the acoustic model to output speech with said first value of the speech factor; and employing said determined values of the set of speech factor parameters in said acoustic model, wherein said sample of speech is corrupted by noise, and wherein said step of determining the values of the set of speech factor parameters comprises: (i) obtaining noise characterization parameters characterising the noise; (ii) performing a speech factor parameter generation algorithm on the sample of speech, thereby generating corrupted values of the set of speech factor parameters …
    Type: Application
    Filed: February 2, 2017
    Publication date: August 3, 2017
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kayoko Yanagisawa
  • Patent number: 9454963
    Abstract: A text-to-speech method for simulating a plurality of different voice characteristics includes dividing inputted text into a sequence of acoustic units; selecting voice characteristics for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model having a plurality of model parameters provided in clusters each having at least one sub-cluster and describing probability distributions which relate an acoustic unit to a speech vector; and outputting the sequence of speech vectors as audio with the selected voice characteristics. A parameter of a predetermined type of each probability distribution is expressed as a weighted sum of parameters of the same type using voice characteristic dependent weighting. In converting the sequence of acoustic units to a sequence of speech vectors, the voice characteristic dependent weights for the selected voice characteristics are retrieved for each cluster such that there is one weight per sub-cluster.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: September 27, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine, Byung Ha Chung
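    Illustrative sketch: retrieving one weight per sub-cluster for a selected voice characteristic and forming the weighted parameter sum, as the abstract describes. The cluster layout and the numbers are illustrative only.
      import numpy as np

      # Two clusters; cluster 0 has one sub-cluster, cluster 1 has two.
      clusters = [
          [np.random.randn(4)],
          [np.random.randn(4), np.random.randn(4)],
      ]
      # One weight per sub-cluster for each voice characteristic.
      voice_weights = {"warm": [[1.0], [0.7, 0.3]]}

      def mixed_parameter(voice):
          total = np.zeros(4)
          for cluster, weights in zip(clusters, voice_weights[voice]):
              for sub, w in zip(cluster, weights):
                  total += w * sub  # weighted sum across sub-cluster parameters
          return total

      param = mixed_parameter("warm")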
  • Patent number: 9361722
    Abstract: A method of animating a computer generation of a head and displaying the text of an electronic book, such that the head has a mouth which moves in accordance with the speech of the text of the electronic book to be output by the head and a word or group of words from the text is displayed while simultaneously being mimed by the mouth, wherein input text is divided into a sequence of acoustic units, which are converted to a sequence of image vectors and into a sequence of text display indicators. The sequence of image vectors is outputted as video such that the mouth of said head moves to mime the speech associated with the input text with a selected expression, and the sequence of text display indicators is output as video which is synchronized with the lip movement of the head.
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: June 7, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Balakrishna Venkata Jagannadha Kolluru, Ioannis Stylianou, Robert Arthur Blokland, Norbert Braunschweiler, Kayoko Yanagisawa, Langzhou Chen, Ranniery Maia, Robert Anderson, Bjorn Stenger, Roberto Cipolla, Neil Baker
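    Illustrative sketch: one plausible form of the text display indicators, showing each word at the start time of its first acoustic unit so the displayed text and the miming mouth stay synchronized. The words and durations are fabricated.
      words = ["Once", "upon", "a", "time"]
      unit_durations = [[0.20, 0.15], [0.18, 0.12], [0.08], [0.22, 0.10]]  # seconds per unit

      t = 0.0
      display_indicators = []
      for word, durs in zip(words, unit_durations):
          display_indicators.append((round(t, 2), word))  # show the word as miming begins
          t += sum(durs)

      print(display_indicators)
      # [(0.0, 'Once'), (0.35, 'upon'), (0.65, 'a'), (0.73, 'time')]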
  • Patent number: 9269347
    Abstract: A text-to-speech method configured to output speech having a selected speaker voice and a selected speaker attribute, including: inputting text; dividing the inputted text into a sequence of acoustic units; selecting a speaker for the inputted text; selecting a speaker attribute for the inputted text; converting the sequence of acoustic units to a sequence of speech vectors using an acoustic model; and outputting the sequence of speech vectors as audio with the selected speaker voice and a selected speaker attribute. The acoustic model includes a first set of parameters relating to speaker voice and a second set of parameters relating to speaker attributes, which parameters do not overlap. The selecting a speaker voice includes selecting parameters from the first set of parameters and the selecting the speaker attribute includes selecting the parameters from the second set of parameters.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 23, 2016
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kean Kheong Chin, Mark John Francis Gales, Katherine Mary Knill, Masami Akamine
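    Illustrative sketch: the non-overlapping split the abstract claims, with one parameter set selecting the speaker voice and a disjoint set selecting the speaker attribute, both conditioning synthesis. All names and values are invented.
      import numpy as np

      speaker_params = {"alice": np.array([0.9, 0.1]), "bob": np.array([0.2, 0.8])}
      attribute_params = {"calm": np.array([0.5]), "excited": np.array([1.5])}

      def synthesize(acoustic_units, speaker, attribute):
          s = speaker_params[speaker]      # first parameter set: speaker voice
          a = attribute_params[attribute]  # second, non-overlapping set: attribute
          # stand-in for the acoustic model: condition each unit on both sets
          return [np.concatenate([u, s, a]) for u in acoustic_units]

      vectors = synthesize([np.zeros(3)], "alice", "excited")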
  • Publication number: 20150052084
    Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about the subject …
    Type: Application
    Filed: August 13, 2014
    Publication date: February 19, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland