Patents Assigned to NEOSAPIENCE, INC.
  • Publication number: 20240105160
    Abstract: A method for generating a synthesis voice is provided, which is performed by one or more processors, and includes acquiring a text-to-speech synthesis model trained to generate a synthesis voice for a training text, based on reference voice data and a training style tag represented by natural language, receiving a target text, acquiring a style tag represented by natural language, and inputting the style tag and the target text into the text-to-speech synthesis model and acquiring a synthesis voice for the target text reflecting voice style features related to the style tag.
    Type: Application
    Filed: December 8, 2023
    Publication date: March 28, 2024
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE, Yookyung SHIN, Hyeongju KIM
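The workflow in the abstract above — conditioning synthesis on a free-form, natural-language style tag — can be sketched as follows. The class and method names are invented for illustration and are not NEOSAPIENCE's implementation; a real model would embed the tag with a text encoder and return a waveform rather than a metadata dict.

```python
# Illustrative sketch of style-tag-conditioned synthesis.
# All names here are hypothetical, not the patented system.

class StyleTagTTS:
    """Toy stand-in: maps a natural-language style tag plus target text
    to a 'synthesis voice' descriptor instead of actual audio."""

    def encode_style(self, style_tag: str) -> dict:
        # A real model would embed the tag with a learned text encoder;
        # here we just normalize it into a feature dict.
        return {"style": style_tag.strip().lower()}

    def synthesize(self, target_text: str, style_tag: str) -> dict:
        style = self.encode_style(style_tag)
        # A real model would decode a waveform conditioned on the style
        # embedding; we return the conditioning metadata only.
        return {"text": target_text, **style}

model = StyleTagTTS()
voice = model.synthesize("Hello there", "Whispering and calm")
```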
  • Patent number: 11929059
    Abstract: The present disclosure relates to a text-to-speech synthesis method using machine learning based on a sequential prosody feature. The text-to-speech synthesis method includes receiving input text, receiving a sequential prosody feature, and generating output speech data for the input text reflecting the received sequential prosody feature by inputting the input text and the received sequential prosody feature to an artificial neural network text-to-speech synthesis model.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: March 12, 2024
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
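The "sequential prosody feature" above is a time-aligned sequence rather than a single global vector: each phoneme carries its own prosody values. A minimal sketch of that alignment step, with hypothetical feature contents (pitch only) and function names:

```python
# Sketch: pairing each phoneme with its time-aligned prosody vector.
# The feature contents and names are illustrative assumptions, not the
# patented model's actual inputs.

def apply_sequential_prosody(phonemes, prosody_seq):
    """Pair each phoneme with its prosody features (e.g. pitch, energy,
    duration). The two sequences must have equal length."""
    if len(phonemes) != len(prosody_seq):
        raise ValueError("prosody sequence must align with phoneme sequence")
    return list(zip(phonemes, prosody_seq))

conditioned = apply_sequential_prosody(
    ["HH", "AH", "L", "OW"],
    [{"pitch": 120}, {"pitch": 180}, {"pitch": 150}, {"pitch": 110}],
)
```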
  • Publication number: 20240013771
    Abstract: A speech translation method using a multilingual text-to-speech synthesis model includes receiving input speech data of a first language and an articulatory feature of a speaker regarding the first language, converting the input speech data of the first language into a text of the first language, converting the text of the first language into a text of a second language, and generating output speech data for the text of the second language that simulates the speaker's speech by inputting the text of the second language and the articulatory feature of the speaker to a single artificial neural network text-to-speech synthesis model.
    Type: Application
    Filed: September 22, 2023
    Publication date: January 11, 2024
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE
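The abstract describes a three-stage cascade: speech recognition, text translation, then voice-preserving synthesis. The pipeline can be sketched as below; every function is a toy stand-in (the ASR and MT stubs return fixed strings), not NEOSAPIENCE's implementation.

```python
# Sketch of the cascade: ASR -> machine translation -> TTS conditioned
# on the original speaker's articulatory feature, so the translated
# speech keeps the speaker's voice. All stages are stubs.

def recognize(speech_l1: bytes) -> str:
    return "hello"  # ASR stub: speech of language 1 -> text of language 1

def translate(text_l1: str) -> str:
    return {"hello": "bonjour"}.get(text_l1, text_l1)  # MT stub

def synthesize(text_l2: str, speaker_feature: dict) -> dict:
    # The single TTS model takes both the translated text and the
    # speaker's feature; a real model would emit audio.
    return {"text": text_l2, "speaker": speaker_feature["id"]}

def speech_to_speech(speech_l1: bytes, speaker_feature: dict) -> dict:
    return synthesize(translate(recognize(speech_l1)), speaker_feature)

out = speech_to_speech(b"...", {"id": "spk01"})
```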
  • Patent number: 11810548
    Abstract: A speech translation method using a multilingual text-to-speech synthesis model includes acquiring a single artificial neural network text-to-speech synthesis model trained on a learning text of a first language with corresponding learning speech data of the first language, and a learning text of a second language with corresponding learning speech data of the second language, receiving input speech data of the first language and an articulatory feature of a speaker regarding the first language, converting the input speech data of the first language into a text of the first language, converting the text of the first language into a text of the second language, and generating output speech data for the text of the second language that simulates the speaker's speech.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: November 7, 2023
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Patent number: 11769483
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving an articulatory feature of a speaker regarding a first language, receiving an input text of a second language, and generating output speech data for the input text of the second language that simulates the speaker's speech by inputting the input text of the second language and the articulatory feature of the speaker regarding the first language to a single artificial neural network multilingual text-to-speech synthesis model. The single artificial neural network multilingual text-to-speech synthesis model is generated by learning similarity information between phonemes of the first language and phonemes of the second language based on a first learning data of the first language and a second learning data of the second language.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: September 26, 2023
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
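One way to picture the learned phoneme similarity above is as a lookup that lets a speaker's first-language voice pronounce second-language text: unfamiliar phonemes are replaced by their most similar first-language counterparts. The table below is invented for illustration; the patent learns this relationship inside a neural model rather than as an explicit dictionary.

```python
# Sketch: mapping second-language phonemes to their most similar
# first-language phonemes. The similarity table is a toy assumption.

SIMILARITY = {  # L2 phoneme -> closest L1 phoneme (illustrative only)
    "ʁ": "r",   # French uvular r -> English r
    "y": "u",   # French close front rounded vowel -> English u
}

def map_phonemes(l2_phonemes):
    """Replace L2 phonemes unseen in L1 with their most similar L1
    phoneme; shared phonemes pass through unchanged."""
    return [SIMILARITY.get(p, p) for p in l2_phonemes]

mapped = map_phonemes(["b", "ʁ", "a"])
```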
  • Publication number: 20230206896
    Abstract: The present disclosure relates to a method for applying synthesis voice to a speaker image, in which the method includes receiving an input text, inputting the input text to an artificial neural network text-to-speech synthesis model and outputting voice data for the input text, generating a synthesis voice corresponding to the output voice data, and generating information on a plurality of phonemes included in the output voice data, in which the information on the plurality of phonemes may include timing information for each of the plurality of phonemes included in the output voice data.
    Type: Application
    Filed: February 24, 2023
    Publication date: June 29, 2023
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE, Yookyung SHIN
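The per-phoneme timing information the abstract mentions is what lets mouth shapes of the speaker image be aligned with the synthesized voice. A minimal sketch of building such a timeline from phoneme durations (durations and names are illustrative assumptions):

```python
# Sketch: converting (phoneme, duration_ms) pairs into absolute
# start/end times, the kind of timing info a lip-sync stage would
# consume. Toy code, not the patented implementation.

def phoneme_timeline(phonemes_with_durations):
    """Accumulate durations into absolute start/end timestamps."""
    timeline, t = [], 0
    for phoneme, duration_ms in phonemes_with_durations:
        timeline.append(
            {"phoneme": phoneme, "start_ms": t, "end_ms": t + duration_ms}
        )
        t += duration_ms
    return timeline

tl = phoneme_timeline([("HH", 60), ("AY", 140)])
```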
  • Publication number: 20230186895
    Abstract: A method for performing a synthetic speech generation operation on text is provided, including receiving a plurality of sentences, receiving a plurality of speech style characteristics for the plurality of sentences, inputting the plurality of sentences and the plurality of speech style characteristics into an artificial neural network text-to-speech synthesis model, so as to generate a plurality of synthetic speeches for the plurality of sentences that reflect the plurality of speech style characteristics, and receiving a response to at least one of the plurality of synthetic speeches.
    Type: Application
    Filed: February 10, 2023
    Publication date: June 15, 2023
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE, Suhee JO, Yookyung SHIN
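The batch step above — one style characteristic per sentence — can be sketched as a simple pairing; the function name and the string-valued styles are assumptions for illustration, and a real model would emit one audio clip per pair.

```python
# Sketch: generating a batch of synthetic speeches, each sentence
# reflecting its own speech style characteristic. Toy stand-in.

def batch_synthesize(sentences, style_characteristics):
    """Pair each sentence with its style; a real model would return
    synthesized audio for each pair."""
    if len(sentences) != len(style_characteristics):
        raise ValueError("one style characteristic per sentence is required")
    return [
        {"text": sentence, "style": style}
        for sentence, style in zip(sentences, style_characteristics)
    ]

speeches = batch_synthesize(["Hi.", "Bye."], ["cheerful", "sad"])
```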
  • Patent number: 11664015
    Abstract: A method for searching for content having the same voice as a target speaker from among a plurality of contents includes extracting a feature vector corresponding to the voice of the target speaker, repeatedly selecting a subset of speakers from a training dataset a predetermined number of times, generating linear discriminant analysis (LDA) transformation matrices using each of the selected subsets of speakers, projecting the extracted speaker feature vector onto the corresponding subsets of speakers using each of the generated LDA transformation matrices, assigning a value corresponding to the nearest speaker class in the corresponding subset of speakers to each of the projection regions of the extracted speaker feature vector, generating a hash value corresponding to the extracted feature vector based on the assigned values, and searching for content whose hash value is similar to the generated hash value among the contents.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: May 30, 2023
    Assignee: NEOSAPIENCE, INC.
    Inventors: Suwon Shon, Younggun Lee, Taesu Kim
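The hashing idea above — repeat several rounds, each round picking a speaker subset, projecting the query vector, and recording the nearest speaker class, then concatenating the per-round assignments into a hash — can be sketched as below. For brevity the LDA projection is replaced by plain nearest-centroid distance on 1-D toy features, so this is a structural illustration only, not the patented method.

```python
# Sketch of repeated-subset speaker hashing. A real system would
# project with learned LDA matrices over high-dimensional vectors;
# here, 1-D centroids and absolute distance stand in.

import random

def speaker_hash(feature, speaker_centroids, rounds=4, subset_size=2, seed=0):
    rng = random.Random(seed)          # fixed seed -> reproducible hash
    ids = sorted(speaker_centroids)
    code = []
    for _ in range(rounds):
        subset = rng.sample(ids, subset_size)
        # Assign the nearest speaker class within this round's subset.
        nearest = min(subset, key=lambda s: abs(speaker_centroids[s] - feature))
        code.append(nearest)
    return "|".join(code)

centroids = {"a": 0.1, "b": 1.0, "c": 0.5}
h = speaker_hash(0.9, centroids)
```

Because nearby feature vectors pick the same nearest class in every round, similar voices collide into the same hash, which is what makes lookup by hash value work.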
  • Publication number: 20230067505
    Abstract: A text-to-speech synthesis method using machine learning is disclosed. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text reflecting the articulatory feature of the speaker by inputting the input text and the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model.
    Type: Application
    Filed: October 19, 2022
    Publication date: March 2, 2023
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE
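The key point of the abstract above is one trained model serving many voices, with the voice selected at inference time by the speaker's articulatory feature. A toy sketch (invented names; a real model would decode audio, not a metadata dict):

```python
# Sketch: a single synthesis function conditioned on a per-speaker
# articulatory feature vector. Hypothetical stand-in code.

def synthesize(text, articulatory_feature):
    """Same model, same text; the feature vector selects the voice."""
    return {"text": text, "speaker_feature": tuple(articulatory_feature)}

alice = [0.2, -0.1, 0.7]
bob = [0.5, 0.3, -0.4]

# Same text rendered in two different voices by one model:
out_a = synthesize("good morning", alice)
out_b = synthesize("good morning", bob)
```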
  • Patent number: 11514887
    Abstract: A text-to-speech synthesis method using machine learning is disclosed. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text reflecting the articulatory feature of the speaker by inputting the input text and the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: November 29, 2022
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Publication number: 20220084500
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving an articulatory feature of a speaker regarding a first language, receiving an input text of a second language, and generating output speech data for the input text of the second language that simulates the speaker's speech by inputting the input text of the second language and the articulatory feature of the speaker regarding the first language to a single artificial neural network multilingual text-to-speech synthesis model. The single artificial neural network multilingual text-to-speech synthesis model is generated by learning similarity information between phonemes of the first language and phonemes of the second language based on a first learning data of the first language and a second learning data of the second language.
    Type: Application
    Filed: November 23, 2021
    Publication date: March 17, 2022
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Patent number: 11217224
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving first learning data including a learning text of a first language and learning speech data of the first language corresponding to the learning text of the first language, receiving second learning data including a learning text of a second language and learning speech data of the second language corresponding to the learning text of the second language, and generating a single artificial neural network text-to-speech synthesis model by learning similarity information between phonemes of the first language and phonemes of the second language based on the first learning data and the second learning data.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: January 4, 2022
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
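The phoneme similarity the model above is said to learn can be pictured as proximity between learned phoneme embeddings. The sketch below measures that proximity with cosine similarity over invented 2-D embeddings; the embeddings and values are illustrative assumptions, not learned parameters from the patent.

```python
# Sketch: cross-lingual phoneme similarity as cosine similarity of
# embeddings. The 2-D embedding table is a toy assumption.

import math

def cosine(u, v):
    """Cosine similarity of two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

emb = {
    ("en", "r"): (0.9, 0.1),  # English r
    ("fr", "ʁ"): (0.8, 0.3),  # French uvular r (acoustically close)
    ("fr", "y"): (0.1, 0.9),  # French vowel (acoustically distant)
}

sim_rr = cosine(emb[("en", "r")], emb[("fr", "ʁ")])
sim_ry = cosine(emb[("en", "r")], emb[("fr", "y")])
# The acoustically closer pair should score higher: sim_rr > sim_ry
```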
  • Publication number: 20210280173
    Abstract: A method for searching for content having the same voice as a target speaker from among a plurality of contents includes extracting a feature vector corresponding to the voice of the target speaker, repeatedly selecting a subset of speakers from a training dataset a predetermined number of times, generating linear discriminant analysis (LDA) transformation matrices using each of the selected subsets of speakers, projecting the extracted speaker feature vector onto the corresponding subsets of speakers using each of the generated LDA transformation matrices, assigning a value corresponding to the nearest speaker class in the corresponding subset of speakers to each of the projection regions of the extracted speaker feature vector, generating a hash value corresponding to the extracted feature vector based on the assigned values, and searching for content whose hash value is similar to the generated hash value among the contents.
    Type: Application
    Filed: May 13, 2021
    Publication date: September 9, 2021
    Applicant: NEOSAPIENCE, INC.
    Inventors: Suwon SHON, Younggun LEE, Taesu KIM
  • Publication number: 20210142783
    Abstract: A method for generating synthetic speech for text through a user interface is provided. The method may include receiving one or more sentences, determining a speech style characteristic for the received one or more sentences, and outputting a synthetic speech for the one or more sentences that reflects the determined speech style characteristic. The one or more sentences and the determined speech style characteristic may be inputted to an artificial neural network text-to-speech synthesis model and the synthetic speech may be generated based on the speech data outputted from the artificial neural network text-to-speech synthesis model.
    Type: Application
    Filed: January 20, 2021
    Publication date: May 13, 2021
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE
  • Publication number: 20200394998
    Abstract: The present disclosure relates to a text-to-speech synthesis method using machine learning based on a sequential prosody feature. The text-to-speech synthesis method includes receiving input text, receiving a sequential prosody feature, and generating output speech data for the input text reflecting the received sequential prosody feature by inputting the input text and the received sequential prosody feature to an artificial neural network text-to-speech synthesis model.
    Type: Application
    Filed: August 27, 2020
    Publication date: December 17, 2020
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE
  • Publication number: 20200342852
    Abstract: A speech translation method using a multilingual text-to-speech synthesis model includes acquiring a single artificial neural network text-to-speech synthesis model trained on a learning text of a first language with corresponding learning speech data of the first language, and a learning text of a second language with corresponding learning speech data of the second language, receiving input speech data of the first language and an articulatory feature of a speaker regarding the first language, converting the input speech data of the first language into a text of the first language, converting the text of the first language into a text of the second language, and generating output speech data for the text of the second language that simulates the speaker's speech.
    Type: Application
    Filed: July 10, 2020
    Publication date: October 29, 2020
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu KIM, Younggun LEE
  • Publication number: 20200082807
    Abstract: A text-to-speech synthesis method using machine learning is disclosed. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text reflecting the articulatory feature of the speaker by inputting the input text and the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model.
    Type: Application
    Filed: November 13, 2019
    Publication date: March 12, 2020
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Publication number: 20200082806
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving first learning data including a learning text of a first language and learning speech data of the first language corresponding to the learning text of the first language, receiving second learning data including a learning text of a second language and learning speech data of the second language corresponding to the learning text of the second language, and generating a single artificial neural network text-to-speech synthesis model by learning similarity information between phonemes of the first language and phonemes of the second language based on the first learning data and the second learning data.
    Type: Application
    Filed: November 13, 2019
    Publication date: March 12, 2020
    Applicant: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee