Patents by Inventor Mu-Yeol CHOI

Mu-Yeol CHOI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11551012
    Abstract: Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus for providing a personal assistant service based on automatic translation includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor uses a recognition result of the user's command to determine the command's intention, updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model accordingly, and provides the personal assistant service on the basis of an automatic translation call.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 10, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Yun, Sang Hun Kim, Min Kyu Lee, Yun Keun Lee, Mu Yeol Choi, Yeo Jeong Kim, Sang Kyu Park
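
The control flow described in this abstract can be pictured with a short sketch. The following Python snippet is a minimal illustration, assuming hypothetical stub models with recognize/update/translate methods; it is not the patented implementation.

```python
class StubModel:
    """Stand-in for the speech recognition / interpretation / translation models."""
    def recognize(self, audio):
        return "translate hello to korean"
    def update(self, recognition_result):
        pass                      # e.g. adapt the model using the recognized command
    def translate(self, text):
        return "(translated) " + text

class PersonalAssistant:
    def __init__(self):
        # The three updatable models named in the abstract.
        self.models = {
            "speech_recognition": StubModel(),
            "automatic_interpretation": StubModel(),
            "automatic_translation": StubModel(),
        }

    def infer_intention(self, recognized_text):
        # Placeholder intent classifier; a real system would use an NLU component.
        text = recognized_text.lower()
        if "translate" in text:
            return "automatic_translation"
        if "interpret" in text:
            return "automatic_interpretation"
        return "speech_recognition"

    def handle_command(self, audio_command):
        recognized = self.models["speech_recognition"].recognize(audio_command)
        intention = self.infer_intention(recognized)
        # Update the model selected by the command's intention, using the recognition result.
        self.models[intention].update(recognized)
        # Serve the request through an automatic translation call.
        return self.models["automatic_translation"].translate(recognized)

print(PersonalAssistant().handle_command(b"raw-audio-bytes"))
```
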
  • Publication number: 20210004542
    Abstract: Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus for providing a personal assistant service based on automatic translation includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor uses a recognition result of the user's command to determine the command's intention, updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model accordingly, and provides the personal assistant service on the basis of an automatic translation call.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 7, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung YUN, Sang Hun KIM, Min Kyu LEE, Yun Keun LEE, Mu Yeol CHOI, Yeo Jeong KIM, Sang Kyu PARK
  • Patent number: 10558763
    Abstract: An automatic translation device includes a communications module transmitting and receiving data to and from an ear-set device including a speaker, a first microphone, and a second microphone, a memory storing a program generating a result of translation using a dual-channel audio signal, and a processor executing the program stored in the memory. When the program is executed, the processor compares a first audio signal including a voice signal of a user, received using the first microphone, with a second audio signal including a noise signal and the voice signal of the user, received using the second microphone, and entirely or selectively extracts the voice signal of the user from the first and second audio signals, based on a result of the comparison, to perform automatic translation.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: February 11, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Mu Yeol Choi, Min Kyu Lee, Sang Hun Kim, Seung Yun
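
As a rough illustration of the dual-channel idea in this abstract, the sketch below compares a voice-dominant channel with a voice-plus-noise channel frame by frame and keeps the frames where the voice clearly dominates. The frame length, threshold, and energy-ratio rule are assumptions for illustration, not the patented method.

```python
import numpy as np

def frame_energies(signal, frame_len):
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return (frames ** 2).mean(axis=1), frames

def extract_user_voice(ch1, ch2, frame_len=160, ratio=1.5):
    """ch1: first-microphone signal (user's voice); ch2: second-microphone
    signal (noise plus the user's voice). Keeps frames where the voice
    channel's energy stands out against a crude noise estimate."""
    e1, frames1 = frame_energies(ch1, frame_len)
    e2, _ = frame_energies(ch2, frame_len)
    noise_estimate = np.maximum(e2 - e1, 1e-8)   # rough per-frame noise energy
    keep = e1 > ratio * noise_estimate
    return frames1[keep].ravel()

# Toy usage with synthetic signals.
rng = np.random.default_rng(0)
voice = np.sin(np.linspace(0, 200 * np.pi, 16000))   # user's voice (sine stand-in)
noise = 0.5 * rng.standard_normal(16000)             # ambient noise
clean = extract_user_voice(voice, voice + noise)
print(len(clean), "samples retained")
```
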
  • Patent number: 10489515
    Abstract: Provided is a method of providing an automatic speech translation service. The method includes, by an automatic speech translation device of a user, searching for and finding a nearby automatic speech translation device based on strength of a signal for wireless communication, exchanging information for automatic speech translation with the found automatic speech translation device, generating a list of candidate devices for the automatic speech translation using the automatic speech translation information and the signal strength, and connecting to a candidate device having the greatest variation of the signal strength among devices in the generated list.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: November 26, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Mu Yeol Choi, Sang Hun Kim, Young Jik Lee, Jun Park, Seung Yun
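
The selection rule in this abstract lends itself to a brief sketch: filter nearby devices by the exchanged translation-capability information, then connect to the one whose signal strength varies the most. The data layout and the use of statistics.pvariance below are illustrative assumptions.

```python
from statistics import pvariance

def choose_translation_peer(scans):
    """scans: {device_id: {"supports_translation": bool, "rssi_samples": [dBm, ...]}}"""
    candidates = {
        dev: info["rssi_samples"]
        for dev, info in scans.items()
        if info["supports_translation"] and len(info["rssi_samples"]) >= 2
    }
    if not candidates:
        return None
    # Connect to the candidate with the greatest variation of signal strength.
    return max(candidates, key=lambda dev: pvariance(candidates[dev]))

scans = {
    "phone-A":  {"supports_translation": True,  "rssi_samples": [-70, -69, -70]},
    "phone-B":  {"supports_translation": True,  "rssi_samples": [-80, -65, -55]},
    "tablet-C": {"supports_translation": False, "rssi_samples": [-40, -41]},
}
print(choose_translation_peer(scans))   # -> phone-B
```
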
  • Patent number: 10298736
    Abstract: A voice signal processing apparatus includes: an input unit which receives a voice signal of a user; a detecting unit which detects an auxiliary signal; and a signal processing unit which transmits the voice signal to an external terminal in a first operation mode and transmits the voice signal and the auxiliary signal to the external terminal using the same or different protocols in a second operation mode.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: May 21, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Min Kyu Lee, Sang Hun Kim, Young Ik Kim, Dong Hyun Kim, Mu Yeol Choi
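
A minimal sketch of the two operation modes described above, assuming made-up protocol names and packet layout; a real apparatus would transmit the packets to the external terminal.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Packet:
    protocol: str     # e.g. "voice-stream" or "aux-stream" (assumed names)
    payload: bytes

def process(voice: bytes, aux: Optional[bytes], mode: int,
            same_protocol: bool = True) -> List[Packet]:
    # First operation mode: only the voice signal is forwarded.
    packets = [Packet("voice-stream", voice)]
    # Second operation mode: the auxiliary signal is forwarded as well,
    # over the same protocol or a different one.
    if mode == 2 and aux is not None:
        packets.append(Packet("voice-stream" if same_protocol else "aux-stream", aux))
    return packets    # a real apparatus would transmit these to the external terminal

for pkt in process(b"\x01\x02", b"\x99", mode=2, same_protocol=False):
    print(pkt.protocol, pkt.payload)
```
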
  • Patent number: 10249294
    Abstract: A speech recognition method capable of automatic generation of phones according to the present invention includes: unsupervisedly learning a feature vector of speech data; generating a phone set by clustering acoustic features selected based on an unsupervised learning result; allocating a sequence of phones to the speech data on the basis of the generated phone set; and generating an acoustic model on the basis of the sequence of phones and the speech data to which the sequence of phones is allocated.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: April 2, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Dong Hyun Kim, Young Jik Lee, Sang Hun Kim, Seung Hi Kim, Min Kyu Lee, Mu Yeol Choi
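
The pipeline in this abstract (cluster acoustic features into a phone set, then label frames with the nearest cluster) can be illustrated with a toy k-means over placeholder feature vectors. Real systems would use learned acoustic features and train a proper acoustic model afterwards; everything below is a simplified stand-in.

```python
import numpy as np

def kmeans(features, n_phones, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_phones, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid (candidate phone).
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_phones):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return centroids, labels

# Placeholder "feature vectors of speech data" (e.g. frame-level representations).
rng = np.random.default_rng(1)
features = rng.standard_normal((500, 13))

phone_set, phone_sequence = kmeans(features, n_phones=8)
print("phone inventory size:", len(phone_set))
print("first 20 frames as phones:", phone_sequence[:20])
# An acoustic model would then be trained on the speech data paired with phone_sequence.
```
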
  • Patent number: 10216729
    Abstract: A user terminal, a hands-free device, and a method for a hands-free automatic interpretation service are disclosed. The user terminal includes an interpretation environment initialization unit, an interpretation intermediation unit, and an interpretation processing unit. The interpretation environment initialization unit performs pairing with a hands-free device in response to a request from the hands-free device, and initializes an interpretation environment. The interpretation intermediation unit sends interpretation results obtained by interpreting a user's voice information received from the hands-free device to a counterpart terminal, and receives interpretation results obtained by interpreting a counterpart's voice information from the counterpart terminal.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: February 26, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sang-Hun Kim, Ki-Hyun Kim, Ji-Hyun Wang, Dong-Hyun Kim, Seung Yun, Min-Kyu Lee, Dam-Heo Lee, Mu-Yeol Choi
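
A control-flow sketch of the three units named in this abstract, with pairing, interpretation, and the counterpart link all stubbed out; the class and method names are illustrative assumptions.

```python
class UserTerminal:
    def __init__(self, language):
        self.language = language
        self.paired_device = None
        self.counterpart = None

    # Interpretation environment initialization unit: pair and set up the session.
    def pair(self, hands_free_device, counterpart_terminal):
        self.paired_device = hands_free_device
        self.counterpart = counterpart_terminal

    # Interpretation processing unit (stub "interpretation").
    def interpret(self, voice_text, target_language):
        return f"[{target_language}] {voice_text}"

    # Interpretation intermediation unit: forward results to the counterpart terminal.
    def on_voice_from_headset(self, voice_text):
        result = self.interpret(voice_text, self.counterpart.language)
        self.counterpart.receive(result)

    def receive(self, interpreted_text):
        print(f"{self.language} terminal receives:", interpreted_text)

a, b = UserTerminal("ko"), UserTerminal("en")
a.pair("earset-1", b)
b.pair("earset-2", a)
a.on_voice_from_headset("annyeonghaseyo")
```
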
  • Publication number: 20190042565
    Abstract: An automatic translation device includes a communications module transmitting and receiving data to and from an ear-set device including a speaker, a first microphone, and a second microphone, a memory storing a program generating a result of translation using a dual-channel audio signal, and a processor executing the program stored in the memory. When the program is executed, the processor compares a first audio signal including a voice signal of a user, received using the first microphone, with a second audio signal including a noise signal and the voice signal of the user, received using the second microphone, and entirely or selectively extracts the voice signal of the user from the first and second audio signals, based on a result of the comparison, to perform automatic translation.
    Type: Application
    Filed: June 21, 2018
    Publication date: February 7, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Mu Yeol CHOI, Min Kyu LEE, Sang Hun KIM, Seung YUN
  • Patent number: 10108606
    Abstract: Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system for generating a synthetic sound having characteristics similar to those of an original speaker's voice includes a speech recognition module configured to generate text data by performing speech recognition for an original speech signal of an original speaker and extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: October 23, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Yun, Ki Hyun Kim, Sang Hun Kim, Yun Young Kim, Jeong Se Kim, Min Kyu Lee, Soo Jong Lee, Young Jik Lee, Mu Yeol Choi
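
The abstract describes a recognize, extract-characteristics, translate, and synthesize pipeline. The sketch below mirrors that flow with stub functions; the characteristic fields and parameter names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SpeakerCharacteristics:
    pitch_hz: float
    intensity_db: float
    speech_rate: float          # e.g. syllables per second

def recognize(speech):          # speech recognition module (stub)
    return "hello, nice to meet you"

def extract_characteristics(speech):   # characteristic extraction (stub values)
    return SpeakerCharacteristics(pitch_hz=180.0, intensity_db=62.0, speech_rate=4.5)

def translate(text, target_language):  # automatic translation module (stub)
    return f"({target_language}) {text}"

def synthesize(text, chars):    # speech synthesis module (stub)
    return f"<audio of '{text}' at {chars.pitch_hz} Hz, {chars.speech_rate} syl/s>"

original_speech = b"raw-audio"
text = recognize(original_speech)
chars = extract_characteristics(original_speech)
translation = translate(text, "ko")
print(synthesize(translation, chars))
```
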
  • Publication number: 20180075844
    Abstract: A speech recognition method capable of automatic generation of phones according to the present invention includes: unsupervisedly learning a feature vector of speech data; generating a phone set by clustering acoustic features selected based on an unsupervised learning result; allocating a sequence of phones to the speech data on the basis of the generated phone set; and generating an acoustic model on the basis of the sequence of phones and the speech data to which the sequence of phones is allocated.
    Type: Application
    Filed: July 11, 2017
    Publication date: March 15, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Dong Hyun KIM, Young Jik Lee, Sang Hun Kim, Seung Hi Kim, Min Kyu Lee, Mu Yeol Choi
  • Publication number: 20170255616
    Abstract: Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system for generating a synthetic sound having characteristics similar to those of an original speaker's voice includes a speech recognition module configured to generate text data by performing speech recognition for an original speech signal of an original speaker and extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
    Type: Application
    Filed: July 19, 2016
    Publication date: September 7, 2017
    Inventors: Seung YUN, Ki Hyun KIM, Sang Hun KIM, Yun Young KIM, Jeong Se KIM, Min Kyu LEE, Soo Jong LEE, Young Jik LEE, Mu Yeol CHOI
  • Publication number: 20170013105
    Abstract: A voice signal processing apparatus includes: an input unit which receives a voice signal of a user; a detecting unit which detects an auxiliary signal; and a signal processing unit which transmits the voice signal to an external terminal in a first operation mode and transmits the voice signal and the auxiliary signal to the external terminal using the same or different protocols in a second operation mode.
    Type: Application
    Filed: July 6, 2016
    Publication date: January 12, 2017
    Inventors: Min Kyu LEE, Sang Hun KIM, Young Ik KIM, Dong Hyun KIM, Mu Yeol CHOI
  • Publication number: 20160328391
    Abstract: Provided is a method of providing an automatic speech translation service. The method includes, by an automatic speech translation device of a user, searching for and finding a nearby automatic speech translation device based on strength of a signal for wireless communication, exchanging information for automatic speech translation with the found automatic speech translation device, generating a list of candidate devices for the automatic speech translation using the automatic speech translation information and the signal strength, and connecting to a candidate device having the greatest variation of the signal strength among devices in the generated list.
    Type: Application
    Filed: May 5, 2016
    Publication date: November 10, 2016
    Inventors: Mu Yeol CHOI, Sang Hun KIM, Young Jik LEE, Jun PARK, Seung YUN
  • Publication number: 20160260426
    Abstract: A speech recognition apparatus and method are provided, the method including converting an input signal to acoustic model data, dividing the acoustic model data into a speech model group and a non-speech model group and calculating a first maximum likelihood corresponding to the speech model group and a second maximum likelihood corresponding to the non-speech model group, detecting a speech based on a likelihood ratio (LR) between the first maximum likelihood and the second maximum likelihood, obtaining utterance stop information based on output data of a decoder and dividing the input signal into a plurality of speech intervals based on the utterance stop information, calculating a confidence score of each of the plurality of speech intervals based on information on a prior probability distribution of the acoustic model data, and removing any speech interval whose confidence score is lower than a threshold.
    Type: Application
    Filed: March 2, 2016
    Publication date: September 8, 2016
    Inventors: Young Ik KIM, Sang Hun KIM, Min Kyu LEE, Mu Yeol CHOI
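
The detection rule in this abstract can be sketched numerically: score each frame under a speech model and a non-speech model, mark speech where the log-likelihood ratio exceeds a threshold, and drop intervals whose average score falls below a confidence threshold. The single-Gaussian models and thresholds below are illustrative assumptions, not the published method.

```python
import math

def gaussian_loglik(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def detect_speech(frame_energies, lr_threshold=0.0, conf_threshold=1.0):
    # Hypothetical models: speech frames are louder on average than non-speech frames.
    speech, nonspeech = (5.0, 4.0), (1.0, 1.0)            # (mean, variance)
    llr = [gaussian_loglik(e, *speech) - gaussian_loglik(e, *nonspeech)
           for e in frame_energies]
    flags = [r > lr_threshold for r in llr]

    # Group consecutive speech frames into intervals and score each interval.
    intervals, start = [], None
    for i, is_speech in enumerate(flags + [False]):       # sentinel closes the last run
        if is_speech and start is None:
            start = i
        elif not is_speech and start is not None:
            confidence = sum(llr[start:i]) / (i - start)  # crude confidence stand-in
            if confidence >= conf_threshold:
                intervals.append((start, i))
            start = None
    return intervals

energies = [0.5, 0.8, 4.8, 5.2, 4.9, 0.7, 2.1, 0.6, 5.1, 5.0]
print(detect_speech(energies))    # -> [(2, 5), (8, 10)]
```
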
  • Publication number: 20160210283
    Abstract: A user terminal, a hands-free device, and a method for a hands-free automatic interpretation service are disclosed. The user terminal includes an interpretation environment initialization unit, an interpretation intermediation unit, and an interpretation processing unit. The interpretation environment initialization unit performs pairing with a hands-free device in response to a request from the hands-free device, and initializes an interpretation environment. The interpretation intermediation unit sends interpretation results obtained by interpreting a user's voice information received from the hands-free device to a counterpart terminal, and receives interpretation results obtained by interpreting a counterpart's voice information from the counterpart terminal.
    Type: Application
    Filed: April 30, 2014
    Publication date: July 21, 2016
    Inventors: Sang-Hun KIM, Ki-Hyun KIM, Ji-Hyun WANG, Dong-Hyun KIM, Seung YUN, Min-Kyu LEE, Dam-Heo LEE, Mu-Yeol CHOI
  • Publication number: 20150169551
    Abstract: An apparatus and method for automatic translation are disclosed. In the apparatus for automatic translation, a User Interface (UI) generation unit generates UIs necessary for the start of translation and for the translation process. A translation target input unit receives a translation target to be translated from a user. A translation target translation unit translates the translation target received by the translation target input unit and generates results of translation. A display unit includes a touch panel for outputting the results of translation and the UIs in accordance with the location of the user.
    Type: Application
    Filed: October 23, 2014
    Publication date: June 18, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung YUN, Sang-Hun KIM, Mu-Yeol CHOI
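
A brief sketch of the four units named in this abstract, with the translator stubbed and a simple four-way rotation table standing in for orienting the display toward the user's location; all names and values are assumptions.

```python
ROTATION_BY_LOCATION = {"front": 0, "right": 90, "back": 180, "left": 270}

def generate_ui():                         # UI generation unit
    return ["start_button", "language_selector", "result_pane"]

def translate(target_text, target_lang):   # translation target translation unit (stub)
    return f"({target_lang}) {target_text}"

def display(result_text, user_location):   # display unit with touch panel
    # Rotate the output toward wherever the user is standing around the panel.
    rotation = ROTATION_BY_LOCATION.get(user_location, 0)
    return {"widgets": generate_ui(), "text": result_text, "rotation_degrees": rotation}

print(display(translate("Where is the station?", "ko"), "back"))
```
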