Patents by Inventor Yun Keun Lee

Yun Keun Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180157640
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Application
    Filed: February 22, 2017
    Publication date: June 7, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
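The first step this abstract describes — extracting sentence pairs that occur in different documents — can be sketched as follows. This is a simplified illustration, not the patented implementation: the pairing criterion used here (word-overlap Jaccard similarity above a threshold) and the threshold value are assumptions, since the abstract does not specify how pairs are selected before being fed to the sequence-to-sequence encoder.

```python
from itertools import combinations

def sentences(doc):
    """Split a document into rough sentence strings."""
    return [s.strip() for s in doc.split('.') if s.strip()]

def jaccard(a, b):
    """Word-overlap similarity between two sentences (illustrative choice)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def extract_sentence_pairs(docs, threshold=0.3):
    """Return (sentence, sentence) pairs drawn from *different* documents
    whose similarity exceeds the threshold; such pairs would then serve as
    encoder inputs for a sequence-to-sequence model."""
    pairs = []
    for (i, d1), (j, d2) in combinations(enumerate(docs), 2):
        for s1 in sentences(d1):
            for s2 in sentences(d2):
                if jaccard(s1, s2) >= threshold:
                    pairs.append((s1, s2))
    return pairs

docs = ["The cat sat on the mat. Dogs bark loudly.",
        "The cat sat on a mat. Birds sing at dawn."]
print(extract_sentence_pairs(docs))
```

Only the near-duplicate cat sentences pair up here; the generated sentences from the decoder would then augment the original text.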
  • Patent number: 9959862
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 1, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung Ok Kang, Jeon Gue Park, Hwa Jeon Song, Yun Keun Lee, Eui Sok Chung
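The multi-set state cluster in this abstract can be illustrated with a small sketch: each training speech set contributes its own sound-model state inventory, and states sharing a label across sets are merged into one shared DNN output node. Merging by identical label is an assumption for illustration; the patented clustering procedure is not reduced to this simple rule.

```python
def multi_set_state_cluster(state_sets):
    """state_sets: {set_name: [state_label, ...]}.
    Returns (cluster, mapping): cluster is the ordered list of shared output
    nodes, mapping sends (set_name, state_label) to an output-node index."""
    cluster = []   # shared output nodes, in first-seen order
    index = {}     # state label -> node index
    mapping = {}
    for set_name, states in sorted(state_sets.items()):
        for label in states:
            if label not in index:           # first time this state appears
                index[label] = len(cluster)  # -> allocate a new shared node
                cluster.append(label)
            mapping[(set_name, label)] = index[label]
    return cluster, mapping

sets = {"adult": ["a", "b", "sil"], "child": ["a", "sil", "c"]}
cluster, mapping = multi_set_state_cluster(sets)
print(cluster)  # the multi-set state cluster used as DNN output nodes
```

States "a" and "sil" are shared between the two sets, so both sets' training data drive the same output nodes during DNN training.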
  • Publication number: 20180075023
    Abstract: The present invention relates to a device of simultaneous interpretation based on real-time extraction of an interpretation unit, the device including a voice recognition module configured to recognize voice units as sentence units or translation units from vocalized speech that is input in real time, a real-time interpretation unit extraction module configured to form one or more of the voice units into an interpretation unit, and a real-time interpretation module configured to perform an interpretation task for each interpretation unit formed by the real-time interpretation unit extraction module.
    Type: Application
    Filed: September 11, 2017
    Publication date: March 15, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chang Hyun KIM, Young Kil KIM, Yun Keun LEE
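The real-time interpretation-unit extraction module described above can be sketched as a simple streaming segmenter. The segmentation rule used here — flush on sentence-final punctuation or when a length cap is reached — is an illustrative assumption; the patent does not fix a specific rule.

```python
def extract_interpretation_units(voice_units, max_words=8):
    """Group incoming recognized voice units (word strings) into
    interpretation units to be handed to the interpretation module."""
    units, current = [], []
    for word in voice_units:
        current.append(word)
        ends_sentence = word.endswith((".", "?", "!"))
        if ends_sentence or len(current) >= max_words:
            units.append(" ".join(current))
            current = []
    if current:                      # flush a trailing partial unit
        units.append(" ".join(current))
    return units

stream = ["hello", "there.", "how", "are", "you", "doing", "today?"]
print(extract_interpretation_units(stream))
```

In a real-time setting each flushed unit would be passed to the interpretation module immediately rather than collected in a list, so translation starts before the speaker finishes.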
  • Publication number: 20180047389
    Abstract: Provided are an apparatus and method for recognizing speech using an attention-based content-dependent (CD) acoustic model. The apparatus includes a predictive deep neural network (DNN) configured to receive input data from an input layer and output predictive values to a buffer of a first output layer, and a context DNN configured to receive a context window from the first output layer and output a final result value.
    Type: Application
    Filed: January 12, 2017
    Publication date: February 15, 2018
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hwa Jeon SONG, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG
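The two-stage structure in this abstract — a per-frame predictive network writing to a buffer, and a context network reading a window of buffered predictions — can be sketched with stand-in functions. Both "networks" below are toy placeholders (assumptions), since the abstract does not give their internals; only the buffering and windowing pattern is illustrated.

```python
def predictive_net(frame):
    # stand-in per-frame predictor producing a 2-dim "posterior"
    s = sum(frame)
    return [s, -s]

def context_net(window):
    # stand-in context model: average the buffered predictions
    n, dims = len(window), len(window[0])
    return [sum(p[d] for p in window) / n for d in range(dims)]

def recognize(frames, context=3):
    # first output layer: buffer of per-frame predictive values
    buffer = [predictive_net(f) for f in frames]
    # context DNN consumes a sliding context window over the buffer
    results = []
    for t in range(len(buffer) - context + 1):
        results.append(context_net(buffer[t:t + context]))
    return results

out = recognize([[1, 0], [0, 1], [1, 1], [2, 0]], context=3)
print(out)
```

The key point is that the second network never sees raw frames, only a window of the first network's buffered outputs.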
  • Patent number: 9805716
    Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: October 31, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sung Joo Lee, Byung Ok Kang, Jeon Gue Park, Yun Keun Lee, Hoon Chung
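The gammatone-filterbank analysis mentioned in this abstract rests on the gammatone impulse response, g(t) = t^(n-1) · exp(-2πbt) · cos(2πft). The sketch below builds one 4th-order channel and applies it by plain FIR filtering; the filter parameters and tap count are illustrative assumptions, not the patented front end.

```python
import math

def gammatone_ir(fc, fs, bw=125.0, n_taps=64, order=4):
    """Impulse response of one gammatone channel centred at fc Hz."""
    ir = []
    for n in range(n_taps):
        t = n / fs
        ir.append(t ** (order - 1) * math.exp(-2 * math.pi * bw * t)
                  * math.cos(2 * math.pi * fc * t))
    return ir

def convolve(signal, ir):
    """Plain FIR filtering, output truncated to the input length."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(ir):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

fs = 8000
tone = [math.cos(2 * math.pi * 500 * n / fs) for n in range(200)]
filtered = convolve(tone, gammatone_ir(500.0, fs))
```

A bank of such channels at different centre frequencies, followed by energy extraction per channel, yields the first kind of feature vector the abstract names; the bottleneck features come from a separate neural network and are not sketched here.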
  • Patent number: 9799331
    Abstract: A feature compensation apparatus includes a feature extractor configured to extract corrupt speech features from a corrupt speech signal, consisting of two or more frames, with additive noise; a noise estimator configured to estimate noise features based on the extracted corrupt speech features and compensated speech features; a probability calculator configured to calculate a correlation between adjacent frames of the corrupt speech signal; and a speech feature compensator configured to generate compensated speech features by eliminating noise features of the extracted corrupt speech features while taking into consideration the correlation between adjacent frames of the corrupt speech signal and the estimated noise features, and to transmit the generated compensated speech features to the noise estimator.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: October 24, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
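The feedback loop in this abstract — noise estimated from corrupt features plus previously compensated features, compensation taking inter-frame correlation into account — can be sketched schematically. The update rules below (running-average noise estimate, a fixed smoothing weight standing in for the correlation term) are illustrative assumptions; the patent does not reduce to this simple scalar form.

```python
def compensate(corrupt_frames, alpha=0.9, smooth=0.5):
    """corrupt_frames: per-frame feature values (floats).
    Returns compensated features with noise features removed."""
    noise = corrupt_frames[0]          # initial noise estimate (assumption)
    prev = corrupt_frames[0] - noise   # previous compensated feature
    compensated = []
    for y in corrupt_frames:
        x = y - noise                        # subtract current noise estimate
        x = smooth * prev + (1 - smooth) * x  # inter-frame correlation term
        compensated.append(x)
        # feed the compensated feature back into the noise estimator
        noise = alpha * noise + (1 - alpha) * (y - x)
        prev = x
    return compensated

# stationary noise of level 1.0 with a short speech burst on top
noisy = [1.0 + (0.5 if 3 <= t <= 5 else 0.0) for t in range(8)]
clean = compensate(noisy)
```

The essential structure is the closed loop: each compensated frame refines the noise estimate used for the next frame.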
  • Publication number: 20170213545
    Abstract: An incremental self-learning based dialogue apparatus for dialogue knowledge includes a dialogue processing unit configured to determine an intention of a user utterance by using a knowledge base and perform processing or a response suitable for the user intention, a dialogue establishment unit configured to automatically learn a user intention stored in an intention-annotated learning corpus, store information about the learned user intention in the knowledge base, and edit and manage the knowledge base and the intention-annotated learning corpus, and a self-knowledge augmentation unit configured to store a log of a dialogue performed by the dialogue processing unit, detect and classify an error in the stored dialogue log, automatically tag a user intention for the detected and classified error, and store the tagged user intention in the intention-annotated learning corpus.
    Type: Application
    Filed: January 13, 2017
    Publication date: July 27, 2017
    Inventors: Oh Woog KWON, Young Kil KIM, Yun Keun LEE
  • Publication number: 20170206894
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Application
    Filed: June 20, 2016
    Publication date: July 20, 2017
    Inventors: Byung Ok KANG, Jeon Gue PARK, Hwa Jeon SONG, Yun Keun LEE, Eui Sok CHUNG
  • Publication number: 20160275964
    Abstract: A feature compensation apparatus includes a feature extractor configured to extract corrupt speech features from a corrupt speech signal, consisting of two or more frames, with additive noise; a noise estimator configured to estimate noise features based on the extracted corrupt speech features and compensated speech features; a probability calculator configured to calculate a correlation between adjacent frames of the corrupt speech signal; and a speech feature compensator configured to generate compensated speech features by eliminating noise features of the extracted corrupt speech features while taking into consideration the correlation between adjacent frames of the corrupt speech signal and the estimated noise features, and to transmit the generated compensated speech features to the noise estimator.
    Type: Application
    Filed: March 18, 2016
    Publication date: September 22, 2016
    Inventors: Hyun Woo KIM, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
  • Publication number: 20160240190
    Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
    Type: Application
    Filed: February 12, 2016
    Publication date: August 18, 2016
    Inventors: Sung Joo LEE, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hoon CHUNG
  • Patent number: 9396722
    Abstract: Disclosed are an apparatus and a method for detecting a speech endpoint using a WFST. The apparatus in accordance with an embodiment of the present invention includes: a speech decision portion configured to receive frame units of feature vector converted from a speech signal and to analyze and classify the received feature vector into a speech class or a noise class; a frame level WFST configured to receive the speech class and the noise class and to convert the speech class and the noise class to a WFST format; a speech level WFST configured to detect a speech endpoint by analyzing a relationship between the speech class and noise class and a preset state; a WFST combination portion configured to combine the frame level WFST with the speech level WFST; and an optimization portion configured to optimize the combined WFST having the frame level WFST and the speech level WFST combined therein to have a minimum route.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: July 19, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon Chung, Sung-Joo Lee, Yun-Keun Lee
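The WFST composition and optimization in this abstract are beyond a short sketch, but the speech-level decision the composed transducer encodes can be modeled as a small state machine: frame classes ("speech"/"noise") are consumed in order, and an endpoint is declared after a run of trailing noise frames once speech has been observed. The threshold below is an illustrative assumption.

```python
def detect_endpoint(frame_classes, min_trailing_noise=3):
    """Return the frame index at which the speech endpoint is declared,
    or None if no endpoint is found in the stream."""
    in_speech = False
    noise_run = 0
    for i, cls in enumerate(frame_classes):
        if cls == "speech":
            in_speech = True     # speech has started; reset the noise run
            noise_run = 0
        elif in_speech:
            noise_run += 1
            if noise_run >= min_trailing_noise:
                return i         # enough trailing noise: endpoint here
    return None

frames = ["noise", "speech", "speech", "noise", "noise", "noise", "noise"]
print(detect_endpoint(frames))
```

In the patented design this logic lives in the speech-level WFST, composed with the frame-level WFST and minimized, which makes the decision rule declarative rather than hand-coded as above.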
  • Patent number: 9390426
    Abstract: Disclosed are a personalized advertisement device based on speech recognition SMS services and a personalized advertisement exposure method based on speech recognition SMS services. The present invention provides a device and method capable of maximizing the effect of an advertisement by identifying the user's intention, emotional state, and positional information from speech data uttered by the user during the provision of speech recognition SMS services, configuring advertisements during the interval from when conversion of the speech data begins until it has been completely converted by speech recognition into character strings, and exposing the configured advertisements to the user.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: July 12, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon Chung, Jeon Gue Park, Hyung Bae Jeon, Ki Young Park, Yun Keun Lee, Sang Kyu Park
  • Publication number: 20160078863
    Abstract: Provided are a signal processing algorithm-integrated deep neural network (DNN)-based speech recognition apparatus and a learning method thereof. A model parameter learning method in a deep neural network (DNN)-based speech recognition apparatus implementable by a computer includes converting a signal processing algorithm for extracting a feature parameter from a speech input signal of a time domain into signal processing deep neural network (DNN), fusing the signal processing DNN and a classification DNN, and learning a model parameter in a deep learning model in which the signal processing DNN and the classification DNN are fused.
    Type: Application
    Filed: June 12, 2015
    Publication date: March 17, 2016
    Inventors: Hoon CHUNG, Jeon Gue PARK, Sung Joo LEE, Yun Keun LEE
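The fusion this abstract describes — a signal-processing DNN chained in front of a classification DNN so one loss can train the stack end to end — can be sketched by representing each network as a list of layer functions. The layers below are stand-ins (assumptions); the abstract does not specify the architectures, only the fusion.

```python
def make_layer(scale, bias):
    """A toy affine 'layer' applied elementwise."""
    return lambda xs: [scale * x + bias for x in xs]

# stand-in networks: a feature-extraction stack and a classifier stack
signal_dnn = [make_layer(2.0, 0.0), make_layer(1.0, 1.0)]
classification_dnn = [make_layer(0.5, 0.0)]

def fuse(*dnns):
    """Chain the layer lists of several DNNs into one forward function."""
    layers = [layer for dnn in dnns for layer in dnn]
    def forward(xs):
        for layer in layers:
            xs = layer(xs)
        return xs
    return forward

model = fuse(signal_dnn, classification_dnn)
print(model([1.0, 2.0]))
```

Because the fused model is a single differentiable stack, gradients from the classification loss would flow back through the signal-processing layers during training, which is the point of the fusion.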
  • Patent number: 9288301
    Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: March 15, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui-Sok Chung, Yun-Keun Lee, Jeon-Gue Park, Ho-Young Jung, Hoon Chung
  • Publication number: 20150221303
    Abstract: Provided are a discussion learning system enabling discussion learning to proceed based on a speech recognition system without an instructor, and a method using the same, the discussion learning system including a learning content providing server configured to provide a discussion environment, extract speeches of learners joining a discussion, and generate speech information based on the extracted speeches, and a speech recognition server configured to perform speech recognition with respect to each of the learners based on the speech information, determine a progress of the discussion based on a result of the speech recognition, and provide the learning content providing server with interpretation information for smoothly continuing the discussion.
    Type: Application
    Filed: January 13, 2015
    Publication date: August 6, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeom Ja KANG, Hyung Bae JEON, Yun Keun LEE, Ho Young JUNG
  • Patent number: 9100492
    Abstract: Provided is a mobile communication terminal including: a camera module which captures an image of a set area; a microphone module which, when a sound including a voice of a user is input, extracts a sound level corresponding to the sound and a sound generating position; and a control module which estimates a position of a lip of the user from the image, extracts a voice level from the sound level corresponding to the position of the lip of the user and a voice generating position from the sound generating position, and recognizes the voice of the user based on at least one of the voice level and the voice generating position.
    Type: Grant
    Filed: September 4, 2013
    Date of Patent: August 4, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hwa Jeon Song, Ho Young Jung, Yun Keun Lee
  • Publication number: 20150012274
    Abstract: An apparatus for extracting features for speech recognition in accordance with the present invention includes: a frame forming portion configured to separate input speech signals in frame units having a prescribed size; a static feature extracting portion configured to extract a static feature vector for each frame of the speech signals; a dynamic feature extracting portion configured to extract a dynamic feature vector representing a temporal variance of the extracted static feature vector by use of a basis function or a basis vector; and a feature vector combining portion configured to combine the extracted static feature vector with the extracted dynamic feature vector to configure a feature vector stream.
    Type: Application
    Filed: May 15, 2014
    Publication date: January 8, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung-Joo LEE, Byung-Ok Kang, Hoon Chung, Ho-Young Jung, Hwa-Jeon Song, Yoo-Rhee Oh, Yun-Keun Lee
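The static/dynamic combination in this abstract can be sketched with the standard regression ("delta") formula, whose fixed window weights play the role of the basis vector the abstract mentions — treating it this way is an interpretive assumption on my part. Each frame's static value is concatenated with its dynamic value to form the feature stream.

```python
def deltas(static, N=2):
    """Dynamic features via the regression formula
    d_t = sum_n n*(c_{t+n} - c_{t-n}) / (2*sum_n n^2), edges clamped."""
    T = len(static)
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = []
    for t in range(T):
        num = 0.0
        for n in range(1, N + 1):
            c_fwd = static[min(t + n, T - 1)]   # clamp at the last frame
            c_bwd = static[max(t - n, 0)]       # clamp at the first frame
            num += n * (c_fwd - c_bwd)
        out.append(num / denom)
    return out

def feature_stream(static):
    """Combine each static value with its dynamic (delta) value."""
    return list(zip(static, deltas(static)))

print(feature_stream([0.0, 1.0, 2.0, 3.0]))
```

In practice each frame carries a vector (e.g. filterbank or cepstral coefficients) rather than a scalar, and the same formula is applied per dimension.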
  • Publication number: 20150006175
    Abstract: The present invention relates to an apparatus and a method for recognizing continuous speech with a large vocabulary. In the present invention, a large vocabulary containing many words of the same kind is divided into a reasonable number of clusters; representative vocabulary is then selected for each cluster and first recognition is performed over the representative vocabulary; if a representative word is recognized using the result of the first recognition, re-recognition is performed against all words in the cluster to which the recognized representative word belongs.
    Type: Application
    Filed: June 13, 2014
    Publication date: January 1, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki-Young PARK, Yun-Keun LEE, Hoon CHUNG
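The two-pass scheme in this abstract can be sketched end to end: the vocabulary is grouped into clusters, one representative per cluster is scored in a first pass, and the full cluster of the best representative is re-scored. Clustering by first letter and scoring by edit distance are illustrative assumptions; a real recognizer would use acoustic likelihoods and a data-driven clustering.

```python
def edit_distance(a, b):
    """Levenshtein distance, standing in for an acoustic score."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def two_pass_recognize(utterance, vocabulary):
    clusters = {}
    for word in vocabulary:                  # toy clustering: first letter
        clusters.setdefault(word[0], []).append(word)
    reps = {key: words[0] for key, words in clusters.items()}
    # first pass: score only the representative of each cluster
    best_key = min(reps, key=lambda k: edit_distance(utterance, reps[k]))
    # second pass: re-score every word in the winning cluster
    return min(clusters[best_key], key=lambda w: edit_distance(utterance, w))

vocab = ["seoul", "busan", "suwon", "sejong", "bucheon"]
print(two_pass_recognize("sejonh", vocab))
```

The saving is that the first pass touches only one word per cluster, so the expensive full scoring runs over a single cluster instead of the whole large vocabulary.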
  • Publication number: 20140378185
    Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
    Type: Application
    Filed: March 27, 2014
    Publication date: December 25, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui-Sok CHUNG, Yun-Keun LEE, Jeon-Gue PARK, Ho-Young JUNG, Hoon CHUNG
  • Publication number: 20140379345
    Abstract: Disclosed are an apparatus and a method for detecting a speech endpoint using a WFST. The apparatus in accordance with an embodiment of the present invention includes: a speech decision portion configured to receive frame units of feature vector converted from a speech signal and to analyze and classify the received feature vector into a speech class or a noise class; a frame level WFST configured to receive the speech class and the noise class and to convert the speech class and the noise class to a WFST format; a speech level WFST configured to detect a speech endpoint by analyzing a relationship between the speech class and noise class and a preset state; a WFST combination portion configured to combine the frame level WFST with the speech level WFST; and an optimization portion configured to optimize the combined WFST having the frame level WFST and the speech level WFST combined therein to have a minimum route.
    Type: Application
    Filed: March 25, 2014
    Publication date: December 25, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hoon CHUNG, Sung-Joo Lee, Yun-Keun Lee