Patents by Inventor Yun Keun Lee

Yun Keun Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11950502
    Abstract: Provided is a novel compound capable of improving the luminous efficiency, stability, and lifespan of a device, an organic electric element using the same, and an electronic device thereof.
    Type: Grant
    Filed: April 17, 2021
    Date of Patent: April 2, 2024
    Assignee: DUK SAN NEOLUX CO., LTD.
    Inventors: Hyoung Keun Park, Yun Suk Lee, Ki Ho So, Jong Gwang Park, Yeon Seok Jeong, Jung Hwan Park, Sun Hee Lee, Hak Young Lee
  • Patent number: 11551012
    Abstract: Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus for providing a personal assistant service based on automatic translation includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model on the basis of an intention of the command of the user using a recognition result of the command of the user and provides the personal assistant service on the basis of an automatic translation call.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: January 10, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Yun, Sang Hun Kim, Min Kyu Lee, Yun Keun Lee, Mu Yeol Choi, Yeo Jeong Kim, Sang Kyu Park
  • Patent number: 11526732
    Abstract: Provided are an apparatus and method for a statistical memory network. The apparatus includes a stochastic memory, an uncertainty estimator configured to estimate uncertainty information of external input signals from the input signals and provide the uncertainty information of the input signals, a writing controller configured to generate parameters for writing in the stochastic memory using the external input signals and the uncertainty information and generate additional statistics by converting statistics of the external input signals, a writing probability calculator configured to calculate a probability of a writing position of the stochastic memory using the parameters for writing, and a statistic updater configured to update stochastic values composed of an average and a variance of signals in the stochastic memory using the probability of a writing position, the parameters for writing, and the additional statistics.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: December 13, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
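The write-and-update cycle described in this abstract can be sketched as follows. This is a minimal illustration only: the softmax scoring of memory slots and the exact mean/variance blending rule are assumptions made for the sketch, not the formulation claimed in the patent.

```python
import math

def softmax(scores):
    """Convert raw per-slot write scores into a probability over memory slots."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def write_update(means, variances, write_probs, x, x_var):
    """Blend a new observation x (with estimated uncertainty x_var) into each
    slot's running mean and variance, weighted by the slot's write probability."""
    new_means, new_vars = [], []
    for mu, var, p in zip(means, variances, write_probs):
        mu_new = (1 - p) * mu + p * x
        # The new variance mixes the old spread, the input uncertainty, and
        # the shift caused by moving the mean.
        var_new = (1 - p) * var + p * x_var + p * (1 - p) * (x - mu) ** 2
        new_means.append(mu_new)
        new_vars.append(var_new)
    return new_means, new_vars

probs = softmax([2.0, 0.5, -1.0])
means, variances = write_update([0.0, 1.0, 2.0], [1.0, 1.0, 1.0], probs, 3.0, 0.25)
```

Each slot keeps a running mean and variance; the write probability decides how strongly the new observation, together with its estimated uncertainty, is blended into that slot's statistics.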
  • Patent number: 11423238
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, two methods are provided for applying intra-sentence contextual information to subword embedding during subword embedding learning: a skip-thought sentence embedding learning method based on subword embedding, and a multitask learning method that learns subword embedding and skip-thought sentence embedding simultaneously. This makes it possible to apply a sentence embedding approach in a bag-of-words form to agglutinative languages such as Korean. Also, skip-thought sentence embedding learning is integrated with a subword embedding technique such that intra-sentence contextual information can be used during subword embedding learning.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: August 23, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
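The subword side of this approach resembles character n-gram decomposition in the style popularized by fastText. The sketch below of extracting boundary-marked subword units is an illustrative simplification under that assumption, not the patented method itself.

```python
def subword_ngrams(word, n_min=2, n_max=4):
    """Extract character n-grams with boundary markers ('<', '>'),
    fastText-style, so a word's embedding can be composed from the
    embeddings of its subword units."""
    marked = "<" + word + ">"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i:i + n])
    return grams

# Korean words decompose into many overlapping subword units, which is
# what makes subword embedding attractive for agglutinative languages.
grams = subword_ngrams("서울", 2, 3)
```

Boundary markers let the model distinguish a subword at the start or end of a word from the same characters appearing word-internally.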
  • Publication number: 20210004542
    Abstract: Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus for providing a personal assistant service based on automatic translation includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model on the basis of an intention of the command of the user using a recognition result of the command of the user and provides the personal assistant service on the basis of an automatic translation call.
    Type: Application
    Filed: July 2, 2020
    Publication date: January 7, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung YUN, Sang Hun KIM, Min Kyu LEE, Yun Keun LEE, Mu Yeol CHOI, Yeo Jeong KIM, Sang Kyu PARK
  • Patent number: 10789332
    Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result from inputs given to a black box model has been derived. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: September 29, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon Chung, Jeon Gue Park, Sung Joo Lee, Yun Keun Lee
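The neuron-to-polynomial step can be illustrated with a low-order Taylor expansion. The cubic expansion of tanh below is an assumed example of such a polynomial conversion, not necessarily the expansion used in the patent.

```python
import math

def neuron(w, x, b):
    """Reference non-linear neuron: tanh(w . x + b)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return math.tanh(z)

def neuron_poly(w, x, b):
    """Cubic Taylor approximation of the same neuron around z = 0:
    tanh(z) ~= z - z**3 / 3."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return z - z ** 3 / 3

w, x, b = [0.2, -0.1], [0.5, 0.3], 0.05
exact = neuron(w, x, b)
approx = neuron_poly(w, x, b)
```

Because the pre-activation z is linear in the inputs, the resulting polynomial separates into terms over the input signals whose coefficients are polynomials in the weights, which is what makes the approximated model easier to interpret than the original black box.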
  • Publication number: 20200219166
    Abstract: Provided are a method and apparatus for estimating a user's requirement through a neural network capable of reading and writing a working memory, and for providing fashion coordination knowledge appropriate to the requirement through a neural network using a long-term, explicit memory, in order to provide the fashion coordination knowledge accurately. The apparatus includes a language embedding unit for embedding a user's question and a previously created answer to acquire a digitized embedding vector; a fashion coordination knowledge creation unit for creating fashion coordination through the neural network having the explicit memory by using the embedding vector as an input; and a dialog creation unit for creating dialog content for configuring the fashion coordination through the neural network having the explicit memory by using the fashion coordination knowledge and the embedding vector as inputs.
    Type: Application
    Filed: December 12, 2019
    Publication date: July 9, 2020
    Inventors: Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
  • Publication number: 20200184310
    Abstract: Provided are an apparatus and method for reducing the number of deep neural network (DNN) model parameters, the apparatus including a memory in which a program for DNN model parameter reduction is stored, and a processor configured to execute the program, wherein the processor represents hidden layers of the DNN model using a full-rank decomposed matrix, uses training with a sparsity constraint that drives diagonal matrix values to zero, and determines a rank of each hidden layer of the DNN model according to the degree of the sparsity constraint.
    Type: Application
    Filed: December 11, 2019
    Publication date: June 11, 2020
    Inventors: Hoon CHUNG, Jeon Gue PARK, Yun Keun LEE
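The rank-determination idea can be illustrated numerically: once training with the sparsity constraint has driven small diagonal (singular) values toward zero, the surviving values fix the layer's rank, and factoring the weight matrix at that rank shrinks the parameter count. The threshold and layer sizes below are illustrative assumptions.

```python
def effective_rank(singular_values, threshold):
    """Apply a sparsity constraint: singular values below the threshold are
    treated as zero, and the surviving count is the layer's rank."""
    return sum(1 for s in singular_values if s >= threshold)

def params_after_factoring(m, n, r):
    """A full m x n weight matrix costs m * n parameters; factoring it as
    (m x r)(r x n) costs r * (m + n)."""
    return r * (m + n)

sv = [5.0, 3.2, 1.1, 0.4, 0.05, 0.01]   # diagonal values after sparse training
r = effective_rank(sv, 0.5)
full = 512 * 512
reduced = params_after_factoring(512, 512, r)
```

The stronger the sparsity constraint, the more diagonal values collapse to zero and the lower the retained rank, trading model capacity for parameter savings.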
  • Publication number: 20200175119
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, two methods are provided for applying intra-sentence contextual information to subword embedding during subword embedding learning: a skip-thought sentence embedding learning method based on subword embedding, and a multitask learning method that learns subword embedding and skip-thought sentence embedding simultaneously. This makes it possible to apply a sentence embedding approach in a bag-of-words form to agglutinative languages such as Korean. Also, skip-thought sentence embedding learning is integrated with a subword embedding technique such that intra-sentence contextual information can be used during subword embedding learning.
    Type: Application
    Filed: November 1, 2019
    Publication date: June 4, 2020
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
  • Publication number: 20190318228
    Abstract: Provided are an apparatus and method for a statistical memory network. The apparatus includes a stochastic memory, an uncertainty estimator configured to estimate uncertainty information of external input signals from the input signals and provide the uncertainty information of the input signals, a writing controller configured to generate parameters for writing in the stochastic memory using the external input signals and the uncertainty information and generate additional statistics by converting statistics of the external input signals, a writing probability calculator configured to calculate a probability of a writing position of the stochastic memory using the parameters for writing, and a statistic updater configured to update stochastic values composed of an average and a variance of signals in the stochastic memory using the probability of a writing position, the parameters for writing, and the additional statistics.
    Type: Application
    Filed: January 29, 2019
    Publication date: October 17, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo KIM, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
  • Publication number: 20190272309
    Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result from inputs given to a black box model has been derived. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.
    Type: Application
    Filed: September 5, 2018
    Publication date: September 5, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hoon Chung, Jeon Gue Park, Sung Joo Lee, Yun Keun Lee
  • Patent number: 10402494
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: September 3, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
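The first step, extracting sentence pairs that come from different documents, can be sketched directly. Pairing every cross-document sentence combination, as done below, is an assumption for illustration; the patent does not specify this exhaustive pairing.

```python
from itertools import combinations

def cross_document_pairs(documents):
    """Form sentence pairs drawn from *different* documents: the candidate
    training inputs for the sequence-to-sequence text expander. Each
    document is a list of sentences."""
    pairs = []
    for doc_a, doc_b in combinations(documents, 2):
        for sa in doc_a:
            for sb in doc_b:
                pairs.append((sa, sb))
    return pairs

docs = [["a1", "a2"], ["b1"], ["c1"]]
pairs = cross_document_pairs(docs)
```

Note that sentences from the same document are never paired, matching the abstract's requirement that the pair be "present in different documents".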
  • Patent number: 10388275
    Abstract: The present invention relates to a method and apparatus for improving spontaneous speech recognition performance. The present invention is directed to providing a method and apparatus for improving spontaneous speech recognition performance by extracting a phase feature as well as a magnitude feature of a voice signal transformed to the frequency domain, detecting a syllabic nucleus on the basis of a deep neural network using a multi-frame output, determining a speaking rate by dividing the number of syllabic nuclei by a voice section interval detected by a voice detector, calculating a length variation or an overlap factor according to the speaking rate, and performing cepstrum length normalization or time scale modification with a voice length appropriate for an acoustic model.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: August 20, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
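The speaking-rate computation described here is simple to state concretely: divide the count of detected syllabic nuclei by the length of the detected voice interval, then derive a time-scale factor relative to the rate the acoustic model expects. The reference rate of 4 syllables per second below is an illustrative assumption, not a value from the patent.

```python
def speaking_rate(n_syllabic_nuclei, voice_interval_sec):
    """Speaking rate = detected syllabic nuclei per second of voiced speech."""
    return n_syllabic_nuclei / voice_interval_sec

def length_modification_factor(rate, reference_rate=4.0):
    """Time-scale factor that would stretch fast speech (rate above the
    reference) back toward the rate the acoustic model expects."""
    return rate / reference_rate

rate = speaking_rate(18, 3.0)        # 18 nuclei over 3 s of voiced speech
factor = length_modification_factor(rate)
```

A factor above 1.0 indicates fast speech whose duration should be lengthened (or features normalized) before recognition; a factor below 1.0 indicates the opposite.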
  • Patent number: 10366173
    Abstract: The present invention relates to a device of simultaneous interpretation based on real-time extraction of an interpretation unit, the device including a voice recognition module configured to recognize voice units as sentence units or translation units from vocalized speech that is input in real time, a real-time interpretation unit extraction module configured to form one or more of the voice units into an interpretation unit, and a real-time interpretation module configured to perform an interpretation task for each interpretation unit formed by the real-time interpretation unit extraction module.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: July 30, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chang Hyun Kim, Young Kil Kim, Yun Keun Lee
  • Patent number: 10332033
    Abstract: An incremental self-learning based dialogue apparatus for dialogue knowledge includes a dialogue processing unit configured to determine an intention of a user utterance by using a knowledge base and perform processing or a response suitable for the user intention, a dialogue establishment unit configured to automatically learn a user intention stored in an intention-annotated learning corpus, store information about the learned user intention in the knowledge base, and edit and manage the knowledge base and the intention-annotated learning corpus, and a self-knowledge augmentation unit configured to store a log of a dialogue performed by the dialogue processing unit, detect and classify an error in the stored dialogue log, automatically tag a user intention for the detected and classified error, and store the tagged user intention in the intention-annotated learning corpus.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: June 25, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh Woog Kwon, Young Kil Kim, Yun Keun Lee
  • Patent number: 10089979
    Abstract: Provided are a signal processing algorithm-integrated deep neural network (DNN)-based speech recognition apparatus and a learning method thereof. A model parameter learning method in a deep neural network (DNN)-based speech recognition apparatus implementable by a computer includes converting a signal processing algorithm for extracting a feature parameter from a speech input signal of a time domain into signal processing deep neural network (DNN), fusing the signal processing DNN and a classification DNN, and learning a model parameter in a deep learning model in which the signal processing DNN and the classification DNN are fused.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: October 2, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon Chung, Jeon Gue Park, Sung Joo Lee, Yun Keun Lee
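The key premise, that a signal processing step can itself be written as network layers and fused with the classification DNN, can be illustrated with a mel-style filterbank expressed as a plain weight matrix. The triangular filter placement below is a simplification for illustration, not the patent's construction.

```python
def mel_filterbank_as_layer(n_filters, n_bins):
    """Express a triangular filterbank as a plain weight matrix, so the
    feature-extraction step becomes one linear layer that can be fused
    into, and trained jointly with, the classification network."""
    weights = [[0.0] * n_bins for _ in range(n_filters)]
    width = n_bins / (n_filters + 1)
    for f in range(n_filters):
        center = (f + 1) * width
        for b in range(n_bins):
            # Triangle peaking at the filter center, zero beyond one width.
            weights[f][b] = max(0.0, 1.0 - abs(b - center) / width)
    return weights

def apply_layer(weights, spectrum):
    """One forward pass of the filterbank 'layer' over a magnitude spectrum."""
    return [sum(w * s for w, s in zip(row, spectrum)) for row in weights]

W = mel_filterbank_as_layer(4, 16)
feats = apply_layer(W, [1.0] * 16)
```

Once the filterbank is just a weight matrix, its coefficients are no longer fixed by hand: gradient updates can flow through it during joint training with the classifier, which is the point of the fused model.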
  • Publication number: 20180268739
    Abstract: Provided are an end-to-end method and system for grading foreign language fluency, in which the multi-step intermediate process of grading foreign language fluency in the related art is omitted. The method grades the foreign language fluency of a non-native speaker from a raw non-native speech signal, and includes inputting the raw speech to a convolutional neural network (CNN), training the filter coefficients of the CNN based on fluency grading scores assigned by a human rater to the raw signal so as to generate a foreign language fluency grading model, and grading foreign language fluency for a non-native speech signal newly input to the trained CNN by using the foreign language fluency grading model to output a grading result.
    Type: Application
    Filed: September 20, 2017
    Publication date: September 20, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hoon CHUNG, Jeon Gue PARK, Yoo Rhee OH, Yun Kyung LEE, Yun Keun LEE
  • Publication number: 20180247642
    Abstract: The present invention relates to a method and apparatus for improving spontaneous speech recognition performance. The present invention is directed to providing a method and apparatus for improving spontaneous speech recognition performance by extracting a phase feature as well as a magnitude feature of a voice signal transformed to the frequency domain, detecting a syllabic nucleus on the basis of a deep neural network using a multi-frame output, determining a speaking rate by dividing the number of syllabic nuclei by a voice section interval detected by a voice detector, calculating a length variation or an overlap factor according to the speaking rate, and performing cepstrum length normalization or time scale modification with a voice length appropriate for an acoustic model.
    Type: Application
    Filed: September 7, 2017
    Publication date: August 30, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo KIM, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
  • Publication number: 20180165578
    Abstract: Provided are an apparatus and method for compressing a deep neural network (DNN). The DNN compression method includes receiving a matrix of a hidden layer or an output layer of a DNN, calculating a matrix representing a nonlinear structure of the hidden layer or the output layer, and decomposing the matrix of the hidden layer or the output layer using a constraint imposed by the matrix representing the nonlinear structure.
    Type: Application
    Filed: April 4, 2017
    Publication date: June 14, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hoon CHUNG, Jeon Gue PARK, Sung Joo LEE, Yun Keun LEE
  • Publication number: 20180166071
    Abstract: Provided are a method of automatically classifying a speaking rate and a speech recognition system using the method. The speech recognition system using automatic speaking rate classification includes a speech recognizer configured to extract word lattice information by performing speech recognition on an input speech signal, a speaking rate estimator configured to estimate word-specific speaking rates using the word lattice information, a speaking rate normalizer configured to normalize a word-specific speaking rate into a normal speaking rate when the word-specific speaking rate deviates from a preset range, and a rescoring section configured to rescore the speech signal whose speaking rate has been normalized.
    Type: Application
    Filed: May 30, 2017
    Publication date: June 14, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sung Joo LEE, Jeon Gue PARK, Yun Keun LEE, Hoon CHUNG