Patents by Inventor Eui-Sok Chung

Eui-Sok Chung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160859
    Abstract: The present invention relates to a multi-modality system for recommending multiple items using an interaction and a method of operating the same. The multi-modality system includes an interaction data preprocessing module that preprocesses an interaction data set and converts the preprocessed interaction data set into interaction training data; an item data preprocessing module that preprocesses item information data and converts the preprocessed item information data into item training data; and a learning module that includes a neural network model that is trained using the interaction training data and the item training data and outputs a result including a set of recommended items using a conversation context with a user as input.
    Type: Application
    Filed: November 13, 2023
    Publication date: May 16, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Jeong Min YANG, Byung Hyun YOO, Ran HAN
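The abstract above describes three components: an interaction preprocessor, an item preprocessor, and a neural model that takes a conversation context and returns recommended items. The sketch below is a minimal, hypothetical rendering of that last component in PyTorch; the class name, pooling scheme, and dimensions are assumptions for illustration, not the patented design.

```python
# Hypothetical sketch of the recommender described above (not the patented implementation).
import torch
import torch.nn as nn

class ConversationalRecommender(nn.Module):
    def __init__(self, vocab_size: int, num_items: int, dim: int = 128):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)   # encodes the conversation context
        self.item_emb = nn.Embedding(num_items, dim)     # item vectors built from item training data
        self.context_proj = nn.Linear(dim, dim)

    def forward(self, context_tokens: torch.Tensor) -> torch.Tensor:
        # Mean-pool the context tokens into a single query vector.
        ctx = self.token_emb(context_tokens).mean(dim=1)
        query = self.context_proj(ctx)
        # Score every item against the conversation context.
        return query @ self.item_emb.weight.T            # (batch, num_items)

model = ConversationalRecommender(vocab_size=10000, num_items=500)
context = torch.randint(0, 10000, (1, 20))               # toy preprocessed interaction data
scores = model(context)
recommended = scores.topk(k=5, dim=-1).indices           # set of recommended items
print(recommended)
```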
  • Patent number: 11423238
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and a multitask learning method that simultaneously learns subword embedding and skip-thought sentence embedding are provided as ways of applying intra-sentence contextual information to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach, in a bag-of-words form, to agglutinative languages such as Korean. Also, skip-thought sentence embedding learning is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: August 23, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
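The claim combines subword embedding with a skip-thought objective via multitask learning. The following is a hedged sketch of one way the joint training could look, assuming PyTorch; the mean-pooled bag-of-subwords sentence vector and the MSE neighbor-prediction loss are illustrative assumptions, not the patented formulation.

```python
# Illustrative multitask sketch: subword embeddings pooled into sentence vectors,
# with a skip-thought-style objective that predicts adjacent sentence vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubwordSkipThought(nn.Module):
    def __init__(self, num_subwords: int, dim: int = 100):
        super().__init__()
        self.subword_emb = nn.Embedding(num_subwords, dim)  # bag-of-subwords lookup table
        self.prev_head = nn.Linear(dim, dim)                 # predicts the previous sentence vector
        self.next_head = nn.Linear(dim, dim)                 # predicts the next sentence vector

    def embed_sentence(self, subword_ids: torch.Tensor) -> torch.Tensor:
        # Bag-of-words style sentence embedding: mean of subword vectors.
        return self.subword_emb(subword_ids).mean(dim=1)

    def skip_thought_loss(self, prev_ids, cur_ids, next_ids) -> torch.Tensor:
        cur = self.embed_sentence(cur_ids)
        prev_target = self.embed_sentence(prev_ids).detach()
        next_target = self.embed_sentence(next_ids).detach()
        # The center sentence must reconstruct its neighbors (skip-thought objective).
        return (F.mse_loss(self.prev_head(cur), prev_target)
                + F.mse_loss(self.next_head(cur), next_target))

model = SubwordSkipThought(num_subwords=20000)
prev_s, cur_s, next_s = (torch.randint(0, 20000, (1, 12)) for _ in range(3))
loss = model.skip_thought_loss(prev_s, cur_s, next_s)
loss.backward()
```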
  • Publication number: 20220180071
    Abstract: Provided are a system and method for adaptive masking and non-directional language understanding and generation. The system for adaptive masking and non-directional language understanding and generation according to the present invention includes an encoder unit including an adaptive masking block for performing masking on training data, a language generator for restoring masked words, and an encoder for detecting whether or not the restored sentence construction words are original, and a decoder unit including a generation word position detector for detecting a position of a word to be generated next, a language generator for determining a word suitable for the corresponding position, and a non-directional training data generator for decoder training.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 9, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Ran HAN
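The encoder unit described above masks words, restores them with a language generator, and has the encoder detect whether each restored word is original. A minimal sketch of that detection loop follows, assuming PyTorch; the fixed 15% masking rate stands in for the adaptive masking block, and only the detector is trained in this toy example.

```python
# Hedged sketch of the encoder-side idea: mask words, restore them with a generator,
# then have the encoder detect which restored words are the originals.
import torch
import torch.nn as nn

vocab_size, dim, seq_len = 1000, 64, 10

generator = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
detector = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, 1))  # original vs. replaced

tokens = torch.randint(0, vocab_size, (1, seq_len))
mask = torch.rand(1, seq_len) < 0.15     # adaptive policy is an assumption; fixed rate here
MASK_ID = 0

masked = tokens.clone()
masked[mask] = MASK_ID

restored = generator(masked).argmax(dim=-1)          # language generator restores masked words
inputs = torch.where(mask, restored, tokens)         # sentence rebuilt from restored words
is_original_logits = detector(inputs).squeeze(-1)    # encoder judges each word: original or not
labels = (inputs == tokens).float()                  # ground truth for the detection task
loss = nn.functional.binary_cross_entropy_with_logits(is_original_logits, labels)
loss.backward()                                      # trains only the detector; generator training omitted
```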
  • Publication number: 20210398004
    Abstract: Provided are a method and apparatus for online Bayesian few-shot learning that integrate multi-domain-based online learning and few-shot learning when the domains of tasks with data are given sequentially.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 23, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
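The abstract is high level, so the snippet below is only a toy illustration of the Bayesian ingredient: a Gaussian posterior over a class prototype updated online as few-shot support sets from successive domains arrive. The conjugate update and all constants are assumptions, not the patented algorithm.

```python
# Toy illustration (an assumption, not the patented algorithm): a Gaussian posterior over a
# class prototype is updated online as few-shot tasks from successive domains arrive.
import numpy as np

prior_mean = np.zeros(4)   # prototype mean before seeing any task
prior_var = 1.0            # prior variance (isotropic)
obs_var = 0.25             # assumed observation noise

def posterior_update(mean, var, few_shot_examples):
    """Conjugate Gaussian update given a handful of support examples."""
    n = len(few_shot_examples)
    sample_mean = np.mean(few_shot_examples, axis=0)
    new_var = 1.0 / (1.0 / var + n / obs_var)
    new_mean = new_var * (mean / var + n * sample_mean / obs_var)
    return new_mean, new_var

rng = np.random.default_rng(0)
for domain in range(3):                                        # domains/tasks given sequentially
    support = rng.normal(loc=domain, scale=0.5, size=(5, 4))   # 5-shot support set
    prior_mean, prior_var = posterior_update(prior_mean, prior_var, support)
    print(f"after domain {domain}: mean={prior_mean.round(2)}, var={prior_var:.3f}")
```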
  • Publication number: 20210374545
    Abstract: A knowledge increasing method includes calculating uncertainty of knowledge obtained from a neural network using an explicit memory, determining the insufficiency of the knowledge on the basis of the calculated uncertainty, obtaining additional data (learning data) for increasing insufficient knowledge, and training the neural network by using the additional data to autonomously increase knowledge.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 2, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Yoo Rhee OH, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
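The abstract lays out a loop: measure uncertainty, decide whether knowledge is insufficient, obtain additional data, and retrain. A compact sketch of that loop is below; using predictive entropy as the uncertainty measure and the callback interfaces are assumptions, since the abstract does not specify them.

```python
# Sketch of the loop described above; predictive entropy as the uncertainty measure is an assumption.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy of the model's class probabilities for one query."""
    return float(-(probs * np.log(probs + 1e-12)).sum())

def knowledge_is_insufficient(probs: np.ndarray, threshold: float = 1.0) -> bool:
    return predictive_entropy(probs) > threshold

def increase_knowledge(model_probs_for_queries, acquire_data, retrain):
    """If uncertainty is high for a query, acquire additional training data and retrain."""
    for query, probs in model_probs_for_queries:
        if knowledge_is_insufficient(probs):
            extra = acquire_data(query)      # obtain additional (learning) data
            retrain(extra)                   # train the neural network with the new data

# Toy usage with stand-in callbacks.
queries = [("q1", np.array([0.9, 0.05, 0.05])),   # confident -> no action
           ("q2", np.array([0.34, 0.33, 0.33]))]  # uncertain -> acquire and retrain
increase_knowledge(queries,
                   acquire_data=lambda q: [f"new example for {q}"],
                   retrain=lambda data: print("retraining on", data))
```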
  • Publication number: 20210089904
    Abstract: The present invention provides a new learning method in which regularization of a conventional model is reinforced by using an adversarial learning method. In addition, whereas a conventional method has the problem that a word embedding carries only a single meaning, the present invention solves this problem of the related art by applying a self-attention model.
    Type: Application
    Filed: September 17, 2020
    Publication date: March 25, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Yoo Rhee OH, Byung Hyun YOO, Ran HAN
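The abstract names adversarial learning as a regularizer and self-attention for context-dependent word meanings. The sketch below illustrates only the adversarial-regularization half, using an FGSM-style perturbation of word embeddings; that particular perturbation is an assumption, not a method disclosed in the abstract.

```python
# Hedged sketch: adversarial perturbation of word embeddings used as a regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb = nn.Embedding(1000, 32)
clf = nn.Linear(32, 2)
tokens = torch.randint(0, 1000, (4, 8))
labels = torch.randint(0, 2, (4,))

vectors = emb(tokens)
clean_loss = F.cross_entropy(clf(vectors.mean(dim=1)), labels)

# Perturb the embeddings in the direction that most increases the loss (FGSM-style),
# then penalize the loss on the perturbed input as a regularization term.
grad = torch.autograd.grad(clean_loss, vectors, retain_graph=True)[0]
perturbed = (vectors + 0.1 * grad.sign()).detach()
adv_loss = F.cross_entropy(clf(perturbed.mean(dim=1)), labels)

total_loss = clean_loss + adv_loss   # adversarial term reinforces regularization
total_loss.backward()
```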
  • Patent number: 10929612
    Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: February 23, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ho Young Jung, Hyun Woo Kim, Hwa Jeon Song, Eui Sok Chung, Jeon Gue Park
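The system pairs a first processor that learns sense-making from multimodal data with a second processor that generates additional training sets for it. The pure-Python sketch below mirrors that division of labor; the class interfaces and the keyword-matching "inference" are stand-in assumptions.

```python
# Structural sketch of the two-processor arrangement described above.
# The interfaces and data formats are assumptions for illustration only.
from typing import List, Tuple

class SenseMakingLearner:
    """Plays the role of the first processor: learns and answers from multimodal input."""
    def __init__(self):
        self.knowledge: List[Tuple[str, str]] = []

    def train(self, examples: List[Tuple[str, str]]) -> None:
        self.knowledge.extend(examples)

    def infer(self, modalities: Tuple[str, str]) -> str:
        text, image_tag = modalities
        for seen, answer in self.knowledge:
            if seen in text or seen == image_tag:
                return answer
        return "unknown"

class TrainingSetGenerator:
    """Plays the role of the second processor: generates new sense-making training sets."""
    def generate(self, topic: str) -> List[Tuple[str, str]]:
        return [(topic, f"sense-making result about {topic}")]

learner = SenseMakingLearner()
generator = TrainingSetGenerator()
learner.train(generator.generate("traffic sign"))    # the second processor feeds the first
print(learner.infer(("photo contains a traffic sign", "traffic sign")))
```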
  • Publication number: 20200219166
    Abstract: Provided are a method and apparatus for estimating a user's requirement through a neural network capable of reading and writing a working memory, and for providing fashion coordination knowledge appropriate for the requirement through the neural network using a long-term memory, by using a neural network with an explicit memory, in order to accurately provide the fashion coordination knowledge. The apparatus includes a language embedding unit for embedding a user's question and a previously created answer to acquire a digitized embedding vector; a fashion coordination knowledge creation unit for creating fashion coordination through the neural network having the explicit memory by using the embedding vector as an input; and a dialog creation unit for creating dialog content for configuring the fashion coordination through the neural network having the explicit memory by using the fashion coordination knowledge and the embedding vector as inputs.
    Type: Application
    Filed: December 12, 2019
    Publication date: July 9, 2020
    Inventors: Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
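The apparatus embeds the question and previous answer, then creates coordination knowledge and dialog through a neural network with an explicit memory. The sketch below shows only a hypothetical key-value memory read; the hashing stand-in for the language embedding unit and the attention-style read are assumptions.

```python
# Hedged sketch of an explicit (key-value) memory read used to pick fashion coordination
# knowledge from an embedded question; all shapes and the attention read are assumptions.
import torch
import torch.nn.functional as F

dim, num_slots = 64, 10
memory_keys = torch.randn(num_slots, dim)     # long-term memory: keys
memory_values = torch.randn(num_slots, dim)   # long-term memory: stored coordination knowledge

def embed(question: str, previous_answer: str) -> torch.Tensor:
    # Stand-in for the language embedding unit: a fixed random projection seeded by the text.
    torch.manual_seed(abs(hash(question + previous_answer)) % (2**31))
    return torch.randn(dim)

def read_memory(query: torch.Tensor) -> torch.Tensor:
    # Attention over memory slots: softmax similarity, then a weighted sum of values.
    weights = F.softmax(memory_keys @ query, dim=0)
    return weights @ memory_values            # retrieved coordination knowledge vector

query = embed("What should I wear to a summer wedding?", "")
knowledge = read_memory(query)
print(knowledge.shape)                        # would feed the dialog creation unit downstream
```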
  • Publication number: 20200175119
    Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and a multitask learning method that simultaneously learns subword embedding and skip-thought sentence embedding are provided as ways of applying intra-sentence contextual information to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach, in a bag-of-words form, to agglutinative languages such as Korean. Also, skip-thought sentence embedding learning is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning.
    Type: Application
    Filed: November 1, 2019
    Publication date: June 4, 2020
    Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
  • Publication number: 20190325025
    Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
    Type: Application
    Filed: December 12, 2018
    Publication date: October 24, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ho Young JUNG, Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Jeon Gue PARK
  • Patent number: 10402494
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: September 3, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
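The method pairs sentences drawn from different documents and runs each pair through a sequence-to-sequence model to generate new sentences for the expanded text. The sketch below covers the cross-document pairing step with a trivial stand-in for the generator; the word-overlap heuristic and the string concatenation are assumptions.

```python
# Sketch of the text-expansion flow described above: pair sentences from different documents,
# then generate a new sentence from each pair. The word-overlap pairing and the trivial
# "generator" are assumptions standing in for the sequence-to-sequence model.
from itertools import product

def sentence_pairs_across_documents(documents):
    """Yield (sentence_a, sentence_b) only when the two sentences come from different documents."""
    for (i, doc_a), (j, doc_b) in product(enumerate(documents), repeat=2):
        if i >= j:
            continue
        for a, b in product(doc_a, doc_b):
            if set(a.lower().split()) & set(b.lower().split()):   # loose relatedness heuristic
                yield a, b

def generate_from_pair(a: str, b: str) -> str:
    # Placeholder for the seq2seq decoder output conditioned on the encoded pair.
    return a.rstrip(".") + ", and " + b[0].lower() + b[1:]

documents = [["The model expands input text.", "Training data is scarce."],
             ["Expanded text improves language models."]]
expanded = [generate_from_pair(a, b) for a, b in sentence_pairs_across_documents(documents)]
print(expanded)
```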
  • Publication number: 20180157640
    Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
    Type: Application
    Filed: February 22, 2017
    Publication date: June 7, 2018
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
  • Patent number: 9959862
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: May 1, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung Ok Kang, Jeon Gue Park, Hwa Jeon Song, Yun Keun Lee, Eui Sok Chung
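The key structural point is that the output nodes of the DNN correspond to a multi-set state cluster built from the per-set sound-model state sets. The sketch below wires that up in PyTorch; merging the state sets by simple union, rather than a learned clustering, is an assumption.

```python
# Sketch of the multi-set DNN arrangement: per-set state inventories are merged into one
# multi-set state cluster that forms the output layer. Merging by union is an assumption.
import torch
import torch.nn as nn

state_sets = {"set_A": ["a_sil", "a_vowel", "a_stop"],
              "set_B": ["b_sil", "b_vowel", "b_nasal"]}

# Build the multi-set state cluster (here: ordered union of all per-set states).
cluster = sorted({state for states in state_sets.values() for state in states})
state_to_index = {s: i for i, s in enumerate(cluster)}

feature_dim = 40                                 # e.g., filterbank features per frame
dnn = nn.Sequential(
    nn.Linear(feature_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, len(cluster)),                # output nodes = multi-set state cluster
)

frames = torch.randn(8, feature_dim)             # frames drawn from the multi-set training data
targets = torch.randint(0, len(cluster), (8,))   # frame-level state labels mapped via state_to_index
loss = nn.functional.cross_entropy(dnn(frames), targets)
loss.backward()
```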
  • Publication number: 20170206894
    Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
    Type: Application
    Filed: June 20, 2016
    Publication date: July 20, 2017
    Inventors: Byung Ok KANG, Jeon Gue PARK, Hwa Jeon SONG, Yun Keun LEE, Eui Sok CHUNG
  • Patent number: 9288301
    Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: March 15, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Eui-Sok Chung, Yun-Keun Lee, Jeon-Gue Park, Ho-Young Jung, Hoon Chung
  • Publication number: 20150334443
    Abstract: A speech recognition broadcasting apparatus that uses a smart remote control and a controlling method thereof, the method including receiving a runtime resource for speech recognition from a speech recognition server; receiving a speech signal from the smart remote control; recognizing the speech signal based on the received runtime resource for speech recognition; transmitting a result of recognition of the speech signal to the smart remote control; receiving at least one of EPG (Electronic Program Guide) search information or control information of the speech recognition broadcasting apparatus that are based on the result of recognition from the smart remote control; and outputting a search screen or controlling the speech recognition broadcasting apparatus based on the EPG search information or control information of the speech recognition broadcasting apparatus.
    Type: Application
    Filed: February 3, 2015
    Publication date: November 19, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeon Gue PARK, Eui Sok CHUNG
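The abstract is essentially a message flow: fetch a runtime resource from the server, recognize speech from the remote, return the result, then act on EPG search or control information sent back. The snippet below traces that flow with stub data; every interface and value shown is an assumption.

```python
# Control-flow sketch of the interaction described above; all stubs are assumptions standing in
# for the speech-recognition server, the smart remote control, and the broadcasting apparatus.

def recognize(speech: bytes, runtime_resource: dict) -> str:
    # Stand-in recognizer: a real apparatus would decode with the downloaded resource.
    return runtime_resource["stub_transcripts"].get(speech, "")

runtime_resource = {"stub_transcripts": {b"audio-1": "find documentaries"}}  # from the server
speech_from_remote = b"audio-1"                                              # from the smart remote

result = recognize(speech_from_remote, runtime_resource)                     # recognition result
reply_from_remote = {"epg_search": f"EPG results for '{result}'"}            # remote answers back

if "epg_search" in reply_from_remote:
    print("showing search screen:", reply_from_remote["epg_search"])         # output a search screen
elif "control" in reply_from_remote:
    print("applying control:", reply_from_remote["control"])                 # or control the apparatus
```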
  • Publication number: 20140378185
    Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
    Type: Application
    Filed: March 27, 2014
    Publication date: December 25, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui-Sok CHUNG, Yun-Keun LEE, Jeon-Gue PARK, Ho-Young JUNG, Hoon CHUNG
  • Publication number: 20140163986
    Abstract: Disclosed herein are a voice-based CAPTCHA method and apparatus which can perform a CAPTCHA procedure using the voice of a human being. In the voice-based CAPTCHA method, a plurality of uttered sounds of a user are collected. A start point and an end point of the voice in each of the collected uttered sounds are detected, and speech sections are thereby obtained. The uttered sounds of the detected speech sections are compared with reference uttered sounds to determine whether they are correctly uttered. If they are determined to be correctly uttered, it is then determined whether the uttered sounds have been made by an identical speaker.
    Type: Application
    Filed: December 3, 2013
    Publication date: June 12, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sung-Joo LEE, Ho-Young JUNG, Hwa-Jeon SONG, Eui-Sok CHUNG, Byung-Ok KANG, Hoon CHUNG, Jeon-Gue PARK, Hyung-Bae JEON, Yoo-Rhee OH, Yun-Keun LEE
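The pipeline is: detect speech sections via start and end points, compare each section with reference utterances, and then check that the utterances come from the same speaker. The sketch below implements a toy version of the first two steps; the energy-based endpointing and the correlation check are assumptions, and the speaker check is left as a comment.

```python
# Pipeline sketch of the voice CAPTCHA flow above. The energy-based endpointing and the
# simple correlation check are assumptions; the abstract does not disclose these specifics.
import numpy as np

def detect_speech_section(signal: np.ndarray, frame: int = 160, threshold: float = 0.02):
    """Return (start, end) sample indices of the region whose frame energy exceeds a threshold."""
    energies = np.array([np.mean(signal[i:i + frame] ** 2)
                         for i in range(0, len(signal) - frame, frame)])
    active = np.where(energies > threshold)[0]
    if active.size == 0:
        return None
    return active[0] * frame, (active[-1] + 1) * frame

def matches_reference(section: np.ndarray, reference: np.ndarray, min_corr: float = 0.5) -> bool:
    n = min(len(section), len(reference))
    corr = np.corrcoef(section[:n], reference[:n])[0, 1]
    return corr > min_corr                            # "correctly uttered" check (stand-in)

rng = np.random.default_rng(1)
utterance = np.concatenate([rng.normal(0, 0.005, 800),   # silence
                            rng.normal(0, 0.3, 1600),    # speech
                            rng.normal(0, 0.005, 800)])  # silence
span = detect_speech_section(utterance)
print("speech section:", span)
if span:
    section = utterance[span[0]:span[1]]
    print("matches reference:", matches_reference(section, section.copy()))
    # A same-speaker check across all collected utterances would follow here.
```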
  • Publication number: 20140129233
    Abstract: Disclosed are an apparatus and system for a user interface. The apparatus comprises a body unit including a groove that corresponds to the structure of the oral cavity and is operable to be mounted on the upper part of the oral cavity; a user input unit receiving a signal from the user's tongue in a part of the body unit; a communication unit transmitting the signal received from the user input unit; and a charging unit supplying electrical energy generated from vibration or pressure caused by movement of the user's tongue.
    Type: Application
    Filed: March 29, 2013
    Publication date: May 8, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eui Sok CHUNG, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG, Jeom Ja KANG
  • Patent number: 8666739
    Abstract: A method of the present invention may include receiving a speech feature vector converted from a speech signal; performing a first search by applying a first language model to the received speech feature vector and outputting a word lattice and a first acoustic score of the word lattice as a continuous speech recognition result; outputting a second acoustic score as a phoneme recognition result by applying an acoustic model to the speech feature vector; comparing the first acoustic score of the continuous speech recognition result with the second acoustic score of the phoneme recognition result; outputting a first language model weight when the first acoustic score of the continuous speech recognition result is better than the second acoustic score of the phoneme recognition result; and performing a second search by applying a second language model weight, which is the same as the output first language model weight, to the word lattice.
    Type: Grant
    Filed: December 13, 2011
    Date of Patent: March 4, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hyung Bae Jeon, Yun Keun Lee, Eui Sok Chung, Jong Jin Kim, Hoon Chung, Jeon Gue Park, Ho Young Jung, Byung Ok Kang, Ki Young Park, Sung Joo Lee, Jeom Ja Kang, Hwa Jeon Song
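The logic is a two-pass search: the first pass yields a word lattice and an acoustic score, a phoneme recognizer yields a second acoustic score, and comparing the two selects the language-model weight used to rescore the lattice in the second pass. The sketch below reproduces that decision at the score level with toy numbers; the rescoring formula and all values are placeholders, not the patented decoder.

```python
# Score-level sketch of the two-pass decoding logic described above; all numbers are toy
# placeholders, and the rescoring formula is an assumption.
def choose_lm_weight(continuous_acoustic_score: float,
                     phoneme_acoustic_score: float,
                     first_pass_weight: float,
                     fallback_weight: float) -> float:
    """Keep the first-pass LM weight when the continuous result out-scores the phoneme result."""
    if continuous_acoustic_score > phoneme_acoustic_score:
        return first_pass_weight
    return fallback_weight

def rescore_lattice(lattice, lm_weight: float):
    # Second search: combine acoustic and language scores on each lattice path.
    return max(lattice, key=lambda path: path["acoustic"] + lm_weight * path["lm"])

lattice = [{"words": "recognize speech", "acoustic": -120.0, "lm": -8.0},
           {"words": "wreck a nice beach", "acoustic": -118.0, "lm": -15.0}]

weight = choose_lm_weight(continuous_acoustic_score=-120.0,
                          phoneme_acoustic_score=-135.0,
                          first_pass_weight=12.0,
                          fallback_weight=8.0)
print(rescore_lattice(lattice, weight)["words"])
```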