Patents by Inventor Eui-Sok Chung
Eui-Sok Chung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240232648
Abstract: Disclosed herein are a multimodal unsupervised meta-learning method and apparatus. The multimodal unsupervised meta-learning method includes training, by a multimodal unsupervised feature representation learning unit, an encoder configured to extract features of individual single-modal signals from a source multimodal dataset, generating, by a multimodal unsupervised task generation unit, a source task based on the features of individual single-modal signals, deriving, by a multimodal unsupervised learning method derivation unit, a learning method from the source task using the encoder, and training, by a target task performance unit, a model based on the learning method and features extracted from a small number of target datasets by the encoder, thus performing the target task.
Type: Application
Filed: December 12, 2023
Publication date: July 11, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyun-Woo KIM, Hwa-Jeon SONG, Jeong-Min YANG, Byung-Hyun YOO, Eui-Sok CHUNG, Ran HAN
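The abstract outlines a pipeline: unsupervised per-modality feature learning, unsupervised task generation, derivation of a learning method, and few-shot training on a target task. Below is a minimal, hypothetical sketch of that kind of pipeline, not the patented method: k-means pseudo-labels stand in for task generation and a nearest-prototype learner stands in for the derived learning method; `encode_modality`, the synthetic data, and all sizes are invented, and NumPy/scikit-learn are assumed available.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def encode_modality(x, proj):
    """Stand-in 'encoder': a fixed random projection per modality."""
    return np.tanh(x @ proj)

def make_pseudo_tasks(features, n_way=5, k_shot=1, n_query=5, n_tasks=20):
    """Unsupervised task generation: cluster the features into pseudo-classes,
    then sample N-way K-shot episodes from clusters that are large enough."""
    labels = KMeans(n_clusters=n_way * 2, n_init=10, random_state=0).fit_predict(features)
    valid = [c for c in np.unique(labels) if (labels == c).sum() >= k_shot + n_query]
    tasks = []
    for _ in range(n_tasks):
        classes = rng.choice(valid, size=n_way, replace=False)
        support, query = [], []
        for c_idx, c in enumerate(classes):
            idx = rng.permutation(np.where(labels == c)[0])
            support += [(features[i], c_idx) for i in idx[:k_shot]]
            query += [(features[i], c_idx) for i in idx[k_shot:k_shot + n_query]]
        tasks.append((support, query))
    return tasks

def prototype_accuracy(support, query):
    """Nearest-prototype learner standing in for the 'derived learning method'."""
    protos = {}
    for f, y in support:
        protos.setdefault(y, []).append(f)
    protos = {y: np.mean(fs, axis=0) for y, fs in protos.items()}
    correct = sum(1 for f, y in query
                  if min(protos, key=lambda p: np.linalg.norm(f - protos[p])) == y)
    return correct / len(query)

# Source "multimodal" data: two synthetic single-modal signals, fused by concatenation.
audio = rng.normal(size=(300, 40))
video = rng.normal(size=(300, 64))
features = np.hstack([encode_modality(audio, rng.normal(size=(40, 16))),
                      encode_modality(video, rng.normal(size=(64, 16)))])
tasks = make_pseudo_tasks(features)
print("mean episode accuracy:", np.mean([prototype_accuracy(s, q) for s, q in tasks]))
```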
-
Publication number: 20240160859
Abstract: The present invention relates to a multi-modality system for recommending multiple items using an interaction and a method of operating the same. The multi-modality system includes an interaction data preprocessing module that preprocesses an interaction data set and converts the preprocessed interaction data set into interaction training data; an item data preprocessing module that preprocesses item information data and converts the preprocessed item information data into item training data; and a learning module that includes a neural network model that is trained using the interaction training data and the item training data and outputs a result including a set of recommended items using a conversation context with a user as input.
Type: Application
Filed: November 13, 2023
Publication date: May 16, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Jeong Min YANG, Byung Hyun YOO, Ran HAN
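As a toy illustration of the shape of that pipeline (interaction preprocessing, item preprocessing, and a scoring step that takes a conversation context), here is a content-based stand-in. It is not the patented neural model; the items, interaction history, and all function names are invented for this sketch.

```python
from collections import Counter
import math

items = {
    "i1": "wireless noise cancelling headphones",
    "i2": "mechanical keyboard with rgb lighting",
    "i3": "lightweight running shoes",
    "i4": "over ear studio headphones",
}
interactions = {"user_a": ["i1", "i4"], "user_b": ["i3"]}  # past clicks/purchases

def preprocess_item(text):
    """Item preprocessing: turn item metadata into a sparse term-count vector."""
    return Counter(text.lower().split())

def preprocess_interactions(history, item_vecs):
    """Interaction preprocessing: a user profile is the sum of interacted item vectors."""
    profile = Counter()
    for item_id in history:
        profile.update(item_vecs[item_id])
    return profile

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, context, k=2):
    """Score items against the user profile combined with the conversation context."""
    item_vecs = {i: preprocess_item(t) for i, t in items.items()}
    query = preprocess_interactions(interactions[user], item_vecs)
    query.update(preprocess_item(context))
    ranked = sorted(items, key=lambda i: cosine(query, item_vecs[i]), reverse=True)
    return ranked[:k]

print(recommend("user_a", "I need headphones for the studio"))
```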
-
Patent number: 11423238
Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and a multitask learning methodology that simultaneously learns subword embedding and skip-thought sentence embedding are provided as a way to apply intra-sentence contextual information to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, the skip-thought sentence embedding learning methodology is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning.
Type: Grant
Filed: November 1, 2019
Date of Patent: August 23, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
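The sketch below illustrates only the core idea named in the abstract: a sentence is embedded as a bag of subword (character n-gram) embeddings, and a skip-thought-style objective pushes each sentence's embedding to predict its neighbouring sentence. The separate subword (multitask) objective is omitted, PyTorch is assumed available, and the tiny Korean corpus and all sizes are invented; this is not the patented procedure.

```python
import torch
import torch.nn as nn

def char_ngrams(word, n=3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

corpus = [
    "나는 학교에 간다",
    "학교에서 친구를 만난다",
    "친구와 함께 공부한다",
    "공부가 끝나면 집에 간다",
]
vocab = {g for sent in corpus for w in sent.split() for g in char_ngrams(w)}
idx = {g: i for i, g in enumerate(sorted(vocab))}

emb = nn.Embedding(len(idx), 32)          # subword embedding table
predict_next = nn.Linear(32, 32)          # skip-thought-style "next sentence" head
opt = torch.optim.Adam(list(emb.parameters()) + list(predict_next.parameters()), lr=1e-2)

def sentence_vec(sent):
    """Bag-of-subwords sentence embedding: sum of character n-gram embeddings."""
    ids = torch.tensor([idx[g] for w in sent.split() for g in char_ngrams(w)])
    return emb(ids).sum(dim=0)

for _ in range(200):
    loss = torch.tensor(0.0)
    for cur, nxt in zip(corpus[:-1], corpus[1:]):
        pred = predict_next(sentence_vec(cur))
        target = sentence_vec(nxt).detach()   # predict the neighbouring sentence vector
        loss = loss + nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("trained; example sentence vector norm:", sentence_vec(corpus[0]).norm().item())
```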
-
Publication number: 20220180071
Abstract: Provided are a system and method for adaptive masking and non-directional language understanding and generation. The system for adaptive masking and non-directional language understanding and generation according to the present invention includes an encoder unit including an adaptive masking block for performing masking on training data, a language generator for restoring masked words, and an encoder for detecting whether or not the restored sentence construction words are original, and a decoder unit including a generation word position detector for detecting a position of a word to be generated next, a language generator for determining a word suitable for the corresponding position, and a non-directional training data generator for decoder training.
Type: Application
Filed: December 2, 2021
Publication date: June 9, 2022
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Ran HAN
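A small data-flow sketch of the encoder-side idea only: mask some words, have a (here, trivial frequency-based) generator restore them, and build original-vs-restored labels for a detector, similar in spirit to replaced-token detection. The "adaptive" rule (mask rare words more often), the toy corpus, and the generator are invented for illustration; the decoder side of the system is not shown.

```python
import random
from collections import Counter

random.seed(0)
corpus = ["the cat sat on the mat", "the dog sat on the rug", "a cat chased the dog"]
unigram = Counter(w for s in corpus for w in s.split())

def adaptive_mask(tokens, base_rate=0.15):
    """Mask rare words more aggressively than frequent ones (one simple 'adaptive' rule)."""
    out = []
    for t in tokens:
        rate = base_rate * (2.0 if unigram[t] == 1 else 1.0)
        out.append("[MASK]" if random.random() < rate else t)
    return out

def restore(tokens):
    """Toy generator: replace every [MASK] with the globally most frequent word."""
    fill = unigram.most_common(1)[0][0]
    return [fill if t == "[MASK]" else t for t in tokens]

for sent in corpus:
    tokens = sent.split()
    masked = adaptive_mask(tokens, base_rate=0.4)
    restored = restore(masked)
    detector_labels = [int(r == o) for r, o in zip(restored, tokens)]  # 1 = word is original
    print(tokens, "->", restored, detector_labels)
```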
-
Publication number: 20210398004
Abstract: Provided are a method and apparatus for online Bayesian few-shot learning. The present invention provides a method and apparatus for online Bayesian few-shot learning in which multi-domain-based online learning and few-shot learning are integrated when domains of tasks having data are sequentially given.
Type: Application
Filed: June 21, 2021
Publication date: December 23, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
-
Publication number: 20210374545
Abstract: A knowledge increasing method includes calculating uncertainty of knowledge obtained from a neural network using an explicit memory, determining the insufficiency of the knowledge on the basis of the calculated uncertainty, obtaining additional data (learning data) for increasing insufficient knowledge, and training the neural network by using the additional data to autonomously increase knowledge.
Type: Application
Filed: May 27, 2021
Publication date: December 2, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Yoo Rhee OH, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
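The loop the abstract describes (measure uncertainty, decide knowledge is insufficient, obtain additional data, retrain) resembles uncertainty-driven active learning. The sketch below uses a plain logistic regression and predictive entropy in place of the memory-augmented network and its uncertainty measure; the data and thresholds are synthetic, and scikit-learn is assumed available. Purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool of examples: two Gaussian classes.
pool_x = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
pool_y = np.array([0] * 200 + [1] * 200)
labelled = list(rng.choice(200, 5, replace=False)) + list(200 + rng.choice(200, 5, replace=False))

def entropy(p):
    p = np.clip(p, 1e-9, 1.0)
    return -(p * np.log(p)).sum(axis=1)

for step in range(5):
    model = LogisticRegression().fit(pool_x[labelled], pool_y[labelled])
    uncertainty = entropy(model.predict_proba(pool_x))
    print(f"step {step}: accuracy={model.score(pool_x, pool_y):.3f}, labelled={len(labelled)}")
    if uncertainty.max() < 0.3:                  # knowledge judged sufficient
        break
    # "Obtain additional data": label the most uncertain points not yet labelled.
    candidates = [i for i in np.argsort(uncertainty)[::-1] if i not in labelled]
    labelled += candidates[:10]
```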
-
Publication number: 20210089904
Abstract: The present invention provides a new learning method in which regularization of a conventional model is reinforced by using an adversarial learning method. Conventional methods also suffer from word embeddings having only a single meaning; the present invention solves this problem of the related art by applying a self-attention model.
Type: Application
Filed: September 17, 2020
Publication date: March 25, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Yoo Rhee OH, Byung Hyun YOO, Ran HAN
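A compact sketch combining the two ingredients the abstract names: context-dependent word representations via self-attention, and adversarial perturbation of the embeddings as an extra regularizer. The FGSM-style perturbation is one common adversarial-training choice, not necessarily the patent's; the toy sentiment data, sizes, and training setup are invented, and PyTorch is assumed available.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
sentences = [("good great film", 1), ("bad awful film", 0),
             ("great acting good plot", 1), ("awful plot bad acting", 0)]
vocab = {w: i for i, w in enumerate(sorted({w for s, _ in sentences for w in s.split()}))}

emb = nn.Embedding(len(vocab), 16)
attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
clf = nn.Linear(16, 2)
opt = torch.optim.Adam([*emb.parameters(), *attn.parameters(), *clf.parameters()], lr=1e-2)

def forward_from_embeddings(e):
    ctx, _ = attn(e, e, e)              # self-attention gives context-dependent word vectors
    return clf(ctx.mean(dim=1))

for epoch in range(100):
    for text, label in sentences:
        ids = torch.tensor([[vocab[w] for w in text.split()]])
        y = torch.tensor([label])
        e = emb(ids)
        loss = F.cross_entropy(forward_from_embeddings(e), y)
        # Adversarial regularization: perturb the embeddings along the loss gradient.
        grad = torch.autograd.grad(loss, e, retain_graph=True)[0]
        adv_loss = F.cross_entropy(forward_from_embeddings(e + 0.1 * grad.sign()), y)
        opt.zero_grad()
        (loss + adv_loss).backward()
        opt.step()

ids = torch.tensor([[vocab[w] for w in "good plot".split()]])
print("prediction for 'good plot':", forward_from_embeddings(emb(ids)).argmax(dim=1).item())
```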
-
Patent number: 10929612
Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
Type: Grant
Filed: December 12, 2018
Date of Patent: February 23, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ho Young Jung, Hyun Woo Kim, Hwa Jeon Song, Eui Sok Chung, Jeon Gue Park
-
Publication number: 20200219166
Abstract: Provided are a method and apparatus for estimating a user's requirement through a neural network capable of reading and writing a working memory, and for providing fashion coordination knowledge appropriate for the requirement through the neural network using a long-term memory, by using a neural network with an explicit memory in order to provide the fashion coordination knowledge accurately. The apparatus includes a language embedding unit for embedding a user's question and a previously created answer to acquire a digitized embedding vector; a fashion coordination knowledge creation unit for creating fashion coordination through the neural network having the explicit memory by using the embedding vector as an input; and a dialog creation unit for creating dialog content for configuring the fashion coordination through the neural network having the explicit memory by using the fashion coordination knowledge and the embedding vector as an input.
Type: Application
Filed: December 12, 2019
Publication date: July 9, 2020
Inventors: Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
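As a toy illustration of the pipeline's shape (embed the question and previous answer, read coordination knowledge from an explicit memory, phrase a reply), here is a key-value lookup stand-in. The hashed bag-of-words embedding and the memory entries are invented; the patent's neural components are not reproduced.

```python
import numpy as np

DIM = 64

def embed(text):
    """Hashed bag-of-words embedding of a question (plus the previous answer)."""
    v = np.zeros(DIM)
    for w in text.lower().split():
        v[hash(w) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Explicit memory: keys describe a situation, values are coordination knowledge.
memory = [
    ("formal office meeting in winter", "wool coat, grey suit, leather shoes"),
    ("casual weekend brunch in summer", "linen shirt, chino shorts, loafers"),
    ("evening party", "black dress, heels, clutch bag"),
]
keys = np.stack([embed(k) for k, _ in memory])

def coordinate(question, prev_answer=""):
    q = embed(question + " " + prev_answer)
    scores = keys @ q                       # content-based memory read
    best = memory[int(np.argmax(scores))]
    return f"For '{question}', you could try: {best[1]}."

print(coordinate("What should I wear to a winter office meeting"))
```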
-
Publication number: 20200175119
Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and a multitask learning methodology that simultaneously learns subword embedding and skip-thought sentence embedding are provided as a way to apply intra-sentence contextual information to subword embedding during subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, the skip-thought sentence embedding learning methodology is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning.
Type: Application
Filed: November 1, 2019
Publication date: June 4, 2020
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
-
Publication number: 20190325025
Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
Type: Application
Filed: December 12, 2018
Publication date: October 24, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ho Young JUNG, Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Jeon Gue PARK
-
Patent number: 10402494
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Grant
Filed: February 22, 2017
Date of Patent: September 3, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
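The sketch below shows only the data-handling side of the abstract: pair up sentences that come from different documents, feed each pair to a sequence-to-sequence model, and append the generated sentences to form the expanded text. Pairing every cross-document combination is one simple reading of the extraction step, the example documents are invented, and `generate` is a placeholder for the seq2seq decoder rather than a real model.

```python
from itertools import combinations

documents = [
    ["the bank approved the loan", "interest rates rose last month"],
    ["the loan application was rejected", "customers complained about fees"],
    ["rates are expected to fall", "the bank opened a new branch"],
]

def cross_document_pairs(docs):
    """Yield (sentence_a, sentence_b) where the two sentences come from different documents."""
    for doc_a, doc_b in combinations(docs, 2):
        for sa in doc_a:
            for sb in doc_b:
                yield sa, sb

def expand(docs, generate=lambda pair: f"[seq2seq output for: {pair}]"):
    """Expanded text = the original sentences plus one generated sentence per pair.
    `generate` is a stub standing in for the sequence-to-sequence model."""
    generated = [generate(f"{a} [SEP] {b}") for a, b in cross_document_pairs(docs)]
    return [s for d in docs for s in d] + generated

expanded = expand(documents)
print(len(expanded), "sentences after expansion; e.g.:", expanded[-1])
```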
-
Publication number: 20180157640
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Application
Filed: February 22, 2017
Publication date: June 7, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
-
Patent number: 9959862
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
Type: Grant
Filed: June 20, 2016
Date of Patent: May 1, 2018
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung Ok Kang, Jeon Gue Park, Hwa Jeon Song, Yun Keun Lee, Eui Sok Chung
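A skeleton of the training setup the abstract outlines: several training-speech sets each contribute their own state inventory, the inventories are merged into one shared output layer (the "multi-set state cluster"), and a single DNN is trained on the pooled data. The features and state labels below are synthetic placeholders, the network sizes are arbitrary, and PyTorch is assumed available; this is not the patented clustering procedure itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two training sets with their own state inventories (e.g., different recording conditions).
states_per_set = {"set_a": ["a_s0", "a_s1", "a_s2"], "set_b": ["b_s0", "b_s1"]}
# Multi-set state cluster: a single index space over the union of all states.
cluster = {s: i for i, s in enumerate(sorted(sum(states_per_set.values(), [])))}

def synthetic_batch(set_name, n=64, feat_dim=40):
    """Placeholder acoustic features and state labels for one training set."""
    x = torch.randn(n, feat_dim)
    states = states_per_set[set_name]
    y = torch.tensor([cluster[states[i % len(states)]] for i in range(n)])
    return x, y

dnn = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, len(cluster)))   # output nodes = multi-set state cluster
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)

for step in range(100):
    for set_name in states_per_set:                 # pooled multi-set training data
        x, y = synthetic_batch(set_name)
        loss = nn.functional.cross_entropy(dnn(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
print("final loss:", loss.item())
```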
-
Publication number: 20170206894
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
Type: Application
Filed: June 20, 2016
Publication date: July 20, 2017
Inventors: Byung Ok KANG, Jeon Gue PARK, Hwa Jeon SONG, Yun Keun LEE, Eui Sok CHUNG
-
Patent number: 9288301
Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
Type: Grant
Filed: March 27, 2014
Date of Patent: March 15, 2016
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui-Sok Chung, Yun-Keun Lee, Jeon-Gue Park, Ho-Young Jung, Hoon Chung
-
Publication number: 20150334443
Abstract: Provided are a speech recognition broadcasting apparatus that uses a smart remote control and a controlling method thereof, the method including: receiving a runtime resource for speech recognition from a speech recognition server; receiving a speech signal from the smart remote control; recognizing the speech signal based on the received runtime resource for speech recognition; transmitting a result of recognition of the speech signal to the smart remote control; receiving, from the smart remote control, at least one of EPG (Electronic Program Guide) search information or control information of the speech recognition broadcasting apparatus based on the result of recognition; and outputting a search screen or controlling the speech recognition broadcasting apparatus based on the EPG search information or the control information.
Type: Application
Filed: February 3, 2015
Publication date: November 19, 2015
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeon Gue PARK, Eui Sok CHUNG
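A control-flow stub that mirrors the sequence of steps in the abstract: fetch a runtime resource from the recognition server, recognize the remote's speech with it, return the result, then act on the EPG search or control information that comes back. Every function body is a placeholder and every name is invented; no real broadcasting or recognition API is being used.

```python
def receive_runtime_resource(server="speech-recognition-server"):
    """Placeholder for downloading the speech-recognition runtime resource."""
    return {"acoustic_model": "am.bin", "language_model": "lm.bin"}

def recognize(speech_signal, resource):
    """Placeholder recognizer using the downloaded resource."""
    return "search for documentaries"

def handle_remote_reply(reply, tv_state):
    """Apply EPG search info or a control command received back from the remote."""
    if "epg_query" in reply:
        tv_state["screen"] = f"EPG results for '{reply['epg_query']}'"
    if "control" in reply:
        tv_state[reply["control"]["key"]] = reply["control"]["value"]
    return tv_state

resource = receive_runtime_resource()
result = recognize(speech_signal=b"\x00\x01", resource=resource)   # signal from the smart remote
# The remote turns the recognition result into EPG search info and/or a control command.
reply = {"epg_query": "documentaries", "control": {"key": "volume", "value": 12}}
print(handle_remote_reply(reply, tv_state={"screen": "home", "volume": 10}))
```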
-
Publication number: 20140378185
Abstract: A smart watch in accordance with an embodiment of the present invention comprises: a first smart member configured to receive a voice signal sent from a mobile terminal, transform the input voice of a user to a voice signal, and send the voice signal to the mobile terminal while in talk mode; and a second smart member configured to input a control command about the talk mode into the first smart member, and transform the voice signal to voice and output the voice.
Type: Application
Filed: March 27, 2014
Publication date: December 25, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui-Sok CHUNG, Yun-Keun LEE, Jeon-Gue PARK, Ho-Young JUNG, Hoon CHUNG
-
Publication number: 20140163986
Abstract: Disclosed herein are a voice-based CAPTCHA method and apparatus which can perform a CAPTCHA procedure using the voice of a human being. In the voice-based CAPTCHA method, a plurality of uttered sounds of a user are collected. A start point and an end point of a voice are detected from each of the collected uttered sounds, and then speech sections are detected. Uttered sounds of the respective detected speech sections are compared with reference uttered sounds, and then it is determined whether the uttered sounds are correctly uttered sounds. If the uttered sounds are determined to be correctly uttered sounds, it is then determined whether they have been made by an identical speaker.
Type: Application
Filed: December 3, 2013
Publication date: June 12, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Sung-Joo LEE, Ho-Young JUNG, Hwa-Jeon SONG, Eui-Sok CHUNG, Byung-Ok KANG, Hoon CHUNG, Jeon-Gue PARK, Hyung-Bae JEON, Yoo-Rhee OH, Yun-Keun LEE
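A toy end-to-end sketch of the checks the abstract lists: energy-based endpointing of each recorded utterance, a trivial stand-in for comparison against the reference prompts, and a naive same-speaker check based on a crude per-utterance pitch feature. Real systems would use proper speech recognition and speaker verification; the synthetic signals, thresholds, and features below are invented for illustration, with NumPy assumed available.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 8000

def synth_utterance(pitch, length=8000):
    """Silence + a sine 'voiced' segment + silence, standing in for a recorded utterance."""
    t = np.arange(length) / SR
    voiced = 0.5 * np.sin(2 * np.pi * pitch * t[2000:6000])
    return np.concatenate([0.01 * rng.normal(size=2000), voiced, 0.01 * rng.normal(size=2000)])

def detect_speech_section(signal, frame=200, threshold=0.05):
    """Energy-based endpointing: (start, end) sample indices of the high-energy region."""
    energy = np.array([np.mean(signal[i:i + frame] ** 2) for i in range(0, len(signal), frame)])
    active = np.where(energy > threshold)[0]
    return (int(active[0]) * frame, (int(active[-1]) + 1) * frame) if len(active) else None

def pitch_estimate(segment):
    """Crude pitch estimate from zero crossings, used here as a toy speaker feature."""
    crossings = np.sum(np.diff(np.sign(segment)) != 0)
    return crossings * SR / (2 * len(segment))

# Three utterances from the "same speaker" (similar pitch).
utterances = [synth_utterance(120), synth_utterance(124), synth_utterance(118)]
sections = [detect_speech_section(u) for u in utterances]

correctly_uttered = all(sec is not None for sec in sections)   # stand-in for prompt matching
pitches = ([pitch_estimate(u[s:e]) for u, (s, e) in zip(utterances, sections)]
           if correctly_uttered else [])
same_speaker = correctly_uttered and (max(pitches) - min(pitches)) < 15.0

print("speech sections:", sections)
print("CAPTCHA passed:", correctly_uttered and same_speaker)
```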
-
Publication number: 20140129233
Abstract: Disclosed are an apparatus and system for a user interface. The apparatus for the user interface comprises a body unit including a groove corresponding to the structure of the oral cavity and operable to be mounted on the upper part of the oral cavity; a user input unit that receives a signal from the user's tongue in a part of the body unit; a communication unit that transmits the signal received from the user input unit; and a charging unit that supplies electrical energy generated from vibration or pressure caused by movement of the user's tongue.
Type: Application
Filed: March 29, 2013
Publication date: May 8, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG, Jeom Ja KANG