Patents by Inventor Jeon Gue Park
Jeon Gue Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230274127
Abstract: A concept based few-shot learning method is disclosed. The method includes estimating a task embedding corresponding to a task to be executed from support data that is a small amount of learning data; calculating a slot probability of a concept memory necessary for the task based on the task embedding; extracting features of query data that is test data, and of the support data; comparing local features of the extracted features with the slots of the concept memory to extract a concept, and generating synthesis features that have maximum similarity to the extracted features through the slots of the concept memory; and calculating a task execution result from the synthesis features and the extracted concept by applying the slot probability as a weight.
Type: Application
Filed: December 23, 2022
Publication date: August 31, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyun Woo KIM, Jeon Gue PARK, Hwajeon SONG, Jeongmin YANG, Byunghyun YOO, Euisok CHUNG, Ran HAN
-
Publication number: 20230186154
Abstract: An exploration method used by an exploration apparatus in multi-agent reinforcement learning to collect training samples during the training process is provided. The exploration method includes calculating the influence of a selected action of each agent on the actions of the other agents in the current state, calculating a linear sum of the value of a utility function representing the action value of each agent and the influence on the other agents' actions calculated for the selected action, and obtaining a sample to be used for training the action policy of each agent by probabilistically selecting between the action for which the linear sum is maximal and a random action.
Type: Application
Filed: August 23, 2022
Publication date: June 15, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byunghyun YOO, Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Jeongmin YANG, Sungwon YI, Euisok CHUNG, Ran HAN
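The selection rule in the abstract above — a linear sum of each agent's utility value and its influence on the other agents, chosen probabilistically against a random action — might be sketched as follows. The weighting factor `lam` and the epsilon-style mixing probability are illustrative assumptions, not details taken from the patent.

```python
import random

def select_action(utility, influence, lam=0.5, epsilon=0.1):
    """Pick one agent's action during exploration.

    utility[a]   -- utility-function value of action a for this agent
    influence[a] -- estimated influence of action a on the other agents' actions
    With probability 1 - epsilon, choose the action maximizing the linear sum
    utility + lam * influence; otherwise choose a random action.
    """
    scores = [u + lam * i for u, i in zip(utility, influence)]
    if random.random() < epsilon:
        return random.randrange(len(scores))
    return max(range(len(scores)), key=scores.__getitem__)
```

With `epsilon=0` the rule is deterministic: an action with modest utility but large influence on the other agents can win the linear sum, which is the point of this exploration scheme.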
-
Publication number: 20230087477
Abstract: The present disclosure relates to an apparatus and method for separating voice sections from each other. Various embodiments are directed to providing an apparatus and method that can maximize speaker separation performance for a short voice section by dividing a short voice section of low speaker-separation reliability and separating the multiple speakers from one another.
Type: Application
Filed: July 22, 2022
Publication date: March 23, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Woo Yong CHOI, Jeon Gue PARK
-
Publication number: 20230061505
Abstract: The present invention relates to a method of training data augmentation for end-to-end speech recognition. The method includes: combining speech augmentation data and text augmentation data; performing a dynamic augmentation process on each of the speech augmentation data and the text augmentation data that have been combined; and training the end-to-end speech recognition model using the speech augmentation data and the text augmentation data that have been subjected to the dynamic augmentation process.
Type: Application
Filed: August 11, 2022
Publication date: March 2, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Yoo Rhee OH, Ki Young PARK, Jeon Gue PARK
-
Publication number: 20230009771
Abstract: Disclosed herein is a method for data augmentation, which includes pretraining latent variables using first data corresponding to target speech and second data corresponding to general speech, training data augmentation parameters by receiving the first data and the second data as input, and augmenting target data using the first data and the second data through the pretrained latent variables and the trained parameters.
Type: Application
Filed: July 1, 2022
Publication date: January 12, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung-Ok KANG, Jeon-Gue PARK, Hyung-Bae JEON
-
Patent number: 11526732
Abstract: Provided are an apparatus and method for a statistical memory network. The apparatus includes a stochastic memory, an uncertainty estimator configured to estimate uncertainty information of external input signals from the input signals and provide the uncertainty information of the input signals, a writing controller configured to generate parameters for writing in the stochastic memory using the external input signals and the uncertainty information and generate additional statistics by converting statistics of the external input signals, a writing probability calculator configured to calculate a probability of a writing position of the stochastic memory using the parameters for writing, and a statistic updater configured to update stochastic values composed of an average and a variance of signals in the stochastic memory using the probability of a writing position, the parameters for writing, and the additional statistics.
Type: Grant
Filed: January 29, 2019
Date of Patent: December 13, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
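One plausible reading of the statistic updater described above — blending each memory slot's mean and variance with incoming statistics, weighted by the writing probability — is a moment-matched mixture update. The formula below is an assumption for illustration only, not the patented update rule.

```python
def update_slot(mean, var, p_write, in_mean, in_var):
    """Blend a slot's (mean, variance) with incoming statistics.

    p_write is the calculated probability of writing at this position.
    The extra variance term accounts for the spread between the old and
    incoming means, so the result matches the moments of the weighted
    mixture of the two distributions.
    """
    new_mean = (1 - p_write) * mean + p_write * in_mean
    new_var = ((1 - p_write) * var + p_write * in_var
               + p_write * (1 - p_write) * (in_mean - mean) ** 2)
    return new_mean, new_var
```

At `p_write = 0` the slot is untouched; at `p_write = 1` it is overwritten by the incoming statistics; in between, both the averages and the variances are softly merged.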
-
Patent number: 11423238
Abstract: Provided are sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning methodology with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and methodology for simultaneously learning subword embedding learning and skip-thought sentence embedding learning, that is, multitask learning methodology, are provided as methodology for applying intra-sentence contextual information to subword embedding in the case of subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, skip-thought sentence embedding learning methodology is integrated with a subword embedding technique such that intra-sentence contextual information can be used in the case of subword embedding learning.
Type: Grant
Filed: November 1, 2019
Date of Patent: August 23, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Hyun Woo Kim, Hwa Jeon Song, Ho Young Jung, Byung Ok Kang, Jeon Gue Park, Yoo Rhee Oh, Yun Keun Lee
-
Publication number: 20220180071
Abstract: Provided are a system and method for adaptive masking and non-directional language understanding and generation. The system for adaptive masking and non-directional language understanding and generation according to the present invention includes an encoder unit including an adaptive masking block for performing masking on training data, a language generator for restoring masked words, and an encoder for detecting whether or not the restored sentence construction words are original, and a decoder unit including a generation word position detector for detecting a position of a word to be generated next, a language generator for determining a word suitable for the corresponding position, and a non-directional training data generator for decoder training.
Type: Application
Filed: December 2, 2021
Publication date: June 9, 2022
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Ran HAN
-
Publication number: 20210398004
Abstract: Provided are a method and apparatus for online Bayesian few-shot learning. The present invention provides a method and apparatus for online Bayesian few-shot learning in which multi-domain-based online learning and few-shot learning are integrated when domains of tasks having data are sequentially given.
Type: Application
Filed: June 21, 2021
Publication date: December 23, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun Woo KIM, Gyeong Moon PARK, Jeon Gue PARK, Hwa Jeon SONG, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
-
Publication number: 20210374545
Abstract: A knowledge increasing method includes calculating uncertainty of knowledge obtained from a neural network using an explicit memory, determining the insufficiency of the knowledge on the basis of the calculated uncertainty, obtaining additional data (learning data) for increasing insufficient knowledge, and training the neural network by using the additional data to autonomously increase knowledge.
Type: Application
Filed: May 27, 2021
Publication date: December 2, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun Woo KIM, Jeon Gue PARK, Hwa Jeon SONG, Yoo Rhee OH, Byung Hyun YOO, Eui Sok CHUNG, Ran HAN
-
Patent number: 10929612
Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
Type: Grant
Filed: December 12, 2018
Date of Patent: February 23, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ho Young Jung, Hyun Woo Kim, Hwa Jeon Song, Eui Sok Chung, Jeon Gue Park
-
Patent number: 10789332
Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result from inputs given to a black box model has been derived. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.
Type: Grant
Filed: September 5, 2018
Date of Patent: September 29, 2020
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hoon Chung, Jeon Gue Park, Sung Joo Lee, Yun Keun Lee
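The two steps named in the abstract above — turning a neuron into a polynomial, then reading each term as a product of a monomial in the weights and a monomial in the inputs — can be illustrated with a truncated Taylor series for a tanh neuron. The third-order cutoff and the tanh activation are illustrative choices, not details from the patent.

```python
import math

def tanh_poly(z):
    # Step 1: replace the non-linear activation with its Taylor
    # polynomial around 0: tanh(z) ~= z - z**3 / 3.
    return z - z ** 3 / 3.0

def neuron_poly(weights, inputs, bias=0.0):
    # A neuron y = tanh(w . x + b) becomes a polynomial whose terms each
    # factor into a monomial in the weights times a monomial in the inputs,
    # which is what makes step 2 (the classification into a polynomial of
    # input signals and a polynomial of weights) possible.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return tanh_poly(z)
```

For small pre-activations the polynomial tracks the exact neuron closely, so the approximated network behaves like the original while being interpretable term by term.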
-
Publication number: 20200219166
Abstract: A method and apparatus for estimating a user's requirement through a neural network capable of reading and writing a working memory, and for providing fashion coordination knowledge appropriate for the requirement through the neural network using a long-term memory, by using the neural network with an explicit memory, in order to accurately provide the fashion coordination knowledge. The apparatus includes a language embedding unit for embedding a user's question and a previously created answer to acquire a digitized embedding vector; a fashion coordination knowledge creation unit for creating fashion coordination through the neural network having the explicit memory by using the embedding vector as an input; and a dialog creation unit for creating dialog content for configuring the fashion coordination through the neural network having the explicit memory by using the fashion coordination knowledge and the embedding vector as inputs.
Type: Application
Filed: December 12, 2019
Publication date: July 9, 2020
Inventors: Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
-
Publication number: 20200184310
Abstract: Provided are an apparatus and method for reducing the number of deep neural network (DNN) model parameters. The apparatus includes a memory in which a program for DNN model parameter reduction is stored, and a processor configured to execute the program, wherein the processor represents the hidden layers of the DNN model using a full-rank decomposed matrix, applies a sparsity constraint during training to drive diagonal matrix values to zero, and determines the rank of each hidden layer of the DNN model according to the degree of the sparsity constraint.
Type: Application
Filed: December 11, 2019
Publication date: June 11, 2020
Inventors: Hoon CHUNG, Jeon Gue PARK, Yun Keun LEE
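The rank-determination step described above can be pictured as thresholding the diagonal values that survive the sparsity-constrained training; the relative threshold used below is an assumed illustration, not a value from the patent.

```python
def reduced_rank(diagonal_values, tol=1e-3):
    """Count the diagonal (singular) values that survive pruning.

    Training with a sparsity constraint drives small diagonal entries of
    the full-rank decomposition toward zero; the hidden layer's reduced
    rank is then the number of entries kept above a relative threshold.
    """
    s_max = max(diagonal_values)
    kept = [s for s in diagonal_values if s > tol * s_max]
    return len(kept), kept
```

A stronger sparsity constraint zeroes more diagonal entries, so the same thresholding yields a lower rank and hence fewer parameters for that layer.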
-
Publication number: 20200175119
Abstract: Provided are sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning methodology with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding and methodology for simultaneously learning subword embedding learning and skip-thought sentence embedding learning, that is, multitask learning methodology, are provided as methodology for applying intra-sentence contextual information to subword embedding in the case of subword embedding learning. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, skip-thought sentence embedding learning methodology is integrated with a subword embedding technique such that intra-sentence contextual information can be used in the case of subword embedding learning.
Type: Application
Filed: November 1, 2019
Publication date: June 4, 2020
Inventors: Eui Sok CHUNG, Hyun Woo KIM, Hwa Jeon SONG, Ho Young JUNG, Byung Ok KANG, Jeon Gue PARK, Yoo Rhee OH, Yun Keun LEE
-
Publication number: 20190325025
Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
Type: Application
Filed: December 12, 2018
Publication date: October 24, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ho Young JUNG, Hyun Woo KIM, Hwa Jeon SONG, Eui Sok CHUNG, Jeon Gue PARK
-
Publication number: 20190318228
Abstract: Provided are an apparatus and method for a statistical memory network. The apparatus includes a stochastic memory, an uncertainty estimator configured to estimate uncertainty information of external input signals from the input signals and provide the uncertainty information of the input signals, a writing controller configured to generate parameters for writing in the stochastic memory using the external input signals and the uncertainty information and generate additional statistics by converting statistics of the external input signals, a writing probability calculator configured to calculate a probability of a writing position of the stochastic memory using the parameters for writing, and a statistic updater configured to update stochastic values composed of an average and a variance of signals in the stochastic memory using the probability of a writing position, the parameters for writing, and the additional statistics.
Type: Application
Filed: January 29, 2019
Publication date: October 17, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun Woo KIM, Ho Young JUNG, Jeon Gue PARK, Yun Keun LEE
-
Publication number: 20190272309
Abstract: Provided are an apparatus and method for linearly approximating a deep neural network (DNN) model which is a non-linear function. In general, a DNN model shows good performance in generation or classification tasks. However, the DNN fundamentally has non-linear characteristics, and therefore it is difficult to interpret how a result from inputs given to a black box model has been derived. To solve this problem, linear approximation of a DNN is proposed. The method for linearly approximating a DNN model includes 1) converting a neuron constituting a DNN into a polynomial, and 2) classifying the obtained polynomial as a polynomial of input signals and a polynomial of weights.
Type: Application
Filed: September 5, 2018
Publication date: September 5, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hoon Chung, Jeon Gue Park, Sung Joo Lee, Yun Keun Lee
-
Patent number: 10402494
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Grant
Filed: February 22, 2017
Date of Patent: September 3, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
-
Patent number: 10388275
Abstract: The present invention relates to a method and apparatus for improving spontaneous speech recognition performance. The present invention is directed to providing a method and apparatus for improving spontaneous speech recognition performance by extracting a phase feature as well as a magnitude feature of a voice signal transformed to the frequency domain, detecting a syllabic nucleus on the basis of a deep neural network using a multi-frame output, determining a speaking rate by dividing the number of syllabic nuclei by a voice section interval detected by a voice detector, calculating a length variation or an overlap factor according to the speaking rate, and performing cepstrum length normalization or time scale modification with a voice length appropriate for an acoustic model.
Type: Grant
Filed: September 7, 2017
Date of Patent: August 20, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
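The speaking-rate computation the abstract describes — the number of syllabic nuclei divided by the detected voice-section interval, followed by a scale factor for time-scale modification — reduces to simple arithmetic. The reference rate used below is an illustrative assumption, not a value from the patent.

```python
def speaking_rate(n_syllabic_nuclei, voiced_seconds):
    # Syllables per second over the voice sections found by the detector.
    return n_syllabic_nuclei / voiced_seconds

def time_scale_factor(rate, reference_rate=4.0):
    # How much to stretch (>1) or compress (<1) the signal so that its
    # rate matches what the acoustic model expects; reference_rate is an
    # assumed typical value, not taken from the patent.
    return rate / reference_rate
```

A fast utterance (e.g. 12 nuclei over 2 seconds of voiced speech) yields a rate of 6 syllables/s and a stretch factor of 1.5 relative to the assumed 4 syllables/s reference.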