Patents by Inventor Hyung Bae Jeon
Hyung Bae Jeon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240093401
Abstract: A method of manufacturing a multilayer metal plate by electroplating includes a first forming operation of forming one of a first metal layer and a second metal layer on a substrate by electroplating, wherein the second metal layer is less recrystallized than the first metal layer, the second metal layer is comprised of nanometer-size grains, and the second metal layer has a higher level of tensile strength than the first metal layer; and a second forming operation of forming, by electroplating, a third metal layer not formed in the first forming operation on a surface of one of the first metal layer and the second metal layer formed in the first forming operation.
Type: Application
Filed: September 18, 2023
Publication date: March 21, 2024
Applicant: DONG-A UNIVERSITY RESEARCH FOUNDATION FOR INDUSTRY-ACADEMY COOPERATION
Inventors: Hyun PARK, Sung Jin KIM, Han Kyun SHIN, Hyo Jong LEE, Jong Bae JEON, Jung Han KIM, An Na LEE, Tae Hyun KIM, Hyung Won CHO
-
Publication number: 20230134942
Abstract: Disclosed herein are an apparatus and method for self-supervised training of an end-to-end speech recognition model. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program trains an end-to-end speech recognition model, including an encoder and a decoder, using untranscribed speech data. The program may add predetermined noise to the input signal of the end-to-end speech recognition model, and may calculate loss by reflecting a predetermined constraint based on the output of the encoder of the end-to-end speech recognition model.
Type: Application
Filed: October 7, 2022
Publication date: May 4, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hoon CHUNG, Byung-Ok KANG, Jeom-Ja KANG, Yun-Kyung LEE, Hyung-Bae JEON
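The noise-plus-constraint idea in this abstract can be illustrated with a toy consistency objective. This is a minimal numpy sketch, with a fixed random linear map standing in for the real encoder; all names, shapes, and the squared-distance constraint are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the encoder of an end-to-end model: a fixed linear map.
W = rng.standard_normal((16, 8))

def encoder(x):
    return np.tanh(x @ W)

def consistency_loss(speech, noise_scale=0.1):
    """Encode a clean and a noise-perturbed copy of the same (untranscribed)
    input, and penalize the distance between the two encoder outputs."""
    noisy = speech + noise_scale * rng.standard_normal(speech.shape)
    clean_enc = encoder(speech)
    noisy_enc = encoder(noisy)
    return float(np.mean((clean_enc - noisy_enc) ** 2))

speech = rng.standard_normal((4, 16))   # 4 frames of 16-dim features
loss = consistency_loss(speech)
```

Minimizing such a loss over untranscribed speech would push the encoder toward noise-invariant representations, which is the self-supervised angle the abstract describes.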
-
Publication number: 20230009771
Abstract: Disclosed herein is a method for data augmentation, which includes pretraining latent variables using first data corresponding to target speech and second data corresponding to general speech, training data augmentation parameters by receiving the first data and the second data as input, and augmenting target data using the first data and the second data through the pretrained latent variables and the trained parameters.
Type: Application
Filed: July 1, 2022
Publication date: January 12, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung-Ok KANG, Jeon-Gue PARK, Hyung-Bae JEON
-
Patent number: 10402494
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Grant
Filed: February 22, 2017
Date of Patent: September 3, 2019
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui Sok Chung, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
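The cross-document pair-extraction step can be sketched in plain Python. `cross_document_pairs` is a hypothetical helper name; in real use each pair would be fed to a trained sequence-to-sequence encoder rather than returned as-is:

```python
from itertools import combinations

def cross_document_pairs(documents):
    """Collect sentence pairs whose two members come from *different*
    documents; each pair would serve as encoder input to a
    sequence-to-sequence model that generates a new sentence."""
    pairs = []
    for (_, doc_a), (_, doc_b) in combinations(enumerate(documents), 2):
        for sent_a in doc_a:
            for sent_b in doc_b:
                pairs.append((sent_a, sent_b))
    return pairs

docs = [["A1.", "A2."], ["B1."]]   # two toy documents, sentences as strings
pairs = cross_document_pairs(docs)
```

Note that same-document pairs are deliberately excluded, matching the abstract's "present in different documents" condition.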
-
Publication number: 20180157640
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
Type: Application
Filed: February 22, 2017
Publication date: June 7, 2018
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Byung Ok Kang, Ki Young Park, Jeon Gue Park, Hwa Jeon Song, Sung Joo Lee, Yun Keun Lee, Hyung Bae Jeon
-
Publication number: 20180047389
Abstract: Provided are an apparatus and method for recognizing speech using an attention-based content-dependent (CD) acoustic model. The apparatus includes a predictive deep neural network (DNN) configured to receive input data from an input layer and output predictive values to a buffer of a first output layer, and a context DNN configured to receive a context window from the first output layer and output a final result value.
Type: Application
Filed: January 12, 2017
Publication date: February 15, 2018
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hwa Jeon SONG, Byung Ok KANG, Jeon Gue PARK, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG
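The buffer-then-context-window flow can be sketched as a sliding window over the predictive DNN's buffered outputs, producing the stacked vectors the context DNN would consume. The window radius and the edge-padding-by-repetition policy are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def context_windows(buffered, left=2, right=2):
    """Stack each frame's predictive output with its neighbors into one
    context-window vector; edge frames are padded by repeating the first
    and last frames."""
    T, _ = buffered.shape
    padded = np.concatenate([np.repeat(buffered[:1], left, axis=0),
                             buffered,
                             np.repeat(buffered[-1:], right, axis=0)])
    return np.stack([padded[t:t + left + 1 + right].ravel() for t in range(T)])

frames = np.arange(12, dtype=float).reshape(6, 2)  # 6 frames, 2-dim outputs
windows = context_windows(frames)                  # shape (6, 10)
```

Each row of `windows` would then be the input to the second (context) network that emits the final result value.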
-
Patent number: 9390426
Abstract: Disclosed are a personalized advertisement device based on speech recognition SMS services and a personalized advertisement exposure method based on speech recognition SMS services. The present invention provides a personalized advertisement device and a personalized advertisement exposure method, both based on speech recognition SMS services, capable of maximizing the effect of advertisement by grasping a user's intention, emotional state, and positional information from speech data uttered by the user during the process of providing speech recognition SMS services, configuring advertisements during the interval from when the speech data begins conversion until it has been completely converted into character strings by speech recognition, and exposing the configured advertisements to the user.
Type: Grant
Filed: September 5, 2012
Date of Patent: July 12, 2016
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hoon Chung, Jeon Gue Park, Hyung Bae Jeon, Ki Young Park, Yun Keun Lee, Sang Kyu Park
-
Publication number: 20150221303
Abstract: Provided are a discussion learning system that enables discussion learning to proceed based on a speech recognition system without an instructor, and a method using the same, the discussion learning system including a learning content providing server configured to provide a discussion environment, extract speeches of learners joining a discussion, and generate speech information based on the extracted speeches, and a speech recognition server configured to perform speech recognition with respect to each of the learners based on the speech information, determine the progress of the discussion based on a result of the speech recognition, and provide the learning content providing server with interpretation information for smoothly continuing the discussion.
Type: Application
Filed: January 13, 2015
Publication date: August 6, 2015
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeom Ja KANG, Hyung Bae JEON, Yun Keun LEE, Ho Young JUNG
-
Publication number: 20140163986
Abstract: Disclosed herein are a voice-based CAPTCHA method and apparatus which can perform a CAPTCHA procedure using the voice of a human being. In the voice-based CAPTCHA method, a plurality of uttered sounds of a user are collected. A start point and an end point of the voice in each of the collected uttered sounds are detected, and speech sections are thereby extracted. The uttered sounds of the respective detected speech sections are compared with reference uttered sounds, and it is then determined whether the uttered sounds are correctly uttered sounds. If they are, it is determined whether the uttered sounds have been made by an identical speaker.
Type: Application
Filed: December 3, 2013
Publication date: June 12, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Sung-Joo LEE, Ho-Young JUNG, Hwa-Jeon SONG, Eui-Sok CHUNG, Byung-Ok KANG, Hoon CHUNG, Jeon-Gue PARK, Hyung-Bae JEON, Yoo-Rhee OH, Yun-Keun LEE
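The start-point/end-point detection step can be illustrated with a simple frame-energy detector, a common baseline for speech-section extraction. The patent does not commit to this particular method; the frame length, threshold, and "first active run" policy below are arbitrary choices for the sketch:

```python
def detect_speech_section(samples, frame_len=4, threshold=0.5):
    """Energy-per-frame endpoint detection: return (start, end) frame
    indices spanning the frames whose mean energy exceeds the threshold,
    or None if no frame is active."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [sum(s * s for s in f) / frame_len for f in frames]
    active = [i for i, e in enumerate(energies) if e > threshold]
    if not active:
        return None
    return active[0], active[-1]

# Silence, then a louder burst, then silence again.
signal = [0.0] * 8 + [1.0, -1.0, 1.0, -1.0] + [0.0] * 8
section = detect_speech_section(signal)
```

The detected section would then be cut out and compared against the reference utterances in the CAPTCHA check.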
-
Publication number: 20140129233
Abstract: Disclosed are an apparatus and a system for a user interface. The apparatus for the user interface comprises a body unit including a groove corresponding to the structure of the oral cavity and operable to be mounted on the upper part of the oral cavity; a user input unit receiving a signal from the user's tongue in a part of the body unit; a communication unit transmitting the signal received from the user input unit; and a charging unit supplying electrical energy generated from vibration or pressure caused by movement of the user's tongue.
Type: Application
Filed: March 29, 2013
Publication date: May 8, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Eui Sok CHUNG, Yun Keun LEE, Hyung Bae JEON, Ho Young JUNG, Jeom Ja KANG
-
Patent number: 8666739
Abstract: A method of the present invention may include: receiving a speech feature vector converted from a speech signal; performing a first search by applying a first language model to the received speech feature vector, and outputting a word lattice and a first acoustic score of the word lattice as a continuous speech recognition result; outputting a second acoustic score as a phoneme recognition result by applying an acoustic model to the speech feature vector; comparing the first acoustic score of the continuous speech recognition result with the second acoustic score of the phoneme recognition result; outputting a first language model weight when the first acoustic score of the continuous speech recognition result is better than the second acoustic score of the phoneme recognition result; and performing a second search by applying a second language model weight, which is the same as the output first language model weight, to the word lattice.
Type: Grant
Filed: December 13, 2011
Date of Patent: March 4, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hyung Bae Jeon, Yun Keun Lee, Eui Sok Chung, Jong Jin Kim, Hoon Chung, Jeon Gue Park, Ho Young Jung, Byung Ok Kang, Ki Young Park, Sung Joo Lee, Jeom Ja Kang, Hwa Jeon Song
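The score comparison and weight selection described above can be sketched as follows. The concrete weight values, the fallback weight when the phoneme score wins, and the additive "acoustic + weight × LM" path score are illustrative assumptions, not the patent's actual formulas:

```python
def choose_lm_weight(word_score, phoneme_score,
                     base_weight=1.0, reduced_weight=0.5):
    """If the continuous (word-level) acoustic score beats the phoneme
    recognition score, keep the first-pass language-model weight for the
    second search; otherwise back off to a smaller weight."""
    return base_weight if word_score > phoneme_score else reduced_weight

def rescore(word_lattice, lm_weight):
    """Second-pass search sketch: combine each lattice path's acoustic and
    LM scores with the chosen weight and return the best hypothesis."""
    return max(word_lattice,
               key=lambda p: p["acoustic"] + lm_weight * p["lm"])

lattice = [{"text": "hello world", "acoustic": -10.0, "lm": -2.0},
           {"text": "hollow word", "acoustic": -9.5, "lm": -4.0}]
w = choose_lm_weight(word_score=-10.0, phoneme_score=-12.0)
best = rescore(lattice, w)
```

With the word-level score winning, the first-pass weight is kept and the LM term decides between the two acoustically similar paths.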
-
Publication number: 20130297314
Abstract: Disclosed are a distributed environment rescoring method and apparatus. A distributed environment rescoring method in accordance with the present invention includes generating a word lattice by performing voice recognition on received voice, converting the word lattice into a word confusion network formed from the temporal connection of confusion sets clustered based on temporal redundancy and phoneme similarities, generating a list of subword confusion networks based on the entropy values of the respective confusion sets included in the word confusion network, and generating a modified word confusion network by modifying a list of the subword confusion networks through distributed environment rescoring.
Type: Application
Filed: October 19, 2012
Publication date: November 7, 2013
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Eui-Sok CHUNG, Hyung-Bae Jeon, Hwa-Jeon Song, Yun-Keun Lee
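The entropy-based selection of confusion sets can be sketched directly. Ranking high-entropy (most uncertain) sets for a distributed rescoring pass follows the abstract; the toy posteriors, `k`, and the helper names are illustrative:

```python
import math

def confusion_set_entropy(posteriors):
    """Shannon entropy (bits) of one confusion set's word posteriors."""
    return -sum(p * math.log2(p) for p in posteriors if p > 0)

def pick_uncertain_sets(confusion_network, k=1):
    """Return indices of the k highest-entropy confusion sets: the spots a
    distributed rescoring pass would most usefully revisit."""
    ranked = sorted(range(len(confusion_network)),
                    key=lambda i: confusion_set_entropy(confusion_network[i]),
                    reverse=True)
    return ranked[:k]

# Three confusion sets: certain, maximally uncertain, mildly uncertain.
network = [[1.0], [0.5, 0.5], [0.9, 0.1]]
targets = pick_uncertain_sets(network, k=1)
```

The 50/50 set carries one full bit of uncertainty and is selected first, while the certain set contributes nothing to rescore.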
-
Patent number: 8504362
Abstract: A speech recognition system includes: a speed level classifier for measuring a moving speed of a moving object by using a noise signal at an initial time of speech recognition to determine a speed level of the moving object; a first speech enhancement unit for enhancing sound quality of an input speech signal of the speech recognition by using a Wiener filter, if the speed level of the moving object is equal to or lower than a specific level; and a second speech enhancement unit enhancing the sound quality of the input speech signal by using a Gaussian mixture model, if the speed level of the moving object is higher than the specific level. The system further includes an end point detection unit for detecting start and end points, an elimination unit for eliminating sudden noise components based on a sudden noise Gaussian mixture model.
Type: Grant
Filed: July 21, 2009
Date of Patent: August 6, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sung Joo Lee, Ho-Young Jung, Jeon Gue Park, Hoon Chung, Yunkeun Lee, Byung Ok Kang, Hyung-Bae Jeon, Jong Jin Kim, Ki-young Park, Euisok Chung, Ji Hyun Wang, Jeom Ja Kang
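The speed-gated routing between the two enhancement paths can be sketched as below. The per-bin Wiener gain G = S / (S + N) is the textbook form; the energy-based speed proxy, the threshold, and leaving the GMM path unimplemented are assumptions of this sketch:

```python
import numpy as np

def speed_level(noise_frame, threshold=0.01):
    """Classify moving-object speed from the energy of an initial noise
    frame (a crude stand-in for the patent's speed level classifier)."""
    return "high" if float(np.mean(noise_frame ** 2)) > threshold else "low"

def wiener_gain(speech_psd, noise_psd):
    """Classic Wiener gain G = S / (S + N), applied per frequency bin."""
    return speech_psd / (speech_psd + noise_psd)

def enhance(spectrum, speech_psd, noise_psd, level):
    if level == "low":
        return spectrum * wiener_gain(speech_psd, noise_psd)
    # A high speed level would switch to the GMM-based enhancer instead.
    raise NotImplementedError("GMM-based path not sketched here")

spec = np.array([1.0, 2.0])
out = enhance(spec, speech_psd=np.array([3.0, 1.0]),
              noise_psd=np.array([1.0, 1.0]), level="low")
```

The routing matters because Wiener filtering suits slowly varying noise, while faster movement produces noise statistics better handled by a trained model.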
-
Publication number: 20130117020
Abstract: Disclosed are a personalized advertisement device based on speech recognition SMS services and a personalized advertisement exposure method based on speech recognition SMS services. The present invention provides a personalized advertisement device and a personalized advertisement exposure method, both based on speech recognition SMS services, capable of maximizing the effect of advertisement by grasping a user's intention, emotional state, and positional information from speech data uttered by the user during the process of providing speech recognition SMS services, configuring advertisements based thereon, and exposing the configured advertisements to the user.
Type: Application
Filed: September 5, 2012
Publication date: May 9, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hoon CHUNG, Jeon Gue Park, Hyung Bae Jeon, Ki Young Park, Yun Keun Lee, Sang Kyu Park
-
Patent number: 8374869
Abstract: An utterance verification method for an isolated word N-best speech recognition result includes: calculating log likelihoods of a context-dependent phoneme and an anti-phoneme model based on an N-best speech recognition result for an input utterance; measuring a confidence score of an N-best speech-recognized word using the log likelihoods; calculating distance between phonemes for the N-best speech-recognized word; comparing the confidence score with a threshold and the distance with a predetermined mean of distances; and accepting the N-best speech-recognized word when the compared results for the confidence score and the distance correspond to acceptance.
Type: Grant
Filed: August 4, 2009
Date of Patent: February 12, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jeom Ja Kang, Yunkeun Lee, Jeon Gue Park, Ho-Young Jung, Hyung-Bae Jeon, Hoon Chung, Sung Joo Lee, Euisok Chung, Ji Hyun Wang, Byung Ok Kang, Ki-young Park, Jong Jin Kim
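The accept/reject decision can be sketched as a log-likelihood-ratio test between the phoneme model and its anti-phoneme model. The averaging over phonemes, the direction of the distance comparison, and all threshold values are assumptions of this sketch; the abstract only states that both quantities are compared against references:

```python
def confidence_score(ll_phoneme, ll_anti):
    """Log-likelihood ratio of the context-dependent phoneme model against
    the anti-phoneme model, averaged over the word's phonemes."""
    return sum(p - a for p, a in zip(ll_phoneme, ll_anti)) / len(ll_phoneme)

def verify(ll_phoneme, ll_anti, distance, threshold=0.5, mean_distance=1.0):
    """Accept an N-best word only if both the confidence score and the
    inter-phoneme distance clear their respective references."""
    score = confidence_score(ll_phoneme, ll_anti)
    return score >= threshold and distance >= mean_distance

accepted = verify(ll_phoneme=[-2.0, -1.5], ll_anti=[-3.0, -2.5], distance=1.2)
```

A word whose phoneme models barely outscore the anti-models, or whose phonemes sit unusually close together, would be rejected as an unreliable recognition result.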
-
Patent number: 8364483
Abstract: A method for separating a sound source from a mixed signal includes transforming the mixed signal to channel signals in the frequency domain, and grouping several frequency bands for each channel signal to form frequency clusters. Further, the method includes separating the frequency clusters by applying blind source separation to the frequency-domain signals of each frequency cluster, and integrating the spectra of the separated signals to restore the sound source in the time domain, wherein each of the separated signals expresses one sound source.
Type: Grant
Filed: June 19, 2009
Date of Patent: January 29, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ki-young Park, Ho-Young Jung, Yun Keun Lee, Jeon Gue Park, Jeom Ja Kang, Hoon Chung, Sung Joo Lee, Byung Ok Kang, Ji Hyun Wang, Eui Sok Chung, Hyung-Bae Jeon, Jong Jin Kim
-
Publication number: 20130013297
Abstract: A message service method using speech recognition includes: a message server recognizing speech transmitted from a transmission terminal, and generating and transmitting a recognition result of the speech and N-best results based on a confusion network to the transmission terminal; once a message is selected through the recognition result and the N-best results and an evaluation result according to the accuracy of the message is decided, the transmission terminal transmitting the message and the evaluation result to a reception terminal; and the reception terminal displaying the message and the evaluation result.
Type: Application
Filed: July 5, 2012
Publication date: January 10, 2013
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hwa Jeon SONG, YunKeun Lee, Jeon Gue Park, Jong Jin Kim, Ki-Young Park, Hoon Chung, Hyung-Bae Jeon, Ho Young Jung, Euisok Chung, Jeom Ja Kang, Byung Ok Kang, Sang Kyu Park, Sung Joo Lee, Yoo Rhee Oh
-
Patent number: 8332222
Abstract: A Viterbi decoder includes: an observation vector sequence generator for generating an observation vector sequence by converting an input speech to a sequence of observation vectors; a local optimal state calculator for obtaining a partial state sequence having a maximum similarity up to a current observation vector as an optimal state; an observation probability calculator for obtaining, as a current observation probability, a probability for observing the current observation vector in the optimal state; a buffer for storing therein a specific number of previous observation probabilities; a non-linear filter for calculating a filtered probability by using the previous observation probabilities stored in the buffer and the current observation probability; and a maximum likelihood calculator for calculating a partial maximum likelihood by using the filtered probability.
Type: Grant
Filed: July 21, 2009
Date of Patent: December 11, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hoon Chung, Jeon Gue Park, Yunkeun Lee, Ho-Young Jung, Hyung-Bae Jeon, Jeom Ja Kang, Sung Joo Lee, Euisok Chung, Ji Hyun Wang, Byung Ok Kang, Ki-young Park, Jong Jin Kim
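The buffered non-linear filtering of observation probabilities can be illustrated with a median filter, one common non-linear choice; the abstract does not name a specific filter, so the median and the buffer size here are assumptions:

```python
from collections import deque
from statistics import median

class FilteredObservation:
    """Keep the last few observation probabilities in a bounded buffer and
    smooth the current one with a median over the buffer, damping isolated
    outlier probabilities before the likelihood update."""
    def __init__(self, size=3):
        self.buffer = deque(maxlen=size)

    def __call__(self, prob):
        self.buffer.append(prob)
        return median(self.buffer)

f = FilteredObservation(size=3)
smoothed = [f(p) for p in [0.9, 0.1, 0.8, 0.7]]
```

The single 0.1 dip is softened rather than propagated directly into the partial maximum likelihood, which is the point of filtering before the Viterbi accumulation step.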
-
Patent number: 8296135
Abstract: A noise cancellation apparatus includes a noise estimation module for receiving a noise-containing input speech, and estimating a noise therefrom to output the estimated noise; a first Wiener filter module for receiving the input speech, and applying a first Wiener filter thereto to output a first estimation of clean speech; a database for storing data of a Gaussian mixture model for modeling clean speech; and an MMSE estimation module for receiving the first estimation of clean speech and the data of the Gaussian mixture model to output a second estimation of clean speech. The apparatus further includes a final clean speech estimation module for receiving the second estimation of clean speech from the MMSE estimation module and the estimated noise from the noise estimation module, and obtaining a final Wiener filter gain therefrom to output a final estimation of clean speech by applying the final Wiener filter gain.
Type: Grant
Filed: November 13, 2008
Date of Patent: October 23, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Byung Ok Kang, Ho-Young Jung, Sung Joo Lee, Yunkeun Lee, Jeon Gue Park, Jeom Ja Kang, Hoon Chung, Euisok Chung, Ji Hyun Wang, Hyung-Bae Jeon
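The middle (MMSE-under-a-GMM-prior) stage can be sketched in one dimension: the estimate is the posterior-weighted average of the clean-speech mixture means. A two-component scalar Gaussian mixture stands in for the real clean-speech GMM, and the function name and parameters are hypothetical:

```python
import math

def gmm_mmse(first_estimate, means, weights, var=1.0):
    """Refine the first (Wiener-filtered) clean-speech estimate under a
    GMM prior: weight each component mean by its posterior responsibility
    for the estimate, then average (scalar, single-dimension sketch)."""
    likes = [w * math.exp(-(first_estimate - m) ** 2 / (2 * var))
             for w, m in zip(weights, means)]
    total = sum(likes)
    return sum(l * m for l, m in zip(likes, means)) / total

# An estimate half-way between the two prior means stays half-way;
# one near a mean is pulled onto that mean.
refined = gmm_mmse(2.0, means=[0.0, 4.0], weights=[0.5, 0.5])
```

The refined estimate then feeds, together with the estimated noise, into the final Wiener gain that produces the last clean-speech estimate.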
-
Patent number: 8249867
Abstract: A microphone-array-based speech recognition system using blind source separation (BSS) and a target speech extraction method in the system are provided. The speech recognition system performs an independent component analysis (ICA) to separate mixed signals input through a plurality of microphones into sound-source signals, extracts one target speech spoken for speech recognition from the separated sound-source signals by using a Gaussian mixture model (GMM) or a hidden Markov model (HMM), and automatically recognizes a desired speech from the extracted target speech. Accordingly, it is possible to obtain a high speech recognition rate even in a noisy environment.
Type: Grant
Filed: September 30, 2008
Date of Patent: August 21, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hoon Young Cho, Yun Keun Lee, Jeom Ja Kang, Byung Ok Kang, Kap Kee Kim, Sung Joo Lee, Ho Young Jung, Hoon Chung, Jeon Gue Park, Hyung Bae Jeon
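The model-based target selection after ICA can be sketched by scoring each separated source under a speech model and keeping the best scorer. A single Gaussian stands in for the GMM/HMM here, and the model parameters and helper names are illustrative:

```python
import math

def log_gauss(x, mean, var):
    """Log density of a scalar Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def select_target(sources, speech_mean=0.0, speech_var=1.0):
    """After ICA separation, score each candidate source under a speech
    model (here a single Gaussian standing in for a GMM/HMM) and return
    the source that the model likes best."""
    def score(src):
        return sum(log_gauss(s, speech_mean, speech_var) for s in src) / len(src)
    return max(sources, key=score)

separated = [[0.1, -0.2, 0.05],   # speech-like: close to the model
             [3.0, -3.0, 2.5]]    # noise-like: far from it
target = select_target(separated)
```

Only the selected source is passed on to the recognizer, which is how the system keeps its recognition rate up in a noisy environment.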