Patents by Inventor Hyung Bae Jeon

Hyung Bae Jeon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8219396
    Abstract: An apparatus for evaluating the performance of speech recognition includes a speech database for storing N-number of test speech signals for evaluation. A speech recognizer is located in an actual environment and executes the speech recognition of the test speech signals reproduced from the speech database using a loudspeaker in the actual environment to produce speech recognition results. A performance evaluation module evaluates the performance of the speech recognition by comparing correct recognition answers with the speech recognition results.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: July 10, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hoon-Young Cho, Yunkeun Lee, Ho-Young Jung, Byung Ok Kang, Jeom Ja Kang, Kap Kee Kim, Sung Joo Lee, Hoon Chung, Jeon Gue Park, Hyung-Bae Jeon
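    The evaluation step described above — comparing correct answers against recognizer output over the N test signals — can be sketched as follows. This is an illustrative sentence-level accuracy measure, not the metric the patent itself specifies, and the function name is hypothetical:

    ```python
    def recognition_accuracy(answers, results):
        """Fraction of test utterances whose recognized text exactly
        matches the reference answer (sentence-level accuracy)."""
        assert len(answers) == len(results), "one result per test signal"
        correct = sum(1 for a, r in zip(answers, results) if a == r)
        return correct / len(answers)
    ```

    For example, if one of two test utterances is recognized verbatim, the accuracy is 0.5. Real evaluations typically also report word-level error rates, which the abstract leaves unspecified.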
  • Publication number: 20120150539
    Abstract: The method of the present invention may include receiving a speech feature vector converted from a speech signal; performing a first search by applying a first language model to the received speech feature vector and outputting a word lattice and a first acoustic score of the word lattice as a continuous speech recognition result; outputting a second acoustic score as a phoneme recognition result by applying an acoustic model to the speech feature vector; comparing the first acoustic score of the continuous speech recognition result with the second acoustic score of the phoneme recognition result; outputting a first language model weight when the first acoustic score of the continuous speech recognition result is better than the second acoustic score of the phoneme recognition result; and performing a second search by applying a second language model weight, which is the same as the output first language model weight, to the word lattice.
    Type: Application
    Filed: December 13, 2011
    Publication date: June 14, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyung Bae Jeon, Yun Keun Lee, Eui Sok Chung, Jong Jin Kim, Hoon Chung, Jeon Gue Park, Ho Young Jung, Byung Ok Kang, Ki Young Park, Sung Joo Lee, Jeom Ja Kang, Hwa Jeon Song
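    The weight-selection logic in this abstract can be sketched minimally as below. The abstract only specifies the case where the continuous-recognition acoustic score is better than the phoneme-recognition score (reuse the first weight); the fallback branch and all numeric values here are assumptions for illustration:

    ```python
    def select_second_pass_lm_weight(first_acoustic_score, phoneme_acoustic_score,
                                     first_lm_weight=10.0, fallback_lm_weight=5.0):
        """Choose the language-model weight for the second (lattice rescoring)
        search. Scores are log-likelihoods, so higher is better. If the
        continuous-recognition score beats the phoneme-recognition score,
        reuse the first-pass weight; otherwise use an assumed fallback."""
        if first_acoustic_score >= phoneme_acoustic_score:
            return first_lm_weight
        return fallback_lm_weight
    ```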
  • Patent number: 8032374
    Abstract: Provided are an apparatus and method for recognizing continuous speech using search space restriction based on phoneme recognition. In the apparatus and method, a search space can be primarily reduced by restricting connection words to be shifted at a boundary between words based on the phoneme recognition result. In addition, the search space can be secondarily reduced by rapidly calculating a degree of similarity between the connection word to be shifted and the phoneme recognition result using a phoneme code and shifting the corresponding phonemes to only connection words having degrees of similarity equal to or higher than a predetermined reference value. Therefore, the speed and performance of the speech recognition process can be improved in various speech recognition services.
    Type: Grant
    Filed: December 4, 2007
    Date of Patent: October 4, 2011
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hyung Bae Jeon, Jun Park, Seung Hi Kim, Kyu Woong Hwang
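    The second search-space reduction above — keeping only connection words sufficiently similar to the phoneme recognition result — might look like the following sketch. Jaccard similarity over phoneme sets is a stand-in for the patent's phoneme-code comparison, and the threshold is a hypothetical value:

    ```python
    def prune_connection_words(candidates, recognized_phonemes, threshold=0.5):
        """Keep only connection words whose phoneme inventory is similar
        enough to the phoneme-recognition result.

        candidates: list of (word, phoneme_list) pairs.
        recognized_phonemes: phoneme sequence from the recognizer."""
        ref = set(recognized_phonemes)
        kept = []
        for word, phonemes in candidates:
            cand = set(phonemes)
            similarity = len(ref & cand) / len(ref | cand)  # Jaccard index
            if similarity >= threshold:
                kept.append(word)
        return kept
    ```

    Pruning this way trades a cheap set comparison for many avoided lattice expansions, which is the source of the speed gain the abstract claims.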
  • Patent number: 8015016
    Abstract: Provided are an automatic speech translation system and a method for obtaining accurate translation performance with a simple structure. Because input and output sentences are written in different languages, automatic speech translation requires techniques for processing different languages. Repetition of text processing like morpheme analysis or sentence parsing in conventional automatic speech translation can complicate the overall translation process. Meanwhile, although input and output sentences are written in different languages, they have to have the same meaning and a corresponding sentence form and words. Accordingly, the corresponding words and sentence forms of the two languages can be expressed with a simple structure and utilized in the automatic speech translation process, thereby maintaining consistency during the process and avoiding unnecessary process repetition, which reduces errors and improves performance.
    Type: Grant
    Filed: October 25, 2007
    Date of Patent: September 6, 2011
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jun Park, Seung Hi Kim, Hyung Bae Jeon, Young Jik Lee, Hoon Chung
  • Publication number: 20100161334
    Abstract: An utterance verification method for an isolated word N-best speech recognition result includes: calculating log likelihoods of a context-dependent phoneme and an anti-phoneme model based on an N-best speech recognition result for an input utterance; measuring a confidence score of an N-best speech-recognized word using the log likelihoods; calculating distance between phonemes for the N-best speech-recognized word; comparing the confidence score with a threshold and the distance with a predetermined mean of distances; and accepting the N-best speech-recognized word when the compared results for the confidence score and the distance correspond to acceptance.
    Type: Application
    Filed: August 4, 2009
    Publication date: June 24, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeom Ja Kang, Yunkeun Lee, Jeon Gue Park, Ho-Young Jung, Hyung-Bae Jeon, Hoon Chung, Sung Joo Lee, Euisok Chung, Ji Hyun Wang, Byung Ok Kang, Ki-young Park, Jong Jin Kim
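    The final accept/reject decision in this verification method can be sketched as a two-condition check. The threshold values and the direction of the distance comparison are assumptions; the abstract only says the compared results must "correspond to acceptance":

    ```python
    def verify_nbest_word(confidence, phoneme_distance,
                          confidence_threshold, mean_distance):
        """Accept an N-best hypothesis only if both checks pass:
        the confidence score clears its threshold, and the inter-phoneme
        distance clears the predetermined mean of distances."""
        return (confidence >= confidence_threshold
                and phoneme_distance >= mean_distance)
    ```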
  • Publication number: 20100161329
    Abstract: A Viterbi decoder includes: an observation vector sequence generator for generating an observation vector sequence by converting an input speech to a sequence of observation vectors; a local optimal state calculator for obtaining a partial state sequence having a maximum similarity up to a current observation vector as an optimal state; an observation probability calculator for obtaining, as a current observation probability, a probability for observing the current observation vector in the optimal state; a buffer for storing therein a specific number of previous observation probabilities; a non-linear filter for calculating a filtered probability by using the previous observation probabilities stored in the buffer and the current observation probability; and a maximum likelihood calculator for calculating a partial maximum likelihood by using the filtered probability.
    Type: Application
    Filed: July 21, 2009
    Publication date: June 24, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon CHUNG, Jeon Gue PARK, Yunkeun LEE, Ho-Young JUNG, Hyung-Bae JEON, Jeom Ja KANG, Sung Joo LEE, Euisok CHUNG, Ji Hyun WANG, Byung Ok KANG, Ki-young PARK, Jong Jin KIM
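    The non-linear filter stage of this decoder can be illustrated with a median over a short history of observation probabilities. The median is one concrete choice; the patent does not fix the filter type, and the buffer length here is arbitrary:

    ```python
    import statistics

    def filtered_observation_prob(prob_buffer, current_prob):
        """Smooth the current observation log-probability with a median
        over the buffered previous observation probabilities, damping
        outlier frames before the partial-likelihood update."""
        return statistics.median(list(prob_buffer) + [current_prob])
    ```

    A median is robust to a single corrupted frame: one very low observation probability (e.g. a noise burst) no longer drags the partial maximum likelihood down.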
  • Publication number: 20100161326
    Abstract: A speech recognition system includes: a speed level classifier for measuring a moving speed of a moving object by using a noise signal at an initial time of speech recognition to determine a speed level of the moving object; a first speech enhancement unit for enhancing sound quality of an input speech signal of the speech recognition by using a Wiener filter, if the speed level of the moving object is equal to or lower than a specific level; and a second speech enhancement unit for enhancing the sound quality of the input speech signal by using a Gaussian mixture model, if the speed level of the moving object is higher than the specific level. The system further includes an end point detection unit for detecting start and end points, and an elimination unit for eliminating sudden noise components based on a sudden noise Gaussian mixture model.
    Type: Application
    Filed: July 21, 2009
    Publication date: June 24, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung Joo Lee, Ho-Young Jung, Jeon Gue Park, Hoon Chung, Yunkeun Lee, Byung Ok Kang, Hyung-Bae Jeon, Jong Jin Kim, Ki-young Park, Euisok Chung, Ji Hyun Wang, Jeom Ja Kang
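    The routing between the two enhancement units can be sketched as a simple dispatcher. The two enhancer functions below are placeholder stubs standing in for the Wiener-filter and GMM-based stages, and the level threshold is a hypothetical value:

    ```python
    def wiener_enhance(signal):
        # placeholder for the Wiener-filter enhancement stage
        return ("wiener", signal)

    def gmm_enhance(signal):
        # placeholder for the GMM-based enhancement stage
        return ("gmm", signal)

    def enhance_speech(signal, speed_level, level_threshold=2):
        """Route the noisy input to the enhancer matched to the measured
        speed level of the moving object (e.g. a vehicle)."""
        if speed_level <= level_threshold:
            return wiener_enhance(signal)   # at or below the specific level
        return gmm_enhance(signal)          # above the specific level
    ```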
  • Publication number: 20100158271
    Abstract: A method for separating a sound source from a mixed signal includes transforming a mixed signal to channel signals in the frequency domain, and grouping several frequency bands for each channel signal to form frequency clusters. Further, the method for separating the sound source from the mixed signal includes separating the frequency clusters by applying a blind source separation to signals in the frequency domain for each frequency cluster, and integrating the spectra of the separated signals to restore the sound source in the time domain, wherein each of the separated signals expresses one sound source.
    Type: Application
    Filed: June 19, 2009
    Publication date: June 24, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ki-young Park, Ho-Young Jung, Yun Keun Lee, Jeon Gue Park, Jeom Ja Kang, Hoon Chung, Sung Joo Lee, Byung Ok Kang, Ji Hyun Wang, Eui Sok Chung, Hyung-Bae Jeon, Jong Jin Kim
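    The clustering step above — grouping frequency bands so blind source separation can run per cluster rather than per bin — can be sketched as below. Grouping adjacent bins into fixed-size clusters is an illustrative scheme; the patent does not specify how clusters are formed:

    ```python
    def cluster_frequency_bins(num_bins, bins_per_cluster):
        """Partition frequency-bin indices 0..num_bins-1 into contiguous
        clusters of at most bins_per_cluster bins each. Each cluster is
        then handed to a separate blind-source-separation pass."""
        return [list(range(start, min(start + bins_per_cluster, num_bins)))
                for start in range(0, num_bins, bins_per_cluster)]
    ```

    Running separation per cluster instead of per bin reduces the frequency-permutation ambiguity that plagues fully bin-wise frequency-domain BSS.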
  • Publication number: 20100154015
    Abstract: A metadata search apparatus using speech recognition includes a metadata processor for processing contents metadata to obtain allomorph of target vocabulary required for speech recognition and search; a metadata storage unit for storing the contents metadata; a speech recognizer for performing speech recognition on speech data uttered by a user by searching the allomorph of the target vocabulary; a query language processor for extracting a keyword from the vocabulary speech-recognized by the speech recognizer; and a search processor for searching the metadata storage unit to extract the contents metadata corresponding to the keyword. An IPTV receiving apparatus employs the metadata search apparatus to provide IPTV services through the functions of speech recognition.
    Type: Application
    Filed: May 7, 2009
    Publication date: June 17, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Byung Ok KANG, Eui Sok CHUNG, Ji Hyun WANG, Yun Keun LEE, Jeom Ja KANG, Jong Jin KIM, Ki-young PARK, Jeon Gue PARK, Sung Joo LEE, Hyung-Bae JEON, Ho-Young JUNG, Hoon CHUNG
  • Publication number: 20100070274
    Abstract: An apparatus for a speech recognition based on source separation and identification includes: a sound source separator for separating mixed signals, which are input to two or more microphones, into sound source signals by using independent component analysis (ICA), and estimating direction information of the separated sound source signals; and a speech recognizer for calculating normalized log likelihood probabilities of the separated sound source signals. The apparatus further includes a speech signal identifier identifying a sound source corresponding to a user's speech signal by using both of the estimated direction information and the reliability information based on the normalized log likelihood probabilities.
    Type: Application
    Filed: July 7, 2009
    Publication date: March 18, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon-Young CHO, Sang Kyu Park, Jun Park, Seung Hi Kim, Ilbin Lee, Kyuwoong Hwang, Hyung-Bae Jeon, Yunkeun Lee
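    The identification step — fusing estimated direction information with reliability based on normalized log-likelihoods — might be sketched as follows. The equal-weight additive fusion is an assumption; the abstract does not say how the two cues are combined:

    ```python
    def identify_speech_source(direction_scores, normalized_logliks):
        """Return the index of the separated source most likely to be the
        user's speech, scoring each source by its direction evidence plus
        its normalized log-likelihood under the speech recognizer."""
        combined = [d + l for d, l in zip(direction_scores, normalized_logliks)]
        return combined.index(max(combined))
    ```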
  • Publication number: 20090265168
    Abstract: A noise cancellation apparatus includes a noise estimation module for receiving a noise-containing input speech, and estimating a noise therefrom to output the estimated noise; a first Wiener filter module for receiving the input speech, and applying a first Wiener filter thereto to output a first estimation of clean speech; a database for storing data of a Gaussian mixture model for modeling clean speech; and an MMSE estimation module for receiving the first estimation of clean speech and the data of the Gaussian mixture model to output a second estimation of clean speech. The apparatus further includes a final clean speech estimation module for receiving the second estimation of clean speech from the MMSE estimation module and the estimated noise from the noise estimation module, and obtaining a final Wiener filter gain therefrom to output a final estimation of clean speech by applying the final Wiener filter gain.
    Type: Application
    Filed: November 13, 2008
    Publication date: October 22, 2009
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Byung Ok Kang, Ho-Young Jung, Sung Joo Lee, Yunkeun Lee, Jeon Gue Park, Jeom Ja Kang, Hoon Chung, Euisok Chung, Ji Hyun Wang, Hyung-Bae Jeon
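    The gain the final stage applies can be illustrated with the textbook per-frequency-bin Wiener form, G = S / (S + N), equivalently SNR / (1 + SNR). The patent's exact gain derivation (combining the MMSE clean-speech estimate with the estimated noise) may differ; this is the standard formula only:

    ```python
    def wiener_gain(speech_power, noise_power):
        """Classical Wiener filter gain for one frequency bin, given the
        estimated clean-speech power and noise power in that bin."""
        return speech_power / (speech_power + noise_power)
    ```

    Multiplying each noisy spectral bin by this gain attenuates bins dominated by noise while passing bins dominated by speech nearly unchanged.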
  • Publication number: 20090157399
    Abstract: An apparatus for evaluating the performance of speech recognition includes a speech database for storing N-number of test speech signals for evaluation. A speech recognizer is located in an actual environment and executes the speech recognition of the test speech signals reproduced from the speech database using a loudspeaker in the actual environment to produce speech recognition results. A performance evaluation module evaluates the performance of the speech recognition by comparing correct recognition answers with the speech recognition results.
    Type: Application
    Filed: December 16, 2008
    Publication date: June 18, 2009
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hoon-Young CHO, Yunkeun Lee, Ho-Young Jung, Byung Ok Kang, Jeom Ja Kang, Kap Kee Kim, Sung Joo Lee, Hoon Chung, Jeon Gue Park, Hyung-Bae Jeon
  • Publication number: 20090150146
    Abstract: A microphone-array-based speech recognition system using blind source separation (BSS) and a target speech extraction method in the system are provided. The speech recognition system performs an independent component analysis (ICA) to separate mixed signals input through a plurality of microphones into sound-source signals, extracts one target speech spoken for speech recognition from the separated sound-source signals by using a Gaussian mixture model (GMM) or a hidden Markov model (HMM), and automatically recognizes a desired speech from the extracted target speech. Accordingly, it is possible to obtain a high speech recognition rate even in a noise environment.
    Type: Application
    Filed: September 30, 2008
    Publication date: June 11, 2009
    Applicant: ELECTRONICS & TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hoon Young CHO, Yun Keun Lee, Jeom Ja Kang, Byung Ok Kang, Kap Kee Kim, Sung Joo Lee, Ho Young Jung, Hoon Chung, Jeon Gue Park, Hyung Bae Jeon
  • Publication number: 20090076817
    Abstract: Provided are an apparatus and method for recognizing speech, in which reliability with respect to phoneme-recognized phoneme sequences is calculated and performance of speech recognition is enhanced using the calculated results. The method of recognizing speech includes the steps of: determining a boundary between phonemes included in character sequences that are phonetically input to detect each phoneme interval; calculating reliability according to a probability that a phoneme indicated by the detected phoneme interval corresponds to a phoneme included in a predefined phoneme model; calculating a phoneme alignment cost with respect to the character sequences based on the calculated reliability and a pre-trained and stored phoneme recognition probability distribution; and performing phoneme alignment based on the calculated phoneme alignment cost to perform speech recognition on the input character sequences.
    Type: Application
    Filed: March 13, 2008
    Publication date: March 19, 2009
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyung Bae JEON, Kyu Woong HWANG, Seung Hi KIM, Hoon CHUNG, Jun PARK, Yun Keun LEE
  • Publication number: 20080133239
    Abstract: Provided are an apparatus and method for recognizing continuous speech using search space restriction based on phoneme recognition. In the apparatus and method, a search space can be primarily reduced by restricting connection words to be shifted at a boundary between words based on the phoneme recognition result. In addition, the search space can be secondarily reduced by rapidly calculating a degree of similarity between the connection word to be shifted and the phoneme recognition result using a phoneme code and shifting the corresponding phonemes to only connection words having degrees of similarity equal to or higher than a predetermined reference value. Therefore, the speed and performance of the speech recognition process can be improved in various speech recognition services.
    Type: Application
    Filed: December 4, 2007
    Publication date: June 5, 2008
    Inventors: Hyung Bae Jeon, Jun Park, Seung Hi Kim, Kyu Woong Hwang
  • Publication number: 20080109228
    Abstract: Provided are an automatic speech translation system and a method for obtaining accurate translation performance with a simple structure. Because input and output sentences are written in different languages, automatic speech translation requires techniques for processing different languages. Repetition of text processing like morpheme analysis or sentence parsing in conventional automatic speech translation can complicate the overall translation process. Meanwhile, although input and output sentences are written in different languages, they have to have the same meaning and a corresponding sentence form and words. Accordingly, the corresponding words and sentence forms of the two languages can be expressed with a simple structure and utilized in the automatic speech translation process, thereby maintaining consistency during the process and avoiding unnecessary process repetition, which reduces errors and improves performance.
    Type: Application
    Filed: October 25, 2007
    Publication date: May 8, 2008
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jun PARK, Seung Hi KIM, Hyung Bae JEON, Young Jik LEE, Hoon CHUNG
  • Publication number: 20030097261
    Abstract: A speech detection apparatus using basis functions, which are trained by independent component analysis (ICA), and a method thereof are provided. The speech detection method includes the steps of: training basis functions of speech signals and basis functions of noise signals according to a predetermined learning rule; adapting the basis functions of noise signals to the present environment by using the characteristics of the noise signals that are input into a microphone; extracting determination information for detecting speech activation from the basis functions of speech signals and the basis functions of noise signals; and detecting a speech starting point and a speech ending point of the microphone signals, which come into a speech recognition unit, from the determination information.
    Type: Application
    Filed: February 11, 2002
    Publication date: May 22, 2003
    Inventors: Hyung-Bae Jeon, Ho-Young Jung