Patents by Inventor Jeong Se Kim

Jeong Se Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11963439
    Abstract: The present disclosure relates to an organic electroluminescent compound and an organic electroluminescent device comprising the same. By incorporating the compound according to the present disclosure, it is possible to produce an organic electroluminescent device having improved driving voltage, power efficiency, and/or lifetime properties compared to conventional organic electroluminescent devices.
    Type: Grant
    Filed: August 24, 2022
    Date of Patent: April 16, 2024
    Assignee: Rohm and Haas Electronic Materials Korea Ltd.
    Inventors: Eun-Joung Choi, Young-Kwang Kim, Su-Hyun Lee, So-Young Jung, YeJin Jeon, Hong-Se Oh, Dong-Hyung Lee, Jin-Man Kim, Hyun-Woo Kang, Mi-Ja Lee, Hee-Ryong Kang, Hyo-Nim Shin, Jeong-Hwan Jeon, Sang-Hee Cho
  • Patent number: 10108606
    Abstract: Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system for generating a synthetic sound having characteristics similar to those of an original speaker's voice includes a speech recognition module configured to generate text data by performing speech recognition for an original speech signal of an original speaker and extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: October 23, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung Yun, Ki Hyun Kim, Sang Hun Kim, Yun Young Kim, Jeong Se Kim, Min Kyu Lee, Soo Jong Lee, Young Jik Lee, Mu Yeol Choi
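
The entry above (patent 10108606) describes a three-module pipeline: speech recognition that also extracts the original speaker's voice characteristics, automatic translation, and speech synthesis that reuses those characteristics. The following is a minimal Python sketch of that kind of pipeline; all names, types, and the stubbed logic are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch (not the patented implementation) of a recognition -> translation ->
# synthesis pipeline that carries speaker characteristics through to the synthesizer.
from dataclasses import dataclass

@dataclass
class SpeakerCharacteristics:
    pitch_hz: float          # pitch information
    intensity_db: float      # vocal intensity information
    speech_rate: float       # speech speed, e.g. syllables per second

def recognize(audio: bytes) -> tuple[str, SpeakerCharacteristics]:
    """Speech recognition module: returns text plus extracted speaker characteristics."""
    text = "..."                                  # placeholder ASR result
    return text, SpeakerCharacteristics(180.0, 65.0, 4.2)

def translate(text: str, target_lang: str) -> str:
    """Automatic translation module: produces the synthesis-target translation."""
    return text                                   # placeholder MT result

def synthesize(text: str, voice: SpeakerCharacteristics) -> bytes:
    """Speech synthesis module: generates audio conditioned on the original voice."""
    return b""                                    # placeholder TTS result

def interpret(audio: bytes, target_lang: str) -> bytes:
    source_text, voice = recognize(audio)
    translation = translate(source_text, target_lang)
    return synthesize(translation, voice)
```
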
  • Publication number: 20170255616
    Abstract: Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system for generating a synthetic sound having characteristics similar to those of an original speaker's voice includes a speech recognition module configured to generate text data by performing speech recognition for an original speech signal of an original speaker and extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
    Type: Application
    Filed: July 19, 2016
    Publication date: September 7, 2017
    Inventors: Seung YUN, Ki Hyun KIM, Sang Hun KIM, Yun Young KIM, Jeong Se KIM, Min Kyu LEE, Soo Jong LEE, Young Jik LEE, Mu Yeol CHOI
  • Publication number: 20170147558
    Abstract: Provided is a method for interpretation and translation accomplished by an interpretation and translation apparatus of a user through interfacing with an interpretation and translation apparatus of the other party. The method includes: automatically setting a translation target language which enables communication with the other party based on a message from the interpretation and translation apparatus of the other party by using a communication connection in a network; receiving input information of a use language of the user; calling a translator corresponding to the translation target language to transmit a result obtained by translating the input information into the translation target language to the interpretation and translation apparatus of the other party; and outputting received data from the interpretation and translation apparatus of the other party or outputting the result obtained by translating the received data into the use language of the user by using the translator.
    Type: Application
    Filed: June 24, 2016
    Publication date: May 25, 2017
    Inventors: Jeong Se KIM, Sang Hun KIM, Seung YUN
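
The publication above centers on negotiating the translation target language from a message received from the other party's apparatus. The sketch below shows one way that negotiation and the subsequent translate-and-send step could look in code; the handshake format, function names, and dummy translator are assumptions for illustration only.

```python
# Illustrative sketch only: pick the target language from the other party's message,
# then translate the user's input before sending it.
def set_target_language(handshake: dict, my_language: str) -> str:
    """Pick the translation target language from the other party's handshake message."""
    other_language = handshake["language"]        # e.g. {"language": "en"}
    return other_language if other_language != my_language else my_language

def send_utterance(text: str, my_language: str, target_language: str, translator) -> str:
    """Translate the user's input and return what would be sent to the other party."""
    return translator(text, source=my_language, target=target_language)

# Usage example with a dummy translator:
if __name__ == "__main__":
    translator = lambda text, source, target: f"[{source}->{target}] {text}"
    target = set_target_language({"language": "en"}, my_language="ko")
    print(send_utterance("안녕하세요", "ko", target, translator))
```
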
  • Patent number: 9292499
    Abstract: The present invention relates to an automatic translation and interpretation apparatus and method. The apparatus includes a speech input unit for receiving a speech signal in a first language. A text input unit receives text in the first language. A sentence recognition unit recognizes a sentence in the first language desired to be translated by extracting speech features from the speech signal received from the speech input unit or measuring a similarity of each word of the text received from the text input unit. A translation unit translates the recognized sentence in the first language into a sentence in a second language. A speech output unit outputs uttered sound of the translated sentence in the second language in speech. A text output unit converts the uttered sound of the translated sentence in the second language into text transcribed in the first language and outputs the text.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: March 22, 2016
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Soo-Jong Lee, Sang-Hun Kim, Jeong-Se Kim, Seung Yun, Min-Kyu Lee, Sang-Kyu Park
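
Patent 9292499 above describes an apparatus that accepts either speech or text in the first language, translates the recognized sentence, and outputs both synthesized speech and the translation's pronunciation transcribed in first-language script. A minimal, assumed sketch of that input/output flow follows; none of these names or stubs come from the patent.

```python
# Assumed sketch of the flow: speech or text in, translation out as both audio and a
# first-language transcription of the second-language pronunciation.
from typing import Optional

def recognize_sentence(speech: Optional[bytes], text: Optional[str]) -> str:
    """Sentence recognition unit: use speech features if audio is given, else match text."""
    if speech is not None:
        return "recognized sentence"              # placeholder ASR path
    return text or ""

def translate(sentence: str, target_lang: str) -> str:
    return sentence                               # placeholder translation unit

def transcribe_pronunciation(translated: str, script_lang: str) -> str:
    """Text output unit: render the second-language pronunciation in first-language script."""
    return translated                             # placeholder transliteration

def interpret(speech: Optional[bytes], text: Optional[str], target_lang: str) -> tuple[bytes, str]:
    sentence = recognize_sentence(speech, text)
    translated = translate(sentence, target_lang)
    audio = b""                                   # placeholder speech output unit (TTS)
    reading = transcribe_pronunciation(translated, script_lang="ko")
    return audio, reading
```
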
  • Publication number: 20150073796
    Abstract: Disclosed herein are an apparatus and a method of generating a language model for speech recognition. The present invention provides an apparatus for generating a language model capable of improving speech recognition performance by predicting positions at which breaks are present and reflecting the predicted break information.
    Type: Application
    Filed: April 2, 2014
    Publication date: March 12, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeong-Se Kim, Sang-Hun Kim
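
The publication above is about injecting predicted break (pause) positions into the data used to build a language model. The toy sketch below illustrates the general idea with a trivial break predictor and bigram counting; it is a hedged stand-in, not the patented method.

```python
# Hedged sketch: insert break tokens into text before counting n-grams, so the language
# model can reflect predicted pause positions. The break predictor here is a trivial rule.
from collections import Counter

BREAK = "<brk>"

def insert_breaks(words: list[str]) -> list[str]:
    """Toy break predictor: assume a break after punctuation-like tokens."""
    out = []
    for w in words:
        out.append(w)
        if w.endswith((",", ";")):
            out.append(BREAK)
    return out

def bigram_counts(sentences: list[list[str]]) -> Counter:
    counts = Counter()
    for words in sentences:
        tokens = ["<s>"] + insert_breaks(words) + ["</s>"]
        counts.update(zip(tokens, tokens[1:]))
    return counts

print(bigram_counts([["hello,", "how", "are", "you"]]))
```
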
  • Publication number: 20140303957
    Abstract: The present invention relates to an automatic translation and interpretation apparatus and method. The apparatus includes a speech input unit for receiving a speech signal in a first language. A text input unit receives text in the first language. A sentence recognition unit recognizes a sentence in the first language desired to be translated by extracting speech features from the speech signal received from the speech input unit or measuring a similarity of each word of the text received from the text input unit. A translation unit translates the recognized sentence in the first language into a sentence in a second language. A speech output unit outputs uttered sound of the translated sentence in the second language in speech. A text output unit converts the uttered sound of the translated sentence in the second language into text transcribed in the first language and outputs the text.
    Type: Application
    Filed: January 22, 2014
    Publication date: October 9, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Soo-Jong LEE, Sang-Hun KIM, Jeong-Se KIM, Seung YUN, Min-Kyu LEE, Sang-Kyu PARK
  • Publication number: 20140195226
    Abstract: A method of correcting errors in a speech recognition system includes a process of searching a speech recognition error-answer pair DB based on a sound model for a first candidate answer group for a speech recognition error, a process of searching a word relationship information DB for a second candidate answer group for the speech recognition error, a process of searching a user error correction information DB for a third candidate answer group for the speech recognition error, a process of searching a domain articulation pattern DB and a proper noun DB for a fourth candidate answer group for the speech recognition error, and a process of aligning candidate answers within each of the retrieved candidate answer groups and displaying the aligned candidate answers.
    Type: Application
    Filed: May 24, 2013
    Publication date: July 10, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Seung YUN, Sanghun KIM, Jeong Se KIM, Soo-jong LEE, Ki Hyun KIM
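
The error-correction publication above gathers candidate answers from several databases and then aligns and displays them. The sketch below shows a simple way to merge candidates from multiple sources into one ranked list; the plain dicts stand in for the patent's DBs, and the voting heuristic is an assumption, not the patented alignment.

```python
# Illustrative sketch: query several correction sources for a misrecognized word and
# rank the merged candidates by how many sources agree.
def collect_candidates(error: str, sources: list[dict[str, list[str]]]) -> list[str]:
    """Gather candidate answers from each source, then rank by agreement count."""
    votes: dict[str, int] = {}
    for source in sources:                        # error-answer pairs, word relations,
        for candidate in source.get(error, []):   # user corrections, domain patterns...
            votes[candidate] = votes.get(candidate, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

sources = [
    {"seul": ["Seoul"]},                          # sound-model error-answer pair DB stand-in
    {"seul": ["Seoul", "soul"]},                  # word relationship information DB stand-in
    {"seul": ["Seoul"]},                          # user error correction DB stand-in
]
print(collect_candidates("seul", sources))        # ['Seoul', 'soul']
```
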
  • Patent number: 8504359
    Abstract: A speech recognition method using a domain ontology includes: constructing a domain ontology DB; forming a speech recognition grammar using the constructed domain ontology DB; extracting a feature vector from a speech signal; and modeling the speech signal using an acoustic model. The method performs speech recognition by using the acoustic model, a speech recognition dictionary, and the speech recognition grammar on the basis of the feature vector.
    Type: Grant
    Filed: September 1, 2009
    Date of Patent: August 6, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Yun, Soo Jong Lee, Jeong Se Kim, Il Bin Lee, Jun Park, Sang Kyu Park
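
Patent 8504359 above turns a domain ontology into a speech recognition grammar. The rough sketch below shows, under stated assumptions, how ontology entries could be expanded into concrete grammar word sequences; the slot patterns are invented for illustration, and the acoustic modelling and decoding steps are not shown.

```python
# Rough, assumed sketch: expand slot patterns into concrete word sequences using a
# domain ontology, yielding a simple recognition grammar.
def build_grammar(ontology: dict[str, list[str]]) -> dict[str, list[str]]:
    """Expand each slot pattern into concrete word sequences using the ontology."""
    patterns = ["go to <city>", "weather in <city>"]
    grammar = {}
    for pattern in patterns:
        grammar[pattern] = [pattern.replace("<city>", c) for c in ontology["city"]]
    return grammar

ontology = {"city": ["Seoul", "Busan", "Daejeon"]}     # domain ontology DB stand-in
print(build_grammar(ontology)["go to <city>"])
```
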
  • Publication number: 20130103382
    Abstract: An apparatus for searching similar sentences that has a translation sentence database includes an input unit to which a sentence is input; a first language processing unit configured to perform language processing on sentences input through the input unit; and a first language similarity calculating unit configured to refer to previously translated sentences to extract similar sentences for the first language sentence. Further, the apparatus includes a translating unit configured to translate a sentence into a second language sentence; a second language processing unit configured to perform language processing on a second language sentence; a second language similarity calculating unit configured to refer to the previously translated sentences to extract similar sentences for the second language sentence; and a re-ranking unit configured to combine similar sentence extracting results of the first language with those of the second language to re-rank sentence outputs.
    Type: Application
    Filed: August 29, 2012
    Publication date: April 25, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeong Se Kim, Sanghun Kim, Soo-jong Lee, Ji Hyun Wang, Seung Yun
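
The publication above re-ranks previously translated sentence pairs by combining similarity in the first language with similarity in the second language. The sketch below shows that combination with a simple token-overlap measure; the measure and the weighting are illustrative assumptions, not the patented similarity calculation.

```python
# Minimal sketch: score stored (first-language, second-language) sentence pairs against
# the input and its translation, then combine both scores for re-ranking.
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def rerank(query_l1: str, query_l2: str, pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """pairs holds previously translated (first-language, second-language) sentences."""
    def score(pair):
        s1, s2 = pair
        return similarity(query_l1, s1) + similarity(query_l2, s2)
    return sorted(pairs, key=score, reverse=True)

pairs = [("where is the station", "역이 어디에 있나요"), ("how much is this", "이거 얼마예요")]
print(rerank("where is the train station", "기차역이 어디에 있나요", pairs)[0])
```
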
  • Patent number: 8370130
    Abstract: A speech understanding apparatus includes: a speech recognition unit for recognizing an input speech to produce a speech recognition result; a sentence analysis unit for performing morpheme analysis on a sentence corresponding to the speech recognition result, extracting additional information, and performing syntax analysis; a hierarchy describing unit for describing hierarchy of the sentence; a class transformation unit for performing class transformation on the sentence; a semantic representation determination unit for marking optional expressions for the sentence, deleting meaningless expressions and the additional information, converting the sentence into its base form, and deleting morphemic tags or symbols to determine a semantic representation; a semantic representation retrieval unit for retrieving the determined semantic representation from an example-based semantic representation pattern database; and a retrieval result processing unit for selectively producing a retrieved semantic representation.
    Type: Grant
    Filed: November 19, 2009
    Date of Patent: February 5, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Yun, Seung Hi Kim, Jun Park, Jeong Se Kim, Ilbin Lee, Soo Jong Lee, Sanghun Kim, Sang Kyu Park
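
Patent 8370130 above normalizes an analyzed sentence into a semantic representation and retrieves it from an example-based pattern database. The sketch below illustrates that normalize-then-retrieve flow with toy normalization and exact-match lookup; the names, the stopword list, and the pattern format are all assumptions for illustration.

```python
# Assumed sketch: strip filler expressions, reduce to a compact representation, then
# look the result up in an example-based semantic representation pattern DB.
STOPWORDS = {"please", "uh", "um"}                # stand-in for "meaningless expressions"

def to_semantic_representation(words: list[str]) -> str:
    """Delete filler words and lowercase everything as a crude base-form conversion."""
    return " ".join(w.lower() for w in words if w.lower() not in STOPWORDS)

def retrieve(rep: str, pattern_db: dict[str, str]):
    """Exact-match retrieval from the example-based pattern DB; None if nothing matches."""
    return pattern_db.get(rep)

pattern_db = {"book a flight to seoul": "REQUEST(book_flight, dest=Seoul)"}
rep = to_semantic_representation(["Please", "book", "a", "flight", "to", "Seoul"])
print(retrieve(rep, pattern_db))                  # REQUEST(book_flight, dest=Seoul)
```
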
  • Publication number: 20120010873
    Abstract: Disclosed herein are a sentence translation apparatus and method. The sentence translation apparatus includes a voice recognition unit, a morphemic part-of-speech tagging unit, a pause extraction unit, and a sentence separation unit. The voice recognition unit creates a sentence in a first language based on results of recognition of a voice in a first language. The morphemic part-of-speech tagging unit tags morphemic parts of speech from the sentence in the first language. The pause extraction unit extracts pause information from the voice in the first language. The sentence separation unit separates the sentence in the first language based on information about the morphemic parts of speech tagged by the morphemic part-of-speech tagging unit and the pause information extracted by the pause extraction unit.
    Type: Application
    Filed: July 5, 2011
    Publication date: January 12, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeong-Se Kim, Sang-Hun Kim, Seung Yun, Soo-Jong Lee, Sang-Kyu Park
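
The publication above separates a recognized word stream into sentences using morphemic part-of-speech tags together with extracted pause information. The small sketch below splits wherever a long pause coincides with a sentence-final tag; the tag set, threshold, and function names are illustrative assumptions.

```python
# Sketch under assumptions: split a recognized word stream into sentences where a long
# pause coincides with a sentence-final part-of-speech tag.
def split_sentences(words, tags, pauses, min_pause=0.5):
    """words[i] carries POS tags[i]; pauses[i] is the silence (seconds) after words[i]."""
    sentences, current = [], []
    for word, tag, pause in zip(words, tags, pauses):
        current.append(word)
        if tag == "EF" and pause >= min_pause:    # EF: sentence-final ending (Korean POS)
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences

print(split_sentences(["안녕하세요", "반갑습니다"], ["EF", "EF"], [0.8, 0.0]))
```
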
  • Publication number: 20110153309
    Abstract: Provided is an automatic interpretation apparatus including a voice recognizing unit, a language processing unit, a similarity calculating unit, a sentence translating unit, and a voice synthesizing unit. The voice recognizing unit receives a first-language voice and generates a first-language sentence through a voice recognition operation. The language processing unit extracts elements included in the first-language sentence. The similarity calculating unit compares the extracted elements with elements included in a translated sentence stored in a translated sentence database and calculates the similarity between the first-language sentence and the translated sentence on the basis of the comparison result. The sentence translating unit translates the first-language sentence into a second-language sentence with reference to the translated sentence database according to the calculated similarity.
    Type: Application
    Filed: December 15, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeong Se KIM, Sang Hun KIM, Seung YUN, Chang Hyun KIM
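
The publication above compares the recognized first-language sentence with entries in a translated-sentence database and translates according to the calculated similarity. The hedged sketch below reuses a stored translation when similarity clears a threshold and otherwise falls back to machine translation; the element extraction, similarity measure, and threshold are simplistic stand-ins, not the patented ones.

```python
# Hedged sketch: reuse a stored translation when a DB entry is similar enough to the
# recognized sentence, otherwise fall back to the sentence translating unit (MT).
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def translate(sentence: str, db: dict[str, str], mt, threshold: float = 0.7) -> str:
    best = max(db, key=lambda s: similarity(sentence, s), default=None)
    if best is not None and similarity(sentence, best) >= threshold:
        return db[best]                           # reuse translation from the DB
    return mt(sentence)                           # fall back to machine translation

db = {"where is the bus stop": "버스 정류장이 어디에 있나요"}
print(translate("where is the bus stop", db, mt=lambda s: "[MT] " + s))
```
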
  • Publication number: 20110054883
    Abstract: A speech understanding apparatus includes: a speech recognition unit for recognizing an input speech to produce a speech recognition result; a sentence analysis unit for performing morpheme analysis on a sentence corresponding to the speech recognition result, extracting additional information, and performing syntax analysis; a hierarchy describing unit for describing hierarchy of the sentence; a class transformation unit for performing class transformation on the sentence; a semantic representation determination unit for marking optional expressions for the sentence, deleting meaningless expressions and the additional information, converting the sentence into its base form, and deleting morphemic tags or symbols to determine a semantic representation; a semantic representation retrieval unit for retrieving the determined semantic representation from an example-based semantic representation pattern database; and a retrieval result processing unit for selectively producing a retrieved semantic representation.
    Type: Application
    Filed: November 19, 2009
    Publication date: March 3, 2011
    Inventors: Seung Yun, Seung Hi Kim, Jun Park, Jeong Se Kim, Ilbin Lee, Soo Jong Lee, Sanghun Kim, Sang Kyu Park
  • Publication number: 20100145680
    Abstract: A speech recognition method using a domain ontology includes: constructing a domain ontology DB; forming a speech recognition grammar using the constructed domain ontology DB; extracting a feature vector from a speech signal; and modeling the speech signal using an acoustic model. The method performs speech recognition by using the acoustic model, a speech recognition dictionary, and the speech recognition grammar on the basis of the feature vector.
    Type: Application
    Filed: September 1, 2009
    Publication date: June 10, 2010
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Seung YUN, Soo Jong Lee, Jeong Se Kim, Il Bin Lee, Jun Park, Sang Kyu Park
  • Patent number: 6322482
    Abstract: A kick training belt for use in martial arts training exercises includes an adjustable elastic body, binding tools at both ends thereof, and air tubes in the binding tools which are inflated upon securing the belt to a trainee's limbs, so as to cushion and protect the limbs from accidental injury and prevent the elastic body from constricting blood flow to the limbs. The binding tools are secured to the trainee's limbs via hook and loop fasteners, which can also be used to connect two or more kick training belts end-to-end. Auxiliary strips having complementary hook and loop fasteners engage the kick training belt, and can be used to removably attach auxiliary exercise equipment thereto, including knee pads, sand bag weights and athletic handles. A counting device engages the kick training belt to record the number of exercise repetitions a trainee has completed during a session.
    Type: Grant
    Filed: June 14, 2000
    Date of Patent: November 27, 2001
    Inventor: Jeong Se Kim