Patents by Inventor Yoon-Hyung Roh

Yoon-Hyung Roh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240127710
    Abstract: Disclosed are a system and method for automatically evaluating an essay. The system includes a structure analysis module configured to divide learning data and learner essay text in a predetermined structure analysis unit, generate structure tagging information for each structure analysis unit, and structure the learning data and the learner essay text by attaching the structure tagging information to the learning data and the learner essay text, a learning module configured to generate an essay evaluation model through learning by using essay text that is included in the structured learning data and the structure tagging information as an input value and using an evaluation score that is included in the structured learning data as a label, and an evaluation module configured to generate essay evaluation results using the essay evaluation model.
    Type: Application
    Filed: April 18, 2023
    Publication date: April 18, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Minsoo CHO, Oh Woog KWON, Yoon-Hyung ROH, Ki Young LEE, Yo Han LEE, Sung Kwon CHOI, Jinxia HUANG
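The structuring step this abstract describes can be pictured with a short sketch. This is an illustrative toy, not the patented implementation: the position-based tagging heuristic and the `structure_essay` / `to_training_example` names are invented here, standing in for the patent's structure analysis module and learning-data preparation.

```python
def structure_essay(text):
    """Split an essay into sentence units and attach structure tags.

    Position-based tagging (first = INTRO, last = CONCLUSION, rest = BODY)
    is a toy stand-in for the patent's structure analysis module.
    """
    units = [s.strip() for s in text.split(".") if s.strip()]
    tagged = []
    for i, unit in enumerate(units):
        if i == 0:
            tag = "INTRO"
        elif i == len(units) - 1:
            tag = "CONCLUSION"
        else:
            tag = "BODY"
        tagged.append((tag, unit))
    return tagged


def to_training_example(text, score):
    """Pair structured essay text (input) with its evaluation score (label)."""
    return {"input": structure_essay(text), "label": score}
```

A learning module would then fit a scoring model on such (input, label) pairs; the abstract leaves the model itself unspecified.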
  • Patent number: 11301625
    Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: April 12, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yoon Hyung Roh, Jong Hun Shin, Young Ae Seo
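The buffering-and-segmentation flow in this abstract can be sketched as follows. The punctuation-based boundary rule is an invented stand-in for the patent's learned translation-unit recognition model, and the class name is hypothetical.

```python
class TranslationUnitBuffer:
    """Accumulates speech-recognition output and emits translation units."""

    BOUNDARIES = {",", ".", "?", "!"}  # toy stand-in for a learned model

    def __init__(self):
        self.buffer = []

    def push(self, token):
        """Add a recognized token; return a completed unit, or None."""
        self.buffer.append(token)
        if token and token[-1] in self.BOUNDARIES:
            unit = " ".join(self.buffer)
            self.buffer = []  # unit handed off for translation
            return unit
        return None
```

Emitting units as soon as a boundary is recognized, rather than waiting for a full sentence, is what makes the interpretation simultaneous.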
  • Publication number: 20200159822
    Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 21, 2020
    Inventors: Yoon Hyung ROH, Jong Hun SHIN, Young Ae SEO
  • Patent number: 9618352
    Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialog management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialog management control unit generates a response to the natural voice command based on the related information, the user intention and a dialog history, and controls the navigator in accordance with the conversation response.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: April 11, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh-Woog Kwon, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
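As a rough illustration of how the units described above fit together, here is a toy pipeline. The intent labels, the keyword-based understanding step, and the `vehicle_data` keys are all invented for this sketch; the patent uses a full speech language understanding component rather than keyword matching.

```python
def handle_voice_command(command, vehicle_data, dialog_history):
    """Turn a natural voice command into a (response, navigator_action) pair."""
    # Speech language understanding: map text to a machine-readable intent.
    intent = "set_destination" if "go to" in command else "query_state"
    # Related information extraction: pull the data matching the intent.
    if intent == "set_destination":
        destination = command.split("go to", 1)[1].strip()
        action = ("navigate", destination)
        response = f"Routing to {destination}."
    else:
        fuel = vehicle_data.get("fuel", "unknown")
        action = ("none", None)
        response = f"Fuel level is {fuel}."
    # Dialog management: record the exchange for later turns.
    dialog_history.append((command, response))
    return response, action
```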
  • Patent number: 9230544
    Abstract: The present invention relates to a spoken dialog system and method based on dual dialog management using a hierarchical dialog task library. The system increases reuse of dialog knowledge by constructing and packaging the knowledge in hierarchically structured task units, and makes dialog-service design convenient by classifying the knowledge by task unit and processing it with a dialog plan scheme that describes the relationships between tasks. This differs from existing spoken dialog systems, in which dialog knowledge is difficult to reuse because its construction requires considerable cost and time.
    Type: Grant
    Filed: March 25, 2013
    Date of Patent: January 5, 2016
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Oh Woog Kwon, Yoon Hyung Roh, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Eun Jin Park, Jinxia Huang, Young Kil Kim, Chang Hyun Kim, Seung Hoon Na, Young Ae Seo, Yun Jin, Jong Hun Shin, Sang Kyu Park
  • Publication number: 20150276424
    Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialogue management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialogue management control unit generates a response to the natural voice command based on the related information, the user intention and a dialogue history, and controls the navigator in accordance with the conversation response.
    Type: Application
    Filed: March 27, 2015
    Publication date: October 1, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh-Woog KWON, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
  • Publication number: 20150227510
    Abstract: The present invention relates to a translation function and discloses an automatic translation operating device, including: at least one of a voice input device that collects voice signals uttered by a plurality of speakers, and a communication module that receives the voice signals; and a control unit that classifies the voice signals by speaker, clusters the speaker-specific voice signals according to a predefined condition, and then performs voice recognition and translation. A corresponding method and a system including the device are also disclosed.
    Type: Application
    Filed: January 28, 2015
    Publication date: August 13, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong Hun SHIN, Ki Young LEE, Young Ae SEO, Jin Xia HUANG, Sung Kwon CHOI, Yun JIN, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Oh Woog KWON, Sang Keun JUNG, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
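The clustering step described above can be sketched once speaker labels are available. In this toy, speaker IDs are assumed to be given, whereas the patent classifies them from the audio itself; the function name is invented.

```python
def group_by_speaker(segments):
    """Cluster (speaker_id, text) segments into per-speaker utterance runs,
    so each run can be recognized and translated as one unit."""
    runs = []
    for speaker, text in segments:
        if runs and runs[-1][0] == speaker:
            runs[-1][1].append(text)  # extend the current speaker's run
        else:
            runs.append((speaker, [text]))  # new speaker takes over
    return [(spk, " ".join(texts)) for spk, texts in runs]
```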
  • Publication number: 20150199340
    Abstract: The present invention relates to a system for translating a language based on a user's reaction. The system includes an interface unit which inputs uttered sentences of the first user and the second user and outputs the translated result; a translating unit which translates the uttered sentences of the first user and the second user; a conversation intention recognizing unit which determines the conversation intention of the second user from the second user's reply to the translation of the first user's utterance; a translation result evaluating unit which evaluates the translation of the first user's uttered sentence based on the conversation intention determined by the conversation intention recognizing unit; and a translation result evaluation storing unit which stores the translation result and an evaluation of the translation result.
    Type: Application
    Filed: January 12, 2015
    Publication date: July 16, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Oh Woog KWON, Young Kil KIM, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jin Xia HUANG, Kang Il KIM, Young Ae SEO, Sang Kyu PARK
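The evaluation idea above, scoring a translation from the listener's reaction, can be reduced to a small sketch. The intent labels below are invented; in the patent, the conversation intention recognizing unit supplies them.

```python
# Invented labels for replies that suggest the translation was not understood.
NEGATIVE_INTENTS = {"clarification_request", "repeat_request"}


def evaluate_translation(translation, reply_intent):
    """Score a translation from the second user's reply intent: a request
    for clarification suggests the translation failed to convey the utterance."""
    ok = reply_intent not in NEGATIVE_INTENTS
    return {"translation": translation, "ok": ok}
```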
  • Publication number: 20150193410
    Abstract: The present invention relates to a system for editing a text of a portable terminal and a method thereof, and more particularly to a technology which edits a text which is input into a portable terminal based on a touch interface. An exemplary embodiment of the present invention provides a text editing system of a portable terminal, including: an interface unit which inputs or outputs a text or voice; a text generating unit which generates the input text or voice as a text; a control unit which provides a keyboard based editing screen or a character recognition based editing screen for the generated text through the interface unit; and a text editing unit which performs an editing command which is input from a user through the keyboard based editing screen or the character recognition based editing screen under the control of the control unit.
    Type: Application
    Filed: September 12, 2014
    Publication date: July 9, 2015
    Inventors: Yun JIN, Chang Hyun KIM, Young Ae SEO, Jin Xia HUANG, Oh Woog KWON, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Jong Hun SHIN, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
  • Patent number: 9058322
    Abstract: The present invention relates to an apparatus and method for providing a two-way automatic interpretation and translation service. The apparatus includes a first interpretation and translation unit for interpreting and translating a first language into a second language. A second interpretation and translation unit interprets and translates the second language into the first language. A context information management unit receives conversational context and translation history information, and shares and manages the conversational context and translation history information.
    Type: Grant
    Filed: April 16, 2013
    Date of Patent: June 16, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh-Woog Kwon, Ki-Young Lee, Sung-Kwon Choi, Yoon-Hyung Roh, Yun Jin, Eun-Jin Park, Young-Kil Kim, Sang-Kyu Park
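The shared-context arrangement described above can be sketched as two translation directions holding one context manager. The class and method names are invented, and the two translation functions are assumed inputs.

```python
class ContextManager:
    """Shared conversational context and translation history (sketch)."""

    def __init__(self):
        self.history = []

    def record(self, source, target):
        self.history.append((source, target))


class InterpreterPair:
    """Two translation directions sharing one context manager."""

    def __init__(self, translate_ab, translate_ba):
        self.context = ContextManager()  # shared by both directions
        self.ab, self.ba = translate_ab, translate_ba

    def a_to_b(self, text):
        out = self.ab(text, self.context.history)
        self.context.record(text, out)
        return out

    def b_to_a(self, text):
        out = self.ba(text, self.context.history)
        self.context.record(text, out)
        return out
```

Because both directions read and write the same history, each side can draw on the other's preceding turns.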
  • Publication number: 20150127361
    Abstract: Disclosed are an automatic translation apparatus and method capable of optimizing limited translation knowledge in a database mounted in a portable mobile communication terminal, obtaining translation knowledge from external servers in order to provide translation knowledge appropriate for respective users, and effectively updating the database mounted in the terminal.
    Type: Application
    Filed: October 2, 2014
    Publication date: May 7, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong-Hun SHIN, Chang-Hyun KIM, Oh-Woog KWON, Ki-Young LEE, Young-Ae SEO, Sung-Kwon CHOI, Yun JIN, Eun-Jin PARK, Jin-Xia HUANG, Seung-Hoon NA, Yoon-Hyung ROH, Sang-Keun JUNG, Young-Kil KIM, Sang-Kyu PARK
  • Publication number: 20140297263
    Abstract: A translation verification method using an animation may include the processes of analyzing an originally input sentence in a first language using a translation engine so that the sentence in the first language is converted into a second language, generating an animation capable of representing the meaning of the sentence in the first language based on information on the results of the analysis of that sentence, and providing the original sentence and the generated animation to the user so that the user can check for errors in the translation.
    Type: Application
    Filed: June 20, 2013
    Publication date: October 2, 2014
    Inventors: Chang Hyun KIM, Young Kil KIM, Oh Woog KWON, Seung-Hoon NA, Yoon-Hyung ROH, Young-Ae SEO, SEONG IL YANG, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jinxia HUANG, Sang Kyu PARK
  • Publication number: 20140297257
    Abstract: Disclosed herein is a motion sensor-based portable automatic interpretation apparatus and control method thereof, which can precisely detect the start time and the end time of utterance of a user in a portable automatic interpretation system, thus improving the quality of the automatic interpretation system. The motion sensor-based portable automatic interpretation apparatus includes a motion sensing unit for sensing a motion of the portable automatic interpretation apparatus. An utterance start time detection unit detects an utterance start time based on an output signal of the motion sensing unit. An utterance end time detection unit detects an utterance end time based on an output signal of the motion sensing unit after the utterance start time has been detected.
    Type: Application
    Filed: October 29, 2013
    Publication date: October 2, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong-Hun SHIN, Young-Kil KIM, Chang-Hyun KIM, Young-Ae SEO, Seong-Il YANG, Jin-Xia HUANG, Seung-Hoon NA, Oh-Woog KWON, Ki-Young LEE, Yoon-Hyung ROH, Sung-Kwon CHOI, Sang-Keun JUNG, Yun JIN, Eun-Jin PARK, Sang-Kyu PARK
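The start/end detection described above can be illustrated with a simple thresholding sketch over a motion-magnitude stream. The threshold, the hold count, and the lift/lower interpretation are invented heuristics, not the patent's detector.

```python
def detect_utterance(motion, raise_thresh=0.5, hold=3):
    """Return (start, end) sample indices from a motion-magnitude stream.

    Heuristic: utterance starts when motion exceeds raise_thresh (device
    lifted toward the mouth) and ends after `hold` consecutive
    below-threshold samples (device lowered and held still).
    """
    start = end = None
    calm = 0
    for i, m in enumerate(motion):
        if start is None:
            if m > raise_thresh:
                start = i
        else:
            calm = calm + 1 if m <= raise_thresh else 0
            if calm >= hold:
                end = i - hold + 1  # first sample of the calm stretch
                break
    return start, end
```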
  • Publication number: 20140172411
    Abstract: Provided are an apparatus and a method for verifying a context that verify an ambiguous expression of an input text through a user's intention and utilize the verified expression as a context in interpretation and translation. The apparatus includes: an ambiguous expression verifying unit verifying an ambiguous expression in a first input text or a back translation text for the first input text in accordance with user's input; a context generating unit generating the verified ambiguous expression as a context; and a context controlling unit controlling the generated context to be applied to translate or interpret the first input text or second input texts input after the first input text.
    Type: Application
    Filed: September 27, 2013
    Publication date: June 19, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chang Hyun KIM, Oh Woog Kwon, Seung Hoon Na, Yoon Hyung Roh, Young Ae Seo, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Yun Jin, Eun Jin Park, Jong Hun Shin, Jinxia Huang, Sang Kyu Park
  • Patent number: 8635060
    Abstract: A foreign language writing service method includes: recognizing, when a mixed text of foreign language portions and mother tongue portions is entered by a learner, the mother tongue portions from the mixed text; translating the mother tongue portions; combining a mother tongue translation result with the foreign language portions of the mixed text to generate a combined text; and providing the learner with the combined text of the mother tongue translation result and the foreign language portions of the mixed text.
    Type: Grant
    Filed: June 29, 2010
    Date of Patent: January 21, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ae Seo, Chang Hyun Kim, Seong Il Yang, Jinxia Huang, Sung Kwon Choi, Ki Young Lee, Yoon Hyung Roh, Oh Woog Kwon, Yun Jin, Ying Shun Wu, Eun Jin Park, Young Kil Kim, Sang Kyu Park
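The recognize-translate-recombine flow of this abstract can be sketched by detecting mother-tongue spans by script. Script-range matching (Korean Hangul here) is an invented stand-in for the patent's mother-tongue recognition step, and the translation function is an assumed input.

```python
import re


def assist_writing(mixed_text, translate):
    """Replace mother-tongue (here: Hangul-script) spans in a mixed text
    with their translations, keeping the foreign-language portions as-is."""
    hangul_span = r"[\uac00-\ud7a3]+(?:\s+[\uac00-\ud7a3]+)*"
    return re.sub(hangul_span, lambda m: translate(m.group(0)), mixed_text)
```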
  • Patent number: 8606559
    Abstract: A method for automatically detecting errors in machine translation using a parallel corpus includes analyzing morphemes of a target language sentence in the parallel corpus and a machine-translated target language sentence, corresponding to a source language sentence, to classify the morphemes into words; aligning by words and decoding, respectively, a group of the source language sentence and the machine-translated target language sentence, and a group of the source language sentence and the target language sentence in the parallel corpus; classifying by types errors in the machine-translated target language sentence by making a comparison, word by word, between the decoded target language sentence in the parallel corpus and the decoded machine-translated target language sentence; and computing error information in the machine-translated target language sentence by examining a frequency of occurrence of the classified error types.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: December 10, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Yun Jin, Oh Woog Kwon, Ying Shun Wu, Changhao Yin, Sung Kwon Choi, Chang Hyun Kim, Seong Il Yang, Ki Young Lee, Yoon Hyung Roh, Young Ae Seo, Eun Jin Park, Young Kil Kim, Sang Kyu Park
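The word-by-word comparison and error-frequency computation described above can be sketched minimally. The coarse error taxonomy (substitution/omission/insertion) and the positional pairing are invented simplifications; the patent performs real word alignment before comparing.

```python
from collections import Counter


def classify_errors(reference_words, mt_words):
    """Compare a reference translation and MT output word by word,
    bucketing differences into coarse error types."""
    errors = Counter()
    for ref, hyp in zip(reference_words, mt_words):
        if ref != hyp:
            errors["substitution"] += 1
    diff = len(reference_words) - len(mt_words)
    if diff > 0:
        errors["omission"] += diff    # MT output is missing words
    elif diff < 0:
        errors["insertion"] += -diff  # MT output has extra words
    return errors
```

Aggregating these counters over a corpus yields the per-type error frequencies the abstract mentions.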
  • Publication number: 20130297284
    Abstract: Disclosed herein are an apparatus and method for generating polite expressions for automatic translation. The apparatus includes a relationship recognition unit, a polite level selection unit, and a translation unit. The relationship recognition unit extracts relationship information from a conversation between first and second language users and personal information of the first and second language users and then recognizes the social relationship between the language users. The polite level selection unit selects a polite level for the conversation between the first and second language users based on the extracted relationship information. The translation unit generates polite expressions corresponding to the selected polite level, and translates the conversation between the first and second language users into a target language based on the generated polite expressions.
    Type: Application
    Filed: August 3, 2012
    Publication date: November 7, 2013
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung-Kwon Choi, Oh-Woog Kwon, Ki-Young Lee, Yoon-Hyung Roh, Young-Kil Kim, Sang-Kyu Park
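The level-selection step above can be pictured as a small mapping from relationship information to a politeness register. The relationship categories and level names are invented for illustration; the patent derives the relationship from the conversation and the users' personal information.

```python
def select_polite_level(relationship):
    """Map extracted relationship information to a politeness level."""
    levels = {"elder": "honorific", "colleague": "polite", "friend": "casual"}
    return levels.get(relationship, "polite")  # default to safe politeness
```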
  • Patent number: 8504350
    Abstract: A user-interactive automatic translation device for a mobile device includes: a camera image controller for converting an image captured by a camera into a digital image; and an image character recognition controller for user-interactively selecting a character string region to be translated from the digital image, performing a character recognition function on the selected character string region based on an optical character reader (OCR) function and character recognition information to generate a text string, and user-interactively correcting errors included in the text string.
    Type: Grant
    Filed: December 18, 2009
    Date of Patent: August 6, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ki Young Lee, Oh Woog Kwon, Sung Kwon Choi, Yoon-Hyung Roh, Chang Hyun Kim, Young-Ae Seo, Seong Il Yang, Yun Jin, Jinxia Huang, Yingshun Wu, Eunjin Park, Young Kil Kim, Sang Kyu Park
  • Patent number: 8494835
    Abstract: A post-editing apparatus for correcting translation errors includes: a translation error search unit for estimating translation errors in a translation result obtained using a translation system, using an error-specific language model suited to the type of error to be estimated, and for determining an order of correction of the translation errors; and a corrected word candidate generator for sequentially generating error-corrected word candidates for the respective estimated translation errors on the basis of analysis of the original text. The post-editing apparatus further includes a corrected word selector for selecting a final corrected word from among the error-corrected word candidates by using the error-specific language model suited to the type of error to be corrected, and incorporating the final corrected word into the translation result, thus correcting the translation errors.
    Type: Grant
    Filed: November 19, 2009
    Date of Patent: July 23, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young Ae Seo, Chang Hyun Kim, Seong Il Yang, Changhao Yin, Yun Jin, Jinxia Huang, Sung Kwon Choi, Ki Young Lee, Oh Woog Kwon, Yoon Hyung Roh, Eun Jin Park, Ying Shun Wu, Young Kil Kim, Sang Kyu Park
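The estimate-generate-select loop of this abstract can be sketched as below. Both `error_model` and `candidates` are assumed inputs standing in for the patent's error-specific language model and candidate generator.

```python
def post_edit(tokens, error_model, candidates):
    """For each token the error model flags, splice in the highest-scored
    replacement candidate; leave other tokens unchanged.

    error_model(token) -> estimated probability the token is an error;
    candidates(token)  -> list of (replacement, score) pairs.
    """
    out = []
    for tok in tokens:
        if error_model(tok) > 0.5:  # flagged as a likely translation error
            best = max(candidates(tok), key=lambda c: c[1])[0]
            out.append(best)
        else:
            out.append(tok)
    return out
```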
  • Patent number: 8457947
    Abstract: A hybrid translation apparatus includes a source language input unit for generalizing an input source language sentence for each node; a statistics-based translation knowledge database (DB) for storing learning data generalized for each node; a first translation result generating unit for transforming the source language sentence generalized for each node into a node expression using the statistics-based translation knowledge to generate a first translation result; and a second translation result generating unit for repeatedly generating a target word for each node of the first translation result using pattern-based knowledge to generate a second translation result as target words for the respective nodes.
    Type: Grant
    Filed: January 4, 2010
    Date of Patent: June 4, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Il Yang, Young Kil Kim, Chang Hyun Kim, Oh Woog Kwon, Yun Jin, Eun Jin Park, Young Ae Seo, Sung Kwon Choi, Jinxia Huang, Yoon Hyung Roh, Ying Shun Wu, Ki Young Lee, Sang Kyu Park
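The two-pass hybrid structure described above can be reduced to a minimal sketch, with both passes supplied as assumed functions standing in for the statistics-based and pattern-based components.

```python
def hybrid_translate(nodes, statistical_step, pattern_step):
    """Two-pass hybrid translation sketch: a statistics-based pass rewrites
    each generalized node, then a pattern-based pass produces the final
    target word for each node of the first result."""
    first_result = [statistical_step(n) for n in nodes]
    return [pattern_step(n) for n in first_result]
```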