Patents by Inventor Young-Ae Seo
Young-Ae Seo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11436418
Abstract: Provided are a system and method for automatically translating characters in an image. In the system for automatically translating characters in an image, a processor determines, after a translation request is input, whether a signal input through an input and output interface is a character region selection signal or an autofocus request signal by analyzing the input signal, acquires a translation target region on the basis of a determination result, recognizes characters in the acquired translation target region, and then translates the recognized characters.
Type: Grant
Filed: November 13, 2019
Date of Patent: September 6, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventor: Young Ae Seo
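The abstract above describes dispatching on the type of input signal (manual region selection vs. autofocus request) to obtain the region whose characters are recognized and translated. A minimal sketch of that dispatch step, not the patented implementation; the signal format, field names, and the fixed autofocus window size are all illustrative assumptions:

```python
# Hypothetical sketch of the signal-dispatch step: classify the interface
# signal as a manual region selection or an autofocus request, then derive
# the translation target region. Data shapes here are assumptions.

def acquire_target_region(signal):
    """Return an (x, y, w, h) region to run character recognition on."""
    if signal.get("type") == "region_select":
        # The user selected a box directly: normalize its corner order.
        x1, y1, x2, y2 = signal["box"]
        return (min(x1, x2), min(y1, y2), abs(x2 - x1), abs(y2 - y1))
    elif signal.get("type") == "autofocus":
        # Autofocus request: assume the camera reports a focus point and
        # we expand it into a fixed-size region around that point.
        fx, fy = signal["focus_point"]
        half = 50  # assumed half-width of the autofocus window, in pixels
        return (fx - half, fy - half, 2 * half, 2 * half)
    raise ValueError(f"unrecognized signal type: {signal.get('type')!r}")
```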
-
Publication number: 20220129643
Abstract: Provided is a method of training a real-time simultaneous interpretation model based on external alignment information, the method including: receiving a bilingual corpus having a source language sentence as an input text and a target language sentence as an output text; generating alignment information corresponding to words or tokens (hereinafter, words) in the bilingual corpus; determining a second action following a first action in the simultaneous interpretation model on the basis of the alignment information to generate action sequence information; and training the simultaneous interpretation model on the basis of the bilingual corpus and the action sequence information, wherein the first action and the second action represent a read action of reading the word in the input text or a write action of outputting an intermediate interpretation result corresponding to the read actions performed up to the present.
Type: Application
Filed: September 24, 2021
Publication date: April 28, 2022
Inventor: Young Ae SEO
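The action-sequence step above can be sketched in a few lines: a WRITE for a target word is legal only once every source word it aligns to has been READ. This is a hedged illustration of the general alignment-to-actions idea, not the patented method; the alignment format and function name are assumptions:

```python
# Illustrative sketch: given word alignments from source to target, emit the
# READ/WRITE action sequence a simultaneous model would follow, WRITEing a
# target word only after READing every source word it aligns to.

def action_sequence(n_src, alignments):
    """alignments[j] = set of source indices target word j aligns to."""
    actions, read = [], 0
    for j in range(len(alignments)):
        needed = max(alignments[j], default=-1) + 1  # required source prefix
        while read < needed:
            actions.append("READ")
            read += 1
        actions.append("WRITE")
    # Read any trailing source words so the whole input is consumed.
    actions.extend(["READ"] * (n_src - read))
    return actions
```

For example, if the second target word aligns to the third source word, two extra READs are emitted before its WRITE.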
-
Patent number: 11301625
Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
Type: Grant
Filed: November 13, 2019
Date of Patent: April 12, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Yoon Hyung Roh, Jong Hun Shin, Young Ae Seo
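The buffering flow this abstract describes (accumulate recognized text, detect a translation-unit boundary, then translate the completed unit) can be sketched as follows. The boundary detector and translator below are trivial placeholders standing in for the learned models; the class and its interface are assumptions for illustration only:

```python
# Illustrative sketch of the translation-unit buffering flow: recognized
# words accumulate in a buffer; a boundary detector (here a trivial rule,
# standing in for the learned unit-recognition model) decides when a unit
# is complete; completed units are handed to a translate() callback.

class TranslationUnitBuffer:
    def __init__(self, is_boundary, translate):
        self.buffer = []
        self.is_boundary = is_boundary  # stand-in for the learned unit model
        self.translate = translate      # stand-in for the translation model
        self.outputs = []

    def push(self, word):
        self.buffer.append(word)
        if self.is_boundary(self.buffer):
            self.outputs.append(self.translate(" ".join(self.buffer)))
            self.buffer.clear()

# Toy usage: treat punctuation-terminated words as unit boundaries and
# uppercasing as a placeholder "translation".
buf = TranslationUnitBuffer(
    is_boundary=lambda words: words[-1][-1] in ".,",
    translate=str.upper,
)
for w in "hello there, how are you.".split():
    buf.push(w)
```

The point of the buffer is latency: each unit is translated as soon as its boundary is recognized, without waiting for the full utterance.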
-
Publication number: 20200184021
Abstract: Provided are a system and method for automatically translating characters in an image. In the system for automatically translating characters in an image, a processor determines, after a translation request is input, whether a signal input through an input and output interface is a character region selection signal or an autofocus request signal by analyzing the input signal, acquires a translation target region on the basis of a determination result, recognizes characters in the acquired translation target region, and then translates the recognized characters.
Type: Application
Filed: November 13, 2019
Publication date: June 11, 2020
Inventor: Young Ae SEO
-
Publication number: 20200159822
Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
Type: Application
Filed: November 13, 2019
Publication date: May 21, 2020
Inventors: Yoon Hyung ROH, Jong Hun SHIN, Young Ae SEO
-
Patent number: 9618352
Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialog management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialog management control unit generates a response to the natural voice command based on the related information, the user intention and a dialog history, and controls the navigator in accordance with the generated response.
Type: Grant
Filed: March 27, 2015
Date of Patent: April 11, 2017
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Oh-Woog Kwon, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
-
Patent number: 9230544
Abstract: The present invention relates to a spoken dialog system and method based on dual dialog management using a hierarchical dialog task library. The system increases reuse of dialog knowledge by constructing and packaging the dialog knowledge in task units with a hierarchical structure, and constructs and processes the dialog knowledge using a dialog plan scheme describing the relationships between tasks, classifying the dialog knowledge by task unit to make the design of a dialog service convenient. This differs from existing spoken dialog systems, in which reusing dialog knowledge is difficult because constructing it requires a large amount of cost and time.
Type: Grant
Filed: March 25, 2013
Date of Patent: January 5, 2016
Assignee: Electronics and Telecommunications Research Institute
Inventors: Oh Woog Kwon, Yoon Hyung Roh, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Eun Jin Park, Jinxia Huang, Young Kil Kim, Chang Hyun Kim, Seung Hoon Na, Young Ae Seo, Yun Jin, Jong Hun Shin, Sang Kyu Park
-
Publication number: 20150276424
Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialogue management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialogue management control unit generates a response to the natural voice command based on the related information, the user intention and a dialogue history, and controls the navigator in accordance with the generated response.
Type: Application
Filed: March 27, 2015
Publication date: October 1, 2015
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Oh-Woog KWON, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
-
Publication number: 20150227510
Abstract: The present invention relates to a translation function and discloses an automatic translation operating device, a method thereof, and a system including the same. The device includes: at least one of a voice input device which collects voice signals input by a plurality of speakers and a communication module which receives the voice signals; and a control unit which classifies the voice signals by speaker, clusters the classified speaker-based voice signals in accordance with a predefined condition, and then performs voice recognition and translation.
Type: Application
Filed: January 28, 2015
Publication date: August 13, 2015
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jong Hun SHIN, Ki Young LEE, Young Ae SEO, Jin Xia HUANG, Sung Kwon CHOI, Yun JIN, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Oh Woog KWON, Sang Keun JUNG, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
-
Publication number: 20150199340
Abstract: The present invention relates to a system for translating a language based on a user's reaction. The system includes: an interface unit which inputs uttered sentences of the first user and the second user and outputs the translated result; a translating unit which translates the uttered sentences of the first user and the second user; a conversation intention recognizing unit which determines the conversation intention of the second user from the second user's reply to the translation of the first user's utterance; a translation result evaluating unit which evaluates the translation of the uttered sentence of the first user based on the conversation intention determined by the conversation intention recognizing unit; and a translation result evaluation storing unit which stores the translation result and an evaluation of the translation result.
Type: Application
Filed: January 12, 2015
Publication date: July 16, 2015
Applicant: Electronics and Telecommunications Research Institute
Inventors: Oh Woog KWON, Young Kil KIM, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jin Xia HUANG, Kang Il KIM, Young Ae SEO, Sang Kyu PARK
-
Publication number: 20150193410
Abstract: The present invention relates to a system and method for editing text on a portable terminal, and more particularly to a technology which edits text input into a portable terminal based on a touch interface. An exemplary embodiment of the present invention provides a text editing system of a portable terminal, including: an interface unit which inputs or outputs text or voice; a text generating unit which converts the input text or voice into a text; a control unit which provides a keyboard based editing screen or a character recognition based editing screen for the generated text through the interface unit; and a text editing unit which performs an editing command input by a user through the keyboard based editing screen or the character recognition based editing screen under the control of the control unit.
Type: Application
Filed: September 12, 2014
Publication date: July 9, 2015
Inventors: Yun JIN, Chang Hyun KIM, Young Ae SEO, Jin Xia HUANG, Oh Woog KWON, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Jong Hun SHIN, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
-
Patent number: 9037449
Abstract: A method for establishing paraphrasing data for a machine translation system includes selecting a paraphrasing target sentence through application of an object language model to a translated sentence that is obtained by machine-translating a source language sentence, extracting paraphrasing candidates that can be paraphrased with the paraphrasing target sentence from a source language corpus DB, performing machine translation with respect to the paraphrasing candidates, selecting a final paraphrasing candidate by applying the object language model to the result of the machine translation with respect to the paraphrasing candidates, and confirming the paraphrasing target sentence and the final paraphrasing candidate as paraphrasing lexical patterns using a bilingual corpus and storing the paraphrasing lexical patterns in a paraphrasing DB. According to the present invention, consistent paraphrasing data can be established since the paraphrasing data is established automatically.
Type: Grant
Filed: October 31, 2012
Date of Patent: May 19, 2015
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Chang Hyun Kim, Young-Ae Seo, Seong Il Yang, Jinxia Huang, Jong Hun Shin, Young Kil Kim, Sang Kyu Park
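The core selection loop above (machine-translate each paraphrasing candidate, score the translations with the object-language model, keep the best-scoring candidate) can be sketched briefly. The translation and scoring functions below are toy stand-ins, not the patented components:

```python
# Toy sketch of candidate selection by target-side language-model rescoring:
# translate each paraphrasing candidate, score the translation with a
# target-language model, and keep the candidate whose translation scores best.

def select_paraphrase(candidates, translate, lm_score):
    """Return the candidate whose machine translation the target LM prefers."""
    return max(candidates, key=lambda cand: lm_score(translate(cand)))

# Placeholder components for illustration: "translation" is the identity, and
# the "language model" prefers shorter outputs (a crude fluency proxy).
best = select_paraphrase(
    ["in spite of the fact that it rained", "although it rained"],
    translate=lambda s: s,
    lm_score=lambda t: -len(t.split()),
)
```

In the patented pipeline the same object language model is applied twice: once to pick which sentences need paraphrasing, and once to pick the winning candidate.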
-
Publication number: 20150127361
Abstract: Disclosed are an automatic translation apparatus and method capable of optimizing limited translation knowledge in a database mounted in a portable mobile communication terminal, obtaining translation knowledge from external servers in order to provide translation knowledge appropriate for respective users, and effectively updating the database mounted in the terminal.
Type: Application
Filed: October 2, 2014
Publication date: May 7, 2015
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jong-Hun SHIN, Chang-Hyun KIM, Oh-Woog KWON, Ki-Young LEE, Young-Ae SEO, Sung-Kwon CHOI, Yun JIN, Eun-Jin PARK, Jin-Xia HUANG, Seung-Hoon NA, Yoon-Hyung ROH, Sang-Keun JUNG, Young-Kil KIM, Sang-Kyu PARK
-
Publication number: 20140297263
Abstract: A translation verification method using an animation may include the processes of analyzing an originally input sentence in a first language using a translation engine so that the sentence in the first language is converted into a second language, generating an animation capable of representing the meaning of the sentence in the first language based on information on the results of the analysis of the sentence in the first language, and providing the original and the generated animation to a user who uses the original in order for the user to check for errors in the translation.
Type: Application
Filed: June 20, 2013
Publication date: October 2, 2014
Inventors: Chang Hyun KIM, Young Kil KIM, Oh Woog KWON, Seung-Hoon NA, Yoon-Hyung ROH, Young-Ae SEO, Seong Il YANG, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jinxia HUANG, Sang Kyu PARK
-
Publication number: 20140297257
Abstract: Disclosed herein is a motion sensor-based portable automatic interpretation apparatus and control method thereof, which can precisely detect the start time and the end time of utterance of a user in a portable automatic interpretation system, thus improving the quality of the automatic interpretation system. The motion sensor-based portable automatic interpretation apparatus includes a motion sensing unit for sensing a motion of the portable automatic interpretation apparatus. An utterance start time detection unit detects an utterance start time based on an output signal of the motion sensing unit. An utterance end time detection unit detects an utterance end time based on an output signal of the motion sensing unit after the utterance start time has been detected.
Type: Application
Filed: October 29, 2013
Publication date: October 2, 2014
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jong-Hun SHIN, Young-Kil KIM, Chang-Hyun KIM, Young-Ae SEO, Seong-Il YANG, Jin-Xia HUANG, Seung-Hoon NA, Oh-Woog KWON, Ki-Young LEE, Yoon-Hyung ROH, Sung-Kwon CHOI, Sang-Keun JUNG, Yun JIN, Eun-Jin PARK, Sang-Kyu PARK
-
Publication number: 20140172411
Abstract: Provided are an apparatus and a method for verifying a context that verify an ambiguous expression of an input text through a user's intention and utilize the verified expression as a context in interpretation and translation. The apparatus includes: an ambiguous expression verifying unit verifying an ambiguous expression in a first input text or a back translation text for the first input text in accordance with user's input; a context generating unit generating the verified ambiguous expression as a context; and a context controlling unit controlling the generated context to be applied to translate or interpret the first input text or second input texts input after the first input text.
Type: Application
Filed: September 27, 2013
Publication date: June 19, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Chang Hyun KIM, Oh Woog Kwon, Seung Hoon Na, Yoon Hyung Roh, Young Ae Seo, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Yun Jin, Eun Jin Park, Jong Hun Shin, Jinxia Huang, Sang Kyu Park
-
Patent number: 8635060
Abstract: A foreign language writing service method includes: recognizing, when a mixed text of foreign language portions and mother tongue portions is entered by a learner, the mother tongue portions from the mixed text; translating the mother tongue portions; combining a mother tongue translation result with the foreign language portions of the mixed text to generate a combined text; and providing the learner with the combined text of the mother tongue translation result and the foreign language portions of the mixed text.
Type: Grant
Filed: June 29, 2010
Date of Patent: January 21, 2014
Assignee: Electronics and Telecommunications Research Institute
Inventors: Young Ae Seo, Chang Hyun Kim, Seong Il Yang, Jinxia Huang, Sung Kwon Choi, Ki Young Lee, Yoon Hyung Roh, Oh Woog Kwon, Yun Jin, Ying Shun Wu, Eun Jin Park, Young Kil Kim, Sang Kyu Park
-
Publication number: 20130346060
Abstract: Disclosed herein are a translation interfacing apparatus and method using vision tracking. The translation interfacing apparatus includes a vision tracking unit, a comparison unit, a sentence detection unit, a sentence translation unit, and a sentence output unit. The vision tracking unit tracks a user's eyes based on one or more images input via the camera of a portable terminal, and extracts time information about a period for which the user's eyes have been fixed and location information about a location on which the user's eyes are focused. The comparison unit compares the time information with a preset eye fixation period. The sentence detection unit detects a sentence corresponding to the location information if the time information is equal to or longer than the eye fixation period. The sentence translation unit translates the detected sentence.
Type: Application
Filed: June 6, 2013
Publication date: December 26, 2013
Inventors: Jong-Hun SHIN, Young-Ae Seo, Seong-Il Yang, Jin-Xia Huang, Chang-Hyun Kim, Young-Kil Kim
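The fixation check described above (compare how long the gaze has stayed in one place against a preset fixation period, then detect the sentence at that location) can be sketched as follows. The sample format, radius, and thresholds are assumptions for illustration, not values from the patent application:

```python
# Minimal sketch of the eye-fixation comparison step: if gaze samples stay
# within a small radius for at least a threshold duration, report a fixation
# at that location so the sentence under it can be detected and translated.

def detect_fixation(samples, min_duration, radius=30.0):
    """samples: list of (t, x, y) gaze points; times and units are assumed.

    Returns the (x, y) anchor of the first fixation lasting min_duration,
    or None if the gaze never settles long enough.
    """
    start = 0
    for i in range(1, len(samples)):
        t0, x0, y0 = samples[start]
        t, x, y = samples[i]
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > radius:
            start = i  # gaze moved away: restart the fixation window
        elif t - t0 >= min_duration:
            return (x0, y0)
    return None
```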
-
Patent number: 8504350
Abstract: A user-interactive automatic translation device for a mobile device, includes: a camera image controller for converting an image captured by a camera into a digital image; and an image character recognition controller for user-interactively selecting a character string region to be translated from the digital image, performing a character recognition function on the selected character string region based on an optical character reader (OCR) function and character recognition information to generate a text string, and user-interactively correcting errors included in the text string.
Type: Grant
Filed: December 18, 2009
Date of Patent: August 6, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ki Young Lee, Oh Woog Kwon, Sung Kwon Choi, Yoon-Hyung Roh, Chang Hyun Kim, Young-Ae Seo, Seong Il Yang, Yun Jin, Jinxia Huang, Yingshun Wu, Eunjin Park, Young Kil Kim, Sang Kyu Park
-
Patent number: 8494835
Abstract: A post-editing apparatus for correcting translation errors, includes: a translation error search unit for estimating translation errors using an error-specific language model suitable for the type of error desired to be estimated from a translation result obtained using a translation system, and determining an order of correction of the translation errors; and a corrected word candidate generator for sequentially generating error-corrected word candidates for respective estimated translation errors on the basis of analysis of the original text input to the translation system. The post-editing apparatus further includes a corrected word selector for selecting a final corrected word from among the error-corrected word candidates by using the error-specific language model suitable for the type of error desired to be corrected, and incorporating the final corrected word in the translation result, thus correcting the translation errors.
Type: Grant
Filed: November 19, 2009
Date of Patent: July 23, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Young Ae Seo, Chang Hyun Kim, Seong Il Yang, Changhao Yin, Yun Jin, Jinxia Huang, Sung Kwon Choi, Ki Young Lee, Oh Woog Kwon, Yoon Hyung Roh, Eun Jin Park, Ying Shun Wu, Young Kil Kim, Sang Kyu Park
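The generate-and-rescore loop this abstract describes (flag suspected errors, generate corrected-word candidates, and let a language model pick the winner) can be sketched as below. The scoring model here is a toy bigram table, purely illustrative; the function and its arguments are assumptions, not the patented components:

```python
# Hedged sketch of the correct-and-rescore loop: for each word flagged as a
# likely error, generate replacement candidates and keep the one the scoring
# model prefers, falling back to the original word when nothing beats it.

def post_edit(words, suspects, candidates_for, score):
    """suspects: indices of suspected errors; score(list_of_words) -> float."""
    words = list(words)
    for i in suspects:
        best = words[i]
        best_score = score(words)
        for cand in candidates_for(words[i]):
            trial = words[:i] + [cand] + words[i + 1:]
            if score(trial) > best_score:
                best, best_score = cand, score(trial)
        words[i] = best
    return words

# Toy usage: the "language model" just counts known bigrams, standing in for
# the error-specific language model named in the abstract.
KNOWN = {("a", "cat"), ("cat", "sat")}
lm = lambda ws: sum((a, b) in KNOWN for a, b in zip(ws, ws[1:]))
fixed = post_edit(["a", "cot", "sat"], [1], lambda w: ["cat", "cut"], lm)
```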