Patents by Inventor JONG HUN SHIN

JONG HUN SHIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240138077
    Abstract: A circuit board according to an embodiment includes a first insulating layer; a second insulating layer disposed on the first insulating layer and including a cavity; a pad disposed on the first insulating layer and having a top surface exposed through the cavity; wherein the cavity of the second insulating layer includes: a bottom surface positioned higher than the top surface of the first insulating layer; and an inner wall extending from the bottom surface, wherein the inner wall includes: a first inner wall extending from the bottom surface and having a first inclination angle; and a second inner wall extending from the first inner wall and having a second inclination angle different from the first inclination angle.
    Type: Application
    Filed: April 25, 2021
    Publication date: April 25, 2024
    Inventors: Jong Bae SHIN, Soo Min LEE, Jae Hun JEONG
  • Publication number: 20240120243
    Abstract: A circuit board according to an embodiment includes a first insulating layer; a second insulating layer disposed on the first insulating layer and including a cavity; and a plurality of pads disposed on the first insulating layer and having top surfaces exposed through the cavity; wherein the cavity of the second insulating layer includes: a bottom surface positioned higher than a top surface of the first insulating layer; and an inner wall extending from the bottom surface, wherein the inner wall is perpendicular to top or bottom surface of the second insulating layer, wherein the bottom surface of the cavity includes: a first bottom surface positioned lower than a top surface of the pad and positioned outside an arrangement region of the plurality of pads; and a second bottom surface positioned lower than the top surface of the pad and positioned inside the arrangement region of the plurality of pads, and wherein a height of the first bottom surface is different from a height of the second bottom surface.
    Type: Application
    Filed: April 26, 2021
    Publication date: April 11, 2024
    Inventors: Jong Bae SHIN, Moo Seong KIM, Soo Min LEE, Jae Hun JEONG
  • Publication number: 20240098275
    Abstract: A method for decoding an image based on an intra prediction, comprising: obtaining a first prediction pixel of a first region in a current block by using a neighboring pixel adjacent to the current block; obtaining a second prediction pixel of a second region in the current block by using the first prediction pixel of the first region; and decoding the current block based on the first and the second prediction pixels.
    Type: Application
    Filed: November 15, 2023
    Publication date: March 21, 2024
    Inventors: Je Chang JEONG, Ki Baek KIM, Won Jin LEE, Hye Jin SHIN, Jong Sang YOO, Jang Hyeok YUN, Kyung Jun LEE, Jae Hun KIM, Sang Gu LEE
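The abstract above describes two-stage intra prediction: a first region of the current block is predicted from reconstructed neighboring pixels, and a second region is then predicted from those first-stage prediction pixels. A minimal sketch of that idea, with hypothetical simplifications (the first region is taken to be the even rows, stage one uses DC prediction from the top neighbors, and stage two averages vertically); the actual patent defines the regions and predictors in its claims:

```python
def predict_block(neighbors, block_h, block_w):
    """Two-stage intra prediction sketch (hypothetical simplification)."""
    # Stage 1: DC prediction for the first region from the top neighbors.
    dc = sum(neighbors) // len(neighbors)
    block = [[None] * block_w for _ in range(block_h)]
    for r in range(0, block_h, 2):          # first region: even rows
        for c in range(block_w):
            block[r][c] = dc
    # Stage 2: each odd row (second region) is interpolated from the
    # first-region rows directly above and below (copied at the boundary).
    for r in range(1, block_h, 2):
        for c in range(block_w):
            above = block[r - 1][c]
            below = block[r + 1][c] if r + 1 < block_h else above
            block[r][c] = (above + below) // 2
    return block
```

With uniform neighbors the whole block collapses to the DC value, which makes the two-stage structure easy to verify by hand.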
  • Publication number: 20220222448
    Abstract: Provided is a method of providing an interpretation result using visual information, and the method includes: acquiring a spatial domain image including line-of-sight information of a user and gaze position information in the spatial domain image; segmenting the acquired spatial domain image into a plurality of images; detecting text areas including text for each of the segmented images; generating text blocks, each of which is a text recognition result for each of the detected text areas, and determining the text block corresponding to the gaze position information; converting a first language included in the determined text block into a second language that is a target language; and providing the user with a conversion result of the second language.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 14, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jinxia HUANG, Jong Hun SHIN
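The method above ends by selecting the text block corresponding to the gaze position and translating it. A sketch of that final selection-and-translation step, assuming hypothetical helper names and a text-block representation as dicts with a `bbox` rectangle and recognized `text` (the patent's segmentation and OCR stages are stubbed out):

```python
def block_at_gaze(text_blocks, gaze_xy):
    """Return the recognized text block whose bounding box contains the
    gaze position, or None (hypothetical helper)."""
    gx, gy = gaze_xy
    for block in text_blocks:
        x, y, w, h = block["bbox"]
        if x <= gx < x + w and y <= gy < y + h:
            return block
    return None

def interpret_at_gaze(text_blocks, gaze_xy, translate):
    """Pipeline tail: pick the gazed-at block and translate its text.
    `translate` stands in for the first-to-second-language converter."""
    block = block_at_gaze(text_blocks, gaze_xy)
    return translate(block["text"]) if block else None
```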
  • Patent number: 11301625
    Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: April 12, 2022
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yoon Hyung Roh, Jong Hun Shin, Young Ae Seo
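The processing loop described above can be sketched as follows: recognized words accumulate in a speech-recognition buffer until the translation-unit recognizer signals that a unit is complete, at which point the buffered unit is translated and the buffer flushed. Here `is_unit_boundary` and `translate` are stand-ins for the patent's learned translation-unit-recognition model and translation model:

```python
def simultaneous_interpret(tokens, is_unit_boundary, translate):
    """Buffer recognized tokens; translate each completed translation unit."""
    buffer, outputs = [], []
    for token in tokens:
        buffer.append(token)
        if is_unit_boundary(buffer):
            outputs.append(translate(" ".join(buffer)))
            buffer.clear()
    if buffer:                      # flush any trailing partial unit
        outputs.append(translate(" ".join(buffer)))
    return outputs
```

Translating at unit boundaries rather than full-sentence boundaries is what gives the system its simultaneous (low-latency) character.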
  • Publication number: 20200159822
    Abstract: A simultaneous interpretation system using a translation unit bilingual corpus includes a microphone configured to receive an utterance of a user, a memory in which a program for recognizing the utterance of the user and generating a translation result is stored, and a processor configured to execute the program stored in the memory, wherein the processor executes the program so as to convert the received utterance of the user into text, store the text in a speech recognition buffer, perform translation unit recognition with respect to the text on the basis of a learning model for translation unit recognition, and in response to the translation unit recognition being completed, generate a translation result corresponding to the translation unit on the basis of a translation model for translation performance.
    Type: Application
    Filed: November 13, 2019
    Publication date: May 21, 2020
    Inventors: Yoon Hyung ROH, Jong Hun SHIN, Young Ae SEO
  • Patent number: 10635753
    Abstract: The present invention provides a method of generating training data to which explicit word-alignment information is added without impairing sub-word tokens, and a neural machine translation method and apparatus including the method. The method of generating training data includes the steps of: (1) separating basic word boundaries through morphological analysis or named entity recognition of a sentence of a bilingual corpus used for learning; (2) extracting explicit word-alignment information from the sentence of the bilingual corpus used for learning; (3) further dividing the word boundaries separated in step (1) into sub-word tokens; (4) generating new source language training data by using an output from the step (1) and an output from the step (3); and (5) generating new target language training data by using the explicit word-alignment information generated in the step (2) and the target language outputs from the steps (1) and (3).
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: April 28, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Jong Hun Shin
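Steps (1)–(5) above can be sketched as a single pairing function: word boundaries are preserved, each word is further split into sub-word tokens, and explicit word-alignment markers are interleaved on the target side. The marker format (`<a3>`-style tokens) and the `subword` tokenizer are hypothetical stand-ins for the patent's alignment encoding and sub-word segmentation:

```python
def make_training_pair(src_words, tgt_words, alignment, subword):
    """Build one (source, target) training example with alignment markers.
    `alignment[i]` is the source-word index aligned to target word i."""
    # Steps (1)/(3)/(4): source side keeps word boundaries, split to sub-words.
    src_tokens = [tok for w in src_words for tok in subword(w)]
    # Steps (2)/(5): target side prefixes each word's sub-word tokens with a
    # marker naming the aligned source word, without breaking the sub-words.
    tgt_tokens = []
    for i, w in enumerate(tgt_words):
        tgt_tokens.append(f"<a{alignment[i]}>")
        tgt_tokens.extend(subword(w))
    return src_tokens, tgt_tokens
```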
  • Publication number: 20190129947
    Abstract: The present invention provides a method of generating training data to which explicit word-alignment information is added without impairing sub-word tokens, and a neural machine translation method and apparatus including the method. The method of generating training data includes the steps of: (1) separating basic word boundaries through morphological analysis or named entity recognition of a sentence of a bilingual corpus used for learning; (2) extracting explicit word-alignment information from the sentence of the bilingual corpus used for learning; (3) further dividing the word boundaries separated in step (1) into sub-word tokens; (4) generating new source language training data by using an output from the step (1) and an output from the step (3); and (5) generating new target language training data by using the explicit word-alignment information generated in the step (2) and the target language outputs from the steps (1) and (3).
    Type: Application
    Filed: April 4, 2018
    Publication date: May 2, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Jong Hun SHIN
  • Publication number: 20180335215
    Abstract: Provided is a DC electric furnace using a DC electrode, the DC electric furnace including: a body that has a bottom surface on which a tapping hole is formed and has an inner space to which scraps are charged; a lower electrode that is mounted on the bottom surface of the body; and a bottom-blowing means that is provided on a portion of the bottom surface not interfered with by the lower electrode and blows gas into the inner space of the body.
    Type: Application
    Filed: December 24, 2015
    Publication date: November 22, 2018
    Inventors: Sang Chae PARK, Jong Hun SHIN, Hyun Seo PARK, Sung Mo SEO, Chang Hun KEUM
  • Patent number: 9618352
    Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialog management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialog management control unit generates a response to the natural voice command based on the related information, the user intention and a dialog history, and controls the navigator in accordance with the conversation response.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: April 11, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh-Woog Kwon, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
  • Patent number: 9230544
    Abstract: The present invention relates to a spoken dialog system and method based on dual dialog management using a hierarchical dialog task library. By constructing and packaging dialog knowledge in task units with a hierarchical structure, the system increases the reusability of that knowledge; by classifying the knowledge by task unit and processing it through a dialog plan scheme describing the relationships between tasks, it makes the design of a dialog service convenient. This differs from existing spoken dialog systems, in which dialog knowledge is difficult to reuse because its construction requires a large amount of cost and time.
    Type: Grant
    Filed: March 25, 2013
    Date of Patent: January 5, 2016
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Oh Woog Kwon, Yoon Hyung Roh, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Eun Jin Park, Jinxia Huang, Young Kil Kim, Chang Hyun Kim, Seung Hoon Na, Young Ae Seo, Yun Jin, Jong Hun Shin, Sang Kyu Park
  • Publication number: 20150276424
    Abstract: An apparatus and method for controlling a navigator are disclosed herein. The apparatus includes a natural voice command acquisition unit, an information acquisition unit, a speech language understanding unit, a related information extraction unit, and a dialogue management control unit. The natural voice command acquisition unit obtains a natural voice command from a user. The information acquisition unit obtains vehicle data including information about the operation of the navigator and information about the state of the vehicle. The speech language understanding unit converts the natural voice command into a user intention that can be understood by a computer. The related information extraction unit extracts related information that corresponds to the user intention. The dialogue management control unit generates a response to the natural voice command based on the related information, the user intention and a dialogue history, and controls the navigator in accordance with the conversation response.
    Type: Application
    Filed: March 27, 2015
    Publication date: October 1, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Oh-Woog KWON, Young-Kil Kim, Chang-Hyun Kim, Seung-Hoon Na, Yoon-Hyung Roh, Young-Ae Seo, Ki-Young Lee, Sang-Keun Jung, Sung-Kwon Choi, Yun Jin, Eun-Jin Park, Jong-Hun Shin, Jinxia Huang
  • Publication number: 20150227510
    Abstract: The present invention relates to a translation function and discloses an automatic translation operating device, a method thereof, and a system including the same. The device includes: at least one voice input device which collects voice signals uttered by a plurality of speakers, or a communication module which receives the voice signals; and a control unit which classifies the voice signals by speaker, clusters the classified speaker-based voice signals in accordance with a predefined condition, and then performs voice recognition and translation.
    Type: Application
    Filed: January 28, 2015
    Publication date: August 13, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong Hun SHIN, Ki Young LEE, Young Ae SEO, Jin Xia HUANG, Sung Kwon CHOI, Yun JIN, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Oh Woog KWON, Sang Keun JUNG, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
  • Publication number: 20150199340
    Abstract: The present invention relates to a system for translating a language based on a user's reaction. The system includes: an interface unit which inputs uttered sentences of a first user and a second user and outputs the translated result; a translating unit which translates the uttered sentences of the first user and the second user; a conversation intention recognizing unit which infers the conversation intention of the second user from the second user's reply to a translation of the first user's utterance; a translation result evaluating unit which evaluates the translation of the first user's uttered sentence based on the conversation intention determined by the conversation intention recognizing unit; and a translation result evaluation storing unit which stores the translation result and its evaluation.
    Type: Application
    Filed: January 12, 2015
    Publication date: July 16, 2015
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Oh Woog KWON, Young Kil KIM, Chang Hyun KIM, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jin Xia HUANG, Kang Il KIM, Young Ae SEO, Sang Kyu PARK
  • Publication number: 20150193410
    Abstract: The present invention relates to a system and method for editing text on a portable terminal, and more particularly to a technology which edits text input into a portable terminal by means of a touch interface. An exemplary embodiment of the present invention provides a text editing system of a portable terminal, including: an interface unit which inputs or outputs text or voice; a text generating unit which generates the input text or voice as a text; a control unit which provides a keyboard-based editing screen or a character-recognition-based editing screen for the generated text through the interface unit; and a text editing unit which performs an editing command input from a user through the keyboard-based editing screen or the character-recognition-based editing screen under the control of the control unit.
    Type: Application
    Filed: September 12, 2014
    Publication date: July 9, 2015
    Inventors: Yun JIN, Chang Hyun KIM, Young Ae SEO, Jin Xia HUANG, Oh Woog KWON, Seung Hoon NA, Yoon Hyung ROH, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Jong Hun SHIN, Eun Jin PARK, Kang Il KIM, Young Kil KIM, Sang Kyu PARK
  • Patent number: 9037449
    Abstract: A method for establishing paraphrasing data for a machine translation system includes selecting a paraphrasing target sentence through application of an object language model to a translated sentence that is obtained by machine-translating a source language sentence, extracting paraphrasing candidates that can be paraphrased with the paraphrasing target sentence from a source language corpus DB, performing machine translation with respect to the paraphrasing candidates, selecting a final paraphrasing candidate by applying the object language model to the result of the machine translation with respect to the paraphrasing candidates, and confirming the paraphrasing target sentence and the final paraphrasing candidate as paraphrasing lexical patterns using a bilingual corpus and storing the paraphrasing lexical patterns in a paraphrasing DB. According to the present invention, consistent paraphrasing data can be established because the data is generated automatically.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: May 19, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chang Hyun Kim, Young-Ae Seo, Seong Il Yang, Jinxia Huang, Jong Hun Shin, Young Kil Kim, Sang Kyu Park
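The candidate-selection step in the abstract above — machine-translate each paraphrasing candidate, score the result with the object language model, and keep the best — can be sketched like this, with `translate` and `lm_score` standing in for the MT engine and the object language model:

```python
def select_paraphrase(target_sentence, candidates, translate, lm_score):
    """Pick the paraphrasing candidate whose translation scores best
    under the object language model; pair it with the target sentence."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        score = lm_score(translate(cand))   # apply LM to the MT output
        if score > best_score:
            best, best_score = cand, score
    return (target_sentence, best)
```

In the patent the resulting pair would then be confirmed against a bilingual corpus before being stored in the paraphrasing DB.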
  • Publication number: 20150127361
    Abstract: Disclosed are an automatic translation apparatus and method capable of optimizing limited translation knowledge in a database mounted in a portable mobile communication terminal, obtaining translation knowledge from external servers in order to provide translation knowledge appropriate for respective users, and effectively updating the database mounted in the terminal.
    Type: Application
    Filed: October 2, 2014
    Publication date: May 7, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong-Hun SHIN, Chang-Hyun KIM, Oh-Woog KWON, Ki-Young LEE, Young-Ae SEO, Sung-Kwon CHOI, Yun JIN, Eun-Jin PARK, Jin-Xia HUANG, Seung-Hoon NA, Yoon-Hyung ROH, Sang-Keun JUNG, Young-Kil KIM, Sang-Kyu PARK
  • Publication number: 20140297263
    Abstract: A translation verification method using an animation may include the processes of analyzing an originally input sentence in a first language using a translation engine so that the sentence in the first language is converted into a second language, generating an animation capable of representing the meaning of the sentence in the first language based on information on the results of the analysis of the sentence in the first language, and providing the original and the generated animation to a user who uses the original in order for the user to check for errors in the translation.
    Type: Application
    Filed: June 20, 2013
    Publication date: October 2, 2014
    Inventors: Chang Hyun KIM, Young Kil KIM, Oh Woog KWON, Seung-Hoon NA, Yoon-Hyung ROH, Young-Ae SEO, Seong Il YANG, Ki Young LEE, Sang Keun JUNG, Sung Kwon CHOI, Yun JIN, Eun Jin PARK, Jong Hun SHIN, Jinxia HUANG, Sang Kyu PARK
  • Publication number: 20140297257
    Abstract: Disclosed herein is a motion sensor-based portable automatic interpretation apparatus and control method thereof, which can precisely detect the start time and the end time of utterance of a user in a portable automatic interpretation system, thus improving the quality of the automatic interpretation system. The motion sensor-based portable automatic interpretation apparatus includes a motion sensing unit for sensing a motion of the portable automatic interpretation apparatus. An utterance start time detection unit detects an utterance start time based on an output signal of the motion sensing unit. An utterance end time detection unit detects an utterance end time based on an output signal of the motion sensing unit after the utterance start time has been detected.
    Type: Application
    Filed: October 29, 2013
    Publication date: October 2, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jong-Hun SHIN, Young-Kil KIM, Chang-Hyun KIM, Young-Ae SEO, Seong-Il YANG, Jin-Xia HUANG, Seung-Hoon NA, Oh-Woog KWON, Ki-Young LEE, Yoon-Hyung ROH, Sung-Kwon CHOI, Sang-Keun JUNG, Yun JIN, Eun-Jin PARK, Sang-Kyu PARK
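The detection idea above — derive utterance start and end times from the motion sensor rather than the audio — can be sketched as a pair of threshold crossings on the motion magnitude: the first crossing (device raised toward the mouth) marks the start, the next crossing after it (device lowered) marks the end. A real implementation would filter and debounce the signal; this is only an illustrative simplification:

```python
def detect_utterance_span(samples, threshold):
    """Return (start_index, end_index) of the utterance, or None entries
    if a crossing was not observed. `samples` are motion magnitudes."""
    start = end = None
    for i, mag in enumerate(samples):
        if start is None:
            if mag >= threshold:        # device raised: utterance starts
                start = i
        elif mag >= threshold and i > start + 1:
            end = i                     # device lowered: utterance ends
            break
    return start, end
```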
  • Publication number: 20140172411
    Abstract: Provided are an apparatus and a method for verifying a context that verify an ambiguous expression of an input text through a user's intention and utilize the verified expression as a context in interpretation and translation. The apparatus includes: an ambiguous expression verifying unit verifying an ambiguous expression in a first input text or a back translation text for the first input text in accordance with user's input; a context generating unit generating the verified ambiguous expression as a context; and a context controlling unit controlling the generated context to be applied to translate or interpret the first input text or second input texts input after the first input text.
    Type: Application
    Filed: September 27, 2013
    Publication date: June 19, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chang Hyun KIM, Oh Woog Kwon, Seung Hoon Na, Yoon Hyung Roh, Young Ae Seo, Seong Il Yang, Ki Young Lee, Sang Keun Jung, Sung Kwon Choi, Yun Jin, Eun Jin Park, Jong Hun Shin, Jinxia Huang, Sang Kyu Park