Patents by Inventor Hye-Dong Jung

Hye-Dong Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11741755
    Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes: a two-dimensional (2D) EDM generation step, in which a 2D EDM generator produces a 2D EDM containing information about the distances between feature points of a body recognized in image information; a 3D EDM generation step, in which a 3D EDM generator receives the 2D EDM and generates a 3D EDM using a first deep learning neural network trained with training data whose input data are 2D EDMs and whose correct-answer data are 3D EDMs; and a recognition step for recognizing a sign language or a gesture based on the 3D EDM.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: August 29, 2023
    Assignee: Korea Electronics Technology Institute
    Inventors: Sang Ki Ko, Hye Dong Jung, Han Mu Park, Chang Jo Kim
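    For illustration, a minimal sketch of the 2D EDM construction step this abstract describes; the keypoint count, the NumPy implementation, and the random input are assumptions, not the patent's specification.
    ```python
    # Sketch of the 2D EDM generation step (assumed NumPy implementation;
    # the keypoint count and the downstream network are not specified here).
    import numpy as np

    def euclidean_distance_matrix(points: np.ndarray) -> np.ndarray:
        """Pairwise Euclidean distances between body feature points.
        points: (N, D) array, D=2 for a 2D EDM or D=3 for a 3D EDM."""
        diff = points[:, None, :] - points[None, :, :]  # (N, N, D) differences
        return np.sqrt((diff ** 2).sum(axis=-1))        # (N, N) distance matrix

    keypoints_2d = np.random.rand(21, 2)  # e.g., 21 hand joints (hypothetical)
    edm_2d = euclidean_distance_matrix(keypoints_2d)
    # edm_2d would be fed to the first deep learning neural network, which is
    # trained to output the corresponding 3D EDM used for recognition.
    ```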
  • Patent number: 11482134
    Abstract: Disclosed is a method of providing a sign language video reflecting an appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: October 25, 2022
    Assignee: Korea Electronics Technology Institute
    Inventors: Hye Dong Jung, Sang Ki Ko, Han Mu Park, Chang Jo Kim
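    As a sketch of the joint-information pipeline described above: per-word joint sequences are looked up and smoothed into sentence-joint information. The database layout, the joint dimension, and the GRU smoother are illustrative assumptions, not the patented design.
    ```python
    # Sketch of word-joint -> sentence-joint generation (PyTorch; database
    # layout, joint dimension, and GRU smoother are assumptions).
    import torch
    import torch.nn as nn

    JOINT_DIM = 50  # hypothetical: (x, y) coordinates for 25 body joints

    # Hypothetical joint information database: one joint sequence per sign word.
    joint_db = {
        "hello": torch.rand(10, JOINT_DIM),  # 10 frames of joint coordinates
        "friend": torch.rand(12, JOINT_DIM),
    }

    smoother = nn.GRU(input_size=JOINT_DIM, hidden_size=JOINT_DIM, batch_first=True)

    def sentence_joint_info(words: list[str]) -> torch.Tensor:
        """Concatenate per-word joint sequences, then pass them sequentially
        through a recurrent network to produce sentence-joint information."""
        seq = torch.cat([joint_db[w] for w in words], dim=0).unsqueeze(0)
        out, _ = smoother(seq)  # (1, T, JOINT_DIM)
        return out.squeeze(0)

    joints = sentence_joint_info(["hello", "friend"])
    # These joints would drive the motion model onto which the partner's
    # appearance image and the background image are composited.
    ```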
  • Patent number: 11386292
    Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: July 12, 2022
    Assignee: Korea Electronics Technology Institute
    Inventors: Bo Eun Kim, Hye Dong Jung
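    A minimal sketch of the latent projection this abstract describes: a caption attention map is encoded as a latent variable and decoded into a guide map, so different latent samples yield different captions. All dimensions, layers, and the Gaussian reparameterization are illustrative guesses, not the patented architecture.
    ```python
    # Sketch: caption attention map -> latent variable -> guide map (PyTorch;
    # dimensions, layers, and the reparameterization trick are assumptions).
    import torch
    import torch.nn as nn

    H = W = 14   # spatial size of the attention map (assumed)
    LATENT = 64  # latent space dimensionality (assumed)

    to_mu = nn.Linear(H * W, LATENT)
    to_logvar = nn.Linear(H * W, LATENT)
    to_guide = nn.Linear(LATENT, H * W)

    def guide_map(caption_attention: torch.Tensor) -> torch.Tensor:
        """Project the attention map onto a latent space, sample a latent
        variable, and derive a guide map; sampling different latent variables
        is what allows multiple captions for one image."""
        flat = caption_attention.flatten(1)                   # (B, H*W)
        mu, logvar = to_mu(flat), to_logvar(flat)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return to_guide(z).view(-1, H, W)

    g = guide_map(torch.rand(1, H, W))  # each call samples a new guide map
    ```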
  • Publication number: 20210117723
    Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
    Type: Application
    Filed: September 10, 2020
    Publication date: April 22, 2021
    Inventors: Bo Eun Kim, Hye Dong Jung
  • Patent number: 10978049
    Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an “inputted text” and an “audio spectrum feature vector for generating an audio signal”, the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: April 13, 2021
    Assignee: Korea Electronics Technology Institute
    Inventors: Young Han Lee, Jong Yeol Yang, Choong Sang Cho, Hye Dong Jung
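    A minimal sketch of how a text-to-spectrum attention alignment can be turned into segment boundaries, in the spirit of this abstract. The attention matrix here is synthetic; in the method it would come from the synthesis model's attention mechanism, and the hop size is an assumption.
    ```python
    # Sketch: derive segment boundaries from a text/audio attention alignment
    # (NumPy; the synthetic attention matrix and hop size are assumptions).
    import numpy as np

    def segment_boundaries(attention: np.ndarray, hop_s: float) -> list[tuple[float, float]]:
        """attention: (n_text_units, n_frames) alignment weights.
        Returns one (start_s, end_s) interval per text unit."""
        owner = attention.argmax(axis=0)  # text unit each spectrum frame maps to
        bounds = []
        for unit in range(attention.shape[0]):
            frames = np.flatnonzero(owner == unit)
            if frames.size:
                bounds.append((frames[0] * hop_s, (frames[-1] + 1) * hop_s))
        return bounds

    attn = np.random.rand(5, 200)  # 5 text units vs. 200 frames (dummy data)
    print(segment_boundaries(attn, hop_s=0.0125))
    ```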
  • Patent number: 10923106
    Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: February 16, 2021
    Assignee: Korea Electronics Technology Institute
    Inventors: Jong Yeol Yang, Young Han Lee, Choong Sang Cho, Hye Dong Jung
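    A minimal sketch of the correlation step in the abstract: the previous spectrum characteristic S_(t-1) is matched against the time-series video characteristics x via dot-product attention to form the context for generating S_t. The dimensions and the attention form are assumptions.
    ```python
    # Sketch: correlate S_(t-1) with video characteristics x to build the
    # context for S_t (PyTorch; dimensions and dot-product attention assumed).
    import torch
    import torch.nn.functional as F

    T_VIDEO, D = 100, 128
    x = torch.rand(T_VIDEO, D)  # characteristics extracted per video frame
    s_prev = torch.rand(D)      # audio spectrum characteristic S_(t-1)

    def context_for_next_spectrum(s_prev: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        """Dot-product correlation between S_(t-1) and each frame's
        characteristics, softmax-normalized into attention weights."""
        weights = F.softmax(x @ s_prev, dim=0)  # (T_VIDEO,)
        return weights @ x                      # context vector over frames

    context = context_for_next_spectrum(s_prev, x)
    # A decoder (not shown) would combine this context with the phoneme
    # characteristics p to emit S_t.
    ```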
  • Publication number: 20210043110
    Abstract: Disclosed is a method of providing a sign language video reflecting an appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 11, 2021
    Inventors: Hye Dong Jung, Sang Ki Ko, Han Mu Park, Chang Jo Kim
  • Publication number: 20210034846
    Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes: a two-dimensional (2D) EDM generation step, in which a 2D EDM generator produces a 2D EDM containing information about the distances between feature points of a body recognized in image information; a 3D EDM generation step, in which a 3D EDM generator receives the 2D EDM and generates a 3D EDM using a first deep learning neural network trained with training data whose input data are 2D EDMs and whose correct-answer data are 3D EDMs; and a recognition step for recognizing a sign language or a gesture based on the 3D EDM.
    Type: Application
    Filed: July 30, 2020
    Publication date: February 4, 2021
    Applicant: Korea Electronics Technology Institute
    Inventors: Sang Ki Ko, Hye Dong Jung, Han Mu Park, Chang Jo Kim
  • Patent number: 10846568
    Abstract: A deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing the pieces of contour information that form each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed without being influenced by environment and conditions, even while using less training data.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: November 24, 2020
    Assignee: Korea Electronics Technology Institute
    Inventors: Sang Ki Ko, Choong Sang Cho, Hye Dong Jung, Young Han Lee
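    A minimal sketch of the contour-normalization idea: each extracted contour is centered, scaled, and resampled to a fixed length so the training data is independent of image size and gesture position. The exact normalization in the patent may differ.
    ```python
    # Sketch: normalize contour information into fixed-size training data
    # (NumPy; centering, unit scaling, and resampling are assumed details).
    import numpy as np

    def normalize_contour(contour: np.ndarray, n_points: int = 64) -> np.ndarray:
        """contour: (M, 2) points. Returns (n_points, 2): centered,
        unit-scaled, and resampled to a fixed number of points."""
        c = contour.astype(float)
        c -= c.mean(axis=0)              # translation invariance
        c /= max(np.abs(c).max(), 1e-9)  # scale invariance
        idx = np.linspace(0, len(c) - 1, n_points)
        return np.stack([np.interp(idx, np.arange(len(c)), c[:, d])
                         for d in range(2)], axis=1)

    raw = np.random.randint(0, 480, size=(150, 2))  # dummy extracted contour
    features = normalize_contour(raw)               # input to the AI model
    ```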
  • Patent number: 10726289
    Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that clearly indicates a feature of a given image is generated automatically, so that the image can be explained more precisely and its difference from other images can be clearly distinguished.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: July 28, 2020
    Assignee: Korea Electronics Technology Institute
    Inventors: Bo Eun Kim, Choong Sang Cho, Hye Dong Jung, Young Han Lee
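    A minimal sketch of one way to "extract a distinctive attribute from example captions": score words that are frequent in this image's captions but rare across the corpus. The TF-IDF-style scoring is an assumption; the abstract does not commit to a particular scheme.
    ```python
    # Sketch: pick distinctive attribute words from example captions
    # (pure Python; the TF-IDF-style score is an assumed stand-in).
    from collections import Counter
    import math

    def distinctive_attributes(captions: list[str], corpus: list[str], k: int = 3):
        """Rank words by local frequency times corpus rarity."""
        local = Counter(w for c in captions for w in c.lower().split())
        doc_freq = Counter(w for c in corpus for w in set(c.lower().split()))
        score = {w: f * math.log(len(corpus) / (1 + doc_freq[w]))
                 for w, f in local.items()}
        return sorted(score, key=score.get, reverse=True)[:k]

    caps = ["a red vintage car on a street", "red classic car parked outside"]
    corpus = caps + ["a dog on a street", "a man riding a bike", "a cat on a sofa"]
    print(distinctive_attributes(caps, corpus))  # e.g. ['red', 'car', 'vintage']
    ```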
  • Publication number: 20200043465
    Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
    Type: Application
    Filed: January 24, 2019
    Publication date: February 6, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Jong Yeol Yang, Young Han Lee, Choong Sang Cho, Hye Dong Jung
  • Publication number: 20200043473
    Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an “inputted text” and an “audio spectrum feature vector for generating an audio signal”, the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
    Type: Application
    Filed: January 24, 2019
    Publication date: February 6, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Young Han Lee, Jong Yeol Yang, Choong Sang Cho, Hye Dong Jung
  • Publication number: 20200005086
    Abstract: A deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing the pieces of contour information that form each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed without being influenced by environment and conditions, even while using less training data.
    Type: Application
    Filed: October 1, 2018
    Publication date: January 2, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Sang Ki Ko, Choong Sang Cho, Hye Dong Jung, Young Han Lee
  • Publication number: 20190286931
    Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that clearly indicates a feature of a given image is generated automatically, so that the image can be explained more precisely and its difference from other images can be clearly distinguished.
    Type: Application
    Filed: July 24, 2018
    Publication date: September 19, 2019
    Applicant: Korea Electronics Technology Institute
    Inventors: Bo Eun Kim, Choong Sang Cho, Hye Dong Jung, Young Han Lee
  • Patent number: 10230615
    Abstract: A method for optimizing network performance according to an embodiment of the present invention includes: initializing a size of test data for network performance measurement; testing the network performance by transmitting the test data over each of a first communication protocol and a second communication protocol; increasing the size of the test data and repeating the test as long as the increased size, compared against a preset size, remains at or below that preset size; and, when the increased size of the test data is larger than the preset size, setting a threshold value, a data size that serves as the reference for switching between the first communication protocol and the second communication protocol, based on the data collected while performing the test.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: March 12, 2019
    Assignee: Korea Electronics Technology Institute
    Inventor: Hye Dong Jung
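    A minimal sketch of the measurement loop the abstract describes: grow the test payload, time it over both protocols, and set the switching threshold from the collected data. The timing harness, doubling schedule, and send callables are assumptions.
    ```python
    # Sketch: find the protocol-switching threshold from timed test transfers
    # (the send callables, doubling schedule, and preset cap are assumptions).
    import time

    def measure(send, size: int) -> float:
        payload = b"x" * size
        t0 = time.perf_counter()
        send(payload)
        return time.perf_counter() - t0

    def switch_threshold(send_a, send_b, preset_max: int = 1 << 20):
        """Repeat the test while the grown size stays within preset_max, then
        return the smallest size at which protocol B beat protocol A."""
        size, results = 1024, []
        while size <= preset_max:
            results.append((size, measure(send_a, size), measure(send_b, size)))
            size *= 2
        for s, t_a, t_b in results:
            if t_b < t_a:
                return s  # data sizes >= s should use the second protocol
        return None

    # Usage (hypothetical): threshold = switch_threshold(udp_send, tcp_send)
    ```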
  • Patent number: 9811501
    Abstract: A local processing apparatus and a data transceiving method thereof are provided. The local processing apparatus includes a communication module configured to transceive data with one or more distributed storage units, a memory configured to store a program for transceiving the data as well as one or more key-value data pairs, and a processor configured to execute the program. By executing the program, the processor confirms whether first key-value data exists in the memory, and determines whether to prefetch one or more key-value data pairs corresponding to the first key-value data based on the confirmation result.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: November 7, 2017
    Assignee: Korea Electronics Technology Institute
    Inventors: Bong Jae Kim, Hye Dong Jung
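    A minimal sketch of the hit-check-then-prefetch behavior described above. The relatedness rule and the remote-fetch interface are assumptions, not the patent's specification.
    ```python
    # Sketch: local key-value cache that decides whether to prefetch related
    # pairs (the relatedness rule and fetch interface are assumed).
    class LocalKVCache:
        def __init__(self, fetch_remote, related_keys):
            self.mem = {}                     # key-value pairs in local memory
            self.fetch_remote = fetch_remote  # pulls from distributed storage units
            self.related_keys = related_keys  # predicts keys likely needed next

        def get(self, key):
            if key not in self.mem:           # confirm whether data exists locally
                self.mem[key] = self.fetch_remote(key)
            for k in self.related_keys(key):  # prefetch decision on each access
                if k not in self.mem:
                    self.mem[k] = self.fetch_remote(k)
            return self.mem[key]

    # Usage with stand-ins for the distributed storage units (hypothetical keys):
    store = {"user:1": "alice", "user:1:prefs": "{...}"}
    cache = LocalKVCache(store.get, lambda k: [k + ":prefs"])
    print(cache.get("user:1"))  # fetches "user:1" and prefetches "user:1:prefs"
    ```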
  • Publication number: 20170118107
    Abstract: A method for optimizing network performance according to an embodiment of the present invention includes: initializing a size of test data for network performance measurement; testing the network performance by transmitting the test data over each of a first communication protocol and a second communication protocol; increasing the size of the test data and repeating the test as long as the increased size, compared against a preset size, remains at or below that preset size; and, when the increased size of the test data is larger than the preset size, setting a threshold value, a data size that serves as the reference for switching between the first communication protocol and the second communication protocol, based on the data collected while performing the test.
    Type: Application
    Filed: October 21, 2016
    Publication date: April 27, 2017
    Inventor: Hye Dong Jung
  • Publication number: 20170116152
    Abstract: A local processing apparatus and a data transceiving method thereof are provided. The local processing apparatus includes a communication module configured to transceive data with one or more distributed storage units, a memory configured to store a program for transceiving the data as well as one or more key-value data pairs, and a processor configured to execute the program. By executing the program, the processor confirms whether first key-value data exists in the memory, and determines whether to prefetch one or more key-value data pairs corresponding to the first key-value data based on the confirmation result.
    Type: Application
    Filed: October 29, 2015
    Publication date: April 27, 2017
    Applicant: Korea Electronics Technology Institute
    Inventors: Bong Jae Kim, Hye Dong Jung
  • Publication number: 20150293786
    Abstract: A method for executing a cyclic reduction (CR) algorithm by actively utilizing the shared memory of a multi-processor, and a processor using the same, are provided. A processor includes: a first multi-processor configured to process a first group of elements of a matrix in accordance with the algorithm; a second multi-processor configured to process a second group of the elements of the matrix in accordance with the algorithm; and a third multi-processor configured to process a third group, which comprises some of the elements of the first group, some of the elements of the second group, and some of the elements comprised in neither the first group nor the second group, in accordance with the algorithm. Accordingly, a tridiagonal matrix (TDM) with many elements can be calculated quickly.
    Type: Application
    Filed: December 9, 2014
    Publication date: October 15, 2015
    Inventors: Hye Dong Jung, Jae Gi Son
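    A minimal sketch of one cyclic-reduction step for a tridiagonal system, the computation whose element groups the abstract distributes across multi-processors. The sequential NumPy form and the boundary handling are simplifications; the three-group partitioning is omitted.
    ```python
    # Sketch: one cyclic reduction (CR) step for the tridiagonal system
    # a[i]x[i-1] + b[i]x[i] + c[i]x[i+1] = d[i] (sequential simplification).
    import numpy as np

    def cr_step(a, b, c, d):
        """Eliminate even-indexed unknowns, returning the half-size system
        over odd indices (a[0] and c[-1] are assumed to be zero)."""
        n = len(b)
        a2, b2, c2, d2 = [], [], [], []
        for i in range(1, n, 2):
            al = a[i] / b[i - 1]
            ga = c[i] / b[i + 1] if i + 1 < n else 0.0
            a2.append(-al * a[i - 1])
            b2.append(b[i] - al * c[i - 1] - (ga * a[i + 1] if i + 1 < n else 0.0))
            c2.append(-ga * c[i + 1] if i + 1 < n else 0.0)
            d2.append(d[i] - al * d[i - 1] - (ga * d[i + 1] if i + 1 < n else 0.0))
        return (np.array(a2), np.array(b2), np.array(c2), np.array(d2))

    n = 7
    a, c = np.full(n, -1.0), np.full(n, -1.0)
    a[0] = c[-1] = 0.0
    b, d = np.full(n, 2.0), np.ones(n)
    print(cr_step(a, b, c, d))  # reduced 3-unknown system
    ```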
  • Patent number: 8654066
    Abstract: Provided are a display apparatus and a method for controlling a backlight. A display apparatus including a backlight partitioned into a plurality of sections according to an exemplary embodiment of the present invention includes: an external brightness measurer measuring and providing front brightness values of the display apparatus corresponding to the sections; an image signal analyzer analyzing an inputted image signal and calculating and providing, for each section, a brightness influence value exerted by adjacent sections; and a control signal corrector converting a source backlight control signal of each section corresponding to the image signal into an intermediate backlight control signal of each section on the basis of the front brightness values and the brightness influence values of the sections, and comparing the intermediate backlight control signal with the previous final backlight control signal to generate the current final backlight control signal.
    Type: Grant
    Filed: December 23, 2010
    Date of Patent: February 18, 2014
    Assignee: Korea Electronics Technology Institute
    Inventors: Hye Dong Jung, Hyung Su Lee
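    A minimal sketch of the correction chain in the abstract: per-section source signals are adjusted using the measured front brightness values and the influence of adjacent sections, then rate-limited against the previous final signal. The specific compensation formula, influence weights, and step limit are assumptions.
    ```python
    # Sketch: per-section backlight signal correction (NumPy; the compensation
    # formula, influence weights, and per-frame step limit are assumptions).
    import numpy as np

    def corrected_backlight(source, front, influence, prev_final, max_step=0.1):
        """source, front, prev_final: (n,) arrays in [0, 1].
        influence: (n, n) brightness influence of adjacent sections."""
        # Intermediate signal: subtract light leaking in from neighboring
        # sections and compensate for measured front brightness deviation.
        intermediate = np.clip(source - influence @ source + (source - front), 0, 1)
        # Final signal: limit the change relative to the previous final signal.
        return prev_final + np.clip(intermediate - prev_final, -max_step, max_step)

    n = 8
    influence = np.eye(n, k=1) * 0.05 + np.eye(n, k=-1) * 0.05
    final = corrected_backlight(np.full(n, 0.5), np.random.rand(n),
                                influence, np.full(n, 0.5))
    ```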