Patents by Inventor Hye-Dong Jung
Hye-Dong Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11741755
Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes a two-dimensional (2D) EDM generation step for generating a 2D EDM including information about distances between feature points of a body recognized in image information by a 2D EDM generator, a 3D EDM generation step for receiving the 2D EDM and generating a 3D EDM by using a first deep learning neural network trained with training data in which input data is a 2D EDM and correct answer data is a 3D EDM by a 3D EDM generator, and a recognition step for recognizing a sign language or a gesture based on the 3D EDM.
Type: Grant
Filed: July 30, 2020
Date of Patent: August 29, 2023
Assignee: Korea Electronics Technology Institute
Inventors: Sang Ki Ko, Hye Dong Jung, Han Mu Park, Chang Jo Kim
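The 2D EDM described in the abstract is a standard construction: entry (i, j) is the Euclidean distance between body feature points i and j. A minimal sketch of that step (the 2D-to-3D lifting itself is done by a trained network in the patent and is not reproduced here):

```python
import numpy as np

def euclidean_distance_matrix(points):
    """Build an EDM whose (i, j) entry is the distance between
    feature points i and j. `points` has shape (N, D), D = 2 or 3."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Example: three 2D body feature points.
pts2d = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
edm2d = euclidean_distance_matrix(pts2d)   # symmetric, zero diagonal
```

In the patented pipeline this matrix would be the input to the first deep learning neural network, which outputs the corresponding 3D EDM.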
-
Patent number: 11482134
Abstract: Disclosed is a method of providing a sign language video reflecting an appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
Type: Grant
Filed: August 8, 2019
Date of Patent: October 25, 2022
Assignee: Korea Electronics Technology Institute
Inventors: Hye Dong Jung, Sang Ki Ko, Han Mu Park, Chang Jo Kim
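The step of turning per-word joint information into sentence-joint information is done by a deep learning neural network in the patent. As a rough illustration of what that stitching must accomplish, the toy function below (a hypothetical stand-in, not the patented model) concatenates per-word joint sequences and interpolates a few transition frames between words:

```python
import numpy as np

def stitch_word_joints(word_seqs, blend=3):
    """Toy stand-in for the sequence model: concatenate per-word
    joint sequences, inserting `blend` linearly interpolated frames
    between consecutive words so transitions are smooth."""
    out = [word_seqs[0]]
    for prev, nxt in zip(word_seqs, word_seqs[1:]):
        a, b = prev[-1], nxt[0]                      # boundary poses
        ts = np.linspace(0.0, 1.0, blend + 2)[1:-1]  # interior steps
        out.append(np.stack([(1 - t) * a + t * b for t in ts]))
        out.append(nxt)
    return np.concatenate(out)

hello = np.zeros((4, 10))   # 4 frames, 10 joint coordinates per frame
world = np.ones((5, 10))
seq = stitch_word_joints([hello, world], blend=3)   # 12 frames total
```

The real system would then drive a motion model with this sequence and composite the partner's appearance and background images onto it.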
-
Patent number: 11386292
Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
Type: Grant
Filed: September 10, 2020
Date of Patent: July 12, 2022
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Bo Eun Kim, Hye Dong Jung
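The projection/derivation steps can be pictured with a very small sketch. Everything below is hypothetical (random projections standing in for learned ones); it only shows the data flow the abstract describes: attention map → latent variable → guide map.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8        # spatial size of the caption attention map
LATENT = 16

# Hypothetical learned projections (random here; trained in practice).
enc = rng.normal(size=(H * W, LATENT)) / np.sqrt(H * W)
dec = rng.normal(size=(LATENT, H * W)) / np.sqrt(LATENT)

def to_latent(attention_map):
    """Project a caption attention map onto the latent space."""
    return attention_map.reshape(-1) @ enc

def guide_map(z):
    """Derive a spatial guide map from a latent variable,
    normalized like an attention distribution."""
    g = z @ dec
    g = np.exp(g - g.max())
    return (g / g.sum()).reshape(H, W)

attn = rng.random((H, W))
z = to_latent(attn)
g = guide_map(z)
```

Sampling different latent variables would yield different guide maps, which is how the method can steer the caption decoder toward different characteristics of the same image.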
-
Publication number: 20210117723
Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
Type: Application
Filed: September 10, 2020
Publication date: April 22, 2021
Inventors: Bo Eun KIM, Hye Dong JUNG
-
Patent number: 10978049
Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an "inputted text" and an "audio spectrum feature vector for generating an audio signal", the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
Type: Grant
Filed: January 24, 2019
Date of Patent: April 13, 2021
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Young Han Lee, Jong Yeol Yang, Choong Sang Cho, Hye Dong Jung
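The mapping relationship in question is an attention matrix between text tokens and audio spectrum frames. Once such a matrix exists, segmentation reduces to reading off which token each frame attends to most, as in this sketch (the attention values themselves would come from a TTS model, not be hand-written as here):

```python
import numpy as np

def segment_from_attention(attn):
    """Given an attention matrix (audio frames x text tokens),
    assign each frame to its most-attended token and return
    (token, start_frame, end_frame) segments."""
    owner = attn.argmax(axis=1)
    segments, start = [], 0
    for i in range(1, len(owner) + 1):
        if i == len(owner) or owner[i] != owner[start]:
            segments.append((int(owner[start]), start, i))
            start = i
    return segments

# Toy alignment: 6 audio frames over 2 text tokens.
attn = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4],
                 [0.3, 0.7], [0.2, 0.8], [0.1, 0.9]])
segs = segment_from_attention(attn)   # [(0, 0, 3), (1, 3, 6)]
```

This is why the approach saves effort: the alignment is a by-product of synthesis rather than something annotated by hand.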
-
Patent number: 10923106
Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
Type: Grant
Filed: January 24, 2019
Date of Patent: February 16, 2021
Assignee: Korea Electronics Technology Institute
Inventors: Jong Yeol Yang, Young Han Lee, Choong Sang Cho, Hye Dong Jung
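One decoding step of this scheme can be sketched as attention over the video features, scored against the previous spectrum frame. The mixing at the end is a hypothetical simplification (the patent uses a trained network there), but the correlation-then-weight structure matches the abstract:

```python
import numpy as np

def next_spectrum(s_prev, x_feats, p_feats):
    """One decoding step: score video features against the previous
    spectrum frame S_(t-1), form a context as their attention-weighted
    sum, and emit S_t (here a toy linear mix with phoneme features)."""
    scores = x_feats @ s_prev              # correlations with S_(t-1)
    w = np.exp(scores - scores.max())
    w = w / w.sum()                        # attention weights
    context = w @ x_feats
    return 0.5 * context + 0.5 * p_feats.mean(axis=0)

rng = np.random.default_rng(1)
x = rng.normal(size=(7, 4))    # 7 video frames, 4-dim features each
p = rng.normal(size=(3, 4))    # 3 phoneme embeddings
s0 = np.zeros(4)
s1 = next_spectrum(s0, x, p)   # first generated spectrum frame
```

Iterating this step frame by frame is what ties the synthesized audio to the video's timing.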
-
Publication number: 20210043110
Abstract: Disclosed is a method of providing a sign language video reflecting an appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
Type: Application
Filed: August 8, 2019
Publication date: February 11, 2021
Inventors: Hye Dong JUNG, Sang Ki KO, Han Mu PARK, Chang Jo KIM
-
Publication number: 20210034846
Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes a two-dimensional (2D) EDM generation step for generating a 2D EDM including information about distances between feature points of a body recognized in image information by a 2D EDM generator, a 3D EDM generation step for receiving the 2D EDM and generating a 3D EDM by using a first deep learning neural network trained with training data in which input data is a 2D EDM and correct answer data is a 3D EDM by a 3D EDM generator, and a recognition step for recognizing a sign language or a gesture based on the 3D EDM.
Type: Application
Filed: July 30, 2020
Publication date: February 4, 2021
Applicant: Korea Electronics Technology Institute
Inventors: Sang Ki KO, Hye Dong JUNG, Han Mu PARK, Chang Jo KIM
-
Patent number: 10846568
Abstract: Deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed, without being influenced by an environment and a condition even while using less training data.
Type: Grant
Filed: October 1, 2018
Date of Patent: November 24, 2020
Assignee: Korea Electronics Technology Institute
Inventors: Sang Ki Ko, Choong Sang Cho, Hye Dong Jung, Young Han Lee
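The normalization step is what gives the robustness the abstract claims: contour coordinates are made independent of where and how large the gesture appears in the frame. A common normalization of that kind (one plausible reading of "normalizing pieces of contour information", not necessarily the exact patented formula) is:

```python
import numpy as np

def normalize_contour(contour):
    """Make a contour translation- and scale-invariant:
    center it on its centroid, then divide by the largest radius
    so every normalized contour fits in the unit circle."""
    c = contour - contour.mean(axis=0)
    r = np.linalg.norm(c, axis=1).max()
    return c / r if r > 0 else c

# An axis-aligned square anywhere in the image normalizes to the
# same four points around the origin.
square = np.array([[2.0, 2.0], [4.0, 2.0], [4.0, 4.0], [2.0, 4.0]])
norm = normalize_contour(square)
```

Feeding such normalized contours to the model means one training example covers the gesture at any position or scale, which is consistent with the claim of needing less training data.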
-
Patent number: 10726289
Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption well indicating a feature of a given image is automatically generated, such that an image can be more exactly explained and a difference from other images can be clearly distinguished.
Type: Grant
Filed: July 24, 2018
Date of Patent: July 28, 2020
Assignee: Korea Electronics Technology Institute
Inventors: Bo Eun Kim, Choong Sang Cho, Hye Dong Jung, Young Han Lee
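"Extracting a distinctive attribute from example captions" is the kind of task a tf-idf score handles: words frequent in this image's captions but rare across the whole caption corpus. The patent does not specify tf-idf; this is an illustrative sketch of the idea:

```python
import math
from collections import Counter

def distinctive_attributes(captions, corpus, k=2):
    """Score each word of this image's example captions by
    tf * idf over the whole caption corpus; keep the top k."""
    tf = Counter(w for cap in captions for w in cap.split())
    n_docs = len(corpus)
    def idf(w):
        df = sum(1 for cap in corpus if w in cap.split())
        return math.log((1 + n_docs) / (1 + df))
    scored = {w: c * idf(w) for w, c in tf.items()}
    return [w for w, _ in sorted(scored.items(),
                                 key=lambda kv: -kv[1])[:k]]

corpus = ["a dog runs", "a cat sits", "a red kite flies",
          "a dog barks"]
attrs = distinctive_attributes(["a red kite flies"], corpus, k=2)
```

Common words like "a" score zero, so the surviving attributes are exactly the ones that distinguish this image from the rest of the corpus, mirroring the abstract's goal.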
-
Publication number: 20200043465
Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
Type: Application
Filed: January 24, 2019
Publication date: February 6, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Jong Yeol YANG, Young Han LEE, Choong Sang CHO, Hye Dong JUNG
-
Publication number: 20200043473
Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an "inputted text" and an "audio spectrum feature vector for generating an audio signal", the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
Type: Application
Filed: January 24, 2019
Publication date: February 6, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Young Han LEE, Jong Yeol YANG, Choong Sang CHO, Hye Dong JUNG
-
Publication number: 20200005086
Abstract: Deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed, without being influenced by an environment and a condition even while using less training data.
Type: Application
Filed: October 1, 2018
Publication date: January 2, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Sang Ki KO, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
-
Publication number: 20190286931
Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption well indicating a feature of a given image is automatically generated, such that an image can be more exactly explained and a difference from other images can be clearly distinguished.
Type: Application
Filed: July 24, 2018
Publication date: September 19, 2019
Applicant: Korea Electronics Technology Institute
Inventors: Bo Eun KIM, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
-
Patent number: 10230615
Abstract: A method for optimizing network performance according to an embodiment of the present invention includes initializing a size of test data for network performance measurement, performing a test on the network performance by transmitting the test data to each of a first communication protocol and a second communication protocol, repeatedly performing the test, when the size of the test data is increased and then the increased size of the test data is a preset size or smaller based on a comparison between the increased size of the test data and the preset size, and setting a threshold value having a data size being a reference of switching between the first communication protocol and the second communication protocol, based on data collected through the performing of the test, when the increased size of the test data is larger than the preset size.
Type: Grant
Filed: October 21, 2016
Date of Patent: March 12, 2019
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventor: Hye Dong Jung
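The loop the abstract describes, grow the test-data size, time both protocols, stop at a preset limit, and record where the faster protocol flips, can be sketched directly. The cost models below are hypothetical stand-ins for real measurements:

```python
def find_switch_threshold(latency_a, latency_b, start=1, limit=1 << 20):
    """Grow the test-data size until it exceeds `limit`, timing both
    protocols at each size; return the first size at which protocol B
    becomes faster than protocol A (the switching threshold)."""
    size, threshold = start, None
    while size <= limit:
        if threshold is None and latency_b(size) < latency_a(size):
            threshold = size
        size *= 2
    return threshold

# Toy cost models: A has low setup cost, B has high setup cost but
# better per-byte throughput (e.g. a chatty vs. a bulk protocol).
lat_a = lambda n: 1 + 0.010 * n
lat_b = lambda n: 50 + 0.001 * n
th = find_switch_threshold(lat_a, lat_b)
```

Once the threshold is stored, the system can route each transfer to the first protocol below it and the second protocol above it.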
-
Patent number: 9811501
Abstract: A local processing apparatus and a data transceiving method thereof are provided. The local processing apparatus includes a communication module configured to transceive the data with the one or more distributed storage units, a memory configured to store a program for transceiving the data and the one or more key-value data pairs, and a processor configured to execute the program. The processor confirms whether a first key-value data exists in the memory by executing the program, and determines whether to prefetch one or more key-value data corresponding to the first key-value data based on the confirmation result.
Type: Grant
Filed: October 29, 2015
Date of Patent: November 7, 2017
Assignee: Korea Electronics Technology Institute
Inventors: Bong Jae Kim, Hye Dong Jung
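The check-then-maybe-prefetch behavior can be sketched with a toy cache. The co-access heuristic here (prefetch the key that historically followed the missed key) is an illustrative assumption, not the patented policy:

```python
class PrefetchingCache:
    """Toy local store: on a miss it fetches from remote storage and,
    based on a simple co-access heuristic, prefetches the key that was
    previously requested right after the missed key."""
    def __init__(self, remote):
        self.remote = remote      # stand-in for distributed storage
        self.memory = {}
        self.last_key = None
        self.followers = {}       # key -> key requested next

    def get(self, key):
        if self.last_key is not None:
            self.followers[self.last_key] = key
        self.last_key = key
        if key not in self.memory:                 # miss: fetch
            self.memory[key] = self.remote[key]
            follower = self.followers.get(key)
            if follower is not None:               # prefetch decision
                self.memory.setdefault(follower, self.remote[follower])
        return self.memory[key]

remote = {"a": 1, "b": 2, "c": 3}
cache = PrefetchingCache(remote)
cache.get("a"); cache.get("b")        # learn that "b" follows "a"
cache.memory.clear(); cache.last_key = None
v = cache.get("a")                    # miss: fetch "a", prefetch "b"
```

After the final call, both "a" and "b" sit in local memory while "c" was never fetched, so the next `get("b")` is a local hit.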
-
Publication number: 20170118107
Abstract: A method for optimizing network performance according to an embodiment of the present invention includes initializing a size of test data for network performance measurement, performing a test on the network performance by transmitting the test data to each of a first communication protocol and a second communication protocol, repeatedly performing the test, when the size of the test data is increased and then the increased size of the test data is a preset size or smaller based on a comparison between the increased size of the test data and the preset size, and setting a threshold value having a data size being a reference of switching between the first communication protocol and the second communication protocol, based on data collected through the performing of the test, when the increased size of the test data is larger than the preset size.
Type: Application
Filed: October 21, 2016
Publication date: April 27, 2017
Inventor: Hye Dong Jung
-
Publication number: 20170116152
Abstract: A local processing apparatus and a data transceiving method thereof are provided. The local processing apparatus includes a communication module configured to transceive the data with the one or more distributed storage units, a memory configured to store a program for transceiving the data and the one or more key-value data pairs, and a processor configured to execute the program. The processor confirms whether a first key-value data exists in the memory by executing the program, and determines whether to prefetch one or more key-value data corresponding to the first key-value data based on the confirmation result.
Type: Application
Filed: October 29, 2015
Publication date: April 27, 2017
Applicant: Korea Electronics Technology Institute
Inventors: Bong Jae KIM, Hye Dong JUNG
-
Publication number: 20150293786
Abstract: A method for processing a CR algorithm by actively utilizing a shared memory of a multi-processor, and a processor using the same are provided. A processor includes: a first multi-processor configured to process a first group of elements of a matrix in accordance with an algorithm; a second multi-processor configured to process a second group of the elements of the matrix in accordance with the algorithm; and a third multi-processor configured to process a third group which comprises some of the elements of the first group, some of the elements of the second group, and some of the elements which are not comprised in the first group and the second group, in accordance with the algorithm. Accordingly, a TDM having many elements can be calculated fast.
Type: Application
Filed: December 9, 2014
Publication date: October 15, 2015
Inventors: Hye Dong JUNG, Jae Gi SON
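The CR here is cyclic reduction and the TDM is a tridiagonal matrix. A sequential sketch of the algorithm being parallelized (the patent's contribution is the shared-memory work partitioning across multi-processors, not the recurrence itself):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d; a[0] and c[-1] are 0) by
    cyclic reduction: eliminate even-indexed unknowns, solve the
    half-size system recursively, then back-substitute. All the
    eliminations at one level are independent, which is what lets
    multiple processors share the work."""
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    idx = list(range(1, n, 2))       # keep odd-indexed equations
    ra, rb, rc, rd = [], [], [], []
    for i in idx:
        al = -a[i] / b[i - 1]
        be = -c[i] / b[i + 1] if i + 1 < n else 0.0
        ra.append(al * a[i - 1])
        rb.append(b[i] + al * c[i - 1]
                  + (be * a[i + 1] if i + 1 < n else 0.0))
        rc.append(be * c[i + 1] if i + 1 < n else 0.0)
        rd.append(d[i] + al * d[i - 1]
                  + (be * d[i + 1] if i + 1 < n else 0.0))
    xo = cyclic_reduction(ra, rb, rc, rd)
    x = np.zeros(n)
    x[idx] = xo
    for i in range(0, n, 2):         # recover even-indexed unknowns
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i + 1 < n else 0.0
        x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Classic test system: 2x_i - x_(i-1) - x_(i+1) = 1, n = 7.
a = [0.0] + [-1.0] * 6
b = [2.0] * 7
c = [-1.0] * 6 + [0.0]
d = [1.0] * 7
x = cyclic_reduction(a, b, c, d)
```

The overlapping "third group" in the claims corresponds to the boundary rows each elimination needs from its neighbors, which is why staging them in shared memory pays off.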
-
Patent number: 8654066
Abstract: Provided are a display apparatus and a method for controlling a backlight. A display apparatus including a backlight partitioned into a plurality of sections according to an exemplary embodiment of the present invention includes: an external brightness measurer measuring and providing front brightness values of the display apparatus corresponding to the sections; an image signal analyzer analyzing an inputted image signal and calculating and providing a brightness influence value of each section by adjacent sections; and a control signal corrector converting a source backlight control signal of each section corresponding to the image signal into an intermediate backlight control signal of each section on the basis of the front brightness values and the brightness influence values of the sections and comparing the intermediate backlight control signal with the previous final backlight control signal to generate the current final backlight control signal.
Type: Grant
Filed: December 23, 2010
Date of Patent: February 18, 2014
Assignee: Korea Electronics Technology Institute
Inventors: Hye Dong Jung, Hyung Su Lee
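One control tick of such a corrector can be sketched as: correct the source signal by neighbor influence and measured front brightness, then rate-limit the change against the previous final signal. The specific correction formula and step limit below are illustrative assumptions, not the patented equations:

```python
import numpy as np

def next_backlight(source, front_brightness, influence, prev_final,
                   target=0.5, step=0.1):
    """One control tick for a section-partitioned backlight."""
    # Intermediate signal: subtract light leaking in from adjacent
    # sections, then nudge toward the target front brightness.
    intermediate = source - influence @ source
    intermediate = intermediate + (target - front_brightness)
    intermediate = np.clip(intermediate, 0.0, 1.0)
    # Final signal: move from the previous final by at most `step`
    # per tick, limiting visible flicker.
    delta = np.clip(intermediate - prev_final, -step, step)
    return prev_final + delta

source = np.array([0.8, 0.6, 0.4])       # per-section source signal
front = np.array([0.5, 0.5, 0.5])        # measured front brightness
infl = np.array([[0.0, 0.1, 0.0],        # influence from neighbors
                 [0.1, 0.0, 0.1],
                 [0.0, 0.1, 0.0]])
prev = np.array([0.7, 0.7, 0.7])         # previous final signal
final = next_backlight(source, front, infl, prev)
```

The comparison with the previous final signal is the key detail: large corrections are spread over several ticks instead of being applied at once.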