Patents by Inventor Hye-Dong Jung
Hye-Dong Jung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240193376
Abstract: There is provided a customized personality agent system that evolves according to user satisfaction. An interactive service providing method according to an embodiment provides an interactive AI service to a user by using an agent selected from a plurality of agents based on the user's personality state, evaluates the user's satisfaction, and trains the agent that provides the interactive service. Accordingly, by searching for an agent whose personality is optimally suited to the user's personality state and providing the interactive AI service through it, service quality may be enhanced. Also, by rewarding and training the agent that provides the service based on the satisfaction of the user who receives it, the agent's personality may evolve to suit the user's personality.
Type: Application
Filed: December 12, 2023
Publication date: June 13, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Woong YOO, Hye Dong JUNG, Mi Ra LEE
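As a rough illustration only: the loop below scores candidate agents against the user's personality state, serves with the closest match, and applies a satisfaction-weighted update. The trait vectors, distance-based selection, and update rule are all assumptions made for the sketch, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)
agent_personalities = rng.normal(size=(5, 4))    # 5 agents x 4 personality traits (assumed)

def select_agent(user_personality: np.ndarray) -> int:
    """Pick the agent whose personality vector best matches the user's state."""
    distances = np.linalg.norm(agent_personalities - user_personality, axis=1)
    return int(np.argmin(distances))

def train_agent(idx: int, user_personality: np.ndarray, satisfaction: float,
                lr: float = 0.1) -> None:
    """Reward-weighted update: nudge the serving agent toward what satisfied the user."""
    agent_personalities[idx] += lr * satisfaction * (user_personality - agent_personalities[idx])

user = rng.normal(size=4)                        # user's current personality state
chosen = select_agent(user)
train_agent(chosen, user, satisfaction=0.8)      # satisfaction score from user feedback
```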
-
Publication number: 20240193920
Abstract: There is provided a method for predicting a user's personality by mapping multimodal information onto a personality expression space. A personality prediction method according to an embodiment extracts a multimodal feature from an input image in which a user appears, maps the extracted multimodal feature onto the personality expression space, and predicts the user's personality based on the mapping result. Accordingly, a user's personality may be predicted more accurately by establishing correlations between the user's various behavioral characteristics and personality traits.
Type: Application
Filed: December 12, 2023
Publication date: June 13, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Woong YOO, Mi Ra LEE, Hye Dong JUNG
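A minimal sketch of the mapping idea, assuming concatenation-based fusion and a learned linear projection (random here) into a five-axis personality expression space; none of these specifics come from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
projection = rng.normal(size=(5, 96))   # would be learned in practice; random stand-in
prototypes = np.eye(5)                  # one prototype per assumed trait axis

def fuse_modalities(face: np.ndarray, voice: np.ndarray, text: np.ndarray) -> np.ndarray:
    """Concatenate per-modality feature vectors into one multimodal feature."""
    return np.concatenate([face, voice, text])

fused = fuse_modalities(rng.normal(size=32), rng.normal(size=32), rng.normal(size=32))
point = projection @ fused              # position in the personality expression space
predicted_trait = int(np.argmax(prototypes @ point))
print(f"dominant trait axis: {predicted_trait}")
```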
-
Publication number: 20240193436
Abstract: There is provided a user personality prediction method using pre-obtained personality indicators and time-series information. According to an embodiment, a personality prediction method may acquire personality indicators representing a user's personality, acquire external features of the user as time-series data, train a personality prediction model on the correlations between the acquired external features and the personality indicators, and predict the user's personality indicators from the user's external features by using the trained model. Accordingly, a user's personality is predicted in real time from external features extracted in real time, so personality prediction can respond flexibly to subtle changes in Action Unit (AU) intensities acquired as time-series data.
Type: Application
Filed: December 12, 2023
Publication date: June 13, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Woong YOO, Hye Dong JUNG, Mi Ra LEE
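A minimal sketch of the kind of time-series model the abstract implies: a small PyTorch LSTM mapping a sequence of facial AU intensities to personality indicators. The architecture and sizes are assumptions, not the patent's.

```python
import torch
import torch.nn as nn

class PersonalityFromAUs(nn.Module):
    """Map a time series of Action Unit intensities to personality indicators."""
    def __init__(self, n_aus: int = 17, n_traits: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_aus, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_traits)

    def forward(self, au_sequence: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(au_sequence)     # h_n: (1, batch, hidden)
        return self.head(h_n[-1])                # one indicator vector per sequence

model = PersonalityFromAUs()
aus = torch.randn(8, 120, 17)                    # batch of 120-frame AU sequences
indicators = model(aus)                          # (8, 5) predicted personality indicators
```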
-
Publication number: 20240193969
Abstract: There is provided a method for creating multimodal training datasets for predicting characteristics of a user by using pseudo-labeling. According to an embodiment, the method may acquire a labelled dataset in which an image of a user is labelled with personality information and may extract a multimodal feature vector from the image of the acquired labelled dataset, may acquire an un-labelled dataset in which an image of a user is not labelled with personality information and may extract a multimodal feature vector from the image of the acquired un-labelled dataset, may measure a similarity between the extracted multimodal feature vector of the labelled dataset and the multimodal feature vector of the un-labelled dataset, and may label the un-labelled dataset based on the measured similarity. Accordingly, by creating multimodal training datasets for predicting a user personality by using pseudo-labeling, training datasets may be obtained rapidly, economically and effectively.
Type: Application
Filed: December 12, 2023
Publication date: June 13, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Jae Woong YOO, Mi Ra LEE, Hye Dong JUNG
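The pseudo-labeling step can be sketched directly: copy the personality label of the most similar labelled example onto each un-labelled one when the similarity clears a threshold. Cosine similarity and the threshold value are illustrative assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two feature matrices."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def pseudo_label(feat_l, labels_l, feat_u, threshold=0.8):
    sims = cosine(feat_u, feat_l)            # (n_unlabelled, n_labelled)
    best = sims.argmax(axis=1)               # most similar labelled example
    keep = sims.max(axis=1) >= threshold     # only confident matches get labels
    return [(i, labels_l[j]) for i, j in enumerate(best) if keep[i]]

rng = np.random.default_rng(0)
labelled = rng.normal(size=(10, 64)); labels = rng.integers(0, 5, size=10)
unlabelled = rng.normal(size=(4, 64))
# demo accepts all matches; in practice a high threshold filters unreliable ones
print(pseudo_label(labelled, labels, unlabelled, threshold=0.0))
```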
-
Patent number: 11741755
Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes: a 2D EDM generation step in which a 2D EDM generator generates a two-dimensional (2D) EDM containing information about the distances between body feature points recognized in image information; a 3D EDM generation step in which a 3D EDM generator receives the 2D EDM and generates a 3D EDM by using a first deep learning neural network trained on data whose input is a 2D EDM and whose ground truth is the corresponding 3D EDM; and a recognition step of recognizing a sign language or a gesture based on the 3D EDM.
Type: Grant
Filed: July 30, 2020
Date of Patent: August 29, 2023
Assignee: Korea Electronics Technology Institute
Inventors: Sang Ki Ko, Hye Dong Jung, Han Mu Park, Chang Jo Kim
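The EDM representation itself is easy to illustrate; the sketch below forms a 2D EDM from detected keypoints (the patented 2D-to-3D lifting network is not shown).

```python
import numpy as np

def euclidean_distance_matrix(points: np.ndarray) -> np.ndarray:
    """points: (n, d) keypoint coordinates -> (n, n) pairwise distance matrix."""
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

keypoints_2d = np.random.default_rng(0).uniform(0, 1, size=(15, 2))  # e.g. 15 body joints
edm_2d = euclidean_distance_matrix(keypoints_2d)
assert edm_2d.shape == (15, 15) and np.allclose(edm_2d, edm_2d.T)    # symmetric by construction
```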
-
Patent number: 11482134
Abstract: Disclosed is a method of providing a sign language video that reflects the appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint-information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
Type: Grant
Filed: August 8, 2019
Date of Patent: October 25, 2022
Assignee: Korea Electronics Technology Institute
Inventors: Hye Dong Jung, Sang Ki Ko, Han Mu Park, Chang Jo Kim
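A loose sketch of the joint-information step, assuming a recurrent network that turns concatenated word-level joint sequences into a smoothed sentence-level joint sequence for animation; the architecture and dimensions are placeholders, not the patented network.

```python
import torch
import torch.nn as nn

class SentenceJointGenerator(nn.Module):
    """Turn a sequence of word-joint vectors into sentence-joint information."""
    def __init__(self, joint_dim: int = 50, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(joint_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, joint_dim)

    def forward(self, word_joints: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(word_joints)       # (batch, frames, hidden)
        return self.out(h)                 # sentence-joint sequence, same length

gen = SentenceJointGenerator()
words = torch.randn(1, 3 * 30, 50)         # three 30-frame word clips, concatenated
sentence_joints = gen(words)               # would drive the motion model for rendering
```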
-
Patent number: 11386292
Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
Type: Grant
Filed: September 10, 2020
Date of Patent: July 12, 2022
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Bo Eun Kim, Hye Dong Jung
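The attention-map-to-guide-map pipeline can be sketched with stand-in modules: project the caption attention map to a latent variable, then derive a guide map from it. The linear projections and map sizes here are illustrative placeholders, not the trained model.

```python
import torch
import torch.nn as nn

to_latent = nn.Linear(196, 32)   # project a flattened 14x14 caption attention map to z
to_guide = nn.Linear(32, 196)    # derive a guide map from the latent variable

attention_map = torch.softmax(torch.randn(1, 196), dim=-1)   # stand-in attention map
z = to_latent(attention_map)                                 # latent variable in latent space
guide_map = to_guide(z).view(1, 14, 14)                      # guide map for caption decoding
# sampling different z values yields different guide maps, hence diverse captions
```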
-
Publication number: 20210117723
Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
Type: Application
Filed: September 10, 2020
Publication date: April 22, 2021
Inventors: Bo Eun KIM, Hye Dong JUNG
-
Patent number: 10978049
Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an "inputted text" and an "audio spectrum feature vector for generating an audio signal", the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
Type: Grant
Filed: January 24, 2019
Date of Patent: April 13, 2021
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Young Han Lee, Jong Yeol Yang, Choong Sang Cho, Hye Dong Jung
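A minimal sketch of the idea, assuming a TTS-style attention matrix that aligns each audio frame to an input character: boundaries where the aligned character changes give audio cut points. The synthetic matrix and frame duration are assumptions.

```python
import numpy as np

def segment_boundaries(attention: np.ndarray, frame_ms: float = 12.5) -> np.ndarray:
    """attention: (frames, chars) alignment weights -> boundary times in ms."""
    aligned_char = attention.argmax(axis=1)            # best-aligned char per audio frame
    changes = np.flatnonzero(np.diff(aligned_char)) + 1
    return changes * frame_ms

# synthetic alignment: each character spans 8 audio frames
frames, chars = 80, 10
att = np.zeros((frames, chars))
att[np.arange(frames), np.minimum(np.arange(frames) // 8, chars - 1)] = 1.0
print(segment_boundaries(att))                         # one boundary every ~100 ms
```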
-
Patent number: 10923106
Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate the audio to be synthesized with the video at a time t, based on correlations between the audio spectrum characteristic S_(t-1), which is used to generate the audio to be synthesized with the video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
Type: Grant
Filed: January 24, 2019
Date of Patent: February 16, 2021
Assignee: Korea Electronics Technology Institute
Inventors: Jong Yeol Yang, Young Han Lee, Choong Sang Cho, Hye Dong Jung
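A loose sketch of the recurrence, assuming standard attention: the previous spectrum frame S_(t-1) attends over the time-series video characteristics x to produce S_t. The dimensions and attention form are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VideoConditionedAudio(nn.Module):
    """Produce the next audio spectrum frame by attending S_(t-1) over video features."""
    def __init__(self, spec_dim: int = 80, vid_dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=spec_dim, kdim=vid_dim,
                                          vdim=vid_dim, num_heads=1, batch_first=True)

    def forward(self, s_prev: torch.Tensor, video_feats: torch.Tensor) -> torch.Tensor:
        s_t, _ = self.attn(s_prev, video_feats, video_feats)  # correlate S_(t-1) with x
        return s_t

model = VideoConditionedAudio()
s_prev = torch.randn(1, 1, 80)          # previous spectrum frame S_(t-1)
video = torch.randn(1, 90, 256)         # 90 frames of video characteristics x
s_t = model(s_prev, video)              # next spectrum frame S_t
```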
-
Publication number: 20210043110
Abstract: Disclosed is a method of providing a sign language video that reflects the appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint-information database, sequentially inputting the word-joint information to a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
Type: Application
Filed: August 8, 2019
Publication date: February 11, 2021
Inventors: Hye Dong JUNG, Sang Ki KO, Han Mu PARK, Chang Jo KIM
-
Publication number: 20210034846
Abstract: A method and apparatus for recognizing a sign language or a gesture by using a three-dimensional (3D) Euclidean distance matrix (EDM) are disclosed. The method includes: a 2D EDM generation step in which a 2D EDM generator generates a two-dimensional (2D) EDM containing information about the distances between body feature points recognized in image information; a 3D EDM generation step in which a 3D EDM generator receives the 2D EDM and generates a 3D EDM by using a first deep learning neural network trained on data whose input is a 2D EDM and whose ground truth is the corresponding 3D EDM; and a recognition step of recognizing a sign language or a gesture based on the 3D EDM.
Type: Application
Filed: July 30, 2020
Publication date: February 4, 2021
Applicant: Korea Electronics Technology Institute
Inventors: Sang Ki KO, Hye Dong JUNG, Han Mu PARK, Chang Jo KIM
-
Patent number: 10846568
Abstract: A deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing the pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust, high-performance automatic gesture recognition can be performed without being influenced by environment or conditions, even while using less training data.
Type: Grant
Filed: October 1, 2018
Date of Patent: November 24, 2020
Assignee: Korea Electronics Technology Institute
Inventors: Sang Ki Ko, Choong Sang Cho, Hye Dong Jung, Young Han Lee
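The contour-extraction-and-normalization step might look like the following, using real OpenCV calls; the specific normalization (center each contour, then scale it) is an illustrative assumption, not the patented scheme.

```python
import cv2
import numpy as np

def normalized_contours(gray: np.ndarray, min_points: int = 10) -> list:
    """Extract contours and normalize each for translation and scale invariance."""
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    out = []
    for c in contours:
        pts = c.reshape(-1, 2).astype(np.float32)
        if len(pts) < min_points:
            continue                                    # skip noise contours
        pts -= pts.mean(axis=0)                         # translation invariance
        pts /= (np.abs(pts).max() + 1e-8)               # scale invariance
        out.append(pts)                                 # training sample for the AI model
    return out

image = np.zeros((64, 64), np.uint8)
cv2.circle(image, (32, 32), 20, 255, -1)                # synthetic test shape
print(len(normalized_contours(image)))                  # 1 contour from the circle
```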
-
Patent number: 10726289
Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that well indicates the features of a given image is automatically generated, so that the image can be explained more precisely and clearly distinguished from other images.
Type: Grant
Filed: July 24, 2018
Date of Patent: July 28, 2020
Assignee: Korea Electronics Technology Institute
Inventors: Bo Eun Kim, Choong Sang Cho, Hye Dong Jung, Young Han Lee
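Distinctive-attribute extraction could be sketched with a TF-IDF-style score over the example captions: keep words that are frequent in this image's captions but rare elsewhere. The scoring choice is an assumption, not the patent's.

```python
from collections import Counter
import math

def distinctive_attributes(captions, corpus_captions, top_k=3):
    """Score words by caption frequency weighted against corpus rarity."""
    tf = Counter(w for c in captions for w in c.lower().split() if len(w) > 2)
    df = Counter(w for c in corpus_captions for w in set(c.lower().split()))
    n = len(corpus_captions)
    score = {w: tf[w] * math.log((n + 1) / (df[w] + 1)) for w in tf}
    return sorted(score, key=score.get, reverse=True)[:top_k]

captions = ["a striped cat on a red sofa", "striped cat sleeping on sofa"]
corpus = ["a dog in a park", "a man riding a bike", "a cat near a window"]
print(distinctive_attributes(captions, corpus))   # e.g. ['striped', 'sofa', ...]
```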
-
Publication number: 20200043465
Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate the audio to be synthesized with the video at a time t, based on correlations between the audio spectrum characteristic S_(t-1), which is used to generate the audio to be synthesized with the video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
Type: Application
Filed: January 24, 2019
Publication date: February 6, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Jong Yeol YANG, Young Han LEE, Choong Sang CHO, Hye Dong JUNG
-
Publication number: 20200043473
Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an "inputted text" and an "audio spectrum feature vector for generating an audio signal", the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
Type: Application
Filed: January 24, 2019
Publication date: February 6, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Young Han LEE, Jong Yeol YANG, Choong Sang CHO, Hye Dong JUNG
-
Publication number: 20200005086
Abstract: A deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing the pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust, high-performance automatic gesture recognition can be performed without being influenced by environment or conditions, even while using less training data.
Type: Application
Filed: October 1, 2018
Publication date: January 2, 2020
Applicant: Korea Electronics Technology Institute
Inventors: Sang Ki KO, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
-
Publication number: 20190286931
Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that well indicates the features of a given image is automatically generated, so that the image can be explained more precisely and clearly distinguished from other images.
Type: Application
Filed: July 24, 2018
Publication date: September 19, 2019
Applicant: Korea Electronics Technology Institute
Inventors: Bo Eun KIM, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
-
Patent number: 10230615
Abstract: A method for optimizing network performance according to an embodiment of the present invention includes: initializing the size of test data for network performance measurement; testing network performance by transmitting the test data over each of a first communication protocol and a second communication protocol; increasing the size of the test data and repeating the test as long as the increased size is no larger than a preset size; and, when the increased size of the test data is larger than the preset size, setting a threshold value, based on the data collected through the tests, whose data size serves as the reference for switching between the first communication protocol and the second communication protocol.
Type: Grant
Filed: October 21, 2016
Date of Patent: March 12, 2019
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventor: Hye Dong Jung
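A hypothetical harness for the loop the abstract describes: send growing test payloads over two stand-in transports, and record the first size at which the second protocol outruns the first as the switching threshold. The send functions and their cost models are invented for the sketch.

```python
import time

def _timed(fn, payload: bytes) -> float:
    """Measure one transmission of the payload."""
    start = time.perf_counter(); fn(payload); return time.perf_counter() - start

# stand-in transports: A has low setup cost, B has high setup cost but more throughput
def send_proto_a(data: bytes) -> None: time.sleep(1e-6 * len(data) + 1e-4)
def send_proto_b(data: bytes) -> None: time.sleep(5e-7 * len(data) + 1e-3)

def find_switch_threshold(max_size: int = 1 << 20) -> int:
    size, threshold = 1024, 0
    while size <= max_size:                  # repeat until the size exceeds the preset max
        payload = b"x" * size
        t_a = _timed(send_proto_a, payload)
        t_b = _timed(send_proto_b, payload)
        if t_b < t_a and threshold == 0:
            threshold = size                 # first size at which protocol B wins
        size *= 2                            # increase the test data size
    return threshold

print(f"switch to protocol B above ~{find_switch_threshold()} bytes")
```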
-
Patent number: 9811501
Abstract: A local processing apparatus and a data transceiving method thereof are provided. The local processing apparatus includes a communication module configured to transceive data with one or more distributed storage units, a memory configured to store a program for transceiving the data and one or more key-value data pairs, and a processor configured to execute the program. By executing the program, the processor confirms whether first key-value data exists in the memory and determines, based on the confirmation result, whether to prefetch one or more key-value data pairs corresponding to the first key-value data.
Type: Grant
Filed: October 29, 2015
Date of Patent: November 7, 2017
Assignee: Korea Electronics Technology Institute
Inventors: Bong Jae Kim, Hye Dong Jung
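The check-then-prefetch logic can be sketched in a few lines: on a lookup hit, also prefetch keys known to co-occur with the requested one. The co-occurrence table and fetch function are illustrative stand-ins for learned correlations and distributed storage I/O.

```python
local_cache: dict[str, bytes] = {"user:1": b"alice"}
related_keys = {"user:1": ["user:1:prefs", "user:1:friends"]}  # assumed correlations

def fetch_remote(key: str) -> bytes:
    """Stand-in for a read from the distributed storage units."""
    return f"<value of {key}>".encode()

def get(key: str) -> bytes:
    if key in local_cache:                   # confirm the first key-value data exists
        for rk in related_keys.get(key, []): # prefetch correlated key-value pairs
            local_cache.setdefault(rk, fetch_remote(rk))
        return local_cache[key]
    value = fetch_remote(key)                # miss: fetch and cache the requested pair
    local_cache[key] = value
    return value

print(get("user:1"))
print(sorted(local_cache))                   # prefetched keys are now cached locally
```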