Patents by Inventor Ju Hong YOON
Ju Hong YOON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250148270
Abstract: There is provided a training method of a multi-task integrated deep learning model. A multi-task integrated deep learning model training method according to an embodiment may generate training data for a plurality of visual intelligence tasks from visual data in a batch, and may train a multi-task integrated deep learning model which performs a plurality of visual intelligence tasks by using the generated training data. Accordingly, training data for training an integrated deep learning model which performs various visual intelligence tasks is generated in a batch through multi-data conversion kernels, so that appropriate training data for performing multiple tasks may be easily obtained and effective training of a multi-task integrated deep learning model is possible.
Type: Application
Filed: April 24, 2024
Publication date: May 8, 2025
Applicant: Korea Electronics Technology Institute
Inventors: Choong Sang CHO, Gui Sik KIM, Ju Hong YOON, Young Han LEE
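The batch-generation idea above can be illustrated with a minimal sketch. The kernel names and the two toy tasks (rotation prediction and denoising) are hypothetical stand-ins for the patent's multi-data conversion kernels, not the claimed method:

```python
import numpy as np

# Hypothetical "multi-data conversion kernels": each turns a raw image batch
# into (input, target) training pairs for one visual intelligence task.
def rotation_kernel(images, rng):
    # Self-supervised rotation prediction: target is the rotation index.
    k = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, ki) for img, ki in zip(images, k)])
    return rotated, k

def denoise_kernel(images, rng):
    # Denoising task: input is a noisy image, target is the clean image.
    noisy = images + rng.normal(0.0, 0.1, size=images.shape)
    return noisy, images

def build_multitask_batch(images, kernels, rng):
    # Generate training data for every task from the same batch in one pass.
    return {name: kernel(images, rng) for name, kernel in kernels.items()}

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32))  # toy batch of 8 grayscale images
batch = build_multitask_batch(
    images, {"rotation": rotation_kernel, "denoise": denoise_kernel}, rng)
```

One forward pass over the resulting dictionary could then drive a shared backbone with one loss head per task.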
-
Publication number: 20250104344
Abstract: There are provided an apparatus and a method for reconstructing a 3D human object in real time based on a monocular color image. A 3D human object reconstruction apparatus according to an embodiment extracts a pixel-aligned feature from a monocular image, extracts a ray-invariant feature from the pixel-aligned feature, generates encoded position information by encoding position information of a point, predicts a signed distance (SD) of a point from the extracted ray-invariant feature and the encoded position information, and reconstructs a 3D human object by using the predicted SD. Accordingly, the ray-invariant feature extracted from the pixel-aligned feature and the encoded position information are used, so that the amount of computation for predicting SDs of points in a 3D space can be noticeably reduced and speed can be remarkably enhanced.
Type: Application
Filed: December 13, 2022
Publication date: March 27, 2025
Applicant: Korea Electronics Technology Institute
Inventors: Min Gyu PARK, Ju Hong YOON, Ju Mi KANG, Je Woo KIM, Yong Hoon KWON
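The final prediction step described above (a feature plus an encoded point position regressed to a signed distance) can be sketched as follows. The NeRF-style encoding, the MLP shape, and all weights are illustrative assumptions, not the patent's network:

```python
import numpy as np

def positional_encoding(p, num_freqs=4):
    # NeRF-style encoding of a 3D point: [sin(2^k p), cos(2^k p)] per axis.
    freqs = 2.0 ** np.arange(num_freqs)
    angles = np.outer(freqs, p).ravel()          # (num_freqs * 3,)
    return np.concatenate([np.sin(angles), np.cos(angles)])

def predict_signed_distance(ray_feat, p, W1, b1, W2, b2):
    # Tiny MLP head: concatenate the ray-invariant feature with the encoded
    # position and regress a scalar signed distance for point p.
    x = np.concatenate([ray_feat, positional_encoding(p)])
    h = np.maximum(0.0, W1 @ x + b1)             # ReLU hidden layer
    return float((W2 @ h + b2)[0])

rng = np.random.default_rng(0)
feat_dim, enc_dim, hidden = 16, 24, 32           # enc_dim = 2 * 4 freqs * 3 axes
W1 = rng.normal(size=(hidden, feat_dim + enc_dim)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(1, hidden)) * 0.1
b2 = np.zeros(1)
sd = predict_signed_distance(rng.normal(size=feat_dim),
                             np.array([0.1, -0.2, 0.5]), W1, b1, W2, b2)
```

Because the ray-invariant feature is shared by all points on a camera ray, it is computed once per pixel while only the cheap position encoding varies per point, which is where the claimed computation saving comes from.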
-
Publication number: 20240394546
Abstract: There is provided a learning method and system of a backbone network for visual intelligence based on self-supervised learning and multi-head. A network learning system according to an embodiment generates a plurality of first modified vectors by modifying a first feature vector outputted from a teacher network, generates a plurality of second modified vectors by modifying a second feature vector outputted from a student network, calculates a loss by using the first modified vectors and the second modified vectors, and optimizes parameters of the student network. Accordingly, the effect of learning by knowledge distillation may be enhanced by training the backbone network for visual intelligence as if group learning were performed by various teacher and student networks.
Type: Application
Filed: July 24, 2023
Publication date: November 28, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Choong Sang CHO, Young Han LEE, Ju Hong YOON, Gui Sik KIM
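A minimal sketch of the multi-head distillation loss described above, under assumed choices: the "modified vectors" are produced here by random linear projection heads with L2 normalization, and the loss is an average cosine distance over all teacher/student view pairs. The actual modifications and loss in the patent may differ:

```python
import numpy as np

def modified_vectors(feat, heads):
    # Produce several "modified" views of one feature vector, one per head
    # (here: random linear projections followed by L2 normalization).
    out = []
    for W in heads:
        v = W @ feat
        out.append(v / np.linalg.norm(v))
    return out

def distillation_loss(teacher_views, student_views):
    # Average cosine-distance loss over all teacher/student view pairs,
    # mimicking group learning among many teachers and students.
    losses = [1.0 - float(t @ s)
              for t in teacher_views for s in student_views]
    return sum(losses) / len(losses)

rng = np.random.default_rng(0)
heads_t = [rng.normal(size=(8, 16)) for _ in range(3)]
heads_s = [rng.normal(size=(8, 16)) for _ in range(3)]
t_feat = rng.normal(size=16)   # output of the (frozen) teacher backbone
s_feat = rng.normal(size=16)   # output of the student backbone being trained
loss = distillation_loss(modified_vectors(t_feat, heads_t),
                         modified_vectors(s_feat, heads_s))
```

In training, this scalar would be backpropagated into the student's parameters only, with the teacher kept fixed.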
-
Publication number: 20240212267
Abstract: There are provided an apparatus and a method for reconstructing a 3D human object based on a monocular image through depth image-based implicit function learning. A 3D human object reconstruction method according to an embodiment includes: predicting a double-sided orthographic depth map from a front perspective color image of a human object; predicting a signed distance (SD) regarding points on a 3D space from the predicted double-sided orthographic depth map; and reconstructing a 3D human object by using the predicted SD. Accordingly, a human object and details can be naturally reconstructed with respect to not only an area visible through a front perspective color image of the human object but also an invisible area.
Type: Application
Filed: December 14, 2023
Publication date: June 27, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Min Gyu PARK, Ju Hong YOON, Ju Mi KANG, Je Woo KIM
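To make the double-sided depth idea concrete, here is a toy sketch of how front and back orthographic depth maps bound a body and yield a crude signed distance for a query point. This slab-based approximation along the viewing axis is an assumption for illustration, not the patent's learned predictor:

```python
import numpy as np

def signed_distance_from_depths(front, back, p):
    # Approximate signed distance of point p = (x, y, z) using the two
    # orthographic depth maps: negative inside the slab [front, back],
    # positive outside (distance measured along the viewing axis only).
    x, y, z = p
    i, j = int(round(y)), int(round(x))        # nearest-pixel lookup
    return max(front[i, j] - z, z - back[i, j])

# Toy maps: a flat "body" occupying depths 2.0 .. 5.0 at every pixel.
front = np.full((4, 4), 2.0)
back = np.full((4, 4), 5.0)

inside = signed_distance_from_depths(front, back, (1.0, 1.0, 3.0))
outside = signed_distance_from_depths(front, back, (1.0, 1.0, 6.0))
```

The back map is what lets the invisible rear surface be reconstructed: without it, points behind the front surface would have no depth constraint at all.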
-
Publication number: 20240202951
Abstract: There is provided a depth estimation method for a small baseline-stereo camera through LiDAR sensor fusion. A depth map estimation method according to an embodiment may estimate a high-resolution depth map from a small baseline-stereo image based on deep learning, by using transfer learning from a deep learning network that is trained to estimate a depth map from a wide baseline-stereo image. Accordingly, in a device which has a small baseline-stereo camera installed due to structural constraints, such as a smartphone, a wearable AR/VR device, or a drone, 3D image quality can be enhanced. In addition, according to embodiments, pseudo-LiDAR data may be generated by using a depth map estimated from a small baseline-stereo image, and may be used for replacing or reinforcing LiDAR data.
Type: Application
Filed: December 14, 2023
Publication date: June 20, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Min Gyu PARK, Ju Hong YOON, Min Ho LEE, Je Woo KIM
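The pseudo-LiDAR step mentioned above is commonly done by back-projecting an estimated depth map through the pinhole camera model. A minimal sketch (the intrinsics and toy depth values are made up for illustration):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    # Back-project every pixel of an estimated depth map into a 3D point
    # cloud (pseudo-LiDAR) using the pinhole camera model:
    #   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v, u]
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 6), 10.0)   # toy 4x6 depth map, 10 m everywhere
cloud = depth_to_pseudo_lidar(depth, fx=500.0, fy=500.0, cx=3.0, cy=2.0)
```

The resulting point cloud can then be fed to any LiDAR-based pipeline, which is what allows it to replace or reinforce real LiDAR data.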
-
Publication number: 20240203161
Abstract: There is provided an emotion prediction method based on virtual facial expression image augmentation. The emotion prediction method may acquire a user facial image, may extract a facial expression feature from the acquired user facial image, and may predict a user emotion from the extracted facial expression feature. The emotion prediction method may extract the facial expression feature by using a facial expression recognition network, the facial expression recognition network being an AI model that is trained to receive a user facial image and to extract a facial expression feature. The facial expression recognition network is retrained with virtual facial images which are augmented from a facial image that causes a failure in emotion recognition. Accordingly, by augmenting features of a facial expression image that causes a failure in prediction through error feedback, facial expression recognition performance can be enhanced.
Type: Application
Filed: December 11, 2023
Publication date: June 20, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Ju Hong YOON, Min Gyu PARK, Yong Hoon KWON, Je Woo KIM
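The error-feedback loop above (find failures, synthesize virtual variants, retrain) can be sketched as follows. Small random pixel jitter stands in for the patent's virtual facial expression image synthesis, and the toy model is hypothetical:

```python
import numpy as np

def augment_failures(images, labels, predict, n_aug, rng):
    # Error feedback: keep only the images the current model mispredicts,
    # then synthesize n_aug "virtual" variants of each (here: small random
    # pixel jitter stands in for virtual facial expression synthesis).
    aug_x, aug_y = [], []
    for img, lab in zip(images, labels):
        if predict(img) != lab:                # recognition failure
            for _ in range(n_aug):
                aug_x.append(img + rng.normal(0.0, 0.05, size=img.shape))
                aug_y.append(lab)
    return np.array(aug_x), np.array(aug_y)

rng = np.random.default_rng(0)
images = rng.random((6, 8, 8))
labels = np.array([0, 1, 0, 1, 0, 1])
always_zero = lambda img: 0                    # toy model: always predicts 0
aug_x, aug_y = augment_failures(images, labels, always_zero, n_aug=3, rng=rng)
```

Retraining on `(aug_x, aug_y)` concentrates capacity exactly on the expressions the network currently gets wrong.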
-
Publication number: 20240062522
Abstract: There is provided a self-directed visual intelligence system. The self-directed visual intelligence system according to an embodiment prepares data necessary for training a visual intelligence model when a change in a visual context of the real world is recognized, configures a visual intelligence model and training data for it based on the changed visual context, trains the configured visual intelligence model with the training data, and evaluates performance of the trained visual intelligence model. Accordingly, the visual intelligence model is corrected/improved in a self-directed way according to a change in a visual context of the real world, and grows/advances by itself, so that performance of the visual intelligence model is maintained at its best even in response to any change in the context of the real world.
Type: Application
Filed: October 19, 2022
Publication date: February 22, 2024
Applicant: Korea Electronics Technology Institute
Inventors: Choong Sang CHO, Ju Hong YOON, Young Han LEE
-
Patent number: 11847837
Abstract: A method and an apparatus for detecting a lane are provided. The lane detection apparatus according to an embodiment includes: an acquisition unit configured to acquire a front image of a vehicle; and a processor configured to input the image acquired through the acquisition unit to an AI model, and to detect information of a lane on a road, and the AI model is trained to detect lane information that is expressed in a plane form from an input image. Accordingly, data imbalance between a lane area and a non-lane area can be solved by using the AI model which learns/predicts lane information that is expressed in a plane form, not in a segment form such as a straight line or curved line.
Type: Grant
Filed: December 30, 2020
Date of Patent: December 19, 2023
Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventors: Min Gyu Park, Ju Hong Yoon, Je Woo Kim
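One plausible reading of "plane form" is a filled lane region between boundary curves rather than thin boundary segments. The following toy sketch of that labeling (the polynomial boundaries and dimensions are invented, not from the patent) shows why it eases lane/non-lane imbalance:

```python
import numpy as np

def lane_plane_mask(h, w, left_poly, right_poly):
    # Label a lane as a filled plane: every pixel between the left and right
    # boundary curves is lane, instead of only the thin boundary segments.
    mask = np.zeros((h, w), dtype=np.uint8)
    for row in range(h):
        xl = int(np.polyval(left_poly, row))   # left boundary x at this row
        xr = int(np.polyval(right_poly, row))  # right boundary x at this row
        mask[row, max(xl, 0):min(xr, w)] = 1
    return mask

mask = lane_plane_mask(10, 20, left_poly=[0.5, 2.0], right_poly=[0.5, 12.0])
ratio = mask.mean()   # fraction of lane pixels, far above thin-segment labels
```

With segment-form labels only a few pixels per row would be positive; the plane-form mask makes the positive class a substantial fraction of the image.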
-
Publication number: 20210334553
Abstract: A method and an apparatus for detecting a lane are provided. The lane detection apparatus according to an embodiment includes: an acquisition unit configured to acquire a front image of a vehicle; and a processor configured to input the image acquired through the acquisition unit to an AI model, and to detect information of a lane on a road, and the AI model is trained to detect lane information that is expressed in a plane form from an input image. Accordingly, data imbalance between a lane area and a non-lane area can be solved by using the AI model which learns/predicts lane information that is expressed in a plane form, not in a segment form such as a straight line or curved line.
Type: Application
Filed: December 30, 2020
Publication date: October 28, 2021
Inventors: Min Gyu PARK, Ju Hong YOON, Je Woo KIM
-
Patent number: 10966599
Abstract: An endoscopic stereo matching method and apparatus using a direct attenuation model are provided. A method for generating a depth image according to an embodiment includes: generating a stereo image; and estimating a depth image from the stereo image based on the attenuation trend of the light of the illumination used to generate the stereo image. Accordingly, a dense depth image can be obtained by using images obtained from a capsule endoscope, and thus the geometric structure of the inside of the GI tract can be estimated.
Type: Grant
Filed: May 29, 2019
Date of Patent: April 6, 2021
Assignee: Korea Electronics Technology Institute
Inventors: Min Gyu Park, Young Bae Hwang, Ju Hong Yoon
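The attenuation cue can be illustrated with the standard direct attenuation model, where observed intensity decays exponentially with distance from the co-located illumination. The exponential form, the constant albedo, and the coefficient below are illustrative assumptions, not the patent's exact model:

```python
import numpy as np

def depth_from_attenuation(intensity, albedo, beta):
    # Direct attenuation model: observed intensity I = J * exp(-beta * d),
    # where J is the scene radiance (albedo) and beta the attenuation
    # coefficient of the endoscope's co-located illumination. Inverting
    # gives a per-pixel depth cue: d = -ln(I / J) / beta.
    return -np.log(intensity / albedo) / beta

# Round trip on synthetic data: simulate intensities at known depths,
# then recover the depths from the attenuation model.
beta, albedo = 0.8, 1.0
true_depth = np.array([[1.0, 2.0], [3.0, 4.0]])
intensity = albedo * np.exp(-beta * true_depth)
recovered = depth_from_attenuation(intensity, albedo, beta)
```

In a stereo setting, such a per-pixel attenuation-based depth prior can regularize the matching cost in dark, textureless GI-tract regions where correspondence alone is ambiguous.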
-
Publication number: 20190365213
Abstract: An endoscopic stereo matching method and apparatus using a direct attenuation model are provided. A method for generating a depth image according to an embodiment includes: generating a stereo image; and estimating a depth image from the stereo image based on the attenuation trend of the light of the illumination used to generate the stereo image. Accordingly, a dense depth image can be obtained by using images obtained from a capsule endoscope, and thus the geometric structure of the inside of the GI tract can be estimated.
Type: Application
Filed: May 29, 2019
Publication date: December 5, 2019
Applicant: Korea Electronics Technology Institute
Inventors: Min Gyu PARK, Young Bae HWANG, Ju Hong YOON