Patents by Inventor Ig-Jae Kim

Ig-Jae Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220190596
    Abstract: Embodiments relate to a method for predicting power generation and remaining useful life to predict the operational soundness of a power plant, and a system for performing the same, the method including acquiring sensing data from each of a plurality of sensors included in a plurality of systems in the power plant, outputting a predicted power generation and a predicted remaining useful life from a measurement value in the sensing data of an input sensor through a pre-trained prediction model, assessing the operational soundness in terms of power generation and remaining useful life using the prediction result and the current result for each aspect, and determining the operational soundness of the system based on prediction uncertainty and the assessment result for each aspect.
    Type: Application
    Filed: September 30, 2021
    Publication date: June 16, 2022
    Inventors: Ig Jae KIM, Heeseung CHOI, Yeji CHOI, Jisoo KIM
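
The steps in the abstract of publication 20220190596 map naturally onto a small pipeline. Below is a minimal sketch, assuming a toy ensemble as the "pre-trained prediction model", random placeholder weights, and made-up deviation/uncertainty thresholds; it illustrates the control flow (predict, compare with the current result, gate on uncertainty), not the patented model.

```python
import numpy as np

# Hedged sketch of the assessment flow described in the abstract.
# The ensemble, thresholds, and sensor layout are illustrative assumptions.

rng = np.random.default_rng(0)

def predict_with_uncertainty(models, x):
    """Return the ensemble mean and spread (a stand-in for prediction uncertainty)."""
    preds = np.array([w @ x + b for w, b in models])
    return preds.mean(), preds.std()

# Toy "pre-trained" ensemble: each member maps a sensor reading vector to a
# target (power or remaining useful life). The weights are random placeholders.
n_sensors = 8
power_models = [(rng.normal(size=n_sensors), rng.normal()) for _ in range(5)]
rul_models   = [(rng.normal(size=n_sensors), rng.normal()) for _ in range(5)]

# Sensing data acquired from the plant's sensors at the current time step.
x_now = rng.normal(size=n_sensors)
current_power, current_rul = 3.2, 120.0        # current results for each aspect

pred_power, unc_power = predict_with_uncertainty(power_models, x_now)
pred_rul, unc_rul     = predict_with_uncertainty(rul_models, x_now)

def assess(pred, current, unc, rel_tol=0.2, max_unc=1.5):
    """Compare prediction against the current result, gated on uncertainty (toy rule)."""
    deviates = abs(pred - current) > rel_tol * abs(current)
    return "unsound" if (deviates and unc <= max_unc) else "sound"

print("power aspect:", assess(pred_power, current_power, unc_power))
print("RUL aspect:  ", assess(pred_rul, current_rul, unc_rul))
```
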
  • Publication number: 20220120827
    Abstract: Embodiments relate to a method including obtaining m measured values for each field sensor by measuring, at m time steps, a first sensor group including a first type of field sensor and a second sensor group including a different second type of field sensor, both attached to a rigid body; and calibrating the sensor frame of the first type of field sensor and the sensor frame of the second type of field sensor by using a correlation between the two types of field sensor based on the measured values of at least some of the m time steps, wherein the multiple field sensors include different field sensors among a magnetic field sensor, an acceleration sensor, and a force sensor; and a system therefor.
    Type: Application
    Filed: November 24, 2020
    Publication date: April 21, 2022
    Inventors: Ig Jae KIM, Je Hyeong HONG, Donghoon KANG
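
For publication 20220120827, one concrete way to picture calibrating the sensor frames of two field-sensor types rigidly attached to the same body is an orthogonal Procrustes fit of the fixed rotation relating their readings. The sketch below uses that standard fit on synthetic data; the rotation-only model, the SVD solution, and the noise level are assumptions for illustration, not the patented calibration.

```python
import numpy as np

# Hedged sketch: two tri-axial field sensors rigidly attached to the same body
# observe the same field in their own frames, so paired readings are related
# by a fixed rotation R (b_i ≈ R a_i). An orthogonal Procrustes fit over m
# paired measurements recovers R.

rng = np.random.default_rng(1)
m = 50

R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # unknown relative rotation
if np.linalg.det(R_true) < 0:                          # make it a proper rotation
    R_true[:, 0] *= -1

A = rng.normal(size=(m, 3))                            # first-type sensor readings
B = A @ R_true.T + 0.01 * rng.normal(size=(m, 3))      # second-type readings + noise

# Procrustes: maximize trace(R^T B^T A) over rotations via the SVD of B^T A.
U, _, Vt = np.linalg.svd(B.T @ A)
if np.linalg.det(U @ Vt) < 0:                          # enforce det(R) = +1
    U[:, -1] *= -1
R_est = U @ Vt

print("frame alignment error:", float(np.linalg.norm(R_est - R_true)))
```
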
  • Publication number: 20220036054
    Abstract: Embodiments relate to a companion animal identification method including acquiring a preview image for capturing a face of a target companion animal, checking if the face of the target companion animal is aligned according to a preset criterion, capturing the face of the target companion animal when it is determined that the face of the target companion animal is aligned, and identifying the target companion animal by extracting features from a face image of the target companion animal having an aligned face view, and an identification system for performing the same.
    Type: Application
    Filed: July 29, 2021
    Publication date: February 3, 2022
    Inventors: Ig Jae KIM, Yu-Jin HONG, Hyeonjung PARK, Minsoo KIM, Ikkyu CHOI
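
A minimal sketch of the capture-and-identify flow in publication 20220036054: check whether the face in the preview is aligned, capture when it is, extract features, and match against an enrolled gallery. The alignment test, the feature extractor, the gallery, and the threshold are all placeholders.

```python
import numpy as np

# Hedged sketch of the companion-animal identification flow in the abstract.

rng = np.random.default_rng(2)

def is_aligned(landmarks, max_tilt_px=5.0):
    """Toy alignment criterion: the animal's eyes sit at roughly the same height."""
    left_eye, right_eye = landmarks["left_eye"], landmarks["right_eye"]
    return abs(left_eye[1] - right_eye[1]) < max_tilt_px

def extract_features(face_image):
    """Placeholder embedding; a real system would use a trained network."""
    v = face_image.reshape(-1)[:128].astype(float)
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-8)

def identify(query_feat, gallery, threshold=0.5):
    sims = {name: float(feat @ query_feat) for name, feat in gallery.items()}
    best = max(sims, key=sims.get)
    return (best, sims[best]) if sims[best] >= threshold else (None, sims[best])

# Preview frame and detected landmarks (made-up values).
landmarks = {"left_eye": (40.0, 52.0), "right_eye": (88.0, 54.0)}
preview = rng.random((128, 128))

if is_aligned(landmarks):
    captured = preview                          # capture once the pose criterion is met
    query = extract_features(captured)
    gallery = {"milo": extract_features(rng.random((128, 128))),
               "bori": extract_features(preview + 0.01 * rng.random((128, 128)))}
    print(identify(query, gallery))
else:
    print("face not aligned; keep showing the preview")
```
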
  • Publication number: 20220036066
    Abstract: Disclosed are an X-RAY image reading support method including the steps of acquiring a target X-RAY image photographed by transmitting or reflecting X-RAY in a reading space in which an object to be read is disposed; applying the target X-RAY image to a reading model that extracts features from an input image; and identifying the object to be read as an object corresponding to a classified class when the object to be read is classified as a set class based on a first feature set extracted from the target X-RAY image, and an X-RAY image reading support system performing the method.
    Type: Application
    Filed: November 25, 2020
    Publication date: February 3, 2022
    Inventors: Junghyun CHO, Hyunwoo CHO, Haesol PARK, Ig Jae KIM
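
For publication 20220036066, the reading flow can be sketched as: extract a feature set from the target X-ray image with a reading model, and identify the object only when it is classified into one of the preset classes. The nearest-prototype "reading model" and the acceptance threshold below are illustrative stand-ins for the trained model.

```python
import numpy as np

# Hedged sketch of the reading-support flow in the abstract.

rng = np.random.default_rng(3)

def reading_model(xray_image):
    """Placeholder feature extractor (the 'first feature set')."""
    v = xray_image.reshape(-1).astype(float)
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-8)

# Reference images of objects belonging to the preset classes (toy stand-ins).
references = {"knife": rng.random((64, 64)), "battery": rng.random((64, 64))}
prototypes = {cls: reading_model(img) for cls, img in references.items()}

# Target X-ray image acquired from the reading space; here it is a noisy copy
# of the "knife" reference so that the example resolves to a known class.
target = references["knife"] + 0.05 * rng.random((64, 64))
feat = reading_model(target)

scores = {cls: float(proto @ feat) for cls, proto in prototypes.items()}
best = max(scores, key=scores.get)
if scores[best] > 0.8:                          # made-up acceptance threshold
    print(f"object identified as: {best} ({scores[best]:.2f})")
else:
    print("no preset class matched; refer to a human reader")
```
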
  • Publication number: 20210406301
    Abstract: Embodiments relate to a method for determining a search region including acquiring object information of a target object included in an image query, generating a set of non-image features of the target object based on the object information, setting a search candidate region based on a user input, acquiring information associated with the search candidate region from a region database, and determining a search region based on at least one of the information associated with the search candidate region or at least part of the set of non-image features, and a system for performing the same.
    Type: Application
    Filed: November 23, 2020
    Publication date: December 30, 2021
    Inventors: Ig Jae KIM, Heeseung CHOI, Haksub KIM, Seungho CHAE, Yoonsik YANG
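
The region-narrowing logic of publication 20210406301 can be illustrated with a toy region database: non-image features derived from the query object are checked against information about the user-selected candidate region to decide the final search region. The schema and the compatibility rule are assumptions made for the example.

```python
# Hedged sketch of the search-region determination in the abstract.

query_object = {"class": "person", "last_seen": "2020-11-23T10:00", "color": "red"}

# Set of non-image features generated from the object information.
non_image_features = {"class": query_object["class"], "color": query_object["color"]}

# Candidate region chosen by the user, looked up in a toy "region database".
region_db = {
    "station_plaza": {"cameras": ["cam01", "cam02"], "classes_seen": {"person", "car"}},
    "parking_lot":   {"cameras": ["cam07"],          "classes_seen": {"car"}},
}
candidate = "station_plaza"
region_info = region_db[candidate]

# Determine the search region: keep the candidate only if its recorded
# object classes are compatible with the query's non-image features.
if non_image_features["class"] in region_info["classes_seen"]:
    search_region = {"region": candidate, "cameras": region_info["cameras"]}
else:
    search_region = None

print(search_region)
```
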
  • Publication number: 20210241463
    Abstract: Embodiments relate to a method for supporting X-ray image reading including receiving information associated with a reading target positioned in a reading space where X-rays pass through or are reflected off the target, acquiring a non-X-RAY image of an item object based on the information associated with the reading target, and generating a fake X-RAY image of the item object by applying the non-X-RAY image of the item object to an image transform model, and a system for performing the same.
    Type: Application
    Filed: November 24, 2020
    Publication date: August 5, 2021
    Inventors: Junghyun CHO, Ig Jae KIM, Hyunwoo CHO, Haesol PARK
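
A sketch of the generation step in publication 20210241463: given a non-X-ray image of an item object, an image transform model synthesizes a fake X-ray image for reading support. The real transform model would be learned from data; the grayscale-and-invert placeholder below only demonstrates the data flow.

```python
import numpy as np

# Hedged sketch: the transform below is a placeholder for the learned image
# transform model named in the abstract.

rng = np.random.default_rng(5)

def image_transform_model(non_xray_rgb):
    """Placeholder: grayscale then invert, mimicking a transmission-style look."""
    gray = non_xray_rgb.mean(axis=-1)
    return 1.0 - gray / gray.max()

# Information associated with the reading target would normally drive which
# item image is fetched; here a random stand-in image is used.
item_rgb = rng.random((96, 96, 3))
fake_xray = image_transform_model(item_rgb)

print(fake_xray.shape, float(fake_xray.min()), float(fake_xray.max()))
```
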
  • Publication number: 20210225013
    Abstract: Embodiments relate to a method for re-identifying a target object based on location information of closed-circuit television (CCTV) and movement information of the target object, and a system for performing the same, the method including detecting at least one object of interest in a plurality of source videos based on a preset condition of the object of interest, tracking the detected object of interest in the corresponding source video to generate a tube of the object of interest, receiving an image query including a target patch and location information of the CCTV, determining at least one search candidate area based on the location information of the CCTV and the movement information of the target object, re-identifying whether the object of interest appearing in its tube is the target object, and providing a user with the tube of the re-identified object of interest.
    Type: Application
    Filed: September 30, 2020
    Publication date: July 22, 2021
    Inventors: Ig Jae KIM, Heeseung CHOI, Haksub KIM, Seungho CHAE, Yoonsik YANG
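
The flow of publication 20210225013 can be sketched with toy data: per-camera object tubes, an image query carrying a target patch and the CCTV location, a candidate area derived from movement information, and a feature comparison for re-identification. The embedding, the camera adjacency map, and the similarity threshold are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the tube-based re-identification flow in the abstract.

rng = np.random.default_rng(6)

def embed(patch):
    v = patch.reshape(-1).astype(float)
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-8)

target_patch = rng.random((32, 32))                    # patch from the image query

# Tubes generated by detecting and tracking objects of interest per source
# video; tube 0 is a noisy re-appearance of the target so the example matches.
tubes = [
    {"tube_id": 0, "camera": "cam_B", "feature": embed(target_patch + 0.05 * rng.random((32, 32)))},
    {"tube_id": 1, "camera": "cam_B", "feature": embed(rng.random((32, 32)))},
    {"tube_id": 2, "camera": "cam_C", "feature": embed(rng.random((32, 32)))},
]

query = {"feature": embed(target_patch), "cctv": "cam_A"}

# Search candidate area: cameras reachable from the query CCTV given the
# target's movement direction (toy adjacency map standing in for a road map).
reachable = {"cam_A": {"cam_A", "cam_B"}, "cam_B": {"cam_B", "cam_C"}}
candidates = reachable[query["cctv"]]

# Re-identify: keep tubes from candidate cameras whose features are close
# enough to the target patch, and hand those tubes to the user.
matches = [t["tube_id"] for t in tubes
           if t["camera"] in candidates and float(t["feature"] @ query["feature"]) > 0.8]
print("tubes to show the user:", matches)
```
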
  • Publication number: 20210125323
    Abstract: Embodiments relate to a method and system for determining the situation of a facility by imaging sensing data of the facility, the method including receiving sensing data through a plurality of sensors at a query time, generating a situation image that shows the situation of the facility at the query time based on the sensing data, and determining whether an abnormal situation occurred at the query time by applying the situation image to a pre-learned situation determination model.
    Type: Application
    Filed: September 8, 2020
    Publication date: April 29, 2021
    Inventors: Ig Jae KIM, Heeseung CHOI, Hyunki LIM, Yeji CHOI
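
For publication 20210125323, "imaging" the sensing data can be pictured as laying the per-sensor readings out on a fixed 2D grid and handing that image to a determination model. The 8x8 layout and the thresholding "model" below are stand-ins for the pre-learned situation determination model.

```python
import numpy as np

# Hedged sketch of turning sensing data into a situation image and judging it.

rng = np.random.default_rng(7)

readings = rng.normal(size=64)                 # sensing data at the query time

# Generate the situation image: one pixel per sensor in a fixed layout,
# normalized to [0, 1].
situation_image = readings.reshape(8, 8)
lo, hi = situation_image.min(), situation_image.max()
situation_image = (situation_image - lo) / (hi - lo)

def situation_model(image, hot_fraction=0.15):
    """Toy determination: abnormal if too many pixels are 'hot'."""
    return bool((image > 0.9).mean() > hot_fraction)

print("abnormal situation at query time:", situation_model(situation_image))
```
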
  • Publication number: 20210081676
    Abstract: Embodiments relate to a method for generating a video synopsis including receiving a user query; performing an object based analysis of a source video; and generating a synopsis video in response to a video synopsis generation request from a user, and a system therefor. The video synopsis generated by the embodiments reflects the user's desired interaction.
    Type: Application
    Filed: August 18, 2020
    Publication date: March 18, 2021
    Inventors: Ig Jae KIM, Heeseung CHOI, Haksub KIM, Yoonsik YANG, Seungho CHAE
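
A minimal sketch of the query-driven assembly in publication 20210081676: the source video is assumed to have been analyzed into per-object tubes, the user query selects which tubes appear, and the selected tubes are re-scheduled onto a condensed timeline. The data layout and the greedy back-to-back scheduler are illustrative choices.

```python
# Hedged sketch of query-driven video synopsis assembly.

tubes = [  # results of the object-based analysis of the source video
    {"id": 1, "label": "person", "start": 12.0, "duration": 6.0},
    {"id": 2, "label": "car",    "start": 95.0, "duration": 4.0},
    {"id": 3, "label": "person", "start": 400.0, "duration": 5.0},
]

user_query = {"label": "person"}        # the interaction the synopsis should reflect

# Select tubes matching the query, then pack them back-to-back so long spans of
# source video condense into a short synopsis timeline.
selected = [t for t in tubes if t["label"] == user_query["label"]]
synopsis, cursor = [], 0.0
for tube in sorted(selected, key=lambda t: t["start"]):
    synopsis.append({"id": tube["id"], "synopsis_start": cursor})
    cursor += tube["duration"]

print(synopsis, "total length:", cursor)
```
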
  • Publication number: 20210019345
    Abstract: Exemplary embodiments relate to a method for selecting an image of interest to construct a retrieval database including receiving an image captured by an imaging device, detecting an object of interest in the received image, selecting an image of interest based on at least one of complexity of the image in which the object of interest is detected and image quality of the object of interest, and storing information related to the image of interest in the retrieval database, and an image control system performing the same.
    Type: Application
    Filed: February 21, 2020
    Publication date: January 21, 2021
    Inventors: Ig Jae KIM, Heeseung CHOI, Haksub KIM, Seungho CHAE, Yoonsik YANG
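
The selection rule of publication 20210019345 can be illustrated with simple proxies: a gradient-based complexity score for the frame and a variance-based quality score for the detected object, with a frame stored in the retrieval database only when both checks pass. The proxies and thresholds are assumptions, not the patented measures.

```python
import numpy as np

# Hedged sketch of selecting an image of interest for the retrieval database.

rng = np.random.default_rng(10)

def complexity(image):
    """Proxy: mean gradient magnitude of the full frame."""
    gy, gx = np.gradient(image)
    return float(np.hypot(gx, gy).mean())

def object_quality(crop):
    """Proxy: variance of the object crop as a crude sharpness/contrast measure."""
    return float(crop.var())

retrieval_db = []
frame = rng.random((120, 160))
object_crop = frame[40:80, 60:100]          # detected object of interest (toy box)

# Store the image of interest only if the scene is not too complex and the
# detected object is of sufficient quality (made-up thresholds).
if complexity(frame) < 0.4 and object_quality(object_crop) > 0.05:
    retrieval_db.append({"frame_id": 0, "bbox": (60, 40, 100, 80)})

print("stored entries:", retrieval_db)
```
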
  • Publication number: 20200328897
    Abstract: Exemplary embodiments relate to a method for unlocking a mobile device using authentication based on ear recognition including obtaining an image of a target showing at least part of the target's body in a lock state, extracting a set of ear features of the target from the image of the target, when the image of the target includes at least part of the target's ear, and determining if the extracted set of ear features of the target satisfies a preset condition, and a mobile device performing the same.
    Type: Application
    Filed: April 10, 2020
    Publication date: October 15, 2020
    Inventors: Ig Jae KIM, Gi Pyo NAM, Junghyun CHO, Heeseung CHOI
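
For publication 20200328897, the unlock decision reduces to: if the captured image contains enough of the ear, extract ear features and unlock when they are close enough to the enrolled template. The detector stub, the embedding, and the threshold below are placeholders, not the patented pipeline.

```python
import numpy as np

# Hedged sketch of ear-recognition-based unlocking.

rng = np.random.default_rng(11)

def contains_ear(image):
    return True                     # stand-in for an ear detector on the body image

def ear_features(image):
    v = image.reshape(-1).astype(float)
    v -= v.mean()
    return v / (np.linalg.norm(v) + 1e-8)

owner_image = rng.random((48, 48))
enrolled = ear_features(owner_image)            # owner's template, stored at enrollment

def try_unlock(captured, threshold=0.9):
    if not contains_ear(captured):
        return "stay locked (no ear visible)"
    similarity = float(ear_features(captured) @ enrolled)
    verdict = "unlock" if similarity > threshold else "stay locked"
    return f"{verdict} (similarity={similarity:.2f})"

print("owner attempt:   ", try_unlock(owner_image + 0.02 * rng.random((48, 48))))
print("stranger attempt:", try_unlock(rng.random((48, 48))))
```
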
  • Patent number: 10593083
    Abstract: Disclosed is a method for facial age simulation based on an age of each facial part and environmental factors, which includes: measuring an age of each facial part on the basis of an input face image; designating a personal environmental factor; transforming an age of each facial part by applying an age transformation model according to the age of each facial part and the environmental factor; reconstructing the image transformed for each facial part; and composing the reconstructed images to generate an age-transformed face. Accordingly, it is possible to transform a face realistically based on an age measured for each facial part and an environmental factor.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: March 17, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Sung Eun Choi, Sang Chul Ahn
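
The per-part structure of patent 10593083 can be sketched numerically: an age is measured for each facial part, an age-transformation step shifts each part while taking a designated environmental factor into account, and the transformed parts are composed into the result. The linear "model" and the factor weights below are made up purely to show that structure.

```python
# Hedged sketch of per-part age transformation with environmental factors.

measured_age = {"eyes": 42.0, "mouth": 38.0, "forehead": 45.0}   # per-part estimates
environment = {"smoking": 1.0, "uv_exposure": 0.5}               # designated factors

def age_transform(part, age, target_shift, env):
    """Toy model: environmental factors accelerate ageing of some parts more."""
    weights = {"eyes": 1.2, "mouth": 0.8, "forehead": 1.5}
    env_boost = 0.5 * sum(env.values()) * weights[part] / 1.5
    return age + target_shift + env_boost

target_shift = 10.0                                   # e.g., simulate +10 years
transformed = {part: age_transform(part, age, target_shift, environment)
               for part, age in measured_age.items()}

# "Composing" here just reports the per-part results; in the method, each
# part's image is reconstructed and blended back into a full face.
print(transformed)
```
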
  • Patent number: 10565691
    Abstract: A method of multi-view deblurring for 3-dimensional (3D) shape reconstruction includes: receiving images captured by multiple synchronized cameras at multiple viewpoints; iteratively estimating the depth map, latent image, and 3D motion at each viewpoint for the received images; determining whether image deblurring at each viewpoint is completed; and performing 3D reconstruction based on the final depth maps and latent images at each viewpoint. Accordingly, it is possible to achieve accurate deblurring and 3D reconstruction even from motion-blurred images.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: February 18, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Byeongjoo Ahn, Ig Jae Kim, Junghyun Cho
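
The alternating scheme of patent 10565691 is easiest to see as a loop: per viewpoint, iterate depth, latent image, and 3D motion estimation until deblurring converges, then reconstruct in 3D from the final depth maps and latent images. All update steps in the sketch below are trivial placeholders that show only the control flow, not the actual estimators.

```python
import numpy as np

# Hedged sketch of the per-viewpoint alternating estimation loop.

rng = np.random.default_rng(13)
n_views, shape = 3, (32, 32)

blurred = [rng.random(shape) for _ in range(n_views)]   # synchronized captures
depth   = [np.ones(shape) for _ in range(n_views)]
latent  = [np.zeros(shape) for _ in range(n_views)]
motion  = [np.zeros(3) for _ in range(n_views)]

def update_depth(d, lat):  return 0.9 * d + 0.1 * lat   # placeholder refinements
def update_latent(lat, b): return 0.5 * (lat + b)
def update_motion(m):      return 0.9 * m

for it in range(10):
    prev = [l.copy() for l in latent]
    for v in range(n_views):
        depth[v]  = update_depth(depth[v], latent[v])
        latent[v] = update_latent(latent[v], blurred[v])
        motion[v] = update_motion(motion[v])
    change = max(np.abs(l - p).max() for l, p in zip(latent, prev))
    if change < 1e-4:                    # "deblurring completed" check
        break

# Final step: 3D reconstruction from per-view depth maps and latent images
# (only summarized here, since real multi-view fusion is out of scope).
print("iterations:", it + 1, "mean depth:", float(np.mean(depth)))
```
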
  • Patent number: 10559062
    Abstract: A method for automatic facial impression transformation includes extracting landmark points for elements of a target face whose facial impression is to be transformed as well as distance vectors respectively representing distances of the landmark points, comparing the distance vectors to select a learning data set similar to the target face from a database, extracting landmark points and distance vectors from the learning data set, transforming a local feature of the target face based on the landmark points of the learning data set and score data for a facial impression, and transforming a global feature of the target face based on the distance vectors of the learning data set and the score data for the facial impression. Accordingly, a facial impression may be transformed in various ways while keeping an identity of a corresponding person.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: February 11, 2020
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Heeseung Choi, Sungyeon Park, Junghyun Cho
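
For patent 10559062, the data flow can be sketched with toy landmarks: describe the target face by landmark points and pairwise distance vectors, pick the most similar learning sample from a database via the distance vectors, then blend the target's local (landmark) and global (distance) features toward that sample in proportion to an impression score. The landmark layout, similarity measure, and blending rule are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of landmark/distance-vector based impression transformation.

rng = np.random.default_rng(14)
n_landmarks = 10

def distance_vector(landmarks):
    """Pairwise distances between landmark points (the 'global' description)."""
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    return d[np.triu_indices(n_landmarks, k=1)]

target = rng.random((n_landmarks, 2))                        # target face landmarks
database = [rng.random((n_landmarks, 2)) for _ in range(20)] # learning data set
impression_scores = rng.random(20)                           # e.g., "friendly" scores

# Select the learning sample whose distance vector is closest to the target's.
dv_target = distance_vector(target)
idx = int(np.argmin([np.linalg.norm(distance_vector(s) - dv_target) for s in database]))
sample, score = database[idx], impression_scores[idx]

# Local transformation: blend landmark positions toward the selected sample,
# weighted by the impression score (global distance blending is analogous).
alpha = 0.5 * score
transformed_landmarks = (1 - alpha) * target + alpha * sample
print("selected sample:", idx, "blend weight:", round(float(alpha), 3))
```
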
  • Publication number: 20180268207
    Abstract: A method for automatic facial impression transformation includes extracting landmark points for elements of a target face whose facial impression is to be transformed as well as distance vectors respectively representing distances of the landmark points, comparing the distance vectors to select a learning data set similar to the target face from a database, extracting landmark points and distance vectors from the learning data set, transforming a local feature of the target face based on the landmark points of the learning data set and score data for a facial impression, and transforming a global feature of the target face based on the distance vectors of the learning data set and the score data for the facial impression. Accordingly, a facial impression may be transformed in various ways while keeping an identity of a corresponding person.
    Type: Application
    Filed: May 21, 2018
    Publication date: September 20, 2018
    Applicant: Korea Institute of Science and Technology
    Inventors: Ig Jae KIM, Heeseung CHOI, Sungyeon PARK, Junghyun CHO
  • Patent number: 10013741
    Abstract: A video deblurring method based on a layered blur model includes estimating a latent image, an object motion and a mask for each layer in each frame using images consisting of a combination of layers during an exposure time of a camera when receiving a blurred video frame, applying the estimated latent image, object motion and mask for each layer in each frame to the layered blur model to generate a blurry frame, comparing the generated blurry frame and the received blurred video frame, and outputting a final latent image based on the estimated object motion and mask for each layer in each frame, when the generated blurry frame and the received blurred video frame match. Accordingly, by modeling a blurred image as an overlap of images consisting of a combination of foreground and background during exposure, more accurate deblurring results at object boundaries can be obtained.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: July 3, 2018
    Assignee: Korea Institute of Science and Technology
    Inventors: Byeongjoo Ahn, Ig Jae Kim, Junghyun Cho
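
The layered blur model of patent 10013741 can be illustrated by its forward rendering: during the exposure, the blurry frame is a time-average of composites of a moving foreground layer over the background, using the foreground mask. The sketch below renders that forward model with toy inputs and measures the mismatch for a wrong motion guess; the patented method alternates estimation of latent images, motions, and masks until the re-rendered frame matches the observation.

```python
import numpy as np

# Hedged sketch of the layered blur forward model with a single moving layer.

rng = np.random.default_rng(15)
H, W = 48, 64

background = rng.random((H, W))
foreground = np.zeros((H, W))
foreground[16:32, 8:24] = 1.0                     # bright moving object
mask = foreground > 0

def render_blurry(latent_fg, latent_bg, fg_mask, motion_px, steps=8):
    """Average composites of the shifted foreground over the background."""
    acc = np.zeros_like(latent_bg)
    for s in range(steps):
        shift = int(round(motion_px * s / (steps - 1)))
        fg_s = np.roll(latent_fg, shift, axis=1)
        m_s  = np.roll(fg_mask, shift, axis=1)
        acc += np.where(m_s, fg_s, latent_bg)
    return acc / steps

observed  = render_blurry(foreground, background, mask, motion_px=10)
candidate = render_blurry(foreground, background, mask, motion_px=6)   # wrong motion guess

print("residual for wrong motion:", float(np.abs(observed - candidate).mean()))
```
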
  • Patent number: 9978119
    Abstract: A method for automatic facial impression transformation includes extracting landmark points for elements of a target face whose facial impression is to be transformed as well as distance vectors respectively representing distances of the landmark points, comparing the distance vectors to select a learning data set similar to the target face from a database, extracting landmark points and distance vectors from the learning data set, transforming a local feature of the target face based on the landmark points of the learning data set and score data for a facial impression, and transforming a global feature of the target face based on the distance vectors of the learning data set and the score data for the facial impression. Accordingly, a facial impression may be transformed in various ways while keeping an identity of a corresponding person.
    Type: Grant
    Filed: March 14, 2016
    Date of Patent: May 22, 2018
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Heeseung Choi, Sungyeon Park, Junghyun Cho
  • Publication number: 20180061018
    Abstract: A method of multi-view deblurring for 3-dimensional (3D) shape reconstruction includes: receiving images captured by multiple synchronized cameras at multiple viewpoints; iteratively estimating the depth map, latent image, and 3D motion at each viewpoint for the received images; determining whether image deblurring at each viewpoint is completed; and performing 3D reconstruction based on the final depth maps and latent images at each viewpoint. Accordingly, it is possible to achieve accurate deblurring and 3D reconstruction even from motion-blurred images.
    Type: Application
    Filed: June 22, 2017
    Publication date: March 1, 2018
    Inventors: Byeongjoo AHN, Ig Jae KIM, Junghyun CHO
  • Patent number: 9830523
    Abstract: Provided are a method and apparatus for recognizing the material of objects by extracting physical properties of the objects in a camera photo based on a combined analysis of information obtained by a camera and a radar unit.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: November 28, 2017
    Assignee: Korea Institute of Science and Technology
    Inventors: Jaewon Kim, Ig Jae Kim, Seung Yeup Hyun, Se Yun Kim
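
The fusion idea of patent 9830523 can be sketched as concatenating camera-derived appearance statistics with radar-derived reflection statistics and matching the combined feature against reference material signatures. The features, signatures, and nearest-signature classifier are placeholders for illustration.

```python
import numpy as np

# Hedged sketch of camera-radar fusion for material recognition.

rng = np.random.default_rng(19)

def camera_features(patch):
    return np.array([patch.mean(), patch.std()])                 # brightness/texture proxy

def radar_features(echo):
    return np.array([np.abs(echo).max(), np.abs(echo).mean()])   # reflectivity proxy

references = {                                                   # made-up material signatures
    "metal": np.array([0.6, 0.2, 0.9, 0.5]),
    "wood":  np.array([0.4, 0.1, 0.2, 0.1]),
}

patch, echo = rng.random((32, 32)), 0.8 * rng.random(128)        # measurements of one object
feat = np.concatenate([camera_features(patch), radar_features(echo)])

material = min(references, key=lambda m: np.linalg.norm(references[m] - feat))
print("estimated material:", material)
```
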
  • Patent number: 9811716
    Abstract: A method for face recognition through facial expression normalization includes: fitting an input two-dimensional face image into a three-dimensional face model by using a three-dimensional face database; normalizing the three-dimensional face model into a neutral-expression three-dimensional face model by using a neutral-expression parameter learned from the three-dimensional face database; converting the neutral-expression three-dimensional face model into a neutral-expression two-dimensional face image; and recognizing the neutral-expression two-dimensional face image from a two-dimensional face database. Accordingly, face recognition may be performed with high reliability without a loss of information.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: November 7, 2017
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Hee Seung Choi, Junghyun Cho
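
For patent 9811716, the normalization idea can be sketched with a linear 3D face model: a face is the mean shape plus identity-basis and expression-basis terms; fitting recovers both coefficient sets, replacing the expression coefficients with neutral ones normalizes the expression, and the reprojected 2D face is matched against the gallery. The random bases, least-squares fit, and nearest-neighbor matcher below are illustrative stand-ins for the learned model and databases.

```python
import numpy as np

# Hedged sketch of expression normalization with a linear 3D face model.

rng = np.random.default_rng(20)
n_points, n_id, n_expr = 60, 8, 4

mean_shape = rng.random((n_points, 3))
id_basis   = rng.normal(size=(n_points * 3, n_id))
expr_basis = rng.normal(size=(n_points * 3, n_expr))

def render_2d(id_coef, expr_coef):
    """Build the 3D shape and project it orthographically to 2D landmarks."""
    shape = mean_shape.reshape(-1) + id_basis @ id_coef + expr_basis @ expr_coef
    return shape.reshape(n_points, 3)[:, :2]

# Input 2D face: some identity observed with a non-neutral expression.
true_id, true_expr = rng.normal(size=n_id), rng.normal(size=n_expr)
observed_2d = render_2d(true_id, true_expr)

# Fit the 3D model to the 2D observation by least squares over both coefficient
# sets (a real fitter would also estimate pose and handle missing depth).
full_basis = np.hstack([id_basis, expr_basis])
A = full_basis.reshape(n_points, 3, -1)[:, :2, :].reshape(-1, n_id + n_expr)
b = (observed_2d - mean_shape[:, :2]).reshape(-1)
coefs, *_ = np.linalg.lstsq(A, b, rcond=None)
fit_id = coefs[:n_id]

# Normalize: keep the identity coefficients, use the neutral (zero) expression.
neutral_2d = render_2d(fit_id, np.zeros(n_expr))

gallery = {"enrolled_person": render_2d(true_id, np.zeros(n_expr))}
dists = {name: float(np.linalg.norm(face - neutral_2d)) for name, face in gallery.items()}
print("recognized:", min(dists, key=dists.get), "distance:", round(min(dists.values()), 4))
```
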