Patents by Inventor Ig-Jae Kim

Ig-Jae Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140129989
    Abstract: One or more embodiments of the present invention relate to an apparatus and method for generating a cognitive avatar. The user repeatedly selects images that he or she perceives as similar to a target face from stored face images of various impressions, which are classified into a plurality of impression groups. Based on these repeated selections, a cognitive approach generates an avatar corresponding to the target face that the user intends to create, so that a natural avatar resembling the target face can be expressed without a separate analysis or re-analysis of the target face.
    Type: Application
    Filed: November 7, 2013
    Publication date: May 8, 2014
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Ig Jae KIM, A Rim LEE
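    Illustrative sketch (not from the patent): the abstract describes repeatedly re-ranking stored face images from the user's "looks similar" selections until one can stand in for the target face. The Python sketch below assumes the stored faces are already described by numeric impression features; the names, scoring rule and data are hypothetical.

      import numpy as np

      def refine_scores(features, scores, selected_ids, temperature=1.0):
          # Re-weight every stored face by its similarity to the faces the user
          # just picked as "similar to the target".
          centre = features[selected_ids].mean(axis=0)
          sims = features @ centre / (
              np.linalg.norm(features, axis=1) * np.linalg.norm(centre) + 1e-9)
          return scores * np.exp(sims / temperature)   # accumulate evidence per round

      rng = np.random.default_rng(0)
      faces = rng.normal(size=(200, 32))   # 200 stored faces, 32-dim impression features
      scores = np.ones(200)

      # three rounds of the user picking stored faces that resemble the target
      for selected in ([3, 17, 42], [17, 99], [42, 99, 150]):
          scores = refine_scores(faces, scores, np.array(selected))

      print("avatar expressed from stored face", int(np.argmax(scores)))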
  • Publication number: 20130321620
    Abstract: Provided is an apparatus for recognizing the material of objects. The apparatus includes: an imaging camera unit for capturing a spatial image including various objects in a space; an exploring radar unit for sending an incident wave to the objects and receiving spatial radar information including a surface reflected wave from the surface of each object and an internal reflected wave from the inside of each object; an information storage unit for storing reference physical property information corresponding to the material of each object; and a material recognition processor for recognizing the material information of each object by using the reference physical property information stored in the information storage unit, the spatial image provided by the imaging camera unit, and the spatial radar information provided by the exploring radar unit.
    Type: Application
    Filed: March 13, 2013
    Publication date: December 5, 2013
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Jaewon KIM, Ig Jae KIM, Seung Yeup HYUN, Se Yun KIM
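    Illustrative sketch (not from the patent): one plausible reading of the radar step is that the surface reflection coefficient yields an estimate of relative permittivity, which is then matched against the stored reference physical-property table. The plane-wave formula and reference values below are textbook approximations rather than values from the patent, and the camera-based segmentation is omitted.

      import numpy as np

      # reference physical-property table: material -> relative permittivity (illustrative)
      REFERENCE_DB = {"wood": 2.0, "concrete": 5.5, "glass": 6.0, "water": 80.0}

      def permittivity_from_reflection(gamma):
          # Relative permittivity from the surface reflection coefficient at
          # normal incidence (plane-wave approximation, lossless dielectric).
          return ((1.0 - gamma) / (1.0 + gamma)) ** 2

      def recognize_material(surface_gamma):
          eps = permittivity_from_reflection(surface_gamma)
          # nearest reference entry wins
          return min(REFERENCE_DB, key=lambda m: abs(REFERENCE_DB[m] - eps)), eps

      material, eps = recognize_material(surface_gamma=-0.40)
      print(f"estimated eps_r = {eps:.1f} -> {material}")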
  • Publication number: 20130314320
    Abstract: A method of controlling a three-dimensional (3D) virtual cursor by using a portable electronic device, the method including: sensing at least one of a movement and a touch input of the portable electronic device through a sensor mounted in the device; and converting the sensed movement and/or touch input into a cursor control signal for controlling the operation of a cursor in 3D space, and outputting the cursor control signal. According to the method, a 3D virtual cursor may be conveniently controlled, without restrictions on location or time, by using a portable electronic device that the user carries.
    Type: Application
    Filed: December 26, 2012
    Publication date: November 28, 2013
    Inventors: Jae In HWANG, Ig Jae KIM, Sang Chul AHN, Heedong KO
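    Illustrative sketch (not from the patent): a minimal mapping from sensed device motion and touch input to a single cursor-control message. The sensor channels, gain and field names are assumptions.

      from dataclasses import dataclass

      @dataclass
      class CursorCommand:
          dx: float
          dy: float
          dz: float
          select: bool

      def to_cursor_command(gyro_delta, touch_drag, tapped, gain=0.5):
          # Device rotation pans the cursor in x/y, a vertical touch drag moves it
          # in depth (z), and a tap acts as "select".
          yaw, pitch, _ = gyro_delta
          return CursorCommand(dx=gain * yaw,
                               dy=gain * pitch,
                               dz=gain * touch_drag,
                               select=tapped)

      cmd = to_cursor_command(gyro_delta=(0.10, -0.04, 0.0), touch_drag=0.2, tapped=False)
      print(cmd)   # this message would be sent to the application controlling the 3D cursor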
  • Publication number: 20130297205
    Abstract: The system for indoor navigation includes a global coordinate generator that divides an indoor space, extracts feature points where images photographed by cameras installed at predetermined locations overlap, and generates global coordinates of those feature points; a database that stores the image information and global coordinates of the feature points generated by the global coordinate generator; and a mobile device that extracts feature points from an image of its surroundings photographed by its own camera, compares the extracted feature points with those stored in the database, and estimates its location and direction by using the global coordinates of the matching feature points.
    Type: Application
    Filed: January 8, 2013
    Publication date: November 7, 2013
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Ig Jae KIM, Hee Dong KO, Jong Weon LEE
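    Illustrative sketch (not from the patent): once feature matching against the database has paired image points with stored global coordinates, the device pose can be recovered with a perspective-n-point solver. The snippet fabricates the correspondences synthetically and uses OpenCV (an assumed dependency); the descriptor-matching step itself is omitted.

      import numpy as np
      import cv2  # OpenCV

      # database side: global 3D coordinates of indoor feature points
      points_3d = np.array([[0, 0, 0], [2, 0, 0], [2, 1.5, 0], [0, 1.5, 0],
                            [1, 0.7, 0.5], [0.5, 1.0, 1.0]], dtype=np.float64)

      # mobile side: where those points appear in the phone image
      # (in practice these correspondences come from matching the extracted
      #  feature points against the stored feature database)
      K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
      true_rvec = np.array([0.1, -0.2, 0.05])
      true_tvec = np.array([-0.5, 0.2, 4.0])
      points_2d, _ = cv2.projectPoints(points_3d, true_rvec, true_tvec, K, None)

      # estimate the device pose (location and direction) from the matches
      ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None)
      R, _ = cv2.Rodrigues(rvec)
      camera_position = (-R.T @ tvec).ravel()   # device location in the global frame
      print("estimated device position:", np.round(camera_position, 2))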
  • Publication number: 20130261891
    Abstract: Provided are a method and apparatus for projecting driving information in front of a vehicle with respect to a destination. The method for projecting driving information in front of the vehicle with respect to the destination may include obtaining driving information and controlling a driving information projecting apparatus on the basis of the obtained driving information. The driving information projecting apparatus may include a driving information display unit displaying the obtained driving information and a lighting unit used as a light source for the driving information display unit.
    Type: Application
    Filed: March 15, 2013
    Publication date: October 3, 2013
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Ig Jae KIM, Jaewon KIM
  • Publication number: 20130235033
    Abstract: The present disclosure relates to a three-dimensional montage generation system and method based on a single two-dimensional image. An embodiment of the present disclosure may generate a three-dimensional montage in an easy, fast and accurate way by using two-dimensional front face image data, and may estimate face portions that cannot be restored from a single photograph statistically, by using a previously prepared face database. Accordingly, an embodiment of the present disclosure may generate a three-dimensional personal model from a single two-dimensional front face photograph, and depth information such as nose height, lip protrusion and eye contour may be effectively estimated by means of the statistical distribution and correlation of the data.
    Type: Application
    Filed: March 8, 2013
    Publication date: September 12, 2013
    Inventors: Ig Jae KIM, Yu Jin HONG
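    Illustrative sketch (not from the patent): the statistical estimation of non-observable depth values can be imitated by a least-squares regression from 2-D facial measurements to depth measurements over a face database. The measurements, their relationship and all numbers below are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 300
      # toy "face database": 2-D measurements (x) paired with depth values (y)
      # x: [face width, nose length, lip width]    y: [nose height, lip protrusion]
      x = rng.normal([140.0, 50.0, 48.0], [8.0, 4.0, 3.0], size=(n, 3))
      y = np.column_stack([0.4 * x[:, 1] + rng.normal(0, 1.0, n),
                           0.2 * x[:, 2] + rng.normal(0, 0.8, n)])

      # learn the statistical relation depth = f(2-D measurements) by least squares
      X = np.column_stack([x, np.ones(n)])            # add bias term
      W, *_ = np.linalg.lstsq(X, y, rcond=None)

      # estimate the non-observable depth for a new frontal photograph
      new_face_2d = np.array([138.0, 52.0, 47.0, 1.0])
      nose_height, lip_protrusion = new_face_2d @ W
      print(f"estimated nose height {nose_height:.1f}, lip protrusion {lip_protrusion:.1f}")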
  • Publication number: 20130202162
    Abstract: A method of reconstructing a three-dimensional (3D) facial shape with super resolution, even from a short moving picture containing a front facial image. A super-resolution facial image is acquired by applying, as a weighting factor, the per-unit-patch similarity between a target frame and the remaining frames among a plurality of continuous frames containing the front facial image, and the 3D facial shape is reconstructed based on the acquired super-resolution facial image.
    Type: Application
    Filed: February 1, 2013
    Publication date: August 8, 2013
    Applicant: Korea Institute of Science and Technology
    Inventors: Ig Jae KIM, Jaewon KIM, Sang Chul AHN
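    Illustrative sketch (not from the patent): the core weighting idea, that patches more similar to the target-frame patch contribute more, can be shown as a similarity-weighted patch average. Sub-pixel registration, upsampling and the 3D reconstruction step are omitted, and the Gaussian weighting is an assumption.

      import numpy as np

      def fuse_patch(target_patch, other_patches, sigma=10.0):
          # Average the co-located patches from the other frames, each weighted by
          # its similarity to the target-frame patch (Gaussian of the mean squared
          # difference); the target frame itself gets weight 1.
          diffs = np.mean((other_patches - target_patch) ** 2, axis=(1, 2))
          weights = np.concatenate([[1.0], np.exp(-diffs / (2 * sigma ** 2))])
          stack = np.concatenate([target_patch[None], other_patches])
          return np.tensordot(weights / weights.sum(), stack, axes=1)

      rng = np.random.default_rng(2)
      clean = rng.uniform(0, 255, size=(8, 8))
      frames = clean + rng.normal(0, 5.0, size=(6, 8, 8))   # 6 noisy continuous frames
      fused = fuse_patch(frames[0], frames[1:])
      print("noise std before:", round(float(np.std(frames[0] - clean)), 2),
            "after:", round(float(np.std(fused - clean)), 2))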
  • Publication number: 20130120450
    Abstract: A method of providing an augmented reality tour platform service for the inside of a building by using a wireless communication device. The method includes: acquiring an image of the building from the wireless communication device; collecting information associated with the acquired image; extracting a candidate building group from a previously established database on the basis of the acquired image and the collected information; specifying a building matching the acquired image from among the extracted candidate building group; and transmitting information regarding the inside of the specified building to the wireless communication device.
    Type: Application
    Filed: November 14, 2012
    Publication date: May 16, 2013
    Inventors: Ig Jae KIM, Jaewon KIM, Heedong KO
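    Illustrative sketch (not from the patent): candidate buildings are first narrowed down with the collected location information, and the acquired image then selects the best match. The database contents, the crude metres-per-degree conversion and the toy image descriptors are all assumptions.

      import math

      # previously established database: building -> (lat, lon, image descriptor)
      BUILDING_DB = {"Annex A": (37.5665, 126.9780, [0.9, 0.1, 0.3]),
                     "Tower B": (37.5668, 126.9785, [0.2, 0.8, 0.5]),
                     "Museum C": (37.6100, 127.0100, [0.4, 0.4, 0.9])}

      def specify_building(query_descriptor, device_lat, device_lon, radius_m=300):
          # Candidate group: buildings near the collected device location
          # (very rough conversion from degrees to metres).
          def dist_m(lat, lon):
              return math.hypot(lat - device_lat, lon - device_lon) * 111_000
          candidates = {b: d for b, (lat, lon, d) in BUILDING_DB.items()
                        if dist_m(lat, lon) <= radius_m}
          # Specify the building whose stored descriptor best matches the image.
          def match_cost(desc):
              return sum((a - b) ** 2 for a, b in zip(query_descriptor, desc))
          return min(candidates, key=lambda b: match_cost(candidates[b]))

      print(specify_building([0.85, 0.15, 0.25], 37.5666, 126.9781))   # -> Annex A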
  • Publication number: 20130044962
    Abstract: A method and system for reconstructing an image displayed on an electronic device connected to a network as a high-resolution image. The method of reconstructing a selected area of the image displayed on the network-connected electronic device as a high-resolution image includes: receiving a request to expand the selected area; collecting images that include the selected area from the Internet; correcting the selected area to a high resolution while expanding it based on the collected images; and displaying the expanded high-resolution image on the electronic device.
    Type: Application
    Filed: July 10, 2012
    Publication date: February 21, 2013
    Inventors: Jaewon KIM, Ig Jae Kim, Sang Chul Ahn, Jong-Ho Lee
  • Patent number: 8320706
    Abstract: Provided are a method and an apparatus for tagging a photograph with information. The method of tagging a photograph with information, which calculates a shooting position of an input image from reference images having shooting position information to tag the shooting position information, includes: selecting a plurality of reference images; calculating a relative shooting position of the input image to the shooting positions of the reference images; calculating the shooting position of the input image on the basis of the calculation result of the calculating; and storing the shooting position and shooting direction information on the input image in an exchangeable image file format (EXIF) tag of the input image.
    Type: Grant
    Filed: June 11, 2009
    Date of Patent: November 27, 2012
    Assignee: Korea Institute of Science and Technology
    Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
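    Illustrative sketch (not from the patent): each reference image with a known shooting position, combined with the input image's position relative to it, yields one estimate of the input shooting position; the estimates are then combined. Writing the result into the EXIF tag and estimating the shooting direction are omitted, and the numbers are synthetic.

      import numpy as np

      # reference images with known shooting positions (e.g. from their own EXIF data)
      ref_positions = np.array([[10.0, 4.0], [14.0, 9.0], [6.0, 11.0]])
      # relative position of the input camera with respect to each reference camera,
      # e.g. recovered from feature matches between the input and reference images
      relative = np.array([[2.1, 3.0], [-1.8, -2.1], [6.2, -4.2]])

      # each reference gives one estimate of the input shooting position;
      # combine them (here: a simple average) and keep the spread as a sanity check
      estimates = ref_positions + relative
      position = estimates.mean(axis=0)
      print("estimated shooting position:", position, "spread:", estimates.std(axis=0))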
  • Publication number: 20120249743
    Abstract: A method that highlights a depth-of-field (DOF) region of an image and performs additional image processing by using the DOF region. The method includes: obtaining a first pattern image and a second pattern image that are captured by emitting light according to different patterns from an illumination device; detecting a DOF region by using the first pattern image and the second pattern image; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first pattern image and the second pattern image.
    Type: Application
    Filed: April 2, 2012
    Publication date: October 4, 2012
    Applicant: Korea Institute of Science and Technology
    Inventors: Jaewon KIM, Ig Jae KIM, Sang Chul AHN
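    Illustrative sketch (not from the patent): in-focus regions keep the projected illumination pattern sharp, so the local contrast of the difference between the two pattern images can serve as the DOF weight map applied to their combination. The box-filter window, gain and toy stripe pattern are assumptions, and SciPy is an assumed dependency.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def highlight_dof(pattern_a, pattern_b, gain=2.0, win=7):
          # Local average of |A - B| is high where the projected pattern stays sharp;
          # use it as a weight map that brightens the DOF region of the combined
          # (pattern-cancelled) image.
          contrast = uniform_filter(np.abs(pattern_a - pattern_b), size=win)
          weights = contrast / (contrast.max() + 1e-9)
          combined = 0.5 * (pattern_a + pattern_b)
          return np.clip(combined * (1.0 + gain * weights), 0, 255)

      rng = np.random.default_rng(3)
      base = rng.uniform(60, 80, size=(64, 64))
      stripes = 40.0 * (np.indices((64, 64))[1] % 8 < 4)   # projected stripe pattern
      a, b = base.copy(), base.copy()
      a[:, :32] += stripes[:, :32]            # the pattern stays sharp only in the
      b[:, :32] += 40.0 - stripes[:, :32]     # left (in-focus) half of the scene
      out = highlight_dof(a, b)
      print("mean level, in-focus half:", round(float(out[:, :32].mean()), 1),
            " out-of-focus half:", round(float(out[:, 32:].mean()), 1))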
  • Patent number: 8180638
    Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a Gaussian mixture model to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vectors, is applied. Emotion recognition is performed by applying the discriminative weight, evaluated using the Gaussian mixture model based on the minimum classification error, to the feature vectors of emotions that are difficult to classify, thereby enhancing the performance of emotion recognition.
    Type: Grant
    Filed: February 23, 2010
    Date of Patent: May 15, 2012
    Assignee: Korea Institute of Science and Technology
    Inventors: Hyoung Gon Kim, Ig Jae Kim, Joon-Hyuk Chang, Kye Hwan Lee, Chang Seok Bae
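    Illustrative sketch (not from the patent): one GMM per emotion scores an input feature vector, and a per-emotion discriminative weight, which in the patent would be trained under a minimum-classification-error criterion, biases the decision. Here the weights are fixed by hand and applied in the log domain, and scikit-learn is an assumed dependency.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(4)
      # toy emotion feature vectors (e.g. prosodic/spectral features per utterance)
      train = {"neutral": rng.normal(0.0, 1.0, size=(200, 4)),
               "angry":   rng.normal(1.5, 1.2, size=(200, 4)),
               "sad":     rng.normal(-1.2, 0.8, size=(200, 4))}

      # one Gaussian mixture model per emotion class
      models = {e: GaussianMixture(n_components=2, random_state=0).fit(x)
                for e, x in train.items()}

      # discriminative weights; fixed here, trained against a classification-error
      # loss in the patented method
      disc_weight = {"neutral": 1.0, "angry": 1.1, "sad": 1.1}

      def recognize(feature_vector):
          scores = {e: m.score_samples(feature_vector[None])[0] + np.log(disc_weight[e])
                    for e, m in models.items()}
          return max(scores, key=scores.get)

      print(recognize(np.array([1.4, 1.6, 1.3, 1.7])))   # expected: "angry"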
  • Patent number: 7978075
    Abstract: Provided is an apparatus for recognizing an activity of daily living (ADL). The apparatus includes a radio frequency identification (RFID) reader for reading the information of an RFID tag to recognize a motion object, a motion detector attached to a moving subject for acquiring acceleration information and recognizing a motion characteristic, and a controller for receiving information on the motion object from the RFID reader and information on the motion characteristic from the motion detector and then recognizing an ADL.
    Type: Grant
    Filed: June 6, 2008
    Date of Patent: July 12, 2011
    Assignees: Korea Institute of Science and Technology, Electronics and Telecommunications Research Institute
    Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
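    Illustrative sketch (not from the patent): the controller can be read as fusing the object identified by the RFID reader with a motion characteristic derived from the accelerometer. The rule table, motion classes and thresholds below are invented for illustration.

      # minimal rule table: (object seen by the RFID reader, motion class from the
      # accelerometer-based motion detector) -> activity of daily living
      ADL_RULES = {("cup", "arm_raise"): "drinking",
                   ("toothbrush", "repetitive_arm"): "brushing teeth",
                   ("spoon", "arm_raise"): "eating",
                   ("book", "still"): "reading"}

      def classify_motion(acceleration_samples):
          # Tiny stand-in for the motion detector: pick a motion characteristic
          # from the signal energy of the acceleration samples.
          energy = sum(a * a for a in acceleration_samples) / len(acceleration_samples)
          if energy < 0.05:
              return "still"
          return "repetitive_arm" if energy > 1.0 else "arm_raise"

      def recognize_adl(rfid_tag_object, acceleration_samples):
          motion = classify_motion(acceleration_samples)
          return ADL_RULES.get((rfid_tag_object, motion), "unknown")

      print(recognize_adl("cup", [0.2, 0.5, 0.4, 0.3]))   # -> drinking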
  • Publication number: 20100217595
    Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a Gaussian mixture model to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vectors, is applied. Emotion recognition is performed by applying the discriminative weight, evaluated using the Gaussian mixture model based on the minimum classification error, to the feature vectors of emotions that are difficult to classify, thereby enhancing the performance of emotion recognition.
    Type: Application
    Filed: February 23, 2010
    Publication date: August 26, 2010
    Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, Electronics and Telecommunications Research Institute
    Inventors: Hyoung Gon KIM, Ig Jae KIM, Joon-Hyuk CHANG, Kye Hwan LEE, Chang Seok BAE
  • Publication number: 20100191155
    Abstract: An apparatus for calculating calorie balance based on an activity classification, disclosed herein, includes a calculation part calculating characteristic values of acceleration and a user's calorie expenditure from the user's activities, and calculating food data and the user's calorie intake from foods taken by the user; and a recognition part recognizing the user's activities based on the characteristic values of acceleration, and recognizing the foods based on the food data. The characteristic values of acceleration are extracted from acceleration data of acceleration sensors, which determine the user's activities, and include information on the relationship between the acceleration data and the user's activities. The calculation part calculates calorie balance, using the user's calorie expenditure and the user's calorie intake.
    Type: Application
    Filed: August 4, 2009
    Publication date: July 29, 2010
    Applicant: Korea Institute of Science and Technology
    Inventors: Ig-Jae KIM, Hyoung Gon KIM, Sang Chul AHN
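    Illustrative sketch (not from the patent): once activities and foods have been recognized, the balance itself is simple bookkeeping. The MET values, food calories and the kcal ≈ MET × weight × hours approximation are illustrative, not from the patent.

      # MET per recognized activity and calories per recognized food (illustrative)
      MET = {"sitting": 1.3, "walking": 3.5, "running": 8.0}
      FOOD_KCAL = {"rice_bowl": 300, "apple": 95, "soda": 140}

      def calorie_expenditure(activity_log, weight_kg):
          # activity_log: list of (activity, minutes) recognized from the
          # characteristic values of acceleration; kcal ~= MET * weight * hours
          return sum(MET[a] * weight_kg * (minutes / 60.0) for a, minutes in activity_log)

      def calorie_intake(food_log):
          # food_log: list of foods recognized from the food data
          return sum(FOOD_KCAL[f] for f in food_log)

      expend = calorie_expenditure([("sitting", 480), ("walking", 60), ("running", 20)], 70)
      intake = calorie_intake(["rice_bowl", "apple", "soda"])
      print(f"intake {intake} kcal, expenditure {expend:.0f} kcal, "
            f"balance {intake - expend:+.0f} kcal")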
  • Publication number: 20100141795
    Abstract: Provided are a method and an apparatus for tagging a photograph with information. The method of tagging a photograph with information, which calculates a shooting position of an input image from reference images having shooting position information to tag the shooting position information, includes: selecting a plurality of reference images; calculating a relative shooting position of the input image to the shooting positions of the reference images; calculating the shooting position of the input image on the basis of the calculation result of the calculating; and storing the shooting position and shooting direction information on the input image in an exchangeable image file format (EXIF) tag of the input image.
    Type: Application
    Filed: June 11, 2009
    Publication date: June 10, 2010
    Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
  • Publication number: 20100057455
    Abstract: A method for generating three-dimensional speech animation is provided using data-driven and machine learning approaches. It utilizes the most relevant part of the captured utterances for the synthesis of input phoneme sequences. If highly relevant data are missing or lacking, then it utilizes less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation.
    Type: Application
    Filed: August 26, 2008
    Publication date: March 4, 2010
    Inventors: Ig-Jae Kim, Hyeong-Seok Ko
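    Illustrative sketch (not from the patent): the selection policy, reuse the most relevant captured utterance for the input phoneme sequence and otherwise fall back to a learned model, reduced to a lookup with a stand-in fallback. The phoneme keys and motion curves are fabricated.

      # captured lip-motion segments indexed by phoneme sequence (toy data)
      CAPTURED = {("m", "a"): [0.1, 0.7, 0.4],
                  ("p", "a"): [0.0, 0.8, 0.5],
                  ("t", "o"): [0.2, 0.3, 0.6]}

      def learned_model(phonemes):
          # Stand-in for the machine-learning fallback: a generic mouth-opening
          # value per phoneme (a real system would use a trained regressor).
          openness = {"a": 0.8, "o": 0.6, "m": 0.1, "p": 0.0, "t": 0.2, "s": 0.3}
          return [openness.get(p, 0.4) for p in phonemes]

      def synthesize(phonemes):
          # Prefer the most relevant captured utterance for the input phoneme
          # sequence; rely on the learned model when such data are missing.
          key = tuple(phonemes)
          if key in CAPTURED:                  # highly relevant data exist
              return CAPTURED[key]
          return learned_model(phonemes)       # fall back to machine learning

      print(synthesize(["m", "a"]))   # reuses captured data
      print(synthesize(["s", "o"]))   # falls back to the learned model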
  • Publication number: 20090179739
    Abstract: Provided is an apparatus for recognizing an activity of daily living (ADL). The apparatus includes a radio frequency identification (RFID) reader for reading the information of an RFID tag to recognize a motion object, a motion detector attached to a moving subject for acquiring acceleration information and recognizing a motion characteristic, and a controller for receiving information on the motion object from the RFID reader and information on the motion characteristic from the motion detector and then recognizing an ADL.
    Type: Application
    Filed: June 6, 2008
    Publication date: July 16, 2009
    Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, Electronics and Telecommunications Research Institute
    Inventors: Ig Jae KIM, Hyoung Gon KIM, Sang Chul AHN
  • Patent number: 7535472
    Abstract: In blendshape-based facial animation, two main approaches are used to create the key expressions: manual sculpting and statistically-based techniques. Hand-generated expressions have the advantage of being intuitively recognizable, thus allowing animators to use conventional keyframe control. However, they may cover only a fraction of the expression space, resulting in large reproduction animation errors. On the other hand, statistically-based techniques produce eigenfaces that give minimal reproduction errors but are visually non-intuitive. In the invention the applicants propose a technique to convert a given set of hand-generated key expressions into another set of so-called quasi-eigen faces. The resulting expressions resemble the original hand-generated expressions, but have expression space coverages more like those of statistically generated expression bases. The effectiveness of the proposed technique is demonstrated by applying it to hand-generated expressions.
    Type: Grant
    Filed: April 5, 2006
    Date of Patent: May 19, 2009
    Assignee: Seoul National University Industry Foundation
    Inventors: Ig-Jae Kim, Hyeong-Seok Ko
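    Illustrative sketch (not from the patent, which solves a proper optimization): one crude way to approximate quasi-eigen faces is to pull each hand-sculpted expression toward the PCA subspace of captured expression data, improving how much of that data the basis can represent while keeping each shape close to its hand-made original. All data and the blending factor are synthetic.

      import numpy as np

      rng = np.random.default_rng(5)
      d, n, k = 60, 400, 8           # dofs per face, captured frames, basis size

      # captured expression data lying mostly in a k-dimensional subspace
      true_basis = rng.normal(size=(k, d))
      captured = rng.normal(size=(n, k)) @ true_basis + 0.05 * rng.normal(size=(n, d))
      hand_made = rng.normal(size=(k, d))            # hand-sculpted key expressions

      # statistical expression subspace (top PCA directions of the captured data)
      centered = captured - captured.mean(axis=0)
      P = np.linalg.svd(centered, full_matrices=False)[2][:k]   # (k, d), orthonormal rows

      # quasi-eigen faces: pull each hand-made expression toward the statistical
      # subspace while keeping it recognisable (alpha trades the two off)
      alpha = 0.3
      quasi = alpha * hand_made + (1 - alpha) * hand_made @ P.T @ P

      in_subspace = lambda B: np.linalg.norm(B @ P.T) / np.linalg.norm(B)
      cos = np.sum(quasi * hand_made, axis=1) / (
          np.linalg.norm(quasi, axis=1) * np.linalg.norm(hand_made, axis=1))
      print("fraction inside statistical subspace  hand-made:",
            round(float(in_subspace(hand_made)), 2),
            " quasi-eigen:", round(float(in_subspace(quasi)), 2))
      print("resemblance to originals (mean cosine):", round(float(cos.mean()), 2))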
  • Patent number: 7468742
    Abstract: The present invention discloses an interactive presentation system which allows a presenter to perform a presentation while interacting directly with the presentation material images in real time through gestures and/or voice. The interactive presentation system comprises: an active infrared camera; a command recognition system connected to the active infrared camera; and an image synthesis system connected to the active infrared camera and the command recognition system. The presentation system may further comprise a stereo camera set for properly synthesizing the presenter into a 3D image, and a 3D motion system. With this configuration, it is possible to embody an interactive presentation system in which a command given through the presenter's gesture or voice is processed in real time and the image of the presenter is synthesized into the presentation material screen in real time, thereby maximizing the audiovisual effect.
    Type: Grant
    Filed: December 2, 2004
    Date of Patent: December 23, 2008
    Assignee: Korea Institute of Science and Technology
    Inventors: Sang-Chul Ahn, Hyoung-Gon Kim, Ig-Jae Kim, Chang-Sik Hwang