Patents by Inventor Ig-Jae Kim
Ig-Jae Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140129989
Abstract: One or more embodiments of the present invention relate to an apparatus and method for generating a cognitive avatar. The user repeatedly selects images recognized as similar to a target face from stored face images of various impressions classified into a plurality of impression groups, and an avatar corresponding to the target face is generated by a cognitive approach based on these repeated selections. A natural avatar resembling the target face may thus be expressed without a separate analysis or re-analysis of the target face.
Type: Application
Filed: November 7, 2013
Publication date: May 8, 2014
Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Ig Jae KIM, A Rim LEE
-
Publication number: 20130321620
Abstract: Provided is an apparatus for recognizing object material. The apparatus includes: an imaging camera unit for capturing a spatial image including various objects in a space; an exploring radar unit for sending an incident wave to the objects and receiving spatial radar information including a surface reflected wave from the surface of each object and an internal reflected wave from the inside of each object; an information storage unit for storing reference physical property information corresponding to the material of each object; and a material recognition processor for recognizing the material information of each object by using the reference physical property information stored in the information storage unit, the spatial image provided by the imaging camera unit, and the spatial radar information provided by the exploring radar unit.
Type: Application
Filed: March 13, 2013
Publication date: December 5, 2013
Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Jaewon KIM, Ig Jae KIM, Seung Yeup HYUN, Se Yun KIM
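The material recognition step described above can be sketched as a nearest-neighbor lookup against stored reference physical properties. A minimal Python illustration, in which the material names, the two-value (surface, internal) reflectance representation, and all numbers are hypothetical and not taken from the patent:

```python
import numpy as np

# Hypothetical reference table: material name -> (surface reflectance,
# internal reflectance). Names and values are illustrative only.
REFERENCE_PROPERTIES = {
    "wood":     (0.30, 0.10),
    "concrete": (0.55, 0.05),
    "metal":    (0.95, 0.01),
}

def recognize_material(surface_reflect, internal_reflect):
    """Return the reference material whose stored physical properties are
    closest (Euclidean distance) to the measured radar reflections."""
    measured = np.array([surface_reflect, internal_reflect])
    best, best_dist = None, np.inf
    for name, props in REFERENCE_PROPERTIES.items():
        dist = np.linalg.norm(measured - np.array(props))
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```

A real implementation would also fuse the camera image (e.g., to segment objects before matching), which this sketch omits.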
-
Publication number: 20130314320
Abstract: A method of controlling a three-dimensional (3D) virtual cursor by using a portable electronic device, the method including: sensing at least one of a movement and a touch input of the portable electronic device through a sensor mounted in the device; and converting the sensed movement or touch input into a cursor control signal for controlling the operation of a cursor in a 3D space, and outputting the cursor control signal. According to the method, a 3D virtual cursor may be conveniently controlled, without location or time limits, by using a portable electronic device that the user carries.
Type: Application
Filed: December 26, 2012
Publication date: November 28, 2013
Inventors: Jae In HWANG, Ig Jae KIM, Sang Chul AHN, Heedong KO
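The conversion step, sensed device input to cursor control signal, can be sketched as a simple mapping function. The axis assignment (touch drag drives x/y, device tilt drives depth) and the gain value are assumptions for illustration, not the patent's mapping:

```python
def motion_to_cursor_signal(accel, touch_delta, gain=2.0):
    """Map device acceleration (ax, ay, az) and a 2D touch drag (dx, dy)
    into a 3D cursor displacement: the touch drag drives the x/y motion
    of the cursor, while the z-axis acceleration drives its depth."""
    ax, ay, az = accel
    dx, dy = touch_delta
    return (gain * dx, gain * dy, gain * az)
```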
-
Publication number: 20130297205
Abstract: The system for indoor navigation includes: a global coordinate generator for dividing an indoor space, extracting a feature point where images photographed by cameras installed at predetermined locations overlap, and generating a global coordinate of the feature point; a database for storing information about the image and global coordinate of the feature point generated by the global coordinate generator; and a mobile device for extracting a feature point from an image of the surrounding environment photographed by its camera, comparing the extracted feature point with the feature points stored in the database, and estimating its location and direction by using the global coordinate of a coincident feature point.
Type: Application
Filed: January 8, 2013
Publication date: November 7, 2013
Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Ig Jae KIM, Hee Dong KO, Jong Weon LEE
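The mobile device's matching step can be sketched as nearest-descriptor matching against the database, followed by aggregating the global coordinates of the coincident features. The descriptor representation, distance threshold, and mean-coordinate aggregation are illustrative assumptions:

```python
import numpy as np

def estimate_location(query_descs, db_descs, db_coords, max_dist=0.5):
    """Match each query feature descriptor to its nearest database
    descriptor; if close enough, treat it as a coincident feature and
    collect its stored global coordinate. Return the mean coordinate
    of all matches (or None if nothing matched)."""
    matched = []
    for q in query_descs:
        dists = np.linalg.norm(db_descs - q, axis=1)
        i = int(np.argmin(dists))
        if dists[i] < max_dist:
            matched.append(db_coords[i])
    if not matched:
        return None
    return np.mean(matched, axis=0)
```

Direction estimation would additionally use the geometry between matched points and the camera, which this sketch leaves out.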
-
Publication number: 20130261891
Abstract: Provided are a method and apparatus for projecting driving information in front of a vehicle with respect to a destination. The method may include obtaining driving information and controlling a driving information projecting apparatus on the basis of the obtained driving information. The driving information projecting apparatus may include a driving information display unit displaying the obtained driving information and a lighting unit used as a light source for the driving information display unit.
Type: Application
Filed: March 15, 2013
Publication date: October 3, 2013
Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Ig Jae KIM, Jaewon KIM
-
Publication number: 20130235033
Abstract: The present disclosure relates to a three-dimensional montage generation system and method based on a single two-dimensional image. An embodiment of the present disclosure may generate a three-dimensional montage easily, quickly and accurately from two-dimensional front face image data, and may statistically estimate face portions that cannot be restored from a single photograph by using a previously prepared face database. Accordingly, an embodiment of the present disclosure may generate a three-dimensional personal model from a single two-dimensional front face photograph, and depth information such as nose height, lip protrusion and eye contour may be effectively estimated by means of the statistical distribution and correlation of the data.
Type: Application
Filed: March 8, 2013
Publication date: September 12, 2013
Inventors: Ig Jae KIM, Yu Jin HONG
-
Publication number: 20130202162
Abstract: A method of reconstructing a three-dimensional (3D) facial shape with super resolution, even from a short moving picture containing a front facial image. A super-resolution facial image is acquired by applying, as a weighting factor, the per-unit-patch similarity between a target frame and the remaining frames among a plurality of continuous frames including the front facial image; the 3D facial shape is then reconstructed from the acquired super-resolution facial image.
Type: Application
Filed: February 1, 2013
Publication date: August 8, 2013
Applicant: Korea Institute of Science and Technology
Inventors: Ig Jae KIM, Jaewon KIM, Sang Chul AHN
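The per-patch weighting idea can be sketched as a similarity-weighted fusion of co-located patches across frames. The Gaussian-of-mean-absolute-difference similarity measure used here is an assumption for illustration, not necessarily the patent's measure:

```python
import numpy as np

def super_resolve_patch(target_patch, other_patches, sigma=0.1):
    """Fuse the target frame's patch with the co-located patches from the
    remaining frames, each weighted by its similarity to the target patch.
    Dissimilar patches (e.g., due to motion) get near-zero weight, so they
    contribute little to the fused result."""
    patches = [np.asarray(target_patch)] + [np.asarray(p) for p in other_patches]
    weights = []
    for p in patches:
        diff = np.mean(np.abs(p - patches[0]))          # 0 for the target itself
        weights.append(np.exp(-(diff ** 2) / (2 * sigma ** 2)))
    weights = np.array(weights)
    stacked = np.stack(patches)
    # Normalized weighted average over the frame axis.
    return np.tensordot(weights / weights.sum(), stacked, axes=1)
```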
-
Publication number: 20130120450
Abstract: A method of providing an augmented reality tour platform service for the inside of a building by using a wireless communication device. The method includes: acquiring an image of the building from the wireless communication device; collecting information associated with the acquired image; extracting a candidate building group from a previously established database on the basis of the acquired image and the collected information; specifying a building matching the acquired image from among the extracted candidate building group; and transmitting information regarding the inside of the specified building to the wireless communication device.
Type: Application
Filed: November 14, 2012
Publication date: May 16, 2013
Inventors: Ig Jae KIM, Jaewon KIM, Heedong KO
-
Publication number: 20130044962
Abstract: A method and system for reconstructing an image displayed on an electronic device connected to a network as a high resolution image. The method of reconstructing a selected area of the displayed image includes: receiving a request to expand the selected area; collecting images including the selected area from the Internet; correcting the selected area to have a high resolution while expanding it based on the collected images; and displaying the expanded high resolution image on the electronic device.
Type: Application
Filed: July 10, 2012
Publication date: February 21, 2013
Inventors: Jaewon KIM, Ig Jae Kim, Sang Chul Ahn, Jong-Ho Lee
-
Patent number: 8320706
Abstract: Provided are a method and an apparatus for tagging a photograph with information. The method, which calculates the shooting position of an input image from reference images having shooting position information, includes: selecting a plurality of reference images; calculating the shooting position of the input image relative to the shooting positions of the reference images; calculating the shooting position of the input image on the basis of that calculation; and storing the shooting position and shooting direction information of the input image in an exchangeable image file format (EXIF) tag of the input image.
Type: Grant
Filed: June 11, 2009
Date of Patent: November 27, 2012
Assignee: Korea Institute of Science and Technology
Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
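The position-from-references step can be sketched as a similarity-weighted average of the reference images' known shooting positions. This weighting scheme is an illustrative stand-in for the patent's relative-position calculation, and the similarity scores are assumed inputs:

```python
def estimate_shooting_position(ref_positions, similarities):
    """Estimate the input image's shooting position as a similarity-weighted
    average of the reference images' known (lat, lon) shooting positions.
    ref_positions: list of (lat, lon); similarities: one non-negative
    score per reference image (higher = more similar to the input)."""
    total = sum(similarities)
    lat = sum(p[0] * w for p, w in zip(ref_positions, similarities)) / total
    lon = sum(p[1] * w for p, w in zip(ref_positions, similarities)) / total
    return (lat, lon)
```

Writing the result into the image's EXIF GPS fields (as the patent describes) would be done with an EXIF library and is omitted here.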
-
Publication number: 20120249743
Abstract: A method that highlights the depth-of-field (DOF) region of an image and performs additional image processing using that region. The method includes: obtaining a first pattern image and a second pattern image, captured by emitting light in different patterns from an illumination device; detecting the DOF region by using the two pattern images; determining weights to highlight the DOF region; and generating the highlighted DOF image by applying the weights to a combined image of the first and second pattern images.
Type: Application
Filed: April 2, 2012
Publication date: October 4, 2012
Applicant: Korea Institute of Science and Technology
Inventors: Jaewon KIM, Ig Jae KIM, Sang Chul AHN
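The detect-then-weight pipeline can be sketched as follows. The detection heuristic (in-focus pixels preserve the projected pattern, so the two pattern images differ strongly there, while defocus blur washes the patterns out elsewhere), the threshold, and the boost factor are all assumptions for illustration:

```python
import numpy as np

def highlight_dof(pattern1, pattern2, contrast_thresh=0.2, boost=1.5):
    """Detect the DOF region as pixels where the two pattern images
    (values in [0, 1]) differ strongly, then apply a brightness-boosting
    weight to those pixels in the combined (averaged) image."""
    combined = (pattern1 + pattern2) / 2.0
    dof_mask = np.abs(pattern1 - pattern2) > contrast_thresh
    weights = np.where(dof_mask, boost, 1.0)
    return np.clip(combined * weights, 0.0, 1.0)
```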
-
Patent number: 8180638
Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a GMM to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vector, is applied. Emotion recognition is performed by applying a discriminative weight, evaluated using the minimum-classification-error GMM, to the feature vectors of emotions that are difficult to classify, thereby enhancing recognition performance.
Type: Grant
Filed: February 23, 2010
Date of Patent: May 15, 2012
Assignee: Korea Institute of Science and Technology
Inventors: Hyoung Gon Kim, Ig Jae Kim, Joon-Hyuk Chang, Kye Hwan Lee, Chang Seok Bae
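The scoring step, a per-emotion GMM log-likelihood with a discriminative weight on each feature dimension, can be sketched with NumPy. The diagonal-covariance form and fixed weight values stand in for the minimum-classification-error training the patent describes:

```python
import numpy as np

def weighted_gmm_loglik(x, means, variances, priors, disc_weights):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM,
    with a per-dimension discriminative weight applied inside each
    component's log-density before summing over dimensions."""
    log_comp = []
    for mu, var, pi in zip(means, variances, priors):
        ll_dims = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        log_comp.append(np.log(pi) + np.sum(disc_weights * ll_dims))
    return np.logaddexp.reduce(log_comp)   # log-sum-exp over components

def classify_emotion(x, models, disc_weights):
    """Pick the emotion whose weighted GMM gives the highest score.
    models: name -> (means, variances, priors) for that emotion's GMM."""
    scores = {name: weighted_gmm_loglik(x, *params, disc_weights)
              for name, params in models.items()}
    return max(scores, key=scores.get)
```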
-
Patent number: 7978075
Abstract: Provided is an apparatus for recognizing an activity of daily living (ADL). The apparatus includes: a radio frequency identification (RFID) reader for reading the information of an RFID tag to recognize a motion object; a motion detector, attached to a moving subject, for acquiring acceleration information and recognizing a motion characteristic; and a controller for receiving information on the motion object from the RFID reader and on the motion characteristic from the motion detector, and then recognizing an ADL.
Type: Grant
Filed: June 6, 2008
Date of Patent: July 12, 2011
Assignees: Korea Institute of Science and Technology, Electronics and Telecommunications Research Institute
Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
-
Publication number: 20100217595
Abstract: Disclosed herein is a method for emotion recognition based on a minimum classification error. In the method, a speaker's neutral emotion is extracted using a Gaussian mixture model (GMM), and the other emotions are classified using a GMM to which a discriminative weight, chosen to minimize the loss function of the classification error for the emotion-recognition feature vector, is applied. Emotion recognition is performed by applying a discriminative weight, evaluated using the minimum-classification-error GMM, to the feature vectors of emotions that are difficult to classify, thereby enhancing recognition performance.
Type: Application
Filed: February 23, 2010
Publication date: August 26, 2010
Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, Electronics and Telecommunications Research Institute
Inventors: Hyoung Gon KIM, Ig Jae KIM, Joon-Hyuk CHANG, Kye Hwan LEE, Chang Seok BAE
-
Publication number: 20100191155
Abstract: An apparatus for calculating calorie balance based on activity classification includes: a calculation part calculating characteristic values of acceleration and the user's calorie expenditure from the user's activities, and calculating food data and the user's calorie intake from foods taken by the user; and a recognition part recognizing the user's activities based on the characteristic values of acceleration, and recognizing the foods based on the food data. The characteristic values of acceleration are extracted from the acceleration data of acceleration sensors, which determine the user's activities, and include information on the relationship between the acceleration data and the user's activities. The calculation part calculates the calorie balance from the user's calorie expenditure and calorie intake.
Type: Application
Filed: August 4, 2009
Publication date: July 29, 2010
Applicant: Korea Institute of Science and Technology
Inventors: Ig-Jae KIM, Hyoung Gon KIM, Sang Chul AHN
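The two numeric steps, extracting characteristic values from accelerometer data and computing the balance, can be sketched briefly. The choice of mean and standard deviation as the characteristic values is an assumption; the patent does not name specific features here:

```python
import numpy as np

def acceleration_features(samples):
    """Characteristic values extracted from raw accelerometer samples;
    mean and standard deviation are common choices (assumed here) that
    an activity classifier could consume."""
    a = np.asarray(samples, dtype=float)
    return a.mean(), a.std()

def calorie_balance(expenditure_kcal, intake_kcal):
    """Calorie balance as expenditure minus intake: positive means a
    deficit (more burned than eaten), negative means a surplus."""
    return expenditure_kcal - intake_kcal
```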
-
Publication number: 20100141795
Abstract: Provided are a method and an apparatus for tagging a photograph with information. The method, which calculates the shooting position of an input image from reference images having shooting position information, includes: selecting a plurality of reference images; calculating the shooting position of the input image relative to the shooting positions of the reference images; calculating the shooting position of the input image on the basis of that calculation; and storing the shooting position and shooting direction information of the input image in an exchangeable image file format (EXIF) tag of the input image.
Type: Application
Filed: June 11, 2009
Publication date: June 10, 2010
Applicant: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Ig Jae Kim, Hyoung Gon Kim, Sang Chul Ahn
-
Publication number: 20100057455
Abstract: A method for generating three-dimensional speech animation using data-driven and machine learning approaches. It utilizes the most relevant part of the captured utterances for the synthesis of input phoneme sequences. If highly relevant data are missing or lacking, it utilizes less relevant (but more abundant) data and relies more heavily on machine learning for the lip-synch generation.
Type: Application
Filed: August 26, 2008
Publication date: March 4, 2010
Inventors: Ig-Jae Kim, Hyeong-Seok Ko
-
Publication number: 20090179739
Abstract: Provided is an apparatus for recognizing an activity of daily living (ADL). The apparatus includes: a radio frequency identification (RFID) reader for reading the information of an RFID tag to recognize a motion object; a motion detector, attached to a moving subject, for acquiring acceleration information and recognizing a motion characteristic; and a controller for receiving information on the motion object from the RFID reader and on the motion characteristic from the motion detector, and then recognizing an ADL.
Type: Application
Filed: June 6, 2008
Publication date: July 16, 2009
Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, Electronics and Telecommunications Research Institute
Inventors: Ig Jae KIM, Hyoung Gon KIM, Sang Chul AHN
-
Patent number: 7535472
Abstract: In blendshape-based facial animation, two main approaches are used to create the key expressions: manual sculpting and statistically-based techniques. Hand-generated expressions have the advantage of being intuitively recognizable, thus allowing animators to use conventional keyframe control; however, they may cover only a fraction of the expression space, resulting in large reproduction errors. Statistically-based techniques, on the other hand, produce eigenfaces that give minimal reproduction errors but are visually non-intuitive. In the invention, the applicants propose a technique to convert a given set of hand-generated key expressions into another set of so-called quasi-eigen faces. The resulting expressions resemble the original hand-generated expressions but have expression space coverage closer to that of statistically generated expression bases. The effectiveness of the proposed technique is demonstrated by applying it to hand-generated expressions.
Type: Grant
Filed: April 5, 2006
Date of Patent: May 19, 2009
Assignee: Seoul National University Industry Foundation
Inventors: Ig-Jae Kim, Hyeong-Seok Ko
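The contrast between the two expression bases can be illustrated with a plain PCA projection: projecting hand-generated expressions onto the leading principal components of captured facial data keeps them recognizable while constraining them to the statistical expression space. This projection is an illustrative stand-in, not the patent's actual quasi-eigen conversion:

```python
import numpy as np

def quasi_eigen_faces(hand_expressions, captured_data, n_basis=2):
    """Project hand-generated key expressions (rows = flattened face
    vectors) onto the leading principal components of captured facial
    data, so the results lie in the statistically covered subspace."""
    mean = captured_data.mean(axis=0)
    centered = captured_data - mean
    # Leading right singular vectors = principal directions of the data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_basis]
    coeffs = (hand_expressions - mean) @ basis.T
    return mean + coeffs @ basis
```

Expressions already inside the statistical subspace pass through unchanged; components outside it are removed, which is the reproduction-error trade-off the abstract describes.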
-
Patent number: 7468742
Abstract: The present invention discloses an interactive presentation system which allows a presenter to perform a presentation while interacting directly with presentation material images in real time through gesture and/or voice. The interactive presentation system comprises: an active infrared camera; a command recognition system connected to the active infrared camera; and an image synthesis system connected to the active infrared camera and the command recognition system. The presentation system may further comprise a stereo camera set and a 3D motion system for properly synthesizing the presenter into a 3D image. With this configuration, a command given through the presenter's gesture or voice is processed in real time and the image of the presenter is synthesized into the presentation material screen in real time, maximizing the audiovisual effect.
Type: Grant
Filed: December 2, 2004
Date of Patent: December 23, 2008
Assignee: Korea Institute of Science and Technology
Inventors: Sang-Chul Ahn, Hyoung-Gon Kim, Ig-Jae Kim, Chang-Sik Hwang