Patents by Inventor Kap Kee Kim
Kap Kee Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20140204089
Abstract: Disclosed is a method and apparatus for creating a three-dimensional (3D) montage. The apparatus for creating a 3D montage may include an image information extraction unit to extract image information from a face image to be reconstructed, using a face area based on statistical feature information and a feature vector, a 3D unique face reconstruction unit to reconstruct a 3D unique face model by fitting a 3D standard face model to face images of each view for the face image and feature information of each part for the face area, a 3D montage model generation unit to generate a 3D montage model by combining the reconstructed 3D unique face model with 3D face expression model information and 3D decoration model information, and a montage image generation unit to generate a montage image by projecting the generated 3D montage model from each view.
Type: Application
Filed: January 17, 2014
Publication date: July 24, 2014
Inventors: Seong Jae LIM, Kap Kee KIM, Seung Uk YOON, Bon Woo HWANG, Hye Ryeong JUN, Jin Sung CHOI, Bon Ki KOO
-
Publication number: 20140192045
Abstract: Provided is a method and apparatus for generating a three-dimensional (3D) caricature. The apparatus for generating a 3D caricature may include a 3D face data generation unit to generate 3D face data of a user corresponding to a shape and a texture of a face of the user, a 3D unique face model generation unit to generate a 3D unique face model using a shape and a texture of a unique face based on the 3D face data and a reference face, and a 3D caricature generation unit to generate a 3D caricature using the 3D unique face model and a caricature base face model.
Type: Application
Filed: January 6, 2014
Publication date: July 10, 2014
Inventors: Bon Woo HWANG, Kap Kee KIM, Seong Jae LIM, Seung Uk YOON, Hye Ryeong JUN, Bon Ki KOO, Jin Sung CHOI
-
Publication number: 20140168216
Abstract: A 3D avatar output device and method are disclosed. The 3D avatar output device of a vending machine type may include an input data receiving unit to receive input data including at least one of user information, a 3D avatar theme, and a 3D avatar output form; an image obtaining unit to obtain an image of a user through a camera included in the 3D avatar output device; a restoration model generation unit to generate a restoration model by extracting a facial area from the obtained image; a unique model generation unit to generate a unique model of the user based on the input data and the restoration model; and a 3D avatar output unit to generate a 3D avatar corresponding to the unique model and output the 3D avatar according to the 3D avatar output form.
Type: Application
Filed: December 11, 2013
Publication date: June 19, 2014
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kap Kee KIM, Bon Woo HWANG, Seung Uk YOON, Seong Jae LIM, Hye Ryeong JUN, Jin Sung CHOI, Bon Ki KOO
-
Patent number: 8538150
Abstract: Embodiments of the present invention provide methods and apparatuses for segmenting multi-view images into foreground and background based on a codebook. For example, in some embodiments, an apparatus is provided that includes: (a) a background model generation unit for extracting a codebook from multi-view background images and generating codeword mapping tables operating in conjunction with the codebook; and (b) a foreground and background segmentation unit for segmenting multi-view images into foreground and background using the codebook and the codeword mapping tables.
Type: Grant
Filed: May 27, 2010
Date of Patent: September 17, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kap Kee Kim, Bon Woo Hwang, Bon Ki Koo
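To make the codebook idea concrete, here is a minimal, illustrative sketch of codebook-style background modeling on single-view grayscale frames. It is not the patented multi-view method (there are no codeword mapping tables here), and the function names, the intensity-range codewords, and the tolerance parameter are all simplifying assumptions: each pixel accumulates one or more [lo, hi] intensity codewords from background frames, and a pixel in a new frame is labeled foreground when no codeword matches.

```python
import numpy as np

def build_codebook(background_frames, tol=10):
    """Build a per-pixel codebook: each codeword stores the [lo, hi]
    intensity range observed for that pixel across background frames."""
    h, w = background_frames[0].shape
    codebook = [[[] for _ in range(w)] for _ in range(h)]
    for frame in background_frames:
        for y in range(h):
            for x in range(w):
                v = int(frame[y, x])
                for cw in codebook[y][x]:
                    if cw[0] - tol <= v <= cw[1] + tol:
                        cw[0] = min(cw[0], v)   # widen the matched codeword
                        cw[1] = max(cw[1], v)
                        break
                else:
                    codebook[y][x].append([v, v])  # no match: new codeword
    return codebook

def segment(frame, codebook, tol=10):
    """Label a pixel foreground (True) when no codeword matches it."""
    h, w = frame.shape
    fg = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            v = int(frame[y, x])
            fg[y, x] = not any(c[0] - tol <= v <= c[1] + tol
                               for c in codebook[y][x])
    return fg
```

A real implementation would vectorize this and, as in the patent, share codebook statistics across the multiple calibrated views rather than treating each view independently.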
-
Patent number: 8462149
Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertices information (511), property information (512) representing properties of the 3D mesh model, and connectivity information (513) between vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertices and property information of the 3D mesh model by using the vertices, property and connectivity information (511, 512, 513). Further, the apparatus for 3D mesh compression based on quantization includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information and connectivity information (511, 512, 513) by using the decision bit.
Type: Grant
Filed: April 16, 2009
Date of Patent: June 11, 2013
Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
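The quantization step that this apparatus applies to vertex data can be sketched in a few lines. This is a generic uniform quantizer, not the patented pipeline (it omits the property/connectivity streams and the decision-bit encoding entirely), and the function names and 12-bit default are illustrative assumptions: each coordinate axis is mapped to integer levels so the quantized mesh can be entropy-coded compactly.

```python
import numpy as np

def quantize_vertices(vertices, bits=12):
    """Uniformly quantize float vertex coordinates to integer levels.
    Returns the quantized indices plus the (vmin, step) needed to invert."""
    v = np.asarray(vertices, dtype=np.float64)
    vmin = v.min(axis=0)
    vmax = v.max(axis=0)
    levels = (1 << bits) - 1                       # e.g. 4095 for 12 bits
    step = np.where(vmax > vmin, (vmax - vmin) / levels, 1.0)
    q = np.round((v - vmin) / step).astype(np.int64)
    return q, vmin, step

def dequantize_vertices(q, vmin, step):
    """Recover approximate coordinates; error is at most step/2 per axis."""
    return q * step + vmin
```

The bit depth trades reconstruction error against bitstream size, which is the core dial in quantization-based mesh compression.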
-
Patent number: 8417034
Abstract: Disclosed herein is an apparatus and method for separating a foreground and a background. The apparatus includes a background model creation unit for creating a code book including a plurality of code words in order to separate the foreground and the background, and a foreground/background separation unit for separating the foreground and the background using the created code book. The method includes the steps of creating a code book including a plurality of code words in order to separate the foreground and the background, rearranging the code words of the created code book on the basis of the number of sample data that belong to each of the code words, and separating the foreground and the background using the code book.
Type: Grant
Filed: August 5, 2009
Date of Patent: April 9, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kap Kee Kim, Bon Woo Hwang, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Ho Won Kim, Bon Ki Koo, Gil Haeng Lee
-
Publication number: 20130057656
Abstract: A system for managing face data includes a global face capturing unit configured to capture a global face image; and a global face data generation unit configured to obtain shape information and texture information of global face data, and generate the global face data. Further, the system includes a local face capturing unit configured to capture a plurality of local face images; and a global face posture extraction unit configured to estimate a position and a direction of the face of a captured user. Furthermore, the system includes a local capturing device posture extraction unit configured to extract posture information of the local face capturing unit; and a local face data generation unit configured to generate texture information and shape information, and generate local face data.
Type: Application
Filed: August 3, 2012
Publication date: March 7, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Bon Woo HWANG, Kap Kee Kim, Seung-Uk Yoon, Bonki Koo, Ji Hyung Lee
-
Publication number: 20130050434
Abstract: The present invention provides a local multi-resolution 3-D face-inherent model generation apparatus, including one or more 3-D facial model generation cameras for photographing a face of an object at various angles in order to obtain one or more 3-D face models, a 3-D face-inherent model generation unit for generating a 3-D face-inherent model by composing the one or more 3-D face models, a local photographing camera for photographing a local part of the face of the object, a control unit for controlling the position of the local photographing camera on the 3-D face-inherent model, and a local multi-resolution 3-D face-inherent model generation unit for generating a local multi-resolution face-inherent model by composing an image captured by the local photographing camera and the 3-D face-inherent model; a local multi-resolution 3-D face-inherent model generation method using the local multi-resolution 3-D face-inherent model generation apparatus; and a skin management system.
Type: Application
Filed: July 31, 2012
Publication date: February 28, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kap Kee KIM, Seung Uk Yoon, Bon Woo Hwang, Ji Hyung Lee, Bon Ki Koo
-
Patent number: 8249867
Abstract: A microphone-array-based speech recognition system using blind source separation (BSS) and a target speech extraction method in the system are provided. The speech recognition system performs an independent component analysis (ICA) to separate mixed signals input through a plurality of microphones into sound-source signals, extracts the one target speech spoken for speech recognition from the separated sound-source signals by using a Gaussian mixture model (GMM) or a hidden Markov model (HMM), and automatically recognizes a desired speech from the extracted target speech. Accordingly, it is possible to obtain a high speech recognition rate even in a noisy environment.
Type: Grant
Filed: September 30, 2008
Date of Patent: August 21, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hoon Young Cho, Yun Keun Lee, Jeom Ja Kang, Byung Ok Kang, Kap Kee Kim, Sung Joo Lee, Ho Young Jung, Hoon Chung, Jeon Gue Park, Hyung Bae Jeon
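The selection step described above — scoring each ICA-separated source against a statistical speech model and keeping the best match — can be illustrated with a deliberately reduced sketch. This is not the patented system: the single diagonal-covariance Gaussian below stands in for a full GMM/HMM, the feature vectors are assumed to be precomputed (e.g. MFCC-like frames), and all names are hypothetical.

```python
import numpy as np

def diag_gauss_loglik(frames, mean, var):
    """Average per-frame log-likelihood of feature frames under a single
    diagonal-covariance Gaussian (a 1-component stand-in for a GMM)."""
    d = frames - mean
    ll = -0.5 * (np.log(2 * np.pi * var) + d**2 / var).sum(axis=1)
    return ll.mean()

def pick_target(sources_features, mean, var):
    """Return the index of the separated source whose features score
    highest under the speech model, mimicking GMM-based selection."""
    scores = [diag_gauss_loglik(f, mean, var) for f in sources_features]
    return int(np.argmax(scores))
```

In the actual system each separated channel would be scored by a speech-trained GMM (or decoded by an HMM), and only the winning channel is passed on to the recognizer.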
-
Patent number: 8219396
Abstract: An apparatus for evaluating the performance of speech recognition includes a speech database for storing N-number of test speech signals for evaluation. A speech recognizer is located in an actual environment and executes speech recognition on the test speech signals reproduced from the speech database through a loudspeaker in that environment to produce speech recognition results. A performance evaluation module evaluates the performance of the speech recognition by comparing the correct answers with the speech recognition results.
Type: Grant
Filed: December 16, 2008
Date of Patent: July 10, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hoon-Young Cho, Yunkeun Lee, Ho-Young Jung, Byung Ok Kang, Jeom Ja Kang, Kap Kee Kim, Sung Joo Lee, Hoon Chung, Jeon Gue Park, Hyung-Bae Jeon
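The comparison of recognition results against correct answers is conventionally summarized as a word error rate (WER). The patent does not specify its metric, so the following is a standard WER sketch, not the patented evaluation module: edit distance between the reference transcript and the recognizer output, normalized by reference length.

```python
def word_error_rate(reference, hypothesis):
    """Word error rate via Levenshtein edit distance:
    (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[-1][-1] / max(len(ref), 1)
```

Averaging this over the N test utterances gives a single performance figure for the recognizer in the tested acoustic environment.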
-
Publication number: 20120154393
Abstract: Disclosed herein are an apparatus and method for creating animation by capturing the motions of a non-rigid object. The apparatus includes a geometry mesh reconstruction unit, a motion capture unit, and a content creation unit. The geometry mesh reconstruction unit receives moving images captured by a plurality of cameras, and generates a reconstruction mesh set for each frame. The motion capture unit generates mesh graph sets for the reconstruction mesh set and generates motion data, including motion information, using the mesh graph sets. The content creation unit creates three-dimensional (3D) content for a non-rigid object by generating a final transformation mesh set, having a topology similar to that of the reconstruction mesh set, using the motion data.
Type: Application
Filed: December 21, 2011
Publication date: June 21, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ji-Hyung LEE, Bon-Ki Koo, Yoon-Seok Choi, Jeung-Chul Park, Do-Hyung Kim, Bon-Woo Hwang, Kap-Kee Kim, Seong-Jae Lim, Han-Byul Joo, Seung-Uk Yoon
-
Publication number: 20120155743
Abstract: Disclosed herein are an apparatus and method for correcting a disparity map. The apparatus includes a disparity map area setting unit, a pose estimation unit, and a disparity map correction unit. The apparatus removes the noise in the disparity map attributable to stereo matching and also fills in holes attributable to occlusion using information about the depth of a 3-dimensional (3D) model produced in the frame preceding the current frame, thereby improving disparity map and depth performance and providing high-accuracy depth information to the applications that use it.
Type: Application
Filed: December 12, 2011
Publication date: June 21, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventor: Kap-Kee KIM
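The hole-filling idea — reusing depth from the preceding frame wherever occlusion left the current disparity map empty — reduces to a masked copy in its simplest form. This sketch assumes the previous frame's depth has already been warped into the current view (the pose-estimation step the abstract mentions is omitted), and the hole convention of disparity 0 is an illustrative assumption.

```python
import numpy as np

def fill_disparity_holes(disparity, prev_disparity, hole_value=0):
    """Fill occlusion holes (pixels equal to hole_value) in the current
    disparity map with values from the previous frame's map; all other
    pixels are left untouched."""
    out = disparity.copy()
    holes = disparity == hole_value
    out[holes] = prev_disparity[holes]
    return out
```

A production version would also validate the borrowed values (e.g. reject ones inconsistent with neighboring disparities) and apply noise filtering to the stereo-matching output, as the patent application describes.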
-
Publication number: 20110234763
Abstract: An apparatus for transmitting a multi-view stereoscopic video includes: a control unit configured to receive a group of stereoscopic images taken by a plurality of stereoscopic imaging devices; a generation unit configured to select at least one stereoscopic frame from stereoscopic frames of the received group of stereoscopic images, arrange the selected stereoscopic frames successively, and generate a multi-view stereoscopic video; an encoding unit configured to encode the generated multi-view stereoscopic video; and a transmission unit configured to transmit the encoded multi-view stereoscopic video through a transmission network.
Type: Application
Filed: October 21, 2010
Publication date: September 29, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Bon-Woo HWANG, Kap-Kee KIM, Bonki KOO
-
Publication number: 20110142343
Abstract: An apparatus for segmenting multi-view images into foreground and background based on a codebook includes: a background model generation unit for extracting a codebook from multi-view background images and generating codeword mapping tables operating in conjunction with the codebook; and a foreground and background segmentation unit for segmenting multi-view images into foreground and background using the codebook and the codeword mapping tables.
Type: Application
Filed: May 27, 2010
Publication date: June 16, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kap Kee KIM, Bon Woo HWANG, Bon Ki KOO
-
Publication number: 20110123168
Abstract: A multimedia application system uses metadata for sensory devices. The system includes: a sensory-device engine for generating a sensory device command (SDC) for controlling the sensory devices based on sensory effect information (SEI) generated to represent sensory effects by using the sensory devices depending on video contents, user preference information (UPI) of the sensory devices, and device capability information (DCI) indicative of the reproducing capability of the sensory devices; and a sensory-device controller for controlling the sensory devices to perform sensory effect reproduction in response to the generated SDC.
Type: Application
Filed: June 19, 2009
Publication date: May 26, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Maeng Sub Cho, Jin Seo Kim, Bon Ki Koo, Ji Hyung Lee, Chang Woo Chu, Ho Won Kim, Il Kyu Park, Yoon-Seok Choi, Ji Young Park, Seong Jae Lim, Bon Woo Hwang, Jeung Chul Park, Kap Kee Kim, Sang-Kyun Kim, Yong-Soo Joo
-
Publication number: 20110046923
Abstract: An apparatus for low-complexity 3D mesh compression includes: a data analyzing unit for decomposing data of an input 3D mesh model into vertices information, property information representing properties of the 3D mesh model, and connectivity information between vertices constituting the 3D mesh model; a mesh model quantizing unit for producing quantized vertices, property and connectivity information of the 3D mesh model by using the vertices, property and connectivity information; and a sharable vertex analysis unit for analyzing sharing information between shared vertices of the 3D mesh model. Further, the apparatus includes a data modulation unit for performing a circular DPCM prediction by using quantized values of the consecutive connectivity information of the 3D mesh model; and an entropy encoding unit for outputting coded data of the quantized vertices and property information, and differential pulse-code modulated connectivity information as a bitstream.
Type: Application
Filed: April 6, 2009
Publication date: February 24, 2011
Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
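The circular DPCM step mentioned above can be sketched generically: encode each quantized connectivity value as the difference from its predecessor, wrapped modulo the quantization range so every residual stays in a fixed non-negative interval. This is a minimal illustration of the general technique, not the patented codec, and the function names and encoding of the first element are assumptions.

```python
def circular_dpcm_encode(values, modulus):
    """Differences between consecutive quantized values, wrapped mod
    `modulus` so every residual lies in [0, modulus)."""
    residuals = [values[0] % modulus]          # first value sent as-is
    for prev, cur in zip(values, values[1:]):
        residuals.append((cur - prev) % modulus)
    return residuals

def circular_dpcm_decode(residuals, modulus):
    """Invert the encoding by accumulating residuals mod `modulus`."""
    values = [residuals[0] % modulus]
    for r in residuals[1:]:
        values.append((values[-1] + r) % modulus)
    return values
```

Because nearby connectivity indices tend to be close, the wrapped residuals cluster near 0 (or near `modulus`), which is what makes the subsequent entropy-encoding stage effective.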
-
Publication number: 20110037763
Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertices information (511), property information (512) representing properties of the 3D mesh model, and connectivity information (513) between vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertices and property information of the 3D mesh model by using the vertices, property and connectivity information (511, 512, 513). Further, the apparatus for 3D mesh compression based on quantization includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information and connectivity information (511, 512, 513) by using the decision bit.
Type: Application
Filed: April 16, 2009
Publication date: February 17, 2011
Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
-
Publication number: 20100158372
Abstract: Disclosed herein is an apparatus and method for separating a foreground and a background. The apparatus includes a background model creation unit for creating a code book including a plurality of code words in order to separate the foreground and the background, and a foreground/background separation unit for separating the foreground and the background using the created code book. The method includes the steps of creating a code book including a plurality of code words in order to separate the foreground and the background, rearranging the code words of the created code book on the basis of the number of sample data that belong to each of the code words, and separating the foreground and the background using the code book.
Type: Application
Filed: August 5, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kap Kee KIM, Bon Woo Hwang, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Ho Won Kim, Bon Ki Koo, Gil Haeng Lee
-
Publication number: 20090157399
Abstract: An apparatus for evaluating the performance of speech recognition includes a speech database for storing N-number of test speech signals for evaluation. A speech recognizer is located in an actual environment and executes speech recognition on the test speech signals reproduced from the speech database through a loudspeaker in that environment to produce speech recognition results. A performance evaluation module evaluates the performance of the speech recognition by comparing the correct answers with the speech recognition results.
Type: Application
Filed: December 16, 2008
Publication date: June 18, 2009
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hoon-Young CHO, Yunkeun Lee, Ho-Young Jung, Byung Ok Kang, Jeom Ja Kang, Kap Kee Kim, Sung Joo Lee, Hoon Chung, Jeon Gue Park, Hyung-Bae Jeon
-
Publication number: 20090150146
Abstract: A microphone-array-based speech recognition system using blind source separation (BSS) and a target speech extraction method in the system are provided. The speech recognition system performs an independent component analysis (ICA) to separate mixed signals input through a plurality of microphones into sound-source signals, extracts the one target speech spoken for speech recognition from the separated sound-source signals by using a Gaussian mixture model (GMM) or a hidden Markov model (HMM), and automatically recognizes a desired speech from the extracted target speech. Accordingly, it is possible to obtain a high speech recognition rate even in a noisy environment.
Type: Application
Filed: September 30, 2008
Publication date: June 11, 2009
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hoon Young CHO, Yun Keun Lee, Jeom Ja Kang, Byung Ok Kang, Kap Kee Kim, Sung Joo Lee, Ho Young Jung, Hoon Chung, Jeon Gue Park, Hyung Bae Jeon