Patents by Inventor Bon Woo Hwang

Bon Woo Hwang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150228081
    Abstract: Provided is a method and apparatus for reconstructing a three-dimensional (3D) face based on a stereo camera, the method including: acquiring n images of a target by controlling a plurality of stereo cameras in response to an image acquirement request, wherein n denotes a natural number; extracting n face regions from the n images, respectively; and reconstructing a viewpoint-based face image based on the n face regions.
    Type: Application
    Filed: January 30, 2015
    Publication date: August 13, 2015
    Inventors: Kap Kee KIM, Seung Uk YOON, Bon Woo HWANG, Seong Jae LIM, Hye Ryeong JUN, Jin Sung CHOI, Bon Ki KOO
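    The abstract above outlines an n-view pipeline: capture n images, extract n face regions, then reconstruct a viewpoint-based face image. As a minimal sketch of just the face-region extraction step, the snippet below uses OpenCV's stock Haar cascade as a stand-in detector; the detector choice and the function name are illustrative assumptions, not details taken from the application.

      # Sketch: extract one face region from each of n camera views.
      import cv2

      def extract_face_regions(images):
          """Return a cropped face region (or None) for each input BGR image."""
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          regions = []
          for img in images:
              gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
              faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
              if len(faces) == 0:
                  regions.append(None)
                  continue
              # Keep the largest detection as this view's face region.
              x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
              regions.append(img[y:y + h, x:x + w])
          return regions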
  • Publication number: 20150172637
    Abstract: Disclosed are an apparatus and a method for generating three-dimensional output data, in which the appearance or face of a user is easily restored in a three-dimensional manner by using one or a plurality of cameras including a depth sensor, a three-dimensional avatar for an individual, which is produced through three-dimensional model transition, and data capable of being three-dimensionally output, which is generated based on the three-dimensional avatar for an individual. The apparatus includes an acquisition unit that acquires a three-dimensional model based on depth information and a color image from at least one point of view, a selection unit that selects at least one of three-dimensional template models, and a generation unit that modifies at least one of a plurality of three-dimensional template models selected by the selection unit and generates three-dimensional output data based on the three-dimensional model acquired by the acquisition unit.
    Type: Application
    Filed: December 12, 2014
    Publication date: June 18, 2015
    Inventors: Seung-Uk YOON, Bon-Woo HWANG, Seong-Jae LIM, Kap-Kee KIM, Hye-Ryeong JUN, Jin-Sung CHOI, Bon-Ki KOO
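    The generation unit described above modifies a selected 3-D template model to match the model acquired from depth and color data. A very rough stand-in for that modification step is a centroid-and-scale alignment of the two vertex sets, sketched below; the function name and the choice of alignment are assumptions, far simpler than the template transition the application describes.

      # Sketch: coarse alignment of a selected 3-D template model to an
      # acquired depth-based model (centroid translation + uniform scale only).
      import numpy as np

      def coarse_fit(template_pts, acquired_pts):
          """Both inputs are (N, 3) arrays of vertex positions."""
          t_c = template_pts.mean(axis=0)
          a_c = acquired_pts.mean(axis=0)
          # Match overall size via RMS distance from the centroid.
          t_scale = np.sqrt(((template_pts - t_c) ** 2).sum(axis=1).mean())
          a_scale = np.sqrt(((acquired_pts - a_c) ** 2).sum(axis=1).mean())
          return (template_pts - t_c) * (a_scale / t_scale) + a_c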
  • Patent number: 8902305
    Abstract: A system for managing face data includes a global face capturing unit configured to capture a global face image; and a global face data generation unit configured to obtain shape information and texture information of global face data, and generate the global face data. Further, the system includes a local face capturing unit configured to capture a plurality of local face images; and a global face posture extraction unit configured to estimate a position and a direction of the face of a captured user. Furthermore, the system includes a local capturing device posture extraction unit configured to extract posture information of the local face capturing unit; and a local face data generation unit configured to generate texture information and shape information, and generate local face data.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: December 2, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Bon Woo Hwang, Kap Kee Kim, Seung-Uk Yoon, Bonki Koo, Ji Hyung Lee
  • Patent number: 8830269
    Abstract: A method of deforming a shape of a human body model includes the steps of reorganizing human body model data into a joint-skeleton structure-based Non-Uniform Rational B-spline (NURBS) surface model, generating statistical deformation information about control parameters of the NURBS surface model based on parameters of joints and key section curves for specific motions, and deforming the shape of the human body model based on the NURBS surface model and the statistical deformation information. The human body model data includes three-dimensional (3D) human body scan data and a 3D polygon mesh.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: September 9, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Jae Lim, Ho Won Kim, Il Kyu Park, Ji Young Park, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Bon Ki Koo
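    The key section curves above are NURBS entities. As a reminder of how a rational B-spline point is evaluated, the sketch below runs an ordinary B-spline over homogeneous (weighted) control points and divides the weight back out; the control points, weights, and knot vector are invented toy data, not values from the patent.

      # Sketch: evaluate a NURBS curve via a B-spline in homogeneous coordinates.
      import numpy as np
      from scipy.interpolate import BSpline

      degree = 2
      ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # control points
      w = np.array([1.0, 2.0, 2.0, 1.0])                                 # weights
      knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)             # clamped knots

      homog = np.column_stack([ctrl * w[:, None], w])   # (x*w, y*w, w)
      spline = BSpline(knots, homog, degree)
      pts = spline(np.linspace(0.0, 1.0, 5))
      curve = pts[:, :2] / pts[:, 2:3]                  # back to Cartesian coordinates
      print(curve)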
  • Patent number: 8818077
    Abstract: The present invention relates to a stereo image matching apparatus and method. The stereo matching apparatus includes a window image extraction unit for extracting window images, each having a predetermined size around a selected pixel, for individual pixels of images that constitute stereo images. A local support-area determination unit extracts a similarity mask having similarities equal to or greater than a threshold and a local support-area mask having neighbor connections to a center pixel of the similarity mask, from each of similarity images generated depending on differences in similarity between pixels of the window images. A similarity extraction unit calculates a local support-area similarity from a sum of similarities of a local support-area. A disparity selection unit selects a pair of window images for which the local support-area similarity is maximized, from among the window images, and then determines a disparity for the stereo images.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: August 26, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Bon-Woo Hwang
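    A minimal sketch of the matching rule described above, for a single pixel: compute a per-pixel similarity image between the two windows, threshold it into a similarity mask, keep only the connected region containing the center pixel (the local support-area), sum the similarities there, and select the disparity that maximizes that sum. The similarity measure (one minus a normalized absolute grayscale difference), window size, and function names are assumptions for illustration.

      # Sketch: local support-area similarity and disparity selection.
      import numpy as np
      from scipy.ndimage import label

      def support_area_similarity(win_l, win_r, threshold=0.8):
          """win_l, win_r: equal-sized float windows centred on the candidate pixels."""
          sim = 1.0 - np.abs(win_l - win_r) / 255.0      # per-pixel similarity image
          mask = sim >= threshold                        # similarity mask
          labels, _ = label(mask)                        # 4-connected regions
          cy, cx = sim.shape[0] // 2, sim.shape[1] // 2
          if not mask[cy, cx]:
              return 0.0
          support = labels == labels[cy, cx]             # region containing the centre
          return float(sim[support].sum())               # local support-area similarity

      def best_disparity(left, right, y, x, half=5, max_disp=32):
          """Pick the disparity whose window pair maximises the support-area similarity."""
          win_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
          scores = []
          for d in range(max_disp + 1):
              if x - d - half < 0:
                  break
              win_r = right[y - half:y + half + 1,
                            x - d - half:x - d + half + 1].astype(float)
              scores.append(support_area_similarity(win_l, win_r))
          return int(np.argmax(scores)) if scores else 0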
  • Publication number: 20140204089
    Abstract: Disclosed is a method and apparatus for creating a three-dimensional (3D) montage. The apparatus for creating a 3D montage may include an image information extraction unit to extract image information from a face image to be reconstructed, using a face area based on statistical feature information and a feature vector, a 3D unique face reconstruction unit to reconstruct a 3D unique face model by fitting a 3D standard face model to face images of each view for the face image and feature information of each part for the face area, a 3D montage model generation unit to generate a 3D montage model by combining the reconstructed 3D unique face model with 3D face expression model information and 3D decoration model information, and a montage image generation unit to generate a montage image by projecting the generated 3D montage model from each view.
    Type: Application
    Filed: January 17, 2014
    Publication date: July 24, 2014
    Inventors: Seong Jae LIM, Kap Kee KIM, Seung Uk YOON, Bon Woo HWANG, Hye Ryeong JUN, Jin Sung CHOI, Bon Ki KOO
  • Publication number: 20140192045
    Abstract: Provided is a method and apparatus for generating a three-dimensional (3D) caricature. The apparatus for generating a 3D caricature may include a 3D face data generation unit to generate 3D face data of a user corresponding to a shape and a texture of a face of the user, a 3D unique face model generation unit to generate a 3D unique face model using a shape and a texture of a unique face based on the 3D face data and a reference face, and a 3D caricature generation unit to generate a 3D caricature using the 3D unique face model and a caricature base face model.
    Type: Application
    Filed: January 6, 2014
    Publication date: July 10, 2014
    Inventors: Bon Woo HWANG, Kap Kee KIM, Seong Jae LIM, Seung Uk YOON, Hye Ryeong JUN, Bon Ki KOO, Jin Sung CHOI
  • Publication number: 20140168216
    Abstract: A 3D avatar output device and method are disclosed. The 3D avatar output device of a vending machine type may include an input data receiving unit to receive input data including at least one of user information, a 3D avatar theme, and a 3D avatar output form; an image obtaining unit to obtain an image of a user through a camera included in the 3D avatar output device; a restoration model generation unit to generate a restoration model by extracting a facial area from the obtained image; a unique model generation unit to generate a unique model of the user based on the input data and the restoration model; and a 3D avatar output unit to generate a 3D avatar corresponding to the unique model and output the 3D avatar according to the 3D avatar output form.
    Type: Application
    Filed: December 11, 2013
    Publication date: June 19, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee KIM, Bon Woo HWANG, Seung Uk YOON, Seong Jae LIM, Hye Ryeong JUN, Jin Sung CHOI, Bon Ki KOO
  • Patent number: 8712146
    Abstract: The present invention relates to a method of creating an animatable digital clone that includes receiving input multi-view images of an actor captured by at least two cameras and reconstructing a three-dimensional appearance therefrom, accepting shape information selectively based on a probability of photo-consistency in the input multi-view images obtained from the reconstruction, and transferring a mesh topology of a reference human body model onto a shape of the actor obtained from the reconstruction. The method further includes generating an initial human body model of the actor via transfer of the mesh topology utilizing sectional shape information of the actor's joints, and generating a genuine human body model of the actor from learning genuine behavioral characteristics of the actor by applying the initial human body model to multi-view posture learning images where performance of a predefined motion by the actor is recorded.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: April 29, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Won Kim, Seong Jae Lim, Bo Youn Kim, Il Kyu Park, Ji Young Park, Bon Ki Koo, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Young Jik Lee
  • Patent number: 8538150
    Abstract: Embodiments of the present invention provide methods and apparatuses for segmenting multi-view images into foreground and background based on a codebook. For example, in some embodiments, an apparatus is provided that includes: (a) a background model generation unit for extracting a codebook from multi-view background images and generating codeword mapping tables operating in conjunction with the codebook; and (b) a foreground and background segmentation unit for segmenting multi-view images into foreground and background using the codebook and the codeword mapping tables.
    Type: Grant
    Filed: May 27, 2010
    Date of Patent: September 17, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee Kim, Bon Woo Hwang, Bon Ki Koo
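    A toy version of the codebook idea above: build one mean-color code word per pixel from background frames of a view, then mark a pixel as foreground when no code word matches within a tolerance. Real codebooks keep several code words per pixel plus brightness bounds, along with the codeword mapping tables the abstract mentions; the single-codeword model and all names below are simplifying assumptions.

      # Sketch: a toy per-pixel codebook for foreground/background segmentation.
      import numpy as np

      def build_codebook(background_frames):
          """background_frames: (F, H, W, 3) uint8 stack of background images for
          one view. Returns an (H, W, 1, 3) codebook with one mean-colour code word."""
          return background_frames.astype(np.float32).mean(axis=0)[:, :, None, :]

      def segment(image, codebook, tol=20.0):
          """Return a boolean foreground mask for one (H, W, 3) image."""
          img = image.astype(np.float32)[:, :, None, :]      # (H, W, 1, 3)
          dist = np.linalg.norm(img - codebook, axis=-1)     # distance to each code word
          background = (dist <= tol).any(axis=-1)            # any code word close enough
          return ~background                                 # foreground = no match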
  • Patent number: 8462149
    Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertices information (511), property information (512) representing property of the 3D mesh model, and connectivity information (513) between vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertices and property information of the 3D mesh model by using the vertices, property and connectivity information (511, 512, 513). Further, the apparatus for 3D mesh compression based on quantization includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information and connectivity information (511, 512, 513) by using the decision bit.
    Type: Grant
    Filed: April 16, 2009
    Date of Patent: June 11, 2013
    Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
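    The mesh model quantizing unit above maps floating-point vertex and property values onto a fixed number of bits before encoding. The snippet below shows only that generic uniform quantization step, not the decision-bit encoding; the bit budget and function names are assumptions.

      # Sketch: uniform quantisation of vertex coordinates to a fixed bit budget.
      import numpy as np

      def quantize_vertices(vertices, bits=12):
          """vertices: (N, 3) float array. Returns integer codes plus the info
          needed to dequantise (per-axis minimum and step size)."""
          v_min = vertices.min(axis=0)
          v_max = vertices.max(axis=0)
          step = (v_max - v_min) / (2 ** bits - 1)
          step[step == 0] = 1.0                            # guard flat axes
          codes = np.round((vertices - v_min) / step).astype(np.uint32)
          return codes, v_min, step

      def dequantize_vertices(codes, v_min, step):
          return codes.astype(np.float64) * step + v_min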
  • Patent number: 8417034
    Abstract: Disclosed herein is an apparatus and method for separating a foreground and a background. The apparatus includes a background model creation unit for creating a code book including a plurality of code words in order to separate the foreground and the background, and a foreground/background separation unit for separating the foreground and the background using the created code book. The method includes the steps of creating a code book including a plurality of code words in order to separate the foreground and the background, rearranging the code words of the created code book on the basis of the number of sample data that belong to each of the code words, and separating the foreground and the background using the code book.
    Type: Grant
    Filed: August 5, 2009
    Date of Patent: April 9, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee Kim, Bon Woo Hwang, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Ho Won Kim, Bon Ki Koo, Gil haeng Lee
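    The rearranging step in the method above sorts each pixel's code words by how many background samples they absorbed, so the most likely match is tested first during segmentation. A minimal sketch, assuming each code word is a dict with a 'count' field (the field name and data layout are assumptions):

      # Sketch: reorder code words so frequently matched ones are tested first.
      def rearrange_codewords(codebook):
          """codebook: dict mapping a pixel key to a list of code-word dicts,
          each carrying a 'count' of the background samples it absorbed."""
          for pixel, words in codebook.items():
              codebook[pixel] = sorted(words, key=lambda cw: cw["count"], reverse=True)
          return codebook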
  • Publication number: 20130057656
    Abstract: A system for managing face data includes a global face capturing unit configured to capture a global face image; and a global face data generation unit configured to obtain shape information and texture information of global face data, and generate the global face data. Further, the system includes a local face capturing unit configured to capture a plurality of local face images; and a global face posture extraction unit configured to estimate a position and a direction of the face of a captured user. Furthermore, the system includes a local capturing device posture extraction unit configured to extract posture information of the local face capturing unit; and a local face data generation unit configured to generate texture information and shape information, and generate local face data.
    Type: Application
    Filed: August 3, 2012
    Publication date: March 7, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Bon Woo HWANG, Kap Kee Kim, Seung-Uk Yoon, Bonki Koo, Ji Hyung Lee
  • Publication number: 20130050434
    Abstract: The present invention provides a local multi-resolution 3-D face-inherent model generation apparatus, including one or more 3-D facial model generation cameras for photographing a face of an object at various angles in order to obtain one or more 3-D face models, a 3-D face-inherent model generation unit for generating a 3-D face-inherent model by composing the one or more 3-D face models, a local photographing camera for photographing a local part of the face of the object, a control unit for controlling the position of the local photographing camera on the 3-D face-inherent model, and a local multi-resolution 3-D face-inherent model generation unit for generating a local multi-resolution face-inherent model by composing an image captured by the local photographing camera and the 3-D face-inherent model, as well as a local multi-resolution 3-D face-inherent model generation method using the local multi-resolution 3-D face-inherent model generation apparatus, and a skin management system.
    Type: Application
    Filed: July 31, 2012
    Publication date: February 28, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee KIM, Seung Uk Yoon, Bon Woo Hwang, Ji Hyung Lee, Bon Ki Koo
  • Publication number: 20120154393
    Abstract: Disclosed herein are an apparatus and method for creating animation by capturing the motions of a non-rigid object. The apparatus includes a geometry mesh reconstruction unit, a motion capture unit, and a content creation unit. The geometry mesh reconstruction unit receives moving images captured by a plurality of cameras, and generates a reconstruction mesh set for each frame. The motion capture unit generates mesh graph sets for the reconstruction mesh set and generates motion data, including motion information, using the mesh graph sets. The content creation unit creates three-dimensional (3D) content for a non-rigid object by generating a final transformation mesh set, having a topology similar to that of the reconstruction mesh set, using the motion data.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji-Hyung LEE, Bon-Ki Koo, Yoon-Seok Choi, Jeung-Chul Park, Do-Hyung Kim, Bon-Woo Hwang, Kap-Kee Kim, Seong-Jae Lim, Han-Byul Joo, Seung-Uk Yoon
  • Publication number: 20120155747
    Abstract: The present invention relates to a stereo image matching apparatus and method. The stereo matching apparatus includes a window image extraction unit for extracting window images, each having a predetermined size around a selected pixel, for individual pixels of images that constitute stereo images. A local support-area determination unit extracts a similarity mask having similarities equal to or greater than a threshold and a local support-area mask having neighbor connections to a center pixel of the similarity mask, from each of similarity images generated depending on differences in similarity between pixels of the window images. A similarity extraction unit calculates a local support-area similarity from a sum of similarities of a local support-area. A disparity selection unit selects a pair of window images for which the local support-area similarity is maximized, from among the window images, and then determines a disparity for the stereo images.
    Type: Application
    Filed: December 15, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventor: Bon-Woo HWANG
  • Publication number: 20110234763
    Abstract: An apparatus for transmitting a multi-view stereoscopic video includes: a control unit configured to receive a group of stereoscopic images taken by a plurality of stereoscopic imaging devices; a generation unit configured to select at least one stereoscopic frame from stereoscopic frames of the received group of stereoscopic images, arrange the selected stereoscopic frames successively, and generate a multi-view stereoscopic video; an encoding unit configured to encode the generated multi-view stereoscopic video; and a transmission unit configured to transmit the encoded multi-view stereoscopic video through a transmission network.
    Type: Application
    Filed: October 21, 2010
    Publication date: September 29, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Bon-Woo HWANG, Kap-Kee KIM, Bonki KOO
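    The generation unit above selects frames from the received stereoscopic streams and arranges the selected frames successively into one multi-view stereoscopic video before encoding. One plausible reading of "arranges successively" is time-multiplexed interleaving, sketched below; the interleaving order, types, and names are assumptions.

      # Sketch: interleave one frame from each selected stereoscopic camera per
      # time step, producing a single successive multi-view frame sequence.
      from typing import List, Sequence

      def arrange_multiview(streams: Sequence[Sequence[bytes]],
                            selected: Sequence[int]) -> List[bytes]:
          """streams[i][t] is the stereoscopic frame of camera i at time t."""
          length = min(len(streams[i]) for i in selected)
          arranged = []
          for t in range(length):
              for i in selected:
                  arranged.append(streams[i][t])
          return arranged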
  • Publication number: 20110142343
    Abstract: An apparatus for segmenting multi-view images into foreground and background based on a codebook includes: a background model generation unit for extracting a codebook from multi-view background images and generating codeword mapping tables operating in conjunction with the codebook; and a foreground and background segmentation unit for segmenting multi-view images into foreground and background using the codebook and the codeword mapping tables.
    Type: Application
    Filed: May 27, 2010
    Publication date: June 16, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee KIM, Bon Woo HWANG, Bon Ki KOO
  • Publication number: 20110123168
    Abstract: A multimedia application system uses metadata for sensory devices. The system includes: a sensory-device engine for generating a sensory device command (SDC) for controlling the sensory devices based on sensory effect information (SEI) generated to represent sensory effects by using the sensory devices depending on video contents, user preference information (UPI) of the sensory devices and device capability information (DCI) indicative of reproducing capability of the sensory devices; and a sensory-device controller for controlling sensory devices to perform sensory effect reproduction in response to the generated SDC.
    Type: Application
    Filed: June 19, 2009
    Publication date: May 26, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Maeng Sub Cho, Jin Seo Kim, Bon Ki Koo, Ji Hyung Lee, Chang Woo Chu, Ho Won Kim, Il Kyu Park, Yoon-Seok Choi, Ji Young Park, Seong Jae Lim, Bon Woo Hwang, Jeung Chul Park, Kap Kee Kim, Sang-Kyun Kim, Yong-Soo Joo
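    The sensory-device engine above combines three metadata inputs: sensory effect information (SEI) authored with the content, user preference information (UPI), and device capability information (DCI). A minimal sketch of one way such a command could be derived is to clamp the authored intensity by the user's and device's limits; all field names below are invented placeholders, far simpler than the actual metadata the abstract refers to.

      # Sketch: derive a sensory device command (SDC) by clamping the authored
      # effect intensity to user preference (UPI) and device capability (DCI).
      from dataclasses import dataclass

      @dataclass
      class SensoryEffect:        # SEI, authored alongside the video content
          device_type: str
          intensity: float        # 0.0 .. 1.0 as authored

      @dataclass
      class DeviceCapability:     # DCI
          max_intensity: float

      @dataclass
      class UserPreference:       # UPI
          max_intensity: float
          enabled: bool = True

      def make_command(sei: SensoryEffect, dci: DeviceCapability, upi: UserPreference):
          """Return a (device_type, intensity) command, or None if the user opted out."""
          if not upi.enabled:
              return None
          intensity = min(sei.intensity, upi.max_intensity, dci.max_intensity)
          return (sei.device_type, intensity)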
  • Publication number: 20110046923
    Abstract: An apparatus for compressing low-complexity 3D mesh, includes: a data analyzing unit for decomposing data of an input 3D mesh model into vertices information, property information representing property of the 3D mesh model, and connectivity information between vertices constituting the 3D mesh model; a mesh model quantizing unit for producing quantized vertices, property and connectivity information of the 3D mesh model by using the vertices, property and connectivity information; and a sharable vertex analysis unit for analyzing sharing information between shared vertices of the 3D mesh model. Further, the apparatus includes a data modulation unit for performing a circular DPCM prediction by using quantized values of the consecutive connectivity information of the 3D mesh model; and an entropy encoding unit for outputting coded data of the quantized vertices and property information, and differential pulse-code modulated connectivity information as a bitstream.
    Type: Application
    Filed: April 6, 2009
    Publication date: February 24, 2011
    Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
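    The data modulation unit above performs a circular DPCM prediction over the quantized connectivity information. A minimal interpretation is to take successive differences modulo the quantization range so every difference fits the same bit width; the modulus, array layout, and function names below are assumptions.

      # Sketch: circular DPCM over quantised connectivity indices.
      import numpy as np

      def circular_dpcm_encode(indices, modulus):
          """indices: 1-D array of quantised connectivity values in [0, modulus)."""
          idx = np.asarray(indices, dtype=np.int64)
          diffs = np.empty_like(idx)
          diffs[0] = idx[0]
          diffs[1:] = (idx[1:] - idx[:-1]) % modulus     # differences stay in range
          return diffs

      def circular_dpcm_decode(diffs, modulus):
          return np.cumsum(np.asarray(diffs, dtype=np.int64)) % modulus

      # Round trip on toy data:
      codes = np.array([3, 7, 2, 2, 5])
      enc = circular_dpcm_encode(codes, modulus=8)
      assert np.array_equal(circular_dpcm_decode(enc, modulus=8), codes)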