Patents by Inventor Chang-Woo Chu
Chang-Woo Chu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8238648
Abstract: Provided is a camera self-calibration method that calculates the focal length of a fixed zoom lens camera from correspondence point positions between images. In the method, a cost function of the focal length is defined, and the focal length that minimizes it is obtained, so that the 3D recovery results of the correspondence points calculated from all image pairs coincide with one another. Therefore, the reliability of the calculated focal length can be easily verified, and the focal length of the camera can be stably calculated even when the positions of the input correspondence points are given inaccurately.
Type: Grant
Filed: November 27, 2007
Date of Patent: August 7, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jae Chul Kim, Chang Woo Chu, Ho Won Kim, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Bon Ki Koo
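The core idea of the abstract (a single unknown, the focal length, chosen to minimize a cost measuring disagreement between per-pair reconstructions) can be sketched as a one-dimensional minimization. This is an illustrative stand-in, not the patented method: the `cost` below uses hypothetical per-pair focal-length estimates rather than the patent's 3D-recovery consistency term.

```python
# Illustrative sketch only: treat the focal length f as the single unknown
# and pick the f that minimizes a disagreement cost across image pairs.

def cost(f, pair_estimates):
    # Hypothetical cost: squared disagreement between the candidate focal
    # length and the estimate implied by each image pair.
    return sum((f - e) ** 2 for e in pair_estimates)

def calibrate(pair_estimates, f_min=300.0, f_max=3000.0, steps=1000):
    # Coarse grid search over candidate focal lengths (in pixels).
    best_f, best_c = None, float("inf")
    for i in range(steps + 1):
        f = f_min + (f_max - f_min) * i / steps
        c = cost(f, pair_estimates)
        if c < best_c:
            best_f, best_c = f, c
    return best_f

focal = calibrate([1180.0, 1210.0, 1195.0])
```

Because the cost is a function of one variable, its minimum can also be checked against nearby candidates, which is what the abstract means by the reliability of the result being easy to verify.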
-
Publication number: 20120162215
Abstract: The present invention relates to an apparatus and method for generating a texture of a 3D reconstructed object according to the resolution level of a 2D image. The apparatus includes a 3D object reconstruction unit for extracting, from images captured from at least two areas located at different distances, information about a 3D object and information about the cameras, and then reconstructing the 3D object. A resolution calculation unit measures the size of the space area covered by one pixel of each image in a photorealistic image of the 3D object, and then calculates the resolutions of the images. A texture generation unit generates textures for the respective levels by using the images classified according to resolution level. A rendering unit selects the texture for the relevant level depending on the size of the 3D object on the screen, and then renders the selected texture.
Type: Application
Filed: December 21, 2011
Publication date: June 28, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Young-Mi CHA, Chang-Woo Chu, Il-Kyu Park, Bon-Ki Koo
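The classification step described above, measuring the space area covered by one pixel and binning images into resolution levels, resembles mipmap level selection. A minimal sketch, assuming a pinhole footprint model and a hypothetical base footprint of 1 mm per pixel (none of these constants come from the patent):

```python
import math

# Hypothetical sketch: classify images into texture resolution levels by the
# size of the scene area one pixel covers (larger footprint = coarser level).

def pixel_footprint(distance_m, focal_px):
    # Approximate width of the scene area imaged by one pixel at the given
    # distance, for a pinhole camera with focal length in pixels.
    return distance_m / focal_px

def resolution_level(footprint, base=0.001):
    # Level 0 = finest; each successive level doubles the allowed footprint.
    return max(0, round(math.log2(footprint / base)))

# Images captured at 1 m, 4 m and 16 m with the same camera fall into
# progressively coarser levels.
levels = [resolution_level(pixel_footprint(d, 1000.0)) for d in (1.0, 4.0, 16.0)]
```

At render time, the level whose footprint best matches the object's on-screen size would be selected, mirroring the rendering unit in the abstract.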
-
Publication number: 20120155745
Abstract: Disclosed herein is an apparatus and method for extracting correspondences between aerial images. The apparatus includes a line extraction unit, a line direction determination unit, a building top area extraction unit, and a correspondence extraction unit. The line extraction unit extracts lines corresponding to buildings from aerial images. The line direction determination unit defines the directions of the lines as the x, y and z axis directions based on a two-dimensional (2D) coordinate system. The building top area extraction unit rotates the lines in the x and y axis directions so that they are arranged in parallel with the horizontal and vertical directions of the 2D image, and then extracts building top areas from the resulting rectangles. The correspondence extraction unit extracts correspondences between the aerial images by comparing the locations of the building top areas extracted from the aerial images.
Type: Application
Filed: December 14, 2011
Publication date: June 21, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Il-Kyu PARK, Chang-Woo Chu, Young-Mi Cha, Bon-Ki Koo
-
Publication number: 20120020233
Abstract: An apparatus for receiving data in a communication system includes a parser configured to receive multimedia data and analyze the multimedia data into a plurality of tokens; a plurality of decoding units configured to receive the input tokens corresponding to them among the plurality of tokens and decode the multimedia data; and a scheduler configured to schedule the plurality of tokens and transmit the respective input tokens to the plurality of decoding units at precise times, wherein the plurality of decoding units decode the multimedia data using the input tokens transmitted from the scheduler and provide a multimedia service.
Type: Application
Filed: July 20, 2011
Publication date: January 26, 2012
Applicants: Industry-University Cooperation Foundation Hanyang University, Electronics and Telecommunications Research Institute
Inventors: Seung-Wook LEE, Bon-Ki Koo, Ho-Won Kim, Chang-Woo Chu, Ji-Hyung Lee, Euee-Seon Jang, Hyun-Gyu Kim, Min-Soo Park, So-Won Kim, Tae-Hee Lim
-
Publication number: 20110148875
Abstract: The present invention relates to a method and apparatus for capturing the motion of a dynamic object, which restore the appearance information of an object making a dynamic motion, together with the motion information of its main joints, from multi-viewpoint video images of a dynamic object, such as a human body, that moves through the motion of a skeletal structure, the images being acquired using multiple cameras. According to the exemplary embodiments of the present invention, it is possible to restore the motion information of the object making a dynamic motion by using only an image sensor for the visible light range, and to reproduce a multi-viewpoint image by effectively storing the restored information. Further, it is possible to restore the motion information of the dynamic object without attaching a specific marker.
Type: Application
Filed: December 16, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ho-Won KIM, Seong-Jae Lim, Han-Byul Joo, Hyun Kang, Bon-Ki Koo, Chang-Woo Chu
-
Publication number: 20110149074
Abstract: Provided are a portable multi-view image acquisition system and a multi-view image preprocessing method. The portable multi-view image acquisition system may include: a portable studio including a plurality of cameras movable up, down, left and right; and a preprocessor performing preprocessing, including subject separation, on a multi-view image photographed by the plurality of cameras.
Type: Application
Filed: December 17, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Seung Wook LEE, Ho Won KIM, Chang Woo CHU, Bon Ki KOO
-
Publication number: 20110149093
Abstract: Disclosed are an apparatus and a method for the automatic control of multiple cameras capable of supporting an effective camera view angle in a broadcast, a movie, etc. The automatic control apparatus for multiple cameras includes: a first main camera; a first camera driver controlling an operation of the first main camera; a second main camera; a second camera driver controlling an operation of the second main camera; at least one auxiliary camera; at least one third camera driver controlling an operation of the at least one auxiliary camera; and an interoperation processor changing the view angle of the at least one auxiliary camera by controlling the at least one third camera driver in accordance with a view-angle-changing reference that is updated when the view angle of the first main camera, the second main camera, or both main cameras is changed.
Type: Application
Filed: August 25, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hyun KANG, Ho Won Kim, Chang Woo Chu, Bon Ki Koo
-
Publication number: 20110148866
Abstract: Disclosed herein is a 3D urban modeling apparatus and method. The 3D urban modeling apparatus includes a calibration unit for calibrating data about the translation and rotation of at least one capturing device at the time that the input aerial images and terrestrial images were captured. A building model generation unit generates at least one 3D building model based on the aerial images and the terrestrial images to which the results of the calibration have been applied. A terrain model generation unit generates a 3D terrain model by converting an input digital elevation model into a 3D mesh. A texture extraction unit extracts textures related to the building model and the terrain model from the aerial images and the terrestrial images. A model matching unit generates a 3D urban model by matching the building model and the terrain model, which are based on the textures, with each other.
Type: Application
Filed: December 17, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Chang-Woo CHU, Ho-Won Kim, Bon-Ki Koo
-
Publication number: 20110123168
Abstract: A multimedia application system uses metadata for sensory devices. The system includes: a sensory-device engine for generating a sensory device command (SDC) for controlling the sensory devices based on sensory effect information (SEI), generated to represent sensory effects by using the sensory devices depending on video contents, user preference information (UPI) of the sensory devices, and device capability information (DCI) indicative of the reproducing capability of the sensory devices; and a sensory-device controller for controlling the sensory devices to perform sensory effect reproduction in response to the generated SDC.
Type: Application
Filed: June 19, 2009
Publication date: May 26, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Maeng Sub Cho, Jin Seo Kim, Bon Ki Koo, Ji Hyung Lee, Chang Woo Chu, Ho Won Kim, Il Kyu Park, Yoon-Seok Choi, Ji Young Park, Seong Jae Lim, Bon Woo Hwang, Jeung Chul Park, Kap Kee Kim, Sang-Kyun Kim, Yong-Soo Joo
-
Publication number: 20110046923
Abstract: An apparatus for low-complexity 3D mesh compression includes: a data analyzing unit for decomposing data of an input 3D mesh model into vertices information, property information representing a property of the 3D mesh model, and connectivity information between the vertices constituting the 3D mesh model; a mesh model quantizing unit for producing quantized vertices, property and connectivity information of the 3D mesh model by using the vertices, property and connectivity information; and a sharable vertex analysis unit for analyzing sharing information between shared vertices of the 3D mesh model. Further, the apparatus includes a data modulation unit for performing a circular DPCM prediction by using quantized values of the consecutive connectivity information of the 3D mesh model; and an entropy encoding unit for outputting coded data of the quantized vertices and property information, and the differential pulse-code modulated connectivity information, as a bitstream.
Type: Application
Filed: April 6, 2009
Publication date: February 24, 2011
Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
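The building blocks named in this abstract, quantization of coordinates and DPCM prediction on consecutive connectivity indices, can be sketched generically. This is a hedged illustration of the general quantization-plus-DPCM idea, not ETRI's codec: the bit depth, coordinate range, and the plain (non-circular) DPCM here are assumptions.

```python
# Generic sketch of quantization + DPCM as used in mesh compression pipelines.

def quantize(coords, bits=10, lo=-1.0, hi=1.0):
    # Map float coordinates in [lo, hi] to integers in [0, 2**bits - 1].
    scale = (2 ** bits - 1) / (hi - lo)
    return [round((c - lo) * scale) for c in coords]

def dpcm(values):
    # Keep the first value verbatim, then code successive differences.
    # Runs of nearby indices become small deltas that entropy-code well.
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def inverse_dpcm(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Connectivity of two triangles sharing an edge: the DPCM round-trip is lossless.
indices = [0, 1, 2, 2, 3, 0]
assert inverse_dpcm(dpcm(indices)) == indices
```

In the patent's pipeline, the DPCM residuals (and the quantized vertex and property data) would then be handed to the entropy encoder and emitted as a bitstream.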
-
Publication number: 20110037763
Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertices information (511), property information (512) representing a property of the 3D mesh model, and connectivity information (513) between the vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertices and property information of the 3D mesh model by using the vertices, property and connectivity information (511, 512, 513). Further, the apparatus for 3D mesh compression based on quantization includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information and connectivity information (511, 512, 513) by using the decision bit.
Type: Application
Filed: April 16, 2009
Publication date: February 17, 2011
Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
-
Patent number: 7812839
Abstract: Provided is a method for creating a 3D curved surface by using corresponding curves in a plurality of images. The method includes performing NURBS curve fitting with respect to one image among a plurality of images for which camera calibration has been performed and camera parameters have been extracted, by using control points designated on a curve characterizing the subject's shape. When the curve fitting is performed on a curve that commonly exists in more than two images, a 3D curve is created by using the camera calibration information, or a 3D curved surface is created by creating a plurality of 3D curves or straight lines. Therefore, a 3D curved surface model can be created easily and quickly by simplifying the complex modeling process for actual object modeling into an actual image-based modeling process.
Type: Grant
Filed: December 7, 2006
Date of Patent: October 12, 2010
Assignee: Electronics and Telecommunications Research Institute
Inventors: Chang Woo Chu, Jae Chul Kim, In Kyu Park, Bon Ki Koo
-
Publication number: 20100156935
Abstract: A method of deforming the shape of a human body model includes the steps of reorganizing human body model data into a joint-skeleton structure-based Non-Uniform Rational B-spline (NURBS) surface model, generating statistical deformation information about control parameters of the NURBS surface model based on parameters of joints and key section curves for specific motions, and deforming the shape of the human body model based on the NURBS surface model and the statistical deformation information. The human body model data includes three-dimensional (3D) human body scan data and a 3D polygon mesh.
Type: Application
Filed: June 30, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Seong Jae LIM, Ho Won Kim, Il Kyu Park, Ji Young Park, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Bon Ki Koo
-
Publication number: 20100156901
Abstract: A method of reconstructing a 3D model includes reconstructing a 3D voxel-based visual hull model using input images of an object captured by a multi-view camera; converting the 3D voxel-based visual hull model into a mesh model; and generating a view-dependent rendering of the 3D model by performing view-dependent texture mapping on the mesh model obtained through the conversion. Further, the reconstructing includes defining a 3D voxel space to be reconstructed, and excluding voxels not belonging to the object from the defined 3D voxel space.
Type: Application
Filed: June 18, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ji Young PARK, Il Kyu Park, Ho Won Kim, Jin Seo Kim, Ji Hyung Lee, Seung Wook Lee, Chang Woo Chu, Seong Jae Lim, Bon Woo Hwang, Bon Ki Koo, Gil Haeng Lee
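The voxel-exclusion step described above is the classic visual hull carving test: a voxel belongs to the object only if its projection falls inside the silhouette in every view. A minimal sketch with toy orthographic views (the projections and silhouettes below are hypothetical, not the patent's multi-view camera setup):

```python
# Illustrative visual hull carving: keep a voxel only if every view sees its
# projection inside that view's object silhouette.

def carve(voxels, views):
    # views: list of (project, inside_silhouette) function pairs.
    return [v for v in voxels
            if all(inside(project(v)) for project, inside in views)]

# Toy setup: two orthographic views whose silhouettes are the unit square,
# so the carved region is (a voxelization of) the unit cube.
views = [
    (lambda v: (v[0], v[1]), lambda p: 0 <= p[0] <= 1 and 0 <= p[1] <= 1),
    (lambda v: (v[0], v[2]), lambda p: 0 <= p[0] <= 1 and 0 <= p[1] <= 1),
]
voxels = [(0.5, 0.5, 0.5), (2.0, 0.5, 0.5)]
hull = carve(voxels, views)
```

The surviving voxels would then be converted to a mesh (e.g. by a surface extraction step) before the view-dependent texture mapping stage mentioned in the abstract.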
-
Publication number: 20100157020
Abstract: The invention provides a camera controlling and image storing apparatus for synchronized multiple image acquisition, and a method thereof, which cut down equipment costs by taking a software approach to the various demands of viewers for a broadcasting image. The apparatus includes an image acquiring unit for acquiring synchronized images from multiple cameras losslessly; one or more ingest agents for storing the acquired images and controlling the pan, tilt and zoom operations of the cameras based on the acquired images; and a central server for transmitting control commands to the ingest agents, and receiving and collectively integrating the stored images of the ingest agents.
Type: Application
Filed: October 1, 2009
Publication date: June 24, 2010
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Yoon-Seok CHOI, Jeung Chul PARK, Chang Woo CHU, Ji Hyung Lee, Jin Seo KIM, Seung Wook LEE, Ho Won KIM, Bon Woo HWANG, Bon Ki KOO, Gil haeng LEE
-
Publication number: 20100158372
Abstract: Disclosed herein is an apparatus and method for separating a foreground and a background. The apparatus includes a background model creation unit for creating a code book including a plurality of code words in order to separate the foreground and the background, and a foreground/background separation unit for separating the foreground and the background using the created code book. The method includes the steps of creating a code book including a plurality of code words in order to separate the foreground and the background, rearranging the code words of the created code book on the basis of the number of sample data that belong to each of the code words, and separating the foreground and the background using the code book.
Type: Application
Filed: August 5, 2009
Publication date: June 24, 2010
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kap Kee KIM, Bon Woo Hwang, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Ho Won Kim, Bon Ki Koo, Gil Haeng Lee
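The code book approach described above can be sketched for a single pixel: background samples build code words, code words are reordered by how many samples they absorbed (the "rearranging" step in the abstract), and a new sample matching no code word is foreground. This is a simplified illustration, scalar intensities with a fixed tolerance rather than the patent's actual code word model.

```python
# Simplified per-pixel codebook background model (hypothetical parameters).

class CodeWord:
    def __init__(self, value, tol=10.0):
        self.value, self.tol, self.count = value, tol, 1

    def matches(self, sample):
        return abs(sample - self.value) <= self.tol

def build_codebook(samples, tol=10.0):
    book = []
    for s in samples:
        for w in book:
            if w.matches(s):
                w.count += 1
                break
        else:
            book.append(CodeWord(s, tol))
    # Rearrange code words by sample count so frequent background values
    # are tested first, speeding up the separation step.
    book.sort(key=lambda w: -w.count)
    return book

def is_foreground(book, sample):
    return not any(w.matches(sample) for w in book)

# Mostly-stable pixel around intensity 100, with one outlier sample at 180.
book = build_codebook([100, 102, 99, 101, 180])
```

A sample near 100 is classified as background; a sample far from every code word is foreground.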
-
Publication number: 20100158354
Abstract: The present invention relates to a method of creating an animatable digital clone that includes receiving input multi-view images of an actor captured by at least two cameras and reconstructing a three-dimensional appearance therefrom, accepting shape information selectively based on a probability of photo-consistency in the input multi-view images obtained from the reconstruction, and transferring a mesh topology of a reference human body model onto the shape of the actor obtained from the reconstruction. The method further includes generating an initial human body model of the actor via transfer of the mesh topology utilizing sectional shape information of the actor's joints, and generating a genuine human body model of the actor by learning the genuine behavioral characteristics of the actor, applying the initial human body model to multi-view posture learning images in which the actor's performance of a predefined motion is recorded.
Type: Application
Filed: November 5, 2009
Publication date: June 24, 2010
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ho Won KIM, Seong Jae Lim, Bo Youn Kim, Il Kyu Park, Ji Young Park, Bon Ki Koo, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Young Jik Lee
-
Publication number: 20100157022
Abstract: A method of implementing a motion control camera effect includes: inputting images of an object captured at one or more points of view; inputting a control parameter containing a motion control camera effect to be applied to the input images, a target time of the motion control camera effect, and a reproducing speed; extracting frames of the target time from the input images; processing the frames using software based on the control parameter; and outputting the processed frames at the reproducing speed. The motion control camera effect includes at least one of a motion control camera effect for a fixed-viewpoint still object, a fixed-viewpoint moving object, a free-viewpoint still object, a free-viewpoint moving object, and a dual motion control camera effect.
Type: Application
Filed: June 30, 2009
Publication date: June 24, 2010
Applicant: Electronics and Telecommunications Research Institute
Inventors: Yoon-Seok Choi, Jeung Chul Park, Chang Woo Chu, Ji Hyung Lee, Bon Ki Koo, Jin Seo Kim, Seung Wook Lee, Ho Won Kim, Bon Woo Hwang, Young Jik Lee
-
Patent number: 7684614
Abstract: A method for modeling the three-dimensional shape of an object using a level set solution of a partial differential equation derived from the Helmholtz reciprocity condition is provided. The method includes the steps of: a) inputting an image pair satisfying the Helmholtz reciprocity condition; b) performing an optical correction and simultaneously performing a geometric correction; c) performing camera selection to select cameras capable of seeing a point (X, Y, Z), and defining and calculating a cost function based on the Helmholtz reciprocity condition; d) calculating a speed function of the PDE that minimizes the cost function obtained in step c); and e) generating a three-dimensional mesh model from the set of points constituting the object surface provided by step d), and deciding a final three-dimensional mesh model by comparing cost function values.
Type: Grant
Filed: August 1, 2006
Date of Patent: March 23, 2010
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jae Chul Kim, Chang Woo Chu, Bon Ki Koo
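For context, the Helmholtz reciprocity constraint on which this patent (and the following one) builds is usually written as follows in the Helmholtz stereopsis literature; this is the standard formulation, not necessarily the exact cost function used in the patent. For a surface point $P$ with unit normal $n$, camera centers $o_l, o_r$, and reciprocal intensity measurements $i_l, i_r$:

$$
\left( i_l \, \frac{\hat v_l}{\lVert o_l - P \rVert^2} \;-\; i_r \, \frac{\hat v_r}{\lVert o_r - P \rVert^2} \right) \cdot n \;=\; 0,
\qquad \hat v_k = \frac{o_k - P}{\lVert o_k - P \rVert}.
$$

Because swapping the light source and camera makes the BRDF terms cancel, this constraint holds regardless of surface reflectance; a cost function penalizing its violation can then drive the level set evolution via the speed function of step d).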
-
Patent number: 7623701
Abstract: Provided is an apparatus for reconstructing the 3D shape of an object by using Helmholtz stereopsis. The apparatus includes: an image capturer for capturing two pairs of images satisfying the Helmholtz reciprocity condition; a preprocessor for estimating the optical and geometrical characteristics of the camera used for the image capturing; and a geometrical data reconstructor for converting the Helmholtz restrictions generated from the respective pairs of images into partial differential equations on the basis of the estimated optical and geometrical characteristics, and reconstructing 3D surface data of an object with depth discontinuity at the boundaries between regions on the basis of the partial differential equations.
Type: Grant
Filed: December 6, 2005
Date of Patent: November 24, 2009
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jae-Chul Kim, Chang-Woo Chu, Bon-Ki Koo, Byoung-Tae Choi, Hyun-Bin Kim