Patents by Inventor Chang-Woo Chu

Chang-Woo Chu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9147249
    Abstract: Disclosed is a method of calibrating a depth image based on a relationship between a depth sensor and a color camera. An apparatus for calibrating a depth image may include a three-dimensional (3D) point determiner to determine a 3D point of a camera image and a 3D point of a depth image captured simultaneously with the camera image, a calibration information determiner to determine calibration information for calibrating an error of a depth image captured by the depth sensor and geometric information between the depth sensor and the color camera, using the 3D point of the camera image and the 3D point of the depth image, and a depth image calibrator to calibrate the depth image based on the calibration information and the 3D point of the depth image.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: September 29, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jae Hean Kim, Chang Woo Chu, Il Kyu Park, Young Mi Cha, Jin Sung Choi, Bon Ki Koo
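The calibration in the abstract above determines geometric information between the depth sensor and the color camera from pairs of corresponding 3D points. A standard way to recover such a rigid transform from point pairs is the Kabsch least-squares alignment, sketched below. This is an illustrative stand-in, not the patent's actual procedure; it omits the depth-error correction the abstract also covers, and all function names are assumptions.

```python
import numpy as np

def estimate_rigid_transform(depth_pts, camera_pts):
    """Estimate the rotation R and translation t aligning depth-sensor
    points to color-camera points (Kabsch / least-squares alignment)."""
    cd, cc = depth_pts.mean(axis=0), camera_pts.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (depth_pts - cd).T @ (camera_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cd
    return R, t

def calibrate_points(depth_pts, R, t):
    """Map depth-sensor 3D points into the color-camera frame."""
    return depth_pts @ R.T + t

# Synthetic check: a known rotation about z plus a translation.
rng = np.random.default_rng(4)
pts = rng.uniform(-1, 1, (50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.5])
obs = pts @ R_true.T + t_true
R, t = estimate_rigid_transform(pts, obs)
err = np.abs(calibrate_points(pts, R, t) - obs).max()
```

With noiseless correspondences the transform is recovered exactly up to floating-point error; with real sensor data the same formula gives the least-squares fit.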
  • Publication number: 20150213644
    Abstract: Provided are a multi-primitive fitting method, including acquiring point cloud data by collecting data for each input point, obtaining a segment for the points using the point cloud data, and performing primitive fitting using the point cloud data and the data of the points included in the segment, and a multi-primitive fitting device that performs the method.
    Type: Application
    Filed: January 28, 2015
    Publication date: July 30, 2015
    Inventors: Young Mi CHA, Chang Woo CHU, Jae Hean KIM
  • Patent number: 9076219
    Abstract: A space segmentation method for 3D point clouds is disclosed. A space segmentation method for 3D point clouds includes equally segmenting a space of the 3D point clouds into a plurality of grid cells; establishing a base plane corresponding to a ground part of the space of the 3D point clouds; accumulating points of all grid cells located perpendicular to the base plane in a grid cell of the base plane; and segmenting the grid cell in which the points are accumulated into an object part and a ground part according to the number of accumulated points.
    Type: Grant
    Filed: August 31, 2012
    Date of Patent: July 7, 2015
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Youngmi Cha, Chang Woo Chu, Il Kyu Park, Ji Hyung Lee, Bonki Koo
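The segmentation steps in the abstract above (equal grid partition, a ground-aligned base plane, per-column point accumulation, and a count threshold) can be sketched roughly as follows. The cell size, threshold value, and function names are illustrative assumptions, not taken from the patent; the base plane is assumed to be z = 0.

```python
import numpy as np

def segment_ground_objects(points, cell_size=1.0, count_threshold=50):
    """Label each occupied base-plane cell as 'ground' or 'object'.

    points: (N, 3) array. With the base plane at z = 0, all points in a
    vertical column are accumulated into one (x, y) grid cell.
    """
    # Map each point to its (x, y) grid cell on the base plane.
    cells = np.floor(points[:, :2] / cell_size).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    labels = {}
    for (cx, cy), n in zip(uniq, counts):
        # Columns accumulating many points are treated as objects;
        # sparse columns are treated as ground.
        labels[(cx, cy)] = "object" if n >= count_threshold else "ground"
    return labels

# A sparse flat ground patch plus a dense vertical "pole" in one cell.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 4, (40, 2)), np.zeros(40)]
pole = np.c_[np.full((200, 2), 0.5), rng.uniform(0, 5, 200)]
labels = segment_ground_objects(np.vstack([ground, pole]))
```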
  • Patent number: 8963943
    Abstract: Disclosed herein is a 3D urban modeling apparatus and method. The 3D urban modeling apparatus includes a calibration unit for calibrating data about a translation and a rotation of at least one capturing device at a time that input aerial images and terrestrial images were captured. A building model generation unit generates at least one 3D building model based on the aerial images and the terrestrial images to which results of the calibration have been applied. A terrain model generation unit generates a 3D terrain model by converting an input digital elevation model into a 3D mesh. A texture extraction unit extracts textures related to the building model and the terrain model from the aerial images and the terrestrial images. A model matching unit generates a 3D urban model by matching the building model with the terrain model, which are based on the textures, with each other.
    Type: Grant
    Filed: December 17, 2010
    Date of Patent: February 24, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Chang-Woo Chu, Ho-Won Kim, Bon-Ki Koo
  • Publication number: 20140334675
    Abstract: Provided is an apparatus and method for extracting a movement path, the movement path extracting apparatus including an image receiver to receive an image from a camera group in which a mutual positional relationship among cameras is fixed, a geographic coordinates receiver to receive geographic coordinates of a moving object on which the camera group is fixed, and a movement path extractor to extract a movement path of the camera group based on a direction and a position of a reference camera of the camera group using the image and the geographic coordinates.
    Type: Application
    Filed: May 6, 2014
    Publication date: November 13, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chang Woo CHU, Jae Hean KIM, Il Kyu PARK, Young Mi CHA, Jin Sung CHOI, Bon Ki KOO
  • Patent number: 8830269
    Abstract: A method of deforming a shape of a human body model includes the steps of reorganizing human body model data into a joint-skeleton structure-based Non-Uniform Rational B-spline (NURBS) surface model, generating statistical deformation information about control parameters of the NURBS surface model based on parameters of joints and key section curves for specific motions, and deforming the shape of the human body model based on the NURBS surface model and the statistical deformation information. The human body model data includes three-dimensional (3D) human body scan data and a 3D polygon mesh.
    Type: Grant
    Filed: June 30, 2009
    Date of Patent: September 9, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Jae Lim, Ho Won Kim, Il Kyu Park, Ji Young Park, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Bon Ki Koo
  • Publication number: 20140245231
    Abstract: A primitive fitting apparatus is provided. The primitive fitting apparatus may include a selecting unit to receive, from a user, a selection of points from a point cloud to be used in fitting a primitive the user desires to fit, an identifying unit to receive a selection of the primitive from the user and to identify the selected primitive, and a fitting unit to fit the primitive to the points, using the points and the primitive.
    Type: Application
    Filed: February 28, 2014
    Publication date: August 28, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Mi CHA, Chang Woo CHU, Jae Hean KIM, Il Kyu PARK, Bon Ki KOO, Jin Sung CHOI
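Fitting an identified primitive to user-selected points, as in the abstract above, is commonly done by least squares. A plane is one illustrative primitive type; the sketch below fits it via SVD of the centered points (names and the choice of primitive are assumptions, not from the patent).

```python
import numpy as np

def fit_plane(points):
    """Fit a plane primitive to selected points by least squares.

    Returns (centroid, unit normal); the plane minimizes the sum of
    squared point-to-plane distances.
    """
    centroid = points.mean(axis=0)
    # The normal is the right singular vector of the smallest singular
    # value of the centered point matrix.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Noisy samples of the plane z = 0.
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (100, 2)), rng.normal(0, 0.01, 100)]
center, normal = fit_plane(pts)
```

Other primitives (spheres, cylinders) follow the same pattern with their own parameterizations and error metrics.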
  • Publication number: 20140218354
    Abstract: A view image providing device and method are provided. The view image providing device may include a panorama image generation unit to generate a panorama image using a cube map including a margin area by obtaining an omnidirectional image, a mesh information generation unit to generate 3-dimensional (3D) mesh information that uses the panorama image as a texture by obtaining 3D data, and a user data rendering unit to render the panorama image and the mesh information into user data according to a position and direction input by a user.
    Type: Application
    Filed: December 11, 2013
    Publication date: August 7, 2014
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Il Kyu PARK, Young Mi CHA, Chang Woo CHU, Jae Hean KIM, Jin Sung CHOI, Bon Ki KOO
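A cube map, as used in the panorama generation above, stores an omnidirectional image on six axis-aligned faces. A common lookup from a view direction to a face and texture coordinate is sketched below; the face ordering is an illustrative convention, and the patent's margin area is not modeled here.

```python
import numpy as np

def direction_to_cube_face(d):
    """Map a 3D view direction to a cube-map face index and (u, v) in [0, 1].

    Face order (an assumed convention): 0:+x 1:-x 2:+y 3:-y 4:+z 5:-z.
    """
    ax = np.abs(d)
    major = int(np.argmax(ax))              # dominant axis selects the face
    sign = 0 if d[major] >= 0 else 1
    face = 2 * major + sign
    # The two remaining coordinates, divided by the dominant one,
    # land in [-1, 1]; rescale them to [0, 1] texture coordinates.
    rest = [i for i in range(3) if i != major]
    u = d[rest[0]] / ax[major]
    v = d[rest[1]] / ax[major]
    return face, (u + 1) / 2, (v + 1) / 2

face, u, v = direction_to_cube_face(np.array([1.0, 0.0, 0.0]))
```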
  • Patent number: 8712146
    Abstract: The present invention relates to a method of creating an animatable digital clone that includes receiving input multi-view images of an actor captured by at least two cameras and reconstructing a three-dimensional appearance therefrom, accepting shape information selectively based on a probability of photo-consistency in the input multi-view images obtained from the reconstruction, and transferring a mesh topology of a reference human body model onto the shape of the actor obtained from the reconstruction. The method further includes generating an initial human body model of the actor via transfer of the mesh topology utilizing sectional shape information of the actor's joints, and generating a genuine human body model of the actor by learning genuine behavioral characteristics of the actor, applying the initial human body model to multi-view posture learning images in which the actor's performance of a predefined motion is recorded.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: April 29, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Won Kim, Seong Jae Lin, Bo Youn Kim, Il Kyu Park, Ji Young Park, Bon Ki Koo, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Bon Woo Hwang, Young Jik Lee
  • Publication number: 20140112574
    Abstract: Disclosed is a method of calibrating a depth image based on a relationship between a depth sensor and a color camera. An apparatus for calibrating a depth image may include a three-dimensional (3D) point determiner to determine a 3D point of a camera image and a 3D point of a depth image captured simultaneously with the camera image, a calibration information determiner to determine calibration information for calibrating an error of a depth image captured by the depth sensor and geometric information between the depth sensor and the color camera, using the 3D point of the camera image and the 3D point of the depth image, and a depth image calibrator to calibrate the depth image based on the calibration information and the 3D point of the depth image.
    Type: Application
    Filed: October 22, 2013
    Publication date: April 24, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jae Hean KIM, Chang Woo CHU, Il Kyu PARK, Young Mi CHA, Jin Sung CHOI, Bon Ki KOO
  • Patent number: 8659594
    Abstract: The present invention relates to a method and apparatus for capturing the motion of a dynamic object, such as a human body, and for restoring, from multi-viewpoint video images acquired using multiple cameras, the appearance of the object and the motion information of its main joints on the basis of a skeletal structure. According to exemplary embodiments of the present invention, it is possible to restore the motion information of an object making a dynamic motion using only an image sensor for the visible light range, and to reproduce a multi-viewpoint image by storing the restored information effectively. Further, it is possible to restore the motion information of the dynamic object without attaching specific markers.
    Type: Grant
    Filed: December 16, 2010
    Date of Patent: February 25, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho-Won Kim, Seong-Jae Lim, Han-Byul Joo, Hyun Kang, Bon-Ki Koo, Chang-Woo Chu
  • Publication number: 20130207966
    Abstract: Disclosed are an apparatus and a method for producing a 3D model in which a 3D model with a static background is produced using a point cloud and an image obtained through 3D scanning. The apparatus includes an image matching unit for producing a matched image by matching a point cloud, obtained by scanning a predetermined region, to a camera image obtained by photographing the region; a mesh model processing unit for producing an object positioned in the region as a mesh model; and a 3D model processing unit for producing a 3D model of the object by applying texture information obtained from the matched image to the mesh model. The disclosed apparatus and method may be used for a 3D map service.
    Type: Application
    Filed: September 14, 2012
    Publication date: August 15, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chang Woo CHU, IL Kyu PARK, Young Mi CHA, Ji Hyung LEE, Bon Ki KOO
  • Patent number: 8472700
    Abstract: A method for creating a 3D face model by using multi-view image information, includes: creating a mesh structure for expressing an appearance of a 3D face model by using a first multi-view image obtained by capturing an expressionless face of a performer; and locating joints of a hierarchical structure in the mesh structure, by using a second multi-view image obtained by capturing an expression performance of the performer. Further, the method includes creating a 3D face model that is animated to enable reproduction of a natural expression of the performer, by setting dependency between the joints and the mesh structure.
    Type: Grant
    Filed: December 15, 2008
    Date of Patent: June 25, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Won Kim, Bon Ki Koo, Chang Woo Chu, Seong Jae Lim, Jeung Chul Park, Ji Young Park
  • Patent number: 8462149
    Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertex information (511), property information (512) representing properties of the 3D mesh model, and connectivity information (513) between the vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertex and property information of the 3D mesh model by using the vertex, property, and connectivity information (511, 512, 513). Further, the apparatus includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information, and connectivity information (511, 512, 513) by using the decision bit.
    Type: Grant
    Filed: April 16, 2009
    Date of Patent: June 11, 2013
    Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
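The quantization step underlying the compression scheme above maps floating-point vertex coordinates onto a fixed-precision integer grid. A minimal uniform-quantization sketch, with assumed names and bit depth (the patent's decision-bit encoding of connectivity is not modeled):

```python
import numpy as np

def quantize_vertices(vertices, bits=12):
    """Uniformly quantize vertex coordinates to `bits`-bit integer codes.

    Returns the codes plus the (min, scale) needed to dequantize them.
    """
    vmin = vertices.min(axis=0)
    extent = vertices.max(axis=0) - vmin
    levels = (1 << bits) - 1
    # Guard against degenerate (zero-extent) axes.
    scale = np.where(extent > 0, extent / levels, 1.0)
    codes = np.round((vertices - vmin) / scale).astype(np.uint16)
    return codes, vmin, scale

def dequantize_vertices(codes, vmin, scale):
    """Reconstruct approximate vertex positions from the integer codes."""
    return codes * scale + vmin

rng = np.random.default_rng(2)
verts = rng.uniform(-10, 10, (500, 3))
codes, vmin, scale = quantize_vertices(verts)
restored = dequantize_vertices(codes, vmin, scale)
max_err = np.abs(restored - verts).max()
```

The reconstruction error is bounded by half a quantization step per axis, which is the usual precision/size trade-off such coders expose via the bit depth.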
  • Patent number: 8421870
    Abstract: Disclosed are an apparatus and a method for the automatic control of multiple cameras, capable of supporting an effective camera view angle in a broadcast, a movie, etc. The automatic control apparatus for multiple cameras includes: a first main camera; a first camera driver controlling an operation of the first main camera; a second main camera; a second camera driver controlling an operation of the second main camera; at least one auxiliary camera; at least one third camera driver controlling an operation of the at least one auxiliary camera; and an interoperation processor changing the view angle of the at least one auxiliary camera by controlling the at least one third camera driver in accordance with a change in the view angle of the first main camera, the second main camera, or both main cameras.
    Type: Grant
    Filed: August 25, 2010
    Date of Patent: April 16, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hyun Kang, Ho Won Kim, Chang Woo Chu, Bon Ki Koo
  • Publication number: 20130089259
    Abstract: A space segmentation method for 3D point clouds is disclosed. A space segmentation method for 3D point clouds includes equally segmenting a space of the 3D point clouds into a plurality of grid cells; establishing a base plane corresponding to a ground part of the space of the 3D point clouds; accumulating points of all grid cells located perpendicular to the base plane in a grid cell of the base plane; and segmenting the grid cell in which the points are accumulated into an object part and a ground part according to the number of accumulated points.
    Type: Application
    Filed: August 31, 2012
    Publication date: April 11, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Youngmi CHA, Chang Woo Chu, Il Kyu Park, Ji Hyung Lee, Bonki Koo
  • Patent number: 8417034
    Abstract: Disclosed herein is an apparatus and method for separating a foreground and a background. The apparatus includes a background model creation unit for creating a code book including a plurality of code words in order to separate the foreground and the background, and a foreground/background separation unit for separating the foreground and the background using the created code book. The method includes the steps of creating a code book including a plurality of code words in order to separate the foreground and the background, rearranging the code words of the created code book on the basis of the number of sample data that belong to each of the code words, and separating the foreground and the background using the code book.
    Type: Grant
    Filed: August 5, 2009
    Date of Patent: April 9, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kap Kee Kim, Bon Woo Hwang, Ji Hyung Lee, Jin Seo Kim, Seung Wook Lee, Chang Woo Chu, Ho Won Kim, Bon Ki Koo, Gil haeng Lee
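Code-book background subtraction, as described above, learns per-pixel code words from background frames and flags pixels that match no word as foreground. A grayscale toy sketch (the tolerance, the intensity-range word format, and the rearranging-by-sample-count step are simplified or omitted; names are assumptions):

```python
import numpy as np

def build_codebook(frames, tol=10):
    """Learn per-pixel intensity code words from background frames.

    Each code word is a [low, high] intensity range; a training sample
    either widens a word it falls within (up to `tol`) or starts a new one.
    """
    h, w = frames[0].shape
    book = [[[] for _ in range(w)] for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                v = int(frame[y, x])
                for word in book[y][x]:
                    if word[0] - tol <= v <= word[1] + tol:
                        word[0], word[1] = min(word[0], v), max(word[1], v)
                        break
                else:
                    book[y][x].append([v, v])
    return book

def separate_foreground(frame, book, tol=10):
    """Mark a pixel as foreground (True) when it matches no code word."""
    h, w = frame.shape
    fg = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            v = int(frame[y, x])
            fg[y, x] = not any(lo - tol <= v <= hi + tol
                               for lo, hi in book[y][x])
    return fg

# Static background around intensity 100; one bright foreground pixel later.
bg_frames = [np.full((4, 4), 100, np.uint8) for _ in range(5)]
book = build_codebook(bg_frames)
test_frame = np.full((4, 4), 100, np.uint8)
test_frame[2, 2] = 250
mask = separate_foreground(test_frame, book)
```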
  • Publication number: 20130057653
    Abstract: A method for rendering a point cloud using a voxel grid includes generating a bounding box enclosing the entire point cloud and dividing the generated bounding box into voxels to make the voxel grid; and allocating at least one texture plane to each of the voxels of the voxel grid. Further, the method includes orthogonally projecting the points within each voxel to the allocated texture planes to generate texture images; and rendering each voxel of the voxel grid by selecting one of the texture planes within the voxel, using the central position of the voxel and the 3D camera position, and rendering with the texture images corresponding to the selected texture plane.
    Type: Application
    Filed: July 25, 2012
    Publication date: March 7, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Il Kyu PARK, Chang Woo CHU, Youngmi CHA, Ji Hyung LEE, Bonki KOO
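The first two steps above (bounding box, voxel subdivision) and the view-dependent plane selection can be sketched as follows. The grid resolution and names are illustrative assumptions, and the orthogonal projection to texture images is omitted.

```python
import numpy as np

def make_voxel_grid(points, n=8):
    """Divide the points' bounding box into an n x n x n voxel grid.

    Returns the box minimum, the per-axis voxel size, and a dict mapping
    each occupied voxel index to the indices of the points it contains.
    """
    pmin = points.min(axis=0)
    size = (points.max(axis=0) - pmin) / n
    # Points on the box's far face are clamped into the last voxel.
    idx = np.minimum((points - pmin) // size, n - 1).astype(int)
    voxels = {}
    for i, key in enumerate(map(tuple, idx)):
        voxels.setdefault(key, []).append(i)
    return pmin, size, voxels

def select_texture_plane(voxel_center, camera_pos):
    """Pick the axis-aligned texture plane most perpendicular to the view
    direction from the voxel center to the camera."""
    view = camera_pos - voxel_center
    return int(np.argmax(np.abs(view)))  # 0 = YZ, 1 = XZ, 2 = XY plane

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, (1000, 3))
pmin, size, voxels = make_voxel_grid(pts)
plane = select_texture_plane(np.array([0.5, 0.5, 0.5]),
                             np.array([10.0, 0.5, 0.5]))
```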
  • Patent number: 8270704
    Abstract: A method for reconstructing a 3D shape model of an object by using multi-view image information, includes: inputting multi-view images obtained by photographing the object from multiple viewpoints in a voxel space, and extracting silhouette information and color information of the multi-view images; reconstructing visual hulls by silhouette intersection using the silhouette information; and approximating polygons of cross-sections of the visual hulls to a natural geometric shape of the object by using the color information. Further, the method includes expressing a 3D geometric shape of the object by connecting the approximated polygons to create a mesh structure; extracting color textures of a surface of the object by projecting meshes of the mesh structure to the multi-view image; and creating a 3D shape model by modeling natural shape information and surface color information of the object.
    Type: Grant
    Filed: December 15, 2008
    Date of Patent: September 18, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Won Kim, Chang Woo Chu, Bon Ki Koo
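Visual hulls by silhouette intersection, as in the abstract above, are commonly approximated by voxel carving: a voxel survives only if it projects inside every view's silhouette. A toy orthographic sketch follows (the projections, grid, and names are illustrative assumptions; the patent's polygon approximation, meshing, and texturing steps are omitted).

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid):
    """Keep a voxel only if its center projects inside every silhouette.

    silhouettes: list of boolean (H, W) masks, one per view.
    projections: list of functions mapping a 3D point to a (row, col) pixel.
    grid: (N, 3) array of voxel centers.
    """
    inside = np.ones(len(grid), dtype=bool)
    for sil, proj in zip(silhouettes, projections):
        h, w = sil.shape
        for i, p in enumerate(grid):
            if not inside[i]:
                continue  # already carved away by an earlier view
            r, c = proj(p)
            # A voxel projecting outside the image or silhouette is carved.
            inside[i] = 0 <= r < h and 0 <= c < w and bool(sil[r, c])
    return inside

# Toy setup: two orthographic views of a cube occupying [2, 6) on each axis.
grid = np.array([[x, y, z] for x in range(8)
                 for y in range(8) for z in range(8)], dtype=float)
sil = np.zeros((8, 8), dtype=bool)
sil[2:6, 2:6] = True
views = [lambda p: (int(p[1]), int(p[0])),   # front view: (y, x)
         lambda p: (int(p[1]), int(p[2]))]   # side view: (y, z)
hull = carve_visual_hull([sil, sil], views, grid)
```

Two axis-aligned views of a square silhouette leave exactly the 4 x 4 x 4 block of voxels whose centers fall inside both silhouettes.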
  • Patent number: 8259102
    Abstract: A method for producing a 3D facial animation using a single facial video stream, includes producing a representative joint focused on a major expression producing element in a 3D standard facial model, producing a statistical feature-point model of various facial expressions of different people in the 3D standard facial model, moving each feature-point of the statistical feature-point model by tracking a change in facial expressions of the video stream, calculating a transformation coefficient of the representative joint corresponding to a variation of the feature-point of the 3D standard facial model, and producing a 3D facial animation by applying the calculated transformation coefficient to transform the representative joint.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: September 4, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Jae Lim, Jeung Chul Park, Chang Woo Chu, Ho Won Kim, Ji Young Park, Bon Ki Koo