Patents by Inventor Eun Young Chang

Eun Young Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8588515
    Abstract: A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-view image.
    Type: Grant
    Filed: January 28, 2010
    Date of Patent: November 19, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gun Bang, Gi-Mun Um, Eun-Young Chang, Taeone Kim, Nam-Ho Hur, Jin-Woong Kim, Soo-In Lee
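The 8588515 entry above sets an intensity difference value for a candidate disparity using the left, right, and center views together with the previous depth frame. As a loose illustration of that kind of per-pixel cost, the sketch below combines a photo-consistency term from the side views with a temporal penalty from the previous frame's disparity; the cost form, the `temporal_weight` factor, and the horizontal-shift camera model are assumptions made for this sketch, not the method claimed in the patent.

```python
import numpy as np

def disparity_cost(center, left, right, prev_disp, x, y, d, temporal_weight=0.1):
    """Illustrative per-pixel cost for candidate disparity d at pixel (x, y).

    Inputs are grayscale NumPy arrays of equal size. Assumes rectified views
    where a point seen at column x in the center image appears near x + d in
    the left image and x - d in the right image (a sign convention assumed
    only for this sketch).
    """
    h, w = center.shape
    xl, xr = min(w - 1, x + d), max(0, x - d)

    # Photo-consistency: intensity differences against the side views.
    cost = abs(float(center[y, x]) - float(left[y, xl]))
    cost += abs(float(center[y, x]) - float(right[y, xr]))

    # Temporal term: penalize deviation from the previous frame's disparity.
    cost += temporal_weight * abs(d - float(prev_disp[y, x]))
    return cost

def refine_pixel(center, left, right, prev_disp, x, y, d_specific, search_radius=2):
    """Search a small disparity range around a specific disparity value."""
    candidates = range(max(0, d_specific - search_radius), d_specific + search_radius + 1)
    return min(candidates,
               key=lambda d: disparity_cost(center, left, right, prev_disp, x, y, d))
```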
  • Publication number: 20130083165
    Abstract: Provided are an apparatus and method for extracting a texture image and a depth image. The apparatus of the present invention may project a pattern image on a target object, may capture a scene image in which the pattern image is reflected from the target object, and may simultaneously extract the texture image and the depth image using the scene image.
    Type: Application
    Filed: December 8, 2010
    Publication date: April 4, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Woong Kim, Roger Blanco Ribera, Tae One Kim, Eun Young Chang, Wook Joong Kim, Nam Ho Hur, Soo In Lee, Seung Ku Hwang
  • Publication number: 20120154533
    Abstract: A device for creating multi-view video contents includes a virtual view position calculation and distribution unit that calculates positions of a plurality of virtual views corresponding to the number of predetermined virtual views based on input information and distributes the calculated positions of the plurality of virtual views to a plurality of view synthesis processing units; a view synthesis processor that operates the plurality of view synthesis processing units in parallel, allows each of the view synthesis processing units to create at least one virtual view video corresponding to a position of at least one virtual view distributed from the virtual view position calculation and distribution unit, and performs partial multiplexing based on the at least one created virtual view video; and a video integration unit that integrates the plurality of partially multiplexed videos output from the plurality of view synthesis processing units.
    Type: Application
    Filed: December 16, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gun Bang, Gi-Mun Um, Tae One Kim, Eun Young Chang, Nam Ho Hur, Soo In Lee
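The 20120154533 entry above distributes calculated virtual-view positions across several view synthesis processing units running in parallel and then integrates their partially multiplexed outputs. The sketch below illustrates only the distribution and parallel-execution idea; the uniform spacing rule, the `synthesize_view` placeholder, and the thread-pool scheduling are illustrative assumptions, not the device's actual architecture.

```python
from concurrent.futures import ThreadPoolExecutor

def virtual_view_positions(left_pos, right_pos, num_virtual_views):
    """Evenly spaced virtual-view positions between two reference cameras."""
    step = (right_pos - left_pos) / (num_virtual_views + 1)
    return [left_pos + step * (i + 1) for i in range(num_virtual_views)]

def synthesize_view(position):
    """Stand-in for one view synthesis processing unit (hypothetical)."""
    return ("virtual view at", position)

def synthesize_all(positions, num_units=4):
    """Hand the positions to parallel units and collect their outputs for integration."""
    with ThreadPoolExecutor(max_workers=num_units) as pool:
        return list(pool.map(synthesize_view, positions))

# Example: six virtual views between reference cameras at positions 0.0 and 1.0.
views = synthesize_all(virtual_view_positions(0.0, 1.0, 6))
```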
  • Publication number: 20120154517
    Abstract: A method and a device for adjusting depth perception, a terminal including a depth perception adjustment function, and a method for operating the terminal are provided. The method for adjusting depth perception includes: obtaining color and depth videos of a user; detecting the user's position based on the obtained depth video; calculating a range of maximum and minimum depths in a 3-dimensional (3D) video according to the detected user position; and adjusting the left-and-right stereo video generation interval of the 3D video to be rendered so as to satisfy the calculated range of maximum and minimum depths. Therefore, during a 3D or multi-view video call, a 3D video having a three-dimensional effect optimized for the user's position may be provided.
    Type: Application
    Filed: December 7, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gi-Mun Um, Gun Bang, Tae One Kim, Eun Young Chang, Hong Chang Shin, Nam Ho Hur, Soo In Lee
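The 20120154517 entry above adjusts the stereo video generation interval so that the rendered 3D video satisfies a depth range computed from the viewer's position. Under the common simplifying assumption that screen disparity scales linearly with the virtual camera interval, one way to pick the interval is to scale it until the disparity span fits the allowed range; the linear model and the function below are illustrative assumptions, not the claimed method. A full implementation would also shift the convergence (zero-disparity) plane to center the range, which this sketch omits.

```python
def adjust_stereo_interval(current_interval, d_min, d_max, allowed_min, allowed_max):
    """Scale the left/right view generation interval so disparities fit a range.

    current_interval       : current virtual camera separation
    d_min, d_max           : extreme screen disparities produced with that interval
    allowed_min/allowed_max: disparity limits derived from the detected viewer position

    Assumes each disparity is proportional to the camera interval, so scaling
    the interval by a factor s scales the whole disparity span by s as well.
    """
    span = d_max - d_min
    allowed_span = allowed_max - allowed_min
    if span <= 0:
        return current_interval  # flat scene; nothing to adjust
    scale = allowed_span / span
    return current_interval * scale
```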
  • Publication number: 20120148173
    Abstract: A method and a device for generating a multi-viewpoint image are provided. The method of generating a multi-viewpoint image includes the steps of: acquiring at least one reference-viewpoint image; generating unit image information of a virtual-viewpoint image on the basis of unit image information of the reference-viewpoint image; multiplexing the unit image information of the reference-viewpoint image and the unit image information of the virtual-viewpoint image; and generating a multi-viewpoint image by performing an interpolation process on occluded areas between the multiplexed unit image information using the multiplexed unit image information. As a result, it is possible to avoid the unnecessary processes of completing and rearranging individual viewpoint images in the course of generating a multi-viewpoint image.
    Type: Application
    Filed: December 8, 2011
    Publication date: June 14, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gun Bang, Gi-Mun Um, Tae One Kim, Eun Young Chang, Nam Ho Hur, Soo In Lee
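The 20120148173 entry above generates virtual-viewpoint image information from reference-viewpoint information and interpolates the occluded areas. Below is a minimal sketch of the depth-image-based rendering step this kind of system builds on: forward-warp a reference view by per-pixel disparity, then fill the resulting holes by interpolating along each row. The horizontal-shift model and the linear hole filling are assumptions for illustration only, not the multiplexing scheme described in the application.

```python
import numpy as np

def warp_view(reference, disparity, alpha):
    """Warp a reference view toward a virtual viewpoint and fill occlusion holes.

    alpha scales the per-pixel disparity (0 = reference view, 1 = the far end
    of the baseline). Pixels with no warped sample are holes (disocclusions)
    and are filled by linear interpolation along each row.
    """
    h, w = reference.shape
    virtual = np.full((h, w), np.nan, dtype=np.float64)

    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w:
                # Note: overlapping targets simply overwrite here; a real
                # renderer resolves overlaps by depth ordering.
                virtual[y, xt] = reference[y, x]

        # Fill holes in this row by interpolating between known samples.
        row = virtual[y]
        known = ~np.isnan(row)
        if known.any():
            row[~known] = np.interp(np.flatnonzero(~known),
                                    np.flatnonzero(known), row[known])
    return virtual
```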
  • Patent number: 8131094
    Abstract: A method and apparatus for encoding and decoding three-dimensional mesh information are provided. The method and apparatus separately encode/decode order information of elements, such as vertices and faces, of a three-dimensional mesh model (original model) in consideration of a change in an element order during encoding three-dimensional mesh information for the original model. The method for encoding three-dimensional mesh information includes the steps of: encoding the three-dimensional mesh information and outputting an encoded bit-stream; calculating order information of at least one element in an original model contained in the three-dimensional mesh information; encoding the element order information; and generating packets of the encoded bit-stream, wherein the encoded element order information is inserted into the packet.
    Type: Grant
    Filed: April 13, 2006
    Date of Patent: March 6, 2012
    Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Eun Young Chang, Nam Ho Hur, Soo In Lee, Euee Seon Jang, Dai Yong Kim, Byeong Wook Min, Sun Young Lee
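Patent 8131094 above encodes, alongside the compressed mesh, order information that maps elements back to their original order, since mesh connectivity coders typically re-order vertices and faces. As a rough illustration of what such order information can look like, the sketch below derives a permutation from original and encoder-output element identifiers and applies it after decoding; this identifier-based representation is an assumption for the sketch, not the encoding defined in the patent.

```python
def order_information(original_ids, encoded_ids):
    """For each element in encoder-output order, record its original index."""
    position = {eid: i for i, eid in enumerate(original_ids)}
    return [position[eid] for eid in encoded_ids]

def restore_order(decoded_elements, order_info):
    """Place decoded elements (in encoder-output order) back at their original indices."""
    restored = [None] * len(decoded_elements)
    for element, original_index in zip(decoded_elements, order_info):
        restored[original_index] = element
    return restored

# Example: the coder re-ordered vertices v0..v3 as v2, v0, v3, v1.
info = order_information(["v0", "v1", "v2", "v3"], ["v2", "v0", "v3", "v1"])
assert restore_order(["v2", "v0", "v3", "v1"], info) == ["v0", "v1", "v2", "v3"]
```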
  • Publication number: 20110261883
    Abstract: Provided is a multi-view video coding/decoding method and apparatus which uses coded and decoded multi-view videos to code and decode depth information videos corresponding to the multi-view videos. The multi-view video coding method includes: controlling the scales of first and second depth information videos corresponding to a multi-view video such that the scales are equalized; and coding the second depth information video, of which the scale is controlled, by referring to the first depth information video of which the scale is controlled.
    Type: Application
    Filed: December 8, 2009
    Publication date: October 27, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gun Bang, Gi-Mun Um, Taeone Kim, Eun-Young Chang, Namho Hur, Jin-Woong Kim, Soo-In Lee
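The 20110261883 entry above equalizes the scales of two depth information videos before one is coded with the other as a reference. One minimal reading of that scale control is bringing both depth maps to a common spatial resolution and value range so that one can serve as a prediction reference; the nearest-neighbour resampling and 8-bit range normalization below are assumptions for illustration, not the coding tool described in the application.

```python
import numpy as np

def equalize_scale(depth, target_shape, target_max=255.0):
    """Resample a depth map to a target resolution and value range.

    Uses nearest-neighbour sampling and a linear value rescale; a real codec
    would use its own filters and signalled scaling parameters.
    """
    h, w = depth.shape
    th, tw = target_shape
    ys = np.arange(th) * h // th
    xs = np.arange(tw) * w // tw
    resampled = depth[np.ix_(ys, xs)].astype(np.float64)

    d_min, d_max = resampled.min(), resampled.max()
    if d_max > d_min:
        resampled = (resampled - d_min) / (d_max - d_min) * target_max
    return resampled
```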
  • Publication number: 20110149031
    Abstract: Provided is a camera apparatus, including: a camera unit including at least two cameras to acquire a stereoscopic image; a camera control unit to adjust a view angle of the at least two cameras and an interval between the at least two cameras; a depth information obtainment unit to obtain depth information from the stereoscopic image; and an intermediate image generator to generate an intermediate image based on the stereoscopic image and the depth information.
    Type: Application
    Filed: August 17, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gi Mun Um, Gun Bang, Won-Sik Cheong, Hong-Chang Shin, Taeone Kim, Eun Young Chang, Namho Hur, Jin Woong Kim, Soo In Lee
  • Publication number: 20100315490
    Abstract: An apparatus for generating depth information includes: a projector configured to project a predetermined pattern to an object to be photographed; a left camera configured to acquire a left image of a structured light image which is generated by projecting the predetermined pattern to the object; a right camera configured to acquire a right image of the structured light image; a depth information generating unit configured to determine correspondence points based on the left image, the right image, and the structured light pattern, to generate depth information of the image, to determine the depth information by applying a stereo matching method to the left image and the right image when the structured light pattern cannot be applied to a field of the image, and to generate depth information of the entire image based on the acquired depth information.
    Type: Application
    Filed: January 19, 2010
    Publication date: December 16, 2010
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Taeone Kim, Namho Hur, Jin-Woong Kim, Gi-Mun Um, Gun Bang, Eun-Young Chang
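The 20100315490 entry above determines correspondence points from the projected structured-light pattern where possible and falls back to stereo matching where the pattern cannot be applied. The sketch below shows only that per-pixel selection step, using OpenCV block matching as the stereo fallback; the validity mask, the parameter values, and the choice of StereoBM are assumptions for illustration rather than the apparatus described in the application.

```python
import numpy as np
import cv2

def fused_disparity(left_gray, right_gray, sl_disparity, sl_valid_mask):
    """Combine structured-light disparity with a stereo-matching fallback.

    left_gray, right_gray : rectified 8-bit grayscale views
    sl_disparity          : disparity decoded from the projected pattern
    sl_valid_mask         : True where the pattern could be decoded
    """
    # Stereo block matching over the whole image (OpenCV returns a
    # fixed-point disparity map scaled by 16).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    bm_disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Prefer structured-light values; fall back to block matching elsewhere.
    return np.where(sl_valid_mask, sl_disparity, bm_disparity)
```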
  • Publication number: 20100195898
    Abstract: A method and apparatus for enhancing quality of a depth image are provided. A method for enhancing quality of a depth image includes: receiving a multi-view image including a left image, a right image, and a center image; receiving a current depth image frame and a previous depth image frame of the current depth image frame; setting an intensity difference value corresponding to a specific disparity value of the current depth image frame by using the current depth image frame and the previous depth image frame; setting a disparity value range including the specific disparity value; and setting an intensity difference value corresponding to the disparity value range of the current depth image frame by using the multi-view image.
    Type: Application
    Filed: January 28, 2010
    Publication date: August 5, 2010
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gun Bang, Gi-Mun Um, Eun-Young Chang, Taeone Kim, Nam-Ho Hur, Jin-Woong Kim, Soo-In Lee
  • Publication number: 20090278844
    Abstract: Provided are methods of encoding and decoding stitching information that is generated when a 3D mesh model that is non-manifold or non-orientable is converted into orientable, manifold models during encoding of the 3D mesh information of the model. The method of encoding 3D mesh information includes the steps of: encoding the 3D mesh information to output an encoded bitstream; encoding stitching information of elements contained in the 3D mesh information, the stitching information containing the total number of duplicated original elements and, for each duplicated original element, the duplication count, original-element identification information, and duplicated-element identification information; and generating a packet of the bitstream, wherein the encoded stitching information is inserted into the packet.
    Type: Application
    Filed: January 11, 2007
    Publication date: November 12, 2009
    Inventors: Eun Young Chang, Nam Ho Hur, Jin Woong Kim, Soo In Lee, Euee Seon Jang, Sin Wook Lee, Jae Bum Jun
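The 20090278844 entry above encodes stitching information listing, for each duplicated original element, the duplication count and the identifiers of the original and duplicated elements, so that a decoder can merge the duplicates back into the non-manifold model. The dictionary-based representation and the `stitch` helper below are illustrative assumptions only, not the bitstream syntax defined in the application.

```python
def build_stitching_info(duplicates):
    """duplicates: mapping of original element id -> list of duplicated element ids."""
    return {
        "num_duplicated_originals": len(duplicates),
        "entries": [
            {"original": orig, "count": len(copies), "copies": list(copies)}
            for orig, copies in duplicates.items()
        ],
    }

def stitch(vertex_refs, stitching_info):
    """Remap every duplicated element id back to its original id."""
    remap = {}
    for entry in stitching_info["entries"]:
        for copy_id in entry["copies"]:
            remap[copy_id] = entry["original"]
    return [remap.get(v, v) for v in vertex_refs]

# Example: vertex 7 was split into 7a and 7b to make the mesh manifold.
info = build_stitching_info({7: ["7a", "7b"]})
assert stitch([1, "7a", 3, "7b"], info) == [1, 7, 3, 7]
```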
  • Publication number: 20090263029
    Abstract: A method and apparatus for encoding and decoding three-dimensional mesh information are provided. The method and apparatus separately encode/decode order information of elements, such as vertices and faces, of a three-dimensional mesh model (original model) in consideration of a change in an element order during encoding three-dimensional mesh information for the original model. The method for encoding three-dimensional mesh information includes the steps of: encoding the three-dimensional mesh information and outputting an encoded bit-stream; calculating order information of at least one element in an original model contained in the three-dimensional mesh information; encoding the element order information; and generating packets of the encoded bit-stream, wherein the encoded element order information is inserted into the packet.
    Type: Application
    Filed: April 13, 2006
    Publication date: October 22, 2009
    Inventors: Eun Young Chang, Nam Ho Hur, Soo In Lee, Euee Seon Jang, Dai Yong Kim, Byeong Wook Min, Sun Young Lee
  • Publication number: 20090080516
    Abstract: Provided is a method of encoding and decoding texture coordinates of 3D mesh information. The method of encoding texture coordinates in 3D mesh information includes the steps of: setting an adaptive quantization step size used for quantizing the texture coordinates; quantizing the texture coordinates using the adaptive quantization step size; and encoding the quantized texture coordinates.
    Type: Application
    Filed: January 13, 2006
    Publication date: March 26, 2009
    Inventors: Eun Young Chang, Chung Hyun Ahn, Euee Seon Jang, Mi Ja Kim, Dai Yong Kim, Sun Young Lee
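The 20090080516 entry above quantizes texture coordinates with an adaptive quantization step size before encoding them. A common way to derive such a step size is from the coordinate range and a target bit budget, which the sketch below illustrates; the range-based rule and the `bits` parameter are assumptions for this sketch, not the step-size selection defined in the application.

```python
import numpy as np

def adaptive_step(coords, bits=12):
    """Choose a quantization step from the coordinate range and a bit budget."""
    span = float(coords.max() - coords.min())
    return span / ((1 << bits) - 1) if span > 0 else 1.0

def quantize(coords, step, origin):
    """Map texture coordinates to integer levels (what would be entropy coded)."""
    return np.round((coords - origin) / step).astype(np.int64)

def dequantize(levels, step, origin):
    """Reconstruct approximate texture coordinates from quantized levels."""
    return levels * step + origin

uv = np.array([[0.10, 0.95], [0.40, 0.20], [0.73, 0.51]])
step = adaptive_step(uv)
levels = quantize(uv, step, uv.min())
recovered = dequantize(levels, step, uv.min())  # matches uv to within step / 2
```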
  • Publication number: 20070296721
    Abstract: Provided are a contents generating apparatus that can support moving-object substitution, depth-based object insertion, background image substitution, and view offering upon a user request, and that can produce realistic images by applying the lighting information of a real image to a computer graphics object when the real image is composited with the computer graphics object, and a contents generating method thereof. The apparatus includes: a preprocessing block, a camera calibration block, a scene model generating block, an object extracting/tracing block, a real image/computer graphics object compositing block, an image generating block, and a user interface block. From a contents producer's perspective, the present invention supports diverse production methods based on the concept of a three-dimensional virtual studio, such as testing the optimal camera viewpoint and scene structure before contents are actually authored and compositing two scenes shot in different places into one scene.
    Type: Application
    Filed: July 26, 2005
    Publication date: December 27, 2007
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Eun-Young Chang, Gi-Mun Um, Daehee Kim, Chung-Hyun Ahn, Soo-In Lee