Patents by Inventor Jeung Chul Park

Jeung Chul Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11876865
    Abstract: The present disclosure relates to a method and an apparatus for generating a space for sharing augmented reality content among multiple participants. With the method and apparatus, an object that configures a virtual space and is placed in an actual space can also be placed, spatially matched on a per-user basis, inside a common sharing space in the augmented reality content.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: January 16, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Beom Ryeol Lee, Jeung Chul Park, Wook Ho Son, Yong Ho Lee
  • Publication number: 20230237741
    Abstract: Provided are a method and device for outputting a large-capacity 3D model for an augmented reality (AR) device. The method includes generating a multi-texture and a 3D mesh based on a multi-view image, generating a 3D model using the multi-texture and the 3D mesh, and transmitting, to the AR device, an image of the 3D model in the view to which the camera of the AR device is directed, according to the camera movement and rotation information of the AR device; the AR device then outputs the image in that view.
    Type: Application
    Filed: October 26, 2022
    Publication date: July 27, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeung Chul PARK, Wookho SON, Beom Ryeol LEE, Yongho LEE
  • Patent number: 9961330
    Abstract: Provided is a method of generating multi-view immersive content. The method includes obtaining a multi-view background image from a plurality of cameras arranged in a curved shape, modeling the obtained multi-view background image to generate a codebook corresponding to the multi-view background image, obtaining a multi-view image including an object from the plurality of cameras and separating a foreground and a background from the obtained multi-view image by using the generated codebook, and synthesizing the object included in the separated foreground with a virtual background to generate multi-view immersive content.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: May 1, 2018
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeung Chul Park, Hyung Jae Son, Beom Ryeol Lee, Il Kwon Jeong
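
The codebook-based foreground/background separation described in the abstract above can be illustrated with a minimal sketch. All names here, and the simplified per-pixel min/max intensity codeword, are assumptions for illustration only, not the patented method.

```python
# Minimal codebook-style background model: each pixel's codeword is the
# intensity range observed across the background frames, widened by a
# noise tolerance. Frames are dicts mapping (x, y) -> intensity.

def build_codebook(background_frames, tolerance=10):
    """Model each pixel by the min/max intensity seen across background frames."""
    codebook = {}
    for frame in background_frames:
        for pixel, value in frame.items():
            lo, hi = codebook.get(pixel, (value, value))
            codebook[pixel] = (min(lo, value), max(hi, value))
    # Widen each codeword by a tolerance to absorb sensor noise.
    return {p: (lo - tolerance, hi + tolerance) for p, (lo, hi) in codebook.items()}

def separate_foreground(frame, codebook):
    """A pixel is foreground if it falls outside its background codeword."""
    return {p: v for p, v in frame.items()
            if not (codebook[p][0] <= v <= codebook[p][1])}
```

In the multi-view setting of the patent, one such codebook would be built per camera; the separated foreground objects are then composited onto a virtual background.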
  • Publication number: 20170180717
    Abstract: Disclosed herein are a bilateral distance automatic control display apparatus that provides super multiview images regardless of differences in viewers' bilateral distances, and a method therefor. The apparatus may include a projection module and a Fresnel lens. By widening the observable viewing zone of the super multiview images in the left-right direction, the apparatus delivers higher-quality super multiview images to the viewer's eyes, providing an environment in which the viewer can conveniently experience the characteristics of super multiview images.
    Type: Application
    Filed: December 21, 2016
    Publication date: June 22, 2017
    Inventors: Beom Ryeol LEE, Jin Ryong KIM, Jeung Chul PARK, Il Kwon JEONG, Gi Su HEO
  • Patent number: 9619936
    Abstract: A method and apparatus for quickly generating a natural appearing terrain image. The method according to an exemplary embodiment may include generating a new terrain image through a patch-based synthesis from one or more virtual noise-based terrain models and one or more realistic terrain models; and processing blending of a boundary between synthesized terrain images in the newly generated terrain image.
    Type: Grant
    Filed: January 12, 2015
    Date of Patent: April 11, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jeung Chul Park, Il Kwon Jeong
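
The boundary-blending step that the terrain-synthesis abstract mentions can be sketched in one dimension: two adjacent height profiles are joined with a linear cross-fade over an overlap region. The function name and the 1-D simplification are illustrative assumptions, not the patented procedure.

```python
# Cross-fade the seam between two synthesized terrain patches so the
# boundary between them appears natural. Patches are lists of heights.

def blend_boundary(left_patch, right_patch, overlap):
    """Cross-fade the last `overlap` samples of left_patch into the
    first `overlap` samples of right_patch."""
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # weight ramps from left patch to right
        blended.append((1 - w) * left_patch[-overlap + i] + w * right_patch[i])
    return left_patch[:-overlap] + blended + right_patch[overlap:]
```

A 2-D implementation would apply the same weighted average along the overlap band between patches; the patent combines patches from both noise-based and real-terrain models.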
  • Publication number: 20160261855
    Abstract: Provided is a method of generating multi-view immersive content. The method includes obtaining a multi-view background image from a plurality of cameras arranged in a curved shape, modeling the obtained multi-view background image to generate a codebook corresponding to the multi-view background image, obtaining a multi-view image including an object from the plurality of cameras and separating a foreground and a background from the obtained multi-view image by using the generated codebook, and synthesizing the object included in the separated foreground with a virtual background to generate multi-view immersive content.
    Type: Application
    Filed: March 2, 2016
    Publication date: September 8, 2016
    Inventors: Jeung Chul PARK, Hyung Jae SON, Beom Ryeol LEE, Il Kwon JEONG
  • Publication number: 20160150224
    Abstract: A super multi-viewpoint image generating device includes a host system, a storage unit, first and second liquid crystal display (LCD) projection lamps, a synchronization unit, first and second LCD shutters, first and second projection objectives, and an image light combination unit. According to the present invention, a super multi-viewpoint image in which interference between adjacent viewpoint images is removed by viewing zone-based synchronized shutter timing may be generated, thereby providing an environment for generating a realistic super multi-viewpoint image, establishing a method of generating a viewing zone-based super multi-viewpoint image, and displaying the super multi-viewpoint image.
    Type: Application
    Filed: November 12, 2015
    Publication date: May 26, 2016
    Inventors: Beom Ryeol LEE, Jeung Chul PARK, Hyung Jae SON, Il Kwon JEONG
  • Publication number: 20150281681
    Abstract: There are provided a device for projecting a super multi-view image that provides an image of two views to a user's pupil and a method thereof. A device for projecting a super multi-view image according to an embodiment of the present invention includes an operating unit configured to receive super multi-view image content and transmit the received super multi-view image content and a driving signal, and a control unit driven by the driving signal received from the operating unit and configured to divide the received super multi-view image content into a plurality of single-view images, load the divided single-view images in a high speed image display device, and transmit an open command signal to an active shutter corresponding to the single-view image loaded in the high speed image display device among an active shutter array.
    Type: Application
    Filed: February 27, 2015
    Publication date: October 1, 2015
    Inventors: Beom Ryeol LEE, Jeung Chul PARK, Il Kwon JEONG
  • Publication number: 20150262392
    Abstract: A method and apparatus for quickly generating a natural terrain. The method according to an exemplary embodiment may include generating a new terrain through a patch-based synthesis from one or more virtual noise-based terrain models and one or more realistic terrain models; and processing blending of a boundary between synthesized terrains in the newly generated terrain.
    Type: Application
    Filed: January 12, 2015
    Publication date: September 17, 2015
    Inventors: Jeung Chul PARK, Il Kwon JEONG
  • Publication number: 20130201188
    Abstract: Disclosed are an apparatus and method for generating a pre-visualization image that support simulating interactions among a digital actor's motion, the virtual space, and the virtual shooting device's motion in an actual space, and previewing the image by using a virtual camera and a virtual space including a 3D digital actor during image production. According to the present invention, it is possible to support more effective image production.
    Type: Application
    Filed: August 14, 2012
    Publication date: August 8, 2013
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Yoon Seok CHOI, Do Hyung KIM, Jeung Chul PARK, Ji Hyung LEE, Bon Ki KOO
  • Patent number: 8472700
    Abstract: A method for creating a 3D face model by using multi-view image information, includes: creating a mesh structure for expressing an appearance of a 3D face model by using a first multi-view image obtained by capturing an expressionless face of a performer; and locating joints of a hierarchical structure in the mesh structure, by using a second multi-view image obtained by capturing an expression performance of the performer. Further, the method includes creating a 3D face model that is animated to enable reproduction of a natural expression of the performer, by setting dependency between the joints and the mesh structure.
    Type: Grant
    Filed: December 15, 2008
    Date of Patent: June 25, 2013
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ho Won Kim, Bon Ki Koo, Chang Woo Chu, Seong Jae Lim, Jeung Chul Park, Ji Young Park
  • Patent number: 8462149
    Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertices information (511), property information (512) representing properties of the 3D mesh model, and connectivity information (513) between vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertices and property information of the 3D mesh model by using the vertices, property and connectivity information (511, 512, 513). Further, the apparatus includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information and connectivity information (511, 512, 513) by using the decision bit.
    Type: Grant
    Filed: April 16, 2009
    Date of Patent: June 11, 2013
    Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
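
The quantization stage that the mesh-compression abstract above describes can be sketched as uniform quantization of vertex coordinates over the mesh's bounding box. The function name, parameters, and per-axis scheme are illustrative assumptions, not the patented encoder.

```python
# Uniform vertex quantization: each coordinate is mapped to an integer in
# [0, 2**bits - 1] over the per-axis bounding range of the mesh, so the
# continuous geometry can be entropy-coded as small integers.

def quantize_vertices(vertices, bits):
    """Quantize a list of (x, y, z) tuples; also return the bounds needed
    to dequantize on the decoder side."""
    levels = (1 << bits) - 1
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    quantized = []
    for v in vertices:
        q = []
        for i in range(3):
            span = maxs[i] - mins[i]
            q.append(round((v[i] - mins[i]) / span * levels) if span else 0)
        quantized.append(tuple(q))
    return quantized, mins, maxs
```

The decoder inverts the mapping using the transmitted bounds; the residual error is bounded by half a quantization step per axis.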
  • Patent number: 8259102
    Abstract: A method for producing a 3D facial animation using a single facial video stream, includes producing a representative joint focused on a major expression producing element in a 3D standard facial model, producing a statistical feature-point model of various facial expressions of different people in the 3D standard facial model, moving each feature-point of the statistical feature-point model by tracking a change in facial expressions of the video stream, calculating a transformation coefficient of the representative joint corresponding to a variation of the feature-point of the 3D standard facial model, and producing a 3D facial animation by applying the calculated transformation coefficient to transform the representative joint.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: September 4, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seong Jae Lim, Jeung Chul Park, Chang Woo Chu, Ho Won Kim, Ji Young Park, Bon Ki Koo
  • Patent number: 8238648
    Abstract: Provided is a camera self-calibration method that calculates the focal length of a fixed zoom lens camera from correspondence point positions between images. In the method, a cost function of the focal length is defined, and the focal length that minimizes it is obtained, so that the 3D recovery results of correspondence points calculated from all image pairs coincide with one another. Therefore, the reliability of the calculated focal length can be easily verified, and the focal length of the camera can be stably calculated even when the positions of the input correspondence points are given inaccurately.
    Type: Grant
    Filed: November 27, 2007
    Date of Patent: August 7, 2012
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jae Chul Kim, Chang Woo Chu, Ho Won Kim, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Bon Ki Koo
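
The core idea of the self-calibration abstract above, picking the focal length that minimizes a cost function, can be sketched with a simple grid search. The cost function here is a stand-in; the patent's actual cost measures how well the 3D reconstructions from all image pairs agree.

```python
# Grid search for the focal length minimizing a user-supplied cost function.
# In practice the cost would reconstruct correspondence points in 3-D for
# each candidate focal length and penalize disagreement across image pairs.

def calibrate_focal_length(cost, f_min, f_max, steps=1000):
    """Return the focal length in [f_min, f_max] with the lowest cost."""
    best_f, best_c = f_min, cost(f_min)
    for i in range(1, steps + 1):
        f = f_min + (f_max - f_min) * i / steps
        c = cost(f)
        if c < best_c:
            best_f, best_c = f, c
    return best_f
```

Because the cost is evaluated over the whole search range, the shape of the cost curve also indicates how reliable the minimizer is, which matches the abstract's verifiability claim.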
  • Publication number: 20120154393
    Abstract: Disclosed herein are an apparatus and method for creating animation by capturing the motions of a non-rigid object. The apparatus includes a geometry mesh reconstruction unit, a motion capture unit, and a content creation unit. The geometry mesh reconstruction unit receives moving images captured by a plurality of cameras, and generates a reconstruction mesh set for each frame. The motion capture unit generates mesh graph sets for the reconstruction mesh set and generates motion data, including motion information, using the mesh graph sets. The content creation unit creates three-dimensional (3D) content for a non-rigid object by generating a final transformation mesh set, having a topology similar to that of the reconstruction mesh set, using the motion data.
    Type: Application
    Filed: December 21, 2011
    Publication date: June 21, 2012
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji-Hyung LEE, Bon-Ki Koo, Yoon-Seok Choi, Jeung-Chul Park, Do-Hyung Kim, Bon-Woo Hwang, Kap-Kee Kim, Seong-Jae Lim, Han-Byul Joo, Seung-Uk Yoon
  • Publication number: 20110148874
    Abstract: Disclosed herein is a method and system for transforming the muscles of a character model. The muscles of a target model are created using the muscle information of a reference model. The system includes a reference model processor and a target model processor. The reference model processor creates a reference feature volume, that is, a 3D geometric shape, based on the skeleton and appearance information of the reference model, and subordinates the muscle information of the reference model to the feature volume. The target model processor deforms the reference feature volume to be suitable for the target model, and applies the muscle information of the reference model to the target model, thereby creating muscles for the target model based on the extent of the deformation of the reference feature volume.
    Type: Application
    Filed: May 19, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Young Mi CHA, Il Kyu Park, Jeung Chul Park, Ji Hyung Lee, Bon Ki Koo
  • Publication number: 20110148864
    Abstract: Disclosed herein is a method and apparatus for creating a 3D avatar. The method of creating a three dimensional (3D) avatar includes receiving body information of a user and storing the body information in a DataBase (DB), and creating a 3D avatar for the user by modifying standard data, predetermined based on body information about various persons and stored in the DB, based on the body information of the user.
    Type: Application
    Filed: December 10, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Ji-Hyung LEE, Yoon-Seok Choi, Do-Hyung Kim, Il-Kyu Park, Young-Mi Cha, Jeung-Chul Park, Bon-Ki Koo
  • Publication number: 20110142435
    Abstract: Provided are an apparatus and method for estimating reflectance and diffuse elements as optical properties of skin, to perform exact rendering of human skin. The apparatus includes a light source device equipped with a plurality of light sources whose directions toward an object can be controlled, a control unit controlling sequential switching of the light sources, and a photographing unit photographing images of the object. The photographing unit is a DSLR camera providing a video photographing function. All of the light sources are sequentially switched within one second, corresponding to the number of frames the photographing unit captures per second. The control unit and the photographing unit are controlled by a computer, which repeatedly requests the control unit to perform a lighting operation, transmits an image acquisition command to the photographing unit, and requests the control unit to perform a lighting operation again.
    Type: Application
    Filed: June 4, 2010
    Publication date: June 16, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jeung Chul PARK, Bon Ki KOO
  • Publication number: 20110123168
    Abstract: A multimedia application system uses metadata for sensory devices. The system includes: a sensory-device engine for generating a sensory device command (SDC) for controlling the sensory devices based on sensory effect information (SEI) generated to represent sensory effects by using the sensory devices depending on video contents, user preference information (UPI) of the sensory devices and device capability information (DCI) indicative of reproducing capability of the sensory devices; and a sensory-device controller for controlling sensory devices to perform sensory effect reproduction in response to the generated SDC.
    Type: Application
    Filed: June 19, 2009
    Publication date: May 26, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Maeng Sub Cho, Jin Seo Kim, Bon Ki Koo, Ji Hyung Lee, Chang Woo Chu, Ho Won Kim, Il Kyu Park, Yoon-Seok Choi, Ji Young Park, Seong Jae Lim, Bon Woo Hwang, Jeung Chul Park, Kap Kee Kim, Sang-Kyun Kim, Yong-Soo Joo
  • Publication number: 20110046923
    Abstract: An apparatus for compressing low-complexity 3D mesh, includes: a data analyzing unit for decomposing data of an input 3D mesh model into vertices information, property information representing property of the 3D mesh model, and connectivity information between vertices constituting the 3D mesh model; a mesh model quantizing unit for producing quantized vertices, property and connectivity information of the 3D mesh model by using the vertices, property and connectivity information; and a sharable vertex analysis unit for analyzing sharing information between shared vertices of the 3D mesh model. Further, the apparatus includes a data modulation unit for performing a circular DPCM prediction by using quantized values of the consecutive connectivity information of the 3D mesh model; and an entropy encoding unit for outputting coded data of the quantized vertices and property information, and differential pulse-code modulated connectivity information as a bitstream.
    Type: Application
    Filed: April 6, 2009
    Publication date: February 24, 2011
    Applicants: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
    Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
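
The circular DPCM prediction that the last abstract mentions can be illustrated with a small sketch: consecutive quantized values are coded as differences wrapped modulo the quantization range, so every residual stays within [0, levels). The function names and the exact wrapping convention are illustrative assumptions.

```python
# Circular DPCM: encode each quantized value as its wrapped difference from
# the previous value; decoding accumulates the residuals modulo the range.
# Slowly varying sequences produce mostly small residuals, which compress well.

def circular_dpcm_encode(values, levels):
    """Replace each value with its difference from the previous one, mod `levels`."""
    residuals, prev = [], 0
    for v in values:
        residuals.append((v - prev) % levels)
        prev = v
    return residuals

def circular_dpcm_decode(residuals, levels):
    """Invert the encoding by accumulating wrapped residuals."""
    values, prev = [], 0
    for r in residuals:
        prev = (prev + r) % levels
        values.append(prev)
    return values
```

In the patented pipeline, these residuals would feed the entropy encoder alongside the quantized vertex and property data to form the output bitstream.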