Patents by Inventor Jeung Chul Park
Jeung Chul Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240324925
Abstract: Disclosed herein is an apparatus for analyzing the efficiency of virtual task performance of a user interacting with eXtended Reality (XR). The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program may generate user interaction feature information from sensor information of a virtual reality (VR) device, calculate the quality of experience of the user as the values of multiple experience indices by applying a machine-learning model to the feature information, and evaluate the experience based on the result of mapping the values of the multiple experience indices to generated metrics, in order to analyze the effectiveness of the user's VR experience.
Type: Application
Filed: March 29, 2024
Publication date: October 3, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Wook-Ho SON, Jeung-Chul PARK, Beom-Ryeol LEE, Yong-Ho LEE
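The abstract describes a three-stage pipeline: extract interaction features from VR sensor data, infer experience-index values with a machine-learning model, and map those indices to evaluation metrics. The following is a minimal sketch of such a pipeline; the feature names, the stand-in linear model, and the metric thresholds are illustrative assumptions, not the patented method.

```python
import numpy as np

def extract_features(sensor_log):
    """Summarize raw VR sensor samples into fixed features (illustrative choices)."""
    head_speed = np.linalg.norm(np.diff(sensor_log["head_pos"], axis=0), axis=1)
    return np.array([
        head_speed.mean(),            # average head movement speed
        sensor_log["task_time"],      # time taken for the virtual task
        sensor_log["input_errors"],   # miss-clicks / failed grabs
    ])

def predict_experience_indices(features, weights, bias):
    """Stand-in for the trained model: a linear map from features to experience indices."""
    return weights @ features + bias

def evaluate_experience(indices, thresholds):
    """Map each experience-index value onto a simple pass/fail metric."""
    return {name: ("good" if value >= thr else "poor")
            for (name, thr), value in zip(thresholds.items(), indices)}

if __name__ == "__main__":
    log = {"head_pos": np.random.rand(100, 3), "task_time": 42.0, "input_errors": 3}
    feats = extract_features(log)
    weights = np.random.rand(2, 3)    # placeholder weights for 2 indices, 3 features
    indices = predict_experience_indices(feats, weights, bias=np.zeros(2))
    print(evaluate_experience(indices, {"presence": 5.0, "usability": 5.0}))
```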
-
Publication number: 20240242407
Abstract: Disclosed herein are a text recognition method and apparatus based on hand interaction for AR glasses. The text recognition method based on hand interaction for AR glasses includes collecting RGB images, extracting hand joint information from the RGB images, generating a text image based on the hand joint information, recognizing text from the text image, and outputting the recognized text.
Type: Application
Filed: September 13, 2023
Publication date: July 18, 2024
Inventors: Jong-Bae LEE, Jeung-Chul PARK, Wook-Ho SON, Beom-Ryeol LEE, Yong-Ho LEE
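The stated pipeline (RGB capture, hand-joint extraction, text-image generation, text recognition) can be sketched as below. The joint extractor and OCR backend are hypothetical placeholders that a real system would replace with a hand-pose model and a text recognizer; rasterizing the index-fingertip trajectory into the text image is my assumption about how the text image could be formed.

```python
import numpy as np

def extract_index_fingertip(rgb_frame):
    """Hypothetical hand-joint extractor returning normalized (x, y) fingertip position."""
    raise NotImplementedError

def rasterize_trajectory(points, size=(128, 512)):
    """Draw normalized (x, y) fingertip points into a binary text image."""
    img = np.zeros(size, dtype=np.uint8)
    for x, y in points:
        row = int(np.clip(y, 0, 1) * (size[0] - 1))
        col = int(np.clip(x, 0, 1) * (size[1] - 1))
        img[row, col] = 255
    return img

def recognize_text(text_image):
    """Hypothetical OCR backend (e.g. a CRNN or an off-the-shelf engine)."""
    raise NotImplementedError

def air_writing_to_text(frames):
    points = [extract_index_fingertip(f) for f in frames]   # hand joint information
    return recognize_text(rasterize_trajectory(points))      # recognized text output
```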
-
Patent number: 11876865
Abstract: The present disclosure relates to a method of and an apparatus for generating a space for sharing augmented reality content among multiple participants. With the method and the apparatus, an object for configuring a virtual space, which is proposed in an actual space, can also be proposed, in a spatial matching manner on a per-user basis, inside a common sharing space in augmented reality content by utilizing the technology of generating a space for sharing augmented reality content among multiple participants.
Type: Grant
Filed: September 27, 2022
Date of Patent: January 16, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Beom Ryeol Lee, Jeung Chul Park, Wook Ho Son, Yong Ho Lee
-
Publication number: 20230237741
Abstract: Provided are a method and device for outputting a large-capacity 3D model for an augmented reality (AR) device. The method includes generating a multi-texture and a 3D mesh based on a multi-view image, generating a 3D model using the multi-texture and the 3D mesh, and transmitting, to the AR device, an image of the 3D model in the view to which the camera of the AR device is directed, according to the camera movement and rotation information of the AR device; the AR device then outputs the image in the view to which its camera is directed.
Type: Application
Filed: October 26, 2022
Publication date: July 27, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeung Chul PARK, Wookho SON, Beom Ryeol LEE, Yongho LEE
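In other words, the heavy 3D model stays on a server, which renders the view matching the pose reported by the AR device and streams only the resulting image. A minimal sketch of that loop follows; the renderer and transport are hypothetical placeholders, and the view-matrix construction from the reported rotation and translation is standard practice rather than anything taken from the application.

```python
import numpy as np

def view_matrix(rotation, position):
    """Build a 4x4 view matrix from the device camera's world rotation (3x3) and position (3,)."""
    view = np.eye(4)
    view[:3, :3] = rotation.T              # inverse of a rotation matrix is its transpose
    view[:3, 3] = -rotation.T @ position   # translate world coordinates into camera coordinates
    return view

def render_model_view(model, view):
    """Hypothetical renderer: rasterizes the large mesh/multi-texture model for the given view."""
    raise NotImplementedError

def serve_ar_client(model, pose_stream, send_image):
    """For each camera pose update from the AR device, render and send the matching image."""
    for rotation, position in pose_stream:
        image = render_model_view(model, view_matrix(rotation, position))
        send_image(image)                  # the AR device simply displays this image
```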
-
Patent number: 9961330
Abstract: Provided is a method of generating multi-view immersive content. The method includes obtaining a multi-view background image from a plurality of cameras arranged in a curved shape, modeling the obtained multi-view background image to generate a codebook corresponding to the multi-view background image, obtaining a multi-view image including an object from the plurality of cameras and separating a foreground and a background from the obtained multi-view image by using the generated codebook, and synthesizing the object included in the separated foreground with a virtual background to generate multi-view immersive content.
Type: Grant
Filed: March 2, 2016
Date of Patent: May 1, 2018
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeung Chul Park, Hyung Jae Son, Beom Ryeol Lee, Il Kwon Jeong
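Codebook background modeling of the kind referenced here builds, for each pixel, a set of codewords from the background frames and later labels a pixel as foreground when its color matches no codeword. A simplified per-pixel sketch is below; the single color-distance threshold stands in for the fuller brightness and color tests of a real codebook model and is an assumption, not the patented procedure.

```python
import numpy as np

def build_codebook(background_frames, tol=15.0):
    """Collect, per pixel, the distinct background colors seen during training (the codewords)."""
    h, w, _ = background_frames[0].shape
    codebook = [[[] for _ in range(w)] for _ in range(h)]
    for frame in background_frames:
        for y in range(h):
            for x in range(w):
                color = frame[y, x].astype(float)
                words = codebook[y][x]
                if not any(np.linalg.norm(color - cw) < tol for cw in words):
                    words.append(color)
    return codebook

def separate_foreground(frame, codebook, tol=15.0):
    """Mark a pixel as foreground (True) when it matches none of its codewords."""
    h, w, _ = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            color = frame[y, x].astype(float)
            mask[y, x] = not any(np.linalg.norm(color - cw) < tol for cw in codebook[y][x])
    return mask
```

The foreground mask produced this way is what would then be composited over a virtual background per view.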
-
Publication number: 20170180717
Abstract: Disclosed herein are a bilateral distance automatic control display apparatus that provides super multiview images, and a method therefor, which operate regardless of differences in the bilateral distance of the viewer when displaying the super multiview images; the apparatus may include a projection module and a Fresnel lens. The present invention has been made in an effort to provide such an apparatus and method, with the advantage of visually providing higher-quality super multiview images to the viewer's eyes by widening the observable viewing zone of the super multiview images in the left-right direction when the super multiview images are provided to the viewer on a super multiview display, thereby providing an environment in which the viewer can conveniently experience the characteristics of the super multiview images.
Type: Application
Filed: December 21, 2016
Publication date: June 22, 2017
Inventors: Beom Ryeol LEE, Jin Ryong KIM, Jeung Chul PARK, Il Kwon JEONG, Gi Su HEO
-
Patent number: 9619936
Abstract: A method and apparatus for quickly generating a natural appearing terrain image. The method according to an exemplary embodiment may include generating a new terrain image through a patch-based synthesis from one or more virtual noise-based terrain models and one or more realistic terrain models; and processing blending of a boundary between synthesized terrain images in the newly generated terrain image.
Type: Grant
Filed: January 12, 2015
Date of Patent: April 11, 2017
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jeung Chul Park, Il Kwon Jeong
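Patch-based synthesis with boundary blending can be illustrated on heightmaps: copy patches from the noise-based and realistic exemplars into a new terrain and feather each pasted patch over its seam. The sketch below assumes single-channel heightmaps larger than the patch size; the alternating source choice and linear feathering are simplifications of mine, not the patented blending.

```python
import numpy as np

def synthesize_terrain(noise_terrain, real_terrain, patch=32, blend=8, rng=None):
    """Tile a new heightmap from patches taken from a noise-based model and a real terrain
    model, linearly blending each pasted patch over its boundary region."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = noise_terrain.shape
    out = noise_terrain.copy()
    ramp = np.linspace(0.0, 1.0, blend)
    for y in range(0, h - patch + 1, patch - blend):
        for x in range(0, w - patch + 1, patch - blend):
            src = real_terrain if rng.random() < 0.5 else noise_terrain
            sy = rng.integers(0, src.shape[0] - patch + 1)
            sx = rng.integers(0, src.shape[1] - patch + 1)
            piece = src[sy:sy + patch, sx:sx + patch]
            alpha = np.ones((patch, patch))
            alpha[:blend, :] *= ramp[:, None]   # feather the top edge of the patch
            alpha[:, :blend] *= ramp[None, :]   # feather the left edge of the patch
            region = out[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] = alpha * piece + (1 - alpha) * region
    return out
```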
-
Publication number: 20160261855
Abstract: Provided is a method of generating multi-view immersive content. The method includes obtaining a multi-view background image from a plurality of cameras arranged in a curved shape, modeling the obtained multi-view background image to generate a codebook corresponding to the multi-view background image, obtaining a multi-view image including an object from the plurality of cameras and separating a foreground and a background from the obtained multi-view image by using the generated codebook, and synthesizing the object included in the separated foreground with a virtual background to generate multi-view immersive content.
Type: Application
Filed: March 2, 2016
Publication date: September 8, 2016
Inventors: Jeung Chul PARK, Hyung Jae SON, Beom Ryeol LEE, Il Kwon JEONG
-
Publication number: 20160150224
Abstract: A super multi-viewpoint image generating device includes a host system, a storage unit, first and second liquid crystal display (LCD) projection lamps, a synchronization unit, first and second LCD shutters, first and second projection objectives, and an image light combination unit. According to the present invention, a super multi-viewpoint image in which interference between adjacent viewpoint images is removed by viewing-zone-based synchronized shutter timing may be generated, thereby providing an environment for generating a realistic super multi-viewpoint image, establishing a method of generating a viewing-zone-based super multi-viewpoint image, and displaying a super multi-viewpoint image.
Type: Application
Filed: November 12, 2015
Publication date: May 26, 2016
Inventors: Beom Ryeol LEE, Jeung Chul PARK, Hyung Jae SON, Il Kwon JEONG
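The core of viewing-zone-based synchronized shutter timing is that each viewpoint image gets its own time slot while only the shutter for that zone is open, so adjacent viewpoint images never overlap in time. The small timing sketch below uses arbitrary example values for viewpoint count and refresh rate; it is not the device's actual drive scheme.

```python
def shutter_schedule(num_viewpoints, display_hz):
    """Return (viewpoint index, open time, close time) slots within one display cycle,
    so that no two viewpoint images are shown (and no two shutters are open) at once."""
    slot = 1.0 / (display_hz * num_viewpoints)   # seconds each shutter stays open per cycle
    return [(v, v * slot, (v + 1) * slot) for v in range(num_viewpoints)]

# Example: 8 viewpoints on a 60 Hz cycle -> each shutter opens for about 2.08 ms per cycle.
for viewpoint, t_open, t_close in shutter_schedule(8, 60):
    print(f"viewpoint {viewpoint}: open {t_open * 1e3:.2f} ms, close {t_close * 1e3:.2f} ms")
```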
-
Publication number: 20150281681
Abstract: There are provided a device for projecting a super multi-view image that provides an image of two views to a user's pupil and a method thereof. A device for projecting a super multi-view image according to an embodiment of the present invention includes an operating unit configured to receive super multi-view image content and transmit the received super multi-view image content and a driving signal, and a control unit driven by the driving signal received from the operating unit and configured to divide the received super multi-view image content into a plurality of single-view images, load the divided single-view images in a high speed image display device, and transmit an open command signal to an active shutter corresponding to the single-view image loaded in the high speed image display device among an active shutter array.
Type: Application
Filed: February 27, 2015
Publication date: October 1, 2015
Inventors: Beom Ryeol LEE, Jeung Chul PARK, Il Kwon JEONG
-
Publication number: 20150262392
Abstract: A method and apparatus for quickly generating a natural terrain. The method according to an exemplary embodiment may include generating a new terrain through a patch-based synthesis from one or more virtual noise-based terrain models and one or more realistic terrain models; and processing blending of a boundary between synthesized terrains in the newly generated terrain.
Type: Application
Filed: January 12, 2015
Publication date: September 17, 2015
Inventors: Jeung Chul PARK, Il Kwon JEONG
-
Publication number: 20130201188
Abstract: Disclosed are an apparatus and method for generating a pre-visualization image that support simulating interactions among digital actor motion, the virtual space, and virtual shooting device motion in an actual space, and previewing the resulting image by using a virtual camera and a virtual space including a 3D digital actor during image production. Thus, according to the present invention, more effective image production can be supported.
Type: Application
Filed: August 14, 2012
Publication date: August 8, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Yoon Seok CHOI, Do Hyung KIM, Jeung Chul PARK, Ji Hyung LEE, Bon Ki KOO
-
Patent number: 8472700
Abstract: A method for creating a 3D face model by using multi-view image information includes: creating a mesh structure for expressing an appearance of a 3D face model by using a first multi-view image obtained by capturing an expressionless face of a performer; and locating joints of a hierarchical structure in the mesh structure, by using a second multi-view image obtained by capturing an expression performance of the performer. Further, the method includes creating a 3D face model that is animated to enable reproduction of a natural expression of the performer, by setting dependency between the joints and the mesh structure.
Type: Grant
Filed: December 15, 2008
Date of Patent: June 25, 2013
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ho Won Kim, Bon Ki Koo, Chang Woo Chu, Seong Jae Lim, Jeung Chul Park, Ji Young Park
-
Patent number: 8462149
Abstract: An apparatus for 3D mesh compression based on quantization includes a data analyzing unit (510) for decomposing data of an input 3D mesh model into vertex information (511), property information (512) representing properties of the 3D mesh model, and connectivity information (513) between the vertices constituting the 3D mesh model; and a mesh model quantizing unit (520) for producing quantized vertex and property information of the 3D mesh model by using the vertex, property, and connectivity information (511, 512, 513). Further, the apparatus includes a decision bit encoding unit (535) for calculating a decision bit by using the quantized connectivity information and then encoding the quantized vertex information, property information, and connectivity information (511, 512, 513) by using the decision bit.
Type: Grant
Filed: April 16, 2009
Date of Patent: June 11, 2013
Assignees: Electronics and Telecommunications Research Institute, Industry-University Cooperation Foundation Hanyang University
Inventors: Seung Wook Lee, Bon Ki Koo, Jin Seo Kim, Young Jik Lee, Ji Hyung Lee, Ho Won Kim, Chang Woo Chu, Bon Woo Hwang, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Il Kyu Park, Yoon-Seok Choi, Kap Kee Kim, Euee Seon Jang, Daiyong Kim, Byoungjun Kim, Jaebum Jun, Giseok Son, Kyoung Soo Son
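Uniform vertex quantization of the kind underlying such a coder maps each floating-point coordinate onto an integer grid whose resolution is set by a bit budget. The minimal sketch below covers only the quantize/dequantize step, not the decision-bit entropy coding described in the abstract, and the 12-bit budget is an arbitrary example.

```python
import numpy as np

def quantize_vertices(vertices, bits=12):
    """Map float vertex coordinates onto a (2**bits - 1) integer grid per axis."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = (2 ** bits - 1) / np.maximum(vmax - vmin, 1e-12)
    q = np.round((vertices - vmin) * scale).astype(np.uint32)
    return q, vmin, scale

def dequantize_vertices(q, vmin, scale):
    """Recover approximate float coordinates from the quantized integers."""
    return q.astype(float) / scale + vmin

verts = np.random.rand(100, 3) * 10.0
q, vmin, scale = quantize_vertices(verts, bits=12)
# Reconstruction error is bounded by half a quantization step per axis.
assert np.abs(dequantize_vertices(q, vmin, scale) - verts).max() < 10.0 / (2 ** 12 - 1)
```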
-
Patent number: 8259102
Abstract: A method for producing a 3D facial animation using a single facial video stream includes producing a representative joint focused on a major expression producing element in a 3D standard facial model, producing a statistical feature-point model of various facial expressions of different people in the 3D standard facial model, moving each feature-point of the statistical feature-point model by tracking a change in facial expressions of the video stream, calculating a transformation coefficient of the representative joint corresponding to a variation of the feature-point of the 3D standard facial model, and producing a 3D facial animation by applying the calculated transformation coefficient to transform the representative joint.
Type: Grant
Filed: December 16, 2008
Date of Patent: September 4, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Seong Jae Lim, Jeung Chul Park, Chang Woo Chu, Ho Won Kim, Ji Young Park, Bon Ki Koo
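The step of calculating joint transformation coefficients from feature-point variations can be posed as a linear least-squares problem if one assumes the feature points respond linearly to the representative joints. That linear-basis assumption is mine; the sketch only illustrates the fitting step, not the patent's actual mapping.

```python
import numpy as np

def solve_joint_coefficients(basis, feature_point_delta):
    """Given a basis (num_feature_coords x num_joints) whose columns are the feature-point
    displacements produced by a unit activation of each representative joint, recover the
    joint coefficients that best explain an observed feature-point displacement."""
    coeffs, *_ = np.linalg.lstsq(basis, feature_point_delta, rcond=None)
    return coeffs

# Toy example: 20 tracked feature points (60 coordinates), 5 representative joints.
rng = np.random.default_rng(0)
basis = rng.normal(size=(60, 5))
true_coeffs = np.array([0.5, -0.2, 0.0, 1.0, 0.3])
observed_delta = basis @ true_coeffs
print(solve_joint_coefficients(basis, observed_delta))   # approximately recovers true_coeffs
```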
-
Patent number: 8238648
Abstract: Provided is a camera self-calibration method that calculates the focal length of a fixed zoom lens camera from correspondence point positions between images. In the method, a cost function of the focal length is defined, and the focal length that minimizes this cost function is obtained, so that the 3D recovery results of correspondence points calculated from all image pairs coincide with one another. Therefore, the reliability of the calculated focal length can be easily verified, and the focal length of the camera can be stably calculated even when the positions of the input correspondence points are given inaccurately.
Type: Grant
Filed: November 27, 2007
Date of Patent: August 7, 2012
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jae Chul Kim, Chang Woo Chu, Ho Won Kim, Jeung Chul Park, Ji Young Park, Seong Jae Lim, Bon Ki Koo
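The abstract reduces self-calibration to a one-dimensional minimization over the focal length. The sketch below shows only that scaffolding: the pairwise-reconstruction consistency cost is left as a hypothetical placeholder, and the grid search stands in for whatever optimizer the patent actually uses.

```python
import numpy as np

def reconstruction_inconsistency(focal_length, correspondences):
    """Hypothetical cost: how much the 3D points reconstructed from different image pairs
    disagree with one another for this focal length (smaller means more consistent)."""
    raise NotImplementedError

def calibrate_focal_length(correspondences, f_min=300.0, f_max=3000.0, steps=200):
    """Grid-search the focal length (in pixels) that minimizes the inconsistency cost."""
    candidates = np.linspace(f_min, f_max, steps)
    costs = [reconstruction_inconsistency(f, correspondences) for f in candidates]
    best = int(np.argmin(costs))
    return candidates[best], costs[best]   # returning the cost lets reliability be checked
```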
-
Publication number: 20120154393
Abstract: Disclosed herein are an apparatus and method for creating animation by capturing the motions of a non-rigid object. The apparatus includes a geometry mesh reconstruction unit, a motion capture unit, and a content creation unit. The geometry mesh reconstruction unit receives moving images captured by a plurality of cameras, and generates a reconstruction mesh set for each frame. The motion capture unit generates mesh graph sets for the reconstruction mesh set and generates motion data, including motion information, using the mesh graph sets. The content creation unit creates three-dimensional (3D) content for a non-rigid object by generating a final transformation mesh set, having a topology similar to that of the reconstruction mesh set, using the motion data.
Type: Application
Filed: December 21, 2011
Publication date: June 21, 2012
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ji-Hyung LEE, Bon-Ki Koo, Yoon-Seok Choi, Jeung-Chul Park, Do-Hyung Kim, Bon-Woo Hwang, Kap-Kee Kim, Seong-Jae Lim, Han-Byul Joo, Seung-Uk Yoon
-
Publication number: 20110148874
Abstract: Disclosed herein is a method and system for transforming the muscles of a character model. The muscles of a target model are created using the muscle information of a reference model. The system includes a reference model processor and a target model processor. The reference model processor creates a reference feature volume, that is, a 3D geometric shape, based on the skeleton and appearance information of the reference model, and subordinates the muscle information of the reference model to the feature volume. The target model processor deforms the reference feature volume to be suitable for the target model, and applies the muscle information of the reference model to the target model, thereby creating muscles for the target model based on the extent of the deformation of the reference feature volume.
Type: Application
Filed: May 19, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Young Mi CHA, Il Kyu Park, Jeung Chul Park, Ji Hyung Lee, Bon Ki Koo
-
Publication number: 20110148864
Abstract: Disclosed herein is a method and apparatus for creating a 3D avatar. The method of creating a three dimensional (3D) avatar includes receiving body information of a user and storing the body information in a DataBase (DB), and creating a 3D avatar for the user by modifying standard data, predetermined based on body information about various persons and stored in the DB, based on the body information of the user.
Type: Application
Filed: December 10, 2010
Publication date: June 23, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Ji-Hyung LEE, Yoon-Seok Choi, Do-Hyung Kim, Il-Kyu Park, Young-Mi Cha, Jeung-Chul Park, Bon-Ki Koo
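One simple way to realize "modifying standard data based on the user's body information" is to look up the closest standard record and scale a standard template mesh by the measurement ratios. The sketch below does exactly that; the choice of measurements, the nearest-record lookup, and the per-axis scaling are illustrative assumptions rather than the patented modification method.

```python
import numpy as np

def create_avatar(user_body, standard_db, template_vertices):
    """Pick the standard record closest to the user's measurements and scale the standard
    template mesh (N x 3 vertices) by the per-measurement ratios (illustrative mapping)."""
    keys = ("height", "chest", "waist")
    user = np.array([user_body[k] for k in keys], dtype=float)
    records = np.array([[rec[k] for k in keys] for rec in standard_db], dtype=float)
    nearest = records[np.argmin(np.linalg.norm(records - user, axis=1))]
    scale = user / nearest            # one scale factor per measurement / template axis
    return template_vertices * scale  # broadcast the three factors over all vertices

db = [{"height": 170, "chest": 95, "waist": 80}, {"height": 185, "chest": 105, "waist": 90}]
avatar = create_avatar({"height": 178, "chest": 100, "waist": 85}, db, np.random.rand(500, 3))
```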
-
Publication number: 20110142435
Abstract: Provided are an apparatus and method for estimating reflectance and diffuse components as optical properties of skin, in order to render human skin accurately. The apparatus includes a light source device equipped with a plurality of light sources whose directions toward an object are controlled, a control unit controlling the sequential switching of the light sources, and a photographing unit photographing images of the object. The photographing unit is a DSLR camera providing a video capture function. All of the light sources are controlled to be switched sequentially within one second, corresponding to the number of frames the photographing unit captures per second. The control unit and the photographing unit are controlled by a computer, which repeats the operations of requesting the control unit to perform a lighting operation, transmitting an image acquisition command to the photographing unit, and requesting the control unit to perform the next lighting operation.
Type: Application
Filed: June 4, 2010
Publication date: June 16, 2011
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jeung Chul PARK, Bon Ki KOO
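The described acquisition loop (switch a light on, grab a frame, advance to the next light) can be sketched as follows. The light-controller and camera interfaces are hypothetical stand-ins for the actual hardware APIs, and the per-pixel minimum at the end is a common heuristic for a diffuse-like image, not the patented reflectance estimator.

```python
import numpy as np

def capture_light_stage_sequence(light_controller, camera, num_lights):
    """Switch each light on in turn, grab a frame, then move to the next light,
    mirroring the repeat cycle of lighting and image-acquisition commands."""
    frames = []
    for light in range(num_lights):
        light_controller.switch_on(light)      # request a lighting operation
        frames.append(camera.grab_frame())     # image acquisition command
        light_controller.switch_off(light)     # prepare for the next lighting operation
    return np.stack(frames)                    # shape: (num_lights, H, W, 3)

def crude_diffuse_estimate(frames):
    """Heuristic only: the per-pixel minimum over all lighting directions suppresses
    specular highlights, leaving an approximately diffuse image."""
    return frames.min(axis=0)
```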