Patents by Inventor Gwang Soon Lee
Gwang Soon Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210099687
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
Type: Application
Filed: September 25, 2020
Publication date: April 1, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
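The pruning/patch/atlas pipeline that recurs throughout these immersive-video filings can be illustrated with a minimal sketch. This is not the patented implementation: the data layout (views as dicts of pixel keys) and every name below are hypothetical, chosen only to show how a pruning priority order yields per-view residual patches that are packed into a shared atlas alongside metadata recording that order.

```python
# Minimal sketch (not the claimed method) of pruning-order patch extraction:
# views are visited in a priority order, pixels already covered by a
# higher-priority view are pruned, and the surviving residual pixels are
# packed into one atlas together with metadata naming the pruning order.

def prune_and_pack(views, priority_order):
    """views: dict name -> {pixel_key: value}; priority_order: list of names."""
    covered = set()   # pixel keys already represented by earlier views
    atlas = {}        # packed patches: (view, pixel_key) -> value
    patches = {}      # per-view surviving (residual) pixels
    for name in priority_order:
        residual = {k: v for k, v in views[name].items() if k not in covered}
        patches[name] = residual
        covered.update(residual)
        for k, v in residual.items():
            atlas[(name, k)] = v
    metadata = {"pruning_order": priority_order}
    return atlas, patches, metadata

# Two views sharing pixel "p1": the lower-priority view keeps only "p3".
views = {"base": {"p1": 10, "p2": 20}, "extra": {"p1": 10, "p3": 30}}
atlas, patches, meta = prune_and_pack(views, ["base", "extra"])
```

Because pruning is order-dependent, the decoder needs the same priority order to interpret the patches, which is why the abstract stresses that the order itself is signaled in the metadata.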
-
Publication number: 20210092346
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
Type: Application
Filed: September 23, 2020
Publication date: March 25, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
-
Publication number: 20210067757
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
-
Publication number: 20210006830
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of source view videos into base view videos and additional view videos, generating residual data for the additional view videos, packing a patch, which is generated based on the residual data, into an atlas video, and generating metadata for the patch.
Type: Application
Filed: March 19, 2020
Publication date: January 7, 2021
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Sang Woon KWAK
-
Publication number: 20210006764
Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
Type: Application
Filed: July 6, 2020
Publication date: January 7, 2021
Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Jong Il PARK, Jun Young YUN
-
Publication number: 20210006831
Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
Type: Application
Filed: March 19, 2020
Publication date: January 7, 2021
Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
Inventors: Gwang Soon LEE, Jun Young JEONG, Hong Chang SHIN, Kug Jin YUN, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
-
Publication number: 20200410746
Abstract: A method and an apparatus for generating a three-dimension (3D) virtual viewpoint image including: segmenting a first image into a plurality of images indicating different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by foreground in the plurality of images based on depth information of a reference viewpoint image are provided.
Type: Application
Filed: June 26, 2020
Publication date: December 31, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hong-Chang SHIN, Gwang Soon LEE, Jun Young JEONG
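The two steps this abstract names, depth-based layer segmentation and inpainting of areas occluded by foreground, can be sketched in a few lines. The sketch below is a hedged illustration, not the disclosed apparatus: a 1-D depth list stands in for a depth map, fixed thresholds stand in for the gaze-dependent layering, and "inpainting" is reduced to copying reference-viewpoint pixels into the occluded positions.

```python
# Hedged sketch of the layered-depth idea above (all names illustrative):
# pixels are assigned to layers by how many depth thresholds they exceed,
# and holes behind foreground pixels are filled from a reference view.

def split_into_layers(depth, thresholds):
    """Assign each pixel index to a layer based on its depth value."""
    layers = [[] for _ in range(len(thresholds) + 1)]
    for i, d in enumerate(depth):
        layer = sum(d > t for t in thresholds)  # count thresholds exceeded
        layers[layer].append(i)
    return layers

def inpaint_background(color, foreground_idx, reference):
    """Replace pixels occluded by the foreground with reference-view values."""
    filled = list(color)
    for i in foreground_idx:
        filled[i] = reference[i]
    return filled
```

A real implementation would operate on 2-D texture and depth maps and would pick thresholds from the user's gaze point, but the layer-then-fill control flow is the same.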
-
Publication number: 20200413094
Abstract: Disclosed herein are an image encoding/decoding method and apparatus and a recording medium storing a bitstream.
Type: Application
Filed: June 12, 2020
Publication date: December 31, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
-
Publication number: 20200396485
Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
Type: Application
Filed: June 15, 2020
Publication date: December 17, 2020
Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN, Jun Young YUN, Jong Il PARK
-
Publication number: 20200359000
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
Type: Application
Filed: March 20, 2020
Publication date: November 12, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hong Chang SHIN, Gwang Soon LEE, Sang Woon KWAK, Kug Jin YUN, Jun Young JEONG
-
Publication number: 20200336724
Abstract: Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
Type: Application
Filed: January 31, 2020
Publication date: October 22, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
-
Patent number: 10701396
Abstract: The present invention relates to a video encoding/decoding method and apparatus, and more particularly, to a method and apparatus for generating a reference image for a multiview video. The video encoding method includes, in the presence of a second image having a different view from a first image having a first view, transforming the second image to have the first view, generating a reference image by adding the second image to a side of the first image, and storing the reference image in a reference picture list.
Type: Grant
Filed: November 23, 2016
Date of Patent: June 30, 2020
Assignees: Electronics and Telecommunications Research Institute, UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
Inventors: Gun Bang, Woo Woen Gwun, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Sung Jae Yoon, Young Su Heo, Seok Jong Hong
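The reference-image construction this patent describes, transforming the second image to the first image's view and then appending it to a side of the first image, can be sketched as follows. This is a toy illustration, not the claimed encoder: images are small row-lists, and the hypothetical `toy_view_transform` (a fixed horizontal shift) merely stands in for real view warping, which needs camera parameters and depth.

```python
# Hedged sketch of the reference-image idea above: "warp" the second image
# to the first view (a trivial horizontal shift stands in for depth-based
# warping), then concatenate it to the right side of the first image to form
# one wider reference picture for the reference picture list.

def toy_view_transform(image, disparity=1):
    """Stand-in for view warping: shift each row right by `disparity` pixels."""
    return [[0] * disparity + row[:-disparity] for row in image]

def make_reference_image(first, second):
    """Append the view-transformed second image to the side of the first."""
    warped = toy_view_transform(second)
    return [row_a + row_b for row_a, row_b in zip(first, warped)]

reference_picture_list = [make_reference_image([[1, 2], [3, 4]], [[5, 6], [7, 8]])]
```

The point of the side-by-side layout is that a conventional single-view codec can then use the composite picture as an ordinary entry in its reference picture list.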
-
Patent number: 10681378
Abstract: A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a time correspondence block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and a video view different from the current block; and determining whether neighboring block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
Type: Grant
Filed: August 8, 2018
Date of Patent: June 9, 2020
Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Kyung Yong Kim, Young Su Heo, Gwang Hoon Park, Yoon Jin Lee
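The merge-list construction named in this and several later entries can be shown with a small sketch. It is not the claimed decoder: candidates are simplified to motion-vector tuples, and the spatial/temporal/extended grouping and `max_size` limit are illustrative stand-ins for the codec's actual derivation rules.

```python
# Illustrative sketch of building a merge motion candidate list: spatial
# neighbors and a temporal co-located candidate form the base list, and
# extra candidates (e.g. from a depth map or another view) extend it.
# Duplicates and unavailable (None) candidates are skipped, and the list
# is capped at a maximum size, as merge lists are in real codecs.

def build_merge_list(spatial, temporal, extended, max_size=6):
    merge_list = []
    for cand in spatial + [temporal] + extended:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)  # keep first occurrence only
        if len(merge_list) == max_size:
            break
    return merge_list
```

The pruning of duplicates matters here: an extended candidate identical to a base candidate adds no coding gain, so only distinct motion information occupies the limited list slots.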
-
Publication number: 20190332882
Abstract: Disclosed is an apparatus and method of correcting 3D image distortion. A method of correcting 3D image distortion according to the present disclosure includes: receiving an input image that contains a predetermined first pattern; extracting a characteristic value related to the first pattern from the input image; and updating the input image on the basis of the extracted characteristic value.
Type: Application
Filed: April 15, 2019
Publication date: October 31, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Joon Soo KIM, Gwang Soon LEE
-
Patent number: 10412403
Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
Type: Grant
Filed: August 14, 2018
Date of Patent: September 10, 2019
Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Young Su Heo, Kyung Yong Kim, Yoon Jin Lee
-
Patent number: 10194133
Abstract: The present invention provides a three-dimensional image decoding method comprising the steps of: inserting a first candidate block into a merge candidate list; when view synthesis prediction (VSP) has been used in the first candidate block, generating information indicating that the VSP has been used; and when information indicating that the VSP has been used exists, refraining from inserting the VSP candidate of the current block into the merge candidate list.
Type: Grant
Filed: May 29, 2015
Date of Patent: January 29, 2019
Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun Bang, Gwang Soon Lee, Gwang Hoon Park, Min Seong Lee, Nam Ho Hur, Young Su Heo
-
Publication number: 20180359487
Abstract: The present invention relates to a video encoding/decoding method and apparatus, and more particularly, to a method and apparatus for generating a reference image for a multiview video. The video encoding method includes, in the presence of a second image having a different view from a first image having a first view, transforming the second image to have the first view, generating a reference image by adding the second image to a side of the first image, and storing the reference image in a reference picture list.
Type: Application
Filed: November 23, 2016
Publication date: December 13, 2018
Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun BANG, Woo Woen GWUN, Gwang Soon LEE, Nam Ho HUR, Gwang Hoon PARK, Sung Jae YOON, Young Su HEO, Seok Jong HONG
-
Publication number: 20180359481
Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
Type: Application
Filed: August 14, 2018
Publication date: December 13, 2018
Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun BANG, Gwang Soon LEE, Nam Ho HUR, Gwang Hoon PARK, Young Su HEO, Kyung Yong KIM, Yoon Jin LEE
-
Publication number: 20180352256
Abstract: A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a time correspondence block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and a video view different from the current block; and determining whether neighboring block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
Type: Application
Filed: August 8, 2018
Publication date: December 6, 2018
Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Foundation of Kyung Hee University
Inventors: Gun BANG, Gwang Soon LEE, Nam Ho HUR, Kyung Yong KIM, Young Su HEO, Gwang Hoon PARK, Yoon Jin LEE
-
Patent number: 10080029
Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
Type: Grant
Filed: April 22, 2014
Date of Patent: September 18, 2018
Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, University-Industry Cooperation Group of Kyung Hee University
Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Young Su Heo, Kyung Yong Kim, Yoon Jin Lee