Patents by Inventor Gwang Soon Lee

Gwang Soon Lee has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210099687
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 1, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
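Several of the abstracts above describe the same high-level pipeline: views are pruned in a priority order, each view keeps only content not already covered by higher-priority views, and the surviving patches are packed into an atlas. A minimal, hypothetical sketch of that idea (all names and data structures are illustrative, not the claimed method):

```python
# Hypothetical sketch of the pruning-then-atlas idea from these abstracts.
# Each view is modeled as a set of scene samples it observes; pruning in
# priority order keeps only samples not already covered by earlier views.

def prune_views(views, priority_order):
    """Return per-view residual patches after pruning in priority order.

    views: dict mapping view id -> set of scene samples the view observes.
    priority_order: list of view ids, highest priority first.
    """
    covered = set()            # samples already represented by earlier views
    patches = {}
    for view_id in priority_order:
        residual = views[view_id] - covered   # keep only novel samples
        patches[view_id] = residual
        covered |= residual
    return patches

def pack_atlas(patches):
    """Collect all residual patches into one flat 'atlas' listing."""
    atlas = []
    for view_id, samples in patches.items():
        atlas.extend((view_id, s) for s in sorted(samples))
    return atlas

views = {
    "base": {1, 2, 3, 4},
    "left": {3, 4, 5},
    "right": {4, 5, 6},
}
patches = prune_views(views, ["base", "left", "right"])
print(patches["left"])          # only samples the base view does not cover
print(len(pack_atlas(patches)))
```

In this toy model the base view survives pruning intact, while the additional views shrink to their residual content, which is the size reduction the atlas-based approach targets.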
  • Publication number: 20210092346
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
  • Publication number: 20210067757
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210006830
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of source view videos into base view videos and additional view videos, generating residual data for the additional view videos, packing a patch, which is generated based on the residual data, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Sang Woon KWAK
  • Publication number: 20210006764
    Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 7, 2021
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Jong Il PARK, Jun Young YUN
  • Publication number: 20210006831
    Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
    Inventors: Gwang Soon LEE, Jun Young JEONG, Hong Chang SHIN, Kug Jin YUN, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
  • Publication number: 20200410746
    Abstract: A method and an apparatus for generating a three-dimension (3D) virtual viewpoint image including: segmenting a first image into a plurality of images indicating different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by foreground in the plurality of images based on depth information of a reference viewpoint image are provided.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 31, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang SHIN, Gwang Soon LEE, Jun Young JEONG
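The segmentation step in the abstract above, splitting a view into depth layers before inpainting occluded areas, can be sketched as a simple thresholding pass. A hypothetical illustration (layer boundaries and depth values are invented for the example):

```python
# Hypothetical sketch of depth-layer segmentation: each pixel is assigned
# to a layer index by comparing its depth against ascending boundaries.
# Thresholds and data are illustrative only, not the patented method.

def segment_by_depth(depth_map, boundaries):
    """Assign each pixel to a layer index based on its depth value.

    depth_map: 2D list of depth values (smaller = closer to the viewer).
    boundaries: ascending depth values separating consecutive layers.
    """
    layers = []
    for row in depth_map:
        layer_row = []
        for d in row:
            # count how many boundaries this depth value lies beyond
            layer = sum(1 for b in boundaries if d >= b)
            layer_row.append(layer)
        layers.append(layer_row)
    return layers

depth = [
    [0.2, 0.2, 0.9],
    [0.5, 0.9, 0.9],
]
# layer 0: foreground, layer 1: midground, layer 2: background
print(segment_by_depth(depth, [0.4, 0.8]))
```

Once pixels are grouped by layer like this, regions of a far layer hidden behind a near layer become the occluded areas that the abstract's inpainting step would fill.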
  • Publication number: 20200413094
    Abstract: Disclosed herein are an image encoding/decoding method and apparatus and a recording medium storing a bitstream.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 31, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
  • Publication number: 20200396485
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Application
    Filed: June 15, 2020
    Publication date: December 17, 2020
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN, Jun Young YUN, Jong Il PARK
  • Publication number: 20200359000
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 20, 2020
    Publication date: November 12, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Sang Woon KWAK, Kug Jin YUN, Jun Young JEONG
  • Publication number: 20200336724
    Abstract: Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
    Type: Application
    Filed: January 31, 2020
    Publication date: October 22, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
  • Patent number: 10701396
    Abstract: The present invention relates to a video encoding/decoding method and apparatus, and more particularly, to a method and apparatus for generating a reference image for a multiview video. The video encoding method includes, in the presence of a second image having a different view from a first image having a first view, transforming the second image to have the first view, generating a reference image by adding the second image to a side of the first image, and storing the reference image in a reference picture list.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: June 30, 2020
    Assignees: Electronics and Telecommunications Research Institute, UNIVERSITY-INDUSTRY COOPERATION GROUP OF KYUNG HEE UNIVERSITY
    Inventors: Gun Bang, Woo Woen Gwun, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Sung Jae Yoon, Young Su Heo, Seok Jong Hong
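The reference-image construction in this abstract, warping a second view to the first view and appending it beside the first-view image, can be illustrated with a toy composition step. A hypothetical sketch (the view transformation itself needs camera geometry, so a pre-warped image is assumed here):

```python
# Hypothetical sketch of the inter-view reference image idea: a second-view
# image, already warped to the first view, is placed beside the first-view
# image to form one wider reference picture. Data is illustrative only.

def make_reference_image(first, second_warped):
    """Place the warped second view to the right of the first view.

    Both images are 2D lists of equal height; rows are concatenated
    side by side to form one double-width reference image.
    """
    return [row_a + row_b for row_a, row_b in zip(first, second_warped)]

first = [[1, 2], [3, 4]]
second_warped = [[5, 6], [7, 8]]
ref = make_reference_image(first, second_warped)
print(ref)   # each row is twice as wide

reference_picture_list = [ref]   # stored for later inter prediction
```

The design point the abstract suggests is that a single widened picture can sit in an ordinary reference picture list, so existing single-view prediction machinery can reference inter-view content.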
  • Patent number: 10681378
    Abstract: A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a time correspondence block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and a video view different from the current block; and determining whether neighboring block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: June 9, 2020
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Kyung Yong Kim, Young Su Heo, Gwang Hoon Park, Yoon Jin Lee
  • Publication number: 20190332882
    Abstract: Disclosed is an apparatus and method of correcting 3D image distortion. A method of correcting 3D image distortion according to the present disclosure includes: receiving an input image that contains a predetermined first pattern; extracting a characteristic value related to the first pattern from the input image; and updating the input image on the basis of the extracted characteristic value.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 31, 2019
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Joon Soo KIM, Gwang Soon LEE
  • Patent number: 10412403
    Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: September 10, 2019
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Young Su Heo, Kyung Yong Kim, Yoon Jin Lee
  • Patent number: 10194133
    Abstract: The present invention provides a three-dimensional image decoding method comprising the steps of: inserting a first candidate block into a merge candidate list; when view synthesis prediction (VSP) has been used in the first candidate block, generating information indicating that the VSP has been used; and when information indicating that the VSP has been used exists, refraining from inserting the VSP candidate of the current block into the merge candidate list.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: January 29, 2019
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group Of Kyung Hee University
    Inventors: Gun Bang, Gwang Soon Lee, Gwang Hoon Park, Min Seong Lee, Nam Ho Hur, Young Su Heo
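The rule in this abstract, skip the current block's own view synthesis prediction (VSP) candidate when an already-inserted candidate was VSP-coded, can be sketched directly. A hypothetical illustration (the candidate structures are invented; real 3D video codecs derive candidates from neighboring blocks):

```python
# Hypothetical sketch of the VSP merge-candidate rule: if any inserted
# candidate was coded with view synthesis prediction, the current block's
# own VSP candidate is not added again. Names are illustrative only.

def build_merge_list(spatial_candidates, max_size=5):
    """spatial_candidates: list of (name, used_vsp) tuples, in scan order."""
    merge_list = []
    vsp_already_used = False
    for name, used_vsp in spatial_candidates:
        if len(merge_list) >= max_size:
            break
        merge_list.append(name)
        if used_vsp:
            vsp_already_used = True   # remember a VSP-coded candidate exists
    if not vsp_already_used and len(merge_list) < max_size:
        merge_list.append("VSP")      # current block's own VSP candidate
    return merge_list

print(build_merge_list([("A1", False), ("B1", True)]))    # no extra VSP entry
print(build_merge_list([("A1", False), ("B1", False)]))   # VSP entry appended
```

The effect is to avoid a redundant VSP candidate in the merge list, which keeps the list's limited slots available for distinct prediction hypotheses.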
  • Publication number: 20180359487
    Abstract: The present invention relates to a video encoding/decoding method and apparatus, and more particularly, to a method and apparatus for generating a reference image for a multiview video. The video encoding method includes, in the presence of a second image having a different view from a first image having a first view, transforming the second image to have the first view, generating a reference image by adding the second image to a side of the first image, and storing the reference image in a reference picture list.
    Type: Application
    Filed: November 23, 2016
    Publication date: December 13, 2018
    Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Gun BANG, Woo Woen GWUN, Gwang Soon LEE, Nam Ho HUR, Gwang Hoon PARK, Sung Jae YOON, Young Su HEO, Seok Jong HONG
  • Publication number: 20180359481
    Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
    Type: Application
    Filed: August 14, 2018
    Publication date: December 13, 2018
    Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Gun BANG, Gwang Soon LEE, Nam Ho HUR, Gwang Hoon PARK, Young Su HEO, Kyung Yong KIM, Yoon Jin LEE
  • Publication number: 20180352256
    Abstract: A method for decoding a video including a plurality of views, according to one embodiment of the present invention, comprises the steps of: configuring a base merge motion candidate list by using motion information of neighboring blocks and a time correspondence block of a current block; configuring an extended merge motion information list by using motion information of a depth information map and a video view different from the current block; and determining whether neighboring block motion information contained in the base merge motion candidate list is derived through view synthesis prediction.
    Type: Application
    Filed: August 8, 2018
    Publication date: December 6, 2018
    Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Foundation of Kyung Hee University
    Inventors: Gun BANG, Gwang Soon LEE, Nam Ho HUR, Kyung Yong KIM, Young Su HEO, Gwang Hoon PARK, Yoon Jin LEE
  • Patent number: 10080029
    Abstract: Disclosed are a video encoding/decoding method and apparatus including a plurality of views. The video decoding method including the plurality of views comprises the steps of: inducing basic combination motion candidates for a current Prediction Unit (PU) to configure a combination motion candidate list; inducing expanded combination motion candidates for the current PU when the current PU corresponds to a depth information map or a dependent view; and adding the expanded combination motion candidates to the combination motion candidate list.
    Type: Grant
    Filed: April 22, 2014
    Date of Patent: September 18, 2018
    Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Gun Bang, Gwang Soon Lee, Nam Ho Hur, Gwang Hoon Park, Young Su Heo, Kyung Yong Kim, Yoon Jin Lee