Patents by Inventor Gwang Soon Lee

Gwang Soon Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11457199
Abstract: An immersive video processing method according to the present invention includes: classifying view videos into a base video and an additional video; performing pruning for the view videos by referring to a result of the classification; generating an atlas based on a result of the pruning; determining a depth parameter of each view in the atlas; and encoding information indicating whether or not updating of the depth parameter is needed, based on whether or not the depth parameter of each view in the atlas is identical to that in a previous atlas.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: September 27, 2022
    Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
    Inventors: Gwang Soon Lee, Jun Young Jeong, Dawid Mieloch, Adrian Dziembowski, Marek Domanski
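The per-view depth-parameter update signalling described in the abstract above can be sketched roughly as follows. This is an illustrative assumption of how such a flag might be derived, not the patent's actual syntax; the function name and the parameter layout (a near/far depth pair per view) are invented for the example:

```python
def depth_params_update_flags(current, previous):
    """For each view in the current atlas, flag whether its depth
    parameters (here assumed to be a (near, far) depth range) differ
    from the previous atlas. Views whose parameters are unchanged
    get flag 0, so their parameters need not be re-encoded."""
    return {view: int(params != previous.get(view))
            for view, params in current.items()}
```

A decoder reading flag 0 for a view would simply reuse that view's depth parameters from the previous atlas.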
  • Publication number: 20220215566
Abstract: Disclosed herein is a method for piecewise linear scaling of a geometry atlas, the method including generating min-max normalized depth values and generating geometry atlases by scaling the depth values according to the gradients of multiple linear intervals.
    Type: Application
    Filed: January 6, 2022
    Publication date: July 7, 2022
    Inventors: Kwan-Jung OH, Gwang-Soon LEE, Do-Hyeon PARK, Jae-Gon KIM, Sung-Gyun LIM
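The two steps named in the abstract above — min-max normalization followed by piecewise linear scaling across intervals with different gradients — can be sketched as follows. The breakpoints, slopes, and function names are illustrative assumptions, not values or syntax from the patent:

```python
def minmax_normalize(depth, d_min, d_max):
    """Normalize raw depth values to [0, 1] using the scene's min/max depth."""
    return [(d - d_min) / (d_max - d_min) for d in depth]

def piecewise_linear_scale(norm_depth, breakpoints, slopes):
    """Map each normalized depth value through consecutive linear intervals.

    breakpoints: interval boundaries in [0, 1], e.g. [0.0, 0.3, 0.7, 1.0]
    slopes: one gradient per interval; a steeper slope devotes more of the
    output range to that interval (e.g. to preserve near-field precision).
    """
    scaled = []
    for d in norm_depth:
        out, lo = 0.0, breakpoints[0]
        for hi, slope in zip(breakpoints[1:], slopes):
            seg = min(max(d, lo), hi) - lo  # portion of d inside this interval
            out += slope * seg
            lo = hi
        scaled.append(out)
    return scaled
```

With this layout, a value of 0.5 crosses all of the first interval (gradient 2.0) and part of the second (gradient 0.5), so the mapping is continuous and monotone but non-uniform.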
  • Patent number: 11350074
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for a view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: May 31, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gwang Soon Lee, Sang Woon Kwak, Kug Jin Yun, Jun Young Jeong
  • Publication number: 20220122217
    Abstract: A method for processing an immersive video includes: performing pruning for view images; generating an atlas by packing a patch that is extracted as a result of the pruning; deriving an offset for the patch that is comprised in the atlas; and correcting pixel values in the patch by using the derived offset.
    Type: Application
    Filed: October 18, 2021
    Publication date: April 21, 2022
    Inventors: Gwang Soon LEE, Jun Young JEONG, Dawid Mieloch, Adrian Dziembowski, Marek Domanski
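The per-patch offset correction described in the abstract above can be sketched as a two-step round trip: the encoder derives an offset for a patch (here assumed to be the rounded mean pixel value), and the decoder adds it back to correct the pixel values. The offset definition and function names are assumptions for illustration, not the patent's actual derivation:

```python
def derive_patch_offset(patch):
    """Derive a per-patch offset; here, the rounded mean pixel value.

    Subtracting this offset before packing reduces the patch's dynamic
    range; the offset itself would be signalled alongside the atlas.
    """
    flat = [p for row in patch for p in row]
    return round(sum(flat) / len(flat))

def correct_patch(patch, offset):
    """Add the signalled offset back to every pixel when unpacking."""
    return [[p + offset for p in row] for row in patch]
```

For example, a patch with values 10..40 yields offset 25, and applying `correct_patch` to the offset-subtracted patch restores the original values exactly.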
  • Publication number: 20220007000
Abstract: An immersive video processing method according to the present invention includes: classifying view videos into a base video and an additional video; performing pruning for the view videos by referring to a result of the classification; generating an atlas based on a result of the pruning; determining a depth parameter of each view in the atlas; and encoding information indicating whether or not updating of the depth parameter is needed, based on whether or not the depth parameter of each view in the atlas is identical to that in a previous atlas.
    Type: Application
    Filed: June 22, 2021
    Publication date: January 6, 2022
    Inventors: Gwang Soon LEE, Jun Young JEONG, Dawid MIELOCH, Adrian DZIEMBOWSKI, Marek DOMANSKI
  • Publication number: 20210409726
    Abstract: Disclosed herein are a method and apparatus for generating a residual image of multi-view video. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and detecting outliers in the pruning mask using color information of the basic view image and the additional view image.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 30, 2021
    Inventors: Hong-Chang SHIN, Gwang-Soon LEE, Ho-Min EUM, Jun-Young JEONG
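The outlier check described in the abstract above — re-validating pruned pixels whose colors disagree between the basic view and the additional view — can be sketched as follows. The mask convention, per-channel color distance, and threshold are illustrative assumptions, not the patent's actual criteria:

```python
def detect_outliers(pruning_mask, basic_color, additional_color, thresh=30):
    """Re-validate pruned pixels whose colors disagree across views.

    pruning_mask[i][j] == 0 means the pixel was judged redundant (pruned)
    by geometry-based mapping. If the mapped basic-view color differs from
    the additional-view color by more than `thresh` (summed absolute
    per-channel difference), the pixel is treated as an outlier and kept
    in the residual (mask set back to 1).
    """
    out = [row[:] for row in pruning_mask]
    for i in range(len(out)):
        for j in range(len(out[0])):
            if out[i][j] == 0:
                diff = sum(abs(a - b) for a, b in
                           zip(basic_color[i][j], additional_color[i][j]))
                if diff > thresh:
                    out[i][j] = 1  # outlier: retain this pixel
    return out
```

This captures the idea that geometric redundancy alone can misclassify pixels; a large color disagreement suggests the mapping was wrong for that pixel.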
  • Patent number: 11212505
Abstract: Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: December 28, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Kug Jin Yun, Jun Young Jeong
  • Publication number: 20210383122
    Abstract: A method of processing an immersive video includes classifying view images into a basic image and an additional image, performing pruning with respect to view images by referring to a result of classification, generating atlases based on a result of pruning, generating a merged atlas by merging the atlases into one atlas, and generating configuration information of the merged atlas.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Jun Young JEONG, Kug Jin YUN, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210385490
    Abstract: A video decoding method comprises receiving a plurality of atlases and metadata, unpacking patches included in the plurality of atlases based on the plurality of atlases and the metadata, reconstructing view images including an image of a basic view and images of a plurality of additional views, by unpruning the patches based on the metadata, and synthesizing an image of a target playback view based on the view images. The metadata is data related to priorities of the view images.
    Type: Application
    Filed: April 15, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210349532
Abstract: Disclosed herein are a gaze-tracking device and a method therefor. The device includes: an image acquisition unit configured to obtain an eyeball image; a pupil detection unit configured to detect a center of pupil by using the eyeball image; a virtual corneal reflection light position generator configured to process the eyeball image so that a virtual corneal reflection light is located at a predetermined point in the eyeball image; and a PCVR vector generator configured to generate a pupil center virtual reflection vector (PCVR vector) based on a position of the pupil center and a position of the virtual corneal reflection light.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 11, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hyun Cheol KIM, Joon Soo KIM, Gwang Soon LEE
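The PCVR vector construction described in the abstract above can be sketched as follows. The vector direction (pupil center relative to the virtual reflection point) and the 2D pixel-coordinate representation are assumptions for illustration; the patent defines the vector only as being based on the two positions:

```python
def pcvr_vector(pupil_center, virtual_reflection):
    """Pupil Center to Virtual Reflection (PCVR) vector.

    Both points are (x, y) pixel coordinates in the eyeball image. Because
    the virtual corneal reflection is placed at a predetermined point, the
    vector tracks pupil movement relative to a fixed reference, from which
    a gaze direction can be estimated.
    """
    return (pupil_center[0] - virtual_reflection[0],
            pupil_center[1] - virtual_reflection[1])
```

A gaze estimator would then map this vector to screen coordinates, e.g. via a calibration-fitted polynomial.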
  • Publication number: 20210329209
Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image can be completely produced from the atlas alone, and, according to the value of the flag, it may be determined whether an additional atlas is used when producing the viewport image.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 21, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Jun Young JEONG, Kug Jin YUN, Hong Chang SHIN, Ho Min EUM
  • Patent number: 11140377
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: October 5, 2021
Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Ho Min Eum, Jun Young Jeong
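The valid/invalid pixel distinction described in the abstract above — metadata signalling a first threshold that separates valid from invalid pixels in the atlas video — can be sketched as follows. Using the depth/occupancy sample value and a simple `>=` comparison is an illustrative assumption:

```python
def classify_atlas_pixels(atlas_samples, threshold):
    """Split atlas pixels into valid (1) and invalid (0).

    atlas_samples: 2D grid of per-pixel sample values (e.g. coded depth).
    A pixel is treated as valid when its sample value reaches the
    signalled threshold; values below it mark unoccupied background.
    """
    return [[1 if v >= threshold else 0 for v in row] for row in atlas_samples]
```

A renderer would then synthesize the viewport only from pixels classified as valid, ignoring the atlas background.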
  • Publication number: 20210218995
    Abstract: A video encoding/decoding method and apparatus is provided. The image decoding method includes acquiring image data of images of a plurality of views, determining a basic view and a plurality of reference views among the plurality of views, determining a pruning order of the plurality of reference views, and parsing the image data based on the pruning order and decoding an image of the basic view and images of the plurality of reference views.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 15, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Ho Min EUM, Gwang Soon LEE, Jin Hwan LEE, Jun Young JEONG, Kug Jin YUN, Jong Il PARK, Jun Young YUN
  • Patent number: 11064218
Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding for virtual view synthesis may include decoding texture information and depth information of one or more basic view images and one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: July 13, 2021
    Assignees: Electronics and Telecommunications Research Institute, Poznan University of Technology
    Inventors: Gwang Soon Lee, Jun Young Jeong, Hong Chang Shin, Kug Jin Yun, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
  • Patent number: 11037362
Abstract: A method and an apparatus for generating a three-dimensional (3D) virtual viewpoint image are provided, the method including: segmenting a first image into a plurality of images indicating different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by the foreground in the plurality of images based on depth information of a reference viewpoint image.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: June 15, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gwang Soon Lee, Jun Young Jeong
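The layer segmentation step in the abstract above — splitting an image into depth layers around the user's gaze point — can be sketched as assigning each pixel a layer index from its depth. The use of fixed depth boundaries is an illustrative assumption; the patent derives the layering from depth information at the gaze point:

```python
import bisect

def layer_index_map(depth_map, boundaries):
    """Assign each pixel a depth-layer index.

    boundaries: sorted interior depth thresholds (e.g. chosen around the
    gaze point's depth); a pixel with depth d gets layer index
    bisect_right(boundaries, d), giving len(boundaries) + 1 layers.
    """
    return [[bisect.bisect_right(boundaries, d) for d in row]
            for row in depth_map]
```

Inpainting would then fill, per layer, the regions occluded by nearer (lower-index) layers before the layers are recomposed for the virtual viewpoint.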
  • Patent number: 10997455
Abstract: Disclosed are an apparatus and a method for correcting 3D image distortion. A method of correcting 3D image distortion according to the present disclosure includes: receiving an input image that contains a predetermined first pattern; extracting a characteristic value related to the first pattern from the input image; and updating the input image on the basis of the extracted characteristic value.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: May 4, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Joon Soo Kim, Gwang Soon Lee
  • Publication number: 20210099687
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 1, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210092346
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
  • Publication number: 20210067757
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210006831
Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding for virtual view synthesis may include decoding texture information and depth information of one or more basic view images and one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
    Inventors: Gwang Soon LEE, Jun Young JEONG, Hong Chang SHIN, Kug Jin YUN, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski