Patents by Inventor Hong-Chang SHIN

Hong-Chang SHIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230396803
    Abstract: Disclosed herein is a method for encoding/decoding an immersive image, and the method for encoding an immersive image may include extracting an invalid region from an already encoded atlas and encoding a current atlas by referring to the invalid region.
    Type: Application
    Filed: April 14, 2023
    Publication date: December 7, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kwan Jung OH, Gwang Soon LEE, Hong Chang SHIN, Jun Young JEONG
  • Patent number: 11838485
    Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: December 5, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Jun Young Jeong, Kug Jin Yun, Hong Chang Shin, Ho Min Eum
  • Publication number: 20230386090
    Abstract: An image encoding method according to the present disclosure may include classifying a plurality of view images into a basic image and an additional image; performing pruning for at least one of the plurality of view images based on a result of the classification; generating an atlas based on a result of performing the pruning; and encoding the atlas and metadata for the atlas. In this case, the metadata may include spherical harmonic function information on a point in a three-dimensional space.
    Type: Application
    Filed: May 25, 2023
    Publication date: November 30, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Kwan Jung OH, Jun Young JEONG
  • Publication number: 20230336789
    Abstract: An immersive image encoding method according to the present disclosure includes classifying a plurality of view images into a basic image and an additional image; performing pruning for at least one of the plurality of view images based on the classification result; generating a depth atlas based on a result of performing the pruning; and correcting an occupancy state of pixels in the depth atlas.
    Type: Application
    Filed: April 18, 2023
    Publication date: October 19, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kwan Jung OH, Gwang Soon LEE, Hong Chang SHIN, Jun Young JEONG, Jeong Il SEO, Jae Gon KIM, Sung Gyun LIM, Hyeon Jong HWANG
  • Publication number: 20230319248
    Abstract: A method of switching an atlas according to a watching position according to the present disclosure includes acquiring information on a view image required to reproduce a viewport image; acquiring information of a first atlas mapped to the view image; and receiving a bitstream of the first atlas and a bitstream of a second atlas different from the first atlas. In this case, when it is determined that atlas switching is necessary, reception of any one of a bitstream of the first atlas and a bitstream of the second atlas may be stopped and a bitstream of a third atlas may be newly received.
    Type: Application
    Filed: March 29, 2023
    Publication date: October 5, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Sang Woon KWAK, Hong Chang SHIN
  • Publication number: 20230222694
    Abstract: A method of processing an immersive video according to the present disclosure includes performing pruning for an input image, generating an atlas based on patches generated by the pruning and generating a cropped atlas by removing a background region of the atlas.
    Type: Application
    Filed: January 12, 2023
    Publication date: July 13, 2023
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Kwan Jung OH, Gwang Soon LEE, Jeong Il SEO, Hong Chang SHIN, Jun Young JEONG, Euee Seon JANG, Tian Yu Dong, Xin Li, Jai Young OH
  • Publication number: 20230114021
    Abstract: Disclosed herein are an apparatus and method for removing redundant data between multi-view videos. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and revalidating the pruning mask using color information of the basic view image and the additional view image. Revalidating the pruning mask may include defining a color relationship between the basic view image and the additional view image by extracting predetermined sample values from corresponding pixels between the basic view image and the additional view image, which are included in the pruning candidate group of the pruning mask, and detecting pixels that do not match the defined color relationship, among the pixels in the pruning mask, as outliers.
    Type: Application
    Filed: October 7, 2022
    Publication date: April 13, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang SHIN, Gwang-Soon LEE
  • Patent number: 11616938
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: March 28, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun
  • Patent number: 11575935
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: February 7, 2023
    Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun, Jun Young Yun, Jong Il Park
  • Patent number: 11558625
    Abstract: Disclosed herein are a method and apparatus for generating a residual image of multi-view video. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and detecting outliers in the pruning mask using color information of the basic view image and the additional view image.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 17, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong-Chang Shin, Gwang-Soon Lee, Ho-Min Eum, Jun-Young Jeong
  • Publication number: 20230011027
    Abstract: Disclosed herein is a method for encoding an immersive image. The method includes detecting a non-diffuse surface in a first texture image of a first view, generating an additional texture image from the first texture image based on the detected non-diffuse surface, performing pruning on the additional texture image based on a second texture image of a second view, generating a texture atlas based on the pruned additional texture image, and encoding the texture atlas.
    Type: Application
    Filed: July 6, 2022
    Publication date: January 12, 2023
    Inventors: Gwang-Soon LEE, Hong-Chang SHIN, Jun-Young JEONG
  • Patent number: 11483534
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: October 25, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin Yun, Jun Young Jeong, Gwang Soon Lee, Hong Chang Shin, Ho Min Eum
  • Patent number: 11477429
    Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 18, 2022
    Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Jong Il Park, Jun Young Yun
  • Patent number: 11350074
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: May 31, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gwang Soon Lee, Sang Woon Kwak, Kug Jin Yun, Jun Young Jeong
  • Publication number: 20210409726
    Abstract: Disclosed herein are a method and apparatus for generating a residual image of multi-view video. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and detecting outliers in the pruning mask using color information of the basic view image and the additional view image.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 30, 2021
    Inventors: Hong-Chang SHIN, Gwang-Soon LEE, Ho-Min EUM, Jun-Young JEONG
  • Patent number: 11212505
    Abstract: Disclosed herein are an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: December 28, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Kug Jin Yun, Jun Young Jeong
  • Publication number: 20210383122
    Abstract: A method of processing an immersive video includes classifying view images into a basic image and an additional image, performing pruning with respect to view images by referring to a result of classification, generating atlases based on a result of pruning, generating a merged atlas by merging the atlases into one atlas, and generating configuration information of the merged atlas.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Jun Young JEONG, Kug Jin YUN, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210385490
    Abstract: A video decoding method comprises receiving a plurality of atlases and metadata, unpacking patches included in the plurality of atlases based on the plurality of atlases and the metadata, reconstructing view images including an image of a basic view and images of a plurality of additional views, by unpruning the patches based on the metadata, and synthesizing an image of a target playback view based on the view images. The metadata is data related to priorities of the view images.
    Type: Application
    Filed: April 15, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210329209
    Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 21, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Jun Young JEONG, Kug Jin YUN, Hong Chang SHIN, Ho Min EUM
  • Patent number: 11140377
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: October 5, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Ho Min Eum, Jun Young Jeong
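
Several of the grants above (for example, patent numbers 11616938, 11140377, and 11350074) describe a shared pipeline: determine a pruning priority order for the input views, prune each view against the views ahead of it, pack the surviving patches into one or more atlases, and carry the pruning order in the metadata so a decoder can un-prune. The sketch below is a minimal, purely illustrative Python rendering of the ordering-and-metadata step only; the ViewInfo attributes and the coverage-based ordering rule are assumptions for illustration, not the claimed methods.

```python
from dataclasses import dataclass

@dataclass
class ViewInfo:
    view_id: int
    is_basic: bool        # basic views are never pruned and are ordered first
    coverage: float       # assumed proxy for how much unique scene content the view adds

def determine_pruning_order(views):
    """Hypothetical priority rule: basic views first, then additional views
    ordered by descending coverage."""
    return sorted(views, key=lambda v: (not v.is_basic, -v.coverage))

def pruning_order_metadata(ordered_views):
    """The pruning order travels with the bitstream so a decoder can
    un-prune patches in the reverse of the order they were pruned."""
    return {"pruning_order": [v.view_id for v in ordered_views]}

# Example: two basic views and three additional views.
views = [ViewInfo(0, True, 1.0), ViewInfo(1, True, 0.9),
         ViewInfo(2, False, 0.4), ViewInfo(3, False, 0.7), ViewInfo(4, False, 0.2)]
order = determine_pruning_order(views)
print(pruning_order_metadata(order))   # {'pruning_order': [0, 1, 3, 2, 4]}
```

In the actual encoders described by these patents, the pruning order and related flags are signaled as part of the encoded metadata rather than a Python dictionary; the dictionary here only stands in for that signaling.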
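
Other entries (for example, publication 20230114021 and patent 11558625) describe generating a pruning mask for an additional view by mapping the basic view onto it and then revalidating the mask with a color relationship between the two views, treating pixels that violate the relationship as outliers. The sketch below illustrates that idea with NumPy under stated assumptions: the function name, the per-channel linear color model, and the fixed threshold are illustrative choices, and the caller is assumed to supply the basic view already warped into the additional view (in practice the warp would come from depth maps and camera parameters).

```python
import numpy as np

def revalidate_pruning_mask(warped_basic, additional, candidate_mask, tol=12.0):
    """Illustrative only: refine a pruning mask with a per-channel linear
    color relationship fitted between two views (hypothetical helper).

    warped_basic   : (H, W, 3) basic view re-projected into the additional view
    additional     : (H, W, 3) additional view image
    candidate_mask : (H, W) bool, True where the geometric mapping marks a pixel redundant
    tol            : largest color residual (8-bit scale) still treated as a match
    """
    refined = candidate_mask.copy()
    for c in range(3):
        x = warped_basic[..., c][candidate_mask].astype(np.float64)
        y = additional[..., c][candidate_mask].astype(np.float64)
        # Fit y ~ a*x + b over the candidate pixels: a stand-in for the
        # "color relationship" defined from sampled corresponding pixels.
        a, b = np.polyfit(x, y, 1)
        residual = np.abs(additional[..., c].astype(np.float64)
                          - (a * warped_basic[..., c].astype(np.float64) + b))
        # Pixels that do not follow the fitted relationship are outliers:
        # they are dropped from the mask and therefore kept in the residual image.
        refined &= residual <= tol
    return refined
```

Pixels that survive `refined` would be pruned (omitted from the residual image), while detected outliers remain in the additional view's residual, matching the intent described in the abstracts above.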