Patents by Inventor Hong-Chang SHIN

Hong-Chang SHIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210218995
    Abstract: A video encoding/decoding method and apparatus is provided. The image decoding method includes acquiring image data of images of a plurality of views, determining a basic view and a plurality of reference views among the plurality of views, determining a pruning order of the plurality of reference views, and parsing the image data based on the pruning order and decoding an image of the basic view and images of the plurality of reference views.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 15, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Ho Min EUM, Gwang Soon LEE, Jin Hwan LEE, Jun Young JEONG, Kug Jin YUN, Jong Il PARK, Jun Young YUN
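The decoding steps in the entry above rely on a signaled pruning order among the reference views. Below is a minimal sketch of how a decoder might consume that order; it is not the patented method, and it assumes that zero marks pruned pixels and that the views are already aligned (a real decoder would reproject via depth). The function and parameter names are hypothetical.

```python
import numpy as np

def reconstruct_in_pruning_order(basic_view, pruned_views, pruning_order):
    # Fill the pruned (zero) pixels of each reference view from views decoded
    # earlier in the signaled pruning order, starting with the basic view.
    # Assumes aligned views; real systems reproject using depth maps.
    decoded = [np.asarray(basic_view)]
    for view_id in pruning_order:
        view = np.asarray(pruned_views[view_id]).copy()
        for reference in decoded:
            empty = view == 0
            view[empty] = reference[empty]
        decoded.append(view)
    return decoded
```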
  • Patent number: 11064218
    Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding method for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: July 13, 2021
    Assignees: Electronics and Telecommunications Research Institute, Poznan University of Technology
    Inventors: Gwang Soon Lee, Jun Young Jeong, Hong Chang Shin, Kug Jin Yun, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
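The abstract above distinguishes the non-empty region by comparing a value in the depth information against a threshold. A minimal sketch under that reading follows; the sentinel value, the comparison direction, and the names are assumptions for illustration only.

```python
import numpy as np

def non_empty_mask(depth_map, empty_depth_value=0.0, threshold=1e-3):
    # A pixel belongs to the non-empty region when its decoded depth differs
    # from the value reserved for empty regions by more than the threshold.
    depth = np.asarray(depth_map, dtype=np.float32)
    return np.abs(depth - empty_depth_value) > threshold
```

Only pixels where this mask is true would then contribute texture samples to the synthesized virtual view.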
  • Patent number: 11037362
    Abstract: A method and an apparatus for generating a three-dimension (3D) virtual viewpoint image including: segmenting a first image into a plurality of images indicating different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by foreground in the plurality of images based on depth information of a reference viewpoint image are provided.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: June 15, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gwang Soon Lee, Jun Young Jeong
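The method above segments an image into depth layers before inpainting occluded areas. A minimal sketch of the layering step is shown below, assuming explicit depth boundaries are given; the inpainting from a reference viewpoint is not shown, and the names are illustrative.

```python
import numpy as np

def split_into_depth_layers(image, depth, boundaries):
    # Segment the image into layers bounded by the given depth values
    # (near to far); each layer keeps only the pixels of its own range.
    layers = []
    for near, far in zip(boundaries[:-1], boundaries[1:]):
        mask = (depth >= near) & (depth < far)
        layer = np.zeros_like(image)
        layer[mask] = image[mask]
        layers.append((layer, mask))
    return layers
```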
  • Publication number: 20210099687
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 1, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
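Several entries in this listing hinge on determining a priority order of pruning among input views, but the abstracts do not state the ordering criterion. The sketch below is one hypothetical heuristic: views are ordered greedily by how much area each adds beyond what the already selected views cover, given a boolean validity mask per view.

```python
import numpy as np

def pruning_order(view_masks):
    # Greedy ordering: repeatedly pick the view whose valid-pixel mask adds
    # the most area not yet covered by the views selected so far.
    remaining = dict(view_masks)          # view id -> boolean coverage mask
    covered = np.zeros_like(next(iter(remaining.values())), dtype=bool)
    order = []
    while remaining:
        best = max(remaining, key=lambda vid: int((remaining[vid] & ~covered).sum()))
        order.append(best)
        covered |= remaining.pop(best)
    return order
```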
  • Publication number: 20210092346
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
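This variant signals a first threshold that separates valid from invalid pixels in the atlas video. A minimal sketch of applying such a threshold follows; using the depth (geometry) channel and the at-or-above comparison are assumptions, since the abstract only says the threshold is the distinguishing criterion.

```python
import numpy as np

def valid_pixel_mask(atlas_depth, first_threshold):
    # Pixels whose depth sample is at or above the signaled threshold are
    # treated as valid patch samples; the rest are filler left by packing.
    return np.asarray(atlas_depth) >= first_threshold
```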
  • Publication number: 20210067757
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210006830
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of source view videos into base view videos and additional view videos, generating residual data for the additional view videos, packing a patch, which is generated based on the residual data, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Sang Woon KWAK
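Packing patches generated from residual data into an atlas video, as described above, is essentially a 2D bin-packing step. The sketch below shows a naive shelf-packing strategy for illustration only; the actual packing procedure and its metadata format are not specified in the abstract.

```python
def pack_patches(patch_sizes, atlas_width):
    # Naive shelf packing: place patches left to right, starting a new row
    # (shelf) when the current one is full.  Returns per-patch (x, y)
    # offsets and the resulting atlas height.  Assumes each patch fits
    # within the atlas width.
    offsets, x, y, shelf_height = [], 0, 0, 0
    for width, height in patch_sizes:
        if x + width > atlas_width:
            x, y, shelf_height = 0, y + shelf_height, 0
        offsets.append((x, y))
        x += width
        shelf_height = max(shelf_height, height)
    return offsets, y + shelf_height
```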
  • Publication number: 20210006764
    Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 7, 2021
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Jong Il PARK, Jun Young YUN
  • Publication number: 20210006831
    Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding method for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
    Inventors: Gwang Soon LEE, Jun Young JEONG, Hong Chang SHIN, Kug Jin YUN, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
  • Publication number: 20200410746
    Abstract: A method and an apparatus for generating a three-dimension (3D) virtual viewpoint image including: segmenting a first image into a plurality of images indicating different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by foreground in the plurality of images based on depth information of a reference viewpoint image are provided.
    Type: Application
    Filed: June 26, 2020
    Publication date: December 31, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang SHIN, Gwang Soon LEE, Jun Young JEONG
  • Publication number: 20200413094
    Abstract: Disclosed herein are an image encoding/decoding method and apparatus and a recording medium storing a bitstream.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 31, 2020
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
  • Publication number: 20200396485
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Application
    Filed: June 15, 2020
    Publication date: December 17, 2020
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN, Jun Young YUN, Jong Il PARK
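The encoder-side method above prunes reference views against the basic views before encoding the residuals. A minimal sketch follows; the per-pixel comparison assumes pre-aligned views and a made-up tolerance, whereas a real encoder would reproject each reference view into the basic views via depth before deciding redundancy.

```python
import numpy as np

def prune_reference_view(reference, basic, tolerance=2):
    # Blank out reference-view pixels that are already represented in the
    # basic view, leaving a residual reference view for encoding.
    reference = np.asarray(reference)
    basic = np.asarray(basic)
    redundant = np.abs(reference.astype(np.int32) - basic.astype(np.int32)) <= tolerance
    residual = reference.copy()
    residual[redundant] = 0
    return residual
```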
  • Publication number: 20200359000
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 20, 2020
    Publication date: November 12, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Sang Woon KWAK, Kug Jin YUN, Jun Young JEONG
  • Publication number: 20200336724
    Abstract: Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
    Type: Application
    Filed: January 31, 2020
    Publication date: October 22, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
  • Patent number: 9661314
    Abstract: Provided are an image magnification apparatus and method for three-dimensional display images. A multi-view image magnification apparatus includes: a region designation detection unit for detecting detection regions corresponding to the magnification regions of three-dimensional display images from original view images; a magnification ratio determination unit for determining the magnification ratios of the magnification regions; and a partial multiplexing unit for multiplexing non-detection regions except for the detection regions of the original view images according to a predetermined resolution and multiplexing the detection regions according to a resolution which is different from the resolution of the non-detection regions on the basis of the magnification ratios. The present invention can magnify the images without lowering the resolution in a three-dimensional display.
    Type: Grant
    Filed: July 2, 2012
    Date of Patent: May 23, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gun Bang, Gi Mun Um, Tae One Kim, Eun Young Chang, Won Sik Cheong, Nam Ho Hur
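The magnification described above scales only designated detection regions while the remaining regions keep the base resolution. The sketch below shows just the per-region scaling with nearest-neighbour repetition; the view-interleaving (multiplexing) for the 3D display is omitted, and the names are illustrative.

```python
import numpy as np

def magnify_detection_region(view, box, ratio):
    # Upscale only the designated detection region of an original view image
    # by an integer ratio using nearest-neighbour repetition.
    y0, y1, x0, x1 = box
    region = np.asarray(view)[y0:y1, x0:x1]
    return np.repeat(np.repeat(region, ratio, axis=0), ratio, axis=1)
```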
  • Patent number: 9652128
    Abstract: There are provided a method and apparatus for controlling an electronic device. The apparatus for controlling an electronic device includes a marker recognition unit configured to recognize the marker of the electronic device and a control unit configured to perform communication with the electronic device based on the marker and control the electronic device using a Graphic User Interface (GUI) program received from the electronic device.
    Type: Grant
    Filed: December 19, 2013
    Date of Patent: May 16, 2017
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong Chang Shin, Gi Mun Um, Chan Kim, Hyun Lee, Eung Don Lee, Won Sik Cheong, Nam Ho Hur
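The control flow above resolves a recognized marker to a specific device and then communicates with that device using a GUI program received from it. The sketch below shows only the marker-to-device lookup; the registry contents, addresses, and names are entirely made up, and the marker recognition and GUI exchange themselves are not shown.

```python
# Hypothetical registry mapping a recognized marker ID to a device endpoint.
DEVICE_REGISTRY = {
    17: "tcp://192.168.0.42:5555",   # e.g. a television
    23: "tcp://192.168.0.43:5555",   # e.g. an air conditioner
}

def endpoint_for_marker(marker_id):
    # Resolve a detected marker to the endpoint used to fetch the device's
    # GUI program and to exchange control commands.
    try:
        return DEVICE_REGISTRY[marker_id]
    except KeyError:
        raise KeyError(f"no device registered for marker {marker_id}") from None
```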
  • Patent number: 9143757
    Abstract: A method of receiving stereoscopic video according to the present invention includes receiving a bit stream including image information, extracting a base image stream corresponding to a base image and an additional image stream corresponding to an additional image from the bit stream, generating the base image and the additional image by decoding the base image stream and the additional image stream, respectively, and generating a left image and a right image by using at least one of the base image and the additional image. According to the present invention, 2D/3D broadcasting service efficiency may be improved.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: September 22, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Chan Kim, Kug Jin Yun, Hong Chang Shin, Won Sik Cheong, Nam Ho Hur
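The final step above forms a left image and a right image from the decoded base and additional images. A minimal sketch of that selection follows; which stream feeds which eye, and whether the additional stream carries a full view or a residual, are not specified in the abstract, so this is illustrative only.

```python
def to_stereo_pair(base_image, additional_image=None):
    # 2D fallback: both eyes reuse the decoded base image.
    # 3D mode: the decoded additional image supplies the second eye.
    left = base_image
    right = additional_image if additional_image is not None else base_image
    return left, right
```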
  • Patent number: 9124863
    Abstract: A device for creating multi-view video contents includes a virtual view position and distribution unit that calculates a plurality of virtual views corresponding to the number of predetermined virtual views based on input information and distributes the calculated positions of the plurality of virtual views to a plurality of view synthesis processing units; a view synthesis processor that operates the plurality of view synthesis processing units in parallel, allows each of the view synthesis processing units to create at least one virtual view video corresponding to a position of at least one virtual view distributed from the virtual view position calculation and distribution unit, and performs partial multiplexing based on at least one created virtual view video; and a video integration unit that integrates a plurality of partially multiplexed videos output from the plurality of view synthesis processing units.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: September 1, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gun Bang, Gi-Mun Um, Tae One Kim, Eun Young Chang, Nam Ho Hur, Soo In Lee
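The device above distributes calculated virtual view positions across parallel view-synthesis processing units. The sketch below shows one simple scheme, spacing positions evenly on a normalized baseline and assigning them round-robin; the actual distribution policy is not given in the abstract, and the names are hypothetical.

```python
def distribute_virtual_views(num_views, num_workers, near=0.0, far=1.0):
    # Evenly space virtual camera positions between two input cameras and
    # assign them round-robin to the parallel view-synthesis workers.
    positions = [near + (far - near) * (i + 1) / (num_views + 1)
                 for i in range(num_views)]
    assignment = {worker: [] for worker in range(num_workers)}
    for index, position in enumerate(positions):
        assignment[index % num_workers].append(position)
    return assignment
```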
  • Publication number: 20140223349
    Abstract: There are provided a method and apparatus for controlling an electronic device. The apparatus for controlling an electronic device includes a marker recognition unit configured to recognize the marker of the electronic device and a control unit configured to perform communication with the electronic device based on the marker and control the electronic device using a Graphic User Interface (GUI) program received from the electronic device.
    Type: Application
    Filed: December 19, 2013
    Publication date: August 7, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hong Chang SHIN, Gi Mun UM, Chan KIM, Hyun LEE, Eung Don LEE, Won Sik CHEONG, Nam Ho HUR
  • Patent number: 8731279
    Abstract: Method and device for generating a multi-viewpoint image are provided. The method of generating a multi-viewpoint image includes the steps of: acquiring at least one reference-viewpoint image; generating unit image information of a virtual-viewpoint image on the basis of unit image information of the reference-viewpoint image; multiplexing the unit image information of the reference-viewpoint image and the unit image information of the virtual-viewpoint image; and generating a multi-viewpoint image by performing an interpolation process on occluded areas between the multiplexed unit image information using the multiplexed unit image information. As a result, it is possible to avoid unnecessary processes of completing and rearranging individual viewpoint images in the course of generating a multi-viewpoint image.
    Type: Grant
    Filed: December 8, 2011
    Date of Patent: May 20, 2014
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gun Bang, Gi-Mun Um, Tae One Kim, Eun Young Chang, Nam Ho Hur, Soo In Lee
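The last two steps above multiplex the unit image information of the reference and virtual viewpoints and then interpolate the occluded areas. The sketch below arranges per-view unit columns side by side and fills zero-marked occluded samples from horizontal neighbours; both the zero marking and the neighbour averaging are simplifications for illustration, not the patented procedure.

```python
import numpy as np

def multiplex_and_fill(unit_columns):
    # Arrange per-view unit image columns into one multi-viewpoint frame,
    # then fill occluded samples (marked with zero) from their horizontal
    # neighbours.
    frame = np.concatenate([np.asarray(c) for c in unit_columns], axis=1)
    holes = frame == 0
    left = np.roll(frame, 1, axis=1)
    right = np.roll(frame, -1, axis=1)
    average = (left.astype(np.int32) + right.astype(np.int32)) // 2
    frame[holes] = average[holes].astype(frame.dtype)
    return frame
```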