Patents by Inventor Ho-Min Eum

Ho-Min Eum has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11575935
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: February 7, 2023
    Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun, Jun Young Yun, Jong Il Park
  • Patent number: 11558625
    Abstract: Disclosed herein are a method and apparatus for generating a residual image of multi-view video. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and detecting outliers in the pruning mask using color information of the basic view image and the additional view image.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 17, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong-Chang Shin, Gwang-Soon Lee, Ho-Min Eum, Jun-Young Jeong
  • Patent number: 11483534
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: October 25, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin Yun, Jun Young Jeong, Gwang Soon Lee, Hong Chang Shin, Ho Min Eum
  • Patent number: 11477429
    Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 18, 2022
    Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Jong Il Park, Jun Young Yun
  • Publication number: 20210409726
    Abstract: Disclosed herein are a method and apparatus for generating a residual image of multi-view video. The method includes generating a pruning mask of an additional view image by mapping a basic view image to the additional view image, among multi-view images, and detecting outliers in the pruning mask using color information of the basic view image and the additional view image.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 30, 2021
    Inventors: Hong-Chang SHIN, Gwang-Soon LEE, Ho-Min EUM, Jun-Young JEONG
  • Publication number: 20210385490
    Abstract: A video decoding method comprises receiving a plurality of atlases and metadata, unpacking patches included in the plurality of atlases based on the plurality of atlases and the metadata, reconstructing view images including an image of a basic view and images of a plurality of additional views, by unpruning the patches based on the metadata, and synthesizing an image of a target playback view based on the view images. The metadata is data related to priorities of the view images.
    Type: Application
    Filed: April 15, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210383122
    Abstract: A method of processing an immersive video includes classifying view images into a basic image and an additional image, performing pruning with respect to view images by referring to a result of classification, generating atlases based on a result of pruning, generating a merged atlas by merging the atlases into one atlas, and generating configuration information of the merged atlas.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Jun Young JEONG, Kug Jin YUN, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210329209
    Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 21, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Jun Young JEONG, Kug Jin YUN, Hong Chang SHIN, Ho Min EUM
  • Patent number: 11140377
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: October 5, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Ho Min Eum, Jun Young Jeong
  • Publication number: 20210218995
    Abstract: A video encoding/decoding method and apparatus is provided. The image decoding method includes acquiring image data of images of a plurality of views, determining a basic view and a plurality of reference views among the plurality of views, determining a pruning order of the plurality of reference views, and parsing the image data based on the pruning order and decoding an image of the basic view and images of the plurality of reference views.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 15, 2021
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Ho Min EUM, Gwang Soon LEE, Jin Hwan LEE, Jun Young JEONG, Kug Jin YUN, Jong Il PARK, Jun Young YUN
  • Publication number: 20210099687
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 1, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210092346
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
  • Publication number: 20210067757
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210006764
    Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
    Type: Application
    Filed: July 6, 2020
    Publication date: January 7, 2021
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Jong Il PARK, Jun Young YUN
  • Publication number: 20210006830
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of source view videos into base view videos and additional view videos, generating residual data for the additional view videos, packing a patch, which is generated based on the residual data, into an atlas video, and generating metadata for the patch.
    Type: Application
    Filed: March 19, 2020
    Publication date: January 7, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Sang Woon KWAK
  • Publication number: 20200396485
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Application
    Filed: June 15, 2020
    Publication date: December 17, 2020
    Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN, Jun Young YUN, Jong Il PARK
  • Publication number: 20180052309
    Abstract: Disclosed herein are a method for expanding a field of view of a head-mounted display device and an apparatus using the method. The method for expanding a field of view of a head-mounted display device projects a virtual image onto an eye of a user who wears the head-mounted display device. The method includes displaying an image using a curved display and a curved optical lens, and enlarging a virtual image corresponding to the image and projecting an enlarged virtual image at a location farther away from the eye of the user than the curved display using the curved optical lens, wherein the curved optical lens is located closer to the eye of the user than the curved display.
    Type: Application
    Filed: August 9, 2017
    Publication date: February 22, 2018
    Inventors: Gwang-Soon LEE, Ho-Min EUM, Eung-Don LEE
  • Publication number: 20170302912
    Abstract: A method and device for increasing a resolution for each viewpoint in a glassless three-dimensional (3D) display. A method of controlling a three-dimensional (3D) display device including a display panel and a lens includes performing time-division on a plurality of viewpoint images such that each of the viewpoint images is divided into n division images, transferring the division images to the display panel by increasing n times a frame rate of the division images, and controlling the lens such that the division images transferred to the display panel pass through a lens cell.
    Type: Application
    Filed: April 14, 2017
    Publication date: October 19, 2017
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ho Min EUM, Gwang Soon LEE
  • Patent number: 9794027
    Abstract: A method and apparatus for generating frames to apply error correction to data including a plurality of consecutive data groups are provided. Upon receiving input of an n-th data group consisting of a plurality of priority groups with different priority levels, the number of first code rate frames, which is the number of frames in the n-th data group for which a first code rate is used, is calculated based on the number of first code rate bits calculated based on the ratio of the length of data in an (n-1)-th data group for which the first code rate is used. The number of second code rate frames, which is the number of frames in the n-th data group for which the second code rate is used, is calculated based on the number of second code rate bits calculated based on the number of first code rate bits. Frames for error correction are generated based on the number of first code rate frames and the number of second code rate frames.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: October 17, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Ho Min Eum
  • Patent number: 9686042
    Abstract: Disclosed is a method of transmitting, by a transmission device, broadcast signals in a frequency-shared terrestrial broadcast system. The method includes generating the broadcast signals including pilot signals arranged at a physical layer frame based on a group identification (ID) defined according to a broadcast service, and transmitting the generated broadcast signals to a reception device, wherein the positions of the pilot signals arranged at the physical layer frame are different by group IDs.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: June 20, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sung Ik Park, Sun Hyoung Kwon, Heung Mook Kim, Nam Ho Hur, Jeong Chang Kim, Jae Hyun Seo, Ho Min Eum
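A recurring idea across the immersive-video abstracts above is pruning: pixels of an additional view that are already explained by a basic view are discarded, and only the surviving residual pixels are packed into an atlas. The sketch below is a minimal, hypothetical illustration of that masking step, not code from any of the patents; real MIV pruning first warps the basic view into the additional view's geometry using depth, whereas here the two views are assumed pre-aligned, and the function name and threshold are invented for illustration.

```python
import numpy as np

def prune_view(basic_view: np.ndarray, additional_view: np.ndarray,
               threshold: float = 10.0):
    """Generate a pruning mask and residual image for an additional view.

    Pixels whose absolute color difference from the basic view falls
    below `threshold` are treated as redundant and pruned; the rest
    are kept in the pruning mask and survive as the residual image.
    """
    diff = np.abs(additional_view.astype(np.int32) - basic_view.astype(np.int32))
    mask = diff >= threshold            # True = pixel is kept (not pruned)
    residual = np.where(mask, additional_view, 0)
    return mask, residual

# Toy 4x4 grayscale views: the additional view differs from the basic
# view only in the top-left 2x2 block (e.g. a disoccluded region).
basic = np.full((4, 4), 100, dtype=np.uint8)
additional = basic.copy()
additional[:2, :2] = 200

mask, residual = prune_view(basic, additional)
print(mask.sum())  # 4 pixels survive pruning and would be packed as a patch
```

In the pipelines described above, the surviving pixels would then be grouped into rectangular patches, packed into one or more atlas videos, and accompanied by metadata such as the pruning priority order and validity thresholds.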