Patents by Inventor Jun-Young Jeong
Jun-Young Jeong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11044456
Abstract: Disclosed are a method and a player for outputting an overlay that displays additional information on a 360-degree video. According to the present invention, an image processing method includes: decoding an overlay; and rendering the decoded overlay on a 360-degree video on the basis of overlay-related information. Here, the overlay-related information includes information indicating the number of overlays and information indicating a unique identifier assigned to each overlay; when multiple overlays are present, the identifiers assigned to the respective overlays differ.
Type: Grant
Filed: May 31, 2019
Date of Patent: June 22, 2021
Assignee: Electronics and Telecommunications Research Institute
Inventors: Jun Young Jeong, Kug Jin Yun
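The key constraint this abstract describes is that the overlay-related information carries an overlay count and a unique identifier per overlay, and that identifiers must differ when multiple overlays are present. A minimal sketch of that consistency check (the `Overlay` class and its field names are illustrative assumptions, not structures defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class Overlay:
    overlay_id: int   # unique identifier assigned to this overlay
    payload: bytes    # decoded overlay content (hypothetical field)

def validate_overlay_info(num_overlays: int, overlays: list) -> bool:
    """Check the two constraints from the abstract: the signalled count
    matches the overlays present, and all identifiers are distinct."""
    if len(overlays) != num_overlays:
        return False
    ids = [o.overlay_id for o in overlays]
    return len(set(ids)) == len(ids)
```

A renderer sketched this way would reject overlay-related information whose identifiers collide before attempting to composite anything onto the 360-degree video.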
-
Patent number: 11037362
Abstract: Provided are a method and an apparatus for generating a three-dimensional (3D) virtual viewpoint image, including: segmenting a first image into a plurality of images representing different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by the foreground in the plurality of images based on depth information of a reference viewpoint image.
Type: Grant
Filed: June 26, 2020
Date of Patent: June 15, 2021
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hong-Chang Shin, Gwang Soon Lee, Jun Young Jeong
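The first step the abstract describes, splitting one image into per-layer images by depth, can be sketched with NumPy by binning pixels on their depth values. The threshold-based binning and the zero fill for out-of-layer pixels are illustrative assumptions; the inpainting step is not modelled:

```python
import numpy as np

def split_into_depth_layers(image, depth, thresholds):
    """Partition a grayscale image into layers by depth bins.

    Each layer keeps only the pixels whose depth falls in its bin;
    all other pixels are zeroed, leaving holes for later inpainting.
    """
    bounds = [-np.inf] + sorted(thresholds) + [np.inf]
    layers = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (depth >= lo) & (depth < hi)
        layers.append(np.where(mask, image, 0))
    return layers
```

With `n` thresholds this yields `n + 1` layer images whose non-zero pixels partition the input, which is the property a layered inpainting pass would rely on.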
-
Publication number: 20210099687
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
Type: Application
Filed: September 25, 2020
Publication date: April 1, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
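The pruning idea running through this and the neighboring applications is that each view is visited in priority order, content already covered by higher-priority views is removed, and only the surviving patches are packed into the atlas. A toy sketch with views modelled as sets of visible scene points (this set model is a deliberate simplification for illustration, not the patented pixel-level process):

```python
def prune_and_pack(views, priority):
    """views: dict mapping view name -> set of scene points visible in it.
    Views are pruned in priority order: a point already covered by a
    higher-priority view is dropped; the survivors form the view's patch,
    and non-empty patches are appended to the atlas."""
    covered = set()
    atlas = []  # list of (view_name, patch) entries
    for name in priority:
        patch = views[name] - covered
        covered |= views[name]
        if patch:
            atlas.append((name, sorted(patch)))
    return atlas
```

Note how the result depends on the priority order, which is why the metadata described in the abstract must signal that order to the decoder.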
-
Publication number: 20210092346
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include first threshold information that becomes a criterion for distinguishing between a valid pixel and an invalid pixel in the atlas video.
Type: Application
Filed: September 23, 2020
Publication date: March 25, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Jun Young JEONG
-
Publication number: 20210067757
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch containing information on an entire region of a first source video may be encoded into the metadata.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
-
Publication number: 20210006830
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of source view videos into base view videos and additional view videos, generating residual data for the additional view videos, packing a patch, which is generated based on the residual data, into an atlas video, and generating metadata for the patch.
Type: Application
Filed: March 19, 2020
Publication date: January 7, 2021
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kug Jin YUN, Jun Young JEONG, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM, Sang Woon KWAK
-
Publication number: 20210006831
Abstract: Disclosed herein are an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding method for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bitstream, and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold, and synthesizing the virtual view by using the determined non-empty region.
Type: Application
Filed: March 19, 2020
Publication date: January 7, 2021
Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, Poznan University of Technology
Inventors: Gwang Soon LEE, Jun Young JEONG, Hong Chang SHIN, Kug Jin YUN, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
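The synthesis step in this abstract keys on comparing depth values against a threshold to separate the non-empty region from the empty region. A minimal NumPy sketch of that masking (the `depth >= threshold` semantics and the zero fill for empty pixels are assumptions for illustration):

```python
import numpy as np

def keep_non_empty(texture, depth, threshold):
    """Treat pixels whose depth meets the threshold as the non-empty
    region; empty pixels are zeroed so another view or an inpainting
    pass can contribute them instead."""
    mask = depth >= threshold
    return np.where(mask, texture, 0), mask
```

In a full synthesizer, the returned mask would decide which pixels of each view are allowed to contribute to the virtual viewpoint.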
-
Publication number: 20210006764
Abstract: An immersive video processing method according to the present disclosure includes determining a priority order of pruning for source view videos, generating a residual video for an additional view video based on the priority order of pruning, packing a patch generated based on the residual video into an atlas video, and encoding the atlas video.
Type: Application
Filed: July 6, 2020
Publication date: January 7, 2021
Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Jong Il PARK, Jun Young YUN
-
Publication number: 20200413094
Abstract: Disclosed herein are an image encoding/decoding method and apparatus and a recording medium storing a bitstream.
Type: Application
Filed: June 12, 2020
Publication date: December 31, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
-
Publication number: 20200410746
Abstract: Provided are a method and an apparatus for generating a three-dimensional (3D) virtual viewpoint image, including: segmenting a first image into a plurality of images representing different layers based on depth information of the first image at a gaze point of a user; and inpainting an area occluded by the foreground in the plurality of images based on depth information of a reference viewpoint image.
Type: Application
Filed: June 26, 2020
Publication date: December 31, 2020
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hong-Chang SHIN, Gwang Soon LEE, Jun Young JEONG
-
Publication number: 20200396485
Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes: determining a pruning order of the plurality of reference view images; acquiring a plurality of residual reference view images by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order; encoding the one or more basic view images and the plurality of residual reference view images; and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
Type: Application
Filed: June 15, 2020
Publication date: December 17, 2020
Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN, Jun Young YUN, Jong Il PARK
-
Patent number: 10848837
Abstract: Disclosed are an apparatus and a method of providing a high-quality 360-degree VR image. A method of decoding a 360-degree VR image according to the present disclosure includes: receiving a bitstream including 360-degree VR image information; decoding information related to a 360-degree VR service from the bitstream; detecting a region of interest based on the information related to the 360-degree VR service; and providing to a user a 360-degree VR image for the region of interest.
Type: Grant
Filed: April 3, 2018
Date of Patent: November 24, 2020
Assignee: Electronics and Telecommunications Research Institute
Inventors: Kug Jin Yun, Jun Young Jeong
-
Publication number: 20200359000
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into base views and additional views, generating a residual video for each video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
Type: Application
Filed: March 20, 2020
Publication date: November 12, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hong Chang SHIN, Gwang Soon LEE, Sang Woon KWAK, Kug Jin YUN, Jun Young JEONG
-
Publication number: 20200336724
Abstract: Disclosed herein are an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
Type: Application
Filed: January 31, 2020
Publication date: October 22, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Gwang Soon LEE, Hong Chang SHIN, Kug Jin YUN, Jun Young JEONG
-
Publication number: 20200084516
Abstract: Disclosed are an apparatus and a method of providing a high-quality 360-degree VR image. A method of decoding a 360-degree VR image according to the present disclosure includes: receiving a bitstream including 360-degree VR image information; decoding information related to a 360-degree VR service from the bitstream; detecting a region of interest based on the information related to the 360-degree VR service; and providing to a user a 360-degree VR image for the region of interest.
Type: Application
Filed: April 3, 2018
Publication date: March 12, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kug Jin YUN, Jun Young JEONG
-
Publication number: 20190373243
Abstract: Disclosed are a method and a player for outputting an overlay that displays additional information on a 360-degree video. According to the present invention, an image processing method includes: decoding an overlay; and rendering the decoded overlay on a 360-degree video on the basis of overlay-related information. Here, the overlay-related information includes information indicating the number of overlays and information indicating a unique identifier assigned to each overlay; when multiple overlays are present, the identifiers assigned to the respective overlays differ.
Type: Application
Filed: May 31, 2019
Publication date: December 5, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jun Young JEONG, Kug Jin YUN
-
Publication number: 20160063748
Abstract: An operating method of an electronic device is provided. The method includes determining whether a present point in time corresponds to a predetermined period or event, obtaining content associated with the predetermined period or event, and providing the content as at least a part of a user interface displayed on a screen of the electronic device.
Type: Application
Filed: September 1, 2015
Publication date: March 3, 2016
Inventors: Ji-Hyun KIM, Wook-Hyun Jeong, Hye-Jin Kang, Jee-Youn Kim, Hyun-Seok Kim, Seok-Hee Na, Ha-Yang Seo, Sun-Mi You, Bo-Na Lee, Dong-Hoe Lim, Yoon-Cheung Chang, Min-Woo Chong, Jun-Young Jeong, Ji-Hea Park, Yu-Jeong Jeon
-
Patent number: 9075269
Abstract: An array substrate includes: a substrate; gate lines over the substrate along a first direction; data lines over the substrate along a second direction and crossing the gate lines to define pixel regions; a thin film transistor at each crossing portion of the gate and data lines; an insulating layer covering the thin film transistor and having a flat top surface; a common electrode on the insulating layer all over the substrate; common lines on the common electrode; a passivation layer on the common lines; and a pixel electrode on the passivation layer in each pixel region and connected to the thin film transistor, the pixel electrode including electrode patterns, wherein the passivation layer has a step height at its top surface due to the plurality of common lines.
Type: Grant
Filed: November 15, 2013
Date of Patent: July 7, 2015
Assignee: LG Display Co., Ltd.
Inventors: Kyung-Mo Son, Jae-Kyun Lee, Sung-Chol Yi, Taek-Jun Jung, Sun-Ju Ku, Soon-Hwan Hong, Sang-Su Jang, Jun-Young Jeong, Eun-Hye Lee
-
Publication number: 20140168554
Abstract: An array substrate includes: a substrate; gate lines over the substrate along a first direction; data lines over the substrate along a second direction and crossing the gate lines to define pixel regions; a thin film transistor at each crossing portion of the gate and data lines; an insulating layer covering the thin film transistor and having a flat top surface; a common electrode on the insulating layer all over the substrate; common lines on the common electrode; a passivation layer on the common lines; and a pixel electrode on the passivation layer in each pixel region and connected to the thin film transistor, the pixel electrode including electrode patterns, wherein the passivation layer has a step height at its top surface due to the plurality of common lines.
Type: Application
Filed: November 15, 2013
Publication date: June 19, 2014
Applicant: LG DISPLAY CO., LTD.
Inventors: Kyung-Mo SON, Jae-Kyun LEE, Sung-Chol YI, Taek-Jun JUNG, Sun-Ju KU, Soon-Hwan HONG, Sang-Su JANG, Jun-Young JEONG, Eun-Hye LEE
-
Patent number: 8654915
Abstract: A control signal receiver includes a converting circuit and a synchronization detection circuit. The converting circuit generates a complex control symbol stream including transmission configurations by converting an input signal. The synchronization detection circuit generates a first bit stream by applying a first determination criterion to the complex control symbol stream and generates a first synchronization signal by comparing the first bit stream with a reference synchronization word. The synchronization detection circuit generates a second bit stream by applying the first determination criterion and a second determination criterion to the complex control symbol stream in that order, and generates a second synchronization signal by comparing the second bit stream with the reference synchronization word. The synchronization detection circuit outputs one of the first synchronization signal and the second synchronization signal as a synchronization enable signal.
Type: Grant
Filed: May 15, 2012
Date of Patent: February 18, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jun-Young Jeong
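The comparison the abstract describes, matching a decided bit stream against a reference synchronization word, reduces to a sliding Hamming-distance test. The sketch below illustrates only that final comparison step; the tolerance parameter and the list-of-offsets return value are assumptions for illustration, and the patent's two determination criteria are not modelled:

```python
def detect_sync(bit_stream, sync_word, max_errors=0):
    """Slide the reference synchronization word over the bit stream and
    report every offset where the Hamming distance is within tolerance."""
    n = len(sync_word)
    hits = []
    for i in range(len(bit_stream) - n + 1):
        dist = sum(a != b for a, b in zip(bit_stream[i:i + n], sync_word))
        if dist <= max_errors:
            hits.append(i)
    return hits
```

A non-zero tolerance trades false-alarm probability against robustness to bit decision errors, which is the kind of trade-off that motivates applying a second determination criterion when the first fails to produce a match.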