Patents by Inventor Kug Jin Yun

Kug Jin Yun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240316631
    Abstract: The present invention relates to a metal nanowire having a core-shell structure that is conductive and transparent while having excellent oxidation stability, and that maintains this oxidation stability even after a secondary process using the metal nanowire. Specifically, the metal nanowire having a core-shell structure according to the present invention comprises a core containing copper and a shell containing silver on the core, wherein the ratio (D/L) of the diameter (D) of the core to the thickness (L) of the shell is 10-60, the thickness of the shell is 5-40 nm, and the peak intensity (I1) of the Ag 3d5/2 peak of silver and the peak intensity (I2) of the Cu 2p3/2 peak of copper in the X-ray photoelectron spectrum satisfy the following Equation 1: I2/I1 ? 0.
    Type: Application
    Filed: July 20, 2022
    Publication date: September 26, 2024
    Applicant: BIONEER CORPORATION
    Inventors: Han-Oh PARK, Jae-Ha KIM, Jun Pyo KIM, Kug Jin YUN
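    The recited geometry (shell thickness 5-40 nm, ratio D/L of 10-60) lends itself to a quick numeric check. The sketch below is illustrative only; the function name and example values are invented, and the truncated Equation 1 on the XPS peak intensities is not evaluated.

      # Hypothetical check of the claimed core-shell geometry ranges:
      # shell thickness 5-40 nm and ratio D/L between 10 and 60.
      def in_claimed_range(core_diameter_nm: float, shell_thickness_nm: float) -> bool:
          """Return True if (D, L) satisfies the ranges recited in the abstract."""
          ratio = core_diameter_nm / shell_thickness_nm   # D/L
          return 5.0 <= shell_thickness_nm <= 40.0 and 10.0 <= ratio <= 60.0

      # Example: a 300 nm copper core with a 10 nm silver shell gives D/L = 30.
      print(in_claimed_range(300.0, 10.0))   # True
      print(in_claimed_range(300.0, 2.0))    # False: the shell is thinner than 5 nm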
  • Patent number: 12101485
    Abstract: Disclosed herein are a video encoding/decoding method and apparatus. The video decoding method according to the present disclosure includes: when a current picture is composed of a plurality of tiles and a current tile among the plurality of tiles is partitioned into a plurality of slices, decoding information on the number of slices in tile that indicates the number of the plurality of slices comprised in the current tile; decoding information on a slice height in tile that indicates a height of the plurality of slices comprised in the current tile; and determining the number of the plurality of slices comprised in the current tile and a height of the plurality of slices comprised in the current tile.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: September 24, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Won Sik Cheong, Kug Jin Yun
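    As an illustration of the decoding steps recited in this abstract (reading the number of slices in a tile and per-slice heights, then deriving each slice's extent), consider the sketch below. The function, its arguments, and the convention that the last slice takes the remaining rows are assumptions made for illustration and do not reproduce the actual bitstream syntax.

      # Hypothetical sketch: derive per-slice heights inside a tile from
      # signalled values, loosely following the steps in the abstract.
      def decode_slices_in_tile(num_slices_in_tile: int,
                                signalled_heights: list,
                                tile_height_in_ctu_rows: int) -> list:
          """Return the height (in CTU rows) of each slice in the current tile."""
          heights = []
          remaining = tile_height_in_ctu_rows
          for i in range(num_slices_in_tile - 1):
              heights.append(signalled_heights[i])
              remaining -= signalled_heights[i]
          heights.append(remaining)          # last slice takes the remaining rows
          return heights

      # Example: a 10-row tile split into 3 slices of heights 4, 3 and 3.
      print(decode_slices_in_tile(3, [4, 3], 10))   # [4, 3, 3]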
  • Patent number: 11838485
    Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: December 5, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Jun Young Jeong, Kug Jin Yun, Hong Chang Shin, Ho Min Eum
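    Purely to illustrate the flag semantics described in this abstract (whether the viewport image can be completely produced from one atlas, or an additional atlas is also needed), a hypothetical decision step might look like the following; the names are invented.

      # Hypothetical sketch of the flag-driven atlas selection in the abstract.
      def select_atlases(primary_atlas, additional_atlases, complete_flag: bool):
          """Return the atlases needed to produce the viewport image.

          complete_flag mirrors the abstract's flag: True means the viewport
          can be completely produced from the primary atlas alone."""
          if complete_flag:
              return [primary_atlas]
          return [primary_atlas] + list(additional_atlases)

      print(select_atlases("atlas0", ["atlas1"], complete_flag=True))    # ['atlas0']
      print(select_atlases("atlas0", ["atlas1"], complete_flag=False))   # ['atlas0', 'atlas1']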
  • Patent number: 11616938
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: March 28, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun
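    As a rough, hypothetical sketch of the pipeline this abstract summarizes (ordering input videos for pruning, extracting patches in that order, packing them into an atlas, and carrying the pruning order in metadata), not the patented procedure itself:

      # Hypothetical sketch of the pruning-order pipeline from the abstract.
      # The 'importance' score and the data structures are invented.
      def process_immersive_video(input_views: dict) -> dict:
          # 1. Determine a priority order of pruning (here: descending importance).
          pruning_order = sorted(input_views,
                                 key=lambda v: input_views[v]["importance"],
                                 reverse=True)
          # 2. Extract patches in that order (placeholder: pass pixels through).
          patches = [{"view": v, "data": input_views[v]["pixels"]}
                     for v in pruning_order]
          # 3. Generate at least one atlas from the extracted patches.
          atlas = {"patches": patches}
          # 4. Encode metadata carrying the pruning order among input videos.
          metadata = {"pruning_order": pruning_order}
          return {"atlases": [atlas], "metadata": metadata}

      views = {"v0": {"importance": 3, "pixels": b""},
               "v1": {"importance": 1, "pixels": b""}}
      print(process_immersive_video(views)["metadata"])   # {'pruning_order': ['v0', 'v1']}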
  • Patent number: 11575935
    Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: February 7, 2023
    Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun, Jun Young Yun, Jong Il Park
  • Publication number: 20230032884
    Abstract: Disclosed herein are a video encoding/decoding method and apparatus. The video decoding method according to the present disclosure includes: when a current picture is composed of a plurality of tiles and a current tile among the plurality of tiles is partitioned into a plurality of slices, decoding information on the number of slices in tile that indicates the number of the plurality of slices comprised in the current tile; decoding information on a slice height in tile that indicates a height of the plurality of slices comprised in the current tile; and determining the number of the plurality of slices comprised in the current tile and a height of the plurality of slices comprised in the current tile.
    Type: Application
    Filed: December 23, 2020
    Publication date: February 2, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Won Sik CHEONG, Kug Jin YUN
  • Publication number: 20230014096
    Abstract: Disclosed herein are an apparatus for estimating a camera pose using multi-view images of a 2D array structure and a method using the same. The method performed by the apparatus includes acquiring multi-view images from a 2D array camera system, forming a 2D image link structure corresponding to the multi-view images in consideration of the geometric structure of the camera system, estimating an initial camera pose based on an adjacent image extracted from the 2D image link structure and a pair of corresponding feature points, and estimating a final camera pose by reconstructing a 3D structure based on the initial camera pose and performing correction so as to minimize a reprojection error of the reconstructed 3D structure.
    Type: Application
    Filed: March 23, 2022
    Publication date: January 19, 2023
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, SOGANG UNIVERSITY RESEARCH FOUNDATION
    Inventors: Joon-Soo KIM, Kug-Jin YUN, Jun-Young JEONG, Suk-Ju KANG, Jung-Hee KIM, Woo-June PARK
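    The final refinement step mentioned here minimizes the reprojection error of the reconstructed 3D structure. As an illustration only (the pinhole camera model, the NumPy dependency, and all names are assumptions rather than the patented method), the error for a single point and camera can be computed as follows.

      import numpy as np

      # Hypothetical sketch: squared reprojection error of one 3D point under a
      # simple pinhole camera with intrinsics K, rotation R and translation t.
      def reprojection_error(point_3d, observed_px, K, R, t):
          """Project point_3d with [R|t] and K, then return the squared pixel
          distance to the observed 2D feature location."""
          p_cam = R @ point_3d + t                 # world -> camera coordinates
          p_img = K @ p_cam
          projected = p_img[:2] / p_img[2]         # perspective division
          return float(np.sum((projected - observed_px) ** 2))

      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
      R, t = np.eye(3), np.zeros(3)
      print(reprojection_error(np.array([0.1, 0.0, 2.0]),
                               np.array([360.0, 240.0]), K, R, t))   # 0.0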
  • Publication number: 20230011343
    Abstract: A conductive paste composition according to the present disclosure contains silver-coated copper nanowires with a core-shell structure; a binder mixture containing a silicone resin binder and a hydrocarbon-based resin binder; and an organic solvent, such that the conductive paste composition has a low sheet resistance and can withstand high temperatures, thereby providing excellent conductivity and electromagnetic wave shielding properties. Furthermore, the conductive paste may be widely used in various fields such as electromagnetic wave shielding, solar cell electrodes, and electronic circuits.
    Type: Application
    Filed: October 31, 2019
    Publication date: January 12, 2023
    Inventors: Han Oh PARK, Jae Ha KIM, Jun Pyo KIM, Kug Jin YUN
  • Patent number: 11483534
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for source videos; extracting patches from the source videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, a first flag indicating whether or not an atlas includes a patch including information on an entire region of a first source video may be encoded into the metadata.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: October 25, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Kug Jin Yun, Jun Young Jeong, Gwang Soon Lee, Hong Chang Shin, Ho Min Eum
  • Patent number: 11405599
    Abstract: An MPEG media transport (MMT) apparatus and method for processing stereoscopic video data are provided. The MMT apparatus includes an asset file generator configured to generate a single asset file that contains all or part of the stereoscopic video data; and a signaling message generator configured to generate a signaling message for delivery or consumption of the stereoscopic video data. At least one of the generated single asset file and the generated signaling message contains stereoscopic video information related to the stereoscopic video data.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: August 2, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jin Young Lee, Kug Jin Yun, Won Sik Cheong
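    To make the two generator outputs in this abstract concrete, the sketch below defines minimal, invented containers for a single asset file and a signaling message that both carry stereoscopic video information; neither mirrors the actual MMT file or message syntax.

      from dataclasses import dataclass, field

      # Hypothetical data shapes for the asset file and signaling message.
      @dataclass
      class StereoscopicAssetFile:
          asset_id: str
          video_payload: bytes
          stereo_info: dict = field(default_factory=dict)   # e.g. frame packing

      @dataclass
      class SignalingMessage:
          asset_id: str
          stereo_info: dict = field(default_factory=dict)

      asset = StereoscopicAssetFile("asset-1", b"\x00\x01",
                                    {"frame_packing": "side_by_side"})
      message = SignalingMessage(asset.asset_id, asset.stereo_info)
      print(message)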
  • Patent number: 11350074
    Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method may include classifying a multiplicity of view videos into a base view and an additional view, generating a residual video for the additional view video classified as an additional view, packing a patch, which is generated based on the residual video, into an atlas video, and generating metadata for the patch.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: May 31, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang Shin, Gwang Soon Lee, Sang Woon Kwak, Kug Jin Yun, Jun Young Jeong
  • Patent number: 11218748
    Abstract: Provided is a method of supporting random access of MPEG data, the method including: obtaining at least one access unit including media data coded through processing including an encapsulation and a packetization; generating at least one media processing unit (MPU) by grouping at least one access unit; determining an initialization flag indicating whether the at least one access unit includes all of the data required for initialization of a decoding process, in the at least one MPU; and inserting the initialization flag into a header of the at least one MPU.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: January 4, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jin Young Lee, Kug Jin Yun, Won Sik Cheong, Nam Ho Hur
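    A minimal sketch of the steps this abstract recites, grouping access units into an MPU and inserting an initialization flag into its header, might look like the following. The classes and the way the flag is derived are assumptions for illustration, not the MMT specification.

      from dataclasses import dataclass

      @dataclass
      class AccessUnit:
          payload: bytes
          carries_init_data: bool     # invented marker for this sketch

      @dataclass
      class MPUHeader:
          initialization_flag: bool   # mirrors the flag described in the abstract

      def build_mpu(access_units: list):
          """Group access units into one MPU; set the header flag only when the
          units include all data required to initialize the decoding process."""
          flag = all(au.carries_init_data for au in access_units)
          return MPUHeader(initialization_flag=flag), access_units

      header, _ = build_mpu([AccessUnit(b"\x01", True), AccessUnit(b"\x02", True)])
      print(header.initialization_flag)   # True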
  • Patent number: 11218744
    Abstract: Provided is a method of processing MPEG data, the method including: obtaining at least one access unit including media data coded through processing including an encapsulation and a packetization; generating at least one media processing unit (MPU) by grouping at least one access unit; determining a duration flag indicating whether duration information of the at least one access unit is valid, in a corresponding MPU; and inserting the duration flag into a header of the corresponding MPU.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: January 4, 2022
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jin Young Lee, Kug Jin Yun, Won Sik Cheong, Nam Ho Hur
  • Patent number: 11212505
    Abstract: Disclosed herein is an immersive video formatting method and apparatus for supporting motion parallax. The immersive video formatting method includes acquiring a basic video at a basic position, acquiring a multiple view video at at least one position different from the basic position, acquiring at least one residual video plus depth (RVD) video using the basic video and the multiple view video, and generating at least one of a packed video plus depth (PVD) video or predetermined metadata using the acquired basic video and the at least one RVD video.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: December 28, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon Lee, Hong Chang Shin, Kug Jin Yun, Jun Young Jeong
  • Publication number: 20210385490
    Abstract: A video decoding method comprises receiving a plurality of atlases and metadata, unpacking patches included in the plurality of atlases based on the plurality of atlases and the metadata, reconstructing view images including an image of a basic view and images of a plurality of additional views, by unpruning the patches based on the metadata, and synthesizing an image of a target playback view based on the view images. The metadata is data related to priorities of the view images.
    Type: Application
    Filed: April 15, 2021
    Publication date: December 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Hong Chang SHIN, Gwang Soon LEE, Ho Min EUM, Jun Young JEONG, Kug Jin YUN
  • Publication number: 20210383122
    Abstract: A method of processing an immersive video includes classifying view images into a basic image and an additional image, performing pruning with respect to view images by referring to a result of classification, generating atlases based on a result of pruning, generating a merged atlas by merging the atlases into one atlas, and generating configuration information of the merged atlas.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 9, 2021
    Inventors: Jun Young JEONG, Kug Jin YUN, Gwang Soon LEE, Hong Chang SHIN, Ho Min EUM
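    As an illustrative sketch (the data structures and configuration fields are invented), the merging step described in this abstract could concatenate the patch lists of several atlases into one merged atlas while recording where each source atlas landed:

      # Hypothetical sketch of merging atlases and generating configuration
      # information for the merged atlas.
      def merge_atlases(atlases: list):
          merged_patches = []
          configuration = []
          for atlas_id, atlas in enumerate(atlases):
              start = len(merged_patches)
              merged_patches.extend(atlas["patches"])
              configuration.append({"source_atlas": atlas_id,
                                    "first_patch": start,
                                    "num_patches": len(atlas["patches"])})
          return {"patches": merged_patches}, configuration

      merged, config = merge_atlases([{"patches": ["p0", "p1"]},
                                      {"patches": ["p2"]}])
      print(config)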
  • Publication number: 20210360218
    Abstract: Disclosed herein is a method for rectifying a 2D multi-view image. The method for rectifying a 2D multi-view image according to an embodiment of the present disclosure may include: uniformly detecting at least one feature point in each region unit, the region units being distinguished by considering a distribution of feature points of a plurality of input images; removing an error of the at least one feature point; determining a corresponding pair, in the vertical or horizontal direction, for the at least one feature point; projecting the at least one feature point onto a projection plane by considering an arrangement relationship of the plurality of input images; determining a disparity error for a corresponding pair of the at least one feature point projected onto the projection plane; and performing image rectification based on the at least one feature point by considering the disparity error.
    Type: Application
    Filed: May 14, 2021
    Publication date: November 18, 2021
    Applicants: Electronics and Telecommunications Research Institute, SOGANG UNIVERSITY RESEARCH & BUSINESS DEVELOPMENT FOUNDATION
    Inventors: Joon Soo KIM, Kug Jin YUN, Jun Young JEONG, Suk Ju KANG, Jung Hee KIM, Yeo Hun YUN
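    One common way to score rectification of a horizontally arranged pair, consistent with the disparity-error step in this abstract, is to measure how far corresponding points drift vertically after rectification; the function below is a hypothetical sketch under that assumption.

      # Hypothetical sketch: rectified corresponding points in a horizontal pair
      # should share (nearly) the same vertical coordinate, so the mean vertical
      # offset can serve as a disparity-error score.
      def mean_vertical_disparity_error(corresponding_pairs: list) -> float:
          """corresponding_pairs: list of ((x_left, y_left), (x_right, y_right))."""
          errors = [abs(left[1] - right[1]) for left, right in corresponding_pairs]
          return sum(errors) / len(errors)

      pairs = [((100.0, 50.0), (90.0, 50.5)),
               ((220.0, 130.0), (205.0, 129.0))]
      print(mean_vertical_disparity_error(pairs))   # 0.75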
  • Publication number: 20210329209
    Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 21, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Gwang Soon LEE, Jun Young JEONG, Kug Jin YUN, Hong Chang SHIN, Ho Min EUM
  • Publication number: 20210218995
    Abstract: A video encoding/decoding method and apparatus is provided. The image decoding method includes acquiring image data of images of a plurality of views, determining a basic view and a plurality of reference views among the plurality of views, determining a pruning order of the plurality of reference views, and parsing the image data based on the pruning order and decoding an image of the basic view and images of the plurality of reference views.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 15, 2021
    Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Hong Chang SHIN, Ho Min EUM, Gwang Soon LEE, Jin Hwan LEE, Jun Young JEONG, Kug Jin YUN, Jong Il PARK, Jun Young YUN
  • Patent number: 11064218
    Abstract: Disclosed herein is an image encoding/decoding method and apparatus for virtual view synthesis. The image decoding method for virtual view synthesis may include decoding texture information and depth information of at least one or more basic view images and at least one or more additional view images from a bit stream and synthesizing a virtual view on the basis of the texture information and the depth information, wherein the basic view image and the additional view image comprise a non-empty region and an empty region, and wherein the synthesizing of the virtual view comprises determining the non-empty region through a specific value in the depth information and a threshold and synthesizing the virtual view by using the determined non-empty region.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: July 13, 2021
    Assignees: Electronics and Telecommunications Research Institute, Poznan University of Technology
    Inventors: Gwang Soon Lee, Jun Young Jeong, Hong Chang Shin, Kug Jin Yun, Marek Domanski, Olgierd Stankiewicz, Dawid Mieloch, Adrian Dziembowski, Adam Grzelka, Jakub Stankowski
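    The abstract above determines the non-empty region by comparing depth information against a specific value and a threshold. A minimal sketch of such a test, with invented names and an invented convention that a designated depth value marks empty samples, is shown below.

      # Hypothetical sketch of the non-empty-region test: samples whose depth
      # differs from a designated "empty" value by more than a threshold are
      # treated as valid for virtual view synthesis.
      def non_empty_mask(depth_row: list, empty_value: float, threshold: float) -> list:
          """Return True for samples considered part of the non-empty region."""
          return [abs(d - empty_value) > threshold for d in depth_row]

      # Example: depth 0.0 marks empty samples; anything beyond 0.5 is kept.
      print(non_empty_mask([0.0, 0.2, 3.1, 7.5], empty_value=0.0, threshold=0.5))
      # [False, False, True, True]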