Patents by Inventor Jong Ho Kim

Jong Ho Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210409687
    Abstract: Provided are an image encoding method and device. When carrying out image encoding for a block within a slice, at least one block among the reconstructed blocks of the slice is set as a reference block. The encoding parameters of the reference block are then identified, and the block to be encoded is encoded adaptively based on those encoding parameters.
    Type: Application
    Filed: September 14, 2021
    Publication date: December 30, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Sung Chang LIM, Jong Ho KIM, Hae Chul CHOI, Hui Yong KIM, Ha Hyun LEE, Jin Ho LEE, Se Yoon JEONG, Suk Hee CHO, Jin Soo CHOI, Jin Woo HONG, Jin Woong KIM
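The parameter-reuse idea in the abstract above can be sketched as follows. The block layout, the nearest-block selection rule, and the parameter names (`qp`, `mode`) are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: reuse the encoding parameters of a reference block
# chosen from already-reconstructed blocks of the slice.
def select_reference_params(reconstructed_blocks, target_pos):
    # pick the nearest already-reconstructed block by L1 distance (assumption)
    ref = min(
        reconstructed_blocks,
        key=lambda b: abs(b["pos"][0] - target_pos[0]) + abs(b["pos"][1] - target_pos[1]),
    )
    return ref["params"]

blocks = [
    {"pos": (0, 0), "params": {"qp": 30, "mode": "intra"}},
    {"pos": (0, 8), "params": {"qp": 28, "mode": "inter"}},
]
params = select_reference_params(blocks, (0, 9))  # nearest block is at (0, 8)
```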
  • Patent number: 11206424
    Abstract: An inter prediction method according to the present invention comprises: a step for deriving reference motion information related to a unit to be decoded in a current picture; and a step for performing motion compensation for the unit to be decoded, using the reference motion information that has been derived. According to the present invention, image encoding/decoding efficiency can be enhanced.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 21, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sung Chang Lim, Hui Yong Kim, Se Yoon Jeong, Suk Hee Cho, Jong Ho Kim, Ha Hyun Lee, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn
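The two steps named in the abstract (deriving reference motion information, then motion-compensating the unit) can be sketched like this. The component-wise median rule and the integer-pel copy are simplifying assumptions, not the claimed derivation:

```python
def derive_reference_mv(neighbor_mvs):
    # component-wise median of neighboring motion vectors (a common rule; assumed here)
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])

def motion_compensate(ref_frame, x, y, w, h, mv):
    # copy a w-by-h block from the reference frame, displaced by mv (integer-pel)
    dx, dy = mv
    return [row[x + dx : x + dx + w] for row in ref_frame[y + dy : y + dy + h]]

ref = [[r * 10 + c for c in range(4)] for r in range(4)]
mv = derive_reference_mv([(1, 0), (1, 1), (2, 1)])
block = motion_compensate(ref, 0, 0, 2, 2, mv)
```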
  • Patent number: 11205257
    Abstract: Disclosed herein are a method and apparatus for measuring video quality based on a perceptually sensitive region. The quality of video may be measured based on a perceptually sensitive region and a change in the perceptually sensitive region. The perceptually sensitive region includes a spatial perceptually sensitive region, a temporal perceptually sensitive region, and a spatio-temporal perceptually sensitive region. Perceptual weights are applied to a detected perceptually sensitive region and a change in the detected perceptually sensitive region. Distortion is calculated based on the perceptually sensitive region and the change in the perceptually sensitive region, and a result of quality measurement for a video is generated based on the calculated distortion.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: December 21, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Se-Yoon Jeong, Dae-Yeol Lee, Seung-Hyun Cho, Hyunsuk Ko, Youn-Hee Kim, Jong-Ho Kim, Jin-Wuk Seok, Joo-Young Lee, Woong Lim, Hui-Yong Kim, Jin-Soo Choi
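The core of the abstract above — distortion weighted by perceptual sensitivity — can be sketched as a weighted squared error. The weight values and the single-map formulation (rather than separate spatial/temporal/spatio-temporal maps) are assumptions for illustration:

```python
def weighted_distortion(ref, dist, weights):
    # squared error weighted by a per-pixel perceptual-sensitivity map
    total = wsum = 0.0
    for r_row, d_row, w_row in zip(ref, dist, weights):
        for r, d, w in zip(r_row, d_row, w_row):
            total += w * (r - d) ** 2
            wsum += w
    return total / wsum

ref = [[100, 100], [100, 100]]
dist = [[98, 100], [100, 104]]
weights = [[2.0, 1.0], [1.0, 0.5]]  # top-left treated as more sensitive (assumed)
score = weighted_distortion(ref, dist, weights)
```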
  • Publication number: 20210344935
    Abstract: Provided is a transform coefficient scan method including: determining a reference transform block for a decoding target block; deriving a scanning map of the decoding target block using scanning information of the reference transform block; and performing inverse scanning on a transform coefficient of the decoding target block using the derived scanning map. According to the present invention, picture encoding/decoding efficiency may be improved.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Sung Chang LIM, Hui Yong KIM, Se Yoon JEONG, Jong Ho KIM, Ha Hyun LEE, Jin Ho LEE, Jin Soo CHOI, Jin Woong KIM
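The inverse-scanning step in the abstract can be sketched as placing a 1-D coefficient list back into a 2-D block along a scan order. The 2x2 zig-zag order below merely stands in for the scanning map derived from the reference transform block:

```python
def inverse_scan(coeffs, scan_order, size):
    # place 1-D transform coefficients into a 2-D block along the scan order
    block = [[0] * size for _ in range(size)]
    for value, (row, col) in zip(coeffs, scan_order):
        block[row][col] = value
    return block

# assumed 2x2 zig-zag order standing in for the derived scanning map
scan_map = [(0, 0), (0, 1), (1, 0), (1, 1)]
block = inverse_scan([9, 3, 2, 1], scan_map, 2)
```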
  • Patent number: 11166014
    Abstract: Disclosed herein are a method and apparatus for video decoding and a method and apparatus for video encoding. A prediction block for a target block is generated by predicting the target block using a prediction network, and a reconstructed block for the target block is generated based on the prediction block and a reconstructed residual block. The prediction network includes an intra-prediction network and an inter-prediction network and uses a spatial reference block and/or a temporal reference block when it performs prediction. For learning in the prediction network, a loss function is defined, and learning in the prediction network is performed based on the loss function.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: November 2, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Joo-Young Lee, Youn-Hee Kim, Jin-Wuk Seok, Woong Lim, Jong-Ho Kim, Dae-Yeol Lee, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
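The abstract states that a loss function is defined for training the prediction network. A minimal sketch, assuming a plain mean-squared-error loss between the predicted and original blocks (the actual loss in the patent may differ):

```python
def prediction_loss(pred_block, target_block):
    # mean squared error between the network's prediction and the target block
    total, n = 0.0, 0
    for p_row, t_row in zip(pred_block, target_block):
        for p, t in zip(p_row, t_row):
            total += (p - t) ** 2
            n += 1
    return total / n

loss = prediction_loss([[1, 2], [3, 4]], [[1, 2], [3, 6]])
```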
  • Publication number: 20210306622
    Abstract: A method for coding image information includes generating prediction information by predicting information on a current coding unit, and determining whether the information on the current coding unit is the same as the prediction information. When the information on the current coding unit is the same as the prediction information, a flag indicating that the information on the current coding unit is the same as the prediction information is coded and transmitted. When the information on the current coding unit is not the same as the prediction information, a flag indicating that the information on the current coding unit is not the same as the prediction information and the information on the current coding unit are coded and transmitted.
    Type: Application
    Filed: June 10, 2021
    Publication date: September 30, 2021
    Applicants: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Se Yoon JEONG, Hui Yong KIM, Sung Chang LIM, Jin Ho LEE, Ha Hyun LEE, Jong Ho KIM, Jin Soo CHOI, Jin Woong KIM, Chie Teuk AHN, Gwang Hoon PARK, Kyung Yong KIM, Han Soo LEE, Tae Ryong KIM
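The flag-based signalling described in the abstract can be sketched as an encode/decode round trip. The payload layout is an assumption; only the flag logic follows the abstract:

```python
def encode_unit_info(info, predicted):
    # matching info is signalled with a single flag; otherwise the flag
    # is followed by the explicit coding-unit information
    if info == predicted:
        return {"flag": 1}
    return {"flag": 0, "info": info}

def decode_unit_info(payload, predicted):
    return predicted if payload["flag"] == 1 else payload["info"]

pred = {"mode": "inter", "depth": 2}
hit = encode_unit_info({"mode": "inter", "depth": 2}, pred)   # flag only
miss = encode_unit_info({"mode": "intra", "depth": 1}, pred)  # flag + info
```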
  • Publication number: 20210279912
    Abstract: An encoding apparatus extracts features of an image by applying multiple padding operations and multiple downscaling operations to an image represented by data and transmits feature information indicating the features to a decoding apparatus. The multiple padding operations and the multiple downscaling operations are applied to the image in an order in which one padding operation is applied and thereafter one downscaling operation corresponding to the padding operation is applied. A decoding method receives feature information from an encoding apparatus, and generates a reconstructed image by applying multiple upscaling operations and multiple trimming operations to an image represented by the feature information. The multiple upscaling operations and the multiple trimming operations are applied to the image in an order in which one upscaling operation is applied and thereafter one trimming operation corresponding to the upscaling operation is applied.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 9, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Joo-Young Lee, Se-Yoon Jeong, Hyoung-Jin Kwon, Dong-Hyun Kim, Youn-Hee Kim, Jong-Ho Kim, Tae-Jin Lee, Jin-Soo Choi
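One paired padding/downscaling stage and its matching upscaling/trimming stage can be sketched in one dimension. Edge padding, pairwise averaging, and nearest-neighbour upscaling are illustrative choices, not the operations claimed:

```python
def pad_and_downscale(signal):
    # one padding operation (to even length) followed by one 2x downscaling
    if len(signal) % 2:
        signal = signal + [signal[-1]]  # edge padding (assumed)
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]

def upscale_and_trim(features, original_length):
    # one 2x upscaling operation followed by one trimming operation that
    # undoes the padding applied at the encoder
    upscaled = [v for f in features for v in (f, f)]
    return upscaled[:original_length]

features = pad_and_downscale([1, 3, 5])     # padded to length 4, then halved
restored = upscale_and_trim(features, 3)
```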
  • Publication number: 20210274179
    Abstract: Disclosed are a method for determining a color difference component quantization parameter and a device using the method. A method for decoding an image can comprise the steps of: decoding a color difference component quantization parameter offset on the basis of size information of a transform unit; and calculating a color difference component quantization parameter index on the basis of the decoded color difference component quantization parameter offset. Therefore, the present invention enables effective quantization by applying different color difference component quantization parameters according to the size of the transform unit when executing the quantization.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Sung Chang LIM, Hui Yong KIM, Se Yoon JEONG, Jong Ho KIM, Ha Hyun LEE, Jin Ho LEE, Jin Soo CHOI, Jin Woong KIM
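The size-dependent chroma QP calculation in the abstract can be sketched as follows. The offset table and the [0, 51] clipping range are assumptions for illustration, not values from the patent:

```python
def chroma_qp_index(luma_qp, tu_size, size_offsets):
    # apply a per-transform-unit-size offset, clipped to an assumed QP range
    offset = size_offsets.get(tu_size, 0)
    return max(0, min(51, luma_qp + offset))

# hypothetical offset table keyed by transform-unit size
offsets = {4: 1, 8: 0, 16: -1, 32: -2}
qp16 = chroma_qp_index(30, 16, offsets)
```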
  • Publication number: 20210274181
    Abstract: Disclosed are a method for determining a color difference component quantization parameter and a device using the method. A method for decoding an image can comprise the steps of: decoding a color difference component quantization parameter offset on the basis of size information of a transform unit; and calculating a color difference component quantization parameter index on the basis of the decoded color difference component quantization parameter offset. Therefore, the present invention enables effective quantization by applying different color difference component quantization parameters according to the size of the transform unit when executing the quantization.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Sung Chang LIM, Hui Yong KIM, Se Yoon JEONG, Jong Ho KIM, Ha Hyun LEE, Jin Ho LEE, Jin Soo CHOI, Jin Woong KIM
  • Publication number: 20210274180
    Abstract: Disclosed are a method for determining a color difference component quantization parameter and a device using the method. A method for decoding an image can comprise the steps of: decoding a color difference component quantization parameter offset on the basis of size information of a transform unit; and calculating a color difference component quantization parameter index on the basis of the decoded color difference component quantization parameter offset. Therefore, the present invention enables effective quantization by applying different color difference component quantization parameters according to the size of the transform unit when executing the quantization.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Sung Chang LIM, Hui Yong KIM, Se Yoon JEONG, Jong Ho KIM, Ha Hyun LEE, Jin Ho LEE, Jin Soo CHOI, Jin Woong KIM
  • Patent number: 11102494
    Abstract: The method for scanning a transform coefficient of the present invention comprises the steps of: determining a reference transform block for a block to be decoded; deriving a scanning map of the block to be decoded using scanning information of the reference transform block; and executing a reverse-scan on the transform coefficient of the block to be decoded using the derived scanning map. The present invention enhances image encoding/decoding efficiency.
    Type: Grant
    Filed: March 5, 2012
    Date of Patent: August 24, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sung Chang Lim, Hui Yong Kim, Se Yoon Jeong, Jong Ho Kim, Ha Hyun Lee, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim
  • Patent number: 11102471
    Abstract: According to one embodiment of the present invention, a video information encoding method comprises: a step of predicting information of the current coding unit to generate prediction information; and a step of determining whether the information of the current coding unit coincides with the prediction information. If the information of the current coding unit coincides with the prediction information, a flag indicating that the information of the current coding unit coincides with the prediction information is encoded and transmitted. If the information of the current coding unit does not coincide with the prediction information, a flag indicating that the information of the current coding unit does not coincide with the prediction information is encoded and transmitted and the information of the current coding unit is encoded and transmitted.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: August 24, 2021
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Se Yoon Jeong, Hui Yong Kim, Sung Chang Lim, Jin Ho Lee, Ha Hyun Lee, Jong Ho Kim, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Gwang Hoon Park, Kyung Yong Kim, Han Soo Lee, Tae Ryong Kim
  • Patent number: 11082686
    Abstract: According to one embodiment of the present invention, a video information encoding method comprises: a step of predicting information of the current coding unit to generate prediction information; and a step of determining whether the information of the current coding unit coincides with the prediction information. If the information of the current coding unit coincides with the prediction information, a flag indicating that the information of the current coding unit coincides with the prediction information is encoded and transmitted. If the information of the current coding unit does not coincide with the prediction information, a flag indicating that the information of the current coding unit does not coincide with the prediction information is encoded and transmitted and the information of the current coding unit is encoded and transmitted.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: August 3, 2021
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Se Yoon Jeong, Hui Yong Kim, Sung Chang Lim, Jin Ho Lee, Ha Hyun Lee, Jong Ho Kim, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Gwang Hoon Park, Kyung Yong Kim, Han Soo Lee, Tae Ryong Kim
  • Patent number: 11064191
    Abstract: A method for coding image information includes generating prediction information by predicting information on a current coding unit, and determining whether the information on the current coding unit is the same as the prediction information. When the information on the current coding unit is the same as the prediction information, a flag indicating that the information on the current coding unit is the same as the prediction information is coded and transmitted. When the information on the current coding unit is not the same as the prediction information, a flag indicating that the information on the current coding unit is not the same as the prediction information and the information on the current coding unit are coded and transmitted. At the generating of the prediction information, the prediction information is generated by using the information on the coding unit neighboring to the current coding unit.
    Type: Grant
    Filed: October 2, 2013
    Date of Patent: July 13, 2021
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Se Yoon Jeong, Hui Yong Kim, Sung Chang Lim, Jin Ho Lee, Ha Hyun Lee, Jong Ho Kim, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Gwang Hoon Park, Kyung Yong Kim, Han Soo Lee, Tae Ryong Kim
  • Patent number: 11051019
    Abstract: Disclosed are a method for determining a color difference component quantization parameter and a device using the method. A method for decoding an image can comprise the steps of: decoding a color difference component quantization parameter offset on the basis of size information of a transform unit; and calculating a color difference component quantization parameter index on the basis of the decoded color difference component quantization parameter offset. Therefore, the present invention enables effective quantization by applying different color difference component quantization parameters according to the size of the transform unit when executing the quantization.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: June 29, 2021
    Assignee: INTELLECTUAL DISCOVERY CO., LTD.
    Inventors: Sung Chang Lim, Hui Yong Kim, Se Yoon Jeong, Jong Ho Kim, Ha Hyun Lee, Jin Ho Lee, Jin Soo Choi, Jin Woong Kim
  • Publication number: 20210176494
    Abstract: Provided are a method and apparatus for performing intra-prediction using an adaptive filter. The method for performing intra-prediction includes the steps of: determining whether or not to apply a first filter to a reference pixel value on the basis of information of a neighboring block of a current block; applying the first filter to the reference pixel value when it is determined to apply the first filter; performing intra-prediction on the current block on the basis of the reference pixel value; determining, on the basis of the information of the neighboring block, whether or not to apply a second filter to the prediction value obtained by the intra-prediction, according to each prediction mode of the current block; and applying the second filter to the prediction value according to each prediction mode of the current block when it is determined to apply the second filter.
    Type: Application
    Filed: February 10, 2021
    Publication date: June 10, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin Ho LEE, Hui Yong KIM, Se Yoon JEONG, Suk Hee CHO, Ha Hyun LEE, Jong Ho KIM, Sung Chang LIM, Jin Soo CHOI, Jin Woong KIM, Chie Teuk AHN
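The first-filter stage of the abstract can be sketched with a simple reference-pixel smoothing filter feeding a DC intra prediction. The [1, 2, 1]/4 filter and the DC mode are stand-ins chosen for illustration, not the adaptive filters claimed:

```python
def smooth(pixels):
    # [1, 2, 1] / 4 low-pass filter over the reference pixels (assumed filter)
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        out[i] = (pixels[i - 1] + 2 * pixels[i] + pixels[i + 1]) // 4
    return out

def intra_predict_dc(ref_pixels, apply_first_filter, size):
    # optionally filter the reference pixels, then fill the block with their mean
    refs = smooth(ref_pixels) if apply_first_filter else list(ref_pixels)
    dc = sum(refs) // len(refs)
    return [[dc] * size for _ in range(size)]

pred = intra_predict_dc([10, 20, 60, 20], True, 2)
```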
  • Patent number: 11025901
    Abstract: According to one embodiment of the present invention, a video information encoding method comprises: a step of predicting information of the current coding unit to generate prediction information; and a step of determining whether the information of the current coding unit coincides with the prediction information. If the information of the current coding unit coincides with the prediction information, a flag indicating that the information of the current coding unit coincides with the prediction information is encoded and transmitted. If the information of the current coding unit does not coincide with the prediction information, a flag indicating that the information of the current coding unit does not coincide with the prediction information is encoded and transmitted and the information of the current coding unit is encoded and transmitted.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: June 1, 2021
    Assignees: Electronics and Telecommunications Research Institute, University-Industry Cooperation Group of Kyung Hee University
    Inventors: Se Yoon Jeong, Hui Yong Kim, Sung Chang Lim, Jin Ho Lee, Ha Hyun Lee, Jong Ho Kim, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn, Gwang Hoon Park, Kyung Yong Kim, Han Soo Lee, Tae Ryong Kim
  • Publication number: 20210160485
    Abstract: Disclosed herein is an intra prediction method including: deriving neighbor prediction mode information from a left neighbor prediction mode and an upper neighbor prediction mode; deriving an intra prediction mode for a decoding target unit by using the derived neighbor prediction mode information; and performing intra prediction on the decoding target unit based on the intra prediction mode. According to exemplary embodiments of the present invention, image encoding/decoding efficiency can be improved.
    Type: Application
    Filed: January 29, 2021
    Publication date: May 27, 2021
    Inventors: Ha Hyun Lee, Hui Yong Kim, Sung Chang Lim, Jong Ho Kim, Jin Ho Lee, Se Yoon Jeong, Suk Hee Cho, Jin Soo Choi, Jin Woong Kim, Chie Teuk Ahn
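Deriving an intra mode from the left and upper neighbor modes can be sketched as building a most-probable-mode list, HEVC-style. This is a simplified rule assumed for illustration, not the exact derivation in the patent:

```python
def derive_mpm_list(left_mode, above_mode, num_modes=35):
    # simplified most-probable-mode list from neighbor prediction modes
    if left_mode == above_mode:
        # neighbors agree: that mode plus its two angular neighbors
        return [left_mode,
                (left_mode + num_modes - 1) % num_modes,
                (left_mode + 1) % num_modes]
    # neighbors differ: both modes plus the first mode not already present
    third = next(m for m in (0, 1, 2) if m not in (left_mode, above_mode))
    return [left_mode, above_mode, third]

same = derive_mpm_list(10, 10)
diff = derive_mpm_list(0, 1)
```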
  • Patent number: 11019355
    Abstract: An inter-prediction method and apparatus uses a reference frame generated based on deep learning. In the inter-prediction method and apparatus, a reference frame is selected, and a virtual reference frame is generated based on the selected reference frame. A reference picture list is configured to include the generated virtual reference frame, and inter prediction for a target block is performed based on the virtual reference frame. The virtual reference frame may be generated based on a deep-learning network architecture, and may be generated based on video interpolation and/or video extrapolation that use the selected reference frame.
    Type: Grant
    Filed: April 3, 2019
    Date of Patent: May 25, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung-Hyun Cho, Je-Won Kang, Na-Young Kim, Jung-Kyung Lee, Joo-Young Lee, Hyunsuk Ko, Youn-Hee Kim, Jong-Ho Kim, Jin-Wuk Seok, Dae-Yeol Lee, Woong Lim, Se-Yoon Jeong, Hui-Yong Kim, Jin-Soo Choi
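The video-interpolation path in the abstract can be sketched with a pixel-wise average of two reference frames standing in for the deep-learning network (the actual network is learned, not a fixed average):

```python
def virtual_reference_frame(frame_a, frame_b):
    # pixel-wise temporal interpolation standing in for the learned model
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# the virtual frame would be appended to the reference picture list
vframe = virtual_reference_frame([[10, 20]], [[30, 40]])
```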
  • Publication number: 20210136416
    Abstract: Disclosed herein are a video decoding method and apparatus and a video encoding method and apparatus. A transformed block is generated by performing a first transformation that uses a prediction block for a target block. A reconstructed block for the target block is generated by performing a second transformation that uses the transformed block. The prediction block may be a block present in a reference image, or a reconstructed block present in a target image. The first transformation and the second transformation may be respectively performed by neural networks. Since each transformation is automatically performed by the corresponding neural network, information required for a transformation may be excluded from a bitstream.
    Type: Application
    Filed: November 27, 2018
    Publication date: May 6, 2021
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Youn-Hee KIM, Hui-Yong KIM, Seung-Hyun CHO, Jin-Wuk SEOK, Joo-Young LEE, Woong LIM, Jong-Ho KIM, Dae-Yeol LEE, Se-Yoon JEONG, Jin-Soo CHOI
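The two-stage pipeline in the abstract — a first transformation from the prediction block to a transformed block, then a second transformation to the reconstructed block — can be sketched with fixed invertible maps standing in for the two neural networks (which, per the abstract, are learned rather than hand-written):

```python
def first_transformation(prediction_block):
    # stand-in for the first neural network: prediction block -> transformed block
    return [[2 * v + 1 for v in row] for row in prediction_block]

def second_transformation(transformed_block):
    # stand-in for the second neural network: transformed block -> reconstructed block
    return [[(v - 1) // 2 for v in row] for row in transformed_block]

recon = second_transformation(first_transformation([[5, 7], [9, 11]]))
```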