Patents by Inventor Tae Young Na

Tae Young Na has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230139569
    Abstract: A method of inter-predicting a current block using any one of a plurality of bi-prediction modes is disclosed. The method comprises decoding, from a bitstream, mode information indicating whether a first mode included in the plurality of bi-prediction modes is applied to the current block. When the mode information indicates that the first mode is applied to the current block, the method further comprises: decoding, from the bitstream, first motion information including differential motion vector information and predicted motion vector information for a first motion vector, and second motion information that does not include at least a portion of the predicted motion vector information and differential motion vector information for a second motion vector; and deriving the first motion vector based on the first motion information, and deriving the second motion vector based on both at least a portion of the first motion information and the second motion information.
    Type: Application
    Filed: December 27, 2022
    Publication date: May 4, 2023
    Inventors: Jae Il KIM, Sun Young LEE, Tae Young NA, Se Hoon SON, Jae Seob SHIN
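The abstract of publication 20230139569 above describes signaling full motion information (predictor plus difference) for only the first of two motion vectors and deriving the second from it. The Python sketch below shows one possible interpretation, in which the unsignaled second difference is taken as the mirror of the first, similar in spirit to symmetric-MVD designs; the mirroring rule and all names are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch only: derive both motion vectors of a bi-predicted block
# when the second motion vector difference is not signaled. The mirroring rule
# is an assumption, not the method claimed in the application.
from dataclasses import dataclass

@dataclass
class MotionVector:
    x: int
    y: int

def derive_bi_prediction_mvs(mvp0: MotionVector, mvd0: MotionVector,
                             mvp1: MotionVector) -> tuple[MotionVector, MotionVector]:
    """The first motion vector uses its own predictor and signaled difference;
    the second reuses a portion of the first motion information (here, the
    first difference mirrored onto the second predictor)."""
    mv0 = MotionVector(mvp0.x + mvd0.x, mvp0.y + mvd0.y)   # fully signaled
    mv1 = MotionVector(mvp1.x - mvd0.x, mvp1.y - mvd0.y)   # derived, no second MVD
    return mv0, mv1

if __name__ == "__main__":
    mv0, mv1 = derive_bi_prediction_mvs(MotionVector(4, -2), MotionVector(1, 1),
                                        MotionVector(-3, 5))
    print(mv0, mv1)   # MotionVector(x=5, y=-1) MotionVector(x=-4, y=4)
```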
  • Publication number: 20230132003
    Abstract: A method of inter-predicting a current block using any one of a plurality of bi-prediction modes is disclosed. The method comprises decoding, from a bitstream, mode information indicating whether a first mode included in the plurality of bi-prediction modes is applied to the current block. When the mode information indicates that the first mode is applied to the current block, the method further comprises: decoding, from the bitstream, first motion information including differential motion vector information and predicted motion vector information for a first motion vector, and second motion information that does not include at least a portion of the predicted motion vector information and differential motion vector information for a second motion vector; and deriving the first motion vector based on the first motion information, and deriving the second motion vector based on both at least a portion of the first motion information and the second motion information.
    Type: Application
    Filed: December 27, 2022
    Publication date: April 27, 2023
    Inventors: Jae Il KIM, Sun Young LEE, Tae Young NA, Se Hoon SON, Jae Seob SHIN
  • Publication number: 20230105142
    Abstract: A filtering method comprises: determining at least one block boundary to which filtering is to be applied in a reconstructed image; setting a boundary strength of the block boundary based on the prediction modes, among a plurality of prediction modes, of two target luma blocks forming the block boundary, a type of the block boundary, and preset conditions; calculating an average value of the quantization parameters respectively applied to the target luma blocks, and deriving a variable by adding the average value to an offset value derived from offset information encoded at the level of a sequence parameter set, which is a header referenced in common by the pictures belonging to the sequence; and determining whether to perform the filtering on the block boundary, and performing the filtering, based on the set boundary strength and the variable.
    Type: Application
    Filed: November 25, 2022
    Publication date: April 6, 2023
    Inventors: Tae Young NA, Sun Young LEE, Jae Il KIM
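As a rough illustration of the derivation in the abstract of publication 20230105142 above, the sketch below sets a boundary strength from the two blocks' prediction modes and adds a sequence-level offset to the rounded average of their quantization parameters. The specific rules (intra implies strength 2, the QP range, the example values) are assumptions borrowed from common deblocking designs, not the claimed method.

```python
# Illustrative sketch only: boundary strength from prediction modes, and a
# deblocking variable from the averaged block QPs plus an SPS-level offset.
def boundary_strength(mode_p: str, mode_q: str, has_coded_coeffs: bool) -> int:
    """Set the strength of a block boundary from the prediction modes of the
    two luma blocks that form it, plus one extra condition (assumed rules)."""
    if mode_p == "intra" or mode_q == "intra":
        return 2                      # strongest filtering next to an intra block
    if has_coded_coeffs:
        return 1                      # weaker filtering
    return 0                          # no filtering

def deblocking_qp_variable(qp_p: int, qp_q: int, sps_qp_offset: int,
                           qp_max: int = 63) -> int:
    """Average the quantization parameters of the two blocks and add the
    offset signaled once for the sequence (sequence parameter set level)."""
    avg_qp = (qp_p + qp_q + 1) >> 1   # rounded average of the two QPs
    return max(0, min(qp_max, avg_qp + sps_qp_offset))

if __name__ == "__main__":
    bs = boundary_strength("intra", "inter", has_coded_coeffs=True)
    qp_var = deblocking_qp_variable(qp_p=30, qp_q=34, sps_qp_offset=2)
    print(bs, qp_var)   # 2 34
```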
  • Publication number: 20230095255
    Abstract: A filtering method comprises: determining at least one block boundary to which filtering is to be applied in a reconstructed image; setting a boundary strength of the block boundary based on the prediction modes, among a plurality of prediction modes, of two target luma blocks forming the block boundary, a type of the block boundary, and preset conditions; calculating an average value of the quantization parameters respectively applied to the target luma blocks, and deriving a variable by adding the average value to an offset value derived from offset information encoded at the level of a sequence parameter set, which is a header referenced in common by the pictures belonging to the sequence; and determining whether to perform the filtering on the block boundary, and performing the filtering, based on the set boundary strength and the variable.
    Type: Application
    Filed: November 25, 2022
    Publication date: March 30, 2023
    Inventors: Tae Young NA, Sun Young LEE, Jae Il KIM
  • Publication number: 20230090641
    Abstract: A filtering method comprises: determining at least one block boundary to which filtering is to be applied in a reconstructed image; setting a boundary strength of the block boundary based on the prediction modes, among a plurality of prediction modes, of two target luma blocks forming the block boundary, a type of the block boundary, and preset conditions; calculating an average value of the quantization parameters respectively applied to the target luma blocks, and deriving a variable by adding the average value to an offset value derived from offset information encoded at the level of a sequence parameter set, which is a header referenced in common by the pictures belonging to the sequence; and determining whether to perform the filtering on the block boundary, and performing the filtering, based on the set boundary strength and the variable.
    Type: Application
    Filed: November 25, 2022
    Publication date: March 23, 2023
    Inventors: Tae Young NA, Sun Young LEE, Jae Il KIM
  • Publication number: 20230087432
    Abstract: A filtering method comprises: determining at least one block boundary to which filtering is to be applied in a reconstructed image; setting a boundary strength of the block boundary based on the prediction modes, among a plurality of prediction modes, of two target luma blocks forming the block boundary, a type of the block boundary, and preset conditions; calculating an average value of the quantization parameters respectively applied to the target luma blocks, and deriving a variable by adding the average value to an offset value derived from offset information encoded at the level of a sequence parameter set, which is a header referenced in common by the pictures belonging to the sequence; and determining whether to perform the filtering on the block boundary, and performing the filtering, based on the set boundary strength and the variable.
    Type: Application
    Filed: November 25, 2022
    Publication date: March 23, 2023
    Inventors: Tae Young NA, Sun Young LEE, Jae Il KIM
  • Publication number: 20230057302
    Abstract: Disclosed are a method and an apparatus for encoding/decoding a video. According to an embodiment, provided is a method of setting a level for each of one or more regions, including: decoding, from a bitstream, a definition syntax element related to level definition and a designation syntax element related to target designation; defining one or more levels based on the definition syntax element; and setting, for a target region designated by the designation syntax element, a target level designated by the designation syntax element from among the defined levels.
    Type: Application
    Filed: October 20, 2022
    Publication date: February 23, 2023
    Inventors: Jeong-yeon LIM, Jae Seob SHIN, Sun Young LEE, Se Hoon SON, Tae Young NA, Jae Il KIM
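The abstract of publication 20230057302 above separates level definition from level designation. The sketch below shows that two-step structure with plain Python data; the dictionary-based "syntax" is purely hypothetical and stands in for the bitstream syntax elements of the application.

```python
# Illustrative sketch only: first define a set of levels, then designate one of
# the defined levels for each designated target region.
def set_region_levels(definition: list[str],
                      designations: list[tuple[int, int]]) -> dict[int, str]:
    """`definition` lists the defined levels (from a definition syntax element);
    each designation is a (region_id, level_index) pair (from a designation
    syntax element) selecting one defined level for a target region."""
    levels = dict(enumerate(definition))          # the defined levels
    region_levels = {}
    for region_id, level_idx in designations:
        region_levels[region_id] = levels[level_idx]   # target level for target region
    return region_levels

if __name__ == "__main__":
    defined = ["Level 3.1", "Level 4.1", "Level 5.1"]
    print(set_region_levels(defined, [(0, 2), (1, 0)]))
    # {0: 'Level 5.1', 1: 'Level 3.1'}
```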
  • Publication number: 20230054735
    Abstract: Disclosed are a method and an apparatus for encoding/decoding a video. According to an embodiment, provided is a method of setting a level for each of one or more regions, including: decoding, from a bitstream, a definition syntax element related to level definition and a designation syntax element related to target designation; defining one or more levels based on the definition syntax element; and setting, for a target region designated by the designation syntax element, a target level designated by the designation syntax element from among the defined levels.
    Type: Application
    Filed: October 20, 2022
    Publication date: February 23, 2023
    Inventors: Jeong-yeon LIM, Jae Seob SHIN, Sun Young LEE, Se Hoon SON, Tae Young NA, Jae Il KIM
  • Publication number: 20230055162
    Abstract: Disclosed are a method and an apparatus for encoding/decoding a video. According to an embodiment, provided is a method of setting a level for each of one or more regions, including: decoding, from a bitstream, a definition syntax element related to level definition and a designation syntax element related to target designation; defining one or more levels based on the definition syntax element; and setting, for a target region designated by the designation syntax element, a target level designated by the designation syntax element from among the defined levels.
    Type: Application
    Filed: October 20, 2022
    Publication date: February 23, 2023
    Inventors: Jeong-yeon LIM, Jae Seob SHIN, Sun Young LEE, Se Hoon SON, Tae Young NA, Jae Il KIM
  • Patent number: 11575904
    Abstract: A method of inter-predicting a current block using any one of a plurality of bi-prediction modes is disclosed. The method comprises decoding, from a bitstream, mode information indicating whether a first mode included in the plurality of bi-prediction modes is applied to the current block. When the mode information indicates that the first mode is applied to the current block, the method further comprises: decoding, from the bitstream, first motion information including differential motion vector information and predicted motion vector information for a first motion vector, and second motion information that does not include at least a portion of the predicted motion vector information and differential motion vector information for a second motion vector; and deriving the first motion vector based on the first motion information, and deriving the second motion vector based on both at least a portion of the first motion information and the second motion information.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: February 7, 2023
    Assignee: SK TELECOM CO., LTD.
    Inventors: Jae Il Kim, Sun Young Lee, Tae Young Na, Se Hoon Son, Jae Seob Shin
  • Patent number: 11546589
    Abstract: A filtering method comprises: determining at least one block boundary to which filtering is to be applied in a reconstructed image; setting a boundary strength of the block boundary based on the prediction modes, among a plurality of prediction modes, of two target luma blocks forming the block boundary, a type of the block boundary, and preset conditions; calculating an average value of the quantization parameters respectively applied to the target luma blocks, and deriving a variable by adding the average value to an offset value derived from offset information encoded at the level of a sequence parameter set, which is a header referenced in common by the pictures belonging to the sequence; and determining whether to perform the filtering on the block boundary, and performing the filtering, based on the set boundary strength and the variable.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: January 3, 2023
    Assignee: SK TELECOM CO., LTD.
    Inventors: Tae Young Na, Sun Young Lee, Jae Il Kim
  • Patent number: 11509937
    Abstract: Disclosed are a method and an apparatus for encoding/decoding a video. According to an embodiment, provided is a method of setting a level for each of one or more regions, including: decoding, from a bitstream, a definition syntax element related to level definition and a designation syntax element related to target designation; defining one or more levels based on the definition syntax element; and setting, for a target region designated by the designation syntax element, a target level designated by the designation syntax element from among the defined levels.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: November 22, 2022
    Assignee: SK TELECOM CO., LTD.
    Inventors: Jeong-yeon Lim, Jae Seob Shin, Sun Young Lee, Se Hoon Son, Tae Young Na, Jae Il Kim
  • Publication number: 20220360791
    Abstract: Disclosed are an inter-prediction method and a video decoding device. One embodiment of the present invention provides an inter-prediction method executed in a video decoding device, including: deriving a motion vector of a current block based on motion information decoded from a bitstream; acquiring reference samples of a first reference block by using the motion vector, wherein reference samples of an external region of the first reference block located outside the reference picture are acquired from a region within the reference picture that corresponds to the external region; and predicting the current block based on the acquired reference samples.
    Type: Application
    Filed: July 14, 2022
    Publication date: November 10, 2022
    Inventors: Jeong Yeon LIM, Se Hoon SON, Sun Young LEE, Tae Young NA
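For the abstract of publication 20220360791 above, the sketch below illustrates one way reference samples that fall outside the reference picture can be replaced by samples from a corresponding in-picture region: horizontal wrap-around of the kind used for 360-degree video. The wrap/clamp rule is an assumption chosen for illustration; the application may define the correspondence differently.

```python
# Illustrative sketch only: fetch a reference block, substituting corresponding
# in-picture samples for the region that lies outside the reference picture.
def fetch_reference_sample(picture: list[list[int]], x: int, y: int) -> int:
    """Return the sample at (x, y); out-of-picture coordinates are mapped back
    inside (assumed rule: horizontal wrap, vertical clamp)."""
    height, width = len(picture), len(picture[0])
    x = x % width                        # wrap horizontally into the picture
    y = min(max(y, 0), height - 1)       # clamp vertically to the picture
    return picture[y][x]

def fetch_reference_block(picture: list[list[int]],
                          x0: int, y0: int, w: int, h: int) -> list[list[int]]:
    """Gather the w x h reference block addressed by the motion vector."""
    return [[fetch_reference_sample(picture, x0 + dx, y0 + dy) for dx in range(w)]
            for dy in range(h)]

if __name__ == "__main__":
    pic = [[10 * r + c for c in range(8)] for r in range(4)]
    # Block starting at column 6 extends past the right edge; columns 8 and 9
    # are fetched from the corresponding wrapped-around columns 0 and 1.
    print(fetch_reference_block(pic, x0=6, y0=1, w=4, h=2))
```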
  • Publication number: 20220360790
    Abstract: Disclosed are an inter-prediction method and a video decoding device. One embodiment of the present invention provides an inter-prediction method executed in a video decoding device, including: deriving a motion vector of a current block based on motion information decoded from a bitstream; acquiring reference samples of a first reference block by using the motion vector, wherein reference samples of an external region of the first reference block located outside the reference picture are acquired from a region within the reference picture that corresponds to the external region; and predicting the current block based on the acquired reference samples.
    Type: Application
    Filed: July 14, 2022
    Publication date: November 10, 2022
    Inventors: Jeong Yeon LIM, Se Hoon SON, Sun Young LEE, Tae Young NA
  • Publication number: 20220353516
    Abstract: A video decoding apparatus and a method for adaptively setting a resolution are disclosed. According to one embodiment of the present invention, a method for adaptively setting a resolution on a per-picture basis comprises the steps of: decoding maximum resolution information from a bitstream; decoding, from the bitstream, resolution information about a current picture; and setting the resolution of the current picture on the basis of the maximum resolution information or the resolution information, wherein the resolution information indicates a size less than or equal to that indicated by the maximum resolution information.
    Type: Application
    Filed: June 26, 2020
    Publication date: November 3, 2022
    Inventors: Jae Il KIM, Sun Young LEE, Kyung Hwan KO, A Ram BAEK, Se Hoon SON, Jae Seob SHIN, Tae Young NA
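The abstract of publication 20220353516 above constrains each picture's resolution by a maximum signaled for the sequence. The short sketch below mimics that check; the fallback-to-maximum behavior and the validation error are illustrative assumptions, not the claimed syntax.

```python
# Illustrative sketch only: set a picture's resolution from per-picture
# resolution information, bounded by the signaled maximum resolution.
from typing import Optional

def set_picture_resolution(max_res: tuple[int, int],
                           pic_res: Optional[tuple[int, int]]) -> tuple[int, int]:
    """Use the picture's own resolution when present, otherwise fall back to
    the maximum; reject sizes larger than the maximum (assumed handling)."""
    if pic_res is None:
        return max_res                            # no per-picture info: use the maximum
    if pic_res[0] > max_res[0] or pic_res[1] > max_res[1]:
        raise ValueError("picture resolution exceeds the signaled maximum")
    return pic_res

if __name__ == "__main__":
    print(set_picture_resolution((3840, 2160), (1920, 1080)))   # (1920, 1080)
    print(set_picture_resolution((3840, 2160), None))           # (3840, 2160)
```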
  • Publication number: 20220353510
    Abstract: Disclosed are an inter-prediction method and a video decoding device. One embodiment of the present invention provides an inter-prediction method executed in a video decoding device, including: deriving a motion vector of a current block based on motion information decoded from a bitstream; acquiring reference samples of a first reference block by using the motion vector, wherein reference samples of an external region of the first reference block located outside the reference picture are acquired from a region within the reference picture that corresponds to the external region; and predicting the current block based on the acquired reference samples.
    Type: Application
    Filed: July 14, 2022
    Publication date: November 3, 2022
    Inventors: Jeong Yeon LIM, Se Hoon SON, Sun Young LEE, Tae Young NA
  • Publication number: 20220353511
    Abstract: Disclosed are an inter-prediction method and a video decoding device. One embodiment of the present invention provides an inter-prediction method executed in a video decoding device, including: deriving a motion vector of a current block based on motion information decoded from a bitstream; acquiring reference samples of a first reference block by using the motion vector, wherein reference samples of an external region of the first reference block located outside the reference picture are acquired from a region within the reference picture that corresponds to the external region; and predicting the current block based on the acquired reference samples.
    Type: Application
    Filed: July 14, 2022
    Publication date: November 3, 2022
    Inventors: Jeong Yeon LIM, Se Hoon SON, Sun Young LEE, Tae Young NA
  • Publication number: 20220295111
    Abstract: Disclosed herein is a method for decoding a video, including: determining a coding unit to be decoded by block partitioning; decoding prediction syntaxes for the coding unit, the prediction syntaxes including a skip flag indicating whether the coding unit is encoded in a skip mode; after the decoding of the prediction syntaxes, decoding transform syntaxes including a transformation/quantization skip flag and a coding unit cbf, wherein the transformation/quantization skip flag indicates whether inverse transformation, inverse quantization, and at least part of the in-loop filtering are skipped, and the coding unit cbf indicates whether all coefficients in the luma block and two chroma blocks constituting the coding unit are zero; and reconstructing the coding unit based on the prediction syntaxes and the transform syntaxes.
    Type: Application
    Filed: June 2, 2022
    Publication date: September 15, 2022
    Inventors: Sun Young LEE, Jeong-yeon LIM, Tae Young NA, Gyeong-taek LEE, Jae-seob SHIN, Se Hoon SON, Hyo Song KIM
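The abstract of publication 20220295111 above fixes a parsing order: prediction syntaxes (including the skip flag) are decoded first, and the transform syntaxes (a transform/quantization skip flag and a coding-unit cbf) only afterwards. The sketch below reproduces just that ordering with hypothetical flag names and a stand-in bit reader; it is not the claimed syntax, and the early return for skip mode is an assumption.

```python
# Illustrative sketch only: decode prediction syntaxes before transform
# syntaxes for one coding unit, using a caller-supplied flag reader.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CodingUnitSyntax:
    cu_skip_flag: bool      # prediction syntax: coding unit coded in skip mode
    tq_bypass_flag: bool    # transform syntax: skip transform/quantization (and part of in-loop filtering)
    cu_cbf: bool            # transform syntax: whether any non-zero coefficient is present (assumed polarity)

def decode_coding_unit(read_flag: Callable[[], bool]) -> CodingUnitSyntax:
    """Decode the flags in the order described: prediction syntaxes first,
    transform syntaxes afterwards."""
    cu_skip_flag = read_flag()                         # prediction syntaxes
    if cu_skip_flag:
        return CodingUnitSyntax(True, False, False)    # skip mode: nothing more parsed in this sketch
    tq_bypass_flag = read_flag()                       # transform syntaxes
    cu_cbf = read_flag()
    return CodingUnitSyntax(cu_skip_flag, tq_bypass_flag, cu_cbf)

if __name__ == "__main__":
    bits = iter([False, True, True])                   # not skipped, bypass on, coefficients present
    print(decode_coding_unit(lambda: next(bits)))
```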
  • Publication number: 20220295112
    Abstract: Disclosed herein is a method for decoding a video, including: determining a coding unit to be decoded by block partitioning; decoding prediction syntaxes for the coding unit, the prediction syntaxes including a skip flag indicating whether the coding unit is encoded in a skip mode; after the decoding of the prediction syntaxes, decoding transform syntaxes including a transformation/quantization skip flag and a coding unit cbf, wherein the transformation/quantization skip flag indicates whether inverse transformation, inverse quantization, and at least part of the in-loop filtering are skipped, and the coding unit cbf indicates whether all coefficients in the luma block and two chroma blocks constituting the coding unit are zero; and reconstructing the coding unit based on the prediction syntaxes and the transform syntaxes.
    Type: Application
    Filed: June 2, 2022
    Publication date: September 15, 2022
    Inventors: Sun Young LEE, Jeong-yeon LIM, Tae Young NA, Gyeong-taek LEE, Jae-seob SHIN, Se Hoon SON, Hyo Song KIM
  • Publication number: 20220295110
    Abstract: Disclosed herein is a method for decoding a video, including: determining a coding unit to be decoded by block partitioning; decoding prediction syntaxes for the coding unit, the prediction syntaxes including a skip flag indicating whether the coding unit is encoded in a skip mode; after the decoding of the prediction syntaxes, decoding transform syntaxes including a transformation/quantization skip flag and a coding unit cbf, wherein the transformation/quantization skip flag indicates whether inverse transformation, inverse quantization, and at least part of the in-loop filtering are skipped, and the coding unit cbf indicates whether all coefficients in the luma block and two chroma blocks constituting the coding unit are zero; and reconstructing the coding unit based on the prediction syntaxes and the transform syntaxes.
    Type: Application
    Filed: June 2, 2022
    Publication date: September 15, 2022
    Inventors: Sun Young LEE, Jeong-yeon LIM, Tae Young NA, Gyeong-taek LEE, Jae-seob SHIN, Se Hoon SON, Hyo Song KIM