Abstract: Provided is a method that derives an intra prediction mode of a prediction unit, determines a size of a current block using transform size information, generates a prediction block of the current block according to the intra prediction mode, generates a residual block of the current block according to the intra prediction mode, and generates a reconstructed block of the current block using the prediction block and the residual block. The sizes of the prediction block and the residual block are set equal to the size of a transform unit. Therefore, the intra prediction distance becomes shorter, and the amount of coding bits of the residual block is reduced by generating a prediction block very similar to the original block. Also, the signaling bits required to signal the intra prediction mode decrease because the MPM group is generated adaptively according to the neighboring intra prediction modes.
Abstract: Provided is a method that derives an intra prediction mode of a prediction unit, selects an inverse scan pattern of a current transform unit from among a diagonal scan, a vertical scan and a horizontal scan based on the intra prediction mode and a size of the transform unit, and generates a quantized block by inversely scanning significant flags, coefficient signs and coefficient levels according to the selected inverse scan pattern. If the transform unit is larger than a predetermined size, multiple subsets are generated and inversely scanned. Therefore, the amount of coding bits of the residual block is reduced by determining the scan pattern based on the size of the transform unit and the intra prediction mode, and by applying the scan pattern to each subset. Also, the signaling bits decrease because the MPM group is generated adaptively according to the neighboring intra prediction modes.
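The scan-pattern selection described above can be sketched as follows. This is a hedged illustration, not the claimed method itself: the mode numbering (10 = horizontal, 26 = vertical), the size threshold of 8, and the neighborhood width of 4 modes are assumptions modeled on HEVC-like behavior.

```python
def select_scan_pattern(intra_mode, tu_size):
    """Choose an inverse-scan pattern for an intra transform unit.

    Sketch under assumed conventions: for small transform units,
    modes close to vertical use a horizontal scan and modes close to
    horizontal use a vertical scan; everything else falls back to the
    diagonal scan. Thresholds are illustrative, not from the source.
    """
    HORIZONTAL_MODE, VERTICAL_MODE = 10, 26
    if tu_size <= 8:                      # assumed small-TU threshold
        if abs(intra_mode - VERTICAL_MODE) <= 4:
            return "horizontal"           # near-vertical prediction
        if abs(intra_mode - HORIZONTAL_MODE) <= 4:
            return "vertical"             # near-horizontal prediction
    return "diagonal"
```

A near-vertical mode leaves residual energy in rows, so a horizontal scan groups the significant coefficients together (and symmetrically for near-horizontal modes).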
Abstract: Provided is a method that derives a reference picture index and a motion vector of a current prediction unit, generates a prediction block of the current prediction unit using the reference picture index and the motion vector, generates a residual block through inverse scanning, inverse quantization and inverse transform, and generates reconstructed pixels using the prediction block and the residual block. Prediction pixels of the prediction block are generated using an interpolation filter selected based on the motion vector. Accordingly, the coding efficiency of the motion information is improved by including various merge candidates. Also, the computational complexity of an encoder and a decoder is reduced by selecting different filters according to the locations of the prediction pixels determined by the motion vector.
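The filter selection by pixel location can be sketched as below. This is an illustrative sketch only: the quarter-pel motion-vector units and the half-pel/quarter-pel filter split are assumptions based on common video-coding practice, not details taken from the source.

```python
def select_interpolation_filter(mv_x, mv_y):
    """Pick an interpolation filter from the fractional part of a
    motion vector stored in quarter-pel units (an assumption).

    Integer positions need no filtering; half-pel and quarter-pel
    positions use different (hypothetical) filter sets, which is how
    complexity can vary with the location the motion vector points at.
    """
    frac_x, frac_y = mv_x & 3, mv_y & 3   # fractional quarter-pel part
    if frac_x == 0 and frac_y == 0:
        return "none"                     # integer-pel: copy reference
    if frac_x in (0, 2) and frac_y in (0, 2):
        return "half-pel filter"
    return "quarter-pel filter"
```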
Abstract: Provided is a method that constructs an MPM group including three intra prediction modes, determines the intra prediction mode of the MPM group specified by the prediction mode index as the intra prediction mode of the current prediction unit if the mode group indicator indicates the MPM group, and derives the intra prediction mode of the current prediction unit using the prediction mode index and the three prediction modes of the MPM group if the mode group indicator does not indicate the MPM group. Accordingly, additional bits resulting from an increase in the number of intra prediction modes are effectively reduced. Also, the image compression ratio can be improved by generating a prediction block similar to the original block.
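A three-mode MPM group built from neighboring intra modes can be sketched as follows. This is a hedged sketch of an HEVC-like derivation (0 = planar, 1 = DC, 2..34 = angular, 26 = vertical); the defaulting of unavailable neighbors to DC is assumed to happen before this call.

```python
def build_mpm_group(left_mode, above_mode):
    """Build a three-mode MPM group from the left and above intra modes.

    Illustrative sketch: when the neighbors agree on an angular mode,
    its two nearest angular modes (with wrap-around over 2..34) join
    the group; otherwise the group is filled from planar, DC, vertical.
    """
    PLANAR, DC, VERTICAL = 0, 1, 26
    if left_mode == above_mode:
        if left_mode < 2:                      # both non-angular
            return [PLANAR, DC, VERTICAL]
        return [left_mode,
                2 + ((left_mode + 29) % 32),   # left_mode - 1, wrapped
                2 + ((left_mode - 1) % 32)]    # left_mode + 1, wrapped
    mpm = [left_mode, above_mode]
    # third entry: first of planar, DC, vertical not already present
    for candidate in (PLANAR, DC, VERTICAL):
        if candidate not in mpm:
            mpm.append(candidate)
            break
    return mpm
```

Because the group adapts to the neighbors, a frequently chosen mode is likely to sit in the MPM group and can be signaled with the short prediction mode index, which is the bit saving the abstract describes.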
Abstract: Provided is a method that derives a chroma intra prediction mode of a prediction unit, determines a size of a current chroma block using luma transform size information, generates a chroma prediction block of the current chroma block using the chroma intra prediction mode, generates a chroma residual block of the current chroma block using the chroma intra prediction mode and a chroma quantization parameter, and generates a chroma reconstructed block by adding the chroma prediction block and the chroma residual block. The chroma quantization parameter is generated using a luma quantization parameter and information indicating the relationship between the luma quantization parameter and the chroma quantization parameter. Therefore, the coding efficiency is improved by adjusting the chroma quantization parameter per picture. Also, the amount of bits for transmitting the luma and chroma quantization parameters is reduced by encoding the luma quantization parameter using neighboring luma quantization parameters.
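The luma-to-chroma quantization-parameter relationship can be sketched minimally. This is an assumption-laden illustration: the signaled relationship is modeled as a simple per-picture offset, and the QP range 0..51 and the clipping are conventions borrowed from common codecs, not stated in the source.

```python
def derive_chroma_qp(luma_qp, chroma_qp_offset):
    """Derive a chroma QP from the luma QP and a signaled offset.

    Minimal sketch: the per-picture offset (the "information indicating
    the relationship" in the abstract) is added to the luma QP and the
    result is clipped to an assumed valid range of 0..51.
    """
    return max(0, min(51, luma_qp + chroma_qp_offset))
```

Signaling only the small offset per picture, while predicting the luma QP itself from neighboring luma QPs, is what keeps the QP-related bit cost low.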
Abstract: Provided is a method that checks availability of spatial merge candidates and a temporal merge candidate, constructs a merge candidate list using available spatial and temporal merge candidates, and adds one or more candidates if the number of available spatial and temporal merge candidates is smaller than a predetermined number. The spatial merge candidate is motion information of a spatial merge candidate block; the spatial merge candidate block is a left block, an above block, an above-right block, a left-below block or an above-left block of the current block; and if the current block is a second prediction unit partitioned by asymmetric partitioning, the spatial merge candidate corresponding to a first prediction unit partitioned by the asymmetric partitioning is set as unavailable. Therefore, the coding efficiency of motion information is improved by removing unavailable merge candidates from the merge list and adding new merge candidates.
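The list construction above can be sketched as follows. A hedged sketch, not the claimed procedure: candidates are simple motion-info tuples, unavailable candidates (e.g. the spatial candidate belonging to the first prediction unit of the same asymmetrically partitioned block) are passed as None, and the duplicate removal and zero-motion padding are assumptions modeled on HEVC-like merge.

```python
def build_merge_list(spatial, temporal, max_candidates=5):
    """Construct a merge candidate list from availability-checked inputs.

    `spatial` and `temporal` are lists of motion-info tuples, with None
    marking an unavailable candidate. Available, non-duplicate
    candidates are taken in order; if the list is still short, it is
    padded with (hypothetical) zero-motion candidates.
    """
    merge_list = []
    for cand in spatial + temporal:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
        if len(merge_list) == max_candidates:
            return merge_list
    while len(merge_list) < max_candidates:
        merge_list.append((0, 0))   # assumed zero-motion filler
    return merge_list
```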
Abstract: Provided is an apparatus that derives a luma intra prediction mode and a chroma intra prediction mode, determines a size of a luma transform unit and a size of a chroma transform unit using luma transform size information, adaptively filters the reference pixels of a current luma block based on the luma intra prediction mode and the size of the luma transform unit, generates prediction blocks of the current luma block and the current chroma block, and generates a luma residual block and a chroma residual block. Therefore, the intra prediction distance becomes shorter, the amount of coding bits required to encode the intra prediction modes and the residual blocks of the luma and chroma components is reduced, and the coding complexity is reduced by adaptively encoding the intra prediction modes and adaptively filtering the reference pixels.