METHODS AND SYSTEMS FOR REDUCING BLOCKING ARTIFACTS WITH REDUCED COMPLEXITY FOR SPATIALLY-SCALABLE VIDEO CODING
A method for characterizing a block boundary between neighboring blocks when at least one of said neighboring blocks is encoded using inter-layer texture prediction (I_BL), including: characterizing the block boundary with a first boundary strength indicator when a luma sample from one of the neighboring blocks is encoded using an intra-prediction mode other than the I_BL; characterizing the block boundary with a second boundary strength indicator when no luma sample from the neighboring blocks is encoded using an intra-prediction mode other than the I_BL and any of the neighboring blocks and blocks from which the neighboring blocks are predicted have non-zero transform coefficients; or characterizing the block boundary with a third boundary strength indicator when no luma sample from the neighboring blocks is encoded using an intra-prediction mode other than the I_BL and all of the neighboring blocks and blocks from which the neighboring blocks are predicted have no transform coefficients.
This application is a Divisional of co-pending application Ser. No. 11/350,181, filed on Feb. 7, 2006, which is a regular utility application of U.S. Provisional Application No. 60/663,161, filed Mar. 18, 2005, U.S. Provisional Application No. 60/683,060, filed May 20, 2005, and U.S. Provisional Application No. 60/686,676, filed Jun. 1, 2005; and is a continuation-in-part of U.S. patent application Ser. No. 10/112,683, filed on Mar. 29, 2002, which is a continuation of U.S. patent application Ser. No. 09/817,701, filed on Mar. 26, 2001; which is a continuation-in-part of U.S. patent application Ser. No. 10/799,384, filed on Mar. 11, 2004, which is a continuation of PCT Patent Application No. PCT/JP02/09306, filed on Sep. 11, 2002; which is a continuation of U.S. patent application Ser. No. 09/953,329, filed on Sep. 14, 2001, the entire contents of which are hereby incorporated by reference.
FIELD OF THE INVENTION
Embodiments of the present invention comprise methods and systems for image block boundary filtering control. Some embodiments of the present invention comprise methods and systems for characterizing a block boundary between neighboring blocks within a spatial scalability enhancement layer for controlling deblocking filter operations.
BACKGROUND
H.264/MPEG-4 AVC [Joint Video Team of ITU-T VCEG and ISO/IEC MPEG, “Advanced Video Coding (AVC)—4th Edition,” ITU-T Rec. H.264 and ISO/IEC 14496-10 (MPEG4-Part 10), January 2005], which is incorporated by reference herein, is a video codec specification that uses macroblock prediction followed by residual coding to reduce temporal and spatial redundancy in a video sequence for compression efficiency. Spatial scalability refers to a functionality in which parts of a bitstream may be removed while maintaining rate-distortion performance at any supported spatial resolution. Single-layer H.264/MPEG-4 AVC does not support spatial scalability. Spatial scalability is supported by the Scalable Video Coding (SVC) extension of H.264/MPEG-4 AVC.
The SVC extension of H.264/MPEG-4 AVC [Working Document 1.0 (WD-1.0) (MPEG Doc. N6901) for the Joint Scalable Video Model (JSVM)], which is incorporated by reference herein, is a layered video codec in which the redundancy between spatial layers is exploited by inter-layer prediction mechanisms. Three inter-layer prediction techniques are included in the design of the SVC extension of H.264/MPEG-4 AVC: inter-layer motion prediction, inter-layer residual prediction, and inter-layer intra texture prediction.
Block based motion compensated video coding is used in many video compression standards such as H.261, H.263, H.264, MPEG-1, MPEG-2, and MPEG-4. The lossy compression process can create visual artifacts in the decoded images, referred to as image artifacts. Blocking artifacts occur along the block boundaries in an image and are caused by the coarse quantization of transform coefficients.
Image filtering techniques can be used to reduce artifacts in reconstructed images. Reconstructed images are the images produced after being inverse transformed and decoded. The rule of thumb in these techniques is that image edges should be preserved while the rest of the image is smoothed. Low pass filters are carefully chosen based on the characteristic of a particular pixel or set of pixels surrounding the image edges.
Non-correlated image pixels that extend across image block boundaries are specifically filtered to reduce blocking artifacts. However, this filtering can introduce blurring artifacts into the image. If there are little or no blocking artifacts between adjacent blocks, then low pass filtering needlessly incorporates blurring into the image while at the same time wasting processing resources.
Previously, only dyadic spatial scalability was addressed by SVC. Dyadic spatial scalability refers to configurations in which the ratio of picture dimensions between two successive spatial layers is a power of 2. New tools that manage configurations in which the ratio of picture dimensions between successive spatial layers is not a power of 2 and in which the pictures of the higher level can contain regions that are not present in corresponding pictures of the lower level, referred to as non-dyadic scaling with cropping window, have been proposed.
All of the inter-layer prediction methods comprise picture up-sampling. Picture up-sampling is the process of generating a higher resolution image from a lower resolution image. Some picture up-sampling processes comprise sample interpolation. The prior up-sampling process used in the SVC design was based on the quarter luma sample interpolation procedure specified in H.264 for inter prediction. When applied to spatially scalable coding, the prior method has the following two drawbacks: the interpolation resolution is limited to quarter samples, and thus, is not supportive of non-dyadic scaling; and half-sample interpolation is required in order to get a quarter-sample position making this method computationally cumbersome. A picture up-sampling process that overcomes these limitations is desired.
SUMMARYEmbodiments of the present invention comprise methods and systems for image encoding and decoding. Some embodiments of the present invention comprise methods and systems for characterization of a block boundary between neighboring blocks within a spatial scalability enhancement layer. In some embodiments, at least one of the neighboring blocks is encoded using inter-layer texture prediction. A block boundary may be characterized with a boundary strength indicator when one of said neighboring blocks meets specified criteria.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention taken in conjunction with the accompanying drawings.
Embodiments of the present invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The figures listed above are expressly incorporated as part of this detailed description.
It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the methods and systems of the present invention is not intended to limit the scope of the invention but it is merely representative of the presently preferred embodiments of the invention.
Elements of embodiments of the present invention may be embodied in hardware, firmware and/or software. While exemplary embodiments revealed herein may only describe one of these forms, it is to be understood that one skilled in the art would be able to effectuate these elements in any of these forms while resting within the scope of the present invention.
Conventional filtering processes consider a single reconstructed image frame at a time. Block based video encoding techniques may use motion vectors to estimate the movement of blocks of pixels. The motion-vector information is available at both the encoder and decoder but is not used with conventional filtering processes. For example, if two adjacent blocks share the same motion vector with respect to the same reference image frame (in a multiple-reference-frame system), there is likely no significant difference between the image residuals of the two blocks, and the boundary between them accordingly should not be filtered. In essence, adjacent portions of the image have the same motion with respect to the same reference frame, so no significant difference between the image residuals would be expected. In many cases, the block boundary of these two adjacent blocks may have been filtered in the reference frame and should therefore not be filtered again for the current frame. If a deblock filter is used without considering this motion-vector information, the conventional filtering process might filter the same boundary again and again from frame to frame. This unnecessary filtering not only causes unnecessary blurring but also results in additional filter computations.
For example, blocking artifacts 24 exist between blocks 20 and 22. A low pass filter may be used at the border 26 between blocks 20 and 22 to remove or otherwise reduce the blocking artifacts 24. The low pass filter, for example, selects a group of pixels 28 from both sides of the border 26. An average pixel value, or any other statistical measure, is derived from the group of pixels 28. Then each individual pixel is compared to the average pixel value. Any pixels in group 28 outside of a predetermined range of the average pixel value are then replaced with the average pixel value.
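By way of illustration only, the following C sketch shows one way such a group-of-pixels filter might operate; the group size, the deviation range, and the function name are illustrative assumptions rather than normative values.

```c
#include <stdlib.h>

/* Illustrative sketch of the group-of-pixels filter described above:
 * pixels in a group straddling the block border that fall outside a
 * predetermined range of the group average are replaced by the average.
 * GROUP_SIZE and RANGE are assumed values, not normative ones. */
#define GROUP_SIZE 8   /* e.g., 4 pixels on each side of the border */
#define RANGE      16  /* allowed deviation from the group average  */

static void filter_border_group(unsigned char pix[GROUP_SIZE])
{
    int sum = 0, avg, i;

    for (i = 0; i < GROUP_SIZE; i++)
        sum += pix[i];
    avg = sum / GROUP_SIZE;                 /* average pixel value */

    for (i = 0; i < GROUP_SIZE; i++)
        if (abs(pix[i] - avg) > RANGE)      /* outside the range   */
            pix[i] = (unsigned char)avg;
}
```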
As previously described, if there are few or no blocking artifacts 24 between the adjacent pixels, then the groups of pixels 28 may be needlessly filtered, causing blurring in the image. A skip mode filtering scheme may use the motion estimation and/or compensation information for adjacent image blocks as a basis upon which to selectively filter. If the motion estimation and compensation information is sufficiently similar, the filtering may be skipped. This avoids unnecessary image blurring and significantly reduces the required number of filtering operations.
As an example, it may be determined during the encoding process that adjacent image blocks 30 and 32 have similar coding parameters. Accordingly, the deblock filtering may be skipped for the groups of pixels 34 that extend across the border 31 between adjacent blocks 30 and 32. Skip mode filtering can be used for any horizontal, vertical, or other boundary between adjacent blocks in the image 12.
A motion vector MV1 points from block 44 in the current image frame 40 to an associated block 44′ in the reference image 42. A motion vector MV2 points from block 46 in the current image frame 40 to an associated block 46′ in the reference frame 42. Skip mode filtering checks to see whether the motion vectors MV1 and MV2 point to adjacent blocks in the same reference frame 42. If the motion vectors point to adjacent blocks in the same reference frame (MV1=MV2), then the deblock filtering may be skipped. This motion vector information may be used along with other coding information to decide whether to skip deblock filtering between the two image blocks 44 and 46.
More than one reference frame may be used during the encoding and decoding process. For example, there may be another reference frame 48. The adjacent blocks 44 and 46 may have motion vectors pointing to different reference frames. In one example, the decision to skip deblock filtering depends on whether the motion vectors for the two adjacent blocks point to the same reference frame. For example, image block 44 may have a motion vector 49 pointing to reference frame 48 and image block 46 may have the motion vector MV2 pointing to reference frame 42. The deblock filtering is not skipped in this example because the motion vectors 49 and MV2 point to different reference frames.
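By way of illustration, the skip decision just described may be sketched in C as follows; the BlockInfo structure and its field names are assumptions introduced for illustration only.

```c
#include <stdbool.h>

/* Illustrative sketch: deblock filtering between two adjacent blocks may
 * be skipped only when both blocks use the same motion vector with
 * respect to the same reference frame. */
typedef struct {
    int mv_x, mv_y;   /* motion vector components          */
    int ref_frame;    /* index of the reference frame used */
} BlockInfo;

static bool may_skip_deblocking(const BlockInfo *j, const BlockInfo *k)
{
    if (j->ref_frame != k->ref_frame)
        return false;                        /* different reference frames */
    return j->mv_x == k->mv_x && j->mv_y == k->mv_y;   /* MV1 = MV2 */
}
```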
The D.C. component 52 refers to the lowest frequency transform coefficient in image block 44, for example, the coefficient that represents the average energy in the image block 44. The A.C. components 53 refer to the transform coefficients that represent the higher frequency components in the image block 44, for example, the transform coefficients that represent the large energy differences between pixels in the image block 44.
In one example, the skip mode filtering may be incorporated into the Telecommunications Sector of the International Telecommunication Union (ITU-T) proposed H.26L encoding scheme. The H.26L scheme uses 4×4 integer Discrete Cosine Transform (DCT) blocks. If desired, only the D.C. component of the two adjacent blocks may be checked. However, some limited low frequency A.C. coefficients may likewise be checked, especially when the image blocks are larger sizes, such as 9×9 or 16×16 blocks. For example, the upper D.C. component 52 and the three lower frequency A.C. transform coefficients 53 for block 44″ may be compared with the upper D.C. component 52 and three lower frequency A.C. transform coefficients 53 for block 46″. Different combinations of D.C. and/or any of the A.C. transform coefficients can be used to identify the relative similarity between the two adjacent blocks 44 and 46.
The processor 54 can also receive other coding parameters 55 that are generated during the coding process. These coding parameters include the motion vectors and reference frame information for the adjacent blocks 44 and 46 as previously described. The processor 54 may use some or all of these coding parameters to determine whether or not to skip deblock filtering between adjacent image blocks 44 and 46. Other encoding and transform functions performed on the image may be carried out in the same processor 54 or in a different processing circuit. In the case where all or most of the coding is done in the same processor, the skip mode is simply enabled by setting a skip parameter in the filtering routine.
The encoding section of the codec 60 reconstructs the transformed and quantized image by first Inverse Quantizing (IQ) the transformed image in box 72. The inverse quantized image is then inverse transformed in box 74 to generate a reconstructed residual image. This reconstructed residual block is then added in box 76 to the reference block 81 to generate a reconstructed image block. Generally the reconstructed image is loop filtered in box 78 to reduce blocking artifacts caused by the quantization and transform process. The filtered image is then buffered in box 80 to form reference frames. The frame buffering in box 80 uses the reconstructed reference frames for motion estimation and compensation. The reference block 81 is compared to the input video block in comparator 64. An encoded image is output at node 71 from the encoding section and is then either stored or transmitted.
In a decoder portion of the codec 60, a variable length decoder (VLD) decodes the encoded image in box 82. The decoded image is inverse quantized in box 84 and inverse transformed in box 86. The reconstructed residual image from box 86 is added in the summing box 88 to the reference block 91 before being loop filtered in box 90 to reduce blocking artifacts and buffered in box 92 as reference frames. The reference block 91 is generated from box 92 according to the received motion vector information. The loop filtered output from box 90 can optionally be post filtered in box 94 to further reduce image artifacts before being displayed as a video image in box 96. The skip mode filtering scheme can be performed in any combination of the filtering functions in boxes 78, 90 and 94.
The motion estimation and compensation information available during video coding is used to determine when to skip deblock filtering in boxes 78, 90 and/or 94. Since these coding parameters are already generated during the encoding and decoding process, there are no additional coding parameters that have to be generated or transmitted specially for skip mode filtering.
It is then determined whether the residual coefficients for the two adjacent blocks are similar. If there is no significant difference between the image residuals of the adjacent blocks, for example, if the two blocks j and k have the same or similar D.C. components (dc(j)=dc(k)), then the deblock filtering process in box 104 is skipped. Skip mode filtering then moves to the next inter-block boundary in box 106 and conducts the next comparison in decision box 102. Skip mode filtering can be performed for both horizontally adjacent blocks and vertically adjacent blocks.
In one embodiment, only the reference frame and motion vector information for the adjacent image blocks are used to determine block skipping. In another embodiment, only the D.C. and/or A.C. residual coefficients are used to determine block skipping. In another embodiment, the motion vector, reference frame and residual coefficients are all used to determine block skipping.
The skip mode filtering scheme can be applied to spatially subsampled chrominance channels. For example in a case with 4:2:0 color format sequences, skip mode filtering for block boundaries may only rely on the equality of motion vectors and D.C. components for the luminance component of the image. If the motion vectors and the D.C. components are the same, deblock filtering is skipped for both the luminance and chrominance components of the adjacent image blocks. In another embodiment, the motion vectors and the D.C. components are considered separately for each luminance and chrominance component of the adjacent blocks. In this case, a luminance or chrominance component for adjacent blocks may be deblock filtered while the other luminance or chrominance components for the same adjacent blocks are not deblock filtered.
In contrast to the block-by-block manner of filtering, the present inventors came to the realization that filtering determinations should be made in an edge-by-edge manner together with other information. The other information may include, for example, intra-block encoding of blocks, motion estimation of blocks with residual information, motion estimation of blocks without residual information, and motion estimation of blocks without residuals having sufficient differences. One, two, three, or four of these information characteristics may be used to improve filtering in an edge-by-edge manner. Based upon different sets of characteristics, the filtering may be modified, as desired.
For each block boundary a control parameter is preferably defined, namely, a boundary strength Bs.
If both of the blocks j and k are, at least in part, predicted from a previous or future frame, then the blocks j and k are checked at block 114 to determine if any coefficients are coded. The coefficients may be, for example, discrete cosine transform coefficients. If either of the blocks j and k includes non-zero coefficients, then at least one of the blocks represents a prediction from a previous or future frame together with modifications to the block using the coefficients, generally referred to as residuals. If either of the blocks j and k includes non-zero coefficients (and is motion predicted), then the boundary strength is set to two at block 116. This represents an occurrence where the images are predicted but the prediction is corrected using a residual. Accordingly, the images are likely to include blocking artifacts.
If both of the blocks j and k are motion predicted and do not include non-zero coefficients, generally referred to as residuals, then a determination at block 118 is made to check if the pixels on either side of the boundary are sufficiently different from one another. This may likewise be used to determine if the residuals are sufficiently small. If a sufficient difference exists then a blocking artifact is likely to exist. Initially, a determination is made as to whether the two blocks use different reference frames, namely, R(j)≠R(k). If the blocks j and k are from two different reference frames, then the boundary strength is assigned a value of one at block 120. Alternatively, the absolute difference of the motion vectors of the two image blocks is checked to determine whether it is greater than or equal to 1 pixel in either the vertical or horizontal direction, namely, |V(j,x)−V(k,x)|≧1 pixel or |V(j,y)−V(k,y)|≧1 pixel. Other threshold values may likewise be used, as desired, including less than or greater than depending on the test used. If the absolute difference of the motion vectors is greater than or equal to one, then the boundary strength is assigned a value of one.
If the two blocks j and k are motion predicted, without residuals, are based upon the same frame, and have insignificant differences, then the boundary strength value is assigned a value of zero. If the boundary strength value is assigned a value of zero, the boundary is not filtered or is otherwise adaptively filtered according to the value of the boundary strength. It is to be understood that the system may lightly filter if the boundary strength is zero, if desired.
The value of the boundary strength, namely, one, two, or three, is used to control the pixel value adaptation range in the loop filter. If desired, each different boundary strength may be the basis of a different filtering. For example, in some embodiments, three kinds of filters may be used wherein a first filter is used when Bs=1, a second filter is used when Bs=2, and a third filter is used when Bs=3. It is to be understood that non-filtering may be performed as minimal filtering in comparison to other filtering which results in a more significant difference.
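For illustration, the boundary strength derivation described above may be sketched in C as follows. The structure, the quarter-pel motion vector units (so that a 1-pixel difference equals 4), and the rule assigning Bs=3 when either block is intra coded (corresponding to the step preceding block 114) are assumptions for this sketch, not normative definitions.

```c
#include <stdlib.h>

/* Illustrative sketch of the boundary strength derivation above. */
typedef struct {
    int is_intra;     /* block is intra coded                     */
    int has_coeffs;   /* block carries non-zero (residual) coeffs */
    int ref_frame;    /* reference frame index                    */
    int mv_x, mv_y;   /* motion vector, assumed quarter-pel units */
} Blk;

static int boundary_strength(const Blk *j, const Blk *k)
{
    if (j->is_intra || k->is_intra)
        return 3;                          /* strongest filtering        */
    if (j->has_coeffs || k->has_coeffs)
        return 2;                          /* prediction plus residual   */
    if (j->ref_frame != k->ref_frame)
        return 1;                          /* different reference frames */
    if (abs(j->mv_x - k->mv_x) >= 4 ||     /* >= 1 pixel difference in   */
        abs(j->mv_y - k->mv_y) >= 4)       /* either direction           */
        return 1;
    return 0;                              /* filtering may be skipped   */
}
```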
Skip mode filtering can be used with any system that encodes or decodes multiple image frames, for example, DVD players, video recorders, or any system that transmits image data over a communications channel, such as television channels or the Internet. It is to be understood that the system may use the quantization parameter as a coding parameter, either alone or in combination with other coding parameters. In addition, it is to be understood that the system may be free from using the quantization parameter alone or free from using the quantization parameter at all for purposes of filtering.
The skip mode filtering described above can be implemented with dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware.
For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or described features can be implemented by themselves, or in combination with other operations in either hardware or software.
In some embodiments of the present invention as illustrated in
In some embodiments of the present invention, as illustrated in
The encoded boundary strength may be read from the storage media 1006 and decoded by Bs data decoding portion 1014 to input the decoded boundary strength to image data decoding apparatus 1008. When the decoded boundary strength is utilized in image data decoding apparatus 1008 to perform the adaptive filtering of the present invention, it may not be necessary to repeat the process described above to generate the boundary strength.
In some embodiments of the present invention, as illustrated in
In some embodiments of the present invention, as illustrated in
The encoded boundary strength may be received from the network 1206 and decoded by Bs data decoding portion 1214 to input the decoded boundary strength to image data decoding apparatus 1208. When the decoded boundary strength is utilized in image data decoding apparatus 1208 to perform the adaptive filtering of the present invention, it may not be necessary to repeat the process described above to generate the boundary strength.
Some embodiments of the present invention may be described with reference to
In some embodiments of the present invention, as shown in
Other embodiments of the present invention, as shown in
Further motion vector parameters may be used to determine filtering. In embodiments illustrated in
Further embodiments of the present invention may utilize transform coefficients to determine whether deblock filtering should occur. In reference to
Motion vectors are then compared 184 to determine similarity. If the motion vectors are not similar, deblock filtering may be performed 186. If the motion vectors are similar, the motion vector data is analyzed to determine whether the motion vectors point to the same reference frame. If the motion vectors do not point to the same reference frame 185, filtering may proceed 186.
If the motion vectors point to the same reference frame 185, transform coefficients may be compared to further qualify filtering processes. In this example, DC transform coefficients obtained through Discrete Cosine Transform (DCT) methods or other methods may be compared for the adjacent blocks. If the DC transform coefficients are not similar 187, deblock filtering may be performed 186. If the DC transform coefficients are similar, filtering may be skipped and the methods and systems may proceed to the next step 188.
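The qualification chain just described may be sketched in C as follows; the motion vector equality test and, in particular, the DC similarity threshold are illustrative assumptions, not normative values.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative sketch of the qualification chain above: deblock filtering
 * is performed unless the motion vectors are similar, point to the same
 * reference frame, and the DC transform coefficients are similar. */
#define DC_THRESHOLD 1   /* assumed similarity threshold */

static bool may_skip_filtering(int mv_x_a, int mv_y_a, int ref_a, int dc_a,
                               int mv_x_b, int mv_y_b, int ref_b, int dc_b)
{
    if (mv_x_a != mv_x_b || mv_y_a != mv_y_b)
        return false;                     /* motion vectors not similar  */
    if (ref_a != ref_b)
        return false;                     /* different reference frames  */
    if (abs(dc_a - dc_b) > DC_THRESHOLD)
        return false;                     /* DC coefficients not similar */
    return true;                          /* deblock filtering may be skipped */
}
```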
Still other embodiments of the present invention may utilize AC transform coefficients to determine filtering options. In reference to
AC transform coefficients are more likely to have significance in larger blocks, but can be used in methods utilizing smaller blocks such as 4×4 blocks.
In some embodiments of the present invention, an image may be separated into various luminance and chrominance channels depending on the format of the image and the color space utilized. In the following examples, a YUV color space is described; however, many other formats and color spaces may be used in these embodiments. CIELAB, YCrCb, and other spaces may be used. In alternative embodiments color spaces such as RGB may be used.
Some embodiments of the present invention may be described in relation to
In other related embodiments, illustrated in
Images may be further divided into component channels that generally correspond to luminance and chrominance channels. In some embodiments of the present invention, each channel may be filtered according to parameters unique to that channel.
As an example, embodiments may be described with reference to
As in other embodiments, these channelized embodiments may utilize transform coefficient data to qualify filtering options. As shown in
It should be noted that various combinations of parameters may be employed in qualifying filtering operations in each channel. DC and AC transform coefficients may be utilized for these embodiments. Furthermore, various channels and combinations of channels may be used to determine filtering options and perform filtering. For example, both chrominance channels may be combined and analyzed together in some embodiments. Data and parameters from one channel may also be used to determine filtering options in another channel. For example, parameters taken from the U chrominance channel may be compared to determine filtering options in the V chrominance channel and vice versa.
Some embodiments of the present invention relate to the Scalable Video Coding Extension of H.264/AVC. Some embodiments relate to filtering to address a problem of picture upsampling for spatial scalable video coding. More specifically, some embodiments of the present invention provide an upsampling procedure that is designed for the Scalable Video Coding extension of H.264/MPEG-4 AVC, especially for the Extended Spatial Scalable (ESS) video coding feature adopted in April 2005 by JVT (Joint Video Team of MPEG and VCEG).
Currently, JSVM WD-1.0 [MPEG Doc. N6901], which is incorporated by reference herein, only addresses dyadic spatial scalability, that is, configurations where the ratio between picture width and height (in terms of number of pixels) of two successive spatial layers equals 2. This obviously will be a limitation on more general applications, such as SD to HD scalability for broadcasting.
A tool has been proposed [MPEG Doc. m11669], which is incorporated by reference herein, that provides extended spatial scalability, that is, managing configurations in which the ratio between picture width and height of two successive spatial layers is not necessarily equal to a power of 2 and pictures of a higher level can contain regions (typically around picture borders) that are not present in corresponding pictures of a lower level. This proposal [MPEG Doc. m11669] extended the inter-layer prediction of WD-1.0 [MPEG Doc. N6901] to more generic cases where the ratio between the higher layer and lower layer picture dimensions is not a power of 2.
Embodiments of the present invention provide a method that applies extended spatial scalability, i.e., non-dyadic scaling with a cropping window, at the picture level, which will better fit the needs of more general applications. To support the picture-level adaptation of spatial scalability, embodiments of the present invention provide a further refinement of the inter-layer prediction method heretofore proposed. Additionally, several issues that were not addressed by the prior proposal are also addressed in these embodiments.
For the purposes of this specification and claims, the term “picture” may comprise an array of pixels, a digital image, a subdivision of a digital image, a data channel of a digital image or another representation of image data.
Embodiments of the present invention relate to two or more successive spatial layers, a lower layer (considered as base layer) 253 and a higher layer (considered as enhancement layer) 251. These layers may be linked by the following geometrical relations (shown in
A problem addressed by embodiments of the present invention is the encoding/decoding of macroblocks of the enhancement layer knowing the decoded base layer. A macroblock of an enhancement layer may have either no base layer corresponding block (on borders of the enhancement layer picture) or one to several base layer corresponding macroblocks, as illustrated in
It has been proposed [MPEG Doc. m11669] that wextract and hextract be constrained to be a multiple of 16. This constraint limits the picture-level adaptation. Instead, embodiments of the present invention restrict wextract and hextract to be a multiple of 2. Embodiments of the present invention may further require xorig and yorig to be a multiple of 2 in order to avoid the complexity of adjusting for possible phase shift in chroma up/down sampling. The chroma phase shift problem has not been previously addressed.
The dimensions and other parameters illustrated in
scaled_base_left_offset=xorig
scaled_base_top_offset=yorig
scaled_base_right_offset=wenh−xorig−wextract
scaled_base_bottom_offset=henh−yorig−hextract
scaled_base_width=wextract
scaled_base_height=hextract
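These relations may be computed directly. The following C sketch is a transcription of the six relations above; the structure and function names are introduced for illustration only.

```c
/* Direct transcription of the six geometrical relations above. */
typedef struct {
    int xorig, yorig;        /* cropping-window origin in the enhancement picture */
    int wextract, hextract;  /* cropping-window dimensions                        */
    int wenh, henh;          /* enhancement-layer picture dimensions              */
} LayerGeometry;

typedef struct {
    int left, top, right, bottom, width, height;
} ScaledBase;

static ScaledBase scaled_base(const LayerGeometry *g)
{
    ScaledBase s;
    s.left   = g->xorig;                          /* scaled_base_left_offset   */
    s.top    = g->yorig;                          /* scaled_base_top_offset    */
    s.right  = g->wenh - g->xorig - g->wextract;  /* scaled_base_right_offset  */
    s.bottom = g->henh - g->yorig - g->hextract;  /* scaled_base_bottom_offset */
    s.width  = g->wextract;                       /* scaled_base_width         */
    s.height = g->hextract;                       /* scaled_base_height        */
    return s;
}
```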
Inter-Layer Motion Prediction
A given high layer macroblock can exploit inter-layer prediction using scaled base layer motion data using either “BASE_LAYER_MODE” or “QPEL_REFINEMENT_MODE”. As in WD-1.0 [MPEG Doc. N6901], these macroblock modes indicate that the motion/prediction information including macroblock partitioning is directly derived from the base layer. A prediction macroblock, MB_pred, can be constructed by inheriting motion data from a base layer. When using “BASE_LAYER_MODE”, the macroblock partitioning, as well as the reference indices and motion vectors, are those of the prediction macroblock MB_pred. “QPEL_REFINEMENT_MODE” is similar, but with a quarter-sample motion vector refinement.
It has been proposed to derive MB_pred in the following four steps:
for each 4×4 block of MB_pred, inheritance of motion data from the base layer motion data,
partitioning choice for each 8×8 block of MB_pred,
mode choice for MB_pred, and
motion vector scaling.
However, embodiments of the present invention provide modifications in several equations to support picture-level adaptation.
4×4 Block Inheritance
The co-located macroblock of pixel (x, y) is then the base layer macroblock that contains pixel (xbase, ybase). In the same way, the co-located 8×8 block of pixel (x, y) is the base layer 8×8 block containing pixel (xbase, ybase) and the co-located 4×4 block of pixel (x, y) is the base layer 4×4 block containing pixel (xbase, ybase).
The motion data inheritance process for b may be described as follows:
for each corner c, the reference index r(c,listx) and motion vector mv(c,listx) of each list listx (listx=list0 or list1) are set to those of the co-located base layer 4×4 block
for each corner, if the co-located macroblock does not exist or is in intra mode, then b is set as an intra block
else, for each list listx:
- if none of the corners uses this list, no reference index and motion vector for this list are set to b
- else
- the reference index rb(listx) set for b is the minimum of the existing reference indices of the 4 corners
- the motion vector mvb(listx) set for b is the mean of the existing motion vectors of the 4 corners having the reference index rb(listx), as sketched below
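A minimal C sketch of this inheritance rule follows; the Corner structure and the convention that a reference index of −1 marks a non-existing index are assumptions for illustration, and the integer mean omits any normative rounding.

```c
/* Minimal sketch of the 4x4 inheritance rule above: take the minimum
 * existing corner reference index, then average the motion vectors of
 * the corners carrying that index. */
typedef struct { int ref; int mv_x, mv_y; } Corner;

static int inherit_4x4(const Corner c[4], int *mv_x, int *mv_y)
{
    int rb = -1, n = 0, sx = 0, sy = 0, i;

    for (i = 0; i < 4; i++)            /* minimum existing reference index */
        if (c[i].ref >= 0 && (rb < 0 || c[i].ref < rb))
            rb = c[i].ref;
    if (rb < 0)
        return -1;                     /* no corner uses this list */

    for (i = 0; i < 4; i++)            /* mean MV over corners with rb */
        if (c[i].ref == rb) { sx += c[i].mv_x; sy += c[i].mv_y; n++; }
    *mv_x = sx / n;
    *mv_y = sy / n;
    return rb;                         /* rb(listx) for block b */
}
```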
Once each 4×4 block motion data has been set, a merging process is necessary in order to determine the actual partitioning of the 8×8 block it belongs to and to avoid forbidden configurations. In the following, 4×4 blocks of an 8×8 block are identified as indicated in
For each 8×8 block B, the following process may be applied:
if the 4 4×4 blocks have been classified as intra blocks, B is considered as an intra block.
else, B partitioning choice is achieved:
- The following process for assigning the same reference indices to each 4×4 block is applied for each list listx:
- if no 4×4 block uses this list, no reference index and motion vector of this list are set to B
- else
- the reference index rB(listx) for B is computed as the minimum of the existing reference indices of the 4 4×4 blocks
- the mean motion vector mvmean(listx) of the 4×4 blocks having the same reference index rB(listx) is computed
- 4×4 blocks (1) classified as intra blocks, or (2) not using this list, or (3) having a reference index rb(listx) different from rB(listx) are enforced to have rB(listx) and mvmean(listx) as reference index and motion vector.
- Then the choice of the partitioning mode for B is achieved. Two 4×4 blocks are considered as identical if their motion vectors are identical. The merging process is applied as follows (see the sketch following this list):
- if b1 is identical to b2 and b3 is identical to b4 then
- if b1 is identical to b3 then BLK_8×8 is chosen
- else BLK_8×4 is chosen
- else if b1 is identical to b3 and b2 is identical to b4 then BLK_4×8 is chosen
- else BLK_4×4 is chosen
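The merging rule above may be sketched in C as follows; the structure and enumeration names mirror the BLK_* modes in the text but are otherwise illustrative assumptions.

```c
/* Illustrative sketch of the merging rule above for one 8x8 block.
 * Two 4x4 blocks are treated as identical when their motion vectors
 * are identical. */
typedef struct { int mv_x, mv_y; } MV4x4;
typedef enum { BLK_8x8, BLK_8x4, BLK_4x8, BLK_4x4 } BlkMode;

static int identical(const MV4x4 *a, const MV4x4 *b)
{
    return a->mv_x == b->mv_x && a->mv_y == b->mv_y;
}

static BlkMode choose_partition(const MV4x4 *b1, const MV4x4 *b2,
                                const MV4x4 *b3, const MV4x4 *b4)
{
    if (identical(b1, b2) && identical(b3, b4))
        return identical(b1, b3) ? BLK_8x8 : BLK_8x4;
    if (identical(b1, b3) && identical(b2, b4))
        return BLK_4x8;
    return BLK_4x4;
}
```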
In some embodiments, a process may be achieved to determine an MB_pred mode. In the following, 8×8 blocks 301-304 of the macroblock 300 are identified as indicated in
Two 8×8 blocks are considered as identical blocks if:
One or both of the two 8×8 blocks are classified as intra blocks or
Partitioning mode of both blocks is BLK_8×8 and reference indices and motion vectors of list0 and list1 of each 8×8 block, if they exist, are identical.
The mode choice is done using the following process:
if all 8×8 blocks are classified as intra blocks, then MB_pred is classified as INTRA macroblock
else, MB_pred is an INTER macroblock. Its mode choice is achieved as follows:
- 8×8 blocks classified as intra are enforced to BLK_8×8 partitioning. Their reference indices and motion vectors are computed as follows. Let BINTRA be such an 8×8 block.
- for each list listx
- if no 8×8 block uses this list, no reference index and motion vector of this list are assigned to BINTRA
- else, the following steps are applied:
- a reference index rmin(listx) is computed as the minimum of the existing reference indices of the 8×8 blocks
- a mean motion vector mvmean(listx) of the 4×4 blocks having the same reference index rmin(listx) is computed
- rmin(listx) is assigned to BINTRA and each 4×4 block of BINTRA is enforced to have rmin(listx) and mvmean(listx) as reference index and motion vector.
- Then the choice of the partitioning mode for MB_pred is achieved. Two 8×8 blocks are considered as identical if their partitioning mode is BLK_8×8 and the reference indices and motion vectors of list0 and list1 of each 8×8 block, if they exist, are identical. The merging process is applied as follows:
- if B1 is identical to B2 and B3 is identical to B4 then
- if B1 is identical to B3 then MODE_16×16 is chosen.
- else MODE_16×8 is chosen.
- else if B1 is identical to B3 and B2 is identical to B4 then MODE_8×16 is chosen.
- else MODE_8×8 is chosen.
A motion vector rescaling may be applied to every existing motion vector of the prediction macroblock MB_pred as derived above. A motion vector mv=(dx, dy) may be scaled into the vector mvs=(dsx, dsy) using the following equations:
in which sign[x] is equal to 1 when x is positive, (−1) when x is negative, and 0 when x equals 0. The symbols with subscript “r” represent the geometrical parameters of the corresponding reference picture.
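The rescaling equations themselves are not reproduced above. As a rough, non-normative sketch, a motion vector component may be rescaled by the ratio of the cropping-window dimension to the base-picture dimension with sign-aware rounding; any offset terms accounting for the reference picture geometry (the subscript "r" parameters) are omitted here, and this should not be taken as the normative derivation.

```c
/* Rough, non-normative sketch: rescale one motion vector component by the
 * ratio of the cropping-window dimension (w_extract) to the base-picture
 * dimension (w_base), with sign-aware rounding per sign[x] above.  The
 * normative equations, including any reference-picture offset terms, are
 * not reproduced here; this is an assumption for illustration only. */
static int sign_of(int x) { return (x > 0) - (x < 0); }

static int rescale_mv_component(int d, int w_extract, int w_base)
{
    return (d * w_extract + sign_of(d) * w_base / 2) / w_base;
}
```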
Inter-Layer Texture Prediction
Texture Upsampling
In some embodiments of the present invention, inter-layer texture prediction may be based on the same principles as inter-layer motion prediction. Base layer texture upsampling may be achieved by applying the two-lobed or three-lobed Lanczos-windowed sinc functions. These filters are considered to offer the best compromise in terms of reduction of aliasing, sharpness, and minimal ringing. The two-lobed Lanczos-windowed sinc function may be defined as follows:

Lanczos2(x)=sin(πx)/(πx)·sin(πx/2)/(πx/2) for |x|<2, and 0 otherwise
This upsampling step may be processed either on the full frame or block by block. For Intra texture prediction, repetitive padding is used at frame boundaries. For residual prediction, repetitive padding is used at block boundaries (4×4 or 8×8 depending on the transform).
In an exemplary embodiment, according to the Lanczos2 function, the following 16 4-tap upsampling filters are defined in Table 1 below for the 16 different interpolation phases in units of one-sixteenth sample spacing relative to the sample grid of corresponding component in the base layer picture.
For a luma sample in the current layer at position (x, y), the phase shift relative to the corresponding samples in the base layer picture shall be derived as:
For a chroma sample in the current layer at position (xc, yc) in the chroma sample coordinate system, the phase shift relative to the corresponding samples in the base layer picture may be derived as:
in which
wbase,c=wbase·BasePicMbWidthC/16 (9)
wextract,c=wextract·MbWidthC/16 (10)
hbase,c=hbase·BasePicMbHeightC/16 (11)
hextract,c=hextract·MbHeightC/16 (12)
xorig,c=xorig·MbWidthC/16 (13)
yorig,c=yorig·MbHeightC/16 (14)
According to each phase shift derived, a 4-tap filter can be chosen from Table 1 for interpolation.
Inter-Layer Intra Texture Prediction
In WD-1.0 [MPEG Doc. N6901], the I_BL mode requires all the corresponding base-layer macroblocks to be intra-coded. In embodiments of the present invention the requirement may be relaxed to allow the corresponding base-layer macroblocks to be inter-coded or non-existing.
For generating the intra prediction signal for macroblocks coded in I_BL mode, the co-located blocks (if any) of the base layer signals are directly de-blocked and interpolated. For 4 input samples (X[n−1], X[n], X[n+1], X[n+2]), the output value Y of a 4-tap interpolation filter shall be derived as:
Y=Clip1Y((e[−1]X[n−1]+e[0]X[n]+e[1]X[n+1]+e[2]X[n+2]+64)/128) (15)
with
Clip1Y(x)=min(max (0, x), (1<<BitDepthY)−1)
in which BitDepthY represents the bit depth of the luma channel data, for luma sample, or
Y=Clip1C((e[−1]X[n−1]+e[0]X[n]+e[1]X[n+1]+e[2]X[n+2]+64)/128) (16)
with
Clip1C(x)=min(max(0, x), (1<<BitDepthC)−1)
in which BitDepthC represents the bit depth of the chroma channel data, for chroma sample.
Because rounding operations are applied in Equations 15 and 16, the filtering order may be specified as horizontally first or vertically first. It is recommended that filter operations are performed in the horizontal direction first and then followed by filter operations in the vertical direction. This upsampling process is invoked only when extended_spatial_scalability, defined below, is enabled.
After the upsampling filter operation, constant values shall be used to fill the image regions outside of the cropping window. The constant shall be (1<<(BitDepthY−1)) for luma or (1<<(BitDepthC−1)) for chroma.
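For illustration, Equation 15 may be transcribed directly into C for one 4-tap filtering step; the four taps e[0..3] correspond to e[−1..2] in the text and are selected by the interpolation phase (e.g., from Table 1), and the 8-bit depth is an assumption.

```c
/* Direct transcription of Equation 15 for one 4-tap filtering step on
 * luma samples, including the Clip1Y function. */
#define BIT_DEPTH_Y 8

static int clip1_y(int x)
{
    int max_val = (1 << BIT_DEPTH_Y) - 1;        /* (1 << BitDepthY) - 1 */
    return x < 0 ? 0 : (x > max_val ? max_val : x);
}

/* s[0..3] holds X[n-1], X[n], X[n+1], X[n+2] */
static int interp4_luma(const int s[4], const int e[4])
{
    return clip1_y((e[0]*s[0] + e[1]*s[1] + e[2]*s[2] + e[3]*s[3] + 64) / 128);
}
```

Per the recommendation above, such a step would be applied to all samples in the horizontal direction first, and then in the vertical direction.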
Inter-Layer Residual Prediction
Similar to inter-layer intra texture prediction, the same 4-tap filters, or other filters, may be applied when upsampling the base layer residuals, but with different rounding and clipping functions from those in Equations 15 and 16.
For 4 input residual samples (X[n−1], X[n], X[n+1], X[n+2]), the output value Y of the filter shall be derived as:
Y=Clip1Y,r((e[−1]X[n−1]+e[0]X[n]+e[1]X[n+1]+e[2]X[n+2])/128) (17)
for luma residual sample, or
Y=Clip1C,r((e[−1]X[n−1]+e[0]X[n]+e[1]X[n+1]+e[2]X[n+2])/128) (18)
for chroma residual sample.
The clipping functions for residual upsampling are defined as:
Clip1Y,r(x)=Clip3(1−(1<<BitDepthY), (1<<BitDepthY)−1,x) (19)
Clip1C,r(x)=Clip3(1−(1<<BitDepthC), (1<<BitDepthC)−1,x) (20)
where Clip3(a, b, x)=min(max(a,x), b).
Similarly, after the upsampling filter operation, constant values shall be used to fill the pixel positions where residual prediction is not available, including image regions outside of the cropping window. The constant shall be 0 for all color components.
Changes in Syntax and Semantics
Syntax in Tabular Form
Embodiments of the present invention may utilize the changes indicated below. The main changes are the addition in the sequence parameter set of a symbol, extended_spatial_scalability, and accordingly four parameters:
scaled_base_left_offset_divided_by_two,
scaled_base_top_offset_divided_by_two,
scaled_base_right_offset_divided_by_two,
scaled_base_bottom_offset_divided_by_two
in the sequence parameter set and slice_data_in_scalable_extension( ), related to the geometrical transformation to be applied in the base layer upsampling process.
Sequence parameter set syntax in scalable extension
extended_spatial_scalability specifies the presence of syntax elements related to geometrical parameters for the base layer upsampling. When extended_spatial_scalability is equal to 0, no geometrical parameter is present in the bitstream. When extended_spatial_scalability is equal to 1, geometrical parameters are present in the sequence parameter set. When extended_spatial_scalability is equal to 2, geometrical parameters are present in slice_data_in_scalable_extension. The value of 3 is reserved for extended_spatial_scalability. When extended_spatial_scalability is not present, it shall be inferred to be equal to 0.
scaled_base_left_offset_divided_by_two specifies half of the horizontal offset between the upper-left pixel of the upsampled base layer picture and the upper-left pixel of the current picture. When scaled_base_left_offset_divided_by_two is not present, it shall be inferred to be equal to 0.
scaled_base_top_offset_divided_by_two specifies half of the vertical offset of the upper-left pixel of the upsampled base layer picture and the upper-left pixel of the current picture. When scaled_base_top_offset_divided_by_two is not present, it shall be inferred to be equal to 0.
scaled_base_right_offset_divided_by_two specifies half of the horizontal offset between the bottom-right pixel of the upsampled base layer picture and the bottom-right pixel of the current picture. When scaled_base_right_offset_divided_by_two is not present, it shall be inferred to be equal to 0.
scaled_base_bottom_offset_divided_by_two specifies half of the vertical offset between the bottom-right pixel of the upsampled base layer picture and the bottom-right pixel of the current picture. When scaled_base_bottom_offset_divided_by_two is not present, it shall be inferred to be equal to 0.
All geometrical parameters are specified as unsigned integer in units of one-sample spacing relative to the luma sampling grid in the current layer. Several additional symbols (scaled_base_left_offset, scaled_base_top_offset, scaled_base_right_offset, scaled_base_bottom_offset, scaled_base_width, scaled_base_height) are then defined based on the geometrical parameters:
scaled_base_left_offset=2·scaled_base_left_offset_divided_by_two
scaled_base_top_offset=2·scaled_base_top_offset_divided_by_two
scaled_base_right_offset=2·scaled_base_right_offset_divided_by_two
scaled_base_bottom_offset=2·scaled_base_bottom_offset_divided_by_two
scaled_base_width=PicWidthInMbs·16−scaled_base_left_offset−scaled_base_right_offset
scaled_base_height=PicHeightInMapUnits·16−scaled_base_top_offset−scaled_base_bottom_offset
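These derivations may be transcribed directly. In the following C sketch, the function and parameter names are illustrative, with pic_width_in_mbs standing for PicWidthInMbs and pic_height_in_map_units for PicHeightInMapUnits.

```c
/* Direct transcription of the symbol derivations above from the
 * *_divided_by_two syntax elements. */
static void derive_scaled_base(int left_div2, int top_div2,
                               int right_div2, int bottom_div2,
                               int pic_width_in_mbs,
                               int pic_height_in_map_units,
                               int *scaled_base_width,
                               int *scaled_base_height)
{
    int left   = 2 * left_div2;    /* scaled_base_left_offset   */
    int top    = 2 * top_div2;     /* scaled_base_top_offset    */
    int right  = 2 * right_div2;   /* scaled_base_right_offset  */
    int bottom = 2 * bottom_div2;  /* scaled_base_bottom_offset */

    *scaled_base_width  = pic_width_in_mbs * 16 - left - right;
    *scaled_base_height = pic_height_in_map_units * 16 - top - bottom;
}
```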
Slice Data Syntax in Scalable Extension
Semantics of the syntax elements in the slice data are identical to those of the same syntax elements in the sequence parameter set.
Decoding Process
Decoding Process for Prediction Data
Compared to WD-1.0 [MPEG Doc. N6901], the following processes must be added. For each macroblock, the following applies:
If extended_spatial_scalability is equal to 1 or 2 and base_layer_mode_flag is equal to 1, the motion vector field including the macroblock partitioning is derived using the process described in Section 3. As in WD-1.0 [MPEG Doc. N6901], if all corresponding base-layer macroblocks are intra-coded, the current macroblock mode is set to I_BL.
else, if extended_spatial_scalability is equal to 1 or 2 and base_layer_mode_flag is equal to 0 but base_layer_refinement is equal to 1, the base layer refinement mode is signaled. The base layer refinement mode is similar to the base layer prediction mode. The macroblock partitioning, as well as the reference indices and motion vectors, are derived following Section 3. However, for each motion vector a quarter-sample motion vector refinement mvd_ref_lX (−1, 0, or +1 for each motion vector component) is additionally transmitted and added to the derived motion vectors. The rest of the process is identical to that in WD-1.0 [MPEG Doc. N6901].
Decoding Process for Subband Pictures
Compared to WD-1.0 [MPEG Doc. N6901], the following processes must be added:
If extended_spatial_scalability is equal to 1 or 2, intra prediction signal for an MB in I_BL mode is generated by the following process.
The collocated base layer blocks/macroblocks are filtered.
The intra prediction signal is generated by interpolating the deblocked base layer signal. The interpolation is performed using the process described in Section 4. The rest of the process is identical to that in WD-1.0 [MPEG Doc. N6901].
Otherwise, if extended_spatial_scalability is equal to 1 or 2, and residual_prediction_flag is equal to 1, the following applies.
- The residual signal of the base layer blocks is upsampled and added to the residual signal of the current macroblock. The interpolation is performed using the process described in Section 4.
When extended_spatial_scalability is equal to 1 or 2, a minor change should apply to the loop filter during filter strength decision for a block in I_BL mode.
If the neighboring block is intra-coded but not in I_BL mode, the Bs is 4 (this first part is the same as in WD-1.0 [MPEG Doc. N6901]).
Otherwise, if any of the adjacent blocks has non-zero coefficients, the Bs is 2.
Otherwise, if the neighboring block is not in I_BL mode, the Bs is 1.
Otherwise, Bs is 0.
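This strength decision may be transcribed directly into C; the predicate names below are illustrative assumptions, and the function assumes the current block is in I_BL mode.

```c
/* Direct transcription of the strength decision above for a boundary of a
 * block in I_BL mode. */
static int bs_for_ibl_boundary(int neighbor_is_intra,
                               int neighbor_is_ibl,
                               int any_adjacent_block_has_coeffs)
{
    if (neighbor_is_intra && !neighbor_is_ibl)
        return 4;   /* as in WD-1.0 */
    if (any_adjacent_block_has_coeffs)
        return 2;
    if (!neighbor_is_ibl)
        return 1;
    return 0;
}
```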
6-Tap Filter Embodiments
Some embodiments of the present invention are designed for use with the Scalable Video Coding extension of H.264/MPEG-4 AVC, especially for the Extended Spatial Scalable (ESS) video coding feature adopted in April 2005 by JVT (Joint Video Team of MPEG and VCEG).
In the current SVC design, the upsampling process is based on the quarter luma sample interpolation procedure that is specified in H.264 for inter prediction. The method inherits two drawbacks when applied to spatial scalable coding: (1) the interpolation resolution is limited to quarter samples, and (2) the half sample interpolation must be performed in order to get to a quarter sample position.
Some embodiments of the present invention remove these drawbacks by (1) finer interpolation resolution, and (2) direct interpolation. Consequently, these embodiments reduce the computational complexity while improving the quality of the up-sampled pictures.
The upsampling technique of exemplary embodiments of the present invention is based on direct interpolation with 16 6-tap filters. The filter selection is according to the interpolation positions or phases, ranging from 0 to 15 in units of one-sixteenth picture samples. The set of filters is designed to be backward compatible with the half sample interpolation process of SVC and the half sample luma inter prediction of H.264. Therefore, the technique of these embodiments can be a natural extension of H.264 from a hardware/software implementation point of view.
Conventional spatial scalable video coding systems typically deal with cases in which the spatial or resolution scaling factor is 2 or a power of 2. In April 2005, Extended Spatial Scalability was adopted into the SVC Joint Scalable Video Model (JSVM) to handle more generic applications in which the spatial scaling factor is not limited to a power of 2. The upsampling procedure for inter-layer texture prediction, however, is still a developing issue. During the JVT meeting in April 2005, a decision was made to temporarily adopt the quarter luma sample interpolation process specified in H.264 for texture upsampling.
In these embodiments of the present invention, the same geometric relationships that were described for the above-described embodiments in relation to
In the above-described embodiments, a set of 16 4-tap upsampling filters was defined for the 16 different interpolation phases in units of one-sixteenth sample spacing relative to the integer sample grid of the corresponding component in the base layer picture. The 4-tap filters, however, are not backward compatible with the earlier H.264 design. Consequently, these embodiments may comprise a new set of 16 6-tap filters and corresponding filtering procedures. In an exemplary embodiment, the 6-tap filters described in Table 2 may be used. In another exemplary embodiment, the 6-tap filters described in Table 3 may be used.
Given a luma sample position (x, y) in the enhancement picture in units of integer luma samples, its corresponding position in the base picture (px,L, py,L) in units of one-sixteenth luma samples of the base picture can be derived as
in which RL=16 (for one-sixteenth-sample resolution interpolation), as in
Similarly, given a chroma sample position (xc, yc) in the enhancement picture in units of single chroma samples, its corresponding position in the base picture (px,c, py,c) in units of one-sixteenth chroma samples of the base picture can be derived as
in which RC=16, (xorig,c, yorig,c) represents the position of the upper-left corner of the cropping window in the current picture in units of single chroma samples of the current picture, (wbase,c, hbase,c) is the resolution of the base picture in units of single chroma samples of the base picture, (wextract,c, hextract,c) is the resolution of the cropping window in units of single chroma samples of the current picture, (pbase,x, pbase,y) represents the relative chroma phase shift of the base picture in units of quarter chroma samples of the base picture, and (penh,x, penh,y) represents the relative chroma phase shift of the current picture in units of quarter chroma samples of the current picture.
A 6-tap filter can be selected from Table 2 or Table 3 based on the interpolation positions derived by Eqs. 21 and 22. In some embodiments, when the interpolation position is a half sample position, the filter is the same as that defined in H.264 for half luma sample interpolation. Therefore, similar hardware/software modules can be applied for the technique of these embodiments of the present invention.
For inter-layer residual upsampling, similar direct interpolation methods can be used, however, with the bilinear interpolation filters instead of the 6-tap filters for texture upsampling or the 4-tap filters described above.
In some exemplary embodiments, an interpolation process is as follows.
1. Define position (xP, yP) for the upper-left luma sample of a macroblock in the enhancement picture. When chroma_format_idc is not equal to 0, i.e., the chroma channels exist, define position (xC, yC) for the upper-left chroma samples of the same macroblock.
2. Derive the relative location of the macroblock in the base-layer picture,
and when chroma_format_idc is not equal to 0,
in which MbWidthC and MbHeightC represent the number of chroma samples per MB in horizontal and vertical directions, respectively.
3. Texture Interpolation Process
Inputs to this Process Include
integer luma sample positions in base picture (xB, yB) and (xB1, yB1)
a luma sample array for the base picture baseL[x, y] with x=−2+xB . . . (xB1+2) and y=−2+yB . . . (yB1+2)
when chroma_format_idc is not equal to 0,
- integer chroma sample positions in base picture (xCB, yCB) and (xCB1, yCB1)
- two chroma sample arrays for the base picture baseCb[x, y] and baseCr[x, y] with x=−2+xCB . . . (xCB1+2) and y=−2+yCB . . . (yCB1+2)
Outputs of This Process Include
a luma sample macroblock array predL[x, y] with x=0 . . . 15 and y=0 . . . 15
when chroma_format_idc is not equal to 0, two chroma sample macroblock arrays predCb[x, y] and predCr[x, y] with x=0 . . . MbWidthC−1 and y=0 . . . MbHeightC−1
The luma samples predL[x, y] with x=0 . . . 15 and y=0 . . . 15 are derived as follows.
Let tempL[x, y] with x=−2+xB . . . (xB1+2) and y=0 . . . 15 be a temporary luma sample array.
Each tempL[x, y] with x=−2+xB . . . (xB1+2) and y=0 . . . 15 is derived as follows:
- The corresponding fractional-sample position yf in the base layer is derived as yf=py,L(y+yP).
- Let yInt and yFrac be defined as yInt=(yf>>4) and yFrac=yf % 16.
- Select a six-tap filter e[j] with j=−2 . . . 3 from Table 2 using yFrac as phase, and derive tempL[x, y] as
tempL[x, y]=baseL[x, yInt−2]*e[−2]+baseL[x, yInt−1]*e[−1]+baseL[x, yInt]*e[0]+baseL[x, yInt+1]*e[1]+baseL[x, yInt+2]*e[2]+baseL[x, yInt+3]*e[3]
Each sample predL[x, y] with x=0 . . . 15 and y=0 . . . 15 is derived as follows:
- The corresponding fractional-sample position xf in the base layer is derived as xf=px,L(x+xP).
- Let xInt and xFrac be defined as xInt=(xf>>4) and xFrac=xf % 16.
- Select a six-tap filter e[j] with j=−2 . . . 3 from Table 2 using xFrac as phase, and derive predL[x, y] as
predL[x, y]=Clip1Y((tempL[xInt−2, y]*e[−2]+tempL[xInt−1, y]*e[−1]+tempL[xInt, y]*e[0]+tempL[xInt+1, y]*e[1]+tempL[xInt+2, y]*e[2]+tempL[xInt+3, y]*e[3]+512)/1024)
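For illustration, the vertical and horizontal passes above may be sketched in C for a single output sample. The 6×6 sample neighborhood, the 8-bit depth, and the assumption that each 6-tap filter sums to 32 (so that the two passes together divide by 1024) are illustrative; the Table 2 coefficients are not reproduced here.

```c
/* Illustrative sketch of the vertical-then-horizontal 6-tap interpolation
 * above for a single output sample.  base holds the 6x6 neighborhood of
 * integer samples around (xInt, yInt); taps_y and taps_x are the filters
 * selected from Table 2 by yFrac and xFrac. */
static int clip1y(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

static int interp_luma_6tap(const int base[6][6],   /* base[y][x] */
                            const int taps_y[6], const int taps_x[6])
{
    int temp[6];
    int i, j, acc;

    for (i = 0; i < 6; i++) {          /* vertical pass: tempL values */
        acc = 0;
        for (j = 0; j < 6; j++)
            acc += base[j][i] * taps_y[j];
        temp[i] = acc;
    }

    acc = 0;                           /* horizontal pass: predL value */
    for (i = 0; i < 6; i++)
        acc += temp[i] * taps_x[i];

    return clip1y((acc + 512) / 1024); /* rounding and scaling per predL */
}
```

When the selected phase is a half sample position, the taps would coincide with the H.264 half-sample luma filter, consistent with the backward compatibility noted above.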
When chroma_format_idc is not equal to 0, the chroma samples predC[x, y] (with C being Cb or Cr) with x=0 . . . MbWidthC−1, y=0 . . . MbHeightC−1 are derived as follows.
Let tempCb[x, y] and tempCr[x, y] with x=−2+xCB . . . (xCB1+2) and y=0 . . . MbHeightC−1 be temporary chroma sample arrays.
Each tempC[x, y] with C as Cb and Cr, x=−2+xCB . . . (xCB1+2), and y=0 . . . MbHeightC−1 is derived as follows:
- The corresponding fractional-sample position yfC in the base layer is derived as yfC=py,C(y+yC).
- Let yIntC and yFracC be defined as yIntC=(yfC>>4) and yFracC=yfC % 16.
- Select a six-tap filter e[j] with j=−2 . . . 3 from Table 2 using yFracC as phase, and derive tempC[x, y] as
tempC[x, y]=baseC[x, yIntC−2]*e[−2]+baseC[x, yIntC−1]*e[−1]+baseC[x, yIntC]*e[0]+baseC[x, yIntC+1]*e[1]+baseC[x, yIntC+2]*e[2]+baseC[x, yIntC+3]*e[3]
Each sample predC[x, y] with C as Cb and Cr, x=0 . . . MbWidthC−1 and y=0 . . . MbHeightC−1 is derived as follows.
- The corresponding fractional-sample position xfC in the base layer is derived as follows.
xfC=px,C(x+xC)
- Let xIntC and xFracC be defined as follows.
xIntC=(xfC>>4)
xFracC=xfC % 16
- Select a six-tap filter e[j] with j=−2 . . . 3 from Table 2 using xFracC as phase, and derive predC[x, y] as
predC[x, y]=Clip1C((tempC[xIntC−2, y]*e[−2]+tempC[xIntC−1, y]*e[−1]+tempC[xIntC, y]*e[0]+tempC[xIntC+1, y]*e[1]+tempC[xIntC+2, y]*e[2]+tempC[xIntC+3, y]*e[3]+512)/1024)
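The derivation above amounts to a vertical six-tap pass followed by a horizontal six-tap pass; the chroma path is identical in structure to the luma path. The following is a minimal C sketch of the luma path, not the normative process: it assumes baseL is padded so all taps are in range, that filt[16][6] holds the sixteen six-tap filters of Table 2 with each row summing to 32 (hence the final (s+512)/1024), and that px_L and py_L are the hypothetical mapping sketch from step 2. For brevity it recomputes the vertical pass per output sample instead of caching the tempL array.

int px_L(int x, int baseW, int enhW);  /* hypothetical mapping sketch, step 2 */
int py_L(int y, int baseH, int enhH);

/* Six-tap upsampling of one 16x16 luma macroblock (sketch).
 * filt[phase][j] corresponds to e[j-2] in the text, j = 0..5. */
void interp_luma_mb(const int *baseL, int baseStride, const int filt[16][6],
                    int xP, int yP, int baseW, int baseH, int enhW, int enhH,
                    int bitDepth, int predL[16][16])
{
    for (int y = 0; y < 16; y++) {
        int yf = py_L(y + yP, baseH, enhH);
        int yInt = yf >> 4, yFrac = yf % 16;
        for (int x = 0; x < 16; x++) {
            int xf = px_L(x + xP, baseW, enhW);
            int xInt = xf >> 4, xFrac = xf % 16;
            int temp[6], s = 0;
            /* vertical six-tap pass at the six columns read by the
             * horizontal pass (tempL in the text) */
            for (int i = 0; i < 6; i++) {
                int acc = 0;
                for (int j = 0; j < 6; j++)
                    acc += baseL[(yInt - 2 + j) * baseStride + (xInt - 2 + i)]
                         * filt[yFrac][j];
                temp[i] = acc;
            }
            /* horizontal six-tap pass, then round, normalize, clip (Clip1Y) */
            for (int i = 0; i < 6; i++)
                s += temp[i] * filt[xFrac][i];
            s = (s + 512) / 1024;
            predL[x][y] = s < 0 ? 0 : s > (1 << bitDepth) - 1
                        ? (1 << bitDepth) - 1 : s;
        }
    }
}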
4. Residual Interpolation Process
Inputs to This Process Include
integer luma sample positions in basePic (xB, yB) and (xB1, yB1)
a luma residual sample array resBaseL[x, y] with x=xB . . . xB1 and y=yB . . . yB1
when chroma_format_idc is not equal to 0,
- integer chroma sample positions in basePic (xCB, yCB) and (xCB1, yCB1)
- two chroma residual sample arrays resBaseCb[x, y] and resBaseCr[x, y] with x=xCB . . . xCB1 and y=yCB . . . yCB1
Outputs of This Process Include
a luma sample array resPredL[x, y] with x=0 . . . 15 and y=0 . . . 15
when chroma_format_idc is not equal to 0, two chroma sample arrays resPredCb[x, y] and resPredCr[x, y] with x=0 . . . MbWidthC−1 and y=0 . . . MbHeightC−1
The luma residual samples resPredL[x, y] with x=0 . . . 15 and y=0 . . . 15 are derived as follows.
Let tempL[x, y] with x=xB . . . xB1 and y=0 . . . 15 be a temporary luma sample array.
Each tempL[x, y] with x=xB . . . xB1 and y=0 . . . 15 is derived as follows
- The corresponding fractional-sample position yf in the base layer is derived as follows.
yf=py,L(y+yP)
- Let yInt and yFrac be defined as follows.
yInt=(yf>>4)
yFrac=yf % 16
- Derive tempL[x, y] as
tempL[x, y]=resBaseL[x, yInt]*(16−yFrac)+resBaseL[x, yInt+1]*yFrac
Each residual sample resPredL[x, y] with x=0 . . . 15 and y=0 . . . 15 is derived as follows.
- The corresponding fractional-sample position xf in the base layer is derived as follows.
xf=px,L(x+xP)
- Let xInt and xFrac be defined as follows.
xInt=(xf>>4)
xFrac=xf % 16
- Derive resPredL[x, y] as
resPredL[x, y]=Clip1Y,r((tempL[xInt, y]*(16−xFrac)+tempL[xInt+1, y]*xFrac)/256)
with Clip1Y,r(x)=Clip3(1−(1<<BitDepthY), (1<<BitDepthY)−1, x), in which BitDepthY represents the bit depth of the luma channel data.
When chroma_format_idc is not equal to 0, the chroma residual samples resPredC[x, y] (with C being Cb or Cr) with x=0 . . . MbWidthC−1, y=0 . . . MbHeightC−1 are derived as follows.
Let tempCb[x, y] and tempCr[x, y] with x=xCB . . . xCB1 and y=0 . . . MbHeightC−1 be temporary chroma sample arrays.
Each tempC[x, y] with C as Cb and Cr, x=xCB . . . xCB1, and y=0 . . . MbHeightC−1 is derived as follows
- The corresponding fractional-sample position yfC in the base layer is derived as follows.
yfC=py,C(y+yC)
- Let yIntC and yFracC be defined as follows.
yIntC=(yfC>>4)
yFracC=yfC % 16
- Derive tempC[x, y] as
tempC[x, y]=resBaseC[x, yIntC]*(16−yFracC)+resBaseC[x, yIntC+1]*yFracC
Each sample resPredC[x, y] with C as Cb and Cr, x=0 . . . MbWidthC−1 and y=0 . . . MbHeightC−1 is derived as follows.
- The corresponding fractional-sample position xfC in the base layer is derived as follows.
xfC=px,C(x+xC)
- Let xIntC and xFracC be defined as follows.
xIntC=(xfC>>4)
xFracC=xfC % 16
- Derive resPredC[x, y] as
resPredC[x, y]=Clip1C,r((tempC[xIntC, y]*(16−xFracC)+tempC[xIntC+1, y]*xFracC)/256)
with Clip1C,r(x)=Clip3(1−(1<<BitDepthC), (1<<BitDepthC)−1, x), in which BitDepthC represents the bit depth of the chroma channel data.
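The residual path replaces the six-tap filters of the texture path with a two-tap bilinear filter using 1/16-unit weights, and clips with the signed Clip1Y,r rather than Clip1Y. The following is a minimal C sketch of the luma path under the same hypothetical assumptions as the texture sketch above (the chroma path is analogous):

int px_L(int x, int baseW, int enhW);  /* hypothetical mapping sketch, step 2 */
int py_L(int y, int baseH, int enhH);

/* Bilinear upsampling of one 16x16 block of luma residuals (sketch). */
void interp_resid_luma_mb(const int *resBaseL, int baseStride,
                          int xP, int yP, int baseW, int baseH,
                          int enhW, int enhH, int bitDepth,
                          int resPredL[16][16])
{
    /* Clip1Y,r: residuals are signed, so the clip range is symmetric */
    int lo = 1 - (1 << bitDepth), hi = (1 << bitDepth) - 1;
    for (int y = 0; y < 16; y++) {
        int yf = py_L(y + yP, baseH, enhH);
        int yInt = yf >> 4, yFrac = yf % 16;
        for (int x = 0; x < 16; x++) {
            int xf = px_L(x + xP, baseW, enhW);
            int xInt = xf >> 4, xFrac = xf % 16;
            /* vertical two-tap pass (tempL in the text) at both columns */
            int t0 = resBaseL[yInt * baseStride + xInt] * (16 - yFrac)
                   + resBaseL[(yInt + 1) * baseStride + xInt] * yFrac;
            int t1 = resBaseL[yInt * baseStride + xInt + 1] * (16 - yFrac)
                   + resBaseL[(yInt + 1) * baseStride + xInt + 1] * yFrac;
            /* horizontal two-tap pass, normalize by 16*16, clip */
            int s = (t0 * (16 - xFrac) + t1 * xFrac) / 256;
            resPredL[x][y] = s < lo ? lo : s > hi ? hi : s;
        }
    }
}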
Some embodiments of the present invention comprise a deblocking filter for spatially scalable video coding. In some of these embodiments the filtering method is designed for the Scalable Video Coding (SVC) extension of H.264/MPEG-4 AVC, especially for the Extended Spatial Scalability (ESS) video coding feature adopted in April 2005 by the JVT (Joint Video Team of MPEG and VCEG).
In prior methods, the filtering process was identical across all layers, even though the layers may have different spatial resolutions. A block coded using inter-layer texture prediction was treated as an intra-coded block during the filtering process. This prior method has two drawbacks when applied to spatially scalable coding: (1) the prediction from a lower-resolution layer can be unnecessarily blurred, and (2) the process unnecessarily spends computational cycles.
Embodiments of the present invention may remove both of these drawbacks by skipping filter operations for some block boundaries, by applying different filters to different block boundaries, by varying the aggressiveness of a filter on different block boundaries or by otherwise adjusting filter characteristics for specific block boundaries. As a result, these embodiments reduce the computational complexity and improve the quality of the up-sampled pictures.
In these embodiments, blocks coded using inter-layer texture prediction are treated as inter blocks, so the filtering decisions of the existing AVC design for inter blocks are applied. In some embodiments, the adaptive block boundary filtering described above in relation to adjacent blocks with non-spatially-scalable coding may be applied to spatially scalable coding. These methods, adopted into H.264, may be applied to spatially scalable video coding.
In some embodiments of the present invention, a deblocking filter for an image block boundary can be characterized by a control parameter Boundary Strength (Bs), which may have a value in the range of 0 to 4 or some other range. The higher the Bs value, the stronger the filter operation applied to the corresponding boundary. When Bs is equal to 0, the filter operation may be skipped or minimized.
In the current SVC design, a macroblock prediction mode based on inter-layer texture prediction is called I_BL mode. Using prior methods, all block boundaries related to an I_BL macroblock had to be filtered, i.e., with Bs>0 for all block boundaries.
Embodiments of the present invention comprise a filter strength decision method for a block in I_BL mode for spatially scalable coding, i.e., when the SVC syntax element SpatialScalabilityType is not equal to 0. The purpose is to reduce the computational complexity and to avoid blurring the prediction from the base layer.
In some embodiments, for a block in I_BL mode, the Bs of a boundary between the block and a neighboring block may be derived as follows (see the sketch following this list):
1. If the neighboring block is intra-coded but not in I_BL mode, the Bs is 4.
2. Otherwise, if any of the adjacent blocks has a non-zero coefficient, the Bs is 2.
3. Otherwise, if the neighboring block is not in I_BL mode based on the same base layer picture, the Bs is 1.
4. Otherwise, Bs is 0.
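This ladder maps directly to code. The following is a minimal C sketch; block_t and its fields are hypothetical stand-ins for whatever block descriptor an implementation keeps, not names from the standard:

/* Hypothetical per-block state consulted by the decision ladder. */
typedef struct {
    int is_intra;       /* coded in any intra prediction mode */
    int is_i_bl;        /* coded in I_BL mode */
    int nonzero_coeffs; /* has non-zero transform coefficients */
    int base_pic_id;    /* base-layer picture the block predicts from */
} block_t;

/* Boundary strength for the boundary between a block in I_BL mode
 * (cur) and its neighbor (nbr); mirrors steps 1-4 above. */
int bs_for_i_bl_block(const block_t *cur, const block_t *nbr)
{
    if (nbr->is_intra && !nbr->is_i_bl)
        return 4;  /* step 1: neighbor is intra-coded but not I_BL */
    if (cur->nonzero_coeffs || nbr->nonzero_coeffs)
        return 2;  /* step 2: residual energy on either side */
    if (!nbr->is_i_bl || nbr->base_pic_id != cur->base_pic_id)
        return 1;  /* step 3: not I_BL from the same base-layer picture */
    return 0;      /* step 4: filtering may be skipped */
}

Ordering the tests from strongest to weakest lets the common cases fall through to a Bs of 0, where the filter operation can be skipped entirely.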
In embodiments of the present invention related to the JVT SVC extension, if SpatialScalabilityType is not equal to 0 and either luma sample p0 or q0 is in a macroblock coded using the I_BL macroblock prediction mode, the variable bS is derived as follows (a sketch of this derivation follows the list):
If either luma sample p0 or q0 is in a macroblock coded using an intra prediction mode other than the I_BL mode, a value of bS equal to 4 shall be the output;
Otherwise, if one of the following conditions is true, a value of bS equal to 2 shall be the output,
- i. the luma block containing sample p0 or the luma block containing sample q0 contains non-zero transform coefficient levels,
- ii. the syntax element nal_unit_type is equal to 20 and residual_prediction_flag is equal to 1 for the luma block containing sample p0 or the luma block containing sample q0 and the prediction array resPredX as derived in subclause S.8.5.14 contains non-zero samples, with X indicating the applicable component L, Cb, or Cr;
Otherwise, if one of the following conditions is true, a value of bS equal to 1 shall be the output,
- i. either luma sample p0 or q0 is in a macroblock coded using an inter prediction mode,
- ii. the luma samples p0 and q0 are in two separate slices with different base_id_plus1;
Otherwise, a value of bS equal to 0 shall be the output;
Otherwise, if the samples p0 and q0 are both in macroblocks coded using the I_BL macroblock prediction mode, a value of bS equal to 1 shall be the output.
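Read procedurally, the four-way derivation above might be sketched as follows. The struct fields are hypothetical stand-ins for the corresponding syntax elements and derived arrays; in particular, respred_nonzero abbreviates the nal_unit_type, residual_prediction_flag, and resPredX test of condition ii:

/* Hypothetical summary of one side (the macroblock containing p0 or q0). */
typedef struct {
    int intra_not_i_bl;  /* intra prediction mode other than I_BL */
    int inter_mode;      /* coded using an inter prediction mode */
    int nonzero_levels;  /* luma block has non-zero transform coefficient levels */
    int respred_nonzero; /* nal_unit_type == 20, residual_prediction_flag == 1,
                            and resPredX contains non-zero samples */
    int base_id_plus1;   /* base_id_plus1 of the containing slice */
} side_t;

/* bS when SpatialScalabilityType != 0 and at least one of p0/q0 is in
 * an I_BL macroblock (sketch of the derivation above). */
int derive_bs(const side_t *p, const side_t *q)
{
    if (p->intra_not_i_bl || q->intra_not_i_bl)
        return 4;
    if (p->nonzero_levels || q->nonzero_levels ||
        p->respred_nonzero || q->respred_nonzero)
        return 2;
    if (p->inter_mode || q->inter_mode ||
        p->base_id_plus1 != q->base_id_plus1)
        return 1;
    return 0;
}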
Some embodiments of the present invention may be described with reference to
In these embodiments, the characteristics of two neighboring blocks, separated by a block boundary, are analyzed to characterize a block boundary adjacent to the blocks. In some embodiments the boundary between the blocks is characterized.
In exemplary embodiments, the block characteristics are first analyzed to determine whether one of the blocks is encoded using inter-layer texture prediction 310. If at least one of said neighboring blocks is encoded using inter-layer texture prediction, the blocks are then analyzed to determine whether either block has been encoded with an intra-prediction method other than inter-layer texture prediction 311. If one of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction, a first boundary strength indicator is used to characterize the target boundary 312.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction, the block characteristics are analyzed to determine whether either of the neighboring blocks, or a block from which one of the neighboring blocks was predicted, has non-zero transform coefficients 314. If either of the neighboring blocks or a block from which one of the neighboring blocks was predicted has non-zero transform coefficients, a second boundary strength indicator is used to characterize the target boundary 316.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction 311 and none of the neighboring blocks or blocks from which the neighboring blocks were predicted has non-zero transform coefficients 314, a determination is made as to whether the neighboring blocks are predicted with reference to different reference blocks 318. If the neighboring blocks are predicted with reference to different reference blocks 318, a third boundary strength indicator is used to characterize the target boundary 320.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction 311, none of the neighboring blocks or blocks from which the neighboring blocks were predicted has non-zero transform coefficients 314, and the neighboring blocks are not predicted with reference to different reference blocks 318, a fourth boundary strength indicator is used to characterize the target boundary 320.
In some embodiments, the boundary strength indicator may be used to trigger specific boundary filtering options. In some embodiments, a different filtering method may be used for each indicator. In some embodiments, a filtering method parameter may be adjusted in relation to the indicator. In some embodiments, the indicator may trigger how aggressively a boundary is filtered. In some exemplary embodiments, the first boundary strength indicator will trigger the most aggressive filtering of the boundary and the second, third and fourth boundary strength indicators will trigger less and less aggressive filtering in that order. In some embodiments, the fourth boundary strength indicator or another indicator will trigger no filtering at all for the associated boundary.
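As one illustration of how such indicators might drive the filter, the sketch below follows the AVC convention in which the highest strength selects a strong filter, intermediate strengths select a normal filter whose clipping tightens as the strength drops, and zero skips filtering; edge_t, filter_strong, and filter_normal are hypothetical names, not AVC functions:

typedef struct edge_t edge_t;           /* hypothetical boundary descriptor */
void filter_strong(edge_t *e);          /* hypothetical strongest filter */
void filter_normal(edge_t *e, int bs);  /* hypothetical filter; clip scales with bs */

void deblock_edge(edge_t *e, int bs)
{
    if (bs == 0)
        return;                /* skip: saves cycles, avoids blurring I_BL blocks */
    if (bs == 4)
        filter_strong(e);      /* first/strongest indicator */
    else
        filter_normal(e, bs);  /* intermediate indicators, progressively weaker */
}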
Some embodiments of the present invention may be described with reference to
In these embodiments, the characteristics of two neighboring blocks, separated by a block boundary, are analyzed to characterize a block boundary adjacent to the blocks. In some embodiments the boundary between the blocks is characterized.
In exemplary embodiments, the block characteristics are first analyzed to determine whether the blocks are in a spatial scalability layer 330. A determination is then made as to whether one of the blocks is encoded using inter-layer texture prediction 332. If at least one of said neighboring blocks is encoded using inter-layer texture prediction, the blocks are then analyzed to determine whether either block has been encoded with an intra-prediction method other than inter-layer texture prediction 334. If one of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction, a first boundary strength indicator is used to characterize the target boundary 336.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction, the block characteristics are analyzed to determine whether either of the neighboring blocks has non-zero transform coefficients 338. If either of the neighboring blocks has non-zero transform coefficients, a second boundary strength indicator is used to characterize the target boundary 340.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction, the block characteristics may be analyzed to determine whether a block from which one of the neighboring blocks was predicted has non-zero transform coefficients 342. If a block from which one of the neighboring blocks was predicted has non-zero transform coefficients, a third boundary strength indicator is used to characterize the target boundary 344.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction 334 and none of the neighboring blocks or blocks from which the neighboring blocks were predicted has non-zero transform coefficients 338, 342, a determination is made as to whether one of the neighboring blocks is encoded using an inter-prediction mode 346. If one of the neighboring blocks is encoded using an inter-prediction mode 346, a fourth boundary strength indicator may be used to characterize the target boundary 348.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction 334 and none of the neighboring blocks or blocks from which the neighboring blocks were predicted has non-zero transform coefficients 338, 342, a determination may be made as to whether the neighboring blocks are predicted with reference to different reference blocks 350. If the neighboring blocks are predicted with reference to different reference blocks 350, a fifth boundary strength indicator is used to characterize the target boundary 352.
If neither of the blocks has been encoded with an intra-prediction method other than inter-layer texture prediction 334, none of the neighboring blocks or blocks from which the neighboring blocks were predicted has non-zero transform coefficients 338, 342, the blocks are not encoded in an inter-prediction mode 346, and the neighboring blocks are not predicted with reference to different reference blocks 350, a sixth boundary strength indicator may be used to characterize the target boundary 354.
Some embodiments of the present invention may be described with reference to
In these embodiments, the characteristics of two neighboring blocks, separated by a block boundary, are analyzed to characterize a block boundary adjacent to the blocks. In some embodiments the boundary between the blocks is characterized.
In these embodiments, SpatialScalabilityType must be non-zero 360. A determination is then made as to whether a luma sample from one of the blocks is encoded using inter-layer texture prediction (I_BL) 362. If at least one of said neighboring blocks is encoded using I_BL, the blocks are then analyzed to determine whether either block has been encoded with an intra-prediction method other than I_BL 364. If one of the blocks has been encoded with an intra-prediction method other than I_BL, a first boundary strength indicator is used to characterize the target boundary 365. In some embodiments the first boundary strength indicator will trigger the strongest or most aggressive deblocking filter operation. In some embodiments, this first indicator will be equal to 4.
If neither of the blocks has been encoded with an intra-prediction method other than I_BL, the block characteristics are analyzed to determine whether the luma samples of either of the neighboring blocks have non-zero transform coefficients 366. If the luma samples of either of the neighboring blocks have non-zero transform coefficients, a second boundary strength indicator is used to characterize the target boundary 367. In some embodiments this second boundary strength indicator will trigger an intermediate or second most aggressive deblocking filter operation. In some embodiments, this second indicator will be equal to 2.
If neither of the blocks has been encoded with an intra-prediction method other than I_BL 364 and none of the luma samples from either block have non-zero transform coefficients, a determination may be made as to whether a block from which one of the neighboring blocks was predicted has non-zero transform coefficients 368. If a block from which one of the neighboring blocks was predicted has non-zero transform coefficients, the second boundary strength indicator may again be used to characterize the target boundary 367.
If neither of the blocks has been encoded with an intra-prediction method other than I_BL 364 and neither the neighboring blocks 366 nor a block from which one of the neighboring blocks was predicted has non-zero transform coefficients 368, a determination may be made as to whether the luma samples of one of the neighboring blocks are encoded using an inter-prediction mode 370. If the luma samples of one of the neighboring blocks are encoded using an inter-prediction mode 370, a third boundary strength indicator may be used to characterize the target boundary 371. In some embodiments this third boundary strength indicator will trigger a weaker or third most aggressive deblocking filter operation. In some embodiments, this third indicator will be equal to 1.
If neither of the blocks has been encoded with an intra-prediction method other than I_BL 364, neither the neighboring blocks 366 nor a block from which one of the neighboring blocks was predicted has non-zero transform coefficients 368, and the luma samples of the neighboring blocks are not encoded in an inter-prediction mode 370, a determination may be made as to whether luma samples from either of the neighboring blocks are predicted from different reference blocks 372. If the luma samples of the neighboring blocks are predicted with reference to different reference blocks 372, the third boundary strength indicator may again be used to characterize the target boundary 371.
If neither of the blocks has been encoded with an intra-prediction method other than I_BL 364, neither the neighboring blocks 366 nor a block from which one of the neighboring blocks was predicted has non-zero transform coefficients 368, the luma samples of the neighboring blocks are not encoded in an inter-prediction mode 370, and luma samples from the neighboring blocks are not predicted from different reference blocks 372, a fourth boundary strength indicator may be used to characterize the target boundary 373. In some embodiments this fourth boundary strength indicator may trigger the weakest or fourth most aggressive deblocking filter operation. In some embodiments, this fourth indicator may indicate that no filtering should take place. In some embodiments, this fourth indicator will be equal to 0.
For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program, or operation with unclear boundaries. In any event, the functional blocks and software modules or described features can be implemented by themselves, or in combination with other operations, in either hardware or software.
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
Claims
1. A method for characterization of a block boundary between neighboring blocks within a spatial scalability enhancement layer wherein at least one of said neighboring blocks is encoded using inter-layer texture prediction (I_BL), said method comprising:
- a) characterizing said block boundary with a first boundary strength indicator when a luma sample from one of said neighboring blocks is encoded using an intra-prediction mode other than said I_BL mode;
- b) characterizing said block boundary with a second boundary strength indicator when, i) no luma sample from either of said neighboring blocks is encoded using an intra-prediction mode other than said I_BL mode; and ii) any of said neighboring blocks and blocks from which said neighboring blocks are predicted have non-zero transform coefficients;
- c) characterizing said block boundary with a third boundary strength indicator when, i) no luma sample from either of said neighboring blocks is encoded using an intra-prediction mode other than said I_BL mode; and ii) all of said neighboring blocks and blocks from which said neighboring blocks are predicted have no transform coefficients.
2. A method as described in claim 1 wherein said first boundary strength indicator triggers more aggressive smoothing than said second boundary strength indicator, and said second boundary strength indicator triggers more aggressive smoothing than said third boundary strength indicator when applying a deblocking filter to said block boundary.
Type: Application
Filed: Jan 25, 2011
Publication Date: May 19, 2011
Inventor: Shijun SUN (Vancouver, WA)
Application Number: 13/012,829
International Classification: H04N 11/02 (20060101);