Systems and methods for reduced bit-depth processing in video-related data with frequency weighting matrices
Embodiments of the present invention comprise systems and methods for processing video-related data wherein reduced-bit-depth intermediate calculations are enabled.
This application is a continuation of U.S. patent application Ser. No. 10/326,459 entitled, METHODS AND SYSTEMS FOR EFFICIENT VIDEO-RELATED DATA PROCESSING, invented by Louis Kerofsky, filed Dec. 20, 2002 now U.S. Pat. No. 7,170,942, which is a continuation of U.S. patent application Ser. No. 10/139,036 entitled, METHOD FOR REDUCED BIT-DEPTH QUANTIZATION, invented by Louis Kerofsky, filed May 2, 2002 now U.S. Pat. No. 7,123,655, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/319,018 entitled, METHODS AND SYSTEMS FOR VIDEO CODING WITH JOINT QUANTIZATION AND NORMALIZATION PROCEDURES, invented by Louis Kerofsky, filed Nov. 30, 2001, and also claims the benefit of U.S. Provisional Patent Application Ser. No. 60/311,436 entitled, REDUCED BIT-DEPTH QUANTIZATION, invented by Louis Kerofsky, filed Aug. 9, 2001. This Application is a Divisional Reissue Application of Continuation Reissue application Ser. No. 12/837,154, filed Jul. 15, 2010 now U.S. Pat. No. Re. 43,091, of Reissue application Ser. No. 12/689,897, filed Jan. 19, 2010 now U.S. Pat. No. Re. 42,745, which itself is a reissue of U.S. Pat. No. 7,400,682 entitled SYSTEMS AND METHODS FOR REDUCED BIT-DEPTH PROCESSING IN VIDEO-RELATED DATA WITH FREQUENCY WEIGHTING MATRICES, which is a continuation of U.S. patent application Ser. No. 10/326,459 entitled, METHODS AND SYSTEMS FOR EFFICIENT VIDEO-RELATED DATA PROCESSING, invented by Louis Kerofsky, filed Dec. 20, 2002 now U.S. Pat. No. 7,170,942, which is a continuation of U.S. patent application Ser. No. 10/139,036 entitled, METHOD FOR REDUCED BIT-DEPTH QUANTIZATION, invented by Louis Kerofsky, filed May 2, 2002 now U.S. Pat. No. 7,123,655, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/319,018 entitled, METHODS AND SYSTEMS FOR VIDEO CODING WITH JOINT QUANTIZATION AND NORMALIZATION PROCEDURES, invented by Louis Kerofsky, filed Nov. 30, 2001, and also claims the benefit of U.S. Provisional Patent Application Ser. No. 60/311,436 entitled, REDUCED BIT-DEPTH QUANTIZATION, invented by Louis Kerofsky, filed Aug. 9, 2001. More than one Divisional Reissue Application has been filed for the reissue of application Ser. No. 12/837,154. The reissue applications are application Ser. Nos. 13/301,502 (the present application), 13/301,430, 13/301,476, and 13/301,526, all of which are divisional reissues of application Ser. No. 12/837,154, and all of which were filed on Nov. 21, 2011.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention generally relates to video compression techniques and, more particularly, to a method for reducing the bit size required in the computation of video coding transformations.
2. Description of the Related Art
A video information format provides visual information suitable to activate a television screen, or store on a video tape. Generally, video data is organized in a hierarchical order. A video sequence is divided into groups of frames, and each group can be composed of a series of single frames. Each frame is roughly equivalent to a still picture, with the still pictures being updated often enough to simulate a presentation of continuous motion. A frame is further divided into slices, or horizontal sections, which aids error resilience in system design. Each slice is coded independently so that errors do not propagate across slices. A slice consists of macroblocks. In H.26P and Motion Picture Experts Group (MPEG)-X standards, a macroblock is made up of 16×16 luma pixels and a corresponding set of chroma pixels, depending on the video format. A macroblock always has an integer number of blocks, with the 8×8 pixel matrix being the smallest coding unit.
Video compression is a critical component for any application which requires transmission or storage of video data. Compression techniques compensate for motion by reusing stored information from previously coded frames (temporal redundancy). Compression also occurs by transforming data in the spatial domain to the frequency domain. Hybrid digital video compression, exploiting temporal redundancy by motion compensation and spatial redundancy by transformation, such as the Discrete Cosine Transform (DCT), has been adopted as the basis of the H.26P and MPEG-X international standards.
As stated in U.S. Pat. No. 6,317,767 (Wang), DCT and inverse discrete cosine transform (IDCT) are widely used operations in the signal processing of image data. Both are used, for example, in the international standards for moving picture video compression put forth by the MPEG. DCT has certain properties that produce simplified and efficient coding models. When applied to a matrix of pixel data, the DCT is a method of decomposing a block of data into a weighted sum of spatial frequencies, or DCT coefficients. Conversely, the IDCT is used to transform a matrix of DCT coefficients back to pixel data.
Digital video (DV) codecs are one example of a device using a DCT-based data compression method. In the blocking stage, the image frame is divided into N by N blocks of pixel information including, for example, brightness and color data for each pixel. A common block size is eight pixels horizontally by eight pixels vertically. The pixel blocks are then “shuffled” so that several blocks from different portions of the image are grouped together. Shuffling enhances the uniformity of image quality.
Different fields are recorded at different time instants. For each block of pixel data, a motion detector looks for the difference between two fields of a frame. The motion information is sent to the next processing stage. In the next stage, pixel information is transformed using a DCT. An 8×8 DCT, for example, takes eight inputs and returns eight outputs in both vertical and horizontal directions. The resulting DCT coefficients are then weighted by multiplying each block of DCT coefficients by weighting constants.
The weighted DCT coefficients are quantized in the next stage. Quantization maps each DCT coefficient within a certain range of values to the same number. Quantizing tends to set the higher-frequency components of the frequency matrix to zero, resulting in much less data to be stored. Since the human eye is most sensitive to lower frequencies, however, very little perceptible image quality is lost by this stage.
The quantization stage includes converting the two-dimensional matrix of quantized coefficients to a one-dimensional linear stream of data by reading the matrix values in a zigzag pattern and dividing the one-dimensional linear stream of quantized coefficients into segments, where each segment consists of a string of zero coefficients followed by a non-zero quantized coefficient. Variable length coding (VLC) then is performed by transforming each segment, consisting of the number of zero coefficients and the amplitude of the non-zero coefficient in the segment, into a variable length codeword. Finally, a framing process packs every 30 blocks of variable length coded quantized coefficients into five fixed-length synchronization blocks.
Decoding is essentially the reverse of the encoding process described above. The digital stream is first deframed. Variable length decoding (VLD) then unpacks the data so that it may be restored to the individual coefficients. After inverse quantizing the coefficients, inverse weighting and an inverse discrete cosine transform (IDCT) are applied to the result. The inverse weights are the multiplicative inverses of the weights that were applied in the encoding process. The output of the inverse weighting function is then processed by the IDCT.
Much work has been done studying means of reducing the complexity in the calculation of DCT and IDCT. Algorithms that compute two-dimensional IDCTs are called “type I” algorithms. Type I algorithms are easy to implement on a parallel machine, that is, a computer formed of a plurality of processors operating simultaneously in parallel. For example, when using N parallel processors to perform a matrix multiplication on N×N matrices, N column multiplies can be simultaneously performed. Additionally, a parallel machine can be designed so as to contain special hardware or software instructions for performing fast matrix transposition.
One disadvantage of type I algorithms is that more multiplications are needed. The computation sequence of type I algorithms involves two matrix multiplies separated by a matrix transposition which, if N=4, for example, requires 64 additions and 48 multiplications for a total number of 112 instructions. It is well known by those skilled in the art that multiplications are very time-consuming for processors to perform and that system performance is often optimized by reducing the number of multiplications performed.
A two-dimensional IDCT can also be obtained by converting the transpose of the input matrix into a one-dimensional vector using an L function. Next, the tensor product of constant matrices is obtained. The tensor product is then multiplied by the one-dimensional vector L. The result is converted back into an N×N matrix using the M function. Assuming again that N=4, the total number of instructions used by this computational sequence is 92 instructions (68 additions and 24 multiplications). Algorithms that perform two-dimensional IDCTs using this computational sequence are called “type II” algorithms. In type II algorithms, the two constant matrices are grouped together and performed as one operation. The advantage of type II algorithms is that they typically require fewer instructions (92 versus 112) and, in particular, fewer costly multiplications (24 versus 48). Type II algorithms, however, are very difficult to implement efficiently on a parallel machine. Type II algorithms tend to reorder the data very frequently and reordering data on a parallel machine is very time-intensive.
Numerous type I and type II algorithms exist for implementing IDCTs; however, dequantization has been treated as a step independent of the DCT and IDCT calculations. Efforts to provide bit-exact DCT and IDCT definitions have led to the development of efficient integer transforms. These integer transforms typically increase the dynamic range of the calculations. As a result, the implementation of these algorithms requires processing and storing data that consists of more than 16 bits.
It would be advantageous if intermediate stage quantized coefficients could be limited to a maximum size in transform processes.
It would be advantageous if a quantization process could be developed that was useful for 16-bit processors.
It would be advantageous if a decoder implementation, dequantization, and inverse transformation could be implemented efficiently with a 16-bit processor. Likewise, it would be advantageous if the multiplication could be performed with no more than 16 bits, and if memory access required no more than 16 bits.
SUMMARY OF THE INVENTION
The present invention is an improved process for video compression. Typical video coding algorithms predict one frame from previously coded frames. The error is subjected to a transform and the resulting values are quantized. The quantizer controls the degree of compression: it determines both the amount of information used to represent the video and the quality of the reconstruction.
The problem is the interaction of the transform and quantization in video coding. In the past, the transform and quantizer have been designed independently. The transform, typically the discrete cosine transform, is normalized. The result of the transform is quantized in standard ways using scalar or vector quantization. In prior work (MPEG-1, MPEG-2, MPEG-4, H.261, H.263), the definition of the inverse transform has not been bit exact. This allows the implementer some freedom to select a transform algorithm suitable for their platform. A drawback of this approach is the potential for an encoder/decoder mismatch damaging the prediction loop. To solve this mismatch problem, portions of the image are periodically coded without prediction. Current work, for example H.26L, has focused on using integer transforms that allow a bit-exact definition. Integer transforms may not be normalized. The transform is designed so that a final shift can be used to normalize the results of the calculation rather than intermediate divisions. Quantization also requires division. H.26L provides an example of how these integer transforms are used along with quantization.
In H.26L Test Model Long-term 8, normalization is combined with quantization and implemented via integer multiplications and shifts following forward transform and quantization and following dequantization and inverse transform. H.26L TML uses two arrays of integers A(QP) and B(QP) indexed by quantization parameter (QP), see Table 1. These values are constrained by the relation shown below in Equation 1.
A(QP)·B(QP)·676^2 ≈ 2^40
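As a rough check of Equation 1, taking the TML values A(0)=620 and B(0)=3881 (quoted here as an assumption, since Table 1 is not reproduced), 620·3881·676^2 ≈ 1.0996·10^12, which is within about 0.01% of 2^40 ≈ 1.0995·10^12.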
Normalization and quantization are performed simultaneously using these integers and divisions by powers of 2. Transform coding in H.26L uses a 4×4 block size and an integer transform matrix T, Equation 2. For a 4×4 block X, the transform coefficients K are calculated as in Equation 3. From the transform coefficients, the quantization levels, L, are calculated by integer multiplication. At the decoder the levels are used to calculate a new set of coefficients, K′. Additional integer matrix transforms followed by a shift are used to calculate the reconstructed values X′. The encoder is allowed freedom in calculation and rounding of the forward transform. Both encoder and decoder must compute exactly the same answer for the inverse calculations.
Equation 2 H.26L Test Model 8 Transform Matrix
Equation 3
Y=T·X
K=Y·T^T
L=(ATML(QP)·K)/2^20
K′=BTML(QP)·L
Y′=T^T·K′
X′=(Y′·T)/2^20
where the intermediate result Y is the result of a one-dimensional transform and the intermediate result Y′ is the result of a one-dimensional inverse transform.
The dynamic range required during these calculations can be determined. For the primary application, which involves 9-bit input (8 bits plus sign), the dynamic range required by intermediate registers and memory accesses is presented in Table 2.
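The following C sketch mirrors the quantization step of Equation 3 and makes the dynamic-range growth explicit; the single A_TML entry shown is an assumed placeholder for Table 1, and the bit widths noted in the comments are approximate.

```c
#include <stdint.h>

/* Quantization step of Equation 3: L = (A_TML(QP) * K) / 2^20.
 * Only the QP = 0 entry is filled in, as an assumed placeholder for Table 1. */
static const int32_t A_TML[32] = { 620 /* QP = 0; remaining entries omitted */ };

int32_t quantize_tml(int32_t K, int qp)
{
    /* K already carries the un-normalized gain of the two 1-D transforms, so
     * both K and the product A_TML(QP) * K exceed 16 bits; this is the
     * dynamic-range growth summarized in Table 2. */
    int64_t product = (int64_t)A_TML[qp] * K;
    return (int32_t)(product >> 20);   /* normalization deferred to a single final shift */
}
```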
To maintain bit-exact definitions and incorporate quantization, the dynamic range of intermediate results can be large, since division operations are postponed. The present invention combines quantization and normalization to eliminate the growth of the dynamic range of intermediate results. With the present invention, the advantages of bit-exact inverse transform and quantization definitions are kept, while the bit depth required for these calculations is controlled. Reducing the required bit depth reduces the complexity of a hardware implementation and enables efficient use of single instruction multiple data (SIMD) operations, such as the Intel MMX instruction set.
Accordingly, a method is provided for the quantization of a coefficient. The method comprises: receiving a coefficient K; receiving a quantization parameter (QP); forming a quantization value (L) from the coefficient K using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)). Typically, the value of x is 2.
In some aspects of the method, forming a quantization value (L) from the coefficient K includes:
In other aspects, the method further comprises: normalizing the quantization value by 2^N as follows:
In some aspects, forming a quantization value includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/x. Therefore, forming a set of recursive quantization factors includes forming recursive mantissa factors, where Am(QP)=Am(QP mod P). Likewise, forming a set of recursive quantization factors includes forming recursive exponential factors, where Ae(QP)=Ae(QP mod P)−QP/P.
More specifically, receiving a coefficient K includes receiving a coefficient matrix K[i][j]. Then, forming a quantization value (L) from the coefficient matrix K[i][j] includes forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]).
Likewise, forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. Every element in the exponential portion matrix is the same value for a period (P) of QP values, where Ae(QP)=Ae(P*(QP/P)).
Additional details of the above-described method, including a method for forming a dequantization value (X1) from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)), are provided below.
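A minimal C sketch of these steps follows. The base tables, the value of N, and the exact placement of the 2^N normalization are assumptions for illustration (the corresponding equations are not reproduced here); the sketch only shows how the period-P recursion and the mantissa/exponent split reduce the quantization to one small multiply and one shift.

```c
#include <stdint.h>

#define P 6    /* recursion period, as in the H.26L example below */
#define N 20   /* assumed overall normalization shift */

/* Placeholder base values for QP = 0..P-1; the actual entries would come
 * from a table such as Table 3 and are not reproduced here. */
static const int32_t Am_base[P] = { 5, 9, 8, 7, 6, 11 };
static const int32_t Ae_base[P] = { 7, 6, 6, 6, 6, 5 };

/* Recursive structure: Am(QP) = Am(QP mod P), Ae(QP) = Ae(QP mod P) - QP/P,
 * so that A(QP) = Am(QP) * 2^Ae(QP) is halved once per period P. */
static int32_t Am(int qp) { return Am_base[qp % P]; }
static int32_t Ae(int qp) { return Ae_base[qp % P] - qp / P; }

/* One plausible realization of the combined step: L ~ A(QP) * K / 2^N,
 * computed with only the mantissa in the product and the exponent folded
 * into the final shift. */
int32_t quantize(int32_t K, int qp)
{
    return (K * Am(qp)) >> (N - Ae(qp));
}
```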
The dynamic range requirements of the combined transform and quantization are reduced by factoring the quantization parameters A(QP) and B(QP) into mantissa and exponent terms as shown in Equation 4. With this structure, only the precision due to the mantissa term needs to be preserved during calculation. The exponent term can be included in the final normalization shift. This is illustrated in the sample calculation of Equation 5.
Equation 4 Structure of Quantization Parameters
Aproposed(QP)=Amantissa(QP)·2^Aexponent(QP)
Bproposed(QP)=Bmantissa(QP)·2^Bexponent(QP)
Equation 5
Y=T·X
K=Y·T^T
L=(Amantissa(QP)·K)/2^(20−Aexponent(QP))
K′=T^T·L
Y′=K′·T
X′=(Bmantissa(QP)·Y′)/2^(20−Bexponent(QP))
To illustrate the present invention, a set of quantization parameters is presented that reduce the dynamic range requirement of an H.26L decoder to 16-bit memory access. The memory access of the inverse transform is reduced to 16 bits. Values for Amantissa, Aexponent, Bmantissa, Bexponent, Aproposed, Bproposed are defined for QP=0−5 as shown in Table 3. Additional values are determined by recursion, as shown in Equation 6. The structure of these values makes it possible to generate new quantization values in addition to those specified.
Equation 6
Amantissa(QP+6)=Amantissa(QP)
Bmantissa(QP+6)=Bmantissa(QP)
Aexponent(QP+6)=Aexponent(QP)−1
Bexponent(QP+6)=Bexponent(QP)+1
Using the defined parameters, the transform calculations can be modified to reduce the dynamic range as shown in Equation 5. Note how only the mantissa values contribute to the growth of dynamic range. The exponent factors are incorporated into the final normalization and do not impact the dynamic range of intermediate results.
With these values and computational method, the dynamic range at the decoder is reduced so only 16-bit memory access is needed as seen in Table 4.
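The decoder-side effect of the factoring can be sketched in C as follows; the Bm table and B_EXP0 are placeholders rather than the Table 3 values, and the inverse transform itself is omitted, so the sketch only shows how the mantissa multiply and the folded exponent keep the final scaling of Equation 5 within a 16-bit by 16-bit product.

```c
#include <stdint.h>

#define PERIOD 6

/* Placeholder mantissa table and base exponent standing in for Table 3. */
static const int16_t Bm[PERIOD] = { 61, 55, 49, 44, 39, 35 };
static const int     B_EXP0     = 6;

/* Final scaling of Equation 5, applied to one element of Y' = T^T * L * T:
 *   X' = (Bmantissa(QP) * Y') / 2^(20 - Bexponent(QP)),
 * with Bmantissa(QP) = Bm[QP % 6] and Bexponent(QP) = B_EXP0 + QP/6
 * (the recursion of Equation 6). */
int16_t scale_reconstructed(int16_t y_prime, int qp)
{
    int be = B_EXP0 + qp / PERIOD;
    int32_t prod = (int32_t)Bm[qp % PERIOD] * y_prime;   /* 16-bit by 16-bit multiply */
    return (int16_t)(prod >> (20 - be));                 /* exponent folded into one shift */
}
```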
Several refinements can be applied to the joint quantization/normalization procedure described above. The general technique of factoring the parameters into a mantissa and exponent forms the basis of these refinements.
The discussion above assumes all basis functions of the transform have an equal norm and are quantized identically. Some integer transforms have the property that different basis functions have different norms. The present invention technique has been generalized to support transforms having different norms by replacing the scalars A(QP) and B(QP) above by matrices A(QP)[i][j] and B(QP)[i][j]. These parameters are linked by a normalization relation of the form shown below, Equation 7, which is more general than the single relation shown in Equation 1.
Equation 7 Joint Quantization/Normalization of Matrices
A(QP)[i][j]·B(QP)[i][j]=N[i][j]
Following the method previously described, each element of each matrix is factored into a mantissa and an exponent term as illustrated in the equations below, Equation 8.
Equation 8 Factorization of Matrix Parameters
A(QP)[i][j]=Amantissa(QP)[i][j]·2^Aexponent(QP)[i][j]
B(QP)[i][j]=Bmantissa(QP)[i][j]·2^Bexponent(QP)[i][j]
A large number of parameters are required to describe these quantization and dequantization parameters. Several structural relations can be used to reduce the number of free parameters. The quantizer growth is designed so that the values of A are halved after each period P at the same time the values of B are doubled maintaining the normalization relation. Additionally, the values of Aexponent(QP)[i][j] and Bexponent(QP)[i][j] are independent of i, j and (QP) in the range [0,P−1]. This structure is summarized by structural equations, Equation 9. With this structure there are only two parameters Aexponent[0] and Bexponent[0].
Equation 9 Structure of Exponent Terms
Aexponent(QP)[i][j]=Aexponent[0]−QP/P
Bexponent(QP)[i][j]=Bexponent[0]+QP/P
A structure is also defined for the mantissa values. For each index pair (i,j), the mantissa values are periodic with period P. This is summarized by the structural equation, Equation 10. With this structure, there are P independent matrices for Amantissa and P independent matrices for Bmantissa reducing memory requirements and adding structure to the calculations.
Equation 10 Structure of Mantissa Terms
Amantissa(QP)[i][j]=Amantissa(QP % P)[i][j]
Bmantissa(QP)[i][j]=Bmantissa(QP % P)[i][j]
The inverse transform may include integer division that requires rounding. In cases of interest, the division is by a power of 2. The rounding error is reduced by designing the dequantization factors to be multiples of the same power of 2, giving no remainder following division.
Dequantization using the mantissa values Bmantissa(QP) gives dequantized values that are normalized differently depending upon QP. This must be compensated for following the inverse transform. A form of this calculation is shown in Equation 11.
Equation 11 Normalization of Inverse Transform I
K[i][j]=Bmantissa(QP % P)[i][j]·Level[i][j]
X=(T^−1·K·T)/2^(N−QP/P)
To eliminate the need for the inverse transform to compensate for this normalization difference, the dequantization operation is defined so that all dequantized values have the same normalization. The form of this calculation is shown in Equation 12.
Equation 12 Normalization of Inverse Transform II
K[i][j]=Bmantissa(QP % P)[i][j]·Level[i][j]·2^(QP/P)
X=(T^−1·K·T)/2^N
An example follows that illustrates the use of quantization matrices in the present invention. The forward and inverse transforms defined in Equation 13 need a quantization matrix rather than a single scalar quantization value. Sample quantization and dequantization parameters are given. Equations 14 and 16, together with related calculations, illustrate the use of this invention. This example uses a period P=6.
The forward transformation and forward quantization for this example, Equations 17 and 18, are described below, assuming the input is X and the quantization parameter is QP.
Equation 17 Forward Transform
K=Tforward·X·Tforward^T
Equation 18
period=QP/6
phase=QP−6·period
Level[i][j]=(Q(phase)[i][j]·K[i][j])/2^(17+period)
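In C, the forward quantization of Equation 18 might look like the sketch below; the Q matrices stand in for the example's quantization matrices (Equation 14), which are not reproduced here, so their entries are placeholders.

```c
#include <stdint.h>

/* Placeholder quantization mantissa matrices, one per phase (QP % 6);
 * the actual values are those of Equation 14. */
static const int32_t Q[6][4][4] = { { { 0 } } };

/* Forward quantization of Equation 18. */
void forward_quantize(const int32_t K[4][4], int qp, int32_t Level[4][4])
{
    int period = qp / 6;
    int phase  = qp - 6 * period;   /* equivalently QP % 6 */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            Level[i][j] = (int32_t)(((int64_t)Q[phase][i][j] * K[i][j]) >> (17 + period));
}
```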
The dequantization, inverse transform, and normalization for this example are described below in Equations 19 and 20.
Equation 19 Dequantization
period=QP/6
phase=QP−6·period
K[i][j]=R(phase)[i][j]·Level[i][j]·2^period
Equation 20
X′=Treverse·K·Treverse^T
X″[i][j]=X′[i][j]/2^7
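A self-contained C sketch of this decoder path follows. The R matrices and the inverse transform matrix T_REV are placeholders (the example's actual matrices in Equations 13 and 16 are not reproduced), so the sketch only illustrates the order of operations: phase-selected dequantization with a shift by the period, the integer inverse transform, and a final shift of 7 that does not depend on QP.

```c
#include <stdint.h>

static int32_t R[6][4][4]  = { { { 1 } } };   /* placeholder dequantization matrices */
static int32_t T_REV[4][4] = { { 1 } };       /* placeholder inverse-transform matrix */

/* out = a * b * a^T for 4x4 integer matrices (64-bit accumulation). */
static void mat4_aba_t(int32_t a[4][4], int32_t b[4][4], int32_t out[4][4])
{
    int64_t tmp[4][4];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            tmp[i][j] = 0;
            for (int k = 0; k < 4; k++)
                tmp[i][j] += (int64_t)a[i][k] * b[k][j];
        }
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            int64_t acc = 0;
            for (int k = 0; k < 4; k++)
                acc += tmp[i][k] * a[j][k];   /* right-multiply by a^T */
            out[i][j] = (int32_t)acc;
        }
}

void decode_block(int32_t level[4][4], int qp, int32_t x_out[4][4])
{
    int period = qp / 6;
    int phase  = qp - 6 * period;   /* equivalently QP % 6 */
    int32_t K[4][4], Xp[4][4];

    /* Equation 19: K[i][j] = R(phase)[i][j] * Level[i][j] * 2^period */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            K[i][j] = (R[phase][i][j] * level[i][j]) << period;

    /* Equation 20: X' = Treverse * K * Treverse^T, then X''[i][j] = X'[i][j] / 2^7 */
    mat4_aba_t(T_REV, K, Xp);
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            x_out[i][j] = Xp[i][j] >> 7;   /* normalization shift is independent of QP */
}
```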
In some aspects of the method, forming a quantization value (L) from the coefficient K using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)) in Step 106 includes:
Some aspects of the method include a further step. Step 108 normalizes the quantization value by 2^N as follows:
In other aspects, forming a quantization value in Step 106 includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/x. Likewise, forming a set of recursive quantization factors includes forming recursive mantissa factors, where Am(QP)=Am(QP mod P). Then, forming a set of recursive quantization factors includes forming recursive exponential factors, where Ae(QP)=Ae(QP mod P)−QP/P.
In some aspects, forming a quantization value includes forming a set of recursive quantization factors with a period P, where A(QP+P)=A(QP)/2. In other aspects, forming a set of recursive quantization factors includes forming recursive mantissa factors, where P=6. Likewise, forming a set of recursive quantization factors includes forming recursive exponential factors, where P=6.
In some aspects of the method, receiving a coefficient K in Step 102 includes receiving a coefficient matrix K[i][j]. Then, forming a quantization value (L) from the coefficient matrix K[i][j] using a mantissa portion (Am(QP)) and an exponential portion (x^Ae(QP)) in Step 106 includes forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]). Likewise, forming a quantization value matrix (L[i][j]) using a mantissa portion matrix (Am(QP)[i][j]) and an exponential portion matrix (x^Ae(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. Typically, every element in the exponential portion matrix is the same value for a period (P) of QP values, where Ae(QP)=Ae(P*(QP/P)).
Some aspects of the method include a further step. Step 110 forms a dequantization value (X1) from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)). Again, the exponential portion (x^Be(QP)) typically includes x being the value 2.
In some aspects of the method, forming a dequantization value (X1) from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (2^Be(QP)) includes:
Other aspects of the method include a further step, Step 112, of denormalizing the quantization value by 2^N as follows:
In some aspects, forming a dequantization value in Step 110 includes forming a set of recursive dequantization factors with a period P, where B(QP+P)=x*B(QP). Then, forming a set of recursive dequantization factors includes forming recursive mantissa factors, where Bm(QP)=Bm(QP mod P). Further, forming a set of recursive dequantization factors includes forming recursive exponential factors, where Be(QP)=Be(QP mod P)+QP/P.
In some aspects, forming a set of recursive quantization factors with a period P includes the value of x being equal to 2, and forming recursive mantissa factors includes the value of P being equal to 6. Then, forming a set of recursive dequantization factors includes forming recursive exponential factors, where Be(QP)=Be(QP mod P)+QP/P.
In some aspects of the method, forming a dequantization value (X1), from the quantization value, using a mantissa portion (Bm(QP)) and an exponential portion (x^Be(QP)) in Step 110 includes forming a dequantization value matrix (X1[i][j]) using a mantissa portion matrix (Bm(QP)[i][j]) and an exponential portion matrix (x^Be(QP)[i][j]). Likewise, forming a dequantization value matrix (X1[i][j]) using a mantissa portion matrix (Bm(QP)[i][j]) and an exponential portion matrix (x^Be(QP)[i][j]) includes, for each particular value of QP, every element in the exponential portion matrix being the same value. In some aspects, every element in the exponential portion matrix is the same value for a period (P) of QP values, where Be(QP)=Be(P*(QP/P)).
Another aspect of the invention includes a method for the dequantization of a coefficient. However, the process is essentially the same as Steps 110 and 112 above, and is not repeated in the interest of brevity.
A method for the quantization of a coefficient has been presented. An example is given illustrating a combined dequantization and normalization procedure applied to the H.26L video coding standard with a goal of reducing the bit-depth required at the decoder to 16 bits. The present invention concepts can also be used to meet other design goals within H.26L. In general, this invention has application to the combination of normalization and quantization calculations.
Embodiments of the present invention may be implemented as hardware, firmware, software and other implementations. Some embodiments may be implemented on general purpose computing devices or on computing devices specifically designed for implementation of these embodiments. Some embodiments may be stored in memory as a means of storing the embodiment or for the purpose of executing the embodiment on a computing device.
Some embodiments of the present invention comprise systems and methods for video encoding, as shown in
Quantization module 136 may have other inputs, such as user inputs 131 for establishing quantization parameters (QPs) and for other input. Quantization module 136 may use the transformation coefficients and the quantization parameters to determine quantization levels (L) in the video image. Quantization module 136 may use methods employing a mantissa portion and an exponential portion, however, other quantization methods may also be employed in the quantization modules 136 of embodiments of the present invention. These quantization levels 135 and quantization parameters 133 are output to a coding module 138 as well as a dequantization module (DQ) 140.
Output to the coding module 138 is encoded and transmitted outside the encoder for immediate decoding or storage. Coding module 138 may use variable length coding (VLC) in its coding processes. Coding module 138 may use arithmetic coding in its coding process.
Output from quantization module 136 is also received at dequantization module 140 to begin reconstruction of the image. This is done to keep an accurate accounting of prior frames. Dequantization module 140 performs a process with essentially the reverse effect as quantization module 136. Quantization levels or values (L) are dequantized yielding transform coefficients. Dequantization modules 140 may use methods employing a mantissa portion and an exponential portion as described herein.
The transform coefficients output from dequantization module 140 are sent to an inverse transformation (IT) module 142 where they are inverse transformed to a differential image 141. This differential image 141 is then combined with data from prior image frames 145 to form a video frame 149 that may be input to a frame memory 146 for reference to succeeding frames.
Video frame 149 may also serve as input to a motion estimation module 147, which also receives input image data 130. These inputs may be used to predict image similarities and help compress image data. Output from motion estimation module 147 is sent to motion compensation module 148 and combined with output data from coding module 138, which is sent out for later decoding and eventual image viewing.
Motion compensation module 148 uses the predicted image data to reduce frame data requirements; its output is subtracted from input image data 130.
Some embodiments of the present invention comprise systems and methods for video decoding, as shown in
Decoder module 152 may employ variable length decoding methods if they were used in the encoding process. Other decoding methods may also be used as dictated by the type of encoded data 150. Decoding module 152 performs essentially the reverse process as coding module 138. Output from decoding module 152 may comprise quantization parameters 156 and quantization values 154. Other output may comprise motion estimation data and image prediction data that may be sent directly to a motion compensation module 166.
Typically, quantization parameters 156 and quantization values 154 are output to a dequantization module 158, where quantization values are converted back to transform coefficients. These coefficients are then sent to an inverse transformation module 160 for conversion back to spatial domain image data 161.
The motion compensation unit 166 uses motion vector data and the frame memory 164 to construct a reference image 165.
Image data 161 represents a differential image that must be combined 162 with prior image data 165 to form a video frame 163. This video frame 163 is output 168 for further processing, display or other purposes and may be stored in frame memory 164 and used for reference with subsequent frames.
In some embodiments of the present invention, as illustrated in
When desired, encoded video data may be read from storage media 106 and decoded by a decoder or decoding portion 108 for output 110 to a display or other device.
In some embodiments of the present invention, as illustrated in
In some embodiments of the present invention, as illustrated in
In some embodiments of the present invention, as illustrated in
Some embodiments of the present invention, as illustrated in
Some embodiments of the present invention, as illustrated in
Some embodiments of the present invention may be stored on computer-readable media such as magnetic media, optical media, and other media as well as combinations of media. Some embodiments may also be transmitted as signals across networks and communication media. These transmissions and storage actions may take place as part of operation of embodiments of the present invention or as a way of transmitting the embodiment to a destination.
Typical methods of dequantization, inverse transformation, and normalization may be expressed mathematically in equation form. These methods, as illustrated in
In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved. The processes of these embodiments, illustrated in
In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters. The processes of these embodiments, illustrated in
w̃α = cα·R^(QP % P) >> (QP/P) (Equation 27)
x″α = [x̃′α + (f << E^(QP % P))] >> (M − E^(QP % P)) (Equation 28)
In embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters. Additionally, the normalization process is independent of QP. This eliminates the need to communicate an exponential value for use in the normalization process. In these embodiments, the exponential portion, previously described as E^QP, is held constant and incorporated into normalization 248, thereby negating the need to transmit the value as is done in previously described embodiments. The processes of these embodiments, illustrated in
x″α = (x̃′α + 2^z) >> M̃ (Equation 29)
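The two normalization variants can be contrasted in a short C sketch; the rounding offset f, the shifts M, z, and M̃, and the exponent value are treated as assumed parameters here, since their definitions accompany figures that are not reproduced in this text.

```c
#include <stdint.h>

/* QP-dependent normalization of Equation 28: the shift and the rounding
 * offset both depend on the exponent E(QP % P). */
static inline int16_t normalize_qp(int32_t x_tilde_prime, int f, int e_qp_mod_p, int m)
{
    return (int16_t)((x_tilde_prime + ((int32_t)f << e_qp_mod_p)) >> (m - e_qp_mod_p));
}

/* QP-independent normalization of Equation 29: a constant rounding offset
 * 2^z and a constant shift M~, so no exponent value needs to be
 * communicated to the normalization stage. */
static inline int16_t normalize_const(int32_t x_tilde_prime, int z, int m_tilde)
{
    return (int16_t)((x_tilde_prime + (1 << z)) >> m_tilde);
}
```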
In further embodiments of the present invention, a reduction in bit depth for inverse transformation calculations is achieved together with a reduction in memory needed to store dequantization parameters and the normalization process is independent of QP thereby eliminating the need to communicate an exponential value for use in the normalization process. These embodiments also express the quantization scaling factor mantissa portion as a matrix. This matrix format allows frequency dependent quantization, which allows the processes of these embodiments to be used in coding schemes that comprise frequency-dependent transformation.
In these embodiments, the exponential portion, previously described as E^QP, may be held constant and incorporated into normalization 248 as previously explained. The processes of these embodiments, illustrated in
In these embodiments, the equivalent of a dequantization scaling factor Sα^QP is factored 254 into a mantissa portion Rα^QP 252 and a constant exponential portion E^QP that is incorporated into normalization 248. The mantissa portion, Rα^QP 252, may double with each increment of QP by P as previously described. The exponential portion E^QP is constant. The mantissa portion 252 is used during dequantization 250 to calculate the reconstructed transform coefficients (w̃α) 242, which are used in the inverse transformation process 228 to calculate reconstructed samples (x̃′α) 244. These reconstructed samples may then be normalized using the constant exponential portion that is incorporated into normalization 248 according to Equation 27, thereby yielding reconstructed samples, x″α 234. Using these methods, the values of w̃α and x̃′α require E^QP fewer bits for representation. This factorization enables mathematically equivalent calculation of the reconstructed samples using lower intermediate precision as described above and in Equations 25, 27 & 29. In these embodiments the dequantization scaling factor portion is expressed as a matrix. This format is expressed in Equation 30 with the subscript α.
w̃α = cα·Rα^(QP % P) >> (QP/P) (Equation 30)
Typical methods of quantization may be expressed mathematically in equation form. These methods, as illustrated in
g = k·S^QP (Equation 31)
c = g >> M (Equation 32)
In embodiments of the present invention, a reduction in bit depth for quantization calculations is achieved together with a reduction in memory needed to store quantization parameters. The processes of these embodiments, illustrated in
g̃ = k·R^(QP % P) (Equation 33)
c = g̃ >> (M − E^QP) (Equation 34)
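A brief C sketch of the reduced form follows; the mantissa table, the exponent structure E^QP = E0 − QP/P, and the shift M are assumptions for illustration rather than values taken from the text.

```c
#include <stdint.h>

#define P  6    /* period */
#define E0 4    /* assumed exponent at QP = 0, so E^QP = E0 - QP/P */
#define M  20   /* assumed normalization shift */

/* Placeholder mantissa table standing in for R^(QP % P). */
static const int32_t R_Q[P] = { 13, 12, 11, 10, 9, 8 };

/* Equations 33-34:  g~ = k * R^(QP % P),  c = g~ >> (M - E^QP).
 * This matches the direct form c = (k * S^QP) >> M of Equations 31-32 when
 * S^QP = R^(QP % P) * 2^(E^QP), but the intermediate g~ needs E^QP fewer
 * bits of dynamic range than g = k * S^QP. */
int32_t quantize_reduced(int32_t k, int qp)
{
    int e_qp = E0 - qp / P;
    int32_t g_tilde = k * R_Q[qp % P];
    return g_tilde >> (M - e_qp);
}
```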
Other variations and embodiments of the invention will occur to those skilled in the art.
Claims
1. A method for dequantization and inverse transformation, said method comprising:
- (a) receiving a matrix of quantized coefficient levels;
- (b) receiving at least one quantization parameter (QP);
- (c) determining a reconstructed transform coefficient (RTC) matrix wherein each value in said quantized coefficient level matrix is scaled by a value in a scaling matrix which is dependent on QP % P, where P is a constant value;
- (d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said RTC matrix values; and
- (e) computing reconstructed samples, by normalizing the SRS values.
2. A method as described in claim 1 wherein P=6.
3. A method as described in claim 1 wherein said scaling matrix is a 4×4 matrix.
4. A method as described in claim 1 wherein said scaling matrix is an 8×8 matrix.
5. A method as described in claim 1 wherein said at least one quantization parameter (QP) comprises a chroma quantization parameter.
6. A method as described in claim 1 wherein said at least one quantization parameter (QP) comprises a luma quantization parameter.
7. A method as described in claim 1 wherein said at least one quantization parameter (QP) comprises a chroma quantization parameter for each chroma channel.
8. A method as described in claim 1 wherein said at least one quantization parameter (QP) comprises a chroma quantization parameter for each chroma channel and a luma quantization parameter.
9. A method for dequantization and inverse transformation, said method comprising:
- (a) receiving a matrix of quantized coefficient levels (QCL matrix);
- (b) receiving a quantization parameter (QP);
- (c) calculating a scaling matrix using a weighting matrix scaled by a dequantization matrix selected using QP % P;
- (d) determining a reconstructed transform coefficient (RTC) matrix wherein said QCL matrix is scaled by said scaling matrix;
- (e) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said RTC matrix values; and
- (f) computing reconstructed samples, by normalizing the SRS values with a constant shift operation.
10. A method as described in claim 9 further comprising shifting said RTC matrix values by a value dependent on QP/P before said computing scaled reconstructed samples.
11. A method for dequantization and inverse transformation, said method comprising:
- (a) fixing a limited set of scaling matrices, wherein each of said scaling matrices in said limited set of scaling matrices is dependent on an associated quantization parameter QP and an associated constant parameter P according to the relation QP % P;
- (b) receiving a quantized coefficient level (QCL) matrix;
- (c) determining a reconstructed transform coefficient (RTC) matrix wherein each value in said quantized coefficient level matrix is scaled by a value in a scaling matrix that is selected from said limited set of scaling matrices;
- (d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said RTC matrix values; and
- (e) computing reconstructed samples, by normalizing the SRS values.
12. An apparatus for dequantization and inverse transformation, said apparatus comprising:
- (a) a QCL receiver for receiving a matrix of quantized coefficient levels (QCLs);
- (b) a QP receiver for receiving at least one quantization parameter (QP);
- (c) a processor, wherein said processor is capable of determining a reconstructed transform coefficient (RTC) matrix wherein each value in said quantized coefficient level matrix is scaled by a value in a scaling matrix which is dependent on QP % P, where P is a constant value;
- (d) said processor comprising a further capability of computing scaled reconstructed samples (SRS) by performing an inverse transformation on said RTC matrix values; and
- (e) said processor comprising the capability of computing reconstructed samples, by normalizing said SRS values.
13. A computer-readable medium encoded with computer executable instructions for dequantization and inverse transformation, said instructions comprising:
- (a) receiving a matrix of quantized coefficient levels;
- (b) receiving at least one quantization parameter (QP);
- (c) determining a reconstructed transform coefficient (RTC) matrix wherein each value in said quantized coefficient level matrix is scaled by a value in a scaling matrix which is dependent on QP % P, where P is a constant value;
- (d) computing scaled reconstructed samples (SRS) by performing an inverse transformation on said RTC matrix values; and
- (e) computing reconstructed samples, by normalizing the SRS values.
14. A video coding device for quantization configured to derive a quantized value matrix L by quantizing a transform coefficient matrix K, wherein:
- the video coding device is configured to derive an element L[i][j] of the quantized value matrix L, by using an element K[i][j] of the transform coefficient matrix K, a quantization parameter QP and a mantissa portion matrix element A(QP)[i][j] being a function of the quantization parameter QP, as
- L[i][j]=K[i][j]×A(QP mod P)[i][j]×2^(A0−QP/P), where A0 and P are integers.
15. A video coding device according to claim 14, wherein the mantissa portion matrix element has, by representing the quantization parameter QP as m, the following structures;
- A(m)[i][j]=Mm,0 (where (i,j) are both even numbers)
- A(m)[i][j]=Mm,1 (where (i,j) are both odd numbers)
- A(m)[i][j]=Mm,2 (where (i,j) are other than the above-described cases),
- where Mm,0, Mm,1, Mm,2 are elements of P×3 matrix.
16. A video decoding device for dequantization configured to derive a transform coefficient matrix K by dequantizing a quantized value matrix L, wherein:
- the video decoding device is configured to derive an element K[i][j] of the transform coefficient matrix K, by using an element L[i][j] of the quantized value matrix L, a quantization parameter QP and a mantissa portion matrix element B(QP)[i][j] being a function of the quantization parameter QP, as
- K[i][j]=L[i][j]×B(QP mod P)[i][j]×2^(B0+QP/P), where P is an integer.
17. A video decoding device according to claim 16, wherein the mantissa portion matrix element has, by representing the quantization parameter QP as m, the following structures;
- B(m)[i][j]=Sm,0 (where (i,j) are both even numbers)
- B(m)[i][j]=Sm,1 (where (i,j) are both odd numbers)
- B(m)[i][j]=Sm,2 (where (i,j) are other than the above-described cases),
- where Sm,0, Sm,1, Sm,2 are elements of P×3 matrix.
5230038 | July 20, 1993 | Fielder et al. |
5345408 | September 6, 1994 | Hoogenboom |
5471412 | November 28, 1995 | Shyu |
5479364 | December 26, 1995 | Jones et al. |
5590067 | December 31, 1996 | Jones et al. |
5594678 | January 14, 1997 | Jones et al. |
5596517 | January 21, 1997 | Jones et al. |
5640159 | June 17, 1997 | Furlan et al. |
5748793 | May 5, 1998 | Sanpei |
5754457 | May 19, 1998 | Eitan et al. |
5764553 | June 9, 1998 | Hong |
5822003 | October 13, 1998 | Girod et al. |
5845112 | December 1, 1998 | Nguyen et al. |
6081552 | June 27, 2000 | Stevenson et al. |
6160920 | December 12, 2000 | Shyu |
6212236 | April 3, 2001 | Nishida et al. |
6856262 | February 15, 2005 | Mayer et al. |
6876703 | April 5, 2005 | Ismaeil et al. |
20030099291 | May 29, 2003 | Kerofsky |
20030112876 | June 19, 2003 | Kerofsky |
20030123553 | July 3, 2003 | Kerofsky |
20040013202 | January 22, 2004 | Lainema |
20040046754 | March 11, 2004 | Mayer et al. |
2221181 | December 1993 | CA |
2264609 | September 1993 | GB |
3-270573 | December 1991 | JP |
4-503136 | June 1992 | JP |
4-504192 | July 1992 | JP |
04-222121 | August 1992 | JP |
4-222121 | August 1992 | JP |
06-046269 | February 1993 | JP |
5-95483 | April 1993 | JP |
07-099578 | September 1993 | JP |
05-307467 | November 1993 | JP |
5-307467 | November 1993 | JP |
6-46269 | February 1994 | JP |
06-053839 | February 1994 | JP |
6-53839 | February 1994 | JP |
06-077842 | March 1994 | JP |
6-77842 | March 1994 | JP |
7-99578 | April 1995 | JP |
2004-506990 | March 2004 | JP |
0172902 | March 1999 | KR |
0172902 | March 1999 | KR |
WO 90/09022 | August 1990 | WO |
WO 90/09064 | August 1990 | WO |
- Bjontegaard, “H.26L Test Model Long Term No. 7 (TML-7) Draft0”, ITU—Telecommunications Standardization Sector, Study Group 16, Video Coding Experts Group (VCEG), Document VCEG-M81, Thirteenth Meeting, Austin, Texas, USA, Apr. 2-4, 2001, pp. 1-36.
- Bjontegaard, “H.26L Test Model Long Term No. 8 (TML-8) Draft0”, ITU—Telecommunications Standardization Sector, Study Group 16, Video Coding Experts Group (VCEG), Document VCEG-Nxx, Apr. 2, 2001 (Generated: Jun. 28, 2001), pp. 1-46.
- Extended European Search Report, dated Nov. 4, 2010, for European Application No. 10179630.8.
- Extended European Search Report, dated Nov. 4, 2010, for European Application No. 10179636.5.
- Hallapuro et al., “Low Complexity (I)DCT”, ITU-Telecommunications Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), 15th Meeting: Pattaya, Thailand, Dec. 4-6, 2001, pp. 1-11.
- Hallapuro et al., “Low Complexity Transform and Quantization—Part II: Extensions,” Document: JVT-B039, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 2nd Meeting: Geneva, CH, Jan. 29-Feb. 1, 2002, pp. 1-14.
- Hallapuro, et al., “Low Complexity Transform and Quantization—Part 1: Basic Implementation,” Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JCTI/SC29/WG11 and ITU-T SG16 Q.6), Document: JVT-B038, XP030005037, Jan. 14, 2002, pp. 1-18.
- Joint Video Team, T. Wiegand (Contact), “Draft ITU-T Recommendation H.264 (a.k.a. “H.26L”)”, ITU-Telecommunications Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), Document: VCEG-P07, 16th Meeting: Fairfax, Virginia, USA, May 6-10, 2002, 141 pages.
- Kerofsky et al., “Reduced Bit-Depth Quantization,” ITU—Telecommunications Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), Document VCEG-N20, Fourteenth Meeting: Santa Barbara, CA, USA, Sep. 24-27, 2001, pp. 1-14.
- Kerofsky, “Modifications to the JVT IDCT”, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document: JVT-C25, 3rd Meeting: Fairfax, Virginia, USA, May 6-10, 2002, pp. 1-14.
- Kerofsky, “Notes on JVT IDCT”, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), Document: JVT-C24, 3rd Meeting: Fairfax, Virginia, USA, May 6-10, 2002, pp. 1-8.
- Klomp et al., “TE1: Cross-Checking Results of DMVD Proposal JCTVC-B076 (MediaTek Inc.)”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, Document: JCTVC-B119, 2nd Meeting: Geneva, CH, Jul. 21-28, 2010, 1 page.
- Korean Office Action, dated Jan. 13, 2006, for Korean Application No. 10-2004-7001865 (English translation only provided).
- Lepley et al., “Report on Core Experiment CodEff9: Integer Quantization”, Coding of Still Pictures, ISO/IEC JTC1/SC29/WG1 (ITU-T SG8), XP017205064, Oct. 21, 1998, 5 pages.
- Liang et al., “A 16-bit Architecture for H.26L, Treating DCT Transforms and Quantization”, ITU—Telecommunications Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), Document VCEG-M16, XP002332050, Austin, TX, USA, Apr. 2-4, 2001, pp. 1-17.
- Sun et al., “Global Motion Vector Coding (GMVC)”, ITU-Telecommunications Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), Document: VCEG-O20, Fifteenth Meeting: Pattaya, Thailand, Dec. 4-7, 2001, pp. 1-10.
Type: Grant
Filed: Nov 21, 2011
Date of Patent: May 21, 2013
Assignee: Sharp Kabushiki Kaisha (Osaka)
Inventor: Louis Joseph Kerofsky (Camas, WA)
Primary Examiner: Allen Wong
Application Number: 13/301,502
International Classification: H04N 7/12 (20060101);