Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto

- Dolby Labs

A method of coding at least one image comprising the steps of splitting the image into a plurality of blocks, of grouping said blocks into a predetermined number of subsets of blocks, of coding each of said subsets of blocks in parallel, the blocks of a subset considered being coded according to a predetermined sequential order of traversal. The coding step comprises, for a current block of a subset considered, the sub-step of predictive coding of said current block with respect to at least one previously coded and decoded block, and the sub-step of entropy coding of said current block on the basis of at least one probability of appearance of a symbol.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/003,014, filed Sep. 4, 2013, which is the U.S. national phase of International Patent Application No. PCT/FR2012/050380, filed Feb. 23, 2012, which claims the benefit of French Application No. 1151849, filed Mar. 7, 2011, the entire contents of which are incorporated herein by reference.

FIELD OF THE INVENTION

The present invention pertains generally to the field of image processing, and more precisely to the coding and to the decoding of digital images and of sequences of digital images.

The invention can thus, especially, be applied to the video coding implemented in current video coders (MPEG, H.264, etc.) or forthcoming video coders (ITU-T/VCEG (H.265) or ISO/MPEG (HVC)).

BACKGROUND OF THE INVENTION

Current video coders (MPEG, H.264, etc.) use a block-wise representation of the video sequence. The images are cut up into macro-blocks, each macro-block is itself cut up into blocks and each block, or macro-block, is coded by intra-image or inter-image prediction. Thus, certain images are coded by spatial prediction (intra prediction), while other images are coded by temporal prediction (inter prediction) with respect to one or more coded-decoded reference images, with the aid of a motion compensation known by the person skilled in the art. Moreover, for each block, a residual block corresponding to the original block minus its prediction can be coded. The coefficients of this block are quantized after an optional transformation, and then coded by an entropy coder.

Intra prediction and inter prediction require that certain blocks which have been previously coded and decoded are available, so as to be used, either at the decoder or at the coder, to predict the current block. A schematic example of such a predictive coding is represented in FIG. 1A, in which an image IN is divided into blocks, a current block MBi of this image being subjected to a predictive coding with respect to a predetermined number of three blocks MBr1, MBr2 and MBr3 previously coded and decoded, as designated by the hatched arrows. The aforementioned three blocks specifically comprise the block MBr1 situated immediately to the left of the current block MBi, and the two blocks MBr2 and MBr3 situated respectively immediately above, and above and to the right of, the current block MBi.

Of more particular interest here is the entropy coder. The entropy coder encodes the information in its order of arrival. Typically a row-by-row traversal of the blocks is carried out, of “raster-scan” type, as illustrated in FIG. 1A by the reference PRS, starting from the block at the top left of the image. For each block, the various items of information necessary for the representation of the block (type of block, mode of prediction, residual coefficients, etc.) are dispatched sequentially to the entropy coder.

An effective arithmetical coder of reasonable complexity is already known, called “CABAC” (“Context Adaptive Binary Arithmetic Coder”), introduced into the AVC compression standard (also known as ISO-MPEG4 part 10 and ITU-T H.264).

This entropy coder implements various concepts:

    • arithmetical coding: the coder, such as described initially in the document J. Rissanen and G. G. Langdon Jr, “Universal modeling and coding,” IEEE Trans. Inform. Theory, vol. IT-27, pp. 12-23, January 1981, uses, to code a symbol, a probability of occurrence of this symbol;
    • adaptation to context: this entails adapting the probability of occurrence of the symbols to be coded. On the one hand, on-the-fly learning is carried out. On the other hand, according to the state of the previously coded information, a specific context is used for the coding. To each context there corresponds an inherent probability of occurrence of the symbol. For example a context corresponds to a type of coded symbol (the representation of a coefficient of a residual, signaling of coding mode, etc.) according to a given configuration, or a state of the neighborhood (for example the number of “intra” modes selected from the neighborhood, etc.);
    • binarization: the symbols to be coded are cast into the form of a string of bits. Subsequently, these various bits are successively dispatched to the binary entropy coder.

Thus, this entropy coder implements, for each context used, a system for learning probabilities on the fly with respect to the previously coded symbols for the context considered. This learning is based on the order of coding of these symbols. Typically, the image is traversed according to an order of “raster-scan” type, described hereinabove.

During the coding of a given symbol b that may equal 0 or 1, the learned probability p_i of occurrence of this symbol is updated for a current block MBi in the following manner:

$$p_i(b=0) = \alpha \cdot p_{i-1}(b=0) + \begin{cases} (1-\alpha) & \text{if the coded bit is } 0 \\ 0 & \text{otherwise} \end{cases}$$

where α is a predetermined value, for example 0.95, and p_{i-1} is the probability of occurrence of this symbol calculated during its last occurrence.
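Purely as an illustration of this update rule, a short sketch is given below; the function name and the value of α are taken from the example above and are not part of any coding standard.

```python
# Illustrative sketch of the adaptive probability update described above;
# alpha = 0.95 is only the example value quoted in the text.
def update_probability(p_zero, coded_bit, alpha=0.95):
    """Return p_i(b = 0) from p_{i-1}(b = 0) after one coded bit."""
    return alpha * p_zero + ((1.0 - alpha) if coded_bit == 0 else 0.0)

# Example: a run of zeros pulls the estimate towards 1, a one pulls it back down.
p = 0.5
for bit in (0, 0, 0, 1, 0):
    p = update_probability(p, bit)
```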

A schematic example of such an entropy coding is represented in FIG. 1A, in which a current block MBi of the image IN is subjected to an entropy coding. When the entropy coding of the block MBi begins, the symbol occurrence probabilities used are those obtained after coding of a previously coded and decoded block, which is that which immediately precedes the current block MBi in accordance with the aforementioned row-by-row traversal of the blocks of “raster scan” type. Such a learning based on block-by-block dependency is represented in FIG. 1A for certain blocks only for the sake of clarity of the figure, by the thin-line arrows.

The drawback of such a type of entropy coding resides in the fact that during the coding of a symbol situated at the start of a row, the probabilities used correspond mainly to those observed for the symbols situated at the end of the previous row, having regard to the “raster scan” traversal of the blocks. Now, on account of the possible spatial variation of the probabilities of the symbols (for example for a symbol related to an item of motion information, the motion situated in the right part of an image may be different from that observed in the left part and therefore likewise for the local probabilities stemming therefrom), a lack of local appropriateness of the probabilities may be observed, with the risk of causing a loss of effectiveness during coding.

The document “Annex A: CDCM Video Codec Decoder Specification” available at the Internet address http://wftp3.itu.int/av-arch/jctvc-site/201004_A_Dresden/JCTVC-A114-AnnexA.doc (on 8 Feb. 2011) describes a coding method which alleviates the drawback mentioned above. The coding method described in the above document comprises, as illustrated in FIG. 1B:

    • a step of cutting an image IN into a plurality of blocks,
    • a step of predictive coding of a current block MBi of this image with respect to a predetermined number of three blocks MBr1, MBr2 and MBr3 previously coded and decoded, as designated by the hatched arrows. The aforementioned three blocks specifically comprise the block MBr1 situated immediately to the left of the current block MBi, and the two blocks MBr2 and MBr3 situated respectively immediately above, and above and to the right of, the current block MBi,
    • a step of entropy coding of the blocks of the image IN, according to which each block uses the probabilities of symbol occurrence calculated respectively for the coded and decoded block which is situated immediately above the current block and for the coded and decoded block which is situated immediately to the left of the current block, when these blocks are available. This use of the probabilities of symbol occurrence is represented partially in FIG. 1B, for the sake of clarity of the latter, by the thin-line arrows.

The advantage of such an entropy coding is that it exploits the probabilities arising from the immediate environment of the current block, thereby making it possible to achieve higher coding performance. Furthermore, the coding technique used makes it possible to code in parallel a predetermined number of pairwise neighboring subsets of blocks. In the example represented in FIG. 1B, three subsets SE1, SE2 and SE3 are coded in parallel, each subset consisting in this example of a row of blocks, represented dashed. Of course, such a coding requires that the blocks situated respectively above and above to the right of the current block be available.

A drawback of this parallel coding technique is that, to allow access to a probability of symbol occurrence calculated for the block situated immediately above the current block, it is necessary to store a quantity of probabilities associated with a row of blocks. If the second row of blocks SE2 is considered for example in FIG. 1B, the first block of this row is subjected to an entropy coding by using the probabilities of symbol occurrence calculated for the first block of the previous first row SE1. On completion of the coding of the first block of the second row, the state of the value V1 of probability of occurrence is stored in a buffer memory MT. The second block of the second row SE2 is thereafter subjected to an entropy coding by using the probabilities of symbol occurrence calculated at one and the same time for the second block of the first row SE1 and the first block of the second row SE2. On completion of the coding of the second block of the second row, the state of the value V2 of probability of occurrence is stored in the buffer memory MT. This procedure is undertaken until the last block of the second row SE2. Since the quantity of probabilities is very large (there exist as many probabilities as the combination of the number of syntax elements with the number of associated contexts), the storage of these probabilities over an entire row is expensive in terms of memory resources.
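To give an order of magnitude for this memory cost, the sketch below compares storing one probability snapshot per block of a row, as the scheme of FIG. 1B requires, with storing a single snapshot; every number used (context count, state size, blocks per row) is an illustrative assumption and not a value given in the text.

```python
# Purely illustrative arithmetic; all of these figures are assumptions.
num_contexts = 400       # assumed number of (syntax element, context) pairs
bytes_per_state = 2      # assumed size of one probability state
blocks_per_row = 120     # assumed number of blocks in one row of the image

one_snapshot = num_contexts * bytes_per_state       # a single set of probabilities
whole_row = one_snapshot * blocks_per_row           # FIG. 1B: one snapshot per block of a row

print(f"single snapshot of the probabilities: {one_snapshot} bytes")
print(f"snapshots kept for an entire row:     {whole_row} bytes")
# The coding described further on keeps only one such snapshot per row (that of the jth block).
```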

SUBJECT AND SUMMARY OF THE INVENTION

One of the aims of the invention is to remedy drawbacks of the aforementioned prior art.

For this purpose, a subject of the present invention relates to a method of coding at least one image comprising the steps of:

    • cutting of the image into a plurality of blocks,
    • grouping of the blocks into a predetermined number of subsets of blocks,
    • coding in parallel of each of said subsets of blocks, the blocks of a subset considered being coded according to a predetermined sequential order of traversal, said coding step comprising, for a current block of a subset considered, the sub-steps of:
    • predictive coding of the current block with respect to at least one previously coded and decoded block,
    • entropy coding of the current block by learning of at least one probability of symbol occurrence.

The method according to the invention is noteworthy in that:

    • in the case where the current block is the first block to be coded of a subset considered, the probability of symbol occurrence is that which was calculated for a coded and decoded predetermined block of at least one other subset,
    • in the case where the current block is a block of the subset considered, other than the first block, the probability of symbol occurrence is that which was calculated for at least one coded and decoded block belonging to the same subset.

Such an arrangement makes it possible to store in the buffer memory of the coder a much smaller quantity of probabilities of occurrence of symbols, since the entropy coding of a current block other than the first block of a subset of blocks no longer necessarily requires the use of the probabilities of symbol occurrence calculated for a previously coded and decoded block which is situated above a current block in another subset.

Such an arrangement furthermore makes it possible to maintain the existing compression performance, since the entropy coding of a current block uses probabilities of symbol occurrence calculated for another previously coded and decoded block of the subset to which the current block belongs, and that consequently a learning has already been carried out through the updating of the probabilities so that the latter are in accordance with the statistics of the video signal.

The main advantage of using the probabilities of symbol occurrence calculated for the first block of said other subset during the entropy coding of the first current block of a considered subset of blocks is that of economizing on the coder's buffer memory by storing in the latter just the update of said probabilities of occurrence of symbols, without taking into account the symbol occurrence probabilities learned by the other consecutive blocks of said other subset.

The main advantage of using the symbol occurrence probabilities calculated for a block of said other subset, other than the first block, for example the second block, during the entropy coding of the first current block of a considered subset of blocks is that of obtaining more precise and therefore better learning of the probabilities of occurrence of symbols, thereby giving rise to better video compression performance.

In a particular embodiment, the coded and decoded block belonging to the same subset as the current block to be coded other than the first block of the subset is that which is the nearest neighbor of the current block to be coded.

Such an arrangement thus makes it possible to store just the symbol occurrence probabilities learned during the entropy coding of a first block of a subset considered, since in this particular case, account is taken only of the symbol occurrence probability calculated for the block situated above the first current block and belonging to another subset. This results in optimization of the reduction in the size of the coder's memory resources.

In another particular embodiment, in the case where the predictive coding of a block of a subset considered is intended to be performed with respect to a predetermined number of previously coded and decoded blocks of a subset other than said subset considered, the parallel coding of the blocks of said subset considered is performed with a shift by said predetermined number of blocks with respect to the subset of blocks immediately preceding in the order in which the parallel coding is performed.

Such an arrangement makes it possible, for a current subset of blocks to be coded, to synchronize with the progress of processing of the blocks of the subset of blocks preceding the current subset in the order in which the parallel coding is performed, thereby making it possible to guarantee the availability of the block or blocks of the preceding subset which are used for the coding of a current block. In this manner, the step of verifying the availability of this or these blocks of the preceding subset, such as is implemented in parallel coders of the prior art, can advantageously be omitted, thereby shortening the processing time required to process the blocks in the coder according to the invention.

Correlatively, the invention further relates to a device for coding at least one image comprising:

    • means for cutting the image into a plurality of blocks,
    • means for grouping the blocks into a predetermined number of subsets of blocks,
    • means for parallel coding of each of the subsets of blocks, the blocks of a subset considered being coded according to a predetermined sequential order of traversal, the coding means comprising, for a current block of a subset considered:
    • sub-means of predictive coding of the current block with respect to at least one previously coded and decoded block,
    • sub-means of entropy coding of the current block on the basis of at least one probability of symbol occurrence.

Such a coding device is noteworthy in that:

    • in the case where the current block is the first block to be coded of a subset considered, the sub-means of entropy coding take account, for the entropy coding of the first current block, of the probability of symbol occurrence which was calculated for a coded and decoded predetermined block of at least one other subset,
    • in the case where the current block is a block of the subset considered, other than the first block of the latter, the sub-means of entropy coding take account, for the entropy coding of the current block, of the probability of symbol occurrence which was calculated for at least one coded and decoded block belonging to the same subset.

In a corresponding manner, the invention also relates to a method of decoding a stream representative of at least one coded image, comprising the steps of:

    • identification in the image of a predetermined number of subsets of blocks to be decoded,
    • parallel decoding of parts of the stream that are associated with each of the subsets of blocks, the blocks of a subset considered being decoded according to a predetermined sequential order of traversal, the decoding step comprising, for a current block of a subset considered, the sub-steps of:
    • entropy decoding of the current block on the basis of at least one probability of symbol occurrence
    • predictive decoding of the current block with respect to at least one previously decoded block.

Such a decoding method is noteworthy in that:

    • in the case where the current block is the first block to be decoded of a subset considered, the probability of symbol occurrence is that which was calculated for a decoded predetermined block of at least one other subset,
    • in the case where the current block is a block of the subset considered, other than the first block of the latter, the probability of symbol occurrence is that which was calculated for at least one decoded block belonging to the same subset.

In a particular embodiment, the decoded block belonging to the same subset as the current block to be decoded other than the first block of the subset is that which is the nearest neighbor of the current block to be decoded.

In another particular embodiment, in the case where the predictive decoding of a block of a subset considered is intended to be performed with respect to a predetermined number of previously coded and decoded blocks of a subset other than the subset considered, the parallel decoding of the blocks of the subset considered is performed with a shift by the predetermined number of blocks with respect to the subset of blocks immediately preceding in the order in which the parallel decoding is performed.

Correlatively, the invention further relates to a device for decoding a stream representative of at least one coded image, comprising:

    • identification means for identifying in the image a predetermined number of subsets of blocks to be decoded,
    • means of parallel decoding of parts of the stream that are associated with each of the subsets of blocks, the blocks of a subset considered being decoded according to a predetermined sequential order of traversal, the decoding means comprising, for a current block of a subset considered:
    • sub-means of entropy decoding of the current block on the basis of at least one probability of symbol occurrence,
    • sub-means of predictive decoding of the current block with respect to at least one previously decoded block.

Such a decoding device is noteworthy in that:

    • in the case where the current block is the first block to be decoded of a subset considered, the sub-means of entropy decoding take account, for the entropy decoding of the first current block, of the probability of symbol occurrence which was calculated for a decoded predetermined block of at least one other subset,
    • in the case where the current block is a block of the subset considered, other than the first block of the latter, the sub-means of entropy decoding take account, for the entropy decoding of the current block, of the probability of symbol occurrence which was calculated for at least one decoded block belonging to the same subset.

The invention is also aimed at a computer program comprising instructions for the execution of the steps of the coding or decoding method hereinabove, when the program is executed by a computer.

Such a program can use any programming language, and be in the form of source code, object code, or of code intermediate between source code and object code, such as in a partially compiled form, or in any other desirable form.

Yet another subject of the invention is a recording medium readable by a computer, comprising computer program instructions such as mentioned hereinabove.

The recording medium can be any entity or device capable of storing the program. For example, such a medium can comprise a storage means, such as a ROM, for example a CD ROM or a microelectronic circuit ROM, or else a magnetic recording means, for example a diskette (floppy disk) or a hard disk.

Moreover, such a recording medium can be a transmissible medium such as an electrical or optical signal, which can be conveyed via an electrical or optical cable, by radio or by other means. The program according to the invention can be in particular downloaded on a network of Internet type.

Alternatively, such a recording medium can be an integrated circuit into which the program is incorporated, the circuit being adapted for executing the method in question or to be used in the execution of the latter.

The coding device, the decoding method, the decoding device and the computer programs aforementioned present at least the same advantages as those conferred by the coding method according to the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

Other characteristics and advantages will become apparent on reading two preferred embodiments described with reference to the figures in which:

FIG. 1A represents an image coding diagram of the prior art according to a first example,

FIG. 1B represents an image coding diagram of the prior art according to a second example,

FIG. 2A represents the main steps of the coding method according to the invention,

FIG. 2B represents in detail the parallel coding implemented in the coding method of FIG. 2A,

FIG. 3A represents an embodiment of a coding device according to the invention,

FIG. 3B represents a coding unit for the coding device of FIG. 3A,

FIG. 4A represents an image coding/decoding diagram according to a first preferential embodiment,

FIG. 4B represents an image coding/decoding diagram according to a second preferential embodiment,

FIG. 5A represents the main steps of the decoding method according to the invention,

FIG. 5B represents in detail the parallel decoding implemented in the decoding method of FIG. 5A,

FIG. 6A represents an embodiment of a decoding device according to the invention,

FIG. 6B represents a decoding unit for the decoding device of FIG. 6A.

DETAILED DESCRIPTION OF AN EMBODIMENT OF THE CODING PART

An embodiment of the invention will now be described, in which the coding method according to the invention is used to code a sequence of images according to a binary stream close to that obtained by a coding according to the H.264/MPEG-4 AVC standard. In this embodiment, the coding method according to the invention is for example implemented in a software or hardware manner by modifications of a coder initially in accordance with the H.264/MPEG-4 AVC standard. The coding method according to the invention is represented in the form of an algorithm comprising steps C1 to C5, represented in FIG. 2A.

According to the embodiment of the invention, the coding method according to the invention is implemented in a coding device CO represented in FIG. 3A.

With reference to FIG. 2A, the first coding step C1 is the cutting of an image IE of a sequence of images to be coded into a plurality of blocks or macro-blocks MB, as represented in FIG. 4A or 4B. In the examples represented, said blocks MB have a square shape and all have the same size. Since the size of the image is not necessarily a multiple of the size of the blocks, the last blocks on the right and the last blocks at the bottom may not be square. In an alternative embodiment, the blocks can be for example of rectangular size and/or not aligned with one another.

Each block or macroblock can moreover itself be divided into sub-blocks which are themselves subdividable.

Such a cutting is performed by a partitioning module PCO represented in FIG. 3A which uses for example a partitioning algorithm well known as such.
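As a rough sketch of this cutting step, and only under the assumption of a fixed nominal block size with the edge blocks clipped to the image border (which the text allows), the partitioning can be written as:

```python
# Minimal sketch of step C1: cutting an image into blocks in raster order.
# The nominal block size of 16 is an assumption; blocks on the right and
# bottom borders are clipped when the image size is not a multiple of it.
def cut_into_blocks(width, height, block_size=16):
    """Yield (x, y, w, h) for each block of the image."""
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            yield (x, y, min(block_size, width - x), min(block_size, height - y))

blocks = list(cut_into_blocks(1920, 1080))
```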

With reference to FIG. 2A, the second coding step C2 is the grouping of the aforementioned blocks into a predetermined number P of consecutive subsets of blocks SE1, SE2, . . . , SEk, . . . , SEP intended to be coded in parallel. In the examples represented in FIGS. 4A and 4B, the predetermined number P is equal to 4 and the four subsets SE1, SE2, SE3, SE4, represented dashed, consist respectively of the first four rows of blocks of the image IE.

Such a grouping is performed by a calculation module GRCO represented in FIG. 3A, with the aid of an algorithm well known per se.

With reference to FIG. 2A, the third coding step C3 consists in the coding in parallel of each of said subsets of blocks SE1, SE2, SE3 and SE4, the blocks of a subset considered being coded according to a predetermined sequential order of traversal PS. In the examples represented in FIGS. 4A and 4B, the blocks of a current subset SEk (1≦k≦4) are coded one after the other, from left to right, as indicated by the arrow PS.

Such a coding in parallel is implemented by a number R of coding units UCk (1≦k≦R) with R=4 as represented in FIG. 3A and allows a substantial acceleration of the coding method. In a manner known as such, the coder CO comprises a buffer memory MT which is adapted for containing the symbol occurrence probabilities such as progressively updated in tandem with the coding of a current block.

As represented in greater detail in FIG. 3B, each of the coding units UCk comprises:

    • a sub-unit for predictive coding of a current block with respect to at least one previously coded and decoded block, denoted SUCPk;
    • a sub-unit for entropy coding of said current block by using at least one probability of symbol occurrence calculated for said previously coded and decoded block, denoted SUCEk.

The predictive coding sub-unit SUCPk is able to perform a predictive coding of the current block, according to the conventional prediction techniques, such as for example in Intra and/or Inter mode.

The entropy coding sub-unit SUCEk is for its part of CABAC type, but modified according to the present invention, as will be described further on in the description.

As a variant, the entropy coding sub-unit SUCEk could be a Huffman coder known as such.

In the examples represented in FIGS. 4A and 4B, the first unit UC1 codes the blocks of the first row SE1, from left to right. When it reaches the last block of the first row SE1, it passes to the first block of the (N+1)st row, here the 5th row, etc. The second unit UC2 codes the blocks of the second row SE2, from left to right. When it reaches the last block of the second row SE2, it passes to the first block of the (N+2)nd row, here the 6th row, etc. This traversal is repeated until the unit UC4, which codes the blocks of the fourth row SE4, from left to right. When it reaches the last block of the fourth row SE4, it passes to the first block of the (N+4)th row, here the 8th row, and so on and so forth until the last block of the image IE is coded.
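The traversal just described amounts to assigning row k, then row k+N, then row k+2N, and so on, to coding unit UCk; a minimal sketch of that assignment, with the N = 4 units of the example:

```python
# Sketch of the row assignment described above: unit UCk codes rows k, k+N, k+2N, ...
# N = 4 coding units and 1-based row numbering follow the example of FIGS. 4A/4B.
def rows_for_unit(k, total_rows, n_units=4):
    return list(range(k, total_rows + 1, n_units))

# With 8 rows: UC1 -> [1, 5], UC2 -> [2, 6], UC3 -> [3, 7], UC4 -> [4, 8]
assignment = {k: rows_for_unit(k, total_rows=8) for k in range(1, 5)}
```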

Other types of traversal than that which has just been described hereinabove are of course possible. Thus, it is possible to cut the image IE into several sub-images and to independently apply a cutting of this type to each sub-image. It is also possible for each coding unit to process not nested rows, as explained hereinabove, but nested columns. It is also possible to traverse the rows or columns in either direction.

With reference to FIG. 2A, the fourth coding step C4 is the production of N sub-bitstreams Fk (1≦k≦N) representing the processed blocks compressed by each of the aforementioned coding units, as well as a decoded version of the processed blocks of each subset SEk. The decoded processed blocks of a subset considered, denoted SED1, SED2, . . . , SEDk, . . . , SEDP may be reused by some of the coding units UC1, UC2, . . . , UCk, . . . , UCP represented in FIG. 3A, according to a synchronization mechanism which will be detailed further on in the description.

With reference to FIG. 2A, the fifth coding step C5 consists in constructing a global stream F on the basis of the aforementioned sub-streams Fk. According to one embodiment, the sub-streams Fk are simply juxtaposed, with an extra information item intended to indicate to the decoder the location of each sub-stream Fk in the global stream F. The global stream F is transmitted by a communication network (not represented) to a remote terminal, which comprises the decoder DO represented in FIG. 6A.

Thus, as will be described in detail further on in the description, the decoder according to the invention is able to isolate the sub-streams Fk within the global stream F and to assign them to each component decoding unit of the decoder. It will be noted that such a decomposition of the global stream F into sub-streams is independent of the choice of the use of several coding units operating in parallel, and that it is possible with this approach to have just the coder or just the decoder which comprises units operating in parallel.

Such a construction of the global stream F is implemented in a stream construction module CF, such as represented in FIG. 3A.
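One possible realization of this construction step, given only as a sketch: the sub-streams are concatenated behind a small table of offsets, which plays the role of the extra information item mentioned above. The header layout (4-byte little-endian fields) is an assumption made for illustration, not a format defined here.

```python
import struct

# Hedged sketch of step C5 and of its inverse on the decoder side.
# The 4-byte little-endian header is an illustrative assumption.
def build_global_stream(sub_streams):
    header = struct.pack("<I", len(sub_streams))
    offset = 0
    for s in sub_streams:
        header += struct.pack("<I", offset)   # location of this sub-stream Fk
        offset += len(s)
    return header + b"".join(sub_streams)

def split_global_stream(stream):
    (count,) = struct.unpack_from("<I", stream, 0)
    offsets = [struct.unpack_from("<I", stream, 4 + 4 * i)[0] for i in range(count)]
    payload = stream[4 + 4 * count:]
    offsets.append(len(payload))
    return [payload[offsets[i]:offsets[i + 1]] for i in range(count)]

# Example: F = build_global_stream([f1, f2, f3, f4]); split_global_stream(F) == [f1, f2, f3, f4]
```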

The various specific sub-steps of the invention, such as are implemented during the aforementioned parallel coding step C3, in a coding unit UCk, will now be described with reference to FIG. 2B.

In the course of a step C31, the coding unit UCk selects as current block the first block to be coded of a current row SEk represented in FIG. 4A or 4B.

In the course of a step C32, the unit UCk tests whether the current block is the first block (situated at the top and on the left) of the image IE which has been cut up into blocks in the aforementioned step C1.

If such is the case, in the course of a step C33, the coding probabilities are initialized to values Pinit previously defined in the coder CO of FIG. 3A.

If such is not the case, there is undertaken, in the course of a step C40 which will be described later in the subsequent description, the determination of the availability of the necessary previously coded and decoded blocks.

In the course of a step C34, there is undertaken the coding of the first current block MB1 of the first row SE1 represented in FIG. 4A or 4B. Such a step C34 comprises a plurality of sub-steps C341 to C348 which will be described hereinbelow.

In the course of a first sub-step C341, there is undertaken the predictive coding of the current block MB1 by known techniques of intra and/or inter prediction, in the course of which the block MB1 is predicted with respect to at least one previously coded and decoded block.

It goes without saying that other modes of intra prediction such as proposed in the H.264 standard are possible.

The current block MB1 can also be subjected to a predictive coding in inter mode, in the course of which the current block is predicted with respect to a block arising from a previously coded and decoded image. Other types of prediction are of course conceivable. Among the possible predictions for a current block, the optimal prediction is chosen according to a rate distortion criterion well known to the person skilled in the art.

Said aforementioned predictive coding step makes it possible to construct a predicted block MBp1 which is an approximation of the current block MB1. The information relating to this predictive coding will subsequently be written into the stream F transmitted to the decoder DO. Such information comprises especially the type of prediction (inter or intra), and if appropriate, the mode of intra prediction, the type of partitioning of a block or macroblock if the latter has been subdivided, the reference image index and the displacement vector used in the inter prediction mode. This information is compressed by the coder CO.

In the course of a following sub-step C342, there is undertaken the subtraction of the predicted block MBp1 from the current block MB1 to produce a residual block MBr1.

In the course of a following sub-step C343, there is undertaken the transformation of the residual block MBr1 according to a conventional operation of direct transformation such as for example a discrete cosine transformation of DCT type, to produce a transformed block MBt1.

In the course of a following sub-step C344, there is undertaken the quantization of the transformed block MBt1 according to a conventional quantization operation, such as for example a scalar quantization. A block of quantized coefficients MBq1 is then obtained.

In the course of a following sub-step C345, there is undertaken the entropy coding of the block of quantized coefficients MBq1. In the preferred embodiment, this entails a CABAC entropy coding.

In the course of a following sub-step C346, there is undertaken the dequantization of the block MBq1 according to a conventional dequantization operation, which is the operation inverse to the quantization performed in step C344. A block of dequantized coefficients MBDq1 is then obtained.

In the course of a following sub-step C347, there is undertaken the inverse transformation of the block of dequantized coefficients MBDq1 which is the operation inverse to the direct transformation performed in step C343 hereinabove. A decoded residual block MBDr1 is then obtained.

In the course of a following sub-step C348, there is undertaken the construction of the decoded block MBD1 by adding the decoded residual block MBDr1 to the predicted block MBp1. It should be noted that this decoded block MBD1 is the same as the block obtained on completion of the method of decoding the image IE which will be described further on in the description. The decoded block MBD1 is thus rendered available to be used by the coding unit UC1 or any other coding unit forming part of the predetermined number R of coding units.
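Sub-steps C341 to C348 can be gathered into a single routine, sketched below. The toy predictor, the DCT, the quantization step and the `entropy_encoder.encode()` call are all placeholders chosen for illustration; the text only fixes the order of the operations, not their exact form.

```python
import numpy as np
from scipy.fft import dctn, idctn   # any direct/inverse transform would do here

QSTEP = 8  # illustrative scalar quantization step

def predict(left_decoded_block, shape=(16, 16)):
    """Toy prediction: copy of the left neighbour, or zeros at the image border."""
    return np.zeros(shape) if left_decoded_block is None else left_decoded_block.copy()

def code_block(block, left_decoded_block, entropy_encoder):
    """Sketch of sub-steps C341-C348; entropy_encoder is a hypothetical CABAC-like object."""
    predicted = predict(left_decoded_block, block.shape)   # C341: predictive coding
    residual = block - predicted                           # C342: subtraction
    transformed = dctn(residual, norm="ortho")             # C343: direct transform
    quantized = np.round(transformed / QSTEP)              # C344: scalar quantization
    entropy_encoder.encode(quantized)                      # C345: entropy coding
    dequantized = quantized * QSTEP                        # C346: dequantization
    decoded_residual = idctn(dequantized, norm="ortho")    # C347: inverse transform
    return predicted + decoded_residual                    # C348: decoded block MBD1
```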

On completion of the aforementioned coding step C34, the entropy coding sub-unit SUCEk such as represented in FIG. 3B contains all the probabilities such as progressively updated in tandem with the coding of the first block. These probabilities correspond to the various possible syntax elements and to the various associated coding contexts.

Subsequent to the aforementioned coding step C34, a test is performed, in the course of a step C35, to determine whether the current block is the jth block of this same row, where j is a known predetermined value of the coder CO which is at least equal to 1.

If such is the case, in the course of a step C36, the set of probabilities calculated for the jth block is stored in the buffer memory MT of the coder CO such as represented in FIG. 3A and in FIGS. 4A and 4B, the size of said memory being suitable for storing the calculated number of probabilities.

In the course of a step C37, the unit UCk tests whether the current block of the row SEk which has just been coded is the last block of the image IE.

If such is the case, in the course of a step C38, the coding method is ended.

If such is not the case, there is undertaken, in the course of step C39, the selection of the following block MBi to be coded in accordance with the order of traversal represented by the arrow PS in FIG. 4A or 4B.

If in the course of step C35, the current block is not the jth block of the row SEk considered, then step C37 hereinabove is undertaken.

In the course of a step C40, there is undertaken the determination of the availability of previously coded and decoded blocks which are necessary for coding the current block MBi. Having regard to the fact that this entails a parallel coding of the blocks of the image IE by different coding units UCk, it may be that these blocks were not coded and decoded by the coding unit assigned to the coding of these blocks and that they are therefore not yet available. Said determining step consists in verifying whether a predetermined number N′ of blocks situated in the previous row SEk-1, for example the two blocks situated respectively above and above to the right of the current block, are available for the coding of the current block, that is to say if they have already been coded and then decoded by the coding unit UCk-1 assigned to their coding. Said determining step also consists in verifying the availability of at least one block situated to the left of the current block to be coded MBi. However, having regard to the order of traversal PS chosen in the embodiment represented in FIG. 4A or 4B, the blocks are coded one after the other in a row SEk considered. Consequently, the left coded and decoded block is always available (with the exception of the first block of a row). In the example represented in FIG. 4A or 4B, this entails the block situated immediately to the left of the current block to be coded. For this purpose, only the availability of the two blocks situated respectively above and above to the right of the current block is tested.

This test step being liable to slow the coding method, in an alternative manner in accordance with the invention, a clock CLK represented in FIG. 3A is adapted for synchronizing the progress of the coding of the blocks so as to guarantee the availability of the two blocks situated respectively above and above to the right of the current block, without it being necessary to verify the availability of these two blocks. Thus, as represented in FIG. 4A or 4B, a coding unit UCk always begins to code the first block with a shift by a predetermined number N′ (here N′=2) of coded and decoded blocks of the previous row SEk-1 which are used for the coding of the current block. From a software point of view, the implementation of such a clock makes it possible to noticeably accelerate the processing time required to process the blocks of the image IE in the coder CO.
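The invariant enforced by the clock CLK can be stated in one line: block i of a row may start only once the row above has completed at least i + N′ blocks, which guarantees that the blocks above and above to the right are already coded and decoded. A minimal sketch, with N′ = 2 as in the text and 0-based block indices:

```python
N_PRIME = 2  # shift used in the example of FIGS. 4A/4B

def may_start(i, completed_in_row_above, is_first_row):
    """May block i (0-based) of the current row be coded?

    completed_in_row_above -- number of blocks already coded and decoded in the
    previous row; the blocks needed are those at columns i and i + 1 of that row,
    so the first i + N_PRIME blocks of the row above must be finished.
    """
    return is_first_row or completed_in_row_above >= i + N_PRIME
```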

In the course of a step C41, a test is performed to determine whether the current block is the first block of the row SEk considered.

If such is the case, in the course of a step C42, there is undertaken the reading in the buffer memory MT solely of the symbol occurrence probabilities calculated during the coding of the jth block of the previous row SEk-1.

According to a first variant represented in FIG. 4A, the jth block is the first block of the previous row SEk-1 (j=1). Such a reading consists in replacing the probabilities of the CABAC coder with those present in the buffer memory MT. Since these are the first respective blocks of the second, third and fourth rows SE2, SE3 and SE4, this reading step is depicted in FIG. 4A by the arrows represented by thin lines.

According to a second variant of the aforementioned step C42 which is illustrated in FIG. 4B, the jth block is the second block of the previous row SEk-1 (j=2). Such a reading consists in replacing the probabilities of the CABAC coder with those present in the buffer memory MT. Since these are the first respective blocks of the second, third and fourth rows SE2, SE3 and SE4, this reading step is depicted in FIG. 4B by the arrows represented by thin dashed lines.

Subsequent to step C42, the current block is coded and then decoded by iteration of steps C34 to C38 described above.

If subsequent to the aforementioned step C41, the current block is not the first block of the row SEk considered, there is advantageously not undertaken the reading of the probabilities arising from the previously coded and decoded block which is situated in the same row SEk, that is to say the coded and decoded block situated immediately to the left of the current block, in the example represented. Indeed, having regard to the sequential traversal of reading PS of the blocks situated in the same row, as represented in FIG. 4A or 4B, the symbol occurrence probabilities present in the CABAC coder when beginning the coding of the current block are exactly those which are present after coding/decoding of the preceding block in this same row.

Consequently, in the course of a step C43, there is undertaken the learning of the probabilities of symbol occurrence for the entropy coding of said current block, which correspond solely to those which were calculated for said preceding block in the same row, as is represented by the double solid arrows in FIG. 4A or 4B.

Subsequent to step C43, the current block is coded and then decoded by iteration of steps C34 to C38 described above.
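Steps C33, C35/C36 and C41 to C43 amount to the following handling of the entropy state along one row, sketched below. `initial_state()` and `code_one_block()` are hypothetical helpers: the first returns the default probabilities Pinit, the second codes one block while updating the probability state in place.

```python
import copy

# Sequential sketch of the probability-state handling for one row of blocks.
# j is the predetermined block index of the text (j = 1 in FIG. 4A, j = 2 in FIG. 4B).
def code_row(blocks, j, state_from_previous_row, is_first_row_of_image,
             initial_state, code_one_block):
    snapshot_for_next_row = None
    state = None
    for i, block in enumerate(blocks, start=1):
        if i == 1:
            # C33 or C42: start either from Pinit or from the snapshot of the
            # jth block of the previous row, read from the buffer memory MT.
            state = initial_state() if is_first_row_of_image \
                else copy.deepcopy(state_from_previous_row)
        # C43: for i > 1 the state simply carries over from the block to the left.
        code_one_block(block, state)                        # C34: updates state in place
        if i == j:
            snapshot_for_next_row = copy.deepcopy(state)    # C35/C36: stored in MT
    return snapshot_for_next_row
```

In the parallel coder each row runs in its own coding unit; the wavefront shift described above guarantees that the snapshot needed at the start of a row is already available when that row begins.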

DETAILED DESCRIPTION OF AN EMBODIMENT OF THE DECODING PART

An embodiment of the decoding method according to the invention will now be described, in which the decoding method is implemented in a software or hardware manner by modifications of a decoder initially in accordance with the H.264/MPEG-4 AVC standard.

The decoding method according to the invention is represented in the form of an algorithm comprising steps D1 to D4, represented in FIG. 5A.

According to the embodiment of the invention, the decoding method according to the invention is implemented in a decoding device DO represented in FIG. 6A.

With reference to FIG. 5A, the first decoding step D1 is the identification in said stream F of the N sub-streams F1, F2, . . . , Fk, . . . , FP containing respectively the N subsets SE1, SE2, . . . , SEk, . . . , SEP of previously coded blocks or macro-blocks MB, as represented in FIG. 4A or 4B. For this purpose, each sub-stream Fk in the stream F is associated with an indicator intended to allow the decoder DO to determine the location of each sub-stream Fk in the stream F. In the example represented, said blocks MB have a square shape and all have the same size. Since the size of the image is not necessarily a multiple of the size of the blocks, the last blocks on the right and the last blocks at the bottom may not be square. In an alternative embodiment, the blocks can be for example of rectangular size and/or not aligned with one another.

Each block or macroblock can moreover itself be divided into sub-blocks which are themselves subdividable.

Such an identification is performed by a stream extraction module EXDO such as represented in FIG. 6A.

In the example represented in FIG. 4A or 4B, the predetermined number is equal to 4 and the four subsets SE1, SE2, SE3, SE4 are represented dashed.

With reference to FIG. 5A, the second decoding step D2 is the decoding in parallel of each of said subsets of blocks SE1, SE2, SE3 and SE4, the blocks of a subset considered being decoded according to a predetermined sequential order of traversal PS. In the example represented in FIG. 4A or 4B, the blocks of a current subset SEk (1≦k≦4) are decoded one after the other, from left to right, as indicated by the arrow PS. On completion of step D2, the subsets of decoded blocks SED1, SED2, SED3, . . . , SEDk, . . . , SEDP are obtained.

Such a decoding in parallel is implemented by a number R of decoding units UDk (1≦k≦R) with R=4 as represented in FIG. 6A and allows a substantial acceleration of the decoding method. In a manner known as such, the decoder DO comprises a buffer memory MT which is adapted for containing the probabilities of occurrence of symbols such as progressively updated in tandem with the decoding of a current block.

As represented in greater detail in FIG. 6B, each of the decoding units UDk comprises:

    • a sub-unit for entropy decoding of said current block by learning of at least one probability of symbol occurrence calculated for at least one previously decoded block, denoted SUDEk,
    • a sub-unit for predictive decoding of a current block with respect to said previously decoded block, denoted SUDPk.

The predictive decoding sub-unit SUDPk is able to perform a predictive decoding of the current block, according to the conventional prediction techniques, such as for example in Intra and/or Inter mode.

The entropy decoding sub-unit SUDEk is for its part of CABAC type, but modified according to the present invention, as will be described further on in the description.

As a variant, the entropy decoding sub-unit SUDEk could be a Huffman decoder known as such.

In the example represented in FIG. 4A or 4B, the first unit UD1 decodes the blocks of the first row SE1, from left to right. When it reaches the last block of the first row SE1, it passes to the first block of the (N+1)st row, here the 5th row, etc. The second unit UD2 decodes the blocks of the second row SE2, from left to right. When it reaches the last block of the second row SE2, it passes to the first block of the (N+2)nd row, here the 6th row, etc. This traversal is repeated until the unit UD4, which decodes the blocks of the fourth row SE4, from left to right. When it reaches the last block of the fourth row SE4, it passes to the first block of the (N+4)th row, here the 8th row, and so on and so forth until the last block of the last identified sub-stream is decoded.

Other types of traversal than that which has just been described hereinabove are of course possible. For example, each decoding unit could process not nested rows, as explained hereinabove, but nested columns. It is also possible to traverse the rows or columns in either direction.

With reference to FIG. 5A, the third decoding step D3 is the reconstruction of an image decoded on the basis of each decoded subset SED1, SED2, . . . , SEDk, . . . , SEDP obtained in the decoding step D2. More precisely, the decoded blocks of each decoded subset SED1, SED2, . . . , SEDk, . . . , SEDP are transmitted to an image reconstruction unit URI such as represented in FIG. 6A. In the course of this step D3, the unit URI writes the decoded blocks in a decoded image as and when these blocks become available.

In the course of a fourth decoding step D4 represented in FIG. 5A, a fully decoded image ID is delivered by the unit URI represented in FIG. 6A.

The various specific sub-steps of the invention, such as are implemented during the aforementioned parallel decoding step D2, in a decoding unit UDk, will now be described with reference to FIG. 5B.

In the course of a step D21, the decoding unit UDk selects as current block the first block to be decoded of the current row SEk represented in FIG. 4A or 4B.

In the course of a step D22, the unit UDk tests whether the current block is the first block of the decoded image, in this instance the first block of the sub-stream F1.

If such is the case, in the course of a step D23, the decoding probabilities are initialized to values Pinit previously defined in the decoder DO of FIG. 6A.

If such is not the case, there is undertaken, in the course of a step D30 which will be described later in the subsequent description, the determination of the availability of the necessary previously decoded blocks.

In the course of a step D24, there is undertaken the decoding of the first current block MB1 of the first row SE1 represented in FIG. 4A or 4B. Such a step D24 comprises a plurality of sub-steps D241 to D246 which will be described hereinbelow.

In the course of a first sub-step D241, there is undertaken the entropy decoding of the syntax elements related to the current block. More precisely, the syntax elements related to the current block are decoded by the CABAC entropy decoding sub-unit SUDE1 such as represented in FIG. 6B. The latter decodes the sub-bitstream F1 of the compressed file to produce the syntax elements, and, at the same time, updates its probabilities in such a way that, at the moment at which this sub-unit decodes a symbol, the probabilities of occurrence of this symbol are identical to those obtained during the coding of this same symbol during the aforementioned entropy coding step C345.

In the course of a following sub-step D242, there is undertaken the predictive decoding of the current block MB1 by known techniques of intra and/or inter prediction, in the course of which the block MB1 is predicted with respect to at least one previously decoded block.

It goes without saying that other modes of intra prediction such as proposed in the H.264 standard are possible.

In the course of this step, the predictive decoding is performed with the aid of the syntax elements decoded in the previous step and comprising especially the type of prediction (inter or intra), and if appropriate, the mode of intra prediction, the type of partitioning of a block or macroblock if the latter has been subdivided, the reference image index and the displacement vector used in the inter prediction mode.

Said aforementioned predictive decoding step makes it possible to construct a predicted block MBp1.

In the course of a following sub-step D243, there is undertaken the construction of a quantized residual block MBq1 with the aid of the previously decoded syntax elements.

In the course of a following sub-step D244, there is undertaken the dequantization of the quantized residual block MBq1 according to a conventional dequantization operation which is the operation inverse to the quantization performed in the aforementioned step C344, to produce a decoded dequantized block MBDt1.

In the course of a following sub-step D245, there is undertaken the inverse transformation of the dequantized block MBDt1 which is the operation inverse to the direct transformation performed in step C343 hereinabove. A decoded residual block MBDr1 is then obtained.

In the course of a following sub-step D246, there is undertaken the construction of the decoded block MBD1 by adding the decoded residual block MBDr1 to the predicted block MBp1. The decoded block MBD1 is thus rendered available to be used by the decoding unit UD1 or any other decoding unit forming part of the predetermined number R of decoding units.
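The mirror of the coding sketch given earlier, covering sub-steps D241 to D246; the entropy decoder, the predictor and the quantization step are again illustrative placeholders and must simply match those assumed on the coding side.

```python
import numpy as np
from scipy.fft import idctn

QSTEP = 8  # must match the (illustrative) step assumed in the coding sketch

def predict(left_decoded_block, shape=(16, 16)):
    """Same toy predictor as in the coding sketch."""
    return np.zeros(shape) if left_decoded_block is None else left_decoded_block.copy()

def decode_block(entropy_decoder, left_decoded_block, block_shape=(16, 16)):
    """Sketch of sub-steps D241-D246; entropy_decoder is a hypothetical CABAC-like object."""
    syntax = entropy_decoder.decode()                       # D241: entropy decoding
    predicted = predict(left_decoded_block, block_shape)    # D242: predictive decoding
    quantized = syntax["quantized_coefficients"]            # D243: residual block MBq1
    dequantized = quantized * QSTEP                         # D244: dequantization
    decoded_residual = idctn(dequantized, norm="ortho")     # D245: inverse transform
    return predicted + decoded_residual                     # D246: decoded block MBD1
```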

On completion of the aforementioned decoding step D246, the entropy decoding sub-unit SUDE1 such as represented in FIG. 6B contains all the probabilities such as progressively updated in tandem with the decoding of the first block. These probabilities correspond to the various possible syntax elements and to the various associated decoding contexts.

Subsequent to the aforementioned decoding step D24, a test is performed, in the course of a step D25, to determine whether the current block is the jth block of this same row, where j is a known predetermined value of the decoder DO which is at least equal to 1.

If such is the case, in the course of a step D26, the set of probabilities calculated for the jth block is stored in the buffer memory MT of the decoder DO such as represented in FIG. 6A and in FIG. 4A or 4B, the size of said memory being suitable for storing the calculated number of probabilities.

In the course of a step D27, the unit UDk tests whether the current block which has just been decoded is the last block of the last sub-stream.

If such is the case, in the course of a step D28, the decoding method is ended.

If such is not the case, there is undertaken, in the course of step D29, the selection of the following block MBi to be decoded in accordance with the order of traversal represented by the arrow PS in FIG. 4A or 4B.

If in the course of the aforementioned step D25, the current block is not the jth block of the row SEk considered, step D27 hereinabove is undertaken.

In the course of a step D30 which follows the aforementioned step D29, there is undertaken the determination of the availability of previously decoded blocks which are necessary for decoding the current block MBi. Having regard to the fact that this entails a parallel decoding of the blocks by different decoding units UDk, it may be that these blocks were not decoded by the decoding unit assigned to the decoding of these blocks and that they are therefore not yet available. Said determining step consists in verifying whether a predetermined number N′ of blocks situated in the previous row SEk-1, for example the two blocks situated respectively above and above to the right of the current block, are available for the decoding of the current block, that is to say if they have already been decoded by the decoding unit UDk-1 assigned to their decoding. Said determining step also consists in verifying the availability of at least one block situated to the left of the current block to be decoded MBi. However, having regard to the order of traversal PS chosen in the embodiment represented in FIG. 4A or 4B, the blocks are decoded one after the other in a row SEk considered. Consequently, the left decoded block is always available (with the exception of the first block of a row). In the example represented in FIG. 4A or 4B, this entails the block situated immediately to the left of the current block to be decoded. For this purpose, only the availability of the two blocks situated respectively above and above to the right of the current block is tested.

This test step being liable to slow the decoding method, in an alternative manner in accordance with the invention, a clock CLK represented in FIG. 6A is adapted for synchronizing the progress of the decoding of the blocks so as to guarantee the availability of the two blocks situated respectively above and above to the right of the current block, without it being necessary to verify the availability of these two blocks. Thus, as represented in FIG. 4A or 4B, a decoding unit UDk always begins to decode the first block with a shift by a predetermined number N′ (here N′=2) of decoded blocks of the previous row SEk-1 which are used for the decoding of the current block. From a software point of view, the implementation of such a clock makes it possible to noticeably accelerate the processing time required to process the blocks of each subset SEk in the decoder DO.

In the course of a step D31, a test is performed to determine whether the current block is the first block of the row SEk considered.

If such is the case, in the course of a step D32, there is undertaken the reading in the buffer memory MT solely of the symbol occurrence probabilities calculated during the decoding of the jth block of the previous row SEk-1.

According to a first variant represented in FIG. 4A, the jth block is the first block of the previous row SEk-1 (j=1). Such a reading consists in replacing the probabilities of the CABAC decoder with those present in the buffer memory MT. Since these are the first respective blocks of the second, third and fourth rows SE2, SE3 and SE4, this reading step is depicted in FIG. 4A by the arrows represented by thin lines.

According to a second variant of the aforementioned step D32 which is illustrated in FIG. 4B, the jth block is the second block of the previous row SEk-1 (j=2). Such a reading consists in replacing the probabilities of the CABAC decoder with those present in the buffer memory MT. Since these are the first respective blocks of the second, third and fourth rows SE2, SE3 and SE4, this reading step is depicted in FIG. 4B by the arrows represented by thin dashed lines.

Subsequent to step D32, the current block is decoded by iteration of steps D24 to D28 described above.

If subsequent to the aforementioned step D31, the current block is not the first block of the row SEk considered, there is advantageously not undertaken the reading of the probabilities arising from the previously decoded block which is situated in the same row SEk, that is to say the decoded block situated immediately to the left of the current block, in the example represented. Indeed, having regard to the sequential traversal of reading PS of the blocks situated in the same row, as represented in FIG. 4A or 4B, the symbol occurrence probabilities present in the CABAC decoder when beginning the decoding of the current block are exactly those which are present after decoding of the preceding block in this same row.

Consequently, in the course of a step D33, the symbol occurrence probabilities used for the entropy decoding of said current block are learnt solely from those which were calculated for said preceding block in the same row, as represented by the double solid arrows in FIG. 4A or 4B.
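The overall per-row behaviour of steps D31 to D33 can thus be summarized by the following sketch, in which the first block of a row loads its probabilities from the buffer MT while every other block simply reuses the state left by its left-hand neighbour; the names decoder, decode_block and buffer_mt are again illustrative.

```python
from copy import deepcopy

# Illustrative per-row decoding loop (steps D31 to D33), with j the index of
# the block whose probabilities are saved for the following row.

def decode_row(decoder, buffer_mt, row_k, blocks, j=1):
    for i, block in enumerate(blocks):
        if i == 0 and row_k > 0:
            # steps D31/D32: read the probabilities saved after the jth block
            # of the previous row from the buffer MT
            decoder.contexts = deepcopy(buffer_mt[row_k - 1])
        # i > 0: step D33, the probabilities already in the decoder are reused
        decoder.decode_block(block)               # steps D24 to D28
        if i == j - 1:
            # save the probabilities reached after the jth block for row k + 1
            buffer_mt[row_k] = deepcopy(decoder.contexts)
```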

Subsequent to step D33, the current block is decoded by iteration of steps D24 to D28 described above.

Claims

1. A computer-implemented method comprising:

receiving a stream representative of at least one coded image;
identifying, from the stream, a predetermined plurality of groups of blocks;
providing each group of blocks to a first decoding unit; and
processing, by the first decoding unit, a first block in a given group of blocks, wherein the processing of the first block comprises: determining that the first block is first in an order of blocks in the given group of blocks; in response to determining that the first block is first in the order of blocks in the given group of blocks, retrieving a first set of probability data from a buffer, wherein the first set of probability data comprises a first set of probabilities of occurrence of symbols associated with a block that is situated immediately adjacent to the first block and that belongs to another group of blocks that is different from the given group of blocks in the predetermined plurality of groups of blocks; entropy decoding the first block based on the first set of probability data; and
processing, by the first decoding unit, a second block in the given group of blocks, wherein the processing of the second block comprises: determining that the second block is not first in the order of blocks in the given group of blocks; in response to determining that the second block is not first in the order of blocks in the given group of blocks, retrieving a second set of probability data from a memory unit, wherein the second set of probability data comprises a second set of probabilities of occurrence of symbols associated with at least one other already decoded block belonging to the given group of blocks in the predetermined plurality of groups of blocks, wherein the second set of probabilities of occurrence of symbols is not associated with blocks that do not belong to the given group of blocks; and entropy decoding the second block based on the second set of probability data.

2. The computer-implemented method of claim 1 comprising:

generating a third set of probability data based on the second set of probability data and data about the second block; and
replacing the second set of probability data in the memory unit with the third set of probability data.

3. The computer-implemented method of claim 1 wherein:

entropy decoding the first block based on the first set of probability data comprises entropy decoding the first block based on the first set of probability data using a context-adaptive binary arithmetic coding (CABAC) algorithm; and
entropy decoding the second block based on the second set of probability data comprises entropy decoding the second block based on the second set of probability data using the CABAC algorithm.

4. The computer-implemented method of claim 1 comprising:

generating a third set of probability data based on the first set of probability data and data about the first block;
storing the third set of probability data to the memory unit; and
using at least the third set of probability data to generate the second set of probability data.

5. The computer-implemented method of claim 1 comprising:

predictive decoding of each of the plurality of blocks with respect to at least one previously decoded block.

6. The computer-implemented method of claim 1 comprising:

processing, by the first decoding unit, a third block in the given group of blocks, wherein the processing of the third block comprises: retrieving a third set of probability data from the memory unit, wherein the third set of probability data comprises a third set of probabilities of occurrence of symbols associated with blocks in the predetermined plurality of groups of blocks, and wherein the third set of probability data is determined based at least on the second set of probability data; and entropy decoding the third block based on the third set of probability data.

7. The method of claim 1,

wherein the processing of the second block comprises:
after entropy decoding the second block: determining a third set of probability data; determining whether the second block is ordered as a particular number in the order of blocks in the given group of blocks; in response to determining that the second block is ordered as the particular number in the order of blocks in the given group of blocks, storing the third set of probability data in the buffer, the stored third set of probability data for entropy decoding another block that is first in an order of blocks in another group of blocks; and in response to determining that the second block is not ordered as the particular number in the order of blocks in the given group of blocks, storing the third set of probability data in the memory unit, the stored third set of probability data for entropy decoding, by the first decoding unit, another block that is next to the particular number in the order of blocks in the given group of blocks.

8. The method of claim 7, wherein the particular number is first in the order of blocks in the given group of blocks.

9. The method of claim 7, wherein the particular number is second in the order of blocks in the given group of blocks.

10. A system comprising:

one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising:
receiving a stream representative of at least one coded image;
identifying, from the stream, a predetermined plurality of groups of blocks;
providing each group of blocks to a first decoding unit; and
processing, by the first decoding unit, a first block in a given group of blocks, wherein the processing of the first block comprises: determining that the first block is first in an order of blocks in the given group of blocks; in response to determining that the first block is first in the order of blocks in the given group of blocks, retrieving a first set of probability data from a buffer, wherein the first set of probability data comprises a first set of probabilities of occurrence of symbols associated with a block that is situated immediately adjacent to the first block and that belongs to another group of blocks that is different from the given group of blocks in the predetermined plurality of groups of blocks; entropy decoding the first block based on the first set of probability data; and
processing, by the first decoding unit, a second block in the given group of blocks, wherein the processing of the second block comprises: determining that the second block is not first in the order of blocks in the given group of blocks; in response to determining that the second block is not first in the order of blocks in the given group of blocks, retrieving a second set of probability data from a memory unit, wherein the second set of probability data comprises a second set of probabilities of occurrence of symbols associated with at least one other already decoded block belonging to the given group of blocks in the predetermined plurality of groups of blocks, wherein the second set of probabilities of occurrence of symbols is not associated with blocks that do not belong to the given group of blocks; and entropy decoding the second block based on the second set of probability data.

11. The system of claim 10, wherein the operations further comprise:

generating a third set of probability data based on the second set of probability data and data about the second block; and
replacing the second set of probability data in the memory unit with the third set of probability data.

12. The system of claim 10, wherein:

entropy decoding the first block based on the first set of probability data comprises entropy decoding the first block based on the first set of probability data using a context-adaptive binary arithmetic coding (CABAC) algorithm; and
entropy decoding the second block based on the second set of probability data comprises entropy decoding the second block based on the second set of probability data using the CABAC algorithm.

13. The system of claim 10, wherein the operations further comprise:

generating a third set of probability data based on the first set of probability data and data about the first block;
storing the third set of probability data to the memory unit; and
using at least the third set of probability data to generate the second set of probability data.

14. The system of claim 10, wherein the operations further comprise:

predictive decoding of each of the plurality of blocks with respect to at least one previously decoded block.

15. The system of claim 10, wherein the operations further comprise:

processing, by the first decoding unit, a third block in the given group of blocks, wherein the processing of the third block comprises: retrieving a third set of probability data from the memory unit, wherein the third set of probability data comprises a third set of probabilities of occurrence of symbols associated with blocks in the predetermined plurality of groups of blocks, and wherein the third set of probability data is determined based at least on the second set of probability data; and entropy decoding the third block based on the third set of probability data.

16. A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising:

receiving a stream representative of at least one coded image;
identifying, from the stream, a predetermined plurality of groups of blocks;
providing each group of blocks to a first decoding unit; and
processing, by the first decoding unit, a first block in a given group of blocks, wherein the processing of the first block comprises: determining that the first block is first in an order of blocks in the given group of blocks; in response to determining that the first block is first in the order of blocks in the given group of blocks, retrieving a first set of probability data from a buffer, wherein the first set of probability data comprises a first set of probabilities of occurrence of symbols associated with a block that is situated immediately adjacent to the first block and that belongs to another group of blocks that is different from the given group of blocks in the predetermined plurality of groups of blocks; entropy decoding the first block based on the first set of probability data; and
processing, by the first decoding unit, a second block in the given group of blocks, wherein the processing of the second block comprises: determining that the second block is not first in the order of blocks in the given group of blocks; in response to determining that the second block is not first in the order of blocks in the given group of blocks, retrieving a second set of probability data from a memory unit, wherein the second set of probability data comprises a second set of probabilities of occurrence of symbols associated with at least one other already decoded block belonging to the given group of blocks in the predetermined plurality of groups of blocks, wherein the second set of probabilities of occurrence of symbols is not associated with blocks that do not belong to the given group of blocks; and entropy decoding the second block based on the second set of probability data.

17. The non-transitory computer-readable medium of claim 16, wherein the operations further comprise:

generating a third set of probability data based on the second set of probability data and data about the second block; and
replacing the second set of probability data in the memory unit with the third set of probability data.

18. The non-transitory computer-readable medium of claim 16, wherein the operations further comprise:

generating a third set of probability data based on the first set of probability data and data about the first block; and
replacing the first set of probability data in the buffer with the third set of probability data.

19. The non-transitory computer-readable medium of claim 16, wherein:

entropy decoding the first block based on the first set of probability data comprises entropy decoding the first block based on the first set of probability data using a context-adaptive binary arithmetic coding (CABAC) algorithm; and
entropy decoding the second block based on the second set of probability data comprises entropy decoding the second block based on the second set of probability data using the CABAC algorithm.

20. The non-transitory computer-readable medium of claim 16, wherein the operations further comprise:

processing, by the first decoding unit, a third block in the given group of blocks, wherein the processing of the third block comprises: retrieving a third set of probability data from the memory unit, wherein the third set of probability data comprises a third set of probabilities of occurrence of symbols associated with blocks in the predetermined plurality of groups of blocks, and wherein the third set of probability data is determined based at least on the second set of probability data; and entropy decoding the third block based on the third set of probability data.
References Cited
U.S. Patent Documents
6480537 November 12, 2002 Agrawal
7463781 December 9, 2008 Pearson
8319672 November 27, 2012 Kim
8520740 August 27, 2013 Flachs
8824813 September 2, 2014 Sasaki
8953692 February 10, 2015 Tanaka
20100098155 April 22, 2010 Demircin
20100135416 June 3, 2010 Huang
20120014433 January 19, 2012 Karczewicz
20120082215 April 5, 2012 Sze
20120207213 August 16, 2012 Amon
20120219066 August 30, 2012 Amonou
20120263232 October 18, 2012 Pateux
20120293343 November 22, 2012 Lee
20130016771 January 17, 2013 Misra
20130021350 January 24, 2013 Schmit et al.
20130187798 July 25, 2013 Marpe et al.
20140010312 January 9, 2014 Song
20140016700 January 16, 2014 Henry et al.
20140254665 September 11, 2014 Henry
20150016524 January 15, 2015 Henry et al.
20150195537 July 9, 2015 Henry et al.
20150195538 July 9, 2015 Henry et al.
Foreign Patent Documents
WO 2010/063184 June 2010 WO
WO2011042645 April 2011 WO
Other references
  • Huang et al., “Ordered Entropy Slices for Parallel CODEC,” 38th VCEG Meeting, 89th MPEG Meeting, Jul. 1, 2009-Jul. 8, 2009, London, Geneva (Video Coding Experts Group of ITU-T SG. 16), pp. 1-10 (Jul. 5, 2009).
  • Marpe et al., “Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard,” IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, Piscataway, NJ, US, vol. 13(7), pp. 620-636 (Jul. 1, 2003).
  • NTT Docomo, Inc. et al., “Annex A CDCM Video Codec: Decoder Specification,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Dresden, DE, Apr. 15-23, 2010, pp. 1-107 (Apr. 2010).
  • Rissanen et al., “Universal Modeling and Coding,” IEEE Transactions on Information Theory, vol. IT-27(1), pp. 12-23 (Jan. 1981).
  • Sharp, “Entropy slices for parallel entropy decoding,” ITU-T SG16 Meeting, Apr. 22, 2008-May 2, 2008, Geneva, pp. 1-9 (Apr. 14, 2008).
  • Sze et al., “Massively Parallel CABAC,” 38th VCEG Meeting, 89th MPEG Meeting, Jul. 1, 2009-Jul. 8, 2009, London, Geneva (Video Coding Experts Group of ITU-T SG. 16), pp. 1-10 (Jul. 3, 2009).
  • Fuldseth et al., “Tiles,” 5th JCT-VC Meeting, Geneva, Mar. 2011 (JCTVC-E408), 14 pages.
  • Misra and Segall, “Periodic Initialization for Wavefront Coding Functionality,” 4th JCT-VC Meeting, Daegu, Jan. 2011 (JCTVC-D073), 7 pages.
  • Misra et al., “Lightweight Slicing for Entropy Coding,” 4th JCT-VC Meeting, Daegu, Jan. 2011, (JCTVC-D070), 7 pages.
  • Sze and Budagavi, “Analysis of entropy slice approaches,” 4th JCT-VC Meeting, Daegu, Jan. 2011 (JCTVC-D243), 6 pages.
  • Office Action issued in U.S. Appl. No. 14/662,395 on May 7, 2015, 19 pages.
  • Office Action issued in U.S. Appl. No. 14/492,750 on Aug. 7, 2015, 11 pages.
  • Office Action issued in U.S. Appl. No. 14/128,949 on Aug. 31, 2015, 11 pages.
  • Office Action issued in U.S. Appl. No. 14/662,395 on Aug. 31, 2015, 12 pages.
  • Office Action issued in U.S. Appl. No. 14/662,463 on Sep. 1, 2015, 12 pages.
  • Office Action issued in U.S. Appl. No. 14/662,463 on May 6, 2015, 20 pages.
  • Bossen, “Common Test Conditions and Software Reference Configurations,” 5th JCT-VC Meeting, Geneva, Mar. 2011 (JCTVC-E700), 11 pages.
  • Henry and Pateux, “Wavefront Parallel Processing,” 5th JCT-VC Meeting, Geneva, Mar. 2011 (JCTVC-E196), 9 pages.
  • Li, “Image Compression—The Mechanics of the JPEG 2000,” http://research.microsoft.com/en-us/um/people/jinl/papers2002/ms . . . , 20 pages.
  • Marpe et al., “Improved CABAC,” 15th VCEG Meeting, 58th MPEG Meeting, Dec. 4-6, 2001, Pattaya, TH (Video Coding Experts Group of ITU-T SG.16), No. VCEG-018, Nov. 28, 2001, pp. 1-6, XP030003351.
  • Schwarz, H. et al., “CABAC and Slices,” 4th JVT Meeting, Jul. 22-26, 2002, Klagenfurt, Austria (Joint Video Team of ISO/IEC MPEG & ITU-T VCEG, ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16 Q.6), No. JVT-D020r1, Jul. 22, 2002, pp. 1-17, XP002671680.
  • Wiegand et al., “Overview of the H.264/AVC Video Coding Standard,” IEEE Transactions on Circuits and Systems for Video Technology, Jul. 2003, 13(7):560-576.
  • International Search Report issued in PCT/FR2012/050380 on May 8, 2012, with English translation, 7 pages.
  • Written Opinion of the International Searching Authority issued in PCT/FR2012/050380 on May 8, 2012, with English translation, 11 pages.
  • International Preliminary Report on Patentability issued in PCT/FR2012/050380 on Sep. 13, 2013, 6 pages (in French).
  • International Search Report issued in PCT/FR2012/051391, on Aug. 9, 2012, with English translation, 7 pages.
  • Written Opinion issued in PCT/FR2012/051391, on Aug. 9, 2012, with English Translation, 14 pages.
  • Office Action issued in U.S. Appl. No. 14/128,949 on Apr. 10, 2015, 19 pages.
  • Office Action issued in U.S. Appl. No. 14/492,750 on Apr. 13, 2015, 17 pages.
Patent History
Patent number: 9271012
Type: Grant
Filed: Sep 22, 2014
Date of Patent: Feb 23, 2016
Patent Publication Number: 20150010088
Assignee: DOLBY INTERNATIONAL AB (Amsterdam Zuidoost)
Inventors: Felix Henry (Saint Gregoire), Stephane Pateux (Rennes)
Primary Examiner: Mohammed Rahaman
Application Number: 14/492,546
Classifications
Current U.S. Class: To Or From Variable Length Codes (341/67)
International Classification: H04N 19/25 (20140101); H04N 19/51 (20140101); H04N 19/50 (20140101); H04N 19/13 (20140101); H04N 19/61 (20140101); H04N 19/91 (20140101); H04N 19/174 (20140101); H04N 19/436 (20140101);