PROCESS AND DEVICE FOR CODING BY LUMINANCE ESTIMATION

The invention relates to a process for coding digital data from a sequence of video images carrying out a coding of the difference in luminance (7) between an image segment to be coded and a corresponding segment of an image predicted from a so-called reference image, characterized in that the prediction is made as a function of a luminance compensation (12) of values of luminance of the reference image.

Description

[0001] The technical field to which the present invention relates is that of the coding of digital video image sequences. The current problem within this field is that of compressing the visual information by exploiting a set of parameters for regulating the cost and quality of this compression. It is thus possible to comply with a certain number of constraints which depend essentially on the intended application. Quality criteria may compel the information to be compressed without impairing its final reconstruction. Such is the case, for example, in teledetection or production applications and also storage applications. On the other hand, the quality requirement may be less demanding, although greater than a required minimum, giving way to the need to compress the information in accordance with the capacities of a given transmission channel. Such is, for example, the case in videophone applications (over ISDN standing for Integrated Services Digital Network or STN standing for Switched Telephone Network), or communication applications on battlefields. Finally, the most common case amounts to balancing the degradations in quality with the performance in compression. Such is the case in broadcasting applications, or again video distribution applications (video on Compact Disc such as Digital Video Disc). Added to this there are constraints of a practical nature.

[0002] The main coding methods may be viewed as being a combination of several techniques used on the basis of their properties.

[0003] Thus, there may be distinguished:

[0004] coding by prediction which consists on the one hand in providing an estimate and on the other hand in correcting it by taking into account the estimation error;

[0005] coding by transform which enables the information to be made more concise, by decorrelating it through a change of representation space;

[0006] coding by approximation which replaces information with other predefined or at least simplified information.

[0007] The latest developments in this field rely on prediction through motion. On the one hand the Discrete Cosine Transform (DCT) makes it possible to reduce locally the redundancy of an image in intra mode. On the other hand the objective of motion compensation is to reduce temporal redundancy. The motion information corresponds to the local similarity ties between a so-called “reference” image and that currently being investigated, and is interpreted through the concept of the physical motion of the underlying scene. Thus, consider a partition of an image into blocks; for each of these blocks, a search is made in the other image investigated for the most similar block. The motion (horizontal, vertical) is then the difference in location (line-wise, column-wise) between the matched block and the first, and is coded once per block. This motion information is supplemented with the data regarding the residues from this prediction (values of the error in the prediction through motion). These residues are processed in a manner similar to the data of intra images.
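By way of illustration only, a minimal block-matching sketch along these lines could be written as follows (Python with NumPy; the 16-pixel block size, the exhaustive ±8 pixel search window and the sum-of-squared-differences criterion are choices made for this example, not values given in the text):

```python
import numpy as np

def block_match(reference, current, y, x, block=16, search=8):
    """Exhaustive block matching: find the displacement (dy, dx) of the block
    of `current` whose top-left corner is (y, x) inside `reference`, over a
    +/- `search` pixel window, minimising the sum of squared differences."""
    h, w = reference.shape
    cur = current[y:y + block, x:x + block].astype(np.float64)
    best_v, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                continue  # candidate block would fall outside the reference image
            ref = reference[ry:ry + block, rx:rx + block].astype(np.float64)
            v = np.sum((cur - ref) ** 2)
            if v < best_v:
                best_v, best = v, (dy, dx)
    return best  # the motion of the block (line-wise, column-wise)
```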

[0008] The most recent schemes for coding image sequences exploit the motion data through a prediction. The MPEG2 approach is a good example of this.

[0009] The improvements obtained in data compression may however be deemed inadequate and better image quality may be desired for a given bit rate or a smaller bit rate, and hence coding cost, for a given final quality.

[0010] The invention which is the subject of the present patent application is aimed at remedying the aforementioned drawbacks.

[0011] To this end, the subject of the invention is a process for coding digital data from a sequence of video images carrying out a coding of the difference in luminance between an image segment to be coded and a corresponding segment of an image predicted from a so-called reference image, characterized in that the prediction is made as a function of a luminance compensation of values of luminance of the reference image.

[0012] Its subject is also a device for coding digital data from a video sequence for the implementation of the process comprising a motion estimation circuit for calculating vectors of motion between two images, a motion compensation circuit based on a reference image for calculating a predicted image, a subtractor for subtracting the predicted image from the current image for calculating a residue to be coded, characterized in that it comprises a circuit for estimating luminance between the two same images and a luminance compensation circuit receiving the information from the luminance estimation and motion estimation circuit for calculating the predicted image.

[0013] Its subject is also a device for decoding digital data coded according to the preceding process, for calculating an image reconstructed from a residue and a predicted image, characterized in that it comprises a circuit for compensating luminance as a function of luminance vectors which are luminance estimation information items, for calculating the predicted image.

[0014] The process consists in enhancing the prediction through motion compensation, with a so-called luminance compensation.

[0015] It also makes it possible to replace an intra image by a so-called auto-compensated image using the same mechanism as inter compensation based on motion, luminance and residue data.

[0016] The proposed invention makes it possible to reduce the amount of information contained in the residue, irrespective of the method of motion analysis used before the estimation of luminance. The enhancing of the model of motion by a so-called luminance approach significantly reduces the residue, and since this reduction in residual information is larger than the corresponding addition of luminance information, the novel coding of inter images is more efficient.

[0017] Moreover, the invention proposes a unified coding scheme where the difference between the approach for intra images and that for inter images is small. The implementation of the invention is simplified and hence cheaper owing to the fact that it utilizes elements of existing schemes differently, by incorporating therewith a method which is of low complexity from a hardware point of view.

[0018] The invention is independent of the particular coding of each type of information item (motion, luminance, residue). The method used can be incorporated into the MPEG2 coding schemes, even though the data regarding luminance is coded in addition to the standard data.

[0019] These luminance compensation techniques are especially effective when there is a change of scene lighting which, in the prior art, would customarily give rise to expensive intra coding.

[0020] Other features and advantages of the invention will emerge clearly in the following description given by way of non-limiting example and offered in conjunction with the appended figures which represent:

[0021] FIG. 1 a diagram of the coding circuit according to the invention;

[0022] FIG. 2 a diagram of the decoding circuit according to the invention.

[0023] The process according to the invention is described below with the aid of the diagram of the device represented in FIG. 1.

[0024] The coding device comprises a first and a second input. An image is presented to the first input at the instant t and an image is presented to the second input at the instant t+1. The first input is linked in parallel with the input of a filtering circuit 1, with a first input of a switch with two inputs and one output 2 and with a first input of a switch with two inputs and one output 3. The output of the filtering circuit 1 is linked to the input of a sub-sampling circuit 4 and the output of the latter is linked to the second input of the switch 2.

[0025] The second input of the device is linked to the second input of the switch 3.

[0026] The switches 2 and 3 are two-position switches, position 1 being represented by broken lines in the figure and position 2 by solid lines. Thus, position 1 of the switch 2 corresponds to its first input and position 1 of the switch 3 corresponds to its second input. All the switches described hereafter have two positions 1 and 2, these switches being simultaneously in the same position 1 or 2, represented in the diagram by broken and solid lines respectively.

[0027] The outputs of the switches 2 and 3 are linked to two inputs of a motion and luminance estimation circuit 5 as well as, respectively, to a first input of a switch with two inputs and one output 6 corresponding to position 1 and to a first input of an adder circuit 7. A first output of the luminance and motion estimation circuit is linked to the input of a circuit for coding motion 8 and a second output is linked to the input of a circuit for coding luminance 9. The respective outputs S2 and S3 of each of the coding circuits correspond to the outputs of the coding device described. They are also linked respectively to a motion decoding circuit 10 and to a luminance decoding circuit 11, the output of each of these circuits being respectively linked to a first and second input of a motion and luminance compensation circuit 12. The output of the switch 6 is linked to a third input of this circuit 12. The output of the motion and luminance compensation circuit is linked to the input of a switch with one input and two outputs 13. A first output of this switch, corresponding to position 2, is linked to the input of an iteration calculation circuit 14, a first output of this circuit being for its part linked to a first input of a switch with two inputs and one output 15 and corresponding to position 2 and a second output of this circuit being linked to the input of a filter 16. The output of the filter 16 is linked to the second input of the switch 6 by way of a sub-sampling circuit 17. The second output of the switch 13 is linked to the second input of the switch 15. The output of the switch 15 is linked to the second input of the adder circuit 7. The output of this circuit is linked to the input of a residue coding circuit 18 whose output S1 is the third output of the coding device.

[0028] The digital images I(t) and I(t+1), corresponding to the instants t and t+1, are presented to the input of the device. These are for example two successive images in a sequence of images.

[0029] The first mode of coding described is the inter mode of coding corresponding to position 1 of the switches, as represented by broken lines in the figure.

[0030] In this first mode, the image I(t) is transmitted on the first input of the motion and luminance estimator 5 via the switch 2 and the image I(t+1) is transmitted on the second input of this circuit 5 by way of the switch 3. The circuit 5 therefore carries out an estimation of motion between the image I(t) and the image I(t+1) and calculates a motion vector in accordance with a conventional method such as, for example, block matching. This method, in which motion estimation is performed per image block, utilizes the least squares method, for example.

[0031] Estimated values of luminance are calculated for each of the image blocks with the aid of estimation parameters (or prediction parameters), cs and bo as explained later and as a function of the motion estimation calculated by this circuit.

[0032] This so-called luminance compensation appears as a supplement to the motion compensation. It is independent of the latter and of the nature of the motion results. In fact the luminance compensation fully exploits, statistically, the ties discovered through the motion between two blocks or two neighbourhoods. In practice, this method of compensation estimates a posteriori the linear relation existing between the grey levels of a block or of a neighbourhood and those of the other block or neighbourhood previously associated through the motion analysis.

[0033] The formulation of the motion analysis problem may be as follows:

[0034] Consider an element of a reference image I such as a point (x,y,z), a neighbourhood, a block, a region, etc. It is required to associate with it another element of the same kind in the image investigated I′ (a point (x′,y′,z′), neighbourhood, block, region, etc.) which complies with correspondence criteria utilizing known methods such as the method of least squares or the gradient method, this correspondence being interpreted through the concept of motion.

[0035] The notation z and z′ denotes the values of grey levels which correspond respectively to the coordinates (x,y) and (x′,y′) in the images I and I′. When the transformation associated with the motion is chosen to be linear (modelling of linear 2D motion), the parameters to be estimated may be described as follows:

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{bmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ 0 \end{pmatrix} \qquad (1)$$

[0036] The parameters (a,b,c,d) are then associated with the rotations, and the parameters (tx,ty,0) with the translations. In this formulation, the rotations can in fact be ignored, returning to the expression of the first motion models.

[0037] The formulation of the luminance analysis problem can be as follows: starting from the preceding data (data regarding correspondence between elements of two images, which defines, as it were, pairs of image elements), it is required to estimate coefficients complementary to those of the motion and which will be associated with the grey levels. In practice, they make it possible to transform the grey levels of an image element so as to predict those of the associated element (hence the name luminance compensation). One method of estimating these values is that of least squares. Other solutions can be used and this example is in no way limiting. When the luminance transformation is chosen to be linear, the associated coefficients may be termed contrast scaling, cs, and brightness offset, bo. The following illustration takes account of this choice.

[0038] Thus the values a, b, c, d are regarded as known, as are tx and ty.

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{bmatrix} a & b & e \\ c & d & f \\ g & h & c_s \end{bmatrix} \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ b_o \end{pmatrix} \qquad (2)$$

[0039] It therefore remains to estimate e,f,g,h and cs and bo. The first ones correspond to the correlation between the grey levels and the spatial positioning. They are generally very close to zero and they may therefore be ignored a priori and we can take e=f=g=h=0.0. Hence, finally, cs and bo remain to be estimated, corresponding in effect to luminance compensation. Thus the working matrix equation becomes:

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{bmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & c_s \end{bmatrix} \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ b_o \end{pmatrix} \qquad (3)$$

[0040] The values of cs and bo relate to an image block, a pixel or a region depending on whether motion compensation is performed on an image block, a pixel or a region. The luminance compensation is therefore performed on the same image zone as the motion compensation.

[0041] In the ideal case where the motion is estimated perfectly and the assumptions of negligible lighting effects hold true (such assumptions, namely a model with no variation in the luminance of the scene, are indeed made in the conventional utilization of motion compensation), or in the case in which the luminance compensation is ignored, cs and bo are equal to 1.0 and 0.0 respectively. In practice it turns out that either the motion is not estimated perfectly, for example the motion of the edges of objects when the motion is estimated per block, or the lighting effects assumptions are not entirely valid. Furthermore, it is simple to verify what statistical correlation exists, for example, between two matched-up elements (blocks or the like). The results clearly indicate the utility of having luminance compensation by estimating cs and bo.

[0042] Nevertheless, a qualitative remark may already be made in this regard. A blurring effect, a smoothing of the grey levels during luminance compensation, may be observed, in particular in zones with a steep gradient or when the estimated motion includes a slight shift with respect to the actual motion. In that case the luminance compensation is performed on pixels which, on account of this error, do not correspond perfectly to the actual motion of the scene. This may constrain its use according to the application, even if the residue error always remains smaller with the luminance compensation (for example for the slowing down of images, where shifts in the estimated motion are observed fairly frequently but which nevertheless allow very good interpolation of images). In this case, the criterion for deciding whether or not to use the luminance data for a given block, neighbourhood or region must be associated with a qualitative aspect of the residue. By transmitting the residue data it is of course possible to remove this blurring effect on the image predicted by luminance compensation, but the amount of information to be transmitted is related directly to this blurring effect. A decision criterion, such as, for example, the calculation of the energy in an image block, makes it possible to solve this problem by determining the most suitable mode of coding from among those which exist.
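The text does not specify this decision criterion further; purely as a hedged sketch, taking the "energy" of a block to be its mean squared value, the choice of whether to keep the luminance data for a block could be expressed as follows (the comparison rule and the `gain` factor are assumptions of this example):

```python
import numpy as np

def block_energy(block):
    """One possible 'energy' measure of a residue block: its mean squared value."""
    b = np.asarray(block, dtype=np.float64)
    return float(np.mean(b ** 2))

def keep_luminance_data(residue_with_lc, residue_without_lc, gain=0.9):
    """Hypothetical decision rule: keep cs and bo for this block only if the
    luminance compensation reduces the residue energy by a sufficient margin
    (the factor `gain` is an assumption of this sketch, not a value from the text)."""
    return block_energy(residue_with_lc) < gain * block_energy(residue_without_lc)
```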

[0043] One possible estimator is that of least squares. Owing to the linear transformation assumption, this amounts to performing a linear regression between the 2 sets of values made up from the grey levels of each element of the matched-up pair. Let $(\varphi_u)$ be the set of grey levels of element E1 of image I, and let $(\psi_u)$ be the set of grey levels of element E2 of image I′. It turns out that E1 and E2 have been matched up by the motion estimator. Next, it is required to determine an estimate of cs and bo such that we have:

$$E\{(\psi_u^{est} - \psi_u)^2\} \; \text{minimum} \qquad (4)$$

[0044] with

$$\psi_u^{est} = c_s \cdot \varphi_u + b_o \qquad (5)$$

[0045] E, according to the terms used in statistics, corresponds to the mean value and the expression (4) therefore signifies that the mean value of the square of the differences over the block is minimized.

[0046] More concretely, consider an image block E1 of the current image comprising n pixels, the pixel in line i and column j having the luminance value $p_{i,j}$. With this block there is associated a predicted block E2, matched up on the basis of the calculated motion vector. Let $q_{k,l}$ be the actual value of the luminance of the pixel in line k and column l of this image block E2, the pixel matched up with pixel i,j by motion estimation (rotation and translation, or translation alone according to MPEG2), and $\hat{q}_{k,l}$ the predicted value.

[0047] We have:

$$E(\varphi_u) = \frac{1}{n}\sum_{i,j} p_{i,j} \qquad\qquad E(\psi_u) = \frac{1}{n}\sum_{k,l} q_{k,l}$$

[0048] We seek cs and bo such that:

$$\frac{1}{n}\sum_{k,l} (\hat{q}_{k,l} - q_{k,l})^2 \; \text{minimum, with} \quad \hat{q}_{k,l} = c_s\, p_{i,j} + b_o$$

[0049] By calculating statistical data over the matched elements E1 and E2 it is therefore possible to estimate cs and bo. In this case the following results are obtained:

$$c_s \cdot \left( E\{\varphi_u^2\} - E\{\varphi_u\}^2 \right) = E\{\varphi_u \cdot \psi_u\} - E\{\varphi_u\} \cdot E\{\psi_u\} \qquad (6)$$

[0050] and

$$b_o = E\{\psi_u\} - c_s \cdot E\{\varphi_u\} \qquad (7)$$

[0051] The predicted blocks utilized for calculating the residues will then be calculated as a function of the luminance value of the current block of the current image and of the values cs and bo calculated for this current block by the luminance and motion estimation circuit 5.
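The estimation of cs and bo per expressions (6) and (7), and the luminance-compensated prediction of expression (5), can be transcribed almost directly; a sketch follows (Python with NumPy; here `phi_block` plays the role of element E1 of image I and `psi_block` that of the matched-up element E2 of image I′, as in paragraph [0043]; the fallback to cs = 1 for a uniform block is an addition of this example, not something stated in the text):

```python
import numpy as np

def estimate_cs_bo(phi_block, psi_block):
    """Least-squares estimate of contrast scaling cs and brightness offset bo,
    transcribing expressions (6) and (7)."""
    phi = np.asarray(phi_block, dtype=np.float64).ravel()
    psi = np.asarray(psi_block, dtype=np.float64).ravel()
    var_phi = np.mean(phi ** 2) - np.mean(phi) ** 2
    if var_phi == 0.0:
        cs = 1.0                     # uniform block: identity fallback (assumption of this sketch)
    else:
        cs = (np.mean(phi * psi) - np.mean(phi) * np.mean(psi)) / var_phi   # expression (6)
    bo = np.mean(psi) - cs * np.mean(phi)                                   # expression (7)
    return cs, bo

def luminance_compensate(phi_block, cs, bo):
    """Luminance-compensated prediction of the matched-up block, expression (5)."""
    return cs * np.asarray(phi_block, dtype=np.float64) + bo
```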

[0052] The motion vector information calculated by the motion estimation circuit 5 is transmitted to the circuit for coding these motion vectors 8 and the luminance information such as the parameters cs and bo is transmitted to the luminance coding circuit 9.

[0053] This information is coded and then transmitted, via the outputs S2 and S3 of the device, either to a decoder or so as to be multiplexed with the residue data, likewise coded, available on the output S1 of the device, in the case in which only a single link with each decoder is desired, for example within the MPEG2 framework. In the latter case, the multiplexer, not represented in the figure, retrieves all the information available on the outputs S1, S2, S3 of the device described, incorporates it and transmits it in a conventional manner in a data stream or “bitstream” to the set of corresponding decoders.

[0054] The motion and luminance compensation circuit 12 retrieves the decoded information cs and bo as well as the decoded motion vectors output by the motion and luminance decoders so as to calculate predicted images. In this way, the motion and luminance information utilized by the compensation circuit takes account of the quantization interval used for the coding and is the same as that used by the compensation circuit on the image decoder side, the motion and luminance decoding circuits being chosen to be identical on the image decoder side.

[0055] The compensation circuit receives, in inter mode, the image I(t) on its input. A predicted image is calculated from this image, the motion vectors and the parameters cs and bo, and is transmitted from the output of the circuit to the subtractor 7. The current image I(t+1) is received on the first input of this subtractor and the predicted image from the compensation circuit is subtracted from it so as to yield a residue at the output of the subtractor. These residue data are then coded in a conventional manner, utilizing, for example, the discrete cosine transform, and are then transmitted on the output S1 and, as appropriate, multiplexed with the previously described motion and luminance information.
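Putting these steps together, the inter-mode processing of one block, up to the residue handed to the residue coder 18, might be sketched as follows (a hedged illustration; the DCT coding of the residue itself is not shown, and the function name and parameters are hypothetical):

```python
import numpy as np

def inter_block_residue(reference, current, y, x, dy, dx, cs, bo, block=16):
    """Residue of the block of the current image at (y, x): the reference block
    displaced by the motion vector (dy, dx) is luminance-compensated with
    (cs, bo) and subtracted from the current block (as in subtractor 7)."""
    ref = reference[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.float64)
    cur = current[y:y + block, x:x + block].astype(np.float64)
    predicted = cs * ref + bo        # motion + luminance compensation (circuit 12)
    return cur - predicted           # residue handed to the residue coder 18
```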

[0056] The intra mode of coding corresponds to the case in which an image I must be decoded independently of the images which precede it. This second mode of coding corresponds to position 2 of the switches, as represented by solid lines in the figure. The image I(t) is thus transmitted, in this mode, to a filter 1 which carries out linear filtering and then to a sub-sampler 4 which performs a sub-sampling, for example by 2. The degree of sub-sampling may equally well be fixed a priori or fixed each time the operation occurs; in the latter case, the value must necessarily be known (stored or transmitted) to the decoder. For the processing which follows, this image must have the same size as I, and zero values are therefore appended spatially to the sub-sampled image, still by way of the circuit 4, so as to yield I′(t). The image obtained is transmitted on the first input of the luminance and motion estimator. This circuit receives the image I(t) on its second input and carries out an estimation of motion M between image I and the associated image I′ or, more precisely, a correlation between the two, interpreted as a motion. After this motion estimation, which ideally is a zoom motion, a luminance estimation is made on the basis of the two images and of the motion data obtained.

[0057] As stated previously, one of the aspects of the invention is the utilization of the inter approach vis-à-vis the intra image. To do this, motion, luminance and residue data are therefore used. In fact the approach is motivated by Banach's fixed point theorem (and hence is akin to the fractal and IFS techniques described, for example, in the technical article by M. H. Hayes entitled “Iterated Function Systems for image and video coding”, volume XLV, May-June 1994, of the Journal on Communications).

[0058] To give one example, the current image is divided into image blocks and then sampled and filtered and this filtered image is divided into image blocks of the same size as the blocks of the current image. A correlation is then performed between a current block of the current image and all the blocks or the blocks in the neighbourhood of the current block (for example belonging to a search window) of the filtered image.
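As an illustration of paragraphs [0056] and [0058], the construction of the filtered, sub-sampled and zero-padded image I′(t) could be sketched as follows (a simple 2x2 mean is assumed as the low-pass filter and the image dimensions are assumed even; neither choice is imposed by the text). The correlation between a current block and the blocks of a search window of this filtered image can then reuse the exhaustive block search sketched after paragraph [0007].

```python
import numpy as np

def build_filtered_image(image):
    """Build I'(t) from I(t): low-pass filter and sub-sample by 2 (a plain
    2x2 mean is assumed here), then pad with zeros back to the original size."""
    img = np.asarray(image, dtype=np.float64)
    h2, w2 = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    small = 0.25 * (img[0:h2:2, 0:w2:2] + img[1:h2:2, 0:w2:2]
                    + img[0:h2:2, 1:w2:2] + img[1:h2:2, 1:w2:2])
    padded = np.zeros_like(img)      # zero values appended spatially, as by circuit 4
    padded[:small.shape[0], :small.shape[1]] = small
    return padded
```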

[0059] Reconstruction of the intra image is achieved by cumulated compensation by applying the Banach theorem as explained below.

[0060] According to fractal theory, reconstruction may be interpreted as simply a motion and luminance compensation which should theoretically be repeated an infinity of times. To do this, it uses the results of image representation by IFS. In practice, convergence is fairly rapidly achieved, and the number of iterations amounts to between 3 and 10 at most. The same arbitrary or empirical choice must be fixed both on the coder and decoder side, so as to determine the residue during coding, and so as to have coherent results during image reconstruction.

[0061] As in the inter mode, the motion information (motion vectors) and luminance information (luminance vectors with components cs and bo) is coded so as to be multiplexed with the coded residue data, and is then decoded so as to be transmitted to the motion and luminance compensation circuit 12. In intra mode, the circuit receives on its third input an image calculated during a preceding iteration, except for the first iteration, rather than the image I(t). A first image is calculated from the motion and luminance information and is transmitted as output to a circuit for calculating the number of iterations and for routing 14, which retransmits the calculated image which it receives to the input of the compensation circuit by way of a filter 16 and a sub-sampler 17. After a given number of iterations, the image thus reconstructed is the predicted image which is sent to the subtractor 7, by way of this circuit 14, so as to be subtracted from the image I(t).

[0062] The output from the luminance and motion compensation circuit is therefore linked to the input by way of a loop, thus allowing several successive compensations denoted c( ), associated with an operator f( ) consisting of a linear filtering of low-pass type and with a sub-sampling, for example a spatial division by two. The compensation c( ) relies just as much upon a motion compensation as upon a luminance compensation.

[0063] The “repeated” compensation consists in carrying out the following algorithm (with I0 arbitrary and the number of iterations being fixed at 8):

initialization: I = I0
i-th iteration (i < 9): Ii = c(f(Ii−1))
8th and last iteration: I8 = c(f(I7))

[0064] I8 is the reconstructed image.
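A sketch of this "repeated" compensation, with c( ) and f( ) supplied as functions built from the decoded motion and luminance data, could read as follows (the black starting image I0 is an arbitrary choice of this example):

```python
import numpy as np

def repeated_compensation(compensate, filter_subsample, shape, iterations=8):
    """Iterated compensation for intra reconstruction: Ii = c(f(Ii-1)).
    `compensate` implements c( ) from the decoded motion and luminance data,
    `filter_subsample` implements f( ); I0 is arbitrary (a black image here)."""
    image = np.zeros(shape, dtype=np.float64)     # I0
    for _ in range(iterations):
        image = compensate(filter_subsample(image))
    return image                                  # I8, the reconstructed image
```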

[0065] It is then apparent that the “customary” compensation consists of a “repeated” compensation for which:

[0066] the number of iterations is equal to 1

[0067] f( ) is replaced by the identity

[0068] I0 is the reference image R.

[0069] This repeated compensation therefore makes it possible to calculate the prediction of I as a function of the decoded luminance and motion information. Thus the intra image can be processed like an inter image. This has an important consequence as regards the hardware realization: the layout employed is the same for the intra images as for the inter images. The common points are then the following at coder level:

[0070] estimation of motion and associated coding;

[0071] luminance estimation according to the motion results and associated coding;

[0072] luminance and motion compensation on the basis of a reference image or the image arising from a linear filtering and sub-sampling and which may be repeated iteratively (intra case), so as to calculate the reconstructed image;

[0073] calculation of the residue by subtraction and associated coding.

[0074] FIG. 2 represents a decoding device or decoder according to the invention. In this figure the same references are used for the circuits common to the coder and to the decoder.

[0075] The motion and luminance information transmitted by the outputs S2 and S3 of the previously described coder is received on a first and second input E2 and E3 of the device. This information is transmitted respectively to the inputs of the motion 10 and luminance 11 decoding circuits, of the type used at the coder. The outputs of these circuits are linked respectively to a first and a second input of a motion and luminance compensation circuit 12, of the type used at the coder. The third input of the decoder, E1, receives the residue data, which are transmitted to a residue decoding circuit 19, and the output of the latter is linked to a first input of an adder 20. The output from this adder, which is also the output S from the decoder, is linked to a first input of a switch with two positions 6, this input corresponding to position 1, that is to say to the inter mode. The output of this switch is linked to a third input of the motion and luminance compensation circuit 12. The output of this circuit is linked to the input of a second switch with two positions 13. The first output of this switch, which corresponds to the inter mode, is linked to a first input of a third switch with two positions 15, also corresponding to the inter mode. The output of this third switch is linked to the second input of the adder circuit 20.

[0076] In intra mode, the input of the second switch 13 is linked to its second output, itself linked to the input of an iteration and routing calculation circuit 14. A first output of this circuit is linked to the second input of the switch 15. The second output is linked to the second input of the switch 6 by way of a filtering circuit 16 and of a sub-sampling circuit 17 placed in series.

[0077] The processing operations are here very similar to those for the coding. The motion, luminance and residue data transmitted by the coder and received respectively on the inputs E2, E3 and E1 are decoded by way of the decoding circuits 10, 11, 19 which carry out the operations inverse to those performed by the corresponding coding circuits 8, 9, 18 in the coder.

[0078] The inter mode decoding utilizes a reference image R. The luminance and motion compensation circuit 12 identifies the stored reference image R to be used (this may be predefined), which it compensates in terms of motion and luminance on the basis of the decoded motion and luminance information so as to yield a predicted image on its output. The residue decoded by the circuit 19 is added to this image, by way of the adder 20, which thus yields the reconstructed image on its output. This image is the one available at the output S of the device described. This image is also the one which is returned as input to the compensation circuit and is possibly chosen as reference image for the inter decoding of a following image.
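On the decoder side, the inter-mode reconstruction of one block described in this paragraph amounts to the following sketch (the decoded motion vector, the parameters cs and bo and the decoded residue for the block are assumed to have already been extracted from the bitstream; the function name is hypothetical):

```python
import numpy as np

def reconstruct_inter_block(reference, y, x, dy, dx, cs, bo, residue):
    """Decoder-side reconstruction of one block: the reference block designated
    by the motion vector is luminance-compensated (circuit 12) and the decoded
    residue is added to it (adder 20)."""
    block = residue.shape[0]
    ref = reference[y + dy:y + dy + block, x + dx:x + dx + block].astype(np.float64)
    predicted = cs * ref + bo
    return predicted + np.asarray(residue, dtype=np.float64)
```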

[0079] The intra mode decoding does not utilize a reconstructed image as reference image. It starts from an arbitrary reference image I0 created by the circuit 12 or residing in a memory of the circuit 12 (which may of course be different from the reference image used by the coder). A first iteration is carried out on the basis of this image by traversing the filtering circuit 16, the sub-sampling circuit 17 and then the circuit 12, which carries out motion and luminance compensation as a function of the data transmitted by the coder, so as to yield a new image I1, and so on. The image I7, restricting ourselves to 7 iterations at coder level, is the reconstructed image transmitted to the adder by way of the iteration and routing calculation circuit 14.

[0080] The decoding device is described here with three inputs but it is obvious that, when the luminance and motion data are multiplexed with the coded residue data, a demultiplexer at the input of the decoder, not represented in the figure, is given the job of sorting these data so as to send them to the corresponding inputs E1, E2 or E3 of the decoder.

[0081] The points which are common to the coding and decoding devices are as follows:

[0082] motion-associated decoding;

[0083] luminance-associated decoding;

[0084] luminance and motion compensation based on a reference image (which may be a blank image memory in the starting intra case), and which may be iteratively repeated (intra case) by being associated with the following point;

[0085] linear filtering and sub-sampling depending on the case (intra/inter);

[0086] residue-associated decoding, and addition to the reconstruction by compensation.

[0087] In general, it will be noted that no assumption has been made regarding the existing motion field. It may therefore be block-wise, region-wise, dense, more or less accurate, obtained by the method of least squares or by the gradient method.

[0088] This invention therefore makes it possible to improve the existing schemes for coding digital image sequences which are based on motion compensation. It uses existing processing operations (estimation, motion compensation; coding of the representation data; processing of the residue), whilst being defined independently of them.

[0089] It is particularly well suited to the novel methods of coding by region or zone which no longer utilize the residue data but the prediction data alone. It will be noted that the criterion for deciding whether or not to use the luminance data for a given block, neighbourhood or region can be associated with a qualitative aspect of the residue, that is to say with the degree of utilization of the residue in the envisaged application. One can in fact envisage, for example according to the novel techniques of coding by region or zone, transmitting the residue data only for certain images, for example one image in every n images, or else not transmitting this information at all, the decoders then utilizing only the prediction information.

Claims

1. Process for coding digital data from a sequence of video images carrying out a coding of the difference in luminance (7) between a current image segment to be coded and a corresponding segment of an image predicted from a so-called reference image, characterized in that the prediction is made as a function of a luminance compensation (12) of values of luminance of the reference image.

2. Process according to claim 1, characterized in that the coding of the segment of the image to be coded is an inter coding between the current image I(t+1) and a preceding image I(t) and in that this preceding image is the reference image.

3. Process according to claim 2, characterized in that the luminance compensation (12) is performed on the basis of a luminance estimation (5) as a function of luminance values of pixels belonging to the segment of the current image to be coded and of those belonging to the segment of the preceding image matched up (5) by way of a motion vector calculated on the basis of a motion estimator (5).

4. Process according to claim 1, characterized in that the coding of the segment of the image to be coded is an intra coding of the current image I(t) and in that the predicted image is calculated by successive iterations on the basis of a reference image I0, each iteration including a filtering (16), a sub-sampling (17), and a motion and luminance compensation (12).

5. Process according to claim 4, characterized in that a motion vector is calculated by correlation between the segment of the current image to be coded I(t) and a filtered image I′(t) obtained from filtering (1) and sub-sampling (4) this segment of current image I(t) and in that the luminance compensation (12) is performed on the basis of a luminance estimation (5) dependent on luminance values of pixels belonging to the segment of the current image to be coded and on those belonging to the segment of the image matched up by way of the motion vector.

6. Process according to claim 3, characterized in that the estimation of the luminance (5) comprises the calculation of a predicted luminance $\hat{q}$ for the pixels of the segment of the current image to be coded of actual luminance q, predicted luminance chosen to be linearly dependent on the luminance p for the pixels of the segment of the image matched up by way of the motion vector, of the form $\hat{q} = c_s\,p + b_o$, the calculation of the coefficients cs and bo being performed by minimizing the sum, over the pixels relevant to the segment of the image to be coded, of the difference relating to the values of $\hat{q}$ and of q.

7. Process according to the preceding claim 6, characterized in that the image segment is an image block and in that the luminance vectors with components cs and bo are calculated at image block level.

8. Process according to claim 6, characterized in that the calculation of the coefficients is performed on the basis of the method of least squares.

9. Device for coding digital data from a video sequence for the implementation of the process according to claim 1, comprising a motion estimation circuit (5) for calculating vectors of motion between two images, a motion compensation circuit (12) based on a reference image for calculating a predicted image, a subtractor (7) for subtracting the predicted image from the current image for calculating a residue to be coded, characterized in that it comprises a circuit (5) for estimating luminance between the two same images and a luminance compensation circuit (12) receiving the information from the luminance estimation and motion estimation circuit (5) for calculating the predicted image.

10. Coding device according to claim 9, characterized in that the luminance estimation circuit (5) estimates a luminance of the pixels of a segment of the current image to be coded as a function of the luminances of the pixels of the segment of the image matched up by way of the motion vector and as a function of the difference in luminance between the two image segments taken globally.

11. Device for decoding digital data coded according to the process of claim 1, for calculating an image reconstructed from a residue and a predicted image, characterized in that it comprises a circuit (12) for compensating luminance as a function of luminance vectors which are luminance estimation information items, for calculating the predicted image.

12. Decoding device according to claim 11, characterized in that, in an inter mode, the luminance and motion compensation circuit (12) calculates the predicted image as a function of a preceding reconstructed image, of motion vectors and of luminance vectors.

13. Decoding device according to claim 11, characterized in that, in an intra mode, the output from the luminance and motion compensation circuit (12) is fed back to its input by way of a filter (16) and a sub-sampler (17) so as to calculate the predicted image by successive iterations of a reference image (I0), an iteration comprising filtering, sub-sampling and compensation as a function of the luminance vectors and motion vectors.

14. Signals for transmitting compressed data obtained by coding residue data items calculated by differencing between the luminance values of a predicted image and a current image, the motion vector field forming part of the data transmitted, characterized in that they also comprise a field of luminance vectors calculated from luminance values for the image segments matched up by way of the motion vector field.

Patent History
Publication number: 20010016076
Type: Application
Filed: Feb 9, 1998
Publication Date: Aug 23, 2001
Inventor: JEAN-CHRISTOPHE DUSSEUX (RENNES)
Application Number: 09020524
Classifications
Current U.S. Class: Gray Level To Binary Coding (382/237)
International Classification: G06K009/36; G06K009/46;