Restoration of Color Components in an Image Model

- NOKIA CORPORATION

This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component. In the method, the degradation information of each colour component of the image is found and is used for improving image quality. The degradation information of each colour component is specified by a point-spread function. Each colour component is restored by the degradation function. The image can be unprocessed image data. The invention also relates to several alternatives for implementing the restoration, and for controlling and regularizing the inverse process independently of the image degradation. The invention also relates to a device, to a module, to a system, to computer program products and to program modules.

Description
CROSS REFERENCE TO RELATED APPLICATIONS

This application is for entry into the U.S. national phase under §371 for International Application No. PCT/FI05/050001 having an international filing date of Jan. 4, 2005, and from which priority is claimed under all applicable sections of Title 35 of the United States Code including, but not limited to, Sections 120, 363 and 365(c). This application is also a continuation-in-part of U.S. patent application Ser. No. 10/888,534, filed on Jul. 9, 2004, from which domestic priority is claimed.

FIELD OF THE INVENTION

This invention relates to image processing and particularly to a restoration of colour components in a system for storage or acquisition of digital images.

BACKGROUND OF THE INVENTION

Blurring or degradation of an image can be caused by various factors, e.g. out-of-focus optics, or any other aberrations that result from the use of a wide-angle lens, or the combination of inadequate aperture value, focal length and lens positioning. During the image capture process, when long exposure times are used, the movement of the camera, or of the imaged subject, can result in motion blurring of the picture. Also, when a short exposure time is used, the number of photons captured is reduced, which results in high noise levels as well as poor contrast in the captured image.

Various methods for reconstructing images that contain defects are known from related art. A defective block in the image can be replaced with the average of some or all of the surrounding blocks. One example is to use three blocks that are situated above the defect. Further, there is a method called “best neighbours matching”, which restores images by taking a sliding block of the same size as the defect region and moving it through the image. At each position, except for ones where the sliding block overlaps the defect, the pixels around the border of the sliding block are placed in a vector. The pixel values around the border of the defect are placed in another vector and the mean squared error between them is computed. The defect region is then replaced by the block that has the lowest border-pixel mean squared error.

For example spatial error concealment techniques attempt to hide a defect by forming a good reconstruction of the missing or corrupted pixels. One of the methods is to find a mean of the pixels in an area surrounding the defect and to replace the defect with the mean pixel value. A requirement for the variance of the reconstruction can be added to equal the variance of the area around the defect.

Different interpolation methods can also be used in the image reconstruction process. For example a bilinear interpolation can be applied to pixels on four corners of the defect rectangle. This makes a linear, smooth transition of pixel values across the defect area. Bilinear interpolation is defined by the pixel value being reconstructed, pixels at corners of the reconstructed pixel and a horizontal and vertical distance from the reconstructed pixel to the corner pixels. Another method is edge-sensitive nonlinear filtering, which interpolates missing samples in an image.
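As an illustration of the bilinear interpolation described above, the following sketch fills a rectangular defect from its four corner pixels; the function and parameter names are our own, not taken from the patent or any cited method.

```python
import numpy as np

def bilinear_fill(image, top, left, bottom, right):
    """Fill the rectangle [top..bottom] x [left..right] by bilinear
    interpolation from its four corner pixels (illustrative sketch)."""
    out = image.astype(float).copy()
    c00, c01 = out[top, left], out[top, right]
    c10, c11 = out[bottom, left], out[bottom, right]
    h, w = bottom - top, right - left
    for y in range(top, bottom + 1):
        for x in range(left, right + 1):
            v = (y - top) / h   # vertical fraction, 0 at top, 1 at bottom
            u = (x - left) / w  # horizontal fraction, 0 at left, 1 at right
            out[y, x] = ((1 - v) * (1 - u) * c00 + (1 - v) * u * c01
                         + v * (1 - u) * c10 + v * u * c11)
    return out
```

The filled values vary linearly along each row and column, giving the smooth transition across the defect area mentioned above.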

The purpose of image restoration is to remove those degradations so that the restored images look as close as possible to the original scene. In general, if the degradation process is known, the restored image can be obtained as the inverse process of the degradation. Several methods to solve this inverse mathematical problem are known from the prior art. However, most of these techniques do not consider the image reconstruction process in the modelling of the problem, and assume simplistic linear models. Typically, the solutions in implementations are quite complicated and computationally demanding.

Image restoration generally involves two important steps, the deblurring and noise filtering steps. Some approaches for deblurring are known from related art. These approaches can be categorized into non-iterative and iterative techniques. In the non-iterative methods, the solution is obtained through a one-pass processing algorithm, e.g. Laplacian high-pass filtering, unsharp masking, or frequency domain inverse filtering. In the iterative methods, the result is refined during several processing passes. The de-blurring process is controlled by a cost function that sets the criteria for the refining process, e.g. the Least Squares method or the adaptive Landweber algorithm. Usually, after a few iterations, there is not much improvement between adjacent steps. The continuation of the de-blurring algorithm beyond a certain point might introduce annoying artefacts into the restored image, such as overshooting of the edges due to over-emphasis of the details, or even false colouring. Another approach to solve the de-blurring problem is to apply a one-step de-blurring method iteratively with varying parameters, keeping the best result (blind deconvolution).

The methods from related art are typically applied in restoration of images in high-end applications such as astronomy and medical imaging. Their use in consumer products is limited, due to the difficulty of quantifying the image gathering process and the typical complexity and computational power needed to implement these algorithms. Some of the approaches have been used in devices that have limited computational and memory resources. The methods from the related art are typically designed as a post-processing operation, which means that the restoration is applied to the image after it has been acquired and stored. Each colour component has a different point-spread function, which is an important criterion that can be used to evaluate the performance of imaging systems. If the restoration is applied as post-processing, the information about the different blurring in each colour component is no longer available. The exact modelling of the image acquisition process is more difficult and (in most cases) is not linear, so the “inverse” solution is less precise. Most often, the output of digital cameras is compressed to JPEG format. If the restoration is applied after the compression (which is typically lossy), the result can amplify unwanted blocking artefacts.

SUMMARY OF THE INVENTION

The aim of this invention is to provide an improved way to restore images. This can be achieved by a method, a model, use of a model, a de-blurring method, a device, a module, a system, program modules and computer program products.

According to the present invention, a method is provided for forming a model for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein degradation information of each colour component is found, an image degradation function is obtained and said each colour component is restored by said degradation function.

According to the present invention, a model for improving image quality of a digital image is also provided, said model being obtainable by the claimed method. According to the present invention, use of the model is also provided.

Further according to the present invention, a method is provided for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein degradation information of each colour component of the image is found, a degradation function is obtained according to the degradation information and said each colour component is restored by said degradation function.

Further according to the present invention, a method for restoration of an image is provided, wherein the restoration is implemented by an iterative restoration function where at each iteration a de-blurring method with regularization is implemented.

Further according to the present invention, a system for determining a model for improving image quality of a digital image with an imaging module is provided, said module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, said image consisting of at least one colour component, wherein the system comprises first means for finding degradation information of each colour component of the image, second means for obtaining a degradation function according to the degradation information, and third means for restoring said each colour component by said degradation function.

Further according to the present invention, the imaging module is provided, comprising imaging optics and an image sensor for forming an image through the imaging optics onto the light-sensitive image sensor, wherein a model for improving image quality is related to said imaging module. Further according to the present invention, a device comprising an imaging module is provided.

In addition, according to the present invention, a program module for improving image quality in a device comprising an imaging module is provided, said program module comprising means for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function. According to the present invention, another program module for a restoration of an image is also provided, comprising means for implementing a de-blurring with regularization at each iteration of an iterative restoration.

Further, a computer program product is provided, comprising instructions for finding degradation information of each colour component of the image, obtaining a degradation function according to the degradation information, and restoring said each colour component by said degradation function. According to the present invention, a computer program product for a restoration of an image is also provided, comprising computer readable instructions for implementing a de-blurring with regularization at each iteration of an iterative restoration.

Other features of the invention are described in appended dependent claims.

In the description, the term “first image model” corresponds to an image which has already been captured with an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, but not processed in any way. The first image model is raw image data. The second image model is the one for which degradation information has been determined. It will be appreciated that sensor types other than CMOS or CCD can be used with the invention.

The first image model is used for determining the blurring of the image, and the second image model is restored according to the invention. The restoration can also be regulated according to the invention. After these steps have been done, other image reconstruction functions can be applied to it. If considering the whole image reconstruction chain, the idea of the invention is to apply the restoration as a pre-processing operation, whereby the following image reconstruction operations will benefit from the restoration. Applying the restoration as a pre-processing operation means that the restoration algorithm is targeted directly to the raw colour image data and in such a manner, that each colour component is handled separately.

With the invention the blurring caused by optics can be reduced significantly. The procedure is particularly effective if fixed focal length optics are used. The invention is also applicable to varying focal length systems, in which case the processing considers several deblurring functions from a look-up table depending on the focal position of the lenses. The deblurring function can also be obtained through interpolation from look-up tables. One possibility to define the deblurring function is to use continuous calculation, in which the focal length is used as a parameter of the deblurring function. The resulting images are sharper and have better spatial resolution. It is worth mentioning that the proposed processing is different from traditional sharpening algorithms, which can also result in sharper images with amplified high frequencies. In fact, this invention presents a method to revert the degradation process and to minimize blurring, which is caused e.g. by the optics, whereas the sharpening algorithms use generic high-pass filters to add artefacts to an image in order to make it look sharper.

The model according to the invention is more viable for different types of sensors that can be applied in future products (because of better fidelity to the linear image formation model). In the current approach, the following steps and algorithms of the image reconstruction chain benefit from the increased resolution and contrast of the solution.

Applying the image restoration as a pre-processing operation may minimize non-linearities that are accumulated in the image capturing process. The invention also may prevent over-amplification of colour information.

The data restoration sharpens the image by iterative inverse filtering. This inverse filtering can be controlled by a controlling method that is also provided by the invention. Due to the controlling method, the iteration is stopped when the image is sharp enough. The controlling method provides a mechanism to process differently the pixels that are at different locations in the image. According to this, the overshooting in the restored image can be reduced, thus giving a better visual quality of the final image. In addition, pixels that are located at edges in the observed image are restored differently than the pixels that are located on smooth areas. The controlling method can address the problem of a spatially varying point-spread function. For example, if the point-spread function of the optical system is different for different pixel coordinates, restoration of the image using independent processing of the pixels can solve this problem. Further, the controlling method can be implemented with several de-blurring algorithms in order to improve their performance.

The invention can also be applied for restoration of video.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated with reference to examples in the accompanying drawings and the following description.

FIG. 1 illustrates an example of the system according to the invention,

FIG. 2 illustrates another example of the system according to the invention,

FIG. 3 illustrates an example of a device according to the invention,

FIG. 4 illustrates an example of an arrangement according to the invention, and

FIG. 5 illustrates an example of an iterative restoration method and a controlling method according to the invention.

DETAILED DESCRIPTION OF THE INVENTION

This invention relates to a method for improving image quality of a digital image captured with an imaging module comprising at least imaging optics and an image sensor, where the image is formed through the imaging optics, the image consisting of at least one colour component. In the method, the degradation information of each colour component of the image is found and is used for improving image quality. The degradation information of each colour component is specified by a point-spread function. Each colour component is restored by said degradation function. The image can be unprocessed image data. The invention also relates to several alternatives for implementing the restoration, and for controlling and regularizing the inverse process.

The description of the restoration of images according to the invention can be targeted to three main points, wherein at first the blur degradation function is determined, e.g. by measuring a point-spread function (PSF) for at least one raw colour component. Secondly, a restoration algorithm is designed for at least one raw colour component. Thirdly, a regularization mechanism can be integrated to moderate the effect of high-pass filtering. In the description the optics in mobile devices are used as an example, because they may generally be limited to a wide focus range. It will, however, be apparent to a person skilled in the art that mobile devices are not the only suitable devices. For example, the invention can be utilized by digital cameras, web cameras or similar devices, as well as by high-end applications. The aim of this algorithm is to undo or attenuate a degradation process (blurring) resulting from the optics. Due to the algorithm the resulting images become sharper and have an improved resolution.

Wherever a term “colour component” is used, it relates to various colour systems. The example in this invention is RGB-system (red, green, blue), but a person skilled in the art will appreciate other systems such as HSV (Hue, Saturation, Value), YUV (Luminance, chrominance) or CMYK (Cyan, Magenta, Yellow, Black) etc.

The image model in the spatial domain can be described as:


gi(m,n)=hi(u,v)*fi(m,n)+ni(m,n)   (1)

where gi is a measured colour component image, fi is an original colour component, hi is a corresponding linear blurring in the colour component and ni is an additive noise term. gi, fi, ni are defined over an array of pixels (m, n) spanning the image area, whereas hi is defined on the pixels (u, v) spanning blurring (point-spread function) support. The index i={1,2,3,4} denotes respectively the data concerning colour components, such as red, green1, blue and green2 colour components.
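A minimal sketch of the degradation model of equation (1) for one colour component, assuming a symmetric PSF (for which correlation equals convolution) and edge padding at the image borders; both assumptions and all names are ours:

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, rng=None):
    """Blur an original colour component f with PSF h and add Gaussian
    noise: g = h * f + n, as in equation (1) (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    M, N = f.shape
    u, v = h.shape
    padded = np.pad(f, ((u // 2, u // 2), (v // 2, v // 2)), mode="edge")
    g = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            # Correlation with h; equals convolution for a symmetric PSF.
            g[m, n] = np.sum(h * padded[m:m + u, n:n + v])
    return g + noise_sigma * rng.standard_normal((M, N))
```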

The invention is described in more detail by means of FIGS. 1 and 2 each illustrating a block diagram of the image restoration system according to the invention.

Blur Specification

The procedure for estimating the degradation (FIG. 1, 110) in the image that has been captured by an optical element (100) is described next. As can be seen in FIG. 2, the degradation can be estimated by means of the point-spread functions 210 corresponding to the blur in the three colour channels (in this example R, G, B) (raw data). The point-spread functions show different characteristics for each colour channel. The point-spread function is an important criterion that can be used to evaluate the performance of imaging systems.

The point-spread function changes as a function of the wavelength and the position in the camera field of view. Because of that, finding a good point-spread function may be difficult. In the description, out-of-focus close-range imaging and space-invariant blurring are assumed. The practical procedure for estimating the point-spread function (hi) that is associated with each colour component can also be used as a stand-alone application to help in the evaluation process of camera systems.

Given a blurred image corresponding to one colour component of a checker-board pattern, the four outer corner points are located manually, and first a rough estimate of the corner positions is determined. The exact locations (at subpixel accuracy) are then recalculated by refining the search within a square window of e.g. 10×10 pixels. Using those corner points, an approximation of the original grid image fi can be reconstructed by averaging the central parts of each square and by assigning a constant luminance value to those squares.

The point-spread function is assumed to be space invariant, whereby the blur can be calculated through a pseudo-inverse filtering method (e.g. in the Fourier domain). Since the pseudo-inverse technique is quite sensitive to noise, a frequency low-pass filter can be used to limit the noise and the procedure can be applied with several images to obtain an average estimate of the point-spread function. (The normalized cut-off frequency of the mentioned low-pass filter is around 0.6, but any value from approximately 0.4 to 0.9 may be applicable.)
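The pseudo-inverse estimation with a frequency low-pass limit might be sketched as follows, assuming the blurred colour component g and the reconstructed grid image f are available as equal-sized arrays; the small-magnitude guard eps and the ideal (rectangular) low-pass are our own assumptions:

```python
import numpy as np

def estimate_psf(g, f, cutoff=0.6, eps=1e-3):
    """Estimate the blur h from g = h * f by pseudo-inverse filtering
    in the Fourier domain, limiting noise with an ideal low-pass at the
    given normalized cut-off frequency (illustrative sketch)."""
    G = np.fft.fft2(g)
    F = np.fft.fft2(f)
    # Pseudo-inverse: divide only where |F| is significant.
    H = np.where(np.abs(F) > eps, G / (F + (np.abs(F) <= eps)), 0.0)
    # Ideal low-pass on normalized frequencies in [-1, 1].
    fy = np.fft.fftfreq(g.shape[0]) * 2.0
    fx = np.fft.fftfreq(g.shape[1]) * 2.0
    keep = (np.abs(fy)[:, None] <= cutoff) & (np.abs(fx)[None, :] <= cutoff)
    return np.real(np.fft.ifft2(H * keep))
```

Averaging the result over several images, as suggested above, would further stabilize the estimate.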

In order to quantify the extent of blur that occurs in each colour channel, a simple statistic is defined, determined as the mean of the distance from the centre of the function (in pixels), weighted by the value of the normalized point-spread function at each point:

Spsf(hi) = [ Σm=0..M1 Σn=0..N1 √(m² + n²) hi(m,n) ] / [ Σm,n hi(m,n) ]   (2)

wherein M1 and N1 are the support of the point-spread function filter. Spsf describes the extent of the blurring. Experiments confirm that the channels have different blurring patterns. For example when studying Mirage-1 camera, the obtained Spsf values were:

Spsf(hi) = 5.42 for i = 1 (red), 5.01 for i = 2 (green), 4.46 for i = 3 (blue)

It can be seen from the results that the red component was the most blurred and noisy, while the least blurred was the blue component, which also had the least contrast.
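The statistic of equation (2) can be sketched as follows; equation (2) measures distances from the PSF origin, and this sketch takes the centre element of the PSF array as that origin (our own convention):

```python
import numpy as np

def spsf(h):
    """Blur-extent statistic: mean distance from the PSF centre,
    weighted by the normalized PSF value (illustrative sketch)."""
    h = np.abs(np.asarray(h, dtype=float))
    M, N = h.shape
    cm, cn = (M - 1) / 2.0, (N - 1) / 2.0
    m, n = np.mgrid[0:M, 0:N]
    dist = np.sqrt((m - cm) ** 2 + (n - cn) ** 2)
    return float((dist * h).sum() / h.sum())
```

A wider PSF yields a larger Spsf, matching the ordering of the measured values above (red most blurred, blue least).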

Restoration Algorithm

The data concerning colour components is measured by a sensor 120, e.g. by a Bayer sensor 220 (in FIG. 2), like a CMOS or CCD sensor. The colour components can be red (R), green1 (G1), blue (B) and green2 (G2), as illustrated in FIG. 2. Each of these colour “images” is a quarter of the size of the final output image.

The second image model is provided to be restored (130; 250). The images are arranged lexicographically into vectors, and the point-spread function hi is arranged into a block-Toeplitz circulant matrix Hi. The second image model is then expressed as:


gi=Hi fi+ ηi   (3)

Having a reasonable approximation of Hi, the purpose of image restoration is to recover the best estimate of fi from the degraded observation gi. The blurring function Hi is non-invertible (it is already defined on a limited support, so its inverse would have infinite support), so a direct inverse solution is not possible. The classical direct approach to solving the problem considers minimizing the energy between the input and a simulated re-blurred image, which is given by the norm:


JLS = ‖gi − Hi f̂i‖²   (4)

thus providing a least squares fit to the data. The minimization of the norm also leads to the maximum-likelihood solution when the noise is known to be Gaussian. It also leads to the generalized inverse filter, which is given by:


(HT H) f̂i = HT gi   (5)

In order to solve for this, it is common to use deterministic iterative techniques with the method of successive approximations, which leads to following iteration:

f̂i(0) = μ HT gi
f̂i(k+1) = f̂i(k) + μ HT (gi − H f̂i(k))   (6)

This iteration converges, if

0 < μ < 2/λmax,

where λmax is the largest eigenvalue of the matrix HTH. The iteration continues until the normalized change in energy becomes sufficiently small.
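The successive-approximations iteration (6), with the convergence condition 0 < μ < 2/λmax, might be sketched on a small dense system as follows; the patent works with lexicographically ordered image vectors and a block-Toeplitz circulant H, which this illustration simplifies to an explicit matrix:

```python
import numpy as np

def landweber(g, H, mu, n_iter=100, tol=1e-8):
    """Iterative restoration of equation (6):
        f(0)   = mu * H^T g
        f(k+1) = f(k) + mu * H^T (g - H f(k)),
    stopping when the normalized change becomes small (sketch)."""
    f = mu * (H.T @ g)
    for _ in range(n_iter):
        f_next = f + mu * (H.T @ (g - H @ f))
        if np.linalg.norm(f_next - f) <= tol * (np.linalg.norm(f) + 1e-12):
            return f_next
        f = f_next
    return f
```

No explicit inverse of H is ever formed, which is one of the advantages of iterative techniques noted below.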

It can be seen from FIGS. 1 and 2 that the restoration (130; 250) is made separately for each of the colour components R, G, B.

The main advantages of iterative techniques are that there is no need to explicitly implement the inverse of the blurring operator and that the restoration process could be monitored as it progresses.

The least squares approach can be extended to the constrained least squares (CLS) technique. Theoretically speaking, the problem of image restoration is ill-posed, i.e. a small perturbation in the output, for example noise, can result in an unbounded perturbation of the direct least squares solution that is presented above. For this reason, the constrained least squares method is usually considered in the literature. These algorithms minimize the term in equation (4) subject to a (smoothness) regularization term, which consists of a high-pass filtered version of the output. The regularization term permits the inclusion of prior information about the image.

One Example of Regularization Mechanism

In practice, the image sensor electronics, such as CCD and CMOS sensors, may introduce non-linearities to the image, of which saturation is one of the most serious. Due to non-linearities unaccounted for in the image formation model, the separate processing of the colour channels might result in serious false colouring around the edges. Hence the invention introduces an improved regularization mechanism (FIG. 2; 240) to be applied to restoration. The pixel areas being saturated or under-exposed are used to devise a smoothly varying coefficient that moderates the effect of high-pass filtering in the surrounding areas. The formulation of the image acquisition process is invariably assumed to be a linear one (1). Due to the sensitivity difference of the three colour channels, and fuzzy exposure controls, pixel saturation can happen incoherently in each of the colour channels. The separate channel restoration near those saturated areas results in over-amplification in that colour component alone, thus creating artificial colour mismatch and false colouring near those regions. To avoid this, a regularization mechanism according to the invention is proposed. The regularization mechanism is integrated in the iterative solution of equation (6). The idea is to spatially adapt μ in order to limit the restoration effect near saturated areas. The adapted step size is given as follows:


μadap(m,n) = βsat(m,n) μ  (9)

where μ is the global step-size as discussed earlier, and βsat is the local saturation control that modulates the step size. βsat is obtained using the following algorithm:

    • for each colour channel image gi, i={1 . . . 4},
    • consider the values of the window (w×w) surrounding the pixel location gi(m, n),
    • count the number of saturated pixels Si(m,n) in that window.
    • The saturation control is given by the following equation:

βsat(m,n) = max( 0, ( w² − Σi=1..4 Si(m,n) ) / w² ).

βsat varies between 0 and 1 depending on the number of saturated pixels in any of the colour channels.
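The counting behind βsat might be sketched as follows; the saturation level and the zero-padded border handling are our own assumptions:

```python
import numpy as np

def beta_sat(channels, w=5, sat_level=255):
    """Local saturation control: for each pixel, count saturated pixels
    in a w x w window summed over all colour channels, and map the
    count to a coefficient in [0, 1] (illustrative sketch)."""
    M, N = channels[0].shape
    r = w // 2
    total = np.zeros((M, N))
    for g in channels:
        sat = (np.asarray(g) >= sat_level).astype(float)
        padded = np.pad(sat, r, mode="constant")
        # Box sum over the w x w window centred at each pixel.
        for dm in range(w):
            for dn in range(w):
                total += padded[dm:dm + M, dn:dn + N]
    return np.maximum(0.0, (w * w - total) / (w * w))
```

Multiplying the global step size μ by this map, as in equation (9), suppresses the restoration near saturated areas and leaves it untouched elsewhere.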

Another Example of Iterative Restoration Method and a Regularization Mechanism

The previous data restoration sharpens the image by iterative inverse filtering. This inverse filtering can be controlled by a controlling method whereby the iteration is stopped when the image is sharp enough. The basic idea of this controlling method is illustrated in FIG. 5 as a block chart. At the beginning of the method the image is initialized equal to the observed image, and the parameters of the de-blurring algorithm are set up (510). After this, the de-blurring algorithm is applied to the observed image. This can be any of the existing one-pass algorithms, such as the unsharp masking method, blur-domain de-blurring, differential filtering, etc. (520). The performance of the de-blurring step matters at every iteration, because if the de-blurring performs poorly, the overall performance of the system suffers. In the next step (530), pixels from the de-blurred image can be checked to detect overshooting such as over-amplified edges. In the following step (540) the restored image is updated. If a pixel location in the de-blurred image corresponds to an overshot edge, it is not updated further in the iterative process. Otherwise, the pixels from the restored image are updated normally. Also, the pixels that correspond to overshooting are marked such that in the next iterations the corresponding restored pixels remain unchanged (for those pixels the restoration process is terminated at this point). In the next step (550) the intermediate output image is scanned and the pixels that still contain overshooting are detected. If persistent overshooting is detected (560), the global iterative process is stopped and the restored image is returned. Otherwise the parameters of the de-blurring algorithm are changed (570) and the next iteration is started with the de-blurring of the observed image. The last procedure (560-570) makes the algorithm suitable for blind deconvolution.
The algorithm disclosed here prevents overshooting in the restored image, which appears due to over-amplification of edges. This is done in two different ways. First, at each iteration, the pixels are updated separately such that the ones that are degraded are not updated into the restored image. Second, the whole de-blurring process is stopped if there is a pixel in the restored image that is too severely degraded. A detailed description of the implementation of the de-blurring method is discussed next.

The method steps of FIG. 5 are done for one of the colour components R, G, B. The other two components are processed separately exactly in the same manner. If the YUV colour system is used, only component Y needs to be processed.

At the step 510 the image is initialized equal to the observed image, and the parameters of the deblurring algorithm are set up. The input observed image is denoted here by I and the final restored image is denoted by Ir. The restored image Ir is initialized with I (Ir=I) at the beginning. The parameters of the de-blurring method are also initialized. For instance, if the unsharp masking method is used for de-blurring, the number of blurred images used and their parameters are chosen. If another algorithm is implemented, its parameters will be set up at this point. A matrix of size equal to the size of the image and with unity elements is initialized. The matrix is denoted by mask.

At the step 520 the de-blurring algorithm is applied to the observed image and the de-blurred image Idb is obtained. At the step 530 every pixel from the deblurred image is checked to detect the overshooting such as over-amplified edges. The pixels from the de-blurred image Idb are scanned and the horizontal and vertical differences between adjacent pixels can be computed as follows:


dh1(x,y)=Idb(x, y)−Idb(x, y−1)


dh2(x,y)=Idb(x, y)−Idb(x, y+1)


dv1(x,y)=Idb(x, y)−Idb(x−1, y)


dv2(x,y)=Idb(x, y)−Idb(x+1, y)

where x, y represent the vertical and horizontal pixel coordinates respectively. Also the pixels from the observed image are scanned and the horizontal and vertical differences between the adjacent pixels can be computed as follows:


dh3(x,y)=I(x, y)−I(x, y−1)


dh4(x,y)=I(x, y)−I(x, y+1)


dv3(x,y)=I(x, y)−I(x−1, y)


dv4(x,y)=I(x, y)−I(x+1, y)

It is checked for every pixel of the de-blurred image whether the signs of the corresponding differences dh1 and dh3, dh2 and dh4, dv1 and dv3, and dv2 and dv4 differ. If they differ, it means that the pixel at coordinates x, y contains overshooting. This checking can be carried out by the following algorithm:

if NOT[sign(dh1(x,y)) = sign(dh3(x,y))] OR NOT[sign(dh2(x,y)) = sign(dh4(x,y))]
    if [abs(dh1(x,y)) >= th1*MAX] AND [abs(dh2(x,y)) >= th1*MAX]
        mh = 0;
    end
end
if NOT[sign(dv1(x,y)) = sign(dv3(x,y))] OR NOT[sign(dv2(x,y)) = sign(dv4(x,y))]
    if [abs(dv1(x,y)) >= th1*MAX] AND [abs(dv2(x,y)) >= th1*MAX]
        mv = 0;
    end
end
if (mh = 0) OR (mv = 0)
    mask(x,y) = 0;
end

Basically the idea is that for every pixel of the de-blurred image the local shape of the de-blurred image is compared with the local shape of the observed image. This is done by comparing the signs of the corresponding differences from the two images in the horizontal and also in the vertical direction. When a difference in the shape of the two images is found (whether in the horizontal or vertical direction), this means that the corresponding pixel of the de-blurred image might be over-emphasized. For those pixels an estimated value of the overshooting is compared with a threshold (th1). If the amount of overshooting is larger than the threshold (th1), the corresponding pixel is marked as distorted (the value of the mask is made equal to zero).

The threshold (th1) is defined as a percentage of the maximum value of the pixels of the observed image (the value MAX is the maximum value of I). Computing the threshold in this way ensures that the value of the threshold (th1) is adapted to the image range.
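As an illustration only, the pixel scan of step 530 can be sketched as follows. The function name `detect_overshoot`, the nested-list image representation, and the skipping of the one-pixel border are assumptions made for this sketch, not part of the described method:

```python
# Illustrative sketch of the step-530 overshoot check (hypothetical helper).
# I is the observed image, Idb the de-blurred image, both as lists of rows;
# th1 is a fraction of the maximum observed pixel value MAX.

def detect_overshoot(I, Idb, th1):
    """Return a mask with 0 where the de-blurred pixel overshoots."""
    h, w = len(I), len(I[0])
    MAX = max(max(row) for row in I)
    mask = [[1] * w for _ in range(h)]
    sign = lambda v: (v > 0) - (v < 0)
    for x in range(1, h - 1):          # x: vertical coordinate
        for y in range(1, w - 1):      # y: horizontal coordinate
            dh1 = Idb[x][y] - Idb[x][y - 1]
            dh2 = Idb[x][y] - Idb[x][y + 1]
            dv1 = Idb[x][y] - Idb[x - 1][y]
            dv2 = Idb[x][y] - Idb[x + 1][y]
            dh3 = I[x][y] - I[x][y - 1]
            dh4 = I[x][y] - I[x][y + 1]
            dv3 = I[x][y] - I[x - 1][y]
            dv4 = I[x][y] - I[x + 1][y]
            mh = mv = 1
            # flag only where the local shapes disagree AND the
            # amplitude exceeds the image-adapted threshold th1*MAX
            if sign(dh1) != sign(dh3) or sign(dh2) != sign(dh4):
                if abs(dh1) >= th1 * MAX and abs(dh2) >= th1 * MAX:
                    mh = 0
            if sign(dv1) != sign(dv3) or sign(dv2) != sign(dv4):
                if abs(dv1) >= th1 * MAX and abs(dv2) >= th1 * MAX:
                    mv = 0
            if mh == 0 or mv == 0:
                mask[x][y] = 0
    return mask
```

For example, a de-blurred image with an amplified spike on an otherwise flat observed background would have that spike flagged with a 0 in the mask.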

At step 540 the restored image is updated. The pixels that form the restored image are simply updated with the pixels of the de-blurred image that were not marked as distorted. This step can be implemented as follows:

for every pixel from Idb(x,y)
    if mask(x,y)=1
        Ir(x,y)=Idb(x,y);
    end
end
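A minimal sketch of this update, again assuming the nested-list image representation and a hypothetical helper name:

```python
# Illustrative sketch of step 540 (hypothetical helper): keep the
# de-blurred value only where the mask did not flag overshooting.

def update_restored(Ir, Idb, mask):
    """Copy de-blurred pixels into the restored image where mask is 1."""
    out = [row[:] for row in Ir]   # leave the input untouched
    for x in range(len(Ir)):
        for y in range(len(Ir[0])):
            if mask[x][y] == 1:
                out[x][y] = Idb[x][y]
    return out
```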

At step 550 the intermediate output image is scanned and the pixels that still contain overshooting are detected. When the restored image is scanned, the horizontal and vertical differences between adjacent pixels can be computed as follows:


dh5(x,y)=Ir(x, y)−Ir(x, y−1)


dh6(x,y)=Ir(x, y)−Ir(x, y+1)


dv5(x,y)=Ir(x, y)−Ir(x−1, y)


dv6(x,y)=Ir(x, y)−Ir(x+1, y)

The signs of the corresponding differences dh5 and dh3, dh6 and dh4, dv5 and dv3, and dv6 and dv4 are compared. If the signs differ, the amount of overshooting may be computed as:

if NOT[sign(dh5(x,y))=sign(dh3(x,y))] OR NOT[sign(dh6(x,y))=sign(dh4(x,y))]
    H(x,y)=min(abs(dh5(x,y)),abs(dh6(x,y)));
end
if NOT[sign(dv5(x,y))=sign(dv3(x,y))] OR NOT[sign(dv6(x,y))=sign(dv4(x,y))]
    V(x,y)=min(abs(dv5(x,y)),abs(dv6(x,y)));
end

By comparing the signs of the differences computed on the restored image and on the original image, the local shapes of the two images are compared. For pixels where the local shapes differ, the overshooting in the restored image is estimated by taking the minimum of the absolute values of the two adjacent differences. This is computed in both the vertical and horizontal directions.
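The step-550 estimate can be sketched in the same illustrative style; `estimate_overshoot` is a hypothetical helper name, and the vertical estimate uses the vertical differences dv5 and dv6 (consistent with the text above):

```python
# Illustrative sketch of step 550 (hypothetical helper): per-pixel
# overshoot estimates H (horizontal) and V (vertical) for the restored
# image Ir against the observed image I, both lists of rows.

def estimate_overshoot(I, Ir):
    """Return H and V, zero where the local shapes agree."""
    h, w = len(I), len(I[0])
    H = [[0] * w for _ in range(h)]
    V = [[0] * w for _ in range(h)]
    sign = lambda v: (v > 0) - (v < 0)
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            dh5 = Ir[x][y] - Ir[x][y - 1]
            dh6 = Ir[x][y] - Ir[x][y + 1]
            dv5 = Ir[x][y] - Ir[x - 1][y]
            dv6 = Ir[x][y] - Ir[x + 1][y]
            dh3 = I[x][y] - I[x][y - 1]
            dh4 = I[x][y] - I[x][y + 1]
            dv3 = I[x][y] - I[x - 1][y]
            dv4 = I[x][y] - I[x + 1][y]
            # the overshoot is the smaller of the two adjacent differences
            if sign(dh5) != sign(dh3) or sign(dh6) != sign(dh4):
                H[x][y] = min(abs(dh5), abs(dh6))
            if sign(dv5) != sign(dv3) or sign(dv6) != sign(dv4):
                V[x][y] = min(abs(dv5), abs(dv6))
    return H, V
```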

At step 560 the overshooting is checked. If the maximum overshooting is larger than a predefined threshold, the restoration procedure is stopped and the restored image Ir is returned at the output. If there is no pixel in the restored image whose overshooting is larger than the threshold, the parameters of the de-blurring method are changed and the procedure continues from step 520. This step can be implemented as follows:

if max(max(H(x,y)),max(V(x,y)))>=th2*MAX
    return the image Ir and stop the restoration process
else
    modify the parameters of the de-blurring method and go to step 520.
end

The threshold th2 for overshooting detection is defined as a percentage of the maximum pixel value of the original image I.
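The stopping test of step 560 can be sketched as follows; `should_stop` is a hypothetical helper name, and H, V, MAX and th2 are as defined above:

```python
# Illustrative sketch of the step-560 stopping test (hypothetical helper):
# stop once the largest overshoot in H or V reaches th2*MAX, where MAX is
# the maximum pixel value of the original image I.

def should_stop(H, V, MAX, th2):
    """True when the maximum overshoot reaches the th2*MAX threshold."""
    peak = max(max(max(row) for row in H),
               max(max(row) for row in V))
    return peak >= th2 * MAX
```

When this returns False, the de-blurring parameters would be modified and the loop would continue from step 520, as described above.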

The regularization method (530, 550 and 560 of FIG. 5) can also be combined with the above-described iterative restoration algorithm of equation (6). Other non-iterative restoration algorithms, such as high-pass filtering, can be implemented in an iterative manner following the above method with local and global regularization. The local and global regularizations defined above can also be applied, together or separately, to other iterative restoration techniques.

Image Reconstruction Chain

The restoration of each colour component described above is applied as the first operation in the image reconstruction chain. The other operations (140, 260) follow, such as, for example, Automatic White Balance, Colour Filter Array Interpolation (CFAI), colour gamut conversion, geometrical distortion and shading correction, noise reduction, and sharpening. It will be appreciated that the final image quality (270) may depend on the effective and optimized use of all these operations in the reconstruction chain. Many of the most effective implementations of the image reconstruction algorithms are non-linear. In FIG. 1 the image processing continues e.g. with image compression (150) and/or a downsampling/dithering (160) process. The image can be viewed (180) on the camera viewfinder or display, or stored (170) in compressed form in the memory.

The use of restoration as the first operation in the reconstruction chain ensures the best fidelity to the assumed linear imaging model. The following algorithms, especially the colour filter array interpolation and the noise reduction algorithms, act as an additional regularization mechanism to prevent over-amplification due to excessive restoration.

Implementation

The system according to the invention can be arranged into a device such as a mobile terminal, a web cam, a digital camera or other digital device for imaging. The system can be a part of the digital signal processing in a camera module to be installed into one of said devices. One example of the device is an imaging mobile terminal, illustrated as a simplified block chart in FIG. 3. The device 300 comprises optics 310 or a similar means for capturing images, or can operatively communicate with external optics or a digital camera for capturing images. The device 300 can also comprise a communication means 320 having a transmitter 321 and a receiver 322. There can also be other communicating means 380 having a transmitter 381 and a receiver 382. The first communicating means 320 can be adapted for telecommunication and the other communicating means 380 can be a short-range communicating means, such as a Bluetooth™ system, a WLAN system (Wireless Local Area Network) or another system suited to local use and to communicating with another device. The device 300 according to FIG. 3 also comprises a display 340 for displaying visual information. In addition, the device 300 comprises a keypad 350 for inputting data, for controlling the image capturing process, etc. The device 300 can also comprise audio means 360, such as an earphone 361 and a microphone 362, and optionally a codec for coding (and decoding, if needed) the audio information. The device 300 also comprises a control unit 330 for controlling functions in the device 300, such as the restoration algorithm according to the invention. The control unit 330 may comprise one or more processors (CPU, DSP). The device further comprises memory 370 for storing data, programs, etc.

The imaging module according to the invention comprises imaging optics, an image sensor, means for finding degradation information of each colour component and using said degradation information for determining a degradation function, and further means for restoring said each colour component by said degradation function. This imaging module can be arranged into the device described previously. The imaging module can also be arranged into a stand-alone device 410, as illustrated in FIG. 4, communicating with an imaging device 400 and with a displaying device, which displaying device can also be said imaging device 400 or some other device, such as a personal computer. Said stand-alone device 410 comprises a restoration module 411 and optionally another imaging module 412, and it can be used for image reconstruction independently. The communication between the imaging device 400 and the stand-alone device 410 can be handled by a wired or wireless network. Examples of such networks are the Internet, WLAN, Bluetooth, etc.

The foregoing detailed description is provided for clearness of understanding only, and no unnecessary limitation should be read therefrom into the claims herein.

While there have been shown and described and pointed out fundamental novel features of the invention as applied to preferred embodiments thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto. Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims

1. A method for developing a model for improving image quality of a digital image comprising:

finding degradation information of each of at least one colour component of said image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.

2. The method according to claim 1, wherein a regularization control is applied to the restored colour components.

3. (canceled)

4. (canceled)

5. (canceled)

6. (canceled)

7. (canceled)

8. (canceled)

9. (canceled)

10. A model for improving image quality of a digital image, said model being obtainable by a method as claimed in claim 1.

11. A use of a model according to claim 10 for improving the image quality of a digital image.

12. A method for improving image quality of a digital image comprising:

finding degradation information of each of at least one colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.

13. The method according to claim 12, wherein a regularization control is applied to the restored colour components.

14. The method according to claim 12, wherein said degradation information of each colour component is found by a point-spread function.

15. The method according to claim 14, wherein the restoration is implemented by an iterative restoration function being determined from the point-spread function of each colour component.

16. The method according to claim 12, wherein the restoration is implemented by an iterative restoration function where at each iteration a one step de-blurring method with regularization is implemented.

17. The method according to claim 12, wherein said image is unprocessed image data, wherein said restored colour components are further processed by other image reconstruction algorithms.

18. The method according to claim 12, wherein one of the following colour systems is used: red, green, blue; hue, saturation, value; cyan, magenta, yellow, blue; luminance, chrominance.

19. The method according to claim 13, wherein the regularization control is implemented into the de-blurring method for obtaining a de-blurred image.

20. The method according to claim 19, wherein overshooting pixels are detected by first and second threshold values.

21. A method for restoration of an image, wherein the restoration is implemented by an iterative restoration function where at each iteration a de-blurring method with regularization is implemented.

22. The method according to claim 21, wherein a regularization control is applied to the restored colour components.

23. The method according to claim 21, wherein the regularization control is implemented into the de-blurring method for obtaining a de-blurred image.

24. The method according to claim 21, wherein overshooting pixels are detected by first and second threshold values.

25. An apparatus for determining a model for improving image quality of a digital image comprising:

a control unit configured for finding degradation information of each of at least one colour component of the image,
said control unit configured for obtaining a degradation function according to the degradation information, and
said control unit further configured for restoring said each colour component by said degradation function.

26. The apparatus according to claim 25, wherein the control unit is further configured for applying regularization control during the restoration.

27. The apparatus according to claim 25, wherein the control unit is further configured for further processing said image by other image reconstruction algorithms.

28. The apparatus according to claim 25 being capable of utilizing one of the following colour systems: red, green, blue; hue, saturation, value; cyan, magenta, yellow, blue; luminance, chrominance.

29. The apparatus according to claim 26, wherein for the regularization control, said control unit is further configured for de-blurring the restored image.

30. An imaging module comprising imaging optics and an image sensor for forming an image through the imaging optics onto the light sensitive image sensor wherein a model for improving image quality as claimed in claim 10 is related to said imaging module.

31. The imaging module according to claim 30, wherein a control unit is further configured for applying regularization control during the restoration.

32. A device comprising an imaging module as claimed in claim 30.

33. The device according to claim 32 being a mobile device equipped with communication capabilities.

34. A program module for improving image quality in a device comprising an imaging module, said program module comprising a control unit configured for:

finding degradation information of each colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.

35. The program module according to claim 34, wherein the control unit further comprises instructions for applying regularization control during the restoration.

36. (canceled)

37. (canceled)

38. A computer program product for improving image quality comprising computer implemented instructions stored on a readable medium, said instructions, when executed by a processor, performing:

finding degradation information of each colour component of the image,
obtaining a degradation function according to the degradation information, and
restoring said each colour component by said degradation function.

39. The computer program product according to claim 38, further comprising instructions for applying regularization control during the restoration.

40. A computer program product for a restoration of an image, comprising computer readable instructions for implementing a de-blurring with regularization at each iteration of an iterative restoration.

41. The computer program product according to claim 40, further comprising instructions for detecting overshooting pixels by first and second threshold values.

42. An apparatus for determining a model for improving image quality of a digital image comprising:

means for finding degradation information of each of at least one colour component of the image,
means for obtaining a degradation function according to the degradation information, and
means for restoring said each colour component by said degradation function.

43. The apparatus according to claim 42, further comprising:

means for applying regularization control during the restoration.
Patent History
Publication number: 20090046944
Type: Application
Filed: Jan 4, 2005
Publication Date: Feb 19, 2009
Applicant: NOKIA CORPORATION (Espoo)
Inventors: Radu Ciprian Bilcu (Tampere), Sakari Alenius (Lempaala), Mejdi Trimeche (Tampere), Markku Vehvilainen (Tampere)
Application Number: 11/632,093
Classifications
Current U.S. Class: Intensity, Brightness, Contrast, Or Shading Correction (382/274)
International Classification: G06K 9/40 (20060101);