Robust reconstruction of high resolution grayscale images from a sequence of low-resolution frames (robust gray super-resolution)

A method for computing a high resolution gray-tone image from a sequence of low-resolution images uses an L1 norm minimization. In a preferred embodiment, the technique also uses a robust regularization based on a bilateral prior to deal with different data and noise models. This robust super-resolution technique uses the L1 norm both for the regularization and the data fusion terms. Whereas the former is responsible for edge preservation, the latter seeks robustness with respect to motion error, blur, outliers, and other kinds of errors not explicitly modeled in the fused images. This computationally inexpensive method is resilient against errors in motion and blur estimation, resulting in images with sharp edges. The method also reduces the effects of aliasing, noise and compression artifacts. The method's performance is superior to other super-resolution methods and has fast convergence.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. provisional patent application No. 60/637282 filed Dec. 16, 2004, which is incorporated herein by reference.

STATEMENT OF GOVERNMENT SPONSORED SUPPORT

This invention was supported in part by the National Science Foundation under grant CCR-9984246 and by the US Air Force under contract F49620-03-01-0387. The U.S. Government may have certain rights in the invention.

FIELD OF THE INVENTION

This invention relates generally to high resolution image restoration and reconstruction. More particularly, it relates to a method for computing a high resolution gray-tone image from a sequence of low-resolution images.

BACKGROUND OF THE INVENTION

Super-resolution image reconstruction is a kind of digital image processing that increases the resolvable detail in images. The earliest techniques for super-resolution generated a still image of a scene from a collection of similar lower-resolution images of the same scene. For example, several frames of low-resolution video may be combined using super-resolution techniques to produce a single still image whose resolution is significantly higher than that of any single frame of the original video. Because each low-resolution frame is slightly different and contributes some unique information that is absent from the other frames, the reconstructed still image has more information, i.e., higher resolution, than any one of the originals alone. Super-resolution techniques have many applications in diverse areas such as medical imaging, remote sensing, surveillance, still photography, and motion pictures.

Reconstructing the best high-resolution image from multiple low-resolution images is a complicated problem that has been an active topic of research for many years, and many different techniques have been proposed. One reason the super-resolution reconstruction problem is so challenging is that the reconstruction process is, in mathematical terms, an under-constrained inverse problem. In the mathematical formulation of the problem, the known low-resolution images are represented as resulting from a transformation of the unknown high-resolution image by the effects of image warping due to motion, optical blurring, sampling, and noise. When the model is inverted, the original set of low-resolution images does not, in general, determine a single high-resolution image as a unique solution. Moreover, even in cases where a unique solution is determined, it is not stable, i.e., small noise perturbations in the images can result in large differences in the super-resolved image. To address these problems, super-resolution techniques require the introduction of additional assumptions (e.g., assumptions about the nature of the noise, blur, or spatial movement present in the original images). Part of the challenge rests in selecting constraints that sufficiently restrict the solution space without an unacceptable increase in computational complexity. Another challenge is to select constraints that properly restrict the solution space to good high-resolution images for a wide variety of input image data. For example, constraints selected to produce optimal results for a restricted class of image data (e.g., images limited to pure translational movement between frames and common space-invariant blur) may produce significantly degraded results for images that deviate even slightly from that class. In summary, super-resolution techniques should be computationally efficient and produce desired improvements in image quality that are robust to variations in the properties of input image data.
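
By way of illustration, and using notation common in the super-resolution literature and adopted in Appendix A below (the operator symbols here are introduced for exposition only), each observed low-resolution frame Y_k may be modeled as a warped, blurred, downsampled, and noise-corrupted version of the unknown high-resolution image X, with all images written as lexicographically ordered vectors:

    Y_k = D_k H_k F_k X + V_k,    k = 1, . . . , N,

where F_k is the geometric warp (motion) operator, H_k is the blur operator, D_k is the decimation (downsampling) operator, and V_k is an additive noise vector. Super-resolution then amounts to inverting this model to estimate X from the measured frames Y_1, . . . , Y_N.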

SUMMARY OF THE INVENTION

One popular approach to super-resolution known in the art is based on a maximum likelihood (ML) estimator that uses the L2 norm (i.e., least-squares). The inventors have discovered that this least-squares-based approach is not robust, and in some cases (e.g., images with non-Gaussian noise) produces images with visually apparent errors. Upon further investigation, the inventors discovered a superior ML estimator based on the L1 norm instead of the L2 norm. They have demonstrated that the L1-based estimator has a higher breakdown point than the prior L2-based estimator and is demonstrably more robust. In the case of pure translational motion, the method may be implemented using an extremely efficient pixel-wise “shift-and-add” technique.
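
As an illustrative sketch only, the following Python code shows how the L1-norm data fusion reduces to a pixel-wise median (a robust shift-and-add step) in the special case of purely translational motion by known integer multiples of the high-resolution pixel size and a blur common to all frames. The array layout, helper names, and the assumption of known integer shifts are simplifications introduced here for illustration and do not describe the claimed implementation.

    import numpy as np

    def median_shift_and_add(frames, shifts, r):
        """Fuse low-resolution frames onto a grid that is r times finer.

        frames : list of 2-D arrays (the low-resolution images)
        shifts : list of (dy, dx) integer shifts, measured on the
                 high-resolution grid and assumed known here
        r      : integer resolution-enhancement factor

        Places each frame's samples at their shifted positions on the
        high-resolution grid and takes the pixel-wise median of all
        samples landing on the same location, i.e., the L1-norm (median)
        analogue of mean shift-and-add.
        """
        h, w = frames[0].shape
        H, W = h * r, w * r
        # Collect, for each high-resolution pixel, the samples mapping to it.
        samples = [[[] for _ in range(W)] for _ in range(H)]
        for frame, (dy, dx) in zip(frames, shifts):
            for i in range(h):
                for j in range(w):
                    y, x = i * r + dy, j * r + dx
                    if 0 <= y < H and 0 <= x < W:
                        samples[y][x].append(frame[i, j])
        # Median over contributing samples; pixels with no samples stay at 0
        # and are filled in by the subsequent deblurring/regularization step.
        Z = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                if samples[y][x]:
                    Z[y, x] = float(np.median(samples[y][x]))
        return Z

Because the median is the maximum-likelihood estimate under the L1 norm, this pixel-wise operation carries out the robust data-fusion step for this special case; the fused image would still be deblurred and regularized as described below.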

It is known in the art of super-resolution to introduce a regularization term into the model to help stabilize the solution, remove image artifacts, and improve the rate of convergence. The regularization term compensates for missing measurement information by introducing some general information about the desired super-resolved solution, and is often implemented as a penalty factor in the generalized minimization cost function. A common choice is the class of Tikhonov cost functions, which are based on the L2 norm and constrain the total image energy or impose spatial smoothness. This type of regularization term, however, removes sharp edges along with image noise. A regularization term that preserves edges better is the total variation (TV) method, which limits the total change in the image as measured by the L1 norm of the magnitude of the gradient. The inventors have discovered that the TV method may be improved by combining it with a bilateral filter to provide a very robust regularization method, which they call bilateral TV. They have shown that bilateral TV not only produces sharp edges and retains point-like details in the super-resolved image but also allows for a computationally efficient implementation superior to other regularization methods.
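
In general terms (the precise summation ranges and notation appear in Appendix A), the bilateral TV penalty may be written as

    J_BTV(X) = sum over shifts (l, m), not both zero, of
               alpha^(|l| + |m|) * || X - S_x^l S_y^m X ||_1,

where S_x^l and S_y^m denote operators that shift the image by l pixels horizontally and m pixels vertically, and 0 < alpha < 1 gives spatially decaying weight to larger shifts. Each term penalizes the L1 norm of the difference between the image and a shifted copy of itself, which preserves sharp edges while suppressing noise over a neighborhood larger than the immediately adjacent pixels used by ordinary TV.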

Accordingly, in one aspect, the invention provides a method for computing a high resolution gray-tone image from a sequence of low-resolution images using an L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This robust super-resolution technique uses the L1 norm both for the regularization and the data fusion terms. Whereas the former is responsible for edge preservation, the latter seeks robustness with respect to motion error, blur, outliers, and other kinds of errors not explicitly modeled in the fused images.
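
For concreteness only, the sketch below applies plain steepest descent to a cost of this form: an L1 data-fidelity term summed over the low-resolution frames plus a bilateral TV penalty. The callables apply_A and apply_At, the use of circular (wrap-around) shifts, the summation over all shift pairs (which merely rescales the regularization weight relative to a one-sided summation), and the parameter defaults are simplifying assumptions made here for illustration; the claimed embodiments are described in the appendices.

    import numpy as np

    def l1_btv_superres(frames, apply_A, apply_At, x0, lam=0.01, alpha=0.7,
                        P=2, beta=1.0, n_iters=50):
        """Steepest descent on the L1 data term plus a bilateral TV prior.

        frames   : list of observed low-resolution images Y_k
        apply_A  : apply_A(x, k) simulates the k-th low-resolution frame
                   (warp, blur, downsample applied to the estimate x)
        apply_At : apply_At(e, k) applies the transpose of that operator
                   to a low-resolution residual e, back onto the
                   high-resolution grid
        x0       : initial high-resolution estimate
        lam, alpha, P, beta, n_iters : regularization weight, spatial
                   decay factor, shift window, step size, iterations
        """
        x = x0.copy()
        for _ in range(n_iters):
            grad = np.zeros_like(x)
            # L1 data-fidelity gradient: sum_k A_k^T sign(A_k x - Y_k)
            for k, y in enumerate(frames):
                grad += apply_At(np.sign(apply_A(x, k) - y), k)
            # Bilateral TV gradient, summed over shifts (l, m)
            for l in range(-P, P + 1):
                for m in range(-P, P + 1):
                    if l == 0 and m == 0:
                        continue
                    shifted = np.roll(np.roll(x, l, axis=1), m, axis=0)
                    s = np.sign(x - shifted)
                    # sign term minus its shifted-back copy (the shift's transpose)
                    grad += lam * alpha ** (abs(l) + abs(m)) * (
                        s - np.roll(np.roll(s, -l, axis=1), -m, axis=0))
            x = x - beta * grad
        return x

In practice, the forward operator for frame k would compose the warp, blur, and downsampling for that frame, and its transpose would upsample, apply the transposed blur, and reverse the warp; for pure translational motion the data-term gradient can be computed very cheaply on the fused shift-and-add grid.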

This computationally inexpensive method is resilient against errors in motion and blur estimation, resulting in images with sharp edges. The method also reduces the effects of aliasing, noise and compression artifacts. The method's performance is superior to other super-resolution methods and has fast convergence.

DETAILED DESCRIPTION

Details of various embodiments of the present invention are disclosed in the following appendices:

  • Appendix A: Sina Farsiu, Dirk Robinson, Michael Elad, Peyman Milanfar, “Fast and Robust Multiframe Super Resolution,” IEEE Transactions on Image Processing, October 2004, Vol. 13, No. 10, pp. 1327-1344.
  • Appendix B: Sina Farsiu, Dirk Robinson, Michael Elad, Peyman Milanfar, “Advances and Challenges in Super-Resolution,” International Journal of Imaging Systems and Technology, August 2004, Vol. 14, No. 2, pp. 47-57.

As one of ordinary skill in the art will appreciate, various changes, substitutions, and alterations could be made or otherwise implemented without departing from the principles of the present invention. Accordingly, the examples and drawings disclosed herein, including the appendices, are for purposes of illustrating the preferred embodiments of the present invention and are not to be construed as limiting the invention.

Claims

1. A computer-implemented method for super-resolution, the method comprising:

computing a super-resolved image from a plurality of lower-resolution images using a maximum likelihood estimator based on an L1 norm minimization.

2. The method of claim 1 wherein the computing further comprises using a bilateral total variation regularization term.

Patent History
Publication number: 20060291751
Type: Application
Filed: Dec 12, 2005
Publication Date: Dec 28, 2006
Inventors: Peyman Milanfar (Menlo Park, CA), Sina Farsiu (Santa Cruz, CA), Michael Elad (Haifa), Michael Robinson (Menlo Park, CA)
Application Number: 11/302,073
Classifications
Current U.S. Class: 382/299.000
International Classification: G06K 9/32 (20060101);