DIGITAL CAMERA AND METHOD

Digital camera contrast enhancement with piecewise-linear transform with lower and upper cutoffs for the transform determined from histogram analysis with a green color conversion approximation for images and IIR parameter filtering for videos.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a divisional of U.S. patent application Ser. No. 11/741,753 filed Apr. 29, 2007, which claims the benefit of provisional U.S. patent application Nos. 60/747,033, filed May 11, 2006, and 60/803,904, filed Jun. 5, 2006, which are herein incorporated by reference.

BACKGROUND OF THE INVENTION

The present invention relates to digital signal processing, and more particularly to architectures and methods for enhancement in digital images and video.

Imaging and video capabilities have become the trend in consumer electronics. Digital cameras, digital camcorders, and video cellular phones are common, and many other new gadgets are evolving in the marketplace. Advances in large-resolution CCD/CMOS sensors coupled with the availability of low-power digital signal processors (DSPs) have led to the development of digital cameras with both high-resolution still image and short audio/visual clip capabilities. The high resolution (e.g., a sensor with a 2560×1920 pixel array) provides quality comparable to that offered by traditional film cameras.

FIG. 2a is an example functional block diagram for digital camera control and image processing (“image pipeline”). The automatic focus, automatic exposure, and automatic white balancing are referred to as the 3A functions; and the image processing includes functions such as color filter array (CFA) interpolation, gamma correction, white balancing, color space conversion, and JPEG/MPEG compression/decompression (JPEG for single images and MPEG for video clips). Note that the typical color CCD consists of a rectangular array of photosites (corresponding to pixels in an output image) with each photosite covered by a filter (the CFA): typically, red, green, or blue. In the commonly-used Bayer pattern CFA one-half of the photosites are green, one-quarter are red, and one-quarter are blue. FIGS. 2b-2d show an alternative architecture with FIG. 2c illustrating the video frontend of the FIG. 2b processor, and FIG. 2d the preview engine (PRV) of the processor.

Typical digital cameras provide a capture mode with full resolution image or audio/visual clip processing plus compression and storage, a preview mode with lower resolution processing for immediate display, and a playback mode for displaying stored images or audio/visual clips.

High contrast images are appealing to human eyes. However, it is difficult to obtain high contrast images from video or still cameras or camera phones, due to the limitations of the sensors, image processors, and displays. Many contrast enhancement methods have been proposed for image processing applications, but they are either too complex to be used in consumer video or still cameras, or are specific to particular imaging applications such as biomedical imaging. A desirable method for digital cameras should be universal, because digital cameras will be used to capture many different kinds of images. It also should have low computational complexity and low memory requirements, due to the cost and shot-to-shot constraints of digital cameras.

Starck et al., “Gray and Color Image Contrast Enhancement by the Curvelet Transform”, 12 IEEE Trans. Image Processing, 706 (June 2003) discloses a complex transform on images, resulting in high computational complexity and high memory requirements.

SUMMARY OF THE INVENTION

The present invention provides image and/or video contrast enhancement with low complexity by piecewise-linear transform with saturation values determined by histogram analysis.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1a-1d are a flowchart plus enhancement function graphs and a heuristic histogram.

FIGS. 2a-2e illustrate an image pipeline, processor, and network communication.

FIGS. 3a-3b show experimental results.

DESCRIPTION OF THE INVENTION

Preferred embodiment image (still image or individual video frame) contrast enhancement methods apply a piecewise-linear transform with the transform cutoffs determined as a fraction of the peak histogram count for an estimated green component. The enhancement typically would precede gamma correction in the image pipeline. Video preferred embodiment methods include parameter filtering for smooth contrast enhancements from frame to frame and the use of prior frame statistics for one-pass enhancement.

Preferred embodiment systems (camera cellphones, digital cameras, PDAs, etc.) perform preferred embodiment methods with any of several types of hardware: digital signal processors (DSPs), general purpose programmable processors, application specific circuits, or systems on a chip (SoC) such as combinations of a DSP and a RISC processor together with various specialized programmable accelerators. FIG. 2b is an example of digital camera hardware. A stored program in an onboard or external (flash EEP)ROM or FRAM could implement the signal processing. Analog-to-digital converters and digital-to-analog converters can provide coupling to the real world, modulators and demodulators (plus antennas for air interfaces) can provide coupling for transmission waveforms, and packetizers can provide formats for transmission over networks such as the Internet; see FIG. 2e.

The first preferred embodiment contrast enhancement methods for images use a piecewise-linear transform of pixel values as shown in FIG. 1b; the scale in FIG. 1b corresponds to 8-bit (0-255) data. The piecewise-linear transform provides the advantage of linear transform simplicity for implementation. The linear transform is a constant mapping for the lower and higher values because human eyes are less sensitive to luminance variation in these two ranges than in the middle intensity range. This linear transform is performed before the gamma correction to take advantage of perceptual uniformity given by the gamma correction; that is, the contrast enhancement method inserted into the example of FIG. 2d could be applied between color conversion (RGB-to-RGB) and gamma correction. To achieve universal contrast enhancement, the two parameters, S1 and S2, of the transform would be image dependent. To keep color information consistent for an image, the same linear transform is applied independently to each of the R, G, and B components of each pixel.
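As an illustration of the transform just described, the following is a minimal Python sketch (the function names stretch and enhance_pixel are illustrative, not from the patent), assuming 8-bit data and given cutoffs S1 and S2:

```python
def stretch(v, s1, s2):
    """Piecewise-linear contrast stretch of one 8-bit component value.

    Values at or below s1 map to 0, values at or above s2 map to 255,
    and values in between are scaled linearly (cf. FIG. 1b).
    """
    if v <= s1:
        return 0
    if v >= s2:
        return 255
    return (255 * (v - s1)) // (s2 - s1)


def enhance_pixel(r, g, b, s1, s2):
    # The same transform is applied independently to R, G, and B
    # to keep the color information consistent.
    return stretch(r, s1, s2), stretch(g, s1, s2), stretch(b, s1, s2)
```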

The preferred embodiment image methods decide the cutoff points S1 and S2 in the linear transform function based upon the image characteristics. Luminance values are frequently used in contrast enhancement. Based on Rec709, the luminance level of each pixel is computed from the red, green, and blue component values as:


Y=0.2125*R+0.7154*G+0.0721*B

Note that the green component contributes the most energy to the luminance. For low complexity, it is reasonable to use the green component only; experiments also show that this simplification does not introduce any visual difference. Thus it would seem plausible to use the minimum and the maximum values of the green components of all pixels as the S1 and S2 values, respectively. However, this does not work well. First, for most images with appropriate exposure, the minimum and maximum pixel luminance or green values will be very close to 0 and 255, respectively. Second, the minimum and maximum values are usually determined by outliers in an image.

The preferred embodiment image contrast enhancement methods use histogram analysis of an image to choose S1 and S2 as follows.

(a) Compute the histogram of the green component values for all pixels.

(b) Find the histogram peak, nP, which is the count of the most frequent green component value.

(c) Compute the cutoff number nC=nP*0.02.

(d) Find the value S1 as the smallest integer (in range 0 to 255) such that the number of pixels with green component value less than S1 is larger than nC.

(e) Find the value S2 as the largest integer (in the range 0 to 255) such that the number of the pixels with green component value greater than S2 is larger than nC.

FIG. 1c heuristically illustrates a histogram with nP, nC, S1, and S2 labeled.

For example, with a 2560×1920 image there are 4,915,200 pixels and so the average count for each of the 256 possible green (or luminance) values is 19,200. Then with a peaky distribution of values, the maximum count, nP, may be very roughly about ten times the average; i.e., about 200,000. For this maximum count, the 2% cutoff, nC, would then be 4,000, which is about 20% of the overall average.
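The histogram analysis of steps (a)-(e) above can be sketched as follows (find_cutoffs is an illustrative name, not from the patent; the 2% fraction is passed as a parameter):

```python
def find_cutoffs(green_values, frac=0.02, levels=256):
    """Choose S1 and S2 from the histogram of green component values.

    frac is the fraction of the histogram peak used as the cutoff
    count (0.02 corresponds to the 2% figure in the text).
    """
    # Step (a): histogram of the green component values
    hist = [0] * levels
    for g in green_values:
        hist[g] += 1
    nP = max(hist)   # step (b): peak count
    nC = nP * frac   # step (c): cutoff count
    # Step (d): smallest S1 with more than nC pixels strictly below it.
    count, s1 = 0, 0
    for v in range(levels):
        if count > nC:
            s1 = v
            break
        count += hist[v]
    # Step (e): largest S2 with more than nC pixels strictly above it.
    count, s2 = 0, levels - 1
    for v in range(levels - 1, -1, -1):
        if count > nC:
            s2 = v
            break
        count += hist[v]
    return s1, s2
```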

Alternative values for nC, such as 3% or 5% of nP, could be used to limit noise at the extreme values or to prevent a small very dark or very bright area from determining S1 and S2. Indeed, lower values (e.g., 1%) or higher values (e.g., 9%) may also be useful. Similarly, the very dark and very bright pixels could be ignored by disregarding the extremes of the histogram; for example, the counts for values in the ranges 0 to 5 and 250 to 255 would be ignored in computing S1 and S2.

For higher-resolution data, such as 12-bit green or luminance values, the number of possible values increases (e.g., to 4,096), but the histogram will have the same general shape, although noisier looking, and the cutoff, nC, used to determine S1 and S2 would still be computed as a fraction of the peak count, nP.

The main computational complexity of the foregoing methods lies in the histogram computation plus the linear transform computation. A straightforward way to do the histogram computation is after the RGB-to-RGB color level conversion.

[R′]   [M11 M12 M13] [R]   [O1]
[G′] = [M21 M22 M23] [G] + [O2]
[B′]   [M31 M32 M33] [B]   [O3]
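A sketch of this color conversion for a single pixel follows; the matrix and offset values shown are illustrative placeholders, not coefficients from the patent (a real ISP supplies its own tuned RGB-to-RGB parameters):

```python
# Illustrative placeholder coefficients, not values from the patent text.
M = [[1.2, -0.1, -0.1],
     [-0.1, 1.3, -0.2],
     [0.0, -0.2, 1.2]]
O = [0.0, 2.0, 0.0]


def rgb_to_rgb(r, g, b):
    """3x3 matrix color correction plus per-channel offset, one pixel."""
    return tuple(M[i][0] * r + M[i][1] * g + M[i][2] * b + O[i]
                 for i in range(3))
```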

However, since an image is processed by all ISP functions row by row, two passes of ISP processing are required. There are two ways to lower complexity. One way computes the histogram using downsampled data from a preview engine; for example, the preview engine shown in FIG. 2d.

Another way is to perform the histogram computation before the ISP processing. The difficulty is how to estimate the histogram of processed green components from the Bayer raw data before processing. Note that after processing, the processed green component is totally different from the Gr and Gb components in the Bayer data. The first preferred embodiment methods use a simplified model of this processing: they estimate downsampled processed green components from the R, Gr, Gb, and B values in a 2×2 neighborhood of Bayer data by:


G = (WR*M2,1*R + WG*M2,2*Gr/2 + WG*M2,2*Gb/2 + WB*M2,3*B)*D + O2

where D is the digital gain, WR, WG, and WB are the white balancing gains, and M2,i and O2 are the RGB-to-RGB conversion coefficients for the processed green component. That is, the contrast enhancement histogram uses these values of G.
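This green estimate can be sketched as a small Python function; the function and parameter names are illustrative, not from the patent:

```python
def estimate_green(r, gr, gb, b, d_gain, w_r, w_g, w_b, m21, m22, m23, o2):
    """Estimate the processed green value for one 2x2 Bayer neighborhood.

    r, gr, gb, b are the raw R, Gr, Gb, B samples; d_gain is the
    digital gain; w_* are the white-balance gains; and m2*, o2 are the
    green-row RGB-to-RGB conversion coefficients.
    """
    return (w_r * m21 * r
            + w_g * m22 * gr / 2
            + w_g * m22 * gb / 2
            + w_b * m23 * b) * d_gain + o2
```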

Analogous to the image contrast enhancement preferred embodiments, video contrast enhancement preferred embodiment methods use a piecewise linear transform for each video frame to enhance contrast. FIG. 1d is a graph of the piecewise linear function. As with the single image preferred embodiments, the main advantage of the linear transform is simplicity of implementation. The piecewise-linear transform is a constant mapping for the lower and higher values, because human eyes are less sensitive to luminance variation in these two ranges than in the middle range. For low complexity, this linear transform is performed on the luminance component Y directly, which does not change the color information. To achieve universal contrast enhancement, the two cutoff parameters, L and H, of the linear transform are image dependent. That is, the preferred embodiment methods must compute the cutoff values {Ln} and {Hn}, where the subscript denotes the nth frame in a video sequence, based on the video characteristics.

It seems plausible to use the minimum and the maximum luminance values of all pixels as L and H, respectively. However, this does not work well. First, for most videos with appropriate exposure, the minimum and maximum will be very close to 0 and 255, respectively. Second, the minimum and maximum values are usually determined by outliers in the images.

Consequently, the preferred embodiment methods first analyze the histogram of each video frame as follows.

(1) Compute the histogram of the luminance components of the nth frame.

(2) Find the histogram peak, Pn, as the count of the most frequent luminance value.

(3) Compute the cutoff number, Cn=Pn*0.02.

(4) Find the lower value An such that the number of pixels with luminance lower than An is just larger than Cn.

(5) Find the upper value Bn such that the number of pixels with luminance higher than Bn is just larger than Cn.

Then compute Ln and Hn for each video frame using a simple two-tap infinite impulse response (IIR) filter as follows. The purpose of this filtering is to smooth the contrast change between adjacent frames.


Ln=0.75*Ln-1+0.25*An


Hn=0.75*Hn-1+0.25*Bn

That is, perform the contrast enhancement in the following order:

(a) Compute the histogram of the values of the luminance components, Yi,n, of the pixels of the nth frame.

(b) Update the parameters Ln and Hn by the IIR filtering using the An and Bn from the histogram of (1)-(5) above.

(c) Transform each luminance value in the nth frame by:


If Yi,n≦Ln, then Yi,n=0


If Ln<Yi,n<Hn, then Yi,n=255*(Yi,n−Ln)/(Hn−Ln);


If Yi,n≧Hn, then Yi,n=255

Repeat (a)-(c) for successive frames.
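Steps (a)-(c) above can be sketched for one frame as follows (illustrative Python; enhance_frame and its parameter names are not from the patent):

```python
def enhance_frame(lum, L_prev, H_prev, frac=0.02, levels=256):
    """One frame of video contrast enhancement.

    lum is a flat list of 8-bit luminance values; L_prev and H_prev are
    the smoothed cutoffs carried over from the previous frame.
    """
    # (a) histogram of the frame's luminance values
    hist = [0] * levels
    for y in lum:
        hist[y] += 1
    # Steps (1)-(5): peak, cutoff count, and raw cutoffs An and Bn.
    cutoff = max(hist) * frac
    count, A = 0, 0
    for v in range(levels):
        if count > cutoff:
            A = v
            break
        count += hist[v]
    count, B = 0, levels - 1
    for v in range(levels - 1, -1, -1):
        if count > cutoff:
            B = v
            break
        count += hist[v]
    # (b) two-tap IIR smoothing of the cutoffs across frames
    L = 0.75 * L_prev + 0.25 * A
    H = 0.75 * H_prev + 0.25 * B
    # (c) piecewise-linear transform of every luminance value
    out = []
    for y in lum:
        if y <= L:
            out.append(0)
        elif y >= H:
            out.append(255)
        else:
            out.append(int(255 * (y - L) / (H - L)))
    return out, L, H
```

The returned L and H are carried to the next frame, so the contrast change between adjacent frames stays smooth.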

Again, other multipliers for the cutoff could be used, such as Cn=Pn*0.0m with m an integer in the range 1 to 9. Also, IIR filter coefficients other than 0.25 and 0.75 could be used for faster or slower adaptation. And when the color component values are in a range other than 0 to 255 (e.g., 12-bit colors with a range of 0 to 4095), the contrast enhancement transformation would adjust accordingly: Yi,n=min+(max−min)*(Yi,n−Ln)/(Hn−Ln), Yi,n=max, or Yi,n=min, where max and min are the maximum and minimum, respectively, of the component value range.

The preceding video contrast enhancement implementation needs to go through the luminance components in two passes (accumulating the histogram and then performing the piecewise-linear transform). Some systems may require a one-pass solution; in that case, the parameters computed from a previous frame are used for the current frame. Specifically, perform the video contrast enhancement in the following single-pass order:

(1) For first/next pixel luminance value, accumulate it in a histogram array


Histn[Yi,n]=Histn[Yi,n]+1

(2) For first/next pixel luminance value, apply piecewise-linear transformation using the prior frame's cutoff parameters


If Yi,n≦Ln-1, then Yi,n=0


If Ln-1<Yi,n<Hn-1, then Yi,n=255*(Yi,n−Ln-1)/(Hn-1−Ln-1);


If Yi,n≧Hn-1, then Yi,n=255.

(3) Repeat (1)-(2) for remaining pixels in nth frame

(4) Update the parameters Ln and Hn based on the completed histogram Histn.
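The single-pass variant can be sketched as follows (illustrative Python, not from the patent; the completed histogram is returned so the caller can perform the step (4) update of Ln and Hn):

```python
def enhance_frame_one_pass(lum, L_prev, H_prev, levels=256):
    """Single-pass video contrast enhancement of one frame.

    Transforms each luminance value with the PREVIOUS frame's cutoffs
    while accumulating the CURRENT frame's histogram in the same loop.
    """
    hist = [0] * levels
    out = []
    for y in lum:
        hist[y] += 1                     # step (1): accumulate histogram
        if y <= L_prev:                  # step (2): transform with the
            out.append(0)                # prior frame's L and H
        elif y >= H_prev:
            out.append(255)
        else:
            out.append(int(255 * (y - L_prev) / (H_prev - L_prev)))
    return out, hist                     # hist feeds the step (4) update
```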

The main computations of the method are the histogram computation and the linear transform, both of which have low complexity. Of course, analyzing the histogram of only every other (or less frequent) frame, and consequently updating Ln and Hn only every other (or less frequent) frame, would further reduce computational complexity.

FIGS. 3a-3b show an example of preferred embodiment contrast enhancement. The FIG. 3a image is before contrast enhancement and the FIG. 3b image is after contrast enhancement.

Claims

1. An image contrast enhancement method, comprising the steps of:

(a) computing a histogram of the values of a component of the pixels of an input image;
(b) finding the histogram peak, nP, as the count of the most frequent component value;
(c) computing a cutoff, nC, as nC=nP*0.0m where m is a positive integer in the range 1 to 9;
(d) finding a first value, S1, as the smallest component value such that the number of pixels with component value less than S1 is larger than nC;
(e) finding a second value, S2, as the largest component value such that the number of pixels with component value greater than S2 is larger than nC; and
(f) applying a piecewise-linear transform to each color in said image, said transform with lower cutoff equal to said S1 and upper cutoff equal to said S2.

2. The method of claim 1, wherein said component is a luminance component.

3. The method of claim 1, wherein said component is a green component.

4. The method of claim 1, wherein said component is a green estimate over a 2×2 array of said pixels with each pixel having only a single color component.

5. The method of claim 4, wherein said 2×2 array is an R-Gr/Gb-B array from Bayer pattern data and said estimate uses RGB-to-RGB parameters.

Patent History
Publication number: 20100321520
Type: Application
Filed: Aug 24, 2010
Publication Date: Dec 23, 2010
Applicant: Texas Instruments Incorporated (Dallas, TX)
Inventor: Jianping Zhou (Richardson, TX)
Application Number: 12/862,458
Classifications
Current U.S. Class: Combined Image Signal Generator And General Image Signal Processing (348/222.1); Histogram Processing (382/168); 348/E05.031
International Classification: H04N 5/228 (20060101); G06K 9/00 (20060101);