Image processing device with automatic white balance

According to one aspect of the present invention, there is provided a digital camera system being capable of taking a frame of image data comprising a plurality of color elements, the system being arranged: to divide the frame image data into a plurality of blocks including a plurality of pixel data; to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks; to judge, for each of all or a part of the plurality of blocks, whether the block being likely a part of a grey object or not; to cumulate the predetermined values of the blocks judged as being likely a part of a gray object for the respective color elements; and to decide a first set of digital gains for adjusting white balance based on the cumulated values.

Description
FIELD OF THE INVENTION

The invention relates to image processing algorithms used in digital cameras. More particularly, the invention relates to Automatic White Balance (AWB) algorithms.

BACKGROUND OF THE INVENTION

The human visual system compensates for different lighting conditions and color temperatures so that white objects are perceived as white in most situations. AWB algorithms used in digital cameras try to do the same for raw images captured by digital camera sensors. That is, AWB adjusts the gains of different color components (e.g. R, G and B) with respect to each other in order to make white objects white, despite color temperature differences between image scenes or different sensitivities of the color components.

One existing AWB method is to calculate the average of each color component, and then apply gains to each color component so that these averages become equal. Methods of this type are often called “grey world” AWB algorithms.
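As a non-limiting illustration (not part of the claimed invention), the grey-world gain calculation described above can be sketched in Python; the function name and the convention of keeping green as the reference channel are assumptions:

```python
def grey_world_gains(r_avg, g_avg, b_avg):
    """Gains that make the per-channel averages equal.

    Green is kept as the reference channel (gain 1.0), a common
    convention; the averages are taken over the whole image.
    """
    return (g_avg / r_avg, 1.0, g_avg / b_avg)
```

For example, a scene whose channel averages are R=50, G=100, B=200 would receive the gains (2.0, 1.0, 0.5), equalizing all three averages at 100.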

SUMMARY OF THE INVENTION

The purpose of this invention is to provide sophisticated AWB mechanisms.

According to one aspect of the present invention, there is provided a digital camera system being capable of taking a frame of image data comprising a plurality of color elements, the system being arranged:

    • to divide the frame image data into a plurality of blocks including a plurality of pixel data;
    • to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks;
    • to judge, for each of all or a part of the plurality of blocks, whether the block being likely a part of a grey (white) object or not;
    • to cumulate the predetermined values of the blocks judged as being likely a part of a gray (white) object for the respective color elements; and
    • to decide a first set of digital gains for adjusting white balance based on the cumulated values.

According to another aspect of the present invention, there is provided an electric circuit for processing a frame of image data comprising a plurality of color elements, wherein said circuit comprises:

    • a block extractor extracting a block of the image data, the block including a plurality of pixel data;
    • an average calculator calculating average values of each color component of the extracted block;
    • a judging unit judging whether the block being potentially white or not;
    • a cumulator cumulating the average values of a plurality of the extracted blocks whose color being judged as potentially white for the respective color elements; and
    • a decision unit deciding a set of digital gains for adjusting white balance based on the cumulated values.

Further features and advantages of the aspects of the present invention will be described by using an exemplary embodiment. Please note that the present invention includes any advantageous features or combinations thereof described in this specification, accompanying claims and accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention will now be described by way of example only and with reference to accompanying drawings in which:

FIG. 1 is a schematic block diagram illustrating main components of a camera device according to a preferred embodiment of the present invention.

FIG. 2 is a schematic block diagram illustrating main components of the AWB analyzer 17 according to the preferred embodiment.

FIG. 3 is a flow chart for explaining the image processing of the AWB analyzer 17 according to the preferred embodiment.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 1 is a schematic block diagram illustrating main components of a camera device according to a preferred embodiment of the present invention. The camera device 1 comprises a sensor module 11, a pre-processing circuit 13, a white balance (WB) amplifier 15, an auto white balance (AWB) analyzer 17, a post-processing circuit 19, a display 21, and a memory 23. The sensor module 11 can be used to generate images for a viewfinder and to take still pictures or videos. The sensor module 11 comprises a lens, an image sensor with an RGB Bayer color filter, an analog amplifier, and an A/D converter, and converts incident light into a digital signal. This digital signal can be called raw data, and has an RGB Bayer format, i.e., each 2×2 pixel block comprises two Green data, one Red data, and one Blue data. In another embodiment the color filter may be a CMY Bayer filter. In other embodiments, the sensor module 11 may not use a Bayer color filter, but may comprise a Foveon-type sensor (i.e. a sensor which records image signals of different wavelengths at different depths within the silicon). The sensor module 11 may also comprise actuators for auto-focus, zoom, aperture control and ND-filter control.

The pre-processing circuit 13 may perform e.g. noise reduction, pixel linearization, and shading compensation. The WB amplifier 15 adjusts the gains of different color components (i.e. R, G and B) with respect to each other in order to make white objects white. The amount of gaining is decided by the AWB analyzer 17. The AWB analyzer 17 analyzes the raw data to calculate the amount of gaining, and sets the digital gains of the WB amplifier 15 according to the result of the calculation. In addition to color balance, the AWB analyzer 17 may also include an overall digital gain part in the WB gains to increase image brightness and contrast by stretching the histograms adaptively towards the bright end. The AWB analyzer 17 may also calculate R, G and B offsets to increase the image contrast by stretching the histograms adaptively towards the dark end. Those offsets should be taken into account in the white balance gains, and they can be applied to the data prior to or after WB gaining. The AWB analyzer 17 may also calculate the amount and shape of a non-linear correction, which may be applied to the image data at the post-processing circuit 19.

The post-processing circuit 19 performs CFA (Color Filter Array) interpolation, color space conversion, gamma correction, RGB to YUV conversion, image sharpening, and so on. The post-processing circuit 19 may comprise hardware pipelines for performing some or all of this processing, or may comprise a processor performing it in software. The image data processed by the post-processing circuit 19 may be displayed on the display 21 or stored in the memory 23. Preferably the memory 23 is a removable storage means such as an MMC card or an SD card.

FIG. 2 is a schematic block diagram illustrating main components of the AWB analyzer 17 according to the preferred embodiment of the present invention. The AWB analyzer 17 comprises a block extractor 31, a saturation checker 33, and average calculators for the Red (R) component 35, the Green (G) component 36, and the Blue (B) component 37. The AWB analyzer 17 also comprises a bus 39, a CPU 41, firmware 42, and a RAM 43.

The block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block. A frame in this embodiment means the total area of a still picture or a single picture in a series of video data. Each block contains a plurality of red, green, and blue pixel data. As an example, when the resolution of the sensor module 11 is 1.3 mega pixels, preferably the block extractor 31 is arranged to divide a frame into 72×54 blocks. Of course any way of dividing is possible as long as it meets the requirements of speed, quality, price and so on.
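A hypothetical sketch of such block division, with the frame represented as a list of pixel rows; the function name and grid arguments are illustrative, not from the specification:

```python
def divide_into_blocks(frame, blocks_x, blocks_y):
    """Divide a frame (list of equal-length pixel rows) into a
    blocks_y-by-blocks_x grid of blocks, scanned row by row."""
    height, width = len(frame), len(frame[0])
    bh, bw = height // blocks_y, width // blocks_x  # block height/width
    blocks = []
    for by in range(blocks_y):
        for bx in range(blocks_x):
            block = [row[bx * bw:(bx + 1) * bw]
                     for row in frame[by * bh:(by + 1) * bh]]
            blocks.append(block)
    return blocks
```

For the 1.3-megapixel example above, `divide_into_blocks(frame, 72, 54)` would yield 3888 blocks.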

The saturation checker 33 is connected to the bus 39, and checks each block for whether it contains a saturated pixel. The saturation checker 33 also checks whether the block in process is adjacent to another saturated block. This information, i.e. whether the block contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block, is sent to the CPU 41. The information may also contain the number of saturated pixels within a block. The number of saturated pixels is useful in determining how reliable a saturated block is, if saturated blocks need to be used in step 150 or step 180 due to big block sizes etc.

The average calculators 35, 36, and 37 are connected to the bus 39, and calculate average values of the Red components, Green components, and Blue components of the block in process, respectively. If the block in process contains saturated pixels, the average calculators calculate the average values of only the non-saturated pixels. The calculated average values are provided to the CPU 41 through the bus 39. The block extractor 31, the saturation checker 33 and the average calculators 35, 36, and 37 may be hardware circuits, but they can also be implemented as software processing using a CPU. In the latter case the way of dividing may be programmable.
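The average-over-non-saturated-pixels rule can be sketched as follows; the 10-bit saturation level is an assumption, as the specification does not fix a bit depth:

```python
SATURATION_LEVEL = 1023  # assumed 10-bit sensor; illustrative only

def nonsaturated_average(pixels, sat=SATURATION_LEVEL):
    """Average of the pixels that are below the saturation level;
    None if every pixel in the block is saturated."""
    valid = [p for p in pixels if p < sat]
    return sum(valid) / len(valid) if valid else None
```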

The CPU 41 stores the saturation information and the respective R, G, and B average values for each block in the RAM 43 according to instructions of the firmware 42. The firmware 42 is software which instructs the CPU 41 to perform the necessary data processing. The CPU 41 performs further calculations to decide the amount of digital gains based on this information according to the instructions of the firmware 42.

Referring to FIG. 3, the image processing of the AWB analyzer 17 will be described below in detail. Step 110 shows the start of the analysis. In step 120, a frame of image data to be analyzed is provided from the pre-processing circuit 13. As explained above, the image data to be analyzed in this embodiment is raw data with an RGB Bayer format. Please note that this invention can be applied to other data formats, e.g. a CMY Bayer format, a data format containing the same number of color elements for all the color components, or a data format in which each pixel contains all used color components.

In step 130, the block extractor 31 divides a frame of the raw data into small blocks, and provides the raw data to the saturation checker 33 block by block. As explained above, a frame in this embodiment means the total area of a still picture or a single picture in a series of video data, and each block contains a plurality of red, green, and blue pixel data.

In step 140, the saturation checker 33 checks each block for whether it contains no saturated pixels, contains a saturated pixel, or is adjacent to a saturated block. The average calculators 35, 36, and 37 calculate average values of the Red, Green, and Blue components of the block in process, respectively. If the block in process contains saturated pixels, the average calculators calculate the average values of only the non-saturated pixels.

As the result of the step 140 processing, the AWB analyzer 17 obtains a set of block information. The number of block information entries is the same as the number of blocks, and each entry contains the saturation information and the average values of the R, G, and B components of the block. This set of block information is stored in the RAM 43.

In step 150, the CPU 41 scans, according to the instructions of the firmware 42, all of the block information, and calculates a set of statistic values from the block information. These statistic values include histograms of the red component, green component, blue component, and luminance of the whole of the raw data in process. The luminance in this embodiment is defined as (r+2g+b)/4, where r, g, and b represent the average values of the red, green and blue components of a block. Those statistic values also include the average values of the red, green and blue components of the raw data in process, and the maximum luminance value. The maximum luminance value is the luminance of the brightest block.
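The luminance definition and the maximum-luminance statistic can be expressed directly; a sketch with assumed function names:

```python
def block_luminance(r_avg, g_avg, b_avg):
    # Luminance as defined in this embodiment: (r + 2g + b) / 4
    return (r_avg + 2 * g_avg + b_avg) / 4

def max_luminance(block_averages):
    """Luminance of the brightest block; block_averages is a list of
    (r, g, b) average tuples for the non-excluded blocks."""
    return max(block_luminance(r, g, b) for r, g, b in block_averages)
```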

Some of these statistic values will be used to form criteria for judging whether it is likely that a block is part of a grey object or not. Some of these statistic values will also be used to form criteria for deciding intensity ranges.

If the block information of a block shows that the block contains a saturated pixel, or that the block is next to a saturated block, the CPU 41 excludes this block information from the calculation of these statistic values. The reason is that saturated pixel values are not reliable, because the information about the relations between the different color components is lost. Pixels that are close to saturated pixels are not preferred either, because the pixel response is very non-linear close to saturation, and because of the possibility of electrons bleeding into neighboring pixels and of increased pixel cross-talk. As the CPU 41 excludes the blocks that contain saturated pixels and the blocks adjacent to such blocks, the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels. In other words, the AWB gain analysis in this embodiment is not affected by pixel saturation. This is one important advantage of this embodiment.

In another embodiment, the block information of the blocks containing a saturated pixel, or of the blocks adjacent to a saturated block (a block containing a saturated pixel), may be used for calculating the above statistic values. This embodiment may be beneficial when the frame is divided into relatively large blocks, e.g. when the frame is divided into 12×32 blocks. In such an embodiment, rejecting a block would lose a big area of the image. For such cases, the blocks adjacent to the saturated block, or all blocks, may be utilized even though they may not be reliable enough, or the averages of the non-saturated pixels within the saturated blocks may also be utilized.

In step 160, the CPU 41 sets, according to the instructions of the firmware 42, boundary conditions for determining the intensity ranges of each block. For example, the CPU 41 and the firmware 42 calculate the 99.8%, 80%, 50% and 25% locations of the histograms of the red component, the green component, the blue component, and the luminance obtained in step 150. The xx% location in a histogram in this embodiment means the value such that xx% of the pixels are darker than this value. After this step these values are used instead of the histograms obtained in step 150.
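The "xx% location" can be read off a cumulative histogram. A sketch under the assumption that the histogram is a list of per-value pixel counts:

```python
def histogram_location(hist, fraction):
    """Smallest value v such that at least `fraction` of the pixels
    are darker than v (i.e. fall in bins strictly below v)."""
    target = fraction * sum(hist)
    darker = 0  # pixels strictly below the current value
    for v, count in enumerate(hist):
        if darker >= target:
            return v
        darker += count
    return len(hist) - 1
```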

In step 160, the CPU 41 and the firmware 42 also set boundary conditions for the R/G, B/G, min(R,B)/G and max(R,B)/G relations that are used to judge whether a block might be gray (white) or not. Here R, G, and B represent the average values of the red, green and blue components in the block to be judged. Min(R,B)/G represents the value obtained by dividing the smaller of R and B by G, and max(R,B)/G represents the value obtained by dividing the larger of R and B by G. In one embodiment, these boundary conditions may be fixed according to the known RGB response to different color temperatures, and then modified according to the sensor characteristics (e.g. different color filter characteristics between different types of camera sensors cause variation in the required limits). In another embodiment, these criteria (boundary conditions) are decided based on the statistic values obtained in step 150.
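A sketch of such a ratio-based test; the numeric limits `lo` and `hi` are purely illustrative placeholders for the boundary conditions of step 160:

```python
def could_be_gray(r_avg, g_avg, b_avg, lo=0.6, hi=1.4):
    """Judge whether a block's R/G, B/G, min(R,B)/G and max(R,B)/G
    all fall within the [lo, hi] boundary conditions."""
    rg, bg = r_avg / g_avg, b_avg / g_avg
    ratios = (rg, bg, min(rg, bg), max(rg, bg))
    return all(lo <= x <= hi for x in ratios)
```

A neutral block (R=G=B) passes, while a strongly red-tinted block (e.g. R/G = 2.0) is rejected under these placeholder limits.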

The values for the boundary conditions may also be used for detecting special case lighting conditions of the image data in step 250, i.e. whether the lighting condition of the scene of the image data is such that it needs special adjustment to produce pleasing and accurate colors. Such lighting condition analysis may improve the performance of the white balancing, especially in the case when there is little or no proper reference white available in the scene.

From step 170 to step 240, the CPU 41 and the firmware 42 judge, for the respective blocks, whether the color of the block could correspond to a gray (white) object or not, and cumulate the RGB average values of the white/gray blocks. The cumulated values are used for deciding the digital gains for white balancing.

However, before the judgment and the cumulation, the CPU 41 and the firmware 42 exclude the saturated blocks and the blocks next to the saturated blocks from those operations (step 180). The reason for excluding those blocks is, as explained above, that they are not reliable enough. This is another reason why the AWB gain analysis in this embodiment is not sensitive to the inclusion of saturated pixels, and is one important advantage of this embodiment. In one embodiment, if the block size is really big, the blocks that are next to saturated blocks may not be rejected. In another embodiment, even the saturated blocks may not be rejected, and the non-saturated averages of the saturated blocks may be used in the following steps for some of the special case intensity ranges, at least if the block size is relatively large.

In step 200, the CPU 41 and the firmware 42 check whether a block belongs to one or more of the intensity ranges or not. Various types of intensity ranges can be defined. For example, the intensity range C may be defined so as to cover the intensity ranges of the range A and the range B. Therefore a block belonging to the range A also belongs to the range C. In general, a block can belong to several intensity ranges.

In one embodiment, it is possible to define a first intensity range, and a second intensity range which includes the range of the first intensity range. It may also be possible to define a third intensity range which includes the ranges of both the first and the second intensity ranges. Alternatively, an intensity range which is very narrow, or which is sensitive to a special lighting condition, i.e. very bright, very dark, blue sky, clouded, tungsten and so on, may be defined.

In step 210, the CPU 41 and the firmware 42 judge whether the block in process is potentially a part of a white (gray) object or not.

In one embodiment, whether a block could be gray (white) or not is determined by comparing the R, G and B average values of the block to each other, taking into account certain sensor-specific pre-gains that normalize the sensor response to e.g. a 5000K color temperature. Relations between the different color components that correspond to different black body radiator color temperatures can be used as a starting point for the limits, but additional variance needs to be allowed because the black body radiator does not accurately represent all possible light sources (e.g. fluorescent light sources have a large variance in spectral response and some of them are not close to the black body radiator).

In the preferred embodiment, the CPU 41 and the firmware 42 define the limits for the purpose of the gray (white) judgment as explained in step 160. In step 210 the CPU 41 and the firmware 42 check whether R/G, B/G, min(R,B)/G and max(R,B)/G satisfy the criteria decided in step 160, and judge the blocks satisfying those criteria as gray or white blocks. The criteria for the gray (white) judgment can be different for each intensity range. Those criteria can also differ depending on the estimated lighting conditions. There are separate color limits for high-likelihood white and lower-likelihood white. A bigger weight in the cumulative values is given to high-likelihood blocks in comparison to lower-likelihood blocks.

In step 220, the CPU 41 and the firmware 42 cumulate the respective red, green, and blue average values of the blocks judged as white or gray in the intensity range in process. As shown by the loop from step 190 to step 230, the gray judgment is performed for each intensity range. And in each intensity range, the red, green and blue averages are accumulated into their respective variables if the block values are within the limits set in step 160. (The block values mean R/G, B/G, min(R,B)/G and max(R,B)/G; see the description of step 160.)
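The per-intensity-range cumulation can be sketched as below; the range membership and gray tests are passed in as predicates, and all names are assumptions:

```python
def cumulate_range(block_averages, in_range, is_gray):
    """Sum the (r, g, b) averages of blocks that belong to the
    intensity range and pass the gray (white) judgment."""
    cum_r = cum_g = cum_b = 0.0
    count = 0
    for r, g, b in block_averages:
        if in_range(r, g, b) and is_gray(r, g, b):
            cum_r += r
            cum_g += g
            cum_b += b
            count += 1
    return (cum_r, cum_g, cum_b), count
```

Calling this once per defined intensity range, with that range's own limits, yields the per-range cumulated values of step 240.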

When the scanning of all the blocks is finished, in step 240, a set of cumulated values of the R, G, and B components will have been created for each intensity range. These cumulative values are then temporarily stored in the RAM 43, and will be used to calculate the white balance gains later.

In step 250, the CPU 41 and the firmware 42 detect special cases that need additional color balance tuning in order to produce accurate and pleasing colors. For example, bright blue sky scenes can be detected, and the known blackbody radiator color temperature of a bright blue sky can be used together with the grey blocks to determine the final color temperature. The values for determining the boundary conditions obtained in step 160 will be used for identifying such special color cases. The total exposure, which consists of the exposure time and the analogue gain (and the digital gain if it has been applied to the raw data, and also the aperture and focal length if those can be adjusted in the system), may also be utilized for identifying the special color cases. The result of the special case detection will be considered in the digital gain calculation in step 310.

An example of a special case may be a blue sky. An image scene that is captured in a bright daylight lighting condition and consists of bright blue sky and possibly some faint cloud haze can contain a lot of false reference white. Therefore detecting such a case and considering it in the digital gain calculation will improve the AWB performance. Other examples of special cases are images of candles and fireplaces. In those cases more pleasing (in this case maybe not more accurate, but still more pleasing) colors can be obtained by detecting those cases and modifying the AWB gains.

In step 260, the CPU 41 and the firmware 42 calculate a certain goodness score of the grey-world algorithm so that the grey-world algorithm can also be included in the final result if there seems to be very little reference white available and the goodness score for the grey-world algorithm is high. The correction by the grey-world algorithm will be performed in step 320.

The meaning of the loop starting from step 270 will be explained later. In step 290, the CPU 41 and the firmware 42 mix the cumulated values obtained from the different intensity ranges in steps 170-240 for the respective color elements. So a set of R, G, and B mixed values is created in this step. In this embodiment, only a part of the intensity ranges will be used for this mixing. The set of intensity ranges used for the mixing is selected in step 280. The mixed values will be used to decide the white balance gains. For the mixing, the CPU 41 and the firmware 42 may be arranged to give a weight to a cumulated value according to how many possible white blocks have been found in its intensity range. For example, a cumulated value whose intensity range contains a lot of gray (white) blocks will be given a bigger weight in the mixing. Or, the CPU 41 and the firmware 42 may be arranged to give bigger weights to the cumulated values of specific intensity ranges. For example, as the brighter ranges are considered to be more reliable, the cumulated value of the brightest intensity range will be given the biggest weight in the mixing. The weighting may be done in advance in step 220. If a very small number of grey blocks is found in the primary intensity range, then additional intensity ranges can be added to the cumulative values.
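A minimal sketch of the weighted mixing; here the caller supplies the per-range weights (e.g. derived from gray-block counts or range brightness), which is an assumption about the interface:

```python
def mix_cumulated(range_cums, weights):
    """Weighted sum of per-range cumulated (R, G, B) values."""
    mixed = [0.0, 0.0, 0.0]
    for (r, g, b), w in zip(range_cums, weights):
        mixed[0] += w * r
        mixed[1] += w * g
        mixed[2] += w * b
    return tuple(mixed)
```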

In step 300, the CPU 41 and the firmware 42 decide the first candidate of the white balance gains based on the set of mixed values. The gain values may be decided such that they equalize the R, G, and B mixed values.

In step 310, the mixed values obtained in step 290 may be corrected based on the color tone estimation performed in step 250 to improve the performance of the white balancing, especially in the case that there is little or no proper reference white available in the scene. For example, a blue sky could be problematic if there are no clearly white clouds but some haze, because the haze could be mistaken for white. This correction can improve this kind of situation. The correction may be done by mixing the cumulated values of a certain intensity range into the mixed values obtained in step 290, and the candidate white balance gains will be updated based on the new mixed values. The correction in this step may not be performed if it is not necessary.

In step 320, the candidate white balance gains obtained in step 300 or 310 may be corrected by the grey-world algorithm based on the goodness calculation in step 260. The goodness score calculated in step 260 may be used to determine how big a weight in the final gains is given to the results that are obtained with the grey world algorithm. If the goodness score for grey world is very high, then e.g. an 80% weight is given to the grey world part and a 20% weight to the gain values obtained in the previous steps. Similarly, if the goodness score for grey world is not very high, then a 0% weight is given to grey world and 100% to the results obtained in the previous steps.
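The 80%/20% blending described above amounts to a convex combination of the two gain sets; a sketch with assumed names and interface:

```python
def blend_gains(candidate_gains, grey_world_gains, gw_weight):
    """Blend per-channel candidate WB gains with grey-world gains.

    gw_weight (0..1) is the share given to the grey-world result,
    e.g. 0.8 for a very high goodness score, 0.0 for a low one."""
    return tuple(gw_weight * gw + (1.0 - gw_weight) * c
                 for c, gw in zip(candidate_gains, grey_world_gains))
```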

To take the grey world algorithm into account, the correction may be done on the mixed values obtained in step 290. As in step 310, the special cases like a bright blue sky or no white reference may be detected first, and then the mixing is done as in step 310. The correction in this step may not be performed if it is not necessary.

In step 330, the candidate digital gains or mixed values obtained in the previous steps may be adapted according to the total exposure (exposure time * analogue gain) in order to give good overall image contrast in bright scenes while still maintaining low noise and good detail preservation in bright image areas in dark scenes. Aperture, ND-filter, focal length and additional digital pre-gaining may also be considered if those parameters are variable.

The final gains may consist of a combination of the color balance gains and an overall digital gain, i.e. both the colors and the luminance level are corrected with the gains. The correction in this step may not be performed if it is not necessary or if this step is not implemented in the firmware 42.

In step 340, the final set of digital gains is decided. However, the final gains are still checked as to whether they are reliable values or not. For example, there may be a possibility of an error if some color component seems to dominate over the other color components. In such a case the firmware 42 instructs the CPU 41 to go back to step 270, select a different set of intensity ranges in step 280, and re-do the following steps. If the final gains are reliable, then the gain calculations are finished (step 360). According to the instructions of the firmware 42, the CPU 41 sets the digital gains of the WB amplifier 15 to the calculated values.

The preferred embodiment has been tested by the assignee of the present invention. In this test, 248 images that had been captured in a multitude of lighting conditions were used. The results have shown that the embodiment can improve the AWB accuracy quite a lot.

After step 360, the WB amplifier may apply the digital gains to the frame of image data used for the gain calculation, or may apply them only to the image data of the next frames, i.e. the image data taken in the next shooting. In one embodiment, the camera device 1 may comprise a frame memory which can store the whole of the image data comprised in a complete frame. In this embodiment the same frame which is used for the white balance calculation can be corrected by the calculated gains. But in another embodiment the camera device 1 may not comprise a memory which can store a whole frame, but only a line buffer which can store a part of a frame. In this embodiment, the AWB gaining needs to be applied to part of the frame data before the statistics for the whole image data of the frame are available for AWB analysis. Thus the result of the AWB analysis from the previous frame (e.g. a viewfinder frame) can be used instead. In other words, the AWB analysis that is calculated on the basis of frame n is used for frame n+1. In these memory-limited systems the WB gains may be applied to the lines in the beginning of the image before the whole image is available for the AWB analyzer 17.

Please note that various modifications may be made without departing from the scope of the present invention. For example, this invention can be applied not only to Bayer raw data but also to different types of image data such as Foveon-type data. The calculation of the averages and the maximum luminance value in step 150 may also be done in a later step from the R, G, B and luminance histograms.

Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance, it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings, whether or not particular emphasis has been placed thereon.

Claims

1. A digital camera system being capable of taking a frame of image data comprising a plurality of color elements, the system being arranged:

to divide the frame image data into a plurality of blocks including a plurality of pixel data;
to calculate a predetermined value for each color element and for each of all or a part of the plurality of blocks;
to judge, for each of all or a part of the plurality of blocks, whether the block being likely a part of a grey object or not;
to cumulate the predetermined values of the blocks judged as being likely a part of a gray object for the respective color elements; and
to decide a first set of digital gains for adjusting white balance based on the cumulated values.

2. A digital camera system according to claim 1, wherein the predetermined value is an average value of the color component in the block.

3. A digital camera system according to claim 1, wherein the predetermined value is used for said judgment.

4. A digital camera system according to claim 1, wherein the judgment is performed according to criteria decided based on one or more of histograms of the color elements, average values of the color elements, and a histogram of luminance of the image data.

5. A digital camera system according to claim 4, wherein data of the block containing a saturated pixel is not involved in deciding the criteria.

6. A digital camera system according to claim 4, wherein data of the block adjacent to the block containing a saturated pixel is not involved in deciding the criteria.

7. A digital camera system according to claim 4, wherein the plurality of color elements comprise data representing red color, green color, and blue color, and the decision is contingent on one or more of B/G, R/G, min(R,B)/G, and max(R,B)/G satisfying the criteria,

Where:
R represents an average value of the red data in the block to be judged.
G represents an average value of the green data in the block to be judged.
B represents an average value of the blue data in the block to be judged.
min(R,B)/G represents a value dividing a smaller one of the R and the B by the G.
max(R,B)/G represents a value dividing a larger one of the R and the B by the G.

8. A digital camera system according to claim 1, wherein the block containing a saturated pixel is not involved in the cumulation.

9. A digital camera system according to claim 1, wherein the block adjacent to the block containing a saturated pixel is not involved in the cumulation.

10. A digital camera system according to claim 1, wherein the predetermined values of a block whose color is judged to be closer to white or gray than those of the other blocks are given greater weight in the cumulation.

11. A digital camera system according to claim 1, wherein the system is arranged:

to define a plurality of intensity ranges;
to perform the gray judgment in each intensity range; and
to perform the cumulation for each intensity range.
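Read loosely, claims 11–13 partition blocks into intensity ranges and judge and cumulate per range. A sketch under assumptions of my own (fixed luminance bounds, BT.601 luma weights, and the same ratio-based gray test as before):

```python
def cumulate_per_range(block_avgs, bounds=(0.25, 0.5, 0.75)):
    """block_avgs: list of (r, g, b) block averages in [0, 1].
    Returns per-intensity-range cumulated (r, g, b) sums and
    gray-block counts. Bounds and the gray test are assumptions."""
    nranges = len(bounds) + 1
    sums = [[0.0, 0.0, 0.0] for _ in range(nranges)]
    counts = [0] * nranges
    for r, g, b in block_avgs:
        y = 0.299 * r + 0.587 * g + 0.114 * b   # block luminance
        idx = sum(y >= t for t in bounds)        # intensity range index
        if g > 0 and abs(r / g - 1) < 0.35 and abs(b / g - 1) < 0.35:
            s = sums[idx]
            s[0] += r; s[1] += g; s[2] += b      # cumulate per element
            counts[idx] += 1
    return sums, counts
```

Claim 13's variant would then derive the gains from the sums of only a subset of the ranges, for example the best-populated ones.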

12. A digital camera system according to claim 11, wherein the intensity ranges are separated based on histograms of the color elements, and/or a histogram of luminance of the image data.

13. A digital camera system according to claim 11, wherein the system is arranged to decide the first set of digital gains based on the cumulated values obtained from only a part of the plurality of the intensity ranges.

14. A digital camera system according to claim 11, wherein the system is arranged to mix the cumulated values obtained from the different intensity ranges for the respective color elements, and to decide the first set of digital gains based on the mixed values.

15. A digital camera system according to claim 14, wherein the system is arranged to perform the mixing by taking account of the numbers of blocks whose color is judged as close to white or gray in each intensity range.

16. A digital camera system according to claim 14, wherein the system is arranged to perform the mixing so that the cumulated value of the intensity range having higher luminance has a bigger influence on the mixed value.
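Claims 14–16 can be read together as a weighted mix of the per-range cumulated values, weighting each range by its gray-block count (claim 15) and giving brighter ranges more influence (claim 16). The per-range weights below are assumptions chosen only to illustrate the increasing-with-luminance idea:

```python
def mix_ranges(sums, counts, range_weights=(1.0, 1.5, 2.0, 3.0)):
    """Mix per-range cumulated (r, g, b) sums into one triple.
    counts[i] is the number of gray-judged blocks in range i;
    range_weights grow with luminance. Both weightings are
    assumptions, not values from the claims."""
    mixed = [0.0, 0.0, 0.0]
    total = 0.0
    for s, n, w in zip(sums, counts, range_weights):
        if n == 0:
            continue
        weight = n * w                  # block count x luminance weight
        avg = [c / n for c in s]        # per-range average per channel
        for i in range(3):
            mixed[i] += weight * avg[i]
        total += weight
    if total == 0:
        return None                     # no gray blocks in any range
    return [c / total for c in mixed]
```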

17. A digital camera system according to claim 1, wherein the system is arranged to estimate a color tone of the image data and to correct the first set of digital gains based on the estimation.

18. A digital camera system according to claim 1, wherein the system is arranged to calculate a second set of digital gains for adjusting white balance of the image data based on the gray-world algorithm, and to mix the first set of digital gains with the second set of digital gains to decide a final set of digital gains for adjusting white balance of the image data.
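The background section describes the gray-world approach as equalizing the per-channel averages of the whole frame. Claim 18's combination with the block-based first set might then, under an assumed linear blend with an illustrative ratio `alpha`, look like:

```python
import numpy as np

def gray_world_gains(frame):
    """Second set of gains: classic gray-world, choosing gains so
    the per-channel means of the whole frame equal the green mean."""
    r, g, b = frame.reshape(-1, 3).mean(axis=0)
    return np.array([g / r, 1.0, g / b])

def final_gains(first_set, second_set, alpha=0.5):
    """Mix the two sets of gains into the final set. The linear
    blend and the ratio `alpha` are assumptions for the sketch."""
    return alpha * np.asarray(first_set) + (1 - alpha) * np.asarray(second_set)
```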

19. A digital camera system according to claim 12, wherein the system is arranged to check the reliability of the final set of digital gains, and to calculate the first set of digital gains again by using the cumulated values of different intensity ranges.

20. An electric circuit for processing a frame of image data comprising a plurality of color elements, the circuit comprising:

a block extractor extracting a block of the image data, the block including a plurality of pixel data;
an average calculator calculating average values of each color element of the extracted block;
a judging unit judging whether the block is potentially white or not;
a cumulator cumulating the average values of a plurality of the extracted blocks whose color is judged as potentially white, for the respective color elements; and
a decision unit deciding a set of digital gains for adjusting white balance based on the cumulated values.

21. An electric circuit according to claim 20, wherein a plurality of intensity ranges are defined, and wherein the judging unit is arranged to perform the judgment in each intensity range, and the cumulator is arranged to perform the cumulation in each intensity range.

22. An electric circuit according to claim 20, wherein the circuit comprises a processor and a computer program, and wherein the program instructs the processor to perform a part of the functions of at least one of the block extractor, the judging unit, the cumulator, and the decision unit.

23. A camera device comprising an electric circuit according to claim 20.

Patent History
Publication number: 20070047803
Type: Application
Filed: Aug 30, 2005
Publication Date: Mar 1, 2007
Applicant:
Inventor: Jarno Nikkanen (Tampere)
Application Number: 11/216,272
Classifications
Current U.S. Class: 382/162.000
International Classification: G06K 9/00 (20060101);