Image processing method, image processing apparatus, and electronic camera

- Nikon

An image processing method of the present invention includes a detecting step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, the characteristic area having an image significantly different from the other shot images, and a combining step extracting a partial image located in the characteristic area from each of the three or more shot images and combining these partial images into one image.

Description
CROSS REFERENCE TO RELATED APPLICATION

This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2007-034912, filed on Feb. 15, 2007, the entire contents of which are incorporated herein by reference.

BACKGROUND

1. Field

The present invention relates to an image processing method for image combining, an image processing apparatus provided with an image combining function, and an electronic camera provided with an image combining function.

2. Description of the Related Art

Patent reference 1 (Japanese Unexamined Patent Application Publication No. 2001-28726) discloses an electronic camera provided with an image combining function. The principle of the function is additive average combining of multiple shot images obtained by, for example, continuous shooting. Continuously shooting a moving object (dynamic body) using a tripod and image-combining the obtained multiple shot images allows a still background and a trajectory of the dynamic body to be included in one image.

It should be noted that this image combining does not discriminate the dynamic body from the background; the background is also included in the dynamic body area, and thereby the dynamic body appears to be transparent. To perform combining that makes a dynamic body opaque (hereinafter called “opaque combining”), it is necessary to detect a presence area of the dynamic body in each of the shot images and to connect the areas.

However, it is difficult to detect a presence area of a dynamic body automatically, and therefore it is currently difficult to realize automatic opaque combining.

SUMMARY

The present invention provides an image processing method capable of reliably performing opaque combining of shot images. The present invention further provides an image processing apparatus and an electronic camera capable of reliably performing opaque combining of shot images.

An image processing method of the present invention includes a detecting step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, the characteristic area having an image significantly different from the other shot images, and a combining step extracting a partial image located in the characteristic area from each of the three or more shot images and combining these partial images into one image.

Here, the detecting step preferably assumes, in each of the three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as the characteristic area.

Also, the detecting step preferably generates a distribution map of the characteristic area detected from each of the three or more shot images, and the combining step preferably performs the extraction according to the distribution map.

Also, the detecting step preferably performs filter processing on the distribution map for smoothing distribution boundaries.

Also, the combining step may perform weighted average on the partial image extracted from each of the three or more shot images and partial images extracted from the same areas in the other shot images.

Also, the combining step may set a weight of the weighted averaging to be a value specified by a user.

Also, the detecting step preferably performs the detection, instead of using the three or more shot images, using reduced-size versions thereof.

Further, an image processing apparatus of the present invention includes a detecting unit that detects a characteristic area from each of three or more shot images, the characteristic area having an image significantly different from the other shot images, and a combining unit that extracts a partial image located in the characteristic area from each of the three or more shot images and combines these partial images into one image.

Also, the detecting unit preferably assumes, in each of the three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as the characteristic area.

Also, the detecting unit preferably generates a distribution map of the characteristic area detected from each of the three or more shot images and the combining unit preferably performs the extraction according to the distribution map.

Also, the detecting unit preferably performs filter processing on the distribution map for smoothing distribution boundaries.

Also, the combining unit may perform weighted average on the partial image extracted from each of the three or more shot images and partial images extracted from the same areas in the other shot images.

Also, the combining unit may set a weight of the weighted averaging to be a value specified by a user.

Also, the detecting unit preferably performs the detection, instead of using the three or more shot images, using reduced-size versions thereof.

Further, an electronic camera of the present invention includes an imaging unit that shoots an object to obtain a shot image and any one of the image processing apparatuses of the present invention to process three or more shot images obtained by the imaging unit.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating a configuration of an electronic camera;

FIG. 2 is a diagram illustrating examples of shot images specified by a user;

FIG. 3 is an operational flowchart of a combine-processing part 17A;

FIG. 4 is a diagram illustrating generating steps of an index image (up to difference image calculation);

FIG. 5 is a diagram illustrating generating steps of an index image (up to index image calculation);

FIG. 6 is a diagram illustrating an index image Yindex;

FIG. 7 is a diagram illustrating a true index image Iindex;

FIG. 8 is a diagram illustrating the true index image Iindex after filter processing;

FIG. 9 is a diagram illustrating a combining method based on the true index image Iindex (in an opacity of 1); and

FIG. 10 is a diagram illustrating a combining method based on the true index image Iindex (in an arbitrary opacity).

DETAILED DESCRIPTION OF THE EMBODIMENTS

Hereinafter, an embodiment of the present invention will be described. The present embodiment is an embodiment for an electronic camera.

First, a configuration of an electronic camera will be described.

FIG. 1 is a diagram illustrating the configuration of the electronic camera. As shown in FIG. 1, an electronic camera 10 includes a shooting lens 11, an imaging sensor 12, an A/D converter 13, a signal processing circuit 14, a timing generator (TG) 15, a buffer memory 16, a CPU 17, an image processing circuit 18, a displaying circuit 22, a rear monitor 23, a card interface (card I/F) 24, an operating button 25, etc., and a card memory 24A is attached to the card interface 24. Among these, the CPU 17 is capable of performing image-combine processing (to be described below). In the following description, it is assumed that a combine-processing part 17A performing the image-combine processing is included in the CPU 17.

The CPU 17 is connected to the buffer memory 16, the image processing circuit 18, the displaying circuit 22, and the card interface 24 via a bus 19. The CPU 17 sends image data to the image processing circuit 18 by use of the bus 19 and thereby performs normal image processing, such as pixel interpolation processing and color conversion processing, on a shot image. Also, the CPU 17 sends the image data to the rear monitor 23 via the displaying circuit 22, and thereby displays various images on the rear monitor 23. Further, the CPU 17 writes an image file into the card memory 24A and reads the image file out of the card memory 24A via the card interface 24.

Further, the CPU 17 is connected to the timing generator 15 and the operating button 25. The CPU 17 drives the imaging sensor 12, the A/D converter 13, the signal processing circuit 14, etc. via the timing generator 15, and also recognizes various indications such as mode switching provided by a user via the operating button 25.

Hereinafter, the electronic camera 10 is assumed to have a shooting mode, an editing mode, and the like.

Next, operation of the CPU 17 in the shooting mode will be described.

When the electronic camera 10 is set to be in the shooting mode, the user enters a shooting indication into the CPU 17 by manipulating the operating button 25. The shooting indications include a single shooting indication and a continuous shooting indication, and the CPU 17 distinguishes between the two indications based on a push-down period length of the operating button 25, and the like.

When the single shooting indication is entered, the CPU 17 drives the imaging sensor 12, the A/D converter 13 and the signal processing circuit 14 once, and obtains an image signal (image data) for one frame of a shot image. The obtained image data is stored in the buffer memory 16.

At this time, the CPU 17 sends the image data of the shot image stored in the buffer memory 16 to the image processing circuit 18 and performs the normal image processing on the shot image, and also generates an image file of the processed shot image to write the image file into the card memory 24A. Thereby the single shooting is completed. Here, the image file has an image storing area and a tag area, and the image data of the shot image is written in the image storing area and information accompanying the shot image is written in the tag area. The accompanying information includes a reduced-size version of image data of a shot image (image data of a thumbnail image).

On the other hand, when the continuous shooting indication is entered, the CPU 17 continuously drives the imaging sensor 12, the A/D converter 13, and the signal processing circuit 14 multiple times, and obtains an image signal (image data) for multiple frames of shot images. The obtained image data is stored in the buffer memory 16.

At this time, the CPU 17 sends the image data of the shot images stored in the buffer memory 16 to the image processing circuit 18 and performs the normal image processing on the shot image, and also generates an image file of the processed shot image to write the image file into the card memory 24A. After this operation has been performed for multiple frames of the shot images, the continuous shooting is completed. Here, each of the image files has an image storing area and a tag area, and the image data of the shot image is written in the image storing area and information accompanying the shot image is written in the tag area. The accompanying information includes a reduced-size version of image data of a shot image (image data of a thumbnail image).

Next, operation of the CPU 17 in the editing mode will be described.

When the electronic camera 10 is set to be in the editing mode, the CPU 17 displays a menu of the editing mode on the rear monitor 23. One of the menu items is “image combining”. This image combining is an image combining capable of the opaque combining.

While the menu is displayed, the user manipulates the operating button 25 to specify a desired item of the menu to the CPU 17. When the user specifies “image combining”, the CPU 17 reads out the image files in the card memory 24A, and sends the image data of the thumbnail images added to the image files to the displaying circuit 22. Thereby, shot images previously obtained are reproduced and displayed on the rear monitor 23. In this reproducing display, a plurality of shot images is preferably displayed side by side at the same time on the rear monitor 23 so that the user can compare the shot images with one another.

While the shot images are reproduced and displayed, the user manipulates the operating button 25 to specify to the CPU 17 desired N shot images Ik (k=1, 2, . . . , N) among the shot images reproduced and displayed. Then, the user manipulates the operating button 25 to specify an opacity αk (k=1, 2, . . . , N) of a dynamic body included in each of the specified N shot images Ik (k=1, 2, . . . , N). The opacity αk is an opacity of a dynamic body Mk included in the shot image Ik.

Here, the N shot images Ik (k=1, 2, . . . , N) specified by the user have a part common to one another (background) except for the dynamic bodies Mk (k=1, 2, . . . , N) as shown in FIG. 2. Such shot images Ik (k=1, 2, . . . , N) are obtained under common shooting conditions (shooting sensitivity, shutter speed, aperture value, and framing).

In the present embodiment, these N shot images Ik (k=1, 2, . . . , N) need not include a shot image in which the dynamic body Mk is not present, but, instead, the number of shot images N is required to be three or more.

Also, pixel coordinates of the dynamic bodies Mk (k=1, 2, . . . , N) included in each of the shot images Ik (k=1, 2, . . . , N) preferably do not overlap with one another. This is because the image combining calculation in the present embodiment assumes that the pixel coordinates do not overlap.

Also, the range of the opacity αk (k=1, 2, . . . , N) which a user can specify is 0≦αk≦1. For example, for the opaque combining, the user only needs to specify the opacities as α1=α2= . . . =αN=1, and for erasing all the dynamic bodies from a combined image, specify them as α1=α2= . . . =αN=0.

When the opacity αk is specified as above, the CPU 17 sends the image data of the specified N shot images Ik (k=1, 2, . . . , N) to the combine-processing part 17A and performs image-combine processing on the shot images. The CPU 17 obtains one combined image by this image-combine processing, and then displays the combined image on the rear monitor 23. The CPU 17 also newly generates an image file of the combined image and writes the image file into the card memory 24A.

Next, operation of the combine-processing part 17A will be described in detail. For simplicity of the description, specified shot images are assumed to be three shot images I1, I2, and I3 shown in FIG. 2. In this case, the number of shot images N is three. Also, each shot image Ik is assumed to have a Y component, a Cb component, and a Cr component.

Accordingly, in the description, the Y component of a shot image Ik is denoted by Yk, the Cb component of a shot image Ik is denoted by Cbk, and the Cr component of a shot image Ik is denoted by Crk. Also in the description, a pixel value of an arbitrary image X at pixel coordinates (i, j) is denoted by X(i, j), and the origin of the pixel coordinates (i, j) is at the upper left corner of the image.

FIG. 3 is an operational flowchart of the combine-processing part 17A. Each step thereof will be described in sequence as follows.

(Steps S1 to S3)

The combine-processing part 17A performs size reduction processing on each of the shot images I1, I2, and I3 in order to improve processing speed of the image-combine processing. For achieving a size reduction ratio of 16, the size reduction processing is performed using the following formulas, for example.

Yk(i, j) = ( Σ(y=4j to 4j+3) Σ(x=4i to 4i+3) Yk(x, y) ) / 16
Cbk(i, j) = ( Σ(y=4j to 4j+3) Σ(x=4i to 4i+3) Cbk(x, y) ) / 16
Crk(i, j) = ( Σ(y=4j to 4j+3) Σ(x=4i to 4i+3) Crk(x, y) ) / 16   (Formula 1)
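For illustration only, Formula 1 amounts to averaging non-overlapping 4×4 pixel blocks. A minimal sketch in Python/NumPy (the function name reduce_plane and the array-based representation are assumptions, not part of the patent text) could look like this:

```python
# Hypothetical sketch of the 4x4 block averaging of Formula 1.
# Assumes each plane (Y, Cb, or Cr) is a NumPy array whose height
# and width are multiples of 4.
import numpy as np

def reduce_plane(plane: np.ndarray) -> np.ndarray:
    # Group the plane into 4x4 blocks and take the mean of each block,
    # producing a plane reduced to 1/4 size in each dimension.
    h, w = plane.shape
    blocks = plane.reshape(h // 4, 4, w // 4, 4)
    return blocks.mean(axis=(1, 3))
```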

(Step S4)

The combine-processing part 17A generates an average image Iave of the shot images I1, I2, and I3 after the size reduction processing. The Y component Yave, the Cb component Cbave, and the Cr component Crave of the average image Iave are calculated by the following formulas.

Yave(i, j) = ( Σ(k=1 to N) Yk(i, j) ) / N
Cbave(i, j) = ( Σ(k=1 to N) Cbk(i, j) ) / N
Crave(i, j) = ( Σ(k=1 to N) Crk(i, j) ) / N   (Formula 2)
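Again for illustration, Formula 2 is a per-pixel mean over the N reduced planes; a minimal sketch (the helper name average_plane and the list-of-arrays representation are assumptions) follows:

```python
# Hypothetical sketch of Formula 2: per-pixel average of N planes.
import numpy as np

def average_plane(planes):
    # planes: list of N equally sized NumPy arrays (reduced Y, Cb, or Cr)
    return np.mean(np.stack(planes, axis=0), axis=0)
```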

(Step S5)

The combine-processing part 17A generates an index image Yindex of the Y component, an index image Cbindex of the Cb component, and an index image Crindex of the Cr component as provisional index images. As a representative example, the generation of the index image Yindex will be described.

First, the combine-processing part 17A focuses on the shot image I1 as shown in FIG. 4, and calculates a difference image ΔY1 between the Y component image Y1 of the focused shot image I1 and an average image of the Y components Y2 and Y3 of the other shot images I2 and I3.

Similarly, the combine-processing part 17A focuses on the shot image I2, and calculates a difference image ΔY2 between the Y component image Y2 of the focused shot image I2 and an average image of the Y components Y1 and Y3 of the other shot images I1 and I3.

Similarly, the combine-processing part 17A focuses on the shot image I3, and calculates a difference image ΔY3 between the Y component image Y3 of the focused shot image I3 and an average image of the Y components Y1 and Y2 of the other shot images I1 and I2.

Thereby, the difference image ΔY1 regarding the shot image I1, the difference image ΔY2 regarding the shot image I2, and the difference image ΔY3 regarding the shot image I3 are obtained. Here, the difference images ΔY1, ΔY2, and ΔY3 are calculated using the following formula.


ΔYk(i, j) = |(Yave(i, j)×N − Yk(i, j))/(N−1) − Yk(i, j)|   (Formula 3)

In this formula, “k” is an image number of a focused shot image, and (Yave×N−Yk)/(N−1) is the average image of the other shot images.
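As a non-authoritative sketch of Formula 3 (the helper name difference_image and the list-of-arrays representation are assumptions), the average of the other shot images can be recovered from the overall average without recomputing it:

```python
# Hypothetical sketch of Formula 3 for the Y component.
import numpy as np

def difference_image(planes, ave, k):
    # planes: list of the N reduced Y planes; ave: their average (Formula 2)
    n = len(planes)
    # Average of the other N-1 images, recovered as (ave*N - planes[k])/(N-1)
    others_ave = (ave * n - planes[k]) / (n - 1)
    return np.abs(others_ave - planes[k])
```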

Subsequently, the combine-processing part 17A compares the difference images ΔY1, ΔY2, and ΔY3 for every pixel coordinates as shown in FIG. 5.

Then, the combine-processing part 17A determines, as a characteristic area A1, an area in the difference image ΔY1 whose pixel values are larger than those of the same area in the other difference images ΔY2 and ΔY3. This characteristic area A1 is an area having an outstanding pixel value in the shot image I1 compared with the other shot images I2 and I3. Therefore, this characteristic area A1 can be assumed to be a presence area of the dynamic body M1.

Also, the combine-processing part 17A determines, as a characteristic area A2, an area in the difference image ΔY2 whose pixel values are larger than those of the same area in the other difference images ΔY1 and ΔY3. This characteristic area A2 is an area having an outstanding pixel value in the shot image I2 compared with the other shot images I1 and I3. Therefore, this characteristic area A2 can be assumed to be a presence area of the dynamic body M2.

Also, the combine-processing part 17A determines, as a characteristic area A3, an area in the difference image ΔY3 whose pixel values are larger than those of the same area in the other difference images ΔY1 and ΔY2. This characteristic area A3 is an area having an outstanding pixel value in the shot image I3 compared with the other shot images I1 and I2. Therefore, this characteristic area A3 can be assumed to be a presence area of the dynamic body M3.

Then, the combine-processing part 17A generates one index image Yindex as a distribution map of these characteristic areas A1, A2, and A3.

The characteristic area A1 in this index image Yindex is given a pixel value “1”, the same as the image number of the shot image I1 and the dynamic body M1; the characteristic area A2 in the index image Yindex is given a pixel value “2”, the same as the image number of the shot image I2 and the dynamic body M2; and the characteristic area A3 in the index image Yindex is given a pixel value “3”, the same as the image number of the shot image I3 and the dynamic body M3.

Such an index image Yindex is calculated by use of the following formula.


Yindex(i, j)=k·of·max[ΔY1(i, j), . . . , ΔYN(i, j)]  (Formula 4)

In this formula, k·of·max[x1, x2, . . . , xN] denotes the element number k of the largest element xk among the N elements x1, x2, . . . , xN.
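Interpreted this way, Formula 4 is a per-pixel argmax over the N difference images; a minimal sketch (the helper name index_image is an assumption) is:

```python
# Hypothetical sketch of Formula 4: per-pixel "k of max" with 1-based
# image numbers, as used in the text.
import numpy as np

def index_image(diff_images):
    stack = np.stack(diff_images, axis=0)  # shape (N, height, width)
    return np.argmax(stack, axis=0) + 1    # pixel values 1..N
```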

FIG. 6 is a diagram illustrating the index image Yindex. As shown in FIG. 6, in the index image Yindex, many pixels located in the presence area of the dynamic body M1 (refer to FIG. 2) have a pixel value “1”, many pixels located in the presence area of the dynamic body M2 (refer to FIG. 2) have a pixel value “2”, and many pixels located in the presence area of the dynamic body M3 (refer to FIG. 2) have a pixel value “3”. Meanwhile, in the other areas in the index image Yindex, a pixel having a pixel value “1”, a pixel having a pixel value “2”, and a pixel having a pixel value “3” are mixed.

Therefore, a distribution relationship among the dynamic bodies M1, M2, and M3 (refer to FIG. 2) is reflected to the index image Yindex with a certain accuracy. Using this index image Yindex enables the dynamic bodies M1, M2, and M3 to be extracted from the shot images I1, I2, and I3, respectively.

However, in this index image Yindex there exists an indefinite portion such as the portion enclosed by a dotted line in FIG. 6. The reason is probably that the luminance of a part of the dynamic body M3 (refer to FIG. 2) is close to the luminance of the background, and false detection occurred so that this part was determined to be the characteristic area A1 or the characteristic area A2, rather than the characteristic area A3.

Accordingly, the combine-processing part 17A in the present step performs the following processing when detecting the characteristic areas A1, A2, and A3 in the calculation of the index image Yindex (refer to FIG. 5).

That is, the combine-processing part 17A compares the pixel values in the characteristic area A1 of the difference image ΔY1, the characteristic area A2 of the difference image ΔY2, and the characteristic area A3 of the difference image ΔY3 with a threshold value thY, and assumes an area having a pixel value smaller than the threshold value thY to be a particular area A0 which does not belong to any of the characteristic areas A1, A2, and A3. Then, the combine-processing part 17A assigns the particular area A0 of the index image Yindex a particular value other than “1”, “2”, and “3” (hereinafter, “0”). This particular value is replaced with an appropriate value in the next step.
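A minimal sketch of this thresholding (the helper name apply_threshold is an assumption; within each characteristic area the pixel value of its own difference image is, by construction, the largest of the N difference values):

```python
# Hypothetical sketch: mark pixels whose largest difference value is
# below the threshold as the particular area A0 (pixel value 0).
import numpy as np

def apply_threshold(index_img, diff_images, th_y):
    max_diff = np.stack(diff_images, axis=0).max(axis=0)
    out = index_img.copy()
    out[max_diff < th_y] = 0   # does not belong to A1, A2, or A3
    return out
```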

In the present step, the index image Cbindex of the Cb component and the index image Crindex of the Cr component are also generated for the purpose of this replacement.

Here, the index image Cbindex is calculated by use of the following formulas.


ΔCbk(i, j) = |(Cbave(i, j)×N − Cbk(i, j))/(N−1) − Cbk(i, j)|

Cbindex(i, j) = k·of·max[ΔCb1(i, j), . . . , ΔCbN(i, j)]   (Formula 5)

Also, when calculating this index image Cbindex, the combine-processing part 17A performs the following processing.

That is, the combine-processing part 17A compares the pixel values in a characteristic area A1 of a difference image ΔCb1, a characteristic area A2 of a difference image ΔCb2, and a characteristic area A3 of a difference image ΔCb3 with a threshold value thCb, and assumes an area having a pixel value smaller than the threshold value thCb to be a particular area A0 which does not belong to any of the characteristic areas A1, A2, and A3. Then the combine-processing part 17A assigns the particular area A0 in the index image Cbindex a particular value other than “1”, “2”, and “3” (hereinafter, “0”).

Also, the index image Crindex is calculated by use of the following formulas.


ΔCrk(i, j) = |(Crave(i, j)×N − Crk(i, j))/(N−1) − Crk(i, j)|

Crindex(i, j) = k·of·max[ΔCr1(i, j), . . . , ΔCrN(i, j)]   (Formula 6)

Also, when calculating this index image Crindex, the combine-processing part 17A performs the following processing.

That is, the combine-processing part 17A compares the pixel values in a characteristic area A1 of a difference image ΔCr1, a characteristic area A2 of a difference image ΔCr2, and a characteristic area A3 of a difference image ΔCr3 with a threshold value thCr, and assumes an area having a pixel value smaller than the threshold value thCr to be a particular area A0 which does not belong to any of the characteristic areas A1, A2, and A3. Then the combine-processing part 17A assigns the particular area A0 in the index image Crindex a particular value other than “1”, “2”, and “3” (hereinafter, “0”).

(Step S6)

The combine-processing part 17A determines whether a pixel having the particular value “0” exists in the index image Yindex, and, if it exists, replaces the pixel value with the pixel value at the same pixel coordinates in the index image Cbindex.

Further, the combine-processing part 17A determines whether a pixel having the particular value “0” remains in the index image Yindex after the replacement, and if it remains, replaces the pixel value with a pixel value at the same coordinates in the index image Crindex. The index image Yindex after these replacements is determined to be a true index image Iindex (refer to FIG. 7).
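A minimal sketch of this replacement step (the helper name merge_indices is an assumption; “0” is the particular value):

```python
# Hypothetical sketch of step S6: fill the particular value 0 in Yindex
# first from Cbindex, then from Crindex, yielding the true index image.
import numpy as np

def merge_indices(y_idx, cb_idx, cr_idx):
    out = np.where(y_idx == 0, cb_idx, y_idx)  # fall back to the Cb index
    out = np.where(out == 0, cr_idx, out)      # then to the Cr index
    return out
```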

Accordingly, in the present step, indefiniteness of the index image Yindex is compensated by the other index images Cbindex and Crindex, and the more definite true index image Iindex is obtained.

Therefore, even if the luminance of a part of a dynamic body Mk included in a certain shot image Ik is close to the luminance of a background thereof, the true index image Iindex is compensated to become more definite as long as the color of the part is different from the color of the background.

Here, even in the true index image Iindex after the compensation, there still remains a possibility that a pixel having a particular value “0” exists. This is because there can exist a part of the dynamic body which is similar to the background thereof in both luminance and color.

(Step S7)

The combine-processing part 17A performs filter processing on the true index image Iindex. The filter used at this time is a filter that smooths the boundary lines of the characteristic areas A1, A2, and A3, for example, a 3×3-pixel majority filter. The majority filter has a function of replacing the pixel value of a focused pixel with the mode value of the pixel values of its peripheral pixels.

Here, when the mode value becomes the particular value “0” during this filter processing, the combine-processing part 17A replaces the pixel value of the focused pixel with a second mode value, not the mode value. This replacement eliminates a pixel having the particular value “0” from the true index image Iindex.
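A straightforward, unoptimized sketch of such a majority filter (the 3×3 window here includes the focused pixel, and the helper name majority_filter is an assumption):

```python
# Hypothetical sketch of a 3x3 majority filter with the "second mode"
# fallback: if the most frequent value in the window is the particular
# value 0, the second most frequent value is used instead.
import numpy as np
from collections import Counter

def majority_filter(idx):
    rows, cols = idx.shape
    out = idx.copy()
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - 1), min(rows, r + 2)
            c0, c1 = max(0, c - 1), min(cols, c + 2)
            ranked = [v for v, _ in
                      Counter(idx[r0:r1, c0:c1].ravel().tolist()).most_common()]
            # Use the mode; fall back to the second mode when the mode is 0
            if ranked[0] != 0 or len(ranked) == 1:
                out[r, c] = ranked[0]
            else:
                out[r, c] = ranked[1]
    return out
```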

The filter processing on the true index image Iindex may be performed once, but is preferably performed an appropriate number of times, two or more. As the filter processing is repeated, pixels having the same pixel value (each of the characteristic areas A1, A2, and A3) on the true index image Iindex gradually gather together. Therefore, in the true index image Iindex after the filter processing, the characteristic area A1 covers the entire presence area of the dynamic body M1 (refer to FIG. 2), the characteristic area A2 covers the entire presence area of the dynamic body M2 (refer to FIG. 2), and the characteristic area A3 covers the entire presence area of the dynamic body M3 (refer to FIG. 2), as shown in FIG. 8.

In such a true index image Iindex, the boundary lines between the characteristic areas A1, A2, and A3 are not always located at the boundary between the background and the dynamic body M1, the boundary between the background and the dynamic body M2, or the boundary between the background and the dynamic body M3, but are definitely located in the boundary area between the dynamic body M1 and the dynamic body M2, the boundary area between the dynamic body M2 and the dynamic body M3, and the boundary area between the dynamic body M1 and the dynamic body M3. Here, in the present step, instead of increasing the number of executions of the filter processing, the filter diameter for the filter processing may be increased.

(Step S8)

The combine-processing part 17A performs size enlargement processing on the true index image Iindex after the filter processing, and makes its size the same as that of the original shot images I1, I2, and I3. If the size reduction ratio in the steps S1 to S3 described hereinabove is 16, the size enlargement processing uses, for example, the following formulas, where the true index image after the enlargement processing is denoted by I′index. In fact, the size enlargement processing using the following formulas is simple padding (pixel replication).

I′index(4j, 4i) = Iindex(i, j)
I′index(4j, 4i+1) = Iindex(i, j)
I′index(4j, 4i+2) = Iindex(i, j)
I′index(4j, 4i+3) = Iindex(i, j)
I′index(4j+1, 4i) = Iindex(i, j)
I′index(4j+1, 4i+1) = Iindex(i, j)
. . .
I′index(4j+3, 4i+3) = Iindex(i, j)   (Formula 7)
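Because Formula 7 simply replicates each index pixel into a 4×4 block, a short NumPy sketch (the helper name enlarge_index is an assumption) suffices:

```python
# Hypothetical sketch of the padding-style 4x enlargement of Formula 7;
# pure replication guarantees that no intermediate index values appear.
import numpy as np

def enlarge_index(idx):
    return np.repeat(np.repeat(idx, 4, axis=0), 4, axis=1)
```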

Here, the true index image I′index after the enlargement processing is used in the next step as a source map for extracting a partial image from each of the shot images I1, I2, and I3.

For this purpose, a pixel value in the true index image I′index should be any one of “1”, “2”, and “3” and should not be an intermediate value between them. Therefore, in the size enlargement processing in the present step, average interpolation should not be applied, and, if interpolation is applied, it should be majority interpolation.

(Step S9)

The combine-processing part 17A, on the basis of the true index image I′index, extracts a partial image I1·A1 located in the characteristic area A1 from the shot image I1, a partial image I2·A2 located in the characteristic area A2 from the shot image I2, and a partial image I3·A3 located in the characteristic area A3 from the shot image I3, as shown in FIG. 9. Then, the combine-processing part 17A combines the three extracted partial images I1·A1, I2·A2, and I3·A3 to obtain one combined image I. Here, a partial image located in an area A of an image X is represented by X·A.

At this time, the combine-processing part 17A sets an opacity of the partial image I1·A1, an opacity of the partial image I2·A2, and an opacity of the partial image I3·A3 to be α1, α2, and α3, respectively. These values α1, α2, and α3 are the opacities specified by the user for the dynamic bodies M1, M2, and M3 (refer to FIG. 2). This concept is illustrated as shown in FIG. 10.

That is, regarding the characteristic area A1, weighted averaging is performed on the shot image I1 and an average image (I2+I3)/2 of the other shot images I2 and I3. A weight ratio of the weighted averaging is set to be α1:(1−α1) according to the user's specification.

Focusing on this characteristic area A1, there exists not only the dynamic body M1 but also the background portion in the shot image I1, but the graphic pattern of the background portion is the same as that of the average image {(I2+I3)/2} of the other shot images I2 and I3. Therefore, the graphic pattern of the characteristic area A1 in the combined image I becomes the background on which the dynamic body M1 is superimposed with an opacity of α1.

Also, regarding the characteristic area A2, weighted averaging is performed on the shot image I2 and an average image (I1+I3)/2 of the other shot images I1 and I3. A weight ratio of the weighted averaging is set to be α2:(1−α2) according to the user's specification. Focusing on this characteristic area A2, there exists not only the dynamic body M2 but also the background portion in the shot image I2, but the graphic pattern of the background portion is the same as that in the average image {(I1+I3)/2} of the other shot images I1 and I3.

Therefore, the graphic pattern of the characteristic area A2 in the combined image I becomes the background on which the dynamic body M2 is superimposed with an opacity of α2. Also, regarding the characteristic area A3, weighted averaging is performed on the shot image I3 and an average image (I1+I2)/2 of the other shot images I1 and I2. A weight ratio of the weighted averaging is set to be α3:(1−α3) according to the user's specification.

Focusing on this characteristic area A3, there exists not only the dynamic body M3 but also the background portion in the shot image I3, but the graphic pattern of the background portion is the same as that in the average image {(I1+I2)/2} of the other shot images I1 and I2.

Therefore, a graphic pattern of the characteristic area A3 in the combined image I becomes the background on which the dynamic body M3 is superimposed with an opacity of α3.

Here, the combining of the shot images I1, I2, and I3 in the present step is performed for each component of the shot images I1, I2, and I3, and the common true index image I′index is used for the combining of each component. When the Y component of the combined image I is denoted by Y, the Cb component of the combined image I is denoted by Cb, and the Cr component of the combined image I is denoted by Cr, the combining of each component uses the following formulas.


k(i, j) = I′index(i, j)

Y(i, j) = αk(i, j)×Yk(i, j)(i, j) + (1−αk(i, j))×(Yave(i, j)×N − Yk(i, j)(i, j))/(N−1)

Cb(i, j) = αk(i, j)×Cbk(i, j)(i, j) + (1−αk(i, j))×(Cbave(i, j)×N − Cbk(i, j)(i, j))/(N−1)

Cr(i, j) = αk(i, j)×Crk(i, j)(i, j) + (1−αk(i, j))×(Crave(i, j)×N − Crk(i, j)(i, j))/(N−1)   (Formula 8)

That is, a pixel value Y(i, j) at certain pixel coordinates (i, j) of the combined image I is determined as a weighted average of the pixel values Y1(i, j), Y2(i, j), and Y3(i, j) at the same pixel coordinates in the shot images I1, I2, and I3 according to the pixel value I′index(i, j) at the same pixel coordinates (i, j) in the true index image I′index. For example, when the pixel value I′index(i, j) of the true index image I′index is “1”, a weighted average of the pixel value Y1(i, j) and the average value of Y2(i, j) and Y3(i, j) is calculated with a weight ratio of α1:(1−α1). Also, for example, when the pixel value I′index(i, j) of the true index image I′index is “2”, a weighted average of the pixel value Y2(i, j) and the average value of Y1(i, j) and Y3(i, j) is calculated with a weight ratio of α2:(1−α2).

Similarly, a pixel value Cb(i, j) of a certain pixel of the combined image I is determined as a weighted average of the pixel values Cb1(i, j), Cb2(i, j), and Cb3(i, j) at the same pixel coordinates in the shot images I1, I2, and I3 according to the pixel value I′index(i, j) at the same pixel coordinates (i, j) in the true index image I′index. For example, when the pixel value I′index(i, j) of the true index image I′index is “1”, a weighted average of the pixel value Cb1(i, j) and the average value of Cb2(i, j) and Cb3(i, j) is calculated with a weight ratio of α1:(1−α1). Also, for example, when the pixel value I′index(i, j) of the true index image I′index is “2”, a weighted average of the pixel value Cb2(i, j) and the average value of Cb1(i, j) and Cb3(i, j) is calculated with a weight ratio of α2:(1−α2).

Similarly, a pixel value Cr(i, j) of a certain pixel of the combined image I is determined as a weighted average of the pixel values Cr1(i, j), Cr2(i, j), and Cr3(i, j) at the same pixel coordinates in the shot images I1, I2, and I3 according to the pixel value I′index(i, j) at the same pixel coordinates (i, j) in the true index image I′index. For example, when the pixel value I′index(i, j) of the true index image I′index is “1”, a weighted average of the pixel value Cr1(i, j) and the average value of Cr2(i, j) and Cr3(i, j) is calculated with a weight ratio of α1:(1−α1). Also, for example, when the pixel value I′index(i, j) of the true index image I′index is “2”, a weighted average of the pixel value Cr2(i, j) and the average value of Cr1(i, j) and Cr3(i, j) is calculated with a weight ratio of α2:(1−α2). (This concludes the description of step S9.)
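A minimal vectorized sketch of Formula 8 for one component plane (the helper name combine_plane, the 1-based indexing of alphas, and the NumPy representation are assumptions, not the patent's implementation):

```python
# Hypothetical sketch of Formula 8 applied to one full-size plane
# (Y, Cb, or Cr). `planes` holds the N planes of the shot images,
# `idx` is the enlarged true index image I'index with values 1..N,
# and `alphas` is a list of length N+1 whose entry 0 is unused.
import numpy as np

def combine_plane(planes, idx, alphas):
    stack = np.stack(planes, axis=0).astype(np.float64)  # shape (N, h, w)
    n = len(planes)
    ave = stack.mean(axis=0)
    k = idx - 1                                           # 0-based winner per pixel
    own = np.take_along_axis(stack, k[None, ...], axis=0)[0]
    others_ave = (ave * n - own) / (n - 1)                # average of the other images
    alpha = np.asarray(alphas, dtype=np.float64)[idx]     # per-pixel opacity
    return alpha * own + (1.0 - alpha) * others_ave
```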

As a result, the combine-processing part 17A can combine the shot images I1, I2, and I3 correctly while keeping the opacities of the dynamic bodies M1, M2, and M3 included in them at α1, α2, and α3 specified by the user, respectively. When all the opacities are set to “1”, the combining becomes the opaque combining.

Here, while the user in the present embodiment specified the opacities of all the dynamic bodies, the opacities of some or all of the dynamic bodies may not be specified. In this case, the CPU 17 sets the opacity which is not specified to be a default value (e.g., “1”).

Also, although the combine-processing part 17A in the present embodiment eliminated the particular value “0” from the true index image Iindex during the filter processing (step S7), the processing associated with this elimination may be omitted. Note that, in this case, the combine-processing part 17A needs to replace the particular value “0” with a non-particular value (any one of “1”, “2”, and “3”) at the end of step S7.

For example, the combine-processing part 17A replaces the particular value “0” with “1” when the shot image I1 takes priority, the particular value “0” with “2” when the shot image I2 takes priority, and the particular value “0” with “3” when the shot image I3 takes priority. Here, which of the shot images I1, I2, and I3 takes priority may be specified by the user in advance or determined automatically by the combine-processing part 17A through its evaluation of the shot images I1, I2, and I3.

Also, while the combine-processing part 17A in the present embodiment performed the size reduction processing (steps S1 to S3) on the shot images to improve the processing speed in the image-combine processing, the size reduction processing (steps S1 to S3) may be omitted and, instead, the thumbnail image of the shot image (stored in the same image file as the shot image) may be used.

Also, while the combine-processing part 17A in the present embodiment performed the size reduction processing (steps S1 to S3) on the shot images to improve the processing speed in the image-combine processing, the size reduction processing (steps S1 to S3) may be omitted when a slow processing speed does not matter. In this case, the size enlargement processing (step S8) is also omitted.

Although, in the electronic camera 10 of the present embodiment, the whole of the image-combine processing was performed by the CPU 17, a part or the whole of the image-combine processing may be performed by a dedicated circuit other than the CPU 17, or by the image processing circuit 18.

Also, some or all of the functions of the image-combine processing in the electronic camera 10 of the present embodiment may be provided in another apparatus having a user interface and a monitor, such as an image storage device or a printer.

Also, some or all of the functions of the image-combine processing in the electronic camera 10 of the present embodiment may be performed by a computer. When performed by a computer, a program for the purpose (an image-combine processing program) is stored in a memory of the computer (a hard disk drive or the like). Installation of the image-combine processing program onto the hard disk drive is performed, for example, via the Internet or a recording medium such as a CD-ROM.

The many features and advantages of the embodiments are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the embodiments that fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the inventive embodiments to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope thereof.

Claims

1. An image processing method, comprising:

a detecting step detecting a characteristic area from each of three or more shot images having a common graphic pattern in part thereof, said characteristic area having an image significantly different from the other shot images; and
a combining step extracting a partial image located in said characteristic area from each of said three or more shot images and combining these partial images into one image.

2. The image processing method according to claim 1, wherein

said detecting step assumes, in each of said three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as said characteristic area.

3. The image processing method according to claim 1, wherein:

said detecting step generates a distribution map of said characteristic area detected from each of said three or more shot images; and
said combining step performs said extracting according to said distribution map.

4. The image processing method according to claim 3, wherein

said detecting step performs filter processing on said distribution map for smoothing distribution boundaries.

5. The image processing method according to claim 1, wherein

said combining step performs weighted average on the partial image extracted from each of said three or more shot images and partial images extracted from the same areas in the other shot images.

6. The image processing method according to claim 5, wherein

said combining step sets a weight of said weighted averaging to be a value specified by a user.

7. The image processing method according to claim 1, wherein

said detecting step performs said detecting, instead of using said three or more shot images, using reduced-size versions thereof.

8. An image processing apparatus, comprising:

a detecting unit detecting a characteristic area from each of three or more shot images, said characteristic area having an image significantly different from the other shot images; and
a combining unit extracting a partial image located in said characteristic area from each of said three or more shot images and combining these partial images into one image.

9. The image processing apparatus according to claim 8, wherein

said detecting unit assumes, in each of said three or more shot images, an area which has an image significantly different from an averaged image of the other shot images as said characteristic area.

10. The image processing apparatus according to claim 8, wherein:

said detecting unit generates a distribution map of said characteristic area detected from each of said three or more shot images; and
said combining unit performs said extracting according to said distribution map.

11. The image processing apparatus according to claim 10, wherein

said detecting unit performs filter processing on said distribution map for smoothing distribution boundaries.

12. The image processing apparatus according to claim 8, wherein

said combining unit performs weighted average on the partial image extracted from each of said three or more shot images and partial images extracted from the same areas in the other shot images.

13. The image processing apparatus according to claim 12, wherein

said combining unit sets a weight of said weighted averaging to be a value specified by a user.

14. The image processing apparatus according to claim 8, wherein

said detecting unit performs said detecting, instead of using said three or more shot images, using reduced-size versions thereof.

15. An electronic camera, comprising:

an imaging unit that shoots an object to obtain a shot image; and
an image processing apparatus according to claim 8 that processes three or more shot images obtained by said imaging unit.
Patent History
Publication number: 20080199103
Type: Application
Filed: Feb 6, 2008
Publication Date: Aug 21, 2008
Applicant: NIKON CORPORATION (Tokyo)
Inventor: Kazuto Kawahara (Sagamihara-shi)
Application Number: 12/068,431
Classifications
Current U.S. Class: Combining Image Portions (e.g., Portions Of Oversized Documents) (382/284)
International Classification: G06K 9/36 (20060101);