IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
A plurality of representative filters are held, and weight vectors containing weight values for the respective representative filters as components are acquired for respective pixels which form an image. The respective representative filters act on the respective pixels which form the image, and the results of the action are weighted with the weight vectors and added.
1. Field of the Invention
The present invention relates to a filter processing technique.
2. Description of the Related Art
Recently, digital cameras have achieved high image quality exceeding 20,000,000 pixels and can capture high-quality images at high resolution. However, while the user presses the shutter, the camera may shake, or a blur may occur owing to the optical system or to defocus. In such cases, the image quality degrades and the advantage of the high image quality cannot be fully exploited.
To improve poor image quality, there is proposed a method for recovering an image by applying, to the image, filters different between respective pixels (areas), such as a shake filter against camera shake, an optical filter corresponding to the image height of a lens, and a blur filter corresponding to the object distance (patent literature 1 (Japanese Patent Laid-Open No. 2005-63323)).
There is also proposed a method of holding not all filters different between respective pixels but only representative filters, and generating the remaining filters by linear interpolation (patent literature 2 (Japanese Patent Laid-Open No. 2005-31759)).
However, the method in patent literature 1 needs to hold all the filters, which differ between the respective pixels, and thus requires a large memory capacity. The method in patent literature 2 generates a filter between representative filters simply by linear interpolation. Linear interpolation, however, decides the weight based only on the distance between filters; when the filter characteristic does not vary with that distance, an intermediate filter cannot be generated at high precision. Also, the method in patent literature 2 can reduce the data amount of the filters to be held, but cannot decrease the operation amount used when the filters act on an image.
SUMMARY OF THE INVENTION
The present invention has been made to overcome the conventional drawbacks, and provides a technique for reducing the data amount of filters to be held and reducing the operation amount used when filters act on an image.
According to the first aspect of the present invention, an image processing apparatus comprises: a holding unit that holds a plurality of representative filters; a unit that acquires, for respective pixels which form an image, weight vectors containing weight values for the respective representative filters as components; a unit that causes the respective representative filters to act on the respective pixels which form the image; and a unit that weights results of the action with the weight vectors and adds the results.
According to the second aspect of the present invention, an image processing method to be performed by an image processing apparatus which holds a plurality of representative filters, comprises the steps of: acquiring, for respective pixels which form an image, weight vectors containing weight values for the respective representative filters as components; causing the respective representative filters to act on the respective pixels which form the image; and weighting results of the action with the weight vectors and adding the results.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Embodiments of the present invention will now be described with reference to the accompanying drawings. The embodiments described hereinafter are merely examples of practicing the present invention, and represent some of the practical embodiments of the arrangements described in the scope of the claims.
First Embodiment
<Example of Functional Arrangement of Image Processing Apparatus>
The functional arrangement of an image processing apparatus according to the first embodiment will be exemplified with reference to the block diagram of
In the embodiment, however, the respective arrangements shown in
First, the arrangement shown in
Filters to be applied to pixels are registered in a filter database 110 for respective pixels which form an image (filter holding). The filter database 110 may be arranged in the image processing apparatus or an external apparatus. A representative filter acquisition unit 100 includes an importance calculation unit 103, an acquisition unit 104, and terminals 101, 102, and 105.
The importance calculation unit 103 reads out, via the terminal 101, filters for respective pixels that are registered in the filter database 110. The importance calculation unit 103 calculates importances for the respective readout filters. Processing of calculating an importance will be described later.
The acquisition unit 104 acquires the number N of representative filters via the terminal 102. The number N of representative filters may be held in advance in the image processing apparatus, input by the user, or acquired from an external apparatus. The acquisition unit 104 selects N filters as representative filters in descending order of importance from the filters read out from the filter database 110 by the importance calculation unit 103. At this time, if not all the filters selected as representative filters are equal in size, for example, all the representative filters are resized to the size of a representative filter of highest importance. Then, the acquisition unit 104 registers the selected representative filters in a representative filter database 120 via the terminal 105.
Next, the arrangement shown in
The correlation vector calculation unit 203 reads out, via the terminal 201, filters for respective pixels that are registered in the filter database 110. Also, the correlation vector calculation unit 203 reads out, via the terminal 202, respective representative filters registered in the representative filter database 120. For respective pixels that form an image, the correlation vector calculation unit 203 calculates correlation vectors b indicating correlations between the filters for pixels and the respective representative filters.
The correlation matrix calculation unit 204 reads out, via the terminal 202, respective representative filters registered in the representative filter database 120. The correlation matrix calculation unit 204 calculates a correlation matrix A indicating correlations between the representative filters.
The equation solution derivation unit 205 solves simultaneous equations Aw=b for respective pixels which form an image, obtaining weight vectors w for the respective pixels which form the image. For example, the weight vector w generated for a pixel of interest is a vector by which the correlation matrix A is multiplied in order to obtain the correlation vector b obtained for the pixel of interest and which contains weight values for respective representative filters as components. The equation solution derivation unit 205 registers the weight vectors w obtained for respective pixels as weight data in a weight coefficient database 210 via the terminal 206. Methods of obtaining the correlation vector b, correlation matrix A, and weight vector w, and details of them will be described later.
The arrangement shown in
The weight coefficient multiplication unit 304 acquires image data as an input image via the terminal 301. The weight coefficient multiplication unit 304 multiplies the acquired input image by the weight value of each representative filter registered in the weight coefficient database 210, generating a weight coefficient-multiplied image for each representative filter.
The convolution operation unit 305 executes a convolution operation between a representative filter registered in the representative filter database 120 and the weight coefficient-multiplied image generated by the weight coefficient multiplication unit 304 for the representative filter, generating a convoluted image for each representative filter.
Every time the convolution operation unit 305 generates a convoluted image, the composition unit 306 composites the generated convoluted image with already-generated convoluted images. As a result, a composite image of all convoluted images generated up to this time is generated. Needless to say, the composition timing is not limited to this, and after generating convoluted images for respective representative filters, these convoluted images may be composited simultaneously.
The end determination unit 307 controls the convolution operation unit 305 and the composition unit 306 to repeat, by the number N of representative filters, the series of operations of “generating a convoluted image and compositing it with already-generated convoluted images”. The end determination unit 307 then outputs the composite image generated by repeating this series of operations N times as image data via the terminal 308. The composite image output destination is not particularly limited, and the composite image may be stored in a memory, output to an external apparatus, or displayed on a display device.
<Processing by Image Processing Apparatus>
Processing by the representative filter acquisition unit 100 will be described in more detail with reference to. In step S401, the representative filter acquisition unit 100 initializes, to 0, the variables x and y used as the coordinate position of a pixel of interest in an image of X pixels in the x direction×Y pixels in the y direction.
In step S402, the importance calculation unit 103 reads out a filter corresponding to the coordinate position (x,y) from the filter database 110 via the terminal 101. This filter is, for example, a 3×3 filter, as shown in. In step S403, the importance calculation unit 103 calculates the importance of the readout filter.
In step S404, the importance calculation unit 103 determines whether the processes in steps S402 and S403 have been done for all coordinate positions (all pixels). This determination can be made by determining whether x=X−1 and y=Y−1. As a result of the determination, if the importance calculation unit 103 determines that the processes in steps S402 and S403 have been done for all coordinate positions, the process advances to step S406. If the importance calculation unit 103 determines that there is a coordinate position unprocessed in steps S402 and S403, the process advances to step S405.
In step S405, the importance calculation unit 103 increments the value of the variable x or y by one to update it, and sets an unprocessed coordinate position as the coordinate position of a pixel of interest. Assume that the coordinate position is updated from upper left to lower right of the image. The processes in step S402 and subsequent steps are executed for the updated coordinate position.
In step S406, the acquisition unit 104 acquires the number N of representative filters via the terminal 102 and selects N filters as representative filters from filters for respective pixels in descending order of importance. In step S407, the acquisition unit 104 registers the N selected filters in the representative filter database 120 via the terminal 105.
Processing by the weight coefficient calculation unit 200 will be explained in more detail with reference to
In step S601, the weight coefficient calculation unit 200 initializes, to 0, the variables x and y used as the coordinate position of a pixel of interest in an image of X pixels in the x direction×Y pixels in the y direction. Assume that the coordinate position (x,y)=(0,0) indicates the coordinate position of the upper left corner of the image.
In step S602, the correlation vector calculation unit 203 reads out a filter corresponding to the coordinate position (x,y) from the filter database 110 via the terminal 201.
In step S603, the correlation vector calculation unit 203 first reads out respective representative filters registered in the representative filter database 120 via the terminal 202. Then, the correlation vector calculation unit 203 calculates the correlation vectors b indicating correlations between the filter read out in step S602 and the respective representative filters read out in this step. A method of calculating the correlation vector b will be described later.
In step S604, the correlation matrix calculation unit 204 first reads out, via the terminal 202, respective representative filters registered in the representative filter database 120. Then, the correlation matrix calculation unit 204 calculates the correlation matrix A indicating correlations between the representative filters. A method of calculating the correlation matrix A will be described later. In step S605, the equation solution derivation unit 205 obtains the weight vector w by solving equation (1):
Aw=b (1)
The obtained weight vector w is a vector by which the correlation matrix A calculated in step S604 is multiplied in order to obtain the correlation vector b calculated in step S603 and contains weight values for respective representative filters as components. In the embodiment, the weight vector w is a vector having N weight values as elements. The ith (1≦i≦N) weight value is a weight value for the ith representative filter (representative filter fi). In this manner, the weight vector w is obtained for each coordinate position and can be represented as w=w(x,y) (0≦x≦X−1, 0≦y≦Y−1).
In step S606, the equation solution derivation unit 205 registers the weight vector w obtained in step S605 in the weight coefficient database 210 via the terminal 206. In step S607, the weight coefficient calculation unit 200 determines whether the processes in steps S602 to S606 have been done for all coordinate positions (all pixels). This determination can be made by determining whether x=X−1 and y=Y−1. As a result of the determination, if the weight coefficient calculation unit 200 determines that the processes in steps S602 to S606 have been done for all coordinate positions, the process ends. If the weight coefficient calculation unit 200 determines that there is a coordinate position unprocessed in steps S602 to S606, the process advances to step S608.
In step S608, the weight coefficient calculation unit 200 increments the value of the variable x or y by one to update it, and sets an unprocessed coordinate position as the coordinate position of a pixel of interest. Assume that the coordinate position is updated from upper left to lower right of the image. The processes in step S602 and subsequent steps are executed for the updated coordinate position.
Processing by the filter operation unit 300 will be described with reference to. In step S701, the filter operation unit 300 initializes a variable i used below to 1, and initializes, to 0, all elements of an array K having the same size as the input image.
In step S702, the weight coefficient multiplication unit 304 first acquires, via the terminal 301, an input image of X pixels in the x direction×Y pixels in the y direction. Further, the weight coefficient multiplication unit 304 acquires the ith element (weight value) from the weight vector w obtained for each pixel position in the weight coefficient database 210. For example, wi(x,y) is acquired from the weight vector w(x,y)={w1(x,y), . . . , wi(x,y), . . . , wN(x,y)} at the pixel position (x,y).
Then, the weight coefficient multiplication unit 304 calculates Ji(x′,y′)=I(x′,y′)×wi(x′,y′) using the pixel value I(x′,y′) and the weight value wi(x′,y′) at the pixel position (x′,y′) in the input image. By executing this calculation for all x′ and y′ which satisfy 0≦x′≦X−1 and 0≦y′≦Y−1, a weight coefficient-multiplied image Ji is generated by multiplying the input image by the weight value serving as the ith element of each weight vector. Note that Ji(x′,y′) is the pixel value at the pixel position (x′,y′) in the weight coefficient-multiplied image Ji.
In step S703, the convolution operation unit 305 first reads out the representative filter fi from the representative filter database 120. The convolution operation unit 305 then performs a convolution operation between Ji(x′,y′) and fi(x-x′,y-y′) (calculation of Ji(x,y)*fi(x,y): * is an operator representing a convolution operation).
Since the representative filter fi(x-x′,y-y′) depends on only a relative distance, this convolution operation can be executed similarly to convolution of a normal filter. The convolution operation may be performed in a real space or a frequency space using equation (2):
F−1[F(fi)F(Ji)] (2)
where F[ ] is the Fourier transform and F−1[ ] is the inverse Fourier transform.
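Equation (2) can be checked numerically. The sketch below (the helper name is mine, numpy assumed) zero-pads the filter to the image size before transforming, so the result is a circular convolution; in practice the image would additionally be padded to avoid wrap-around at the borders.

```python
import numpy as np

def fft_convolve(J, f):
    """Convolution via equation (2): F^-1[ F(f) F(J) ].

    J : (H, W) weight coefficient-multiplied image
    f : (S, S) representative filter, zero-padded to (H, W)

    The product of 2-D DFTs corresponds to circular convolution,
    so the output wraps around at the image borders.
    """
    H, W = J.shape
    f_pad = np.zeros((H, W))
    f_pad[:f.shape[0], :f.shape[1]] = f
    return np.real(np.fft.ifft2(np.fft.fft2(f_pad) * np.fft.fft2(J)))
```

The same result is obtained by summing f[u,v]·J((x−u) mod H, (y−v) mod W) directly, which is how the identity can be verified on small arrays.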
In step S704, the composition unit 306 performs K(x′,y′)=K(x′,y′)+{result of the convolution operation between Ji(x′,y′) and fi(x-x′,y-y′)} for all x′ and y′ which satisfy 0≦x′≦X−1 and 0≦y′≦Y−1.
In step S705, the end determination unit 307 determines whether the value of the variable i becomes equal to or larger than N. If the value of the variable i≧N as a result of the determination, the process advances to step S707; if the value of the variable i<N, to step S706.
In step S706, the end determination unit 307 increments the value of the variable i by one, and operates the weight coefficient multiplication unit 304, convolution operation unit 305, and composition unit 306. After that, the process returns to step S702.
In step S707, the end determination unit 307 outputs the array K as an image having undergone filter processing via the terminal 308.
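The loop over steps S702 to S707 can be sketched as follows. This is a minimal Python sketch under assumed array shapes; the function and variable names are mine, not from the patent, and a direct circular convolution stands in for the real-space or FFT variants of step S703.

```python
import numpy as np

def apply_representative_filters(image, weights, filters):
    """Approximate spatially varying filtering (steps S701-S707).

    image   : (H, W) input image
    weights : (N, H, W) weight maps; weights[i] holds w_i(x, y)
    filters : (N, S, S) representative filters (odd S assumed, centered)

    Returns K = sum_i (w_i * I) conv f_i, accumulating one convoluted
    image per representative filter as the composition unit does.
    """
    H, W = image.shape
    K = np.zeros((H, W))
    for w_i, f_i in zip(weights, filters):
        J_i = image * w_i                 # weight coefficient-multiplied image (S702)
        S = f_i.shape[0]
        c = S // 2
        conv = np.zeros((H, W))
        for u in range(S):                # direct circular convolution (S703)
            for v in range(S):
                conv += f_i[u, v] * np.roll(J_i, (u - c, v - c), axis=(0, 1))
        K += conv                         # composition with earlier results (S704)
    return K
```

With a single representative filter equal to the unit impulse and all weights set to 1, the output reproduces the input image, which is a quick sanity check of the accumulation.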
<Processing by Importance Calculation Unit 103>
A method of calculating an importance by the importance calculation unit 103 will be explained. The importance calculation unit 103 gives a higher importance to a filter having a larger coefficient value. For example, when the sum of squares of the value of each coefficient in a filter is larger, a higher importance is assigned to this filter. As a matter of course, an element which determines the degree of importance is not limited to this, and other elements are also conceivable. For example, a higher importance may be assigned to a filter having a larger size. In this way, various elements determine the degree of importance, and only one or a plurality of elements may be used.
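A minimal sketch of this importance measure follows (the names are mine; it assumes the sum-of-squares criterion from the text, with other measures such as filter size substitutable).

```python
import numpy as np

def select_representative_filters(filters, n):
    """Pick the n filters of highest importance.

    filters : list of 2-D coefficient arrays, one per pixel
    Importance here is the sum of squared coefficient values;
    returns the indices of the selected filters, highest first.
    """
    importance = [float(np.sum(f ** 2)) for f in filters]
    order = np.argsort(importance)[::-1]   # descending importance
    return list(order[:n])
```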
As representative filters, a predetermined number of filters may be selected from the filters for the respective pixels in descending order of importance, as described above, but representative filters can also be selected by another method. For example, an image is divided into N areas (the area size and shape are not particularly limited). Then, the filter of highest importance among the filters corresponding to the respective pixels in the jth (1≦j≦N) area is selected as the representative filter for the jth area. This processing is performed for all j which satisfy 1≦j≦N. Accordingly, a representative filter is decided for each area, and thus N representative filters can be decided.
In
With this setting, the pixel positions of filters selected as representative filters are appropriately scattered within the area, and filters large in both the absolute value of the coefficient and size can be extracted. However, the method is not limited to the above one as long as representative filters can properly approximate filters different between pixels by the linear sum of a plurality of representative filters.
Details of the importance calculation unit 103 have been described.
<Processing by Weight Coefficient Calculation Unit 200>
Processing by the weight coefficient calculation unit 200 to calculate the weight vector w will be explained in more detail. A filter corresponding to the coordinate position (x′,y′) can be expressed as F(x-x′,y-y′,x′,y′), and the representative filter fi can be expressed as fi(x-x′,y-y′).
In the embodiment, a weight value is decided to minimize the following evaluation value V:
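Equation (3), reconstructed from the surrounding definitions, is presumably the squared error of approximating the pixel's filter by the weighted sum of representative filters:

```latex
V = \sum_{x,y}\left[ F(x-x',\,y-y',\,x',\,y')
    - \sum_{i=1}^{N} w_i(x',y')\, f_i(x-x',\,y-y') \right]^2 \tag{3}
```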
By minimizing the sum of squares of the difference from the filter F(x-x′,y-y′,x′,y′), a weight value can be decided while reflecting the characteristic of the filter F(x-x′,y-y′,x′,y′). To decide wi(x′,y′) so as to minimize V, V is partially differentiated by wi(x′,y′) and the derivative is set to 0:
Rewriting equation (4) yields
Equation (5) can be expressed as Aw=b, similar to equation (1), by converting variables in equation (5), like equations (6):
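Reconstructed from the derivation described above, equations (4) to (6) presumably take the standard least-squares form:

```latex
% (4): partial derivative of V by w_j(x',y') set to zero
\frac{\partial V}{\partial w_j(x',y')}
  = -2 \sum_{x,y} f_j(x-x',y-y')
    \left[ F(x-x',y-y',x',y') - \sum_{i=1}^{N} w_i(x',y')\, f_i(x-x',y-y') \right] = 0

% (5): rearranged into simultaneous linear equations in the w_i
\sum_{i=1}^{N} \left[ \sum_{x,y} f_j(x-x',y-y')\, f_i(x-x',y-y') \right] w_i(x',y')
  = \sum_{x,y} F(x-x',y-y',x',y')\, f_j(x-x',y-y')

% (6): variable conversion giving Aw = b
A_{ji} = \sum_{x,y} f_j(x-x',y-y')\, f_i(x-x',y-y'), \qquad
b_j = \sum_{x,y} F(x-x',y-y',x',y')\, f_j(x-x',y-y')
```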
By solving these equations, the weight vector w can be obtained. If there is no inverse matrix of A, the pseudo-inverse matrix of A is applied. The correlation vector calculation unit 203 calculates the correlation vector b in equations (6), and the correlation matrix calculation unit 204 calculates the correlation matrix A in equations (6).
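A per-pixel weight computation along these lines might look as follows. This is a sketch, not the patent's implementation; equal filter sizes are assumed, and np.linalg.pinv plays the role of the pseudo-inverse applied when A has no inverse.

```python
import numpy as np

def weight_vector(target, reps):
    """Solve Aw = b (equation (1)) for one pixel's filter.

    target : (S, S) filter registered for this pixel
    reps   : (N, S, S) representative filters

    A[j, i] = sum f_j f_i and b[j] = sum F f_j, as in equations (6).
    """
    N = reps.shape[0]
    flat = reps.reshape(N, -1)        # each representative filter as a row vector
    A = flat @ flat.T                 # correlation matrix A between representatives
    b = flat @ target.ravel()         # correlation vector b against the target
    return np.linalg.pinv(A) @ b      # weight vector w (pseudo-inverse fallback)
```

When the target filter is an exact linear combination of the representative filters, the recovered weights reproduce the combination coefficients.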
In the above description, the evaluation value V is given as represented by equation (3) for simplicity. However, more commonly, the evaluation value V may be weighted for each pixel position, as represented by equation (7):
where a(x′,y′) is an appropriate weight coefficient. For example, it is considered that the filter needs to be reproduced more accurately for a higher-frequency area in an image. To do this, the value of the weight coefficient a(x′,y′) may be increased for a high-frequency image area. The method is not limited to the above one as long as the filter F(x-x′,y-y′,x′,y′) can be properly approximated by the linear sum of representative filters and weight coefficients.
<Processing by Filter Operation Unit 300>
Filter processing by the filter operation unit 300 will be described in more detail. The array K satisfies equation (8):
In the embodiment, the filter F(x-x′,y-y′,x′,y′) is approximated by expression (9):
From the above equation, the action of the filter F(x-x′,y-y′,x′,y′) on an input image can be approximated and expressed by equation (10):
The composition unit 306 achieves calculation of equation (10) by solving equation (11):
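From the surrounding description, equations (8) to (11) presumably read as follows (a reconstruction):

```latex
% (8): exact spatially varying filtering
K(x,y) = \sum_{x',y'} F(x-x',\,y-y',\,x',\,y')\, I(x',y')

% (9): approximation by representative filters
F(x-x',\,y-y',\,x',\,y') \approx \sum_{i=1}^{N} w_i(x',y')\, f_i(x-x',\,y-y')

% (10): action of the approximated filter on the input image
K(x,y) \approx \sum_{x',y'} \sum_{i=1}^{N} w_i(x',y')\, f_i(x-x',\,y-y')\, I(x',y')

% (11): reordered as N ordinary convolutions with J_i = w_i I
K(x,y) \approx \sum_{i=1}^{N} \sum_{x',y'} f_i(x-x',\,y-y')\, J_i(x',y'),
\qquad J_i(x',y') = w_i(x',y')\, I(x',y')
```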
The above description is directed to processing using a filter set for each pixel. Processing using a filter set for each image area is as follows. First, an image is divided into R image areas, and a single filter is applied within each area. A filter for the rth area can then be represented by F(x-x′,y-y′,r) (1≦r≦R).
Then, the same processing as that described above is executed while replacing wi(x,y) with wi(r). Note that an index to calculate a sum changes from (x′,y′) to k.
The effect of the first embodiment will be explained with reference to
A memory capacity necessary in the embodiment and a calculation amount required when filters different between respective pixels act on an image will be examined. Here, the maximum filter size is S² (pixels) and the number of pixels of an image is M² (pixels).
A memory necessary to hold all the filters, which differ between pixels, is O((MS)²), where O denotes the order. In the embodiment, a memory of O(NS²) is necessary to hold the representative filters and a memory of O(NM²) to hold the weight values, so a memory of O(N(M²+S²)) is needed in total.
Since the image size is generally larger than the filter size, S²<M² and the necessary memory capacity is O(NM²). Thus, as the number N of representative filters becomes smaller than the number S² of filter elements, the memory reduction effect of the embodiment becomes more significant. For example, for N=9 and S=21 in the example of
A calculation amount required when the filters act on an image will be considered next. In the conventional method, filters act on the respective pixels and the calculation amount is O((MS)²). According to the embodiment, O(NM²) operations are required to calculate Ji(x′,y′) for the N representative filters in equation (11). In the embodiment, the action of a filter on the image can be rewritten into convolution, so a calculation amount of O(NM log M) is necessary for the calculation of equation (11) when the FFT is used. If the image is sufficiently large, O(NM²)>>O(NM log M), and a calculation amount of O(NM²) is necessary for the filters to act on the image. Hence, as the number N of representative filters becomes smaller than the number S² of filter elements, the calculation amount reduction effect of the embodiment stands out even more. For example, for N=9 and S=21 in the example of
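As a rough arithmetic check (my own, since the concrete figure quoted in the text was cut off): when the O(NM²) terms dominate, the reduction factor over the conventional O((MS)²) cost reduces to S²/N.

```python
# Reduction factor for the worked numbers N = 9, S = 21 quoted in the text,
# assuming the image is much larger than the filter so the O(N*M^2) terms
# dominate and the ratio (MS)^2 / (N*M^2) simplifies to S^2 / N.
N, S = 9, 21
reduction = S ** 2 / N
print(reduction)   # 49.0: memory and operation counts shrink by about 49x
```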
As described above, according to the first embodiment, filters different between pixels (areas) are approximated by linear coupling of representative filters and weight coefficients. A memory for storing filters and the amount of operation on an image can be reduced.
Note that the embodiment has exemplified a shake filter. However, the present invention is applicable to a filter for any purpose, such as an optical blur or a defocus blur corresponding to the object distance.
Second Embodiment
In the first embodiment, when acquiring representative filters, importances are assigned to filters and filters of high importance are acquired as representative filters. The second embodiment will describe a case in which representative filters are generated by linear coupling of filters using principal component analysis.
An image processing apparatus according to the second embodiment is different from that in the first embodiment only in that it has an arrangement shown in
The arrangement shown in
A representative filter acquisition unit 1200 includes a correlation matrix calculation unit 1201, a diagonalization processing unit 1202, a generation unit 1203, and terminals 101, 102, and 105.
The correlation matrix calculation unit 1201 reads out filters for respective pixels that are stored in a filter database 110, and calculates the correlation matrix between the filters. The diagonalization processing unit 1202 obtains the eigenvectors and eigenvalues of the correlation matrix calculated by the correlation matrix calculation unit 1201. Note that the eigenvectors are set to be orthogonal to each other (since the correlation matrix is a symmetric matrix, eigenvectors can be set to be orthogonal to each other). The generation unit 1203 first acquires the number N of representative filters via the terminal 102. Then, the generation unit 1203 linearly couples eigenvectors and filters in descending order of the eigenvalue, generating representative filters.
Next, processing by the representative filter acquisition unit 1200 will be described with reference to
In step S1301, the correlation matrix calculation unit 1201 reads out, via the terminal 101, filters for respective pixels that are stored in the filter database 110. The correlation matrix calculation unit 1201 then calculates correlation matrices between the readout filters. The correlation matrix may be a correlation matrix between different filters or one between identical filters.
In step S1302, the diagonalization processing unit 1202 diagonalizes each correlation matrix calculated in step S1301, obtaining eigenvalues and eigenvectors. The diagonalization processing unit 1202 assigns indices of 1, 2, 3, . . . to eigenvalues in descending order of the value.
In step S1303, the generation unit 1203 initializes a variable i used below to 1. In step S1304, the generation unit 1203 calculates the linear coupling of the filters and the eigenvector corresponding to the eigenvalue assigned the index i, obtaining the linear coupling result as the ith representative filter. Details of the processing in this step will be described later.
In step S1305, the generation unit 1203 registers the obtained ith representative filter in the representative filter database 120 via the terminal 105. In step S1307, the generation unit 1203 determines whether the value of the variable i is equal to or larger than the number N of representative filters. If i≧N as a result of the determination, the process ends; if i<N, the process advances to step S1306. In step S1306, the generation unit 1203 increments the value of the variable i by one, and the processes in step S1304 and subsequent steps are performed again.
Details of the processing in step S1304 will be explained. The representative filter acquisition unit 1200 generates a representative filter using principal component analysis. The representative filter is generated by linear coupling of filters. The ith representative filter fi(x,y) is generated in accordance with equation (12):
where vi is a weight for the ith representative filter fi. Although vi is a two-dimensional array, it can be handled as a vector by rearranging vi(x′,y′) in line from upper left to lower right.
The correlation matrix calculation unit 1201 calculates a correlation matrix by solving equation (13):
CF also has four suffixes, but can be handled as a matrix by the same rearrangement as that of vi.
The diagonalization processing unit 1202 diagonalizes the correlation matrix CF, obtaining the weights vi as eigenvectors. The generation unit 1203 extracts eigenvectors sequentially in descending order of the eigenvalue, and generates a representative filter in accordance with equation (12).
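A compact sketch of this procedure follows (the naming is mine; it treats each pixel's filter as a row vector and uses the Gram matrix of those vectors as the correlation matrix CF, which is one concrete reading of equation (13)).

```python
import numpy as np

def pca_representative_filters(filters, n):
    """Second-embodiment sketch: generate representative filters by
    linear coupling of the per-pixel filters with eigenvectors of
    their correlation matrix, as in equations (12)-(13).

    filters : (P, S, S) array, one filter per pixel
    Returns (n, S, S) representative filters, ordered by eigenvalue.
    """
    P = filters.shape[0]
    X = filters.reshape(P, -1)             # each filter rearranged into a row vector
    C = X @ X.T                            # correlation matrix C_F (symmetric)
    evals, evecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:n]    # keep the n largest eigenvalues
    reps = evecs[:, order].T @ X           # f_i = sum_k v_i(k) F_k  (eq. (12))
    return reps.reshape(n, *filters.shape[1:])
```

When the per-pixel filters all lie in a one-dimensional subspace, the single top component recovers that subspace, which is the sense in which fewer representative filters can suffice here than in the first embodiment.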
In the first embodiment, a filter of high importance is acquired directly as a representative filter from filters. To the contrary, in the second embodiment, a representative filter with high contribution is generated by linear coupling of filters using principal component analysis. Hence, a filter can be approximated more accurately by a smaller number of representative filters than those in the first embodiment.
Third Embodiment
The first embodiment has described a method of decomposing filters different between pixel positions (areas) into representative filters and weight coefficients, and approximating the filters. The third embodiment will describe filter decomposition when the filter changes depending on not only the pixel position (area) but also the pixel value. Only differences from the first embodiment will be described below.
In the third embodiment, Q filters are registered in a filter database 110 for respective pixel positions. For example, as shown in
The basic concept is the same as those in the first and second embodiments, and a filter is approximated by linear coupling of representative filters and weight coefficients. More specifically, filter F(x-x′,y-y′,x′,y′,α) is approximated according to expression (14):
Unlike the first and second embodiments, the weight vector w depends on α. The remaining operation is the same as that in the first embodiment. A correlation vector calculation unit 203 in a weight coefficient calculation unit 200 calculates a coefficient value by solving equation (15):
The remaining operation is the same as that in the first embodiment.
A weight coefficient multiplication unit 304 in a filter operation unit 300 calculates Ji(x′,y′)=wi(x′,y′,α)×I(x′,y′). At this time, α is determined from the pixel value I(x′,y′). The remaining operation is the same as that in the first embodiment.
A representative filter acquisition unit 1200 generates a representative filter based on equation (16):
Although vi is a three-dimensional array, it is converted into a two-dimensional array vi(X,α) by rearranging vi(x′,y′,α) in line from upper left to lower right while fixing α. Further, vi(X,α) is rearranged in line from upper left to lower right, so vi can be handled as a vector.
A correlation matrix calculation unit 1201 calculates a correlation matrix by solving equation (17):
CF has six suffixes, but can be handled as a matrix by the same rearrangement as that of vi. The remaining operation is the same as that in the second embodiment.
As described above, according to the third embodiment, filters different between pixel positions (areas) and pixel values are approximated by linear coupling of representative filters and weight coefficients. A memory for storing filters and the amount of operation on an image can be reduced.
Fourth Embodiment
The respective units shown in
Then, the CPU of the computer executes processing using the installed computer program, data, and various databases registered in the large-capacity information storage device, implementing the processes described in the above embodiments.
Note that the above-described embodiments may be combined or switched as appropriate. The arrangements of the above-described embodiments are merely examples and may be changed as appropriate as long as the purpose of the invention is achieved.
Other Embodiments

Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (for example, computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2010-283729 filed Dec. 20, 2010 which is hereby incorporated by reference herein in its entirety.
Claims
1. An image processing apparatus comprising:
- a holding unit that holds a plurality of representative filters;
- a unit that acquires, for respective pixels which form an image, weight vectors containing weight values for the respective representative filters as components;
- a unit that causes the respective representative filters to act on the respective pixels which form the image; and
- a unit that weights results of the action with the weight vectors and adds the results.
2. The apparatus according to claim 1, further comprising:
- a unit that holds original filters to be applied to the pixels for the respective pixels which form the image; and
- a selection unit that assigns a higher importance to a filter formed from a larger coefficient value among the original filters and selects, as representative filters, a predetermined number of filters from the original filters in descending order of importance,
- wherein said holding unit holds the representative filters selected by said selection unit.
3. The apparatus according to claim 2, further comprising:
- a unit that acquires, as first correlations for the respective pixels which form the image, correlations between the original filters to be applied to the pixels and the selected representative filters; and
- a unit that acquires correlations between the selected representative filters as second correlations,
- wherein said unit for acquiring weight vectors acquires the weight vectors based on the first correlations and the second correlations.
4. The apparatus according to claim 3, wherein said unit for acquiring weight vectors acquires, as the weight vectors for the respective pixels which form the image, vectors each of which is to be multiplied by a correlation matrix indicating the correlations between the selected representative filters in order to obtain correlation vectors indicating the first correlations and contains weight values for the selected representative filters as components.
5. The apparatus according to claim 1, further comprising:
- a unit that holds original filters to be applied to the pixels for the respective pixels which form the image; and
- a selection unit that assigns a higher importance to a filter of a larger size among the original filters and selects, as representative filters, a predetermined number of filters from the original filters in descending order of importance,
- wherein said holding unit holds the representative filters selected by said selection unit.
6. The apparatus according to claim 2, wherein an original filter to be applied to a pixel of interest is a filter formed from coefficients by which pixels including the pixel of interest and peripheral pixels around the pixel of interest are multiplied.
7. The apparatus according to claim 2, wherein an original filter to be applied to a pixel of interest is a filter corresponding to a pixel value of the pixel of interest.
8. An image processing method to be performed by an image processing apparatus which holds a plurality of representative filters, comprising the steps of:
- acquiring, for respective pixels which form an image, weight vectors containing weight values for the respective representative filters as components;
- causing the respective representative filters to act on the respective pixels which form the image; and
- weighting results of the action with the weight vectors and adding the results.
9. A non-transitory computer-readable storage medium storing a computer program for causing a computer to function as each unit of an image processing apparatus defined in claim 1.
Type: Application
Filed: Dec 7, 2011
Publication Date: Jun 21, 2012
Patent Grant number: 8774545
Applicant: CANON KABUSHIKI KAISHA (Tokyo)
Inventor: Tomohiro Nishiyama (Tokyo)
Application Number: 13/313,524