Method for reducing motion blur in a digital image
A method for reducing motion blur in a digital image is disclosed. An embodiment of the method comprises increasing the magnitude of the amplitude of the signals in a preselected set of spatial frequencies of the image in the direction of the motion blur.
This application is a continuation application of Ser. No. 09/867,352 of Charles H. McConica for METHOD FOR REDUCING MOTION BLUR IN A DIGITAL IMAGE filed on May 29, 2001, which is hereby incorporated for all that is disclosed therein.
METHOD FOR REDUCING MOTION BLUR IN A DIGITAL IMAGE
TECHNICAL FIELD OF THE INVENTION
The present invention relates to reducing motion blur in a digital image and, more particularly, to analyzing image data representative of a digital image to determine the amount and direction of motion blur and processing the image data to reduce the motion blur.
BACKGROUND OF THE INVENTION
Digital still cameras generate image data representative of an image of an object. The process of generating image data representative of an image of an object is often referred to simply as “imaging” or “capturing” the object. The image data is processed and output to a device that displays a replicated image of the object. For example, the replicated image of the object may be displayed on a video monitor or printed by a printer.
The digital camera focuses the image of the object onto a two-dimensional array of photodetecting elements. The photodetecting elements are relatively small and each one generates image data representative of a very small portion of the image of the object. For example, the two-dimensional array may have several million photodetecting elements that each generate image data representative of a small portion of the image of the object. The image data generated by the individual photodetecting elements is processed to recreate the image of the object. One type of photodetecting element is a charge-coupled device that outputs a voltage that is proportional to the amount of light it receives over a preselected period.
In order to improve the image generated by a digital camera, the density of photodetecting elements on the two-dimensional array is increased. The increased density of photodetecting elements increases the number of photodetecting elements that image an object, which in turn improves the quality of the image by reducing spaces between imaged portions of the object. Another method of improving the image generated by a digital camera, especially in low light conditions, is by using a long period to generate the image data. This long period of image generation is achieved by having the photodetecting elements detect light for an extended period.
One problem with photography, including digital photography, is that the image generated by the camera will be blurred if the camera moves as the photodetecting elements generate image data. For example, under ideal conditions of no movement in the digital camera relative to the object being imaged, each photodetecting element generates image data representative of a particular portion of the image of the object. If, however, the camera is moved as the image data is generated, the individual portions of the image of the object will be imaged by several photodetecting elements. Accordingly, each photodetecting element images several different portions of the image of the object, which causes the replicated image of the object to be blurred. This type of blur is referred to as motion blur.
The motion blur problem is exacerbated as the above-described imaging period is extended. The extended imaging period results in a higher probability that the camera will be moved during the imaging period, which results in a higher probability that motion blur will occur in the replicated image. Accordingly, the benefit of an extended imaging period may be offset by a higher probability of generating a blurred image. The problem of motion blur is further exacerbated by the use of more photodetecting elements to generate an image. The photodetecting elements will be smaller and receive less light. Accordingly, the exposure time of the camera must be extended in order for the smaller photodetecting elements to receive enough light to generate accurate image data.
Therefore, a need exists for a method and device for detecting and reducing motion blur in a digital image.
BRIEF DESCRIPTION OF THE DRAWINGS
A method and apparatus for detecting and reducing motion blur in an image is generally described below followed by a more detailed description.
Having generally described the digital camera 100 and a method for reducing blur in a digital image, they will now be described in greater detail. The following description focuses on the operation of the camera 100 followed by detection and reduction of motion blur in images generated by the camera 100.
A schematic illustration of a digital camera 100 generating image data representative of an object 110 is illustrated in
The camera 100 may have a housing 120 with an aperture 122 formed therein. A lens 126 or a plurality of lenses may be located within or adjacent the aperture 122 and may serve to focus an image of the object 110 onto components located within the camera as is described below. The lens 126 may, as a non-limiting example, have a focal length of approximately seven millimeters. A two-dimensional photosensor array 130 and a processor 132 may also be located within the housing 120. As illustrated in
A front, enlarged view of the two-dimensional photosensor array 130 is illustrated in
The photodetecting elements 138 may be charge-coupled devices that develop a charge that is proportional to the amount of light they receive during a preselected period. The preselected period is dependent on the intensity of light to which the object 110,
Referring to
Referring again to
The processor 132 may also be electrically connected to a peripheral viewing device, not shown, such as a computer monitor or a printer that serves to display and/or process the image data generated by the camera 100. The processor 132 may also be electrically connected to a peripheral computer, not shown, that stores and/or processes the image data generated by the two-dimensional photosensor array 130 and, thus, the camera 100. It should be noted that the processing techniques and methods described herein with reference to the processor 132 may be performed by the peripheral viewing device or computer. For example, image data generated by the two-dimensional photosensor array 130 may be transmitted directly to a peripheral processor for processing.
Having described the components of the camera 100 that are essential for the correction of motion blur, the operation of the camera 100 will now be described followed by a description of the camera 100 correcting for motion blur.
The camera 100 is typically held by a user and used to generate image data representative of an object, such as the object 110. The image data typically represents a still picture of the object 110; however, the image data may represent a moving picture, e.g., a motion picture, of the object 110. Light 150 reflects from the object 110 and enters the housing 120 of the camera 100 via the aperture 122. The lens 126 then focuses an image of the object 110 onto the two-dimensional photosensor array 130. The two-dimensional photosensor array 130 generates image data representative of the image of the object 110, which is output to the processor 132 for processing. As described above, in the embodiment of the camera 100 described herein, the lens 126 blurs the image focused onto the two-dimensional photosensor array 130 by two photodetecting elements 138,
Referring to
The color filter array 143 provides a reference for directions used herein, including the directions of motion blur. A y-direction Y extends perpendicular to the x-direction X that was described above. An a-direction A extends diagonally to the x-direction X and the y-direction Y. A b-direction B extends perpendicular to the a-direction A. The aforementioned directions provide non-limiting examples of blur motion and correction as will be described in greater detail below.
Referring again to
Motion blur is typically attributed to either rotational or translational movement of the camera 100 relative to the object 110 occurring as the object 110 is being imaged. The motion blur correction methods described herein are directed toward correcting motion blur caused by translational motion between the object 110 and the camera 100 as the object 110 is being imaged. It should be noted, however, that motion blur caused by rotational motion may also be corrected by the methods described herein. For example, if the center of relative motion between the camera 100 and the object 110 is a significant distance from the image area, the motion blur associated with the image may be substantially similar to motion blur caused by translational motion. One test to determine whether motion blur attributed to rotational motion can be corrected as though it is attributed to translational motion is by measuring the blur on the image closest to the center of rotation and furthest from the center of rotation. If the two blurs are substantially equivalent, the motion blur may be corrected as though it is the result of translational motion. In one embodiment of the methods described herein, the blurs at the edges of the image are measured to determine if they are substantially equivalent, meaning that the blur is the result of translational motion.
Having summarily described the operation of the camera 100, the generation of image data that is blurred will now be described. In the example illustrated herein, the camera 100 has moved in the x-direction X as the image data was being generated. The movement in the x-direction X is an amount that causes the image 156 to move a distance of two and one-half photodetecting elements 138.
The movement of the camera 100 relative to the object 110 as the two-dimensional photosensor array 130 generates image data results in the image of the object 110 being blurred. More specifically, when the image data is processed using conventional processing methods, the replicated image of the object 110 will be blurred. An example of this blurring is illustrated in
As described in greater detail below, image data is analyzed to determine the direction and amplitude of the blur. The amplitude is the amount of movement that occurred during the generation of the image data. The direction of motion blur as described herein is, for illustration purposes only, limited to the x-direction X, the y-direction Y, the a-direction A and the b-direction B. It should be noted that the motion blur detection and correction methods described herein are applicable to a plurality of amplitudes and directions.
Referring again to
The processor 132 may store a numeric value corresponding to the image data values, or simply image data, generated by each of the photodetecting elements 138. The image data generated by each of the photodetecting elements 138 is proportional to the intensity of light received by each of the photodetecting elements 138. The processor 132 may also store the location from where each value of the image data was generated on the two-dimensional photosensor array 130.
Having described the generation of image data, the detection and reduction of motion blur will now be described. The following procedure is outlined in the flowchart of
Image data generated by the two-dimensional photosensor array 130 is representative of the amplitude of light at a plurality of spatial locations within the area of the image. In the situation where the image data is representative of a color image, the image data is representative of the amplitude of light at a plurality of spatial locations for a plurality of color planes. For example, the image data may be represented in three color planes, a red plane, a green plane, and a blue plane. The image data of each color plane may be transformed to a frequency domain by application of a Fourier transform and is sometimes referred to as the “transformed” image data. The transformed image data represents an amplitude and a phase for each of a set of spatial frequencies.
The transformed image data offers a perspective on the image data relating to the amplitude and direction of blur within the image. The transformed image data may be manipulated as described below to reduce motion blur. The image data may then be retransformed by way of an inverse Fourier transform to its original format for presentation to a user. The motion blur in the retransformed image data has been reduced by the manipulation.
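The transform/manipulate/retransform pipeline described above can be sketched with NumPy's FFT routines. This is a minimal illustration under our own naming, not the inventor's implementation; a real implementation would operate on each color plane of the image data.

```python
import numpy as np

def transform_plane(plane):
    """Transform one color plane of image data to the frequency domain."""
    return np.fft.fft2(plane)

def retransform_plane(transformed):
    """Return (possibly manipulated) frequency-domain data to the spatial domain."""
    # The imaginary residue left by the inverse transform is numerical noise.
    return np.real(np.fft.ifft2(transformed))

# With no manipulation between the two steps, the round trip
# reproduces the original plane to numerical precision.
plane = np.arange(16.0).reshape(4, 4)
restored = retransform_plane(transform_plane(plane))
```

Any blur-reducing manipulation would be applied to the transformed data between these two calls.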
Motion blur reduces the amplitude of the transformed image data, especially in the higher spatial frequencies and always in the direction of the blur motion. When the image data is transformed into the frequency domain using a Fourier transform or the like, the magnitude of the amplitude of sinusoidal signals throughout the set of spatial frequencies in a direction perpendicular to the direction of the motion remains virtually unaffected. The same occurs with regard to directions that are close to perpendicular to the direction of the motion.
Having summarized motion blur, its detection and minimization will now be described in greater detail.
An example of the spatial affects of motion blur are shown in the graphs of
The graph of
It should be noted that many digital cameras have various types of blur filters, such as a blurry lens, a birefringent filter or lens, or other blur filter device. The blur filter is to prevent objectionable aliased signal artifacts in the image. Aliased signal artifacts are typically associated with large areas of moderately high spatial frequency repetitive patters, such as the weave of a shirt.
The graph of
As will be described in greater detail below, motion blur in the replicated image is reduced by increasing the magnitude of the amplitude of the sinusoidal signals at specific spatial frequencies of the frequency domain transformed image in the direction of the motion blur. It should be noted that the sinusoidal signals are used herein for illustration purposes and that they are derived by way of a Fourier transform. It should also be noted that the inventive concepts described herein are applicable to other signals derived from other transform functions.
Different sharpening kernels may be applied to the image data in the direction of the motion blur to increase the magnitude of the amplitude of the high spatial frequency content. In one non-limiting example, a Wiener correction algorithm is applied to the frequency domain transformed image data in the direction of the motion blur to increase the magnitude of the amplitude of a specific set of spatial frequency content. It should be noted that other correction algorithms may be applied to the image data to reduce motion blur.
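A hedged sketch of this kind of correction follows: a 1-D Wiener-style correction factor applied along the row (x) direction of frequency-domain data. The two-pixel box-blur model and the noise term k = 0.01 are assumptions chosen for illustration, not values taken from the text.

```python
import numpy as np

def wiener_correct_rows(transformed, blur_len=2, k=0.01):
    """Apply a 1-D Wiener correction along the row (x) direction of
    frequency-domain image data. blur_len models an assumed N-pixel box
    blur; k is an assumed noise-to-signal term that keeps the very high
    spatial frequencies from being over-amplified."""
    n = transformed.shape[1]
    box = np.zeros(n)
    box[:blur_len] = 1.0 / blur_len        # N-pixel box blur along x
    H = np.fft.fft(box)                    # blur transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + k)  # Wiener correction factor
    return transformed * W[np.newaxis, :]

# A point of amplitude 4 blurred over two pixels, then corrected:
# the peak is substantially restored without full (noisy) inversion.
row = np.zeros((1, 16))
row[0, 8] = 4.0
blurred = (row + np.roll(row, 1, axis=1)) / 2.0
restored = np.real(np.fft.ifft(wiener_correct_rows(np.fft.fft(blurred, axis=1)), axis=1))
```

The k term is what keeps the correction from amplifying noise at frequencies where the blur transfer function is nearly zero.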
Having summarily described the effects of motion blur and a method of minimizing the effects of motion blur, the detection of motion blur will now be described.
As summarily described above, motion blur due to translational motion, unlike other blurs, only occurs in a single direction. For example, with reference to the motion blur illustrated by the images 156 and 158 of
The detection of motion blur may be accomplished by analyzing image data generated by photodetecting elements 138 located in a portion of the two-dimensional photosensor array 130. A user of a camera typically assures that the subject of the photograph is focused and substantially centered in the photograph. Accordingly, the center of the photograph is where the sharpest focused image is located. For this reason, in the examples described herein, image data generated by a central portion of the two-dimensional photosensor array 130 will be analyzed for motion blur. For example, image data generated by the middle one-ninth of the two-dimensional photosensor array 130 may be analyzed for motion blur. It should be understood, however, that any portion of the two-dimensional photosensor array 130 may be analyzed for motion blur.
For illustration purposes, an example of analyzing image data to determine the direction and magnitude of motion blur is provided. The analysis commences by analyzing image data generated by the photodetecting elements 138 located in the vicinity of a corner 160 of the image 156.
The first step in determining whether motion blur exists in the image data is to analyze the magnitude of the amplitude of the high spatial frequency content in orthogonal directions over an area of an image. The magnitude of the amplitude of the high spatial frequency content in the x-direction X and the y-direction Y is proportional to the figure of merit in the x-direction X and the y-direction Y, respectively. The figure of merit provides a basis for determining the degree to which light/dark transitions occur in a specific direction in an image. A non-limiting example of a figure of merit in the x-direction X is referred to herein as FX and is calculated as follows:
- FX = Σ (m = 1 to q) Σ (n = 1 to p − 2) |X(n + 2, m) − X(n, m)|
- wherein p is the number of photodetecting elements 138 in the x-direction X that are to be analyzed, q is the number of rows that are analyzed in the x-direction X, and X(n,m) is the value of the image data generated by the photodetecting element 138 at that location. Accordingly, n designates the column number of the photodetector array and m designates the row number. It should be noted that in the non-limiting example described herein, the figure of merit FX may be calculated in the center of the image where the sharpest focus occurs. It should also be noted that the figures of merit are calculated in a single color plane and not by adjacent photodetecting elements for the example of the digital camera described herein. This assures that the figure of merit measures motion blur and not color transitions in the image. Furthermore, by not analyzing adjacent photodetecting elements, the anti-aliasing blur filter will have little, if any, influence on the figure of merit calculation. It should be noted that virtually any increment of pixel values may be used to calculate the figure of merit. A preferred embodiment will use photodetecting elements that are close together within one color plane, because this will emphasize the higher spatial frequencies. The higher spatial frequencies are the most sensitive measures of motion blur. It should also be noted that the figures of merit may be calculated in any direction to measure motion blur in any direction.
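The figure-of-merit calculation described above amounts to a sum of absolute differences. A minimal sketch follows; the stride of two is an illustrative choice that skips adjacent photodetecting elements so that same-color pixels of a Bayer mosaic are compared.

```python
import numpy as np

def figure_of_merit_x(plane, p, q, step=2):
    """Sum of absolute differences along the x-direction over a p-column
    by q-row analysis window. The step of 2 (an illustrative choice)
    skips adjacent photodetecting elements so that same-color pixels of
    a Bayer mosaic are compared."""
    window = plane[:q, :p]
    return float(np.sum(np.abs(window[:, step:] - window[:, :-step])))

# A vertical light/dark edge produces strong transitions in the x-direction.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
fx = figure_of_merit_x(img, p=8, q=8)
```

The same edge would produce a figure of merit of zero in the y-direction, since no transitions occur along a column.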
As with the figure of merit in the x-direction X, the figure of merit in the y-direction Y is referred to as FY and is calculated as follows:
- FY = Σ (n = 1 to p) Σ (m = 1 to q − 2) |Y(n, m + 2) − Y(n, m)|
- wherein q is the number of photodetecting elements 138 in the y-direction Y that are to be analyzed, p is the number of columns in the y-direction Y, and Y(n,m) is the value of the image data generated by the photodetecting element 138 at that location. It should be noted that in the non-limiting example described herein, the figure of merit FY is calculated along the column 166. In one preferred embodiment, p is equal to q so that the figures of merit are calculated from a square section of the image.
As described above, the figures of merit described herein are non-limiting examples of figures of merit. Other variations of the figure of merit can use weighted multiple color planes. For example, the figures of merit may be calculated in all the color planes. The values from the different color planes can be weighted and summed together. For example, each green plane may be weighted thirty percent, the red plane may be weighted thirty percent and the blue plane may be weighted ten percent. Likewise, the spatial frequencies may be weighted. It is also possible to combine weighted color planes with weighted spatial frequencies. In another variation, higher order numerical methods may be used to generate the slope estimates. These slope estimates may be used rather than the simple difference calculations described above.
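The weighted combination of per-plane figures of merit might be sketched as follows, using the example weights above (thirty percent for each green plane and for red, ten percent for blue). The stride-two difference sum is an assumed per-plane figure of merit, not a formula given in the text.

```python
import numpy as np

def diff_sum_x(plane, step=2):
    """Per-plane x-direction difference sum (a simple figure of merit)."""
    return float(np.sum(np.abs(plane[:, step:] - plane[:, :-step])))

def weighted_figure_of_merit(green1, green2, red, blue):
    """Weighted combination of per-plane figures of merit using the
    example weights from the text: thirty percent for each green plane,
    thirty percent for red and ten percent for blue."""
    return (0.3 * diff_sum_x(green1) + 0.3 * diff_sum_x(green2)
            + 0.3 * diff_sum_x(red) + 0.1 * diff_sum_x(blue))

# With identical planes the weights sum to one, so the combined figure
# of merit equals the single-plane value.
edge = np.zeros((8, 8))
edge[:, 4:] = 10.0
combined = weighted_figure_of_merit(edge, edge, edge, edge)
```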
The figures of merit provide indications of the magnitudes of the amplitudes of spatial frequencies of the image in the specified directions. Referring briefly to
As described above, motion blur due to translational motion, unlike other blurs, occurs only in one direction. Accordingly, if the figure of merit in the x-direction X, FX, differs from the figure of merit in the y-direction Y, FY, motion blur likely occurred during the generation of the image data. This difference in the figures of merit may be calculated by taking the ratio of FX to FY or FY to FX and comparing the ratio to a preselected value. As a non-limiting example, if the ratio is greater than 1.4, it can be assumed that motion blur occurred. It follows that the direction of the motion blur will be in the direction having the lower figure of merit because the magnitudes of the transitions in this direction are lower. It also follows that the value of the ratio determines the amount of motion blur. For example, a greater value of the ratio means that the image was blurred over a greater number of photodetecting elements 138,
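The ratio test just described might be expressed as follows; the 1.4 threshold is the non-limiting example value from the text.

```python
def detect_motion_blur(fx, fy, threshold=1.4):
    """Compare orthogonal figures of merit. If their ratio exceeds the
    threshold (1.4 is the non-limiting example from the text), motion
    blur is assumed along the direction with the LOWER figure of merit,
    since fewer light/dark transitions survive blur in that direction."""
    ratio = max(fx, fy) / min(fx, fy)
    if ratio <= threshold:
        return None                    # no motion blur detected
    return 'x' if fx < fy else 'y'     # direction of the blur
```

For example, figures of merit of 100 in x and 260 in y give a ratio of 2.6, indicating blur in the x-direction, while 100 and 110 give a ratio of 1.1 and no detection.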
When a determination has been made that the image has been blurred or, more specifically, that the replicated image is blurred, the image data is processed to minimize the blur of the replicated image. Minimizing motion blur involves increasing the magnitude of the amplitude of sinusoidal signals at specific spatial frequencies of the frequency domain transformed image data in the direction of the blur. A non-limiting example may be achieved by amplifying image data values corresponding to higher spatial frequencies. For example, in the case where the motion blur occurs in the x-direction X, the image data generated by individual rows 140,
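Row-by-row sharpening confined to the direction of the blur might look like the following sketch; the −0.5, 2, −0.5 kernel is an illustrative choice.

```python
import numpy as np

def sharpen_rows(image, kernel=(-0.5, 2.0, -0.5)):
    """Apply a 1-D sharpening kernel row by row, i.e. only in the
    x-direction of the blur; the y-direction is left untouched."""
    out = np.empty_like(image, dtype=float)
    for i, row in enumerate(image):
        out[i] = np.convolve(row, kernel, mode='same')
    return out

# A blurred light/dark transition is accentuated along the row: values
# just before the edge are pushed down, values just after are pushed up.
row_img = np.array([[0.0, 0.0, 5.0, 10.0, 10.0]])
sharp = sharpen_rows(row_img)
```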
It should be noted that the effects of motion blur occur to all the color planes equally. Therefore, image data from one color plane may be analyzed to determine the direction and amount of motion blur. The magnitude of the amplitude of sinusoidal signals at specific spatial frequencies of the frequency domain transformed image of the color planes may be increased equally in order to minimize the effects of motion blur on the whole composite image. It should be further noted that processing the image data to minimize the motion blur may be performed during the process of demosaicing the image data as disclosed in the United States patent application of Taubman, previously referenced. This may, as an example, involve scaling demosaicing coefficients in the direction of the motion blur to increase the magnitude of the amplitude of sinusoidal signals at specific spatial frequencies of the frequency domain transformed image in the direction of the motion blur. Other techniques can be used to reduce motion blur, such as directional sharpening kernels, windowed directional sharpening kernels, and directional deconvolution correction.
The procedure described above determines the amount and direction of motion blur. It should be noted that the procedure described above is applicable for detecting and minimizing motion blur along any orthogonal directions. For illustration purposes, an example of detecting and minimizing motion blur in additional directions diagonal to the x-direction X and the y-direction Y is provided. The diagonal directions are referenced in
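A figure of merit along the diagonal a-direction can be formed the same way, differencing along the diagonal rather than along a row or column. The unit diagonal step is an assumption; in a Bayer mosaic the diagonal neighbors of a green pixel are also green, so this comparison can stay within the green plane.

```python
import numpy as np

def figure_of_merit_diag(plane, step=1):
    """Sum of absolute differences along the diagonal a-direction. The
    unit diagonal step is an assumption; in a Bayer mosaic the diagonal
    neighbors of a green pixel are also green."""
    return float(np.sum(np.abs(plane[step:, step:] - plane[:-step, :-step])))

# A vertical light/dark edge also registers along the diagonal,
# though more weakly than it would in the x-direction.
edge = np.zeros((8, 8))
edge[:, 4:] = 10.0
fa = figure_of_merit_diag(edge)
```

Pairing this with a figure of merit along the orthogonal b-direction allows the same ratio test to be applied to the diagonal directions.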
Having described the method for determining whether motion blur is present in an image and a method of minimizing the motion blur, a detailed example of reducing motion blur is now provided.
The following example is based on the imaged point of light of
As shown by the data of Table 1, motion blur attenuated the high spatial frequency content of the transformed image. More specifically, the values of the high spatial frequencies of five, six, seven, and eight are attenuated. Motion blur did not attenuate the values of the lower spatial frequencies.
Various image sharpening kernels can be applied to the image data in the direction of the motion blur in order to increase the magnitude of the amplitude of the high spatial frequency content. The result of the increased amplitude of the high spatial frequency content is a sharper image, which is a reduction of the blur caused by motion. More specifically, the transitions between light and dark areas are accentuated.
Table 1 shows the Wiener correction factor that is applied to the signal at the specific spatial frequencies in the direction of the motion blur so as to reduce motion blur. It should be noted that the Wiener correction factor used herein has a minimal impact on the high spatial frequency content. As shown in Table 1, the frequencies seven and eight are minimally impacted by the Wiener correction factor applied herein. This is because the very high spatial frequency content of the image data tends to include a lot of noise. Thus, if the very high spatial frequency content is amplified, the noise will likely be amplified more than the signal.
The resulting image after the application of the Wiener correction factor is shown in
As illustrated by Table 2, the Wiener-corrected image restores the image to a close approximation of the image represented by the blur filter column. It should be noted that the value of the highest amplitude at the spatial location nine ideally would be restored to a value of four; however, the Wiener correction factor as described herein restored it to a value of 3.37 from its motion-blurred value of 2.8. It should also be noted that the Wiener correction factor illustrated herein has very few negative numbers, and those it has are of very small magnitude. High-magnitude negative numbers tend to increase the Gibbs effect, which is detrimental to image quality. The −1, 3, −1 sharpening kernel did increase the value of the highest spatial frequency. However, the value of 4.4 is much greater than the ideal value of 4.0, which increases the noise and is actually detrimental to the image quality.
As described above, different sharpening algorithms can be applied to the image data in the direction of the motion blur in order to reduce the motion blur. In Table 2, two alternative examples of sharpening algorithms or kernels are provided. More specifically, the results of a −1, 3, −1 kernel and a −0.5, 2, −0.5 kernel are shown in Table 2.
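The difference between the two kernels can be seen on a toy example of a point of amplitude four blurred over two pixels. This is a simplified illustration, not the Table 2 data: it shows only that the stronger kernel amplifies the blurred signal more, which is the trade-off against noise discussed above.

```python
import numpy as np

def apply_kernel(row, kernel):
    """Convolve a single row with a 1-D sharpening kernel."""
    return np.convolve(row, kernel, mode='same')

# A point of amplitude four blurred over two pixels (two samples of 2.0).
blurred_row = np.array([0.0, 0.0, 2.0, 2.0, 0.0, 0.0])
strong = apply_kernel(blurred_row, [-1.0, 3.0, -1.0])    # -1, 3, -1 kernel
mild = apply_kernel(blurred_row, [-0.5, 2.0, -0.5])      # -0.5, 2, -0.5 kernel
```

On this toy input the −1, 3, −1 kernel lifts the blurred peak higher than the −0.5, 2, −0.5 kernel does, at the cost of larger negative overshoots beside it.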
Having described an embodiment of the motion blur detection and minimization methods, other embodiments will now be described.
The method described herein corrects for two pixels of motion blur. It is to be understood, however, that the method may be modified to correct for different amounts of motion blur. In one embodiment, a determination is made as to the amount of motion blur in the image. The image is then ‘deblurred’ an amount based on the amount of blur in the image. For example, the method may be able to correct for two or four pixels of motion blur. If the image is determined to have greater than one and less than three pixels of motion blur, the image may be corrected for two pixels of motion blur. If the image is determined to have three to five pixels of motion blur, the image is corrected for four pixels of motion blur.
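The bucketing just described might be sketched as follows. Behavior outside the stated ranges is our assumption: no correction at one pixel or less, and larger blurs clamped to the largest supported correction.

```python
def correction_amount(blur_pixels):
    """Map a measured amount of motion blur (in pixels) to the nearest
    supported correction, per the example in the text: more than one and
    less than three pixels are corrected as two pixels; three to five
    pixels are corrected as four. Outside those ranges the behavior is
    an assumption (no correction at or below one pixel; clamp above five)."""
    if blur_pixels <= 1.0:
        return 0
    if blur_pixels < 3.0:
        return 2
    return 4
```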
The detection of motion blur has been described herein as being accomplished by analyzing image data representative of an object. Motion blur may also be detected by using a sensor or plurality of sensors mounted to the camera 100,
Referring to
Referring again to
In another embodiment, small portions of the image are analyzed to determine if motion blur exists in a small portion of the image. The motion blur is then minimized in that small section. For example, the camera may be held stationary relative to a background as image data is being generated. An object being imaged, however, may be moving relative to the camera and the background as image data is being generated. The method described above will reduce the motion blur of the image of the object without creating blur in the background.
The above described method of minimizing motion blur becomes more useful as the density of photodetecting elements located on a two-dimensional photosensor array increases. The purpose of having an increased density of photodetecting elements is to provide a more defined or sharper image. The susceptibility to motion blur, however, increases as the density of photodetecting elements increases. During the period that the photodetecting elements are generating image data, the image of the object is more likely to shift between photodetecting elements when the density of photodetecting elements is high. This shifting will be detected as motion blur and will be minimized accordingly.
The above described method of minimizing motion blur also becomes more useful in generating image data in relatively dark environments. The period required by the photodetecting elements to generate image data is proportional to the amount of light reflecting from the object being imaged. Accordingly, when imaging is performed in a relatively dark environment, the period required to generate the image data increases. This increased period increases the likelihood that the user of the camera will move the camera while image data is being generated, which will cause motion blur. By implementing the above-described procedure, the motion blur can be minimized.
Other embodiments for determining and minimizing motion blur may be employed using the methods described herein. In one embodiment, several color planes are analyzed to determine the motion blur. These include the green, red, and blue color planes. In another embodiment, more than four directions are analyzed for the presence of motion blur. For example, eight directions may be analyzed, which offers a more precise determination of the direction of motion blur. In yet another embodiment, the image data is analyzed to determine if different amounts of motion blur are present. For example, the image data may be analyzed to determine if no motion blur exists, two pixel motion blur exists, four pixel motion blur exists and so on.
While an illustrative and presently preferred embodiment of the invention has been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed and that the appended claims are intended to be construed to include such variations except insofar as limited by the prior art.
Claims
1. A method for reducing motion blur in a digital image, said method comprising increasing the magnitude of the amplitude of the signals in a preselected set of spatial frequencies of the image in the direction of the motion blur.
2. The method of claim 1, wherein said signals are sinusoidal signals and wherein said spatial frequencies are derived via a Fourier transform.
3. The method of claim 1 and further comprising determining the direction of motion blur.
4. The method of claim 3, wherein said determining the direction of motion blur comprises:
- providing image data representative of at least a portion of said digital image;
- analyzing said image data to calculate a first figure of merit of said digital image in a first direction;
- analyzing said image data to calculate a second figure of merit of said digital image in a second direction, said first and said second directions being substantially orthogonal;
- calculating a first ratio of said first figure of merit to said second figure of merit, said ratio being the greater of said first or second figure of merit divided by the lesser of said first or second figure of merit; and
- comparing said first ratio to a preselected value, wherein motion blur exists in said digital image if said first ratio is greater than said preselected value.
5. The method of claim 3, wherein said determining the direction of motion blur comprises:
- providing image data representative of at least a portion of said digital image;
- analyzing said image data to calculate a plurality of first figures of merit of said digital image in a plurality of directions;
- analyzing said image data to calculate a plurality of second figures of merit of said digital image, wherein each of said second figures of merit is in a direction substantially orthogonal to a corresponding first figure of merit;
- calculating a plurality of ratios of said first figures of merit to their corresponding second figures of merit, each of said ratios being the greater of a first or second figure of merit divided by the lesser of its corresponding first or second figure of merit; and
- comparing said ratios to a preselected value, wherein motion blur exists in said digital image if one of said ratios is greater than said preselected value.
6. An apparatus for reducing motion blur in an image, said apparatus comprising a computer and a computer-readable medium operatively associated with said computer, said computer-readable medium containing instructions for controlling said computer to reduce motion blur in an image by:
- determining the direction of motion blur by analyzing image data representative of at least a portion of said image; and
- increasing the magnitude of the amplitude of spatial frequency in the direction of said motion blur.
7. The apparatus of claim 6, wherein said increasing the magnitude of the amplitude of spatial frequency comprises increasing the magnitude of the amplitude of the signals in a preselected set of spatial frequencies of the image in the direction of the motion blur.
8. The apparatus of claim 6, wherein said increasing the magnitude of the amplitude of spatial frequency comprises increasing the magnitude of the amplitude of the sinusoidal signals in a preselected set of spatial frequencies of the image in the direction of the motion blur, wherein said spatial frequencies are derived by way of a Fourier transform.
9. The apparatus of claim 8, wherein said image data is transformed back to the spatial domain by an inverse Fourier transform.
10. The apparatus of claim 6, wherein said determining the direction of motion blur comprises:
- analyzing said image data to calculate a first figure of merit of said image in a first direction;
- analyzing said image data to calculate a second figure of merit of said image in a second direction, said first and said second directions being substantially orthogonal;
- calculating a first ratio of said first figure of merit to said second figure of merit, said ratio being the greater of said first or second figure of merit divided by the lesser of said first or second figure of merit; and
- comparing said first ratio to a preselected value, wherein motion blur exists in said image if said first ratio is greater than said preselected value.
11. The apparatus of claim 6, wherein said determining the direction of motion blur comprises:
- analyzing said image data to calculate a plurality of first figures of merit of said image in a plurality of directions;
- analyzing said image data to calculate a plurality of second figures of merit of said image, wherein each of said second figures of merit is in a direction substantially orthogonal to a corresponding first figure of merit;
- calculating a plurality of ratios of said first figures of merit to their corresponding second figures of merit, each of said ratios being the greater of a first or second figure of merit divided by the lesser of its corresponding first or second figure of merit; and
- comparing said ratios to a preselected value, wherein motion blur exists in said image if one of said ratios is greater than said preselected value.
12. A method for reducing motion blur in an image, said method comprising:
- providing image data representative of at least a portion of said image;
- analyzing said image data to detect the presence of motion blur in said image;
- analyzing said image data to detect the direction of motion blur in said image; and
- processing said image data to increase edge acuity of said image in said direction of said motion blur.
13. The method of claim 12, wherein said analyzing said image data to detect the presence of motion blur comprises:
- analyzing said image data to calculate a first figure of merit of said image in a first direction;
- analyzing said image data to calculate a second figure of merit of said image in a second direction, said first and said second directions being substantially orthogonal;
- calculating a first ratio of said first figure of merit to said second figure of merit, said ratio being the greater of said first or said second figure of merit divided by the lesser of said first or said second figure of merit; and
- comparing said first ratio to a preselected value, wherein motion blur exists in said image if said first ratio is greater than said preselected value.
14. The method of claim 13, wherein said analyzing said image data to detect the direction of motion blur comprises determining the lowest value of said first and said second figures of merit, said lowest value corresponding to said direction of motion blur.
15. The method of claim 12, wherein said analyzing said image data to detect the presence of motion blur comprises:
- analyzing said image data to calculate a plurality of first figures of merit of said image in a plurality of directions;
- analyzing said image data to calculate a plurality of second figures of merit of said image, wherein each of said second figures of merit is in a direction substantially orthogonal to a corresponding first figure of merit;
- calculating a plurality of ratios of said first figures of merit to their corresponding second figures of merit, each of said ratios being the greater of a first or second figure of merit divided by the lesser of its corresponding first or second figure of merit; and
- comparing said ratios to a preselected value, wherein motion blur exists in said image if one of said ratios is greater than said preselected value.
16. The method of claim 15, wherein said analyzing said image data to detect the direction of motion blur comprises determining which of said ratios has the highest value and determining the lowest figure of merit of said highest valued ratio, said lowest figure of merit corresponding to said direction of motion blur.
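Claims 15 and 16 together can be sketched as follows: figures of merit are computed for several orthogonal direction pairs, the pair with the highest ratio above the threshold is selected, and the direction with the lower figure of merit within that pair is taken as the blur direction. Restricting the plurality of directions to the horizontal/vertical pair plus the two diagonals, and the pixel-difference figure of merit itself, are illustrative assumptions not recited in the claims.

```python
import numpy as np

def blur_direction(image, threshold=1.5):
    """Sketch of claims 15-16: evaluate orthogonal direction pairs and
    return the blur direction as a (dy, dx) step, or None if no ratio
    exceeds the preselected value."""
    def fom(img, dy, dx):
        # Sum of absolute differences between pixels one step apart
        # (np.roll wraps at the borders; acceptable for a sketch)
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return np.sum(np.abs(img - shifted))
    # Orthogonal direction pairs: horizontal/vertical and the two diagonals
    pairs = [((0, 1), (1, 0)), ((1, 1), (1, -1))]
    best = None
    for d1, d2 in pairs:
        f1, f2 = fom(image, *d1), fom(image, *d2)
        ratio = max(f1, f2) / max(min(f1, f2), 1e-12)
        if ratio > threshold and (best is None or ratio > best[0]):
            # The blur lies along the direction with the lower figure of merit
            best = (ratio, d1 if f1 < f2 else d2)
    return best[1] if best else None
```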
17. The method of claim 12, wherein said processing comprises increasing the magnitude of the amplitude of the signals in a preselected set of spatial frequencies of the transformed image data in the direction of the motion blur.
18. The method of claim 12, wherein said processing comprises increasing the magnitude of the amplitude of the sinusoidal signals in a preselected set of spatial frequencies of the transformed image data in the direction of the motion blur, wherein said image data is transformed by a Fourier transform.
19. The method of claim 18, and further comprising transforming said image data back to the spatial domain by an inverse Fourier transform.
20. The method of claim 12, wherein said processing comprises increasing the amplitude of signals of said image based on the detection and amplitude of motion blur.
Type: Application
Filed: May 5, 2005
Publication Date: Nov 3, 2005
Inventor: Charles McConica (Corvallis, OR)
Application Number: 11/122,906