DYNAMIC COMPUTATION OF LENS SHADING

BROADCOM CORPORATION

Embodiments of the present disclosure utilize captured image information to dynamically determine a lens shading surface being experienced by an imaging device or camera under current conditions. The lens shading surface is then used to apply a correction to the pixels of captured images to compensate for effects of lens shading.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to copending U.S. provisional application entitled, “Image Capture Device Systems and Methods,” having Ser. No. 61/509,747, filed Jul. 20, 2011, which is entirely incorporated herein by reference.

TECHNICAL FIELD

The present disclosure is generally related to lens shading correction for imaging devices.

BACKGROUND

An increasing number of devices are being produced that are enabled to capture and display images. For example, mobile devices, such as cell phones, are increasingly being equipped with digital cameras to capture images, including still snapshots and motion video images.

One of the critical problems in small form factor cameras, such as those in cellular phones, is lens shading: the variation in light transmission through the opto-electrical system of the camera, such that the same light source imaged by the camera at different angles, or at different places on the image, is read by the camera as different values rather than the same value.

As a result, lens shading can cause pixel cells in a pixel array of an image sensor located farther away from the center of the pixel array to have a lower pixel signal value when compared to pixel cells located closer to the center of the pixel array, even when all pixel cells are exposed to the same illuminant condition. Moreover, pixels with different spectral characteristics have different responses to the lens shading, which may cause the appearance of color patches even if the scene is monochromatic. In order to correct the lens shading, a long and expensive calibration process is conventionally performed per camera or mobile product. In many cases, calibration errors are the main source of reduced image quality in these devices.

BRIEF DESCRIPTION OF THE DRAWINGS

Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.

FIG. 1 is a block diagram of an exemplary mobile device with image capture and processing capability in accordance with embodiments of the present disclosure.

FIG. 2 is a diagram representation of an intensity profile under uniform illumination in accordance with embodiments of the present disclosure.

FIG. 3 is a diagram representation of intensity profiles of multiple images under non-uniform illumination in accordance with embodiments of the present disclosure.

FIGS. 4-6 are flow chart diagrams depicting exemplary processes of estimating lens shading in accordance with the disclosed embodiments.

FIG. 7 is a diagram representation of a surface profile that may be created depending on the scene being photographed and the particular illumination characteristics in accordance with the disclosed embodiments.

FIGS. 8-11 are block diagrams illustrating examples of a mobile device employing the image processing circuitry of FIG. 1.

DETAILED DESCRIPTION

Embodiments of the present disclosure utilize captured image information to determine a lens shading surface being experienced by an imaging device or camera under current conditions (e.g., optical conditions, lighting conditions, etc.). The lens shading surface is then used to apply a correction to the pixels of captured images to compensate for effects of lens shading.

Embodiments of the present disclosure relate to image processing performed in devices. For example, embodiments include mobile devices where image processing must be performed with limited resources. Types of such mobile devices include mobile phones (e.g., cell phones), handheld computing devices (e.g., personal digital assistants (PDAs), BLACKBERRY devices, PALM devices, etc.), handheld music players (e.g., APPLE IPODs, MP3 players, etc.), and further types of mobile devices. Such mobile devices may include a camera or image sensor used to capture images, such as still images and video images. The captured images are processed internal to the mobile device.

FIG. 1 shows a block diagram of an exemplary mobile device 100 with image capture and processing capability. Mobile device 100 may be a mobile phone, a handheld computing device, a music player, etc. The implementation of mobile device 100 shown in FIG. 1 is provided for purposes of illustration, and is not intended to be limiting. Embodiments of the present disclosure are intended to cover mobile devices having additional and/or alternative features to those shown for mobile device 100 in FIG. 1.

As shown in FIG. 1, mobile device 100 includes, but is not limited to including, an image sensor device 102, an analog-to-digital (A/D) converter 104, an image processor 106, a speaker 108, a microphone 110, an audio codec 112, a central processing unit (CPU) 114, a radio frequency (RF) transceiver 116, an antenna 118, a display 120, a battery 122, a storage 124, and a keypad 126. These components are typically mounted to or contained in a housing. The housing may further contain a circuit board mounting integrated circuit chips and/or other electrical devices corresponding to these components. Each of these components of mobile device 100 is described as follows.

Battery 122 provides power to the components of mobile device 100 that require power. Battery 122 may be any type of battery, including one or more rechargeable and/or non-rechargeable batteries.

Keypad 126 is a user interface device that includes a plurality of keys enabling a user of mobile device 100 to enter data, commands, and/or to otherwise interact with mobile device 100. Mobile device 100 may include additional and/or alternative user interface devices to keypad 126, such as a touch pad, a roller ball, a stick, a click wheel, and/or voice recognition technology.

Image sensor device 102 is an image capturing device. For example, image sensor device 102 may include an array of photoelectric light sensors, such as a charge coupled device (CCD) or a CMOS (complementary metal-oxide-semiconductor) sensor device. Image sensor device 102 typically includes a two-dimensional array of sensor elements or pixel sensors organized into rows and columns. Each pixel sensor may be identified using pixel sensor coordinates, where “x” is a row number, and “y” is a column number, for any pixel sensor in the array of sensor elements. In embodiments, each pixel sensor of image sensor device 102 is configured to be sensitive to a specific color, or color range. In one example, three types of pixel sensors are present, including a first set of pixel sensors that are sensitive to the color red, a second set of pixel sensors or photo-detectors that are sensitive to green, and a third set of pixel sensors that are sensitive to blue. Image sensor device 102 receives light (from optical system 101) corresponding to an image, and generates an analog image signal corresponding to the captured image. Analog image signal includes analog values for each of the pixel sensors.

Optical system 101 can be a single lens, as shown, but may also be a set of lenses. An image of a scene is formed in visible optical radiation through a shutter onto a two-dimensional surface of the image sensor 102. An electrical output of the sensor carries an analog signal resulting from scanning individual photo-detectors of the surface of the sensor 102 onto which the image is projected. Signals proportional to the intensity of light striking the individual photo-detectors or pixel sensors are obtained in the output in time sequence, typically by scanning them in a raster pattern, where the rows of photo-detectors are scanned one at a time from left to right, beginning at the top row, to generate a frame of video data from which the image may be reconstructed.

A/D 104 receives analog image signal, converts analog image signal to digital form, and outputs a digital image signal. Digital image signal includes digital representations of each of the analog values generated by the pixel sensors or photo-detectors, and thus includes a digital representation of the captured image.

Image processor 106 performs image processing of the digital pixel sensor data received in digital image signal. For example, image processor 106 may be used to generate pixels of all three colors at all pixel positions when a Bayer pattern image is output by image sensor device 102.

Note that in an embodiment, two or more of image sensor device 102, A/D 104, and image processor 106 may be included together in a single IC chip, such as a CMOS chip, particularly when image sensor device 102 is a CMOS sensor, or may be in two or more separate chips.

CPU 114 is shown in FIG. 1 as coupled to each of image processor 106, audio codec 112, RF transceiver 116, display 120, storage 124, and keypad 126. CPU 114 may be individually connected to these components, or one or more of these components may be connected to CPU 114 in a common bus structure.

Microphone 110 and audio CODEC 112 may be present in some applications of mobile device 100, such as mobile phone applications and video applications (e.g., where audio corresponding to the video images is recorded). Microphone 110 captures audio, including any sounds such as voice, etc. Microphone 110 may be any type of microphone. Microphone 110 generates an audio signal that is received by audio codec 112. The audio signal may include a stream of digital data, or analog information that is converted to digital form by an analog-to-digital (A/D) converter of audio codec 112. Audio codec 112 encodes (e.g., compresses) the received audio of the received audio signal. Audio codec 112 generates an encoded audio data stream that is received by CPU 114.

CPU 114 receives image processor output signal from image processor 106 and receives the audio data stream from audio codec 112. In some embodiments, CPU 114 may include an additional image processor. In one embodiment, the additional image processor performs image processing (e.g., image filtering) functions for CPU 114. In an embodiment, CPU 114 includes a digital signal processor (DSP), which may be included in the additional image processor. When present, the DSP may apply special effects to the received audio data (e.g., an equalization function) and/or to the video data. CPU 114 may store and/or buffer video and/or audio data in storage 124. Storage 124 may include any suitable type of storage, including one or more hard disc drives, optical disc drives, FLASH memory devices, etc. In an embodiment, CPU 114 may stream the video and/or audio data to RF transceiver 116, to be transmitted from mobile device 100.

When present, RF transceiver 116 is configured to enable wireless communications for mobile device 100. For example, RF transceiver 116 may enable telephone calls, such as telephone calls according to a cellular protocol. RF transceiver 116 may include a frequency up-converter (transmitter) and down-converter (receiver). For example, RF transceiver 116 may transmit RF signals to antenna 118 containing audio information corresponding to voice of a user of mobile device 100. RF transceiver 116 may receive RF signals from antenna 118 corresponding to audio information received from another device in communication with mobile device 100. RF transceiver 116 provides the received audio information to CPU 114. In another example, RF transceiver 116 may be configured to receive television signals for mobile device 100, to be displayed by display 120. In another example, RF transceiver 116 may transmit images captured by image sensor device 102, including still and/or video images, from mobile device 100. In another example, RF transceiver 116 may enable a wireless local area network (WLAN) link (including an IEEE 802.11 WLAN standard link), and/or other type of wireless communication link.

CPU 114 provides audio data received by RF transceiver 116 to audio codec 112. Audio codec 112 performs bit stream decoding of the received audio data (if needed) and converts the decoded data to an analog signal. Speaker 108 receives the analog signal, and outputs corresponding sound.

Image processor 106, audio codec 112, and CPU 114 may be implemented in hardware, software, firmware, and/or any combination thereof. For example, CPU 114 may be implemented as a proprietary or commercially available processor that executes code to perform its functions. Audio codec 112 may be configured to process proprietary and/or industry standard audio protocols. Image processor 106 may be a proprietary or commercially available image signal processing chip, for example.

Display 120 receives image data from CPU 114, such as image data generated by image processor 106. For example, display 120 may be used to display images captured by image sensor device 102. Display 120 may include any type of display mechanism, including an LCD (liquid crystal display) panel or other display mechanism. In some embodiments, the display may show a preview of images currently being received by the sensor 102, whereby a user may select a control (e.g., shutter button) to begin saving captured image(s) to storage 124.

Depending on the particular implementation, image processor 106 formats the image data output in image processor output signal according to a proprietary or known video data format. Display 120 is configured to receive the formatted data, and to display a corresponding captured image. In one example, image processor 106 may output a plurality of data words, where each data word corresponds to an image pixel. A data word may include multiple data portions that correspond to the various color channels for an image pixel. Any number of bits may be used for each color channel, and the data word may have any length.

In some implementations, display 120 has a display screen that is not capable of displaying the full resolution of the images captured by image sensor device 102. Image sensor devices 102 may have various sizes, including numbers of pixels in the hundreds of thousands or millions, such as 1 megapixel (Mpel), 2 Mpels, 4 Mpels, 8 Mpels, etc. Display 120 may be capable of displaying relatively smaller image sizes.

To accommodate such differences between a size of display 120 and a size of captured images, CPU 114 may down-size a captured image received from image processor 106 before providing the image to display 120, in some embodiments. Such image downsizing may be performed by a subsampling process. In computer graphics, subsampling is a process used to reduce an image size. Subsampling is a type of image scaling, and may alter the appearance of an image or reduce the quantity of information required to store an image. Two types of subsampling are replacement and interpolation. The replacement technique selects a single pixel from a group and uses it to represent the entire group. The interpolation technique uses a statistical sample of the group (such as a mean) to create a new representation of the entire group.
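
By way of illustration only, the following Python sketch shows the two subsampling approaches described above; the function names, the NumPy dependency, and the assumption of a single-channel image array are illustrative and not part of the disclosed implementation.

    import numpy as np

    def subsample_replacement(img, factor):
        # Replacement: one pixel from each factor-by-factor group represents the group.
        return img[::factor, ::factor]

    def subsample_interpolation(img, factor):
        # Interpolation: the mean of each factor-by-factor group represents the group.
        h = (img.shape[0] // factor) * factor
        w = (img.shape[1] // factor) * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))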

As stated above, image processor 106 performs processing of digital image signal and generates an image processor output signal. Image processing may include lens shading correction processes performed by a lens shading sub-module 107 of an image processor 106, in one embodiment.

Due to lens shading, along a given radius, the farther away from the center of the image sensor, the more attenuated the signal from a given pixel circuit becomes. Moreover, pixels with different spectral characteristics have different responses to the lens shading, which may cause the appearance of color patches even if the scene is monochromatic. As such, correction is applied in order to reduce the spatial variation. By applying a gain to attenuated signals according to position, embodiments of the present disclosure perform positional gain adjustment. A function that maps pixel position into a desired correction amount is referred to herein as a lens shading gain adjustment surface. In one embodiment, the surface may consist of an interleaving of several smooth surfaces, one for each pixel type or color. Such a surface may be generated in a CMOS circuit, in one embodiment, and then used to correct the spatially non-uniform sensitivity caused by lens shading across pixel positions in the sensor array.

In conventional processes, shading correction factors for an optical photo system (e.g., lens, image sensor, and/or housing) of a mobile device 100 are determined by imaging a scene of uniform intensity onto the image sensor 102 employed by the device being calibrated. Data of the resulting circular, hyperbolic, or other variation across the image sensor device (see FIG. 2) are derived by prior measurement of image sensor photo detector signals, and a compensating mathematical function or functions are calculated and stored under optimal lab conditions, where imaging a scene of uniform intensity is possible.

Accordingly, by capturing an image of a scene that is known to be a flat illumination field, an actual response may be measured from the image. In some embodiments, a response may be measured in each of the color planes—red, green, blue. The response will show that at its center, the response is strongest and weaker at its edges. Accordingly, pixels corresponding to the edges of the pixel sensor may be multiplied by a relative corrective factor so that the corrected response is flat, after correction. These correction factors may be used for captured image(s) acquired under similar illumination conditions.
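
As a hedged illustration of the conventional flat-field approach, the sketch below computes per-pixel correction factors for each color plane so that the corrected response is flat; the dictionary layout of the color planes and the function name are assumptions made for this example.

    import numpy as np

    def flat_field_gains(flat_capture, eps=1e-6):
        # flat_capture: {'R': 2-D array, 'G': 2-D array, 'B': 2-D array} measured
        # from a scene known to be a flat illumination field.
        gains = {}
        for plane, response in flat_capture.items():
            center = response[response.shape[0] // 2, response.shape[1] // 2]
            # Edges respond more weakly than the center, so their gain exceeds 1.
            gains[plane] = center / np.maximum(response, eps)
        return gains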

However, illumination conditions change as the environment of the mobile device changes. Further, different lens positions within the optical system 101 produce different lens shading effects. Accordingly, correction factors may need to be adjusted to compensate for different light sources within a scene being photographed and/or for lens positioning or qualities (e.g., changes in zoom or focus, particular manufacturing accuracies, mounting of the lens, filter consistencies, etc.). A tuning process that attempts to premeasure all the potential combinations of lens positions and light sources would therefore be complicated and inaccurate, since for each combination the actual response measured from a captured image is different.

In contrast, with embodiments of the present disclosure, shading correction factors for an optical photo system (e.g., the lens, image sensor, and/or housing) of a digital camera or other imaging device are determined by capturing multiple images of a scene in succession. By analyzing the differences between intensity values of the captured images and the shift in detected intensity values with respect to the pixels, the lens shading effect on the camera may be better understood and represented. Therefore, in one embodiment, capturing two images of a same scene with a slight camera shift between image captures provides the reference from which the lens shading gain adjustment can be estimated, as represented in FIG. 3. In particular, a lens shading curve or surface may be determined that caused the differences between the captured images. In some embodiments, preview images captured for display on a viewfinder of a mobile device 100 may be used to determine the lens shading effect, where upon capturing of an image (e.g., after selecting a shutter button or control), the captured image may be corrected to compensate for the current lens shading effect. For example, a series of low resolution images may be used to preview the image to the photographer before actually taking a high resolution image. Then, data of the resulting circular, hyperbolic, or other variation across the image sensor 102 are derived by dynamic measurement of image sensor photo detector signals, and a compensating mathematical function or functions are calculated.

By implementing such a process, the lens shading phenomenon may be estimated dynamically and on the fly. Accordingly, during manufacturing and assembly of a camera or a mobile device equipped with a camera, resources used for corrective lens shading calibration may be eliminated or significantly reduced.

While conditions may often exist that allow for capturing of images that can be used to estimate lens shading, in some situations, conditions may not be present to capture an image that allows for sufficient estimation of lens shading. As an example, an image may be captured where an object in a scene is moving (as opposed to the camera moving). Accordingly, a subsequent capture of the scene is going to be quite dissimilar, since the scene is not static. Further, the lens shading sub-module 107 and/or the image processor 106 may detect that illumination types for the captured images are not the same. Accordingly, the lens shading sub-module 107 and/or image processor 106 may attempt to verify that lighting conditions are stable during capturing of the images that are used to derive the lens shading surface. For example, between capture of one image and the next, the lights may be turned off in the room where the scene is being captured. In such a situation, the mobile device 100 may rely on prestored lens shading correction factors that are suited to a similar illumination type. The lens shading sub-module 107 and/or the image processor 106 may periodically add lens shading correction factors to a reference database 125 of storage 124, as dynamically generated factors are determined to be suitable for future use when conditions do not allow suitable correction factors to be newly generated. Also, in some embodiments, lens shading correction factors may be preloaded or stored in the reference database 125 at a manufacturing facility so that the camera is equipped with preliminary lens shading correction factors that can be used, as needed.

In one embodiment, the correction factors may be initially generated responsive to capturing a scene in a flat field (e.g., a white wall with desired illumination) within a closed environment. Also, since the correction factors captured at the manufacturing facility are used as a secondary measure and are not intended to be used as a primary tool for estimating the lens shading, the scene does not necessarily need to be a perfectly flat field, in one embodiment. Therefore, the motion based calculation may be performed in the manufacturing stage with relatively flat surfaces, which makes measurements faster to obtain and less dependent on measurement conditions. In other words, a wider range of manufacturing conditions is available to be used with systems and processes of the present disclosure.

As stated above, the lens shading sub-module 107 and/or the image processor 106 may detect conditions that do not allow for sufficient measuring of lens shading. Accordingly, in a case where good conditions are not present to measure lens shading, the mobile device 100 takes advantage of lens shading correction factors stored in the reference database 125. Alternatively, in a case where good conditions are present, the lens shading sub-module 107 and/or the image processor 106 compares the differences between recently captured images and defines a lens shading surface (on the fly) by considering the differences between the captured images. Then, a lens shading gain adjustment surface may be chosen that matches the intensity variation or variations reflected by the lens shading surface across the captured images that are to be corrected. The mobile device 100 may also store the lens shading gain adjustment surface in the reference database 125 for later use, as needs arise.

Additionally, one embodiment of dynamic lens shading calculation utilizes gradients or differences between captured image areas to define the lens shading surface. From the lens shading surface, corrections may be prepared to compensate for the effects of lens shading. To determine the lens shading surface, inter image consideration and/or intra image consideration are evaluated. For inter image consideration, in one embodiment, two images are captured and a ratio is calculated between a pixel value in an object in one image and the pixel value of the same place on the same object in a second image (that was taken after camera motion). The calculated ratio represents a local gradient of the lens shading in the direction of the camera or mobile device movement.
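
A minimal sketch of the inter image ratio described above is shown below, assuming the second capture has already been geometrically aligned to the first so that the same object point falls on the same pixel coordinates; the names are illustrative.

    import numpy as np

    def inter_image_ratios(img_a, img_b_aligned, eps=1e-6):
        # Ratio of values at the same object point in the two captures; the camera
        # shift means each point was sampled at two different positions on the
        # shading surface, so the ratio approximates a local shading gradient.
        return img_a / np.maximum(img_b_aligned, eps)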

For intra image consideration, areas with similar colors in the image and/or similar intensities may be used to estimate a portion of lens shading surface from differences in the areas of the same image. Accordingly, in one embodiment, via the lens shading sub-module 107 and/or the image processor 106, a second ratio is calculated, between a pixel value of an object in an image and the pixel value of another object that has similar luminosity in the image. The calculated second ratio represents the gradient of the lens shading between these two points.

For inter image consideration, in order to find matching pixel values in the two images, the images are geometrically matched with one another. In one embodiment, global motion is detected from the two images, where motion parameters may include translation and transformation (e.g., affine or perspective). Then, areas having local motion that is different from the global motion are determined. These areas may have had an object moving in the camera field or scene being photographed. In one embodiment, areas having local motion are not analyzed for gradients. In such a situation, a correction factor may be determined by extrapolation from other areas (not subject to the local motion) in single image data. Also, if intra image analysis is not available on the single image (e.g., the size of the image exceeds a threshold), a captured image may be compensated using stored lens shading correction factors in the reference database 125 instead of determining correction factors dynamically, in one embodiment.
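
For illustration, global translation between the two frames can be estimated and locally moving areas flagged as sketched below; phase correlation is used here only as one simple example of global motion estimation, and the threshold value is an arbitrary assumption.

    import numpy as np

    def global_translation(img_a, img_b):
        # Phase correlation: the peak of the inverse FFT of the normalized
        # cross-power spectrum gives the dominant (global) translation.
        cross = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
        corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > img_a.shape[0] // 2:
            dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2:
            dx -= img_a.shape[1]
        return dy, dx

    def local_motion_mask(img_a, img_b_aligned, thresh=0.15):
        # Areas whose residual difference exceeds the global model likely contain
        # local motion and are excluded from gradient analysis.
        resid = np.abs(img_a.astype(float) - img_b_aligned) / np.maximum(img_a, 1e-6)
        return resid > thresh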

The foregoing processes may be iterative in some embodiments, where after an estimation of lens shading, processes may be repeated to determine a new estimation of lens shading. In some embodiments, the motion detection is performed on full resolution image(s). After the motion is estimated, the first image is transformed to match the second image geometrically and an attempt is made to find matching pixel values.

Gradients and pixel ratios may be affected by noise and inaccuracies. For example, possible sources of noise include pixel noise (i.e., electronic and photonic noise in the process of converting luminance to a digital luminance count), errors in the estimation of motion, changing light conditions between the two images, an object reflecting differently into the camera at different positions, an incorrect assumption of similar luminance where, in reality, the two objects being compared have different luminance, etc. To avoid or reduce such inaccuracies, measures may be taken to calculate the gradients in ‘flat’ areas where there are no rapid changes in luminance (e.g., edges). For example, areas near edges in the captured images may be masked out.
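
The masking of edge areas can be sketched as follows; the relative-gradient threshold is an arbitrary assumption for illustration.

    import numpy as np

    def flat_area_mask(img, grad_thresh=0.02):
        # True where the relative luminance gradient is small, i.e. 'flat' areas
        # in which ratios and gradients can be trusted; edges are masked out.
        gy, gx = np.gradient(img.astype(float))
        rel_grad = np.hypot(gy, gx) / np.maximum(img, 1e-6)
        return rel_grad < grad_thresh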

Then, the lens shading surface may be calculated from the local gradients, in some embodiments. In one embodiment, a model of a lens shading surface may be computed or estimated that matches the measured gradients in the captured images. Accordingly, parameters of the model may be adjusted until an optimal result is determined from all the tested results or trials.

For example, one possible technique determines an optimized analytical parametric surface by selecting a surface model equation (e.g., polynomial, Gaussian, etc.) and calculating the parameters of the lens shading surface model that yield minimal difference between the surface gradient (according to the model) and the measured gradients. Another possible technique, among others, determines an optimized singular value decomposition (SVD) surface composition by selecting the largest surface eigenvectors and calculating the coefficients for the surface composition that yield minimal difference between the surface gradient (according to the model) and the measured gradients.
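
As one hedged example of the parametric-surface technique, the sketch below fits a second-order polynomial model of the log shading surface by ordinary least squares, using the fact that each measured log ratio between aligned frames equals the difference of the surface at two positions separated by the camera shift; the basis choice and function names are assumptions and not the claimed implementation.

    import numpy as np

    def poly_basis(x, y):
        # Second-order polynomial basis for the (log) shading surface.
        return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

    def fit_shading_surface(xs, ys, log_ratios, dy, dx):
        # Each measurement: log_ratios[i] ~ s(x_i, y_i) - s(x_i - dx, y_i - dy),
        # which is linear in the surface coefficients, so least squares suffices.
        # (The constant term cancels in the difference; lstsq returns a
        # minimum-norm fit, which is acceptable since shading is defined only
        # up to an overall scale.)
        design = poly_basis(xs, ys) - poly_basis(xs - dx, ys - dy)
        coeffs, *_ = np.linalg.lstsq(design, log_ratios, rcond=None)
        return coeffs

    def shading_gain_map(coeffs, height, width):
        # Evaluate the fitted log surface and convert it into a gain map,
        # normalized so that the brightest point receives unity gain.
        yy, xx = np.mgrid[0:height, 0:width].astype(float)
        log_s = poly_basis(xx, yy) @ coeffs
        return np.exp(log_s.max() - log_s)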

To illustrate, a Gaussian model may be used to model the lens shading being experienced by the mobile device 100 and values of parameters for the model may be adjusted until an optimal match is found between the model and the measured values. Instead of a Gaussian model, other models may be considered, such as a polynomial surface model, in some embodiments. Alternatively, an SVD process may be used.

To match the pixel values, color layers may be estimated using a variety of techniques, including direct layer estimation, independent color layer estimation, and normalized color domain estimation. For instance, measurements may be made in normalized color domain(s) where possible, since, typically, luminosity changes more rapidly than normalized color in images. Additional measures include calculating a small number of surface model parameters from a large number of measurements; limiting the parameter space to a predefined space according to measurements of sample units under different light (spectra and intensity) conditions; averaging measurements before calculating gradients (e.g., by downsampling the image); calculating global gradients rather than using only local gradients; and segmenting the image into a small number of color segments and estimating global gradients on each one. Also, in some embodiments, possible effects of light flickering during capturing of the images may be addressed and removed from the images.

Accordingly, in one embodiment, one technique of matching pixels involves direct layer estimation, where local gradients are calculated. In particular, in the inter image consideration, differences between the images represent the local gradients. In the intra image consideration, color segments are derived and differences between like color segments are representative of local gradients. An optimized lens shading surface is modeled which matches the local gradients at measured points. Accordingly, a model surface may be computed that fits the local gradients of each of the color segments. From inter and/or intra image considerations, information may be obtained on the gradients at each corresponding sensor point of the image, where the gradients are representative of the lens shading phenomenon. By taking gradients from inter image and/or intra image calculations and optimizing according to the two respective sets, a lens shading surface can be estimated and applied to a captured image.

In some embodiments, lens shading correction for color image sensors may be defined for each of a plurality of color channels in order to correct for lens shading variations across color channels. Further, different techniques or models may be used by the lens shading sub-module 107 and/or the image processor 106 for the different color channels. As an example, the green channel may determine a best fit of color plane parameters for an SVD model, while the red/blue channels may utilize direct layer optimization. In general, once a lens shading surface has been determined, lens shading can be corrected using standard correction methods.

Further, with estimation of the lens shading surface or curve, other image quality processes may benefit. For example, by knowing the lens shading surface, accurate white balancing may be computed. As discussed above, different light sources create different lens shading. Therefore, by determining the lens shading correctly, an unbiased measurement for the white balance can be provided by the mobile device 100.

To illustrate, a white balance may be selected that is appropriate to generate the estimated lens shading curve, where different illuminants have different optical wavelength responses and hence may result in different lens shading surfaces. The image processor 106 or an auto-white-balance (AWB) sub-module of the image processor 106 may then determine the type of illuminant used to generate the lens shading curve that has been estimated and subsequently use this information to correct white balance levels in captured image(s).

In addition to performing accurate white balancing, more robust motion estimation may also be implemented responsive to the lens shading estimation by the lens shading sub-module 107 and/or the image processor 106. For example, from analysis performed in determining the lens shading phenomenon, global motion can be estimated by calculating a mean difference between image areas in the two images captured in a sequence, where the difference corresponds to the same object moving across one image to a different place in the second image. Since the second image has different lens shading characteristics as compared to the first, it also has a different mean brightness as compared to the first image. Accordingly, instead of examining correlations between the images in order to determine a motion vector that can be used to estimate camera motion, statistics used to determine the lens shading can also be used to estimate the camera motion. Therefore, differences in the statistics between the images may be used to calculate the camera or global motion.

As an example, the first image may feature a white ball at a left corner of the frame. The second image may feature the ball at a position to the right of the left corner, where the ball has a brighter intensity than in the first frame. The lens shading for the mobile device 100 has been determined, where the lens shading is found to traverse along one side of the image sensor 102 to the other side. Accordingly, at a pixel sensor corresponding to the left corner of the image, the average intensity value is going to be lower than an average intensity value at a pixel sensor to the right. Therefore, based on the lens shading statistics, it is expected that the intensity values of pixels corresponding to the ball will change based on the lens shading as the ball moves to the right in subsequent images. Therefore, by considering the global and local statistics compiled on the captured images, an object having a different intensity value than a prior value in a prior frame may be determined to be the same object in motion due to the lens shading phenomenon (that has been previously computed). As a result, motion can be analyzed and determined.

FIG. 4 illustrates a flow chart depicting a process of estimating lens shading in accordance with the disclosed embodiments. Lens shading estimation, in accordance with FIG. 4, is performed by a pixel processing pipeline of image processor 106 (FIG. 1) (e.g., lens shading module 107) dynamically and, if necessary, using stored reference surface(s) acquired during a calibration operation. The image processor 106 has access to the stored gain adjustment surface(s) and scene adjustment surface(s) in, for example, reference database 125 (FIG. 1) or other memory storage.

When an image is captured by a digital camera, it is generally not captured under a known illumination type, and a reference may not be available for the current illumination type. The captured image is a natural image, where the lens shading sub-module 107 of the image processor 106 does not have any preset knowledge of the illumination type and may not therefore have a reference correction surface prestored according to the current illumination type. While in conventional processes lens shading correction factors may be solely derived from capturing a scene of a flat field to create an image that contains even color and intensity values except for effects from lens shading, natural images taken by the camera normally have no such flat areas in the image. Accordingly, embodiments of the present disclosure analyze the differences in light transmission from natural images captured by the mobile device 100.

As such, embodiments of the present disclosure take advantage of capturing multiple images in succession and determining a lens shading correction or gain adjustment surface for the present illumination conditions. In particular, since the images are captured by the same image sensor 102 of the mobile device 100, the images are captured using the same optics. Accordingly, intensity values of pixels for the multiple images should ideally be the same, and illumination levels for the captured images should also be the same, since the images are captured within fractions of a second of one another, in some embodiments. In practice, the mobile device 100 may move or shift during the capturing of one image to the next. Also, due to lens shading, the intensity values of the pixels may not be exactly the same.

Accordingly, by analyzing the differences between intensity values of the captured images and the shift in detected intensity values with respect to the pixels of the captured images, the lens shading effect on the mobile device 100 may be better understood and represented. Therefore, in one embodiment, capturing two images of a same scene with slight camera shift between image captures provides the reference from which the corrective lens shading surface can be estimated. In particular, a lens shading curve or surface may be determined that caused the differences between the captured images. In some embodiments, preview images captured for display on a viewfinder of a camera may be used to determine the lens shading effect, where upon capturing of an image (e.g., after selecting a shutter button or control), the captured image may be corrected to compensate for the current lens shading effect.

Lens shading estimation begins with capturing a sequence of images at step 402. At step 404, local gradients of the captured images are determined. As noted above, the local gradients may be determined in a number of different ways. In some embodiments, techniques estimate the local gradients from inter image consideration and/or intra image consideration.

For example, multiple images may be captured and inter image analysis may be performed on the captured images. In addition, intra image analysis may be performed on each captured image. The intra image analysis may be performed in concert with the inter image analysis or apart from the inter image analysis, in some embodiments, based on recognition of a particular condition. For instance, a sequence of images may have been subjected to a level of local motion in the scene being photographed that does not allow for adequate statistics to be obtained. Alternatively, adequate statistics for global motion may not be obtainable, which prohibits one or both approaches from being used or causes prestored statistics or factors in the reference database 125 to be used instead. As an example, a single image may not contain multiple areas with similar colors or intensity.

Referring back to FIG. 4, at step 406, a model of a lens shading surface is compared to the measured gradients from the captured images and the deviation between the two is saved for later comparison. The process proceeds to step 408 where the model is adjusted and compared again with the measured gradients and new deviation(s) are computed and compared with the saved values. The model having the set of smallest deviation values is maintained as the optimum model for the trials previously computed. The process then repeats until an optimum model is determined.

At step 410, a lens shading gain adjustment surface is calculated from the lens shading surface. For embodiments that derived a lens shading surface for each color channel of the image sensor, the lens shading gain adjustment surface may also be determined for each color channel. In other words, lens shading correction for color image sensors may be defined for each of a plurality of color channels in order to correct for lens shading variations across color channels. For these color image sensors, the lens shading gain adjustment surface is applied to the pixels of the corresponding color channel during post-image capture processing to correct for variations in pixel value due to the spatial location of the pixels in the pixel array. In some embodiments, monochrome image sensors, on the other hand, apply a single gain adjustment surface to all pixels of a pixel array. Likewise, color image sensors may use a single lens shading gain adjustment surface across all color channels, in some embodiments.

To illustrate, a pixel value located at x, y pixel coordinates may be multiplied by the gain adjustment value at the x, y coordinates of the lens shading gain adjustment surface. Accordingly, at step 412, lens shading correction is performed on the pixel values of the captured image using the lens shading gain adjustment surface(s).
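
The per-channel multiplication of step 412 can be sketched as below for a raw Bayer mosaic; the RGGB layout and the gain dictionary keys are assumptions made for this example, not details of the disclosed implementation.

    import numpy as np

    def correct_bayer_rggb(raw, gains):
        # Multiply each Bayer sample by its color channel's gain at the same
        # (x, y) location on the lens shading gain adjustment surface.
        out = raw.astype(float).copy()
        out[0::2, 0::2] *= gains['R'][0::2, 0::2]
        out[0::2, 1::2] *= gains['Gr'][0::2, 1::2]
        out[1::2, 0::2] *= gains['Gb'][1::2, 0::2]
        out[1::2, 1::2] *= gains['B'][1::2, 1::2]
        return out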

In some embodiments discussed above, a lens shading module 107 is provided to estimate the effects of lens shading and to possibly correct the gain of individual pixels in captured images. The lens shading module 107 may, for example, be implemented as software or firmware.

The lens shading module 107 may be implemented in image processor 106 as software designed to implement lens shading correction, in one embodiment. Alternatively, lens shading module 107 may be implemented in image sensor 102, in one embodiment.

In some embodiments, the lens shading module 107 utilizes lens shading correction surfaces to determine gain correction for individual pixels to account for lens shading. An individual correction or gain adjustment surface may, for example, comprise parameters used to calculate gain correction, although it will also be understood that in some cases a correction table may be stored. Positional gain adjustments across the pixel array can be provided as digital gain values, one corresponding to each of the pixels. Typically, the farther a pixel is from the center of the pixel array, the more gain needs to be applied to the pixel value. The set of digital gain values for the entire pixel array forms a lens shading gain adjustment surface.

In some embodiments, only a relatively small number of gain values is stored, in order to minimize the amount of memory required to store correction data, and values between the stored values are obtained during the image modification process by a form of interpolation. In order to avoid noticeable discontinuities in the image intensity, these few data values are preferably fit to a smooth curve or curves that are chosen to match the intensity variation or variations across the image that are to be corrected.
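
A minimal sketch of interpolating a small stored grid of gain values up to full sensor resolution is given below; bilinear interpolation is used only as one simple example of the "form of interpolation" mentioned above, and the function name is illustrative.

    import numpy as np

    def expand_gain_grid(grid, height, width):
        # Bilinearly interpolate a coarse grid of stored gain values to a
        # full-resolution lens shading gain adjustment surface.
        gy = np.linspace(0, grid.shape[0] - 1, height)
        gx = np.linspace(0, grid.shape[1] - 1, width)
        y0 = np.floor(gy).astype(int)
        x0 = np.floor(gx).astype(int)
        y1 = np.minimum(y0 + 1, grid.shape[0] - 1)
        x1 = np.minimum(x0 + 1, grid.shape[1] - 1)
        wy = (gy - y0)[:, None]
        wx = (gx - x0)[None, :]
        top = grid[np.ix_(y0, x0)] * (1 - wx) + grid[np.ix_(y0, x1)] * wx
        bottom = grid[np.ix_(y1, x0)] * (1 - wx) + grid[np.ix_(y1, x1)] * wx
        return top * (1 - wy) + bottom * wy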

Also, in some embodiments, the digital gain values are computed from an expression that approximates the desired lens shading gain adjustment surface, since the number of parameters needed to generate an approximate surface is generally significantly lower than the number of values needed to store the digital gain values for every pixel location. Some image sensors 102 have built-in lens shading correction on-chip, while other image sensors rely on a separate image processing chip for this operation.

FIG. 5 is a flowchart representation of a method in accordance with one embodiment of the present disclosure. In particular, a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-3. In step 502, a lens shading surface is continually calculated for preview images being displayed by a mobile device 100. When the calculated lens shading surface is determined to be satisfactory (e.g., no local motion detected, illumination of the scene deemed to be stable, etc.), the lens shading surface is stored in a reference database 125, in step 504. Accordingly, upon selection to capture an image, the newly calculated lens shading surface is used to compensate for lens shading effects in the captured image, in step 506, if the newly calculated lens shading surface was determined to be satisfactory. Otherwise, a lens shading surface prestored in the reference database 125 is used to compensate for lens shading effects in the captured image, in step 508.
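
The decision flow of FIG. 5 can be sketched as follows; the reference_db interface (store, best_match) is hypothetical and merely stands in for reference database 125.

    def choose_shading_surface(preview_surface, is_satisfactory, reference_db):
        # is_satisfactory reflects the checks described above (no local motion,
        # stable illumination, etc.).
        if is_satisfactory:
            reference_db.store(preview_surface)   # step 504: keep for later use
            return preview_surface                # step 506: correct with the new surface
        return reference_db.best_match()          # step 508: fall back to a prestored surface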

Next, FIG. 6 is a flowchart representation of a method in accordance with one embodiment of the present disclosure. In particular, a method is presented for use in conjunction with one or more of the functions and features described in conjunction with FIGS. 1-3. In step 602, two or more images of the same scene, containing objects, are captured, where the images have some displacement between themselves. In step 604, the relative displacement between the images is analyzed based on tracking of image areas with details or discernable objects. In step 606, for each point and for each color plane in the image, a ratio is calculated between the intensity level in the first image and the level of the same object point in the second image. If there were no lens shading, the values would be the same. In step 608, the differences between the values are normalized, and in step 610, from the calculated ratios, a difference surface profile is created by filtering the results and interpolating data points or values, as needed, to produce a smooth lens shading surface profile. As a point of reference, FIG. 7 is a representative lens shading surface profile that may be created depending on the scene being photographed and the particular illumination characteristics. In step 612, after the lens shading surface is extracted or generated from the current scene, the lens shading surface is used to indicate the light source illuminating the scene and to supply an unbiased measurement for the white balance.
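
Steps 606-610 can be sketched as follows, assuming the second image has already been aligned to the first; the tile size, the validity mask, and the function name are illustrative assumptions, and the coarse profile would subsequently be interpolated to full resolution (e.g., with the bilinear expansion sketched earlier).

    import numpy as np

    def difference_surface_profile(img_a, img_b_aligned, valid_mask, tile=32, eps=1e-6):
        # Step 606: per-pixel intensity ratios between the aligned captures.
        ratios = img_a / np.maximum(img_b_aligned, eps)
        h, w = ratios.shape
        th, tw = h // tile, w // tile
        profile = np.ones((th, tw))
        # Steps 608-610: average trusted ratios per tile to filter noise; tiles
        # without trusted samples keep 1.0 and rely on neighboring tiles during
        # the later interpolation to a smooth full-resolution profile.
        for i in range(th):
            for j in range(tw):
                block = ratios[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                mask = valid_mask[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
                if mask.any():
                    profile[i, j] = block[mask].mean()
        return profile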

Conventional lens shading correction processes are characterized by poor performance; inaccurate estimation of spectra from white balance (e.g., different spectra can have the same white balance but different lens shading); inaccurate measurement extrapolation during manufacturing; a costly tuning or calibration process; limited applicability to fixed focus lenses (e.g., fixed optical patterns); and so on. The dynamic lens shading estimation and correction methods disclosed herein improve upon the foregoing drawbacks. As flexible focusing and zoom controls gain popularity with digital cameras and become more complicated, dynamic estimation of lens shading based on current image captures, rather than preset measurements, will provide improved accuracy over current conventional processes. Contemplated advantages include improved image quality with low quality lenses in cellular phones and other camera applications; shorter time to market; a shorter calibration process for the camera in the product development stage; and reduced manufacturing cost to the camera vendor due to a shorter calibration process, or no calibration process, per sensor.

Mobile device 100 may comprise a variety of platforms in various embodiments. To illustrate, a smart phone electronic device 100a is represented in FIG. 8, where the smart phone 100a includes an optical system 101, at least one imaging device or sensor 102, at least one image processor 106 with lens shading sub-module 107, a power source 122, among other components (e.g., display 120, processor 114, etc.). Further, a tablet electronic device 100b is represented in FIG. 9, where the tablet 100b includes an optical system 101, at least one imaging device or sensor 102, at least one image processor 106 with lens shading sub-module 107, a power source 122, among other components (e.g., display 120, processor 114, etc.). Then, a laptop computer 100c is represented in FIG. 10, where the laptop computer 100c includes an optical system 101, at least one imaging device or sensor 102, at least one image processor 106 with lens shading sub-module 107, a power source 122, among other components (e.g., display 120, processor 114, etc.). Also, a digital camera electronic device 100d is represented in FIG. 11, where the digital camera 100d includes an optical system 101, at least one imaging device or sensor 102, at least one image processor 106 with lens shading sub-module 107, a power source 122, among other components (e.g., display 120, processor 114, etc.). Therefore, a variety of platforms of electronic mobile devices may be integrated with the image processor 106 and/or lens shading sub-module 107 of the various embodiments.

Embodiments of the present disclosure can be implemented in hardware, software, firmware, or a combination thereof. In some embodiments, the lens shading sub-module 107 is implemented in software or firmware that is stored in a memory and that is executed by a suitable instruction execution system. In some embodiments, the lens shading sub-module 107 comprises an ordered listing of executable instructions for implementing logical functions and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any means that can contain, store, communicate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical).

If implemented in hardware, as in an alternative embodiment, the lens shading sub-module 107 can be implemented with any or a combination of the following technologies, which are all well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc.

The flow charts of FIGS. 4-6 show the architecture, functionality, and operation of a possible implementation of the image processor 106 and relevant sub-modules. In this regard, each block represents a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in FIGS. 4-6. For example, two blocks shown in succession in FIGS. 4-6 may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved, as will be further clarified hereinbelow.

It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.

Claims

1. An image processing method, comprising:

capturing a sequence of images via an image sensor, wherein the captured images have a global motion shift between them;
tracking a relative global motion displacement between the captured images;
calculating a ratio between an intensity level at a first image of the captured images and an intensity level of a same object point at a second image of the captured images; and
based on the ratios calculated, determining a lens shading surface profile.

2. The method of claim 1, further comprising:

determining a level of light source illuminating a scene recorded in the captured images and providing a measurement for white balance in the captured images.

3. An image processing system, comprising:

an image sensor to capture image data; and
at least one image processor that receives a plurality of captured image data and detects a global motion that is present during capturing of the captured image data, wherein the global motion is used to estimate effects of lens shading on the captured image data.

4. The system of claim 3, wherein the captured image data is not characterized by a flat illumination field.

5. The system of claim 3, wherein the at least one image processor dynamically corrects subsequently captured image data for the effects of lens shading.

6. The system of claim 5, wherein the at least one image processor corrects newly captured image data for the effects of lens shading using prestored corrective factors that are not based on the captured image data when a condition in which the captured image data is captured is determined to not be conducive to using corrective factors based on the captured image data.

7. The system of claim 5, wherein the at least one image processor corrects newly captured image data for the effects of lens shading using corrective factors that are based on the captured image data from a single image when a condition in which the captured image data is captured is determined to not be conducive to using corrective factors based on the captured image data from multiple images.

8. The system of claim 3, wherein the at least one image processor determines a white balance level of the captured image data based on an estimation of the effects of lens shading.

9. The system of claim 3, wherein the at least one image processor determines a motion estimation of the image sensor that captured the captured image data based on an estimation of the effects of lens shading.

10. The system of claim 3, wherein the at least one image processor continually captures the captured image data as part of a preview mode and continually determines the effects of lens shading during the preview mode.

11. An image processing method, comprising:

receiving at least one captured image via an image sensor;
detecting a global motion that is present during capturing of the at least one captured image; and
computing an estimate of effects of lens shading on the at least one captured image from changes in intensity values of pixels in the at least one captured image during a global motion shift.

12. The method of claim 11, wherein responsive to selecting to forgo computing the estimate of lens shading using inter image analysis of a plurality of captured image data, the estimate of the effects of lens shading is computed using intra image analysis of a single captured image, wherein the at least one captured image is the single captured image.

13. The method of claim 11, wherein the estimate of the effects of lens shading is computed using at least inter image analysis of a sequence of captured image data, wherein the at least one captured image is the sequence of captured image data.

14. The method of claim 11, further comprising:

dynamically correcting subsequently captured image data for the effects of lens shading that has been previously computed for current lighting conditions.

15. The method of claim 14, further comprising:

correcting newly captured image data for the effects of lens shading using prestored corrective factors that are not based on the at least one captured image when a condition in which the at least one captured image is captured is determined to not be conducive to using corrective factors based on the at least one captured image.

16. The method of claim 15, wherein the condition comprises a changing illumination level in the at least one captured image, wherein the at least one captured image comprises a plurality of captured images.

17. The method of claim 15, wherein the condition comprises a local motion being detected in a scene that is a subject of the at least one captured image.

18. The method of claim 11, further comprising:

determining a white balance level of the at least one captured image based on an estimation of the effects of lens shading.

19. The method of claim 11, further comprising:

determining a motion estimation of the image sensor that captured the at least one captured image based on an estimation of the effects of lens shading.

20. The method of claim 11, wherein the at least one captured image comprises a plurality of captured images that are continuously captured as part of a preview mode and the effects of lens shading are continuously determined during the preview mode.

Patent History
Publication number: 20130021484
Type: Application
Filed: Dec 19, 2011
Publication Date: Jan 24, 2013
Applicant: BROADCOM CORPORATION (Irvine, CA)
Inventors: Noam Sorek (Zichron Yacoov), Ilia Vitsnudel (Even Yehuda)
Application Number: 13/330,047
Classifications
Current U.S. Class: Motion Correction (348/208.4); 348/E05.031
International Classification: H04N 5/228 (20060101);