Per-channel image intensity correction

- NVIDIA CORPORATION

Techniques for per-channel image intensity correction include linear interpolation of each channel of spectral data to generate corrected spectral data.

Description
BACKGROUND OF THE INVENTION

Computing devices have made significant contributions toward the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous devices, such as digital cameras, computers, game consoles, video equipment, hand-held computing devices, audio devices, telephones, and navigation systems, have facilitated increased productivity and reduced costs in communicating and analyzing data in most areas of entertainment, education, business and science. Digital cameras and camcorders, for example, have become popular for personal use and for use in business.

FIG. 1 shows an exemplary digital camera according to the conventional art. The digital camera 100 typically includes one or more lenses 110, one or more image sensor arrays 130, an analog-to-digital converter (ADC) 140, a digital signal processor (DSP) 150 and one or more computing device readable media 160. The image sensor 130 includes a two-dimensional array of hundreds, thousands, millions or more of imaging sensors, each of which converts light (e.g., photons) into electrons. The array of sensor cells is typically arranged in a pattern of red, green and blue cells. The image sensor 130 may be a charge coupled device (CCD), complementary metal oxide semiconductor (CMOS) device, or the like. Referring now to FIG. 2, an exemplary Bayer CMOS sensor array is illustrated. In the array, rows of red and green sensor cells 210, 220 are interleaved with rows of blue and green sensor cells 230, 240. In a CMOS sensor array, the sensor cells are separated by sense lines 250, 260. In CCD arrays, sense lines are not formed between the rows and/or columns of cells; therefore, the cells are formed right next to each other.

A continual issue when dealing with cameras and other optical devices is the distortion introduced by the lens, image sensor arrays and the like of the camera itself. Many different kinds of distortion can occur, and are familiar problems for camera designers and photographers alike.

Several approaches are traditionally used when correcting distortion. In more expensive cameras, such as single-lens reflex (SLR) cameras, combinations of lenses are used in sequence, with each additional piece of glass often designed to reduce or eliminate a particular type of distortion. Less expensive cameras offer correspondingly fewer hardware fixes for the distortion introduced by their lenses, with integrated solutions, such as mobile phone cameras, having almost no inherent distortion correction.

Distortion can also be corrected after an image has been captured. Digital imagery, such as the pictures and video captured by digital cameras and camcorders, can be manipulated after the image has been taken, and the distortion introduced by the camera itself can be reduced.

Referring again to FIG. 1, light coming through the lens 110 and forming an image on the image sensor 130 will typically be unevenly attenuated across the image plane and color spectrum due to imperfections in the lens 110, filter 120 and image sensor 130. Therefore, the DSP 150 applies a high-order two-dimensional polynomial interpolation across the image plane. The two-dimensional polynomial f(x,y), however, can be expensive to calculate and use. Furthermore, two-dimensional polynomials are often numerically unstable and possess other undesirable properties. Accordingly, there is a continuing need for improved image processing techniques that provide image intensity correction.
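
As a purely illustrative sketch, not part of the original disclosure, the conventional per-pixel polynomial gain correction might look like the following; the coefficient grid `coeffs` is a hypothetical stand-in for calibration data.

```python
import numpy as np

# Minimal sketch of the conventional approach: a high-order two-dimensional
# polynomial gain surface f(x, y) evaluated at every pixel.
def polynomial_gain(x, y, coeffs):
    """Evaluate f(x, y) = sum over i, j of coeffs[i, j] * x**i * y**j."""
    return sum(c * x**i * y**j for (i, j), c in np.ndenumerate(coeffs))

# A 10 x 10 coefficient grid means roughly 100 multiply-accumulates per
# pixel, which is why this correction is expensive to compute.
coeffs = np.zeros((10, 10))
coeffs[0, 0] = 1.0                  # placeholder: unit gain at the center
gain = polynomial_gain(0.5, 0.5, coeffs)
```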

SUMMARY OF THE INVENTION

Embodiments of the present technology are directed toward techniques for per-channel image intensity correction. In one embodiment, a method of performing per channel image intensity correction includes receiving spectral data for a given image. Linear interpolation is applied to each channel of the spectral data to generate corrected spectral data for the given image. The corrected spectral data for the given image may then be output for storage on computing device readable media, for further processing, or the like.

In another embodiment, an imaging system includes one or more lenses, one or more image sensor arrays and a linear interpolator. The one or more image sensor arrays measure spectral data for the given image focused on the arrays by the one or more lenses. The linear interpolator generates corrected spectral data for each channel of the spectral data of the given image.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present technology are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:

FIG. 1 shows a block diagram of an exemplary digital camera according to the conventional art.

FIG. 2 shows a block diagram of an exemplary Bayer CMOS sensor array.

FIG. 3 shows a block diagram of an exemplary image capture portion of a digital camera or camcorder.

FIG. 4 shows a block diagram of an exemplary image sensor array of a digital camera or camcorder.

FIG. 5 shows a graph of an exemplary distortion profile across the image plane.

FIG. 6 shows a block diagram of an exemplary digital camera or camcorder, in accordance with one embodiment of the present technology.

FIG. 7 shows a flow diagram of a method of performing per channel image intensity correction, in accordance with one embodiment of the present technology.

FIGS. 8A and 8B show a block diagram of an exemplary image plane (e.g., sensor array) divided into a plurality of patches.

FIG. 9 shows a block diagram of a bi-cubic interpolation of an exemplary spline surface.

DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to the embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the present technology will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present technology, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, it is understood that the present technology may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present technology.

Referring to FIG. 3, an exemplary image capture portion 300 of a digital camera or camcorder is shown. The image capture portion 300 includes one or more lenses 310, 320, and one or more image sensors 330 mounted in an enclosure 340. The enclosure 340 holds the lenses 310, 320 and image sensors 330 in fixed relationship to each other. The enclosure 340 typically has corrugated sidewalls and fittings to hold the lenses 310, 320 and/or image sensors 330. Referring to FIG. 4, an exemplary image sensor 330 of a digital camera or camcorder is shown. The image sensor 330 includes a plurality of sensor cells 410, 420. Each cell 410, 420 detects the light (e.g., photon) intensity of a given color of light. In an exemplary Bayer image sensor, rows of red and green sensor cells are interleaved with rows of blue and green sensor cells. A CMOS type image sensor array also includes sense lines 450 formed between the rows and/or columns of cells; the sense line regions do not collect light. Therefore, the image sensor array may also include cell lenses 430, 440 disposed on each cell 410, 420. In particular, cell lenses (e.g., a lenticular array) having a wedge shape may be disposed on each image cell to focus the light proximate each image cell, and the corresponding portion of the sense line area, into each corresponding image cell.

The corrugated sidewalls and fittings of the housing and the like tend to cause vignetting of the image at the image sensor 330. In addition, the lenses 310, 320 tend to cause distortion across the plane of the image sensor 330 and chromatic aberration as light passes through the lenses 310, 320. Chromatic aberration causes the distortion profile across the image plane to be shifted for each spectral channel (e.g., red, red-green, blue and blue-green channels). The sense line regions 450 between cells 410, 420 also create distortion. Referring to FIG. 5, an exemplary distortion profile across the image plane is shown. As shown, imperfections in the lenses 310, 320, 430, 440, imperfections in the image sensors 410, 420, and vignetting of the image cause the distortion profile for each illuminant to vary. In addition, chromatic aberration will also cause the distortion profile for each color to be shifted with respect to the other colors. Thus, the distortion profile is also a function of the illuminant.

Referring to FIG. 6, an exemplary digital camera or camcorder, in accordance with one embodiment of the present technology, is shown. The digital camera or camcorder includes one or more lenses 610, one or more image sensor arrays 630, an analog-to-digital converter (ADC) 640, a digital signal processor (DSP) 650 and one or more computing device readable media 660. The image sensor 630 includes a two-dimensional array of hundreds, thousands, millions or more of sensor cells, each of which converts light (e.g., photons) into electrons. The image sensor 630 may be a charge coupled device (CCD), complementary metal oxide semiconductor (CMOS) device, or the like.

The analog-to-digital converter (ADC) 640 converts the sensed intensity of photons into corresponding digital spectral data for each of a plurality of spectral channels. The light intensity sensed by the image sensor array 630 will be unevenly attenuated across the image plane and illuminants (e.g., red, green and blue light) due to imperfections in the lens 610, imperfections in the image sensor 630, vignetting effects caused by the enclosure, and/or the like. Bi-cubic patch arrays in the DSP 650 apply bi-cubic (also known as Bezier) interpolation to each spectral channel (e.g., red, green-red, blue, and green-blue channels) of the spectral data to correct for image intensity distortion across the image plane and illuminant. A separate set of bi-cubic patches is used for each spectral channel. Bi-cubic interpolation is relatively easy to implement in hardware, as compared to two-dimensional polynomials, because the surface is affine as a function of the defining control points. Alternatively, bi-cubic interpolation may be implemented in software (e.g., instructions executing on a processor such as a CPU or GPU).
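
A minimal sketch of this per-channel structure follows; the channel names, the dict-of-arrays image layout, and the `eval_gain` callback (standing in for the bi-cubic patch-array evaluation sketched further below) are all assumptions for illustration.

```python
import numpy as np

def correct_image(raw, gain_surfaces, eval_gain):
    """Apply a separate multiplicative correction surface to each channel."""
    corrected = {}
    for channel in ("red", "green_red", "blue", "green_blue"):
        data = raw[channel]                  # 2-D intensity samples
        h, w = data.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Normalize pixel coordinates to the unit square before evaluating
        # the per-channel correction surface.
        gain = eval_gain(gain_surfaces[channel],
                         xs / (w - 1), ys / (h - 1))
        corrected[channel] = data * gain     # multiplicative correction
    return corrected
```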

Referring now to FIG. 7, a method of performing per channel image intensity correction, in accordance with one embodiment of the present technology, is shown. The method includes receiving spectral data for a given image, at 710. The data includes separate digital intensity data for each of a plurality of spectral channels (e.g., red, green-red, blue and green-blue) across an image plane. At 720, a general two-parameter spline, such as bi-cubic interpolation, is applied to each spectral channel of data to generate corrected spectral channel data for the given image. The correction multiplies the distortion profile surface L(x,y) by a reciprocal function S(x,y) chosen to make the product a constant k. The correction surface function, evaluated on a per-parametric-coordinate basis, is a polynomial and may have up to one hundred coefficients to approximate ten 'wiggles' in each dimension. Bi-cubic interpolation approximates this function and can be evaluated using repeated linear interpolation; the linear interpolation function is a·p1+(1−a)·p0. Bi-cubic interpolation is numerically stable, as compared to two-dimensional polynomials, and possesses a variety of other desirable properties. At 730, the corrected spectral channel data for the given image is output for storage in computing device readable media, further digital signal processing, and/or the like.
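
A toy numeric illustration of the reciprocal-surface idea and of the linear interpolation building block; all values here are made up.

```python
# S is fit so that S(x, y) * L(x, y) is (approximately) a constant k.
k = 1.0
L_samples = [1.00, 0.85, 0.60, 0.40]     # falloff toward an image corner
S_samples = [k / L for L in L_samples]   # reciprocal correction gains
assert all(abs(S * L - k) < 1e-9 for S, L in zip(S_samples, L_samples))

# The building block of the spline evaluation is linear interpolation:
def lerp(p0, p1, a):
    return a * p1 + (1.0 - a) * p0

mid = lerp(1.0, 2.0, 0.5)                # 1.5, halfway between the points
```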

Referring now to FIGS. 8A and 8B, an exemplary image plane (e.g., sensor array) divided into a plurality of patches is shown. In FIG. 8A, the image plane is divided into a plurality of patches with substantially uniform patch boundaries. In FIG. 8B, the image plane is divided into a plurality of patches with adjusted boundaries. The image plane may be divided into one, two, three or more patches in each dimension. A bi-cubic Bezier interpolating patch is defined by 16 (4×4) control points. A 3×3 array of bi-cubic Bezier patches is defined by a 100 (10×10) array of control points. The count is not (4·3)² because adjacent patches share the very control points on their common internal boundaries. The ability to adjust the patch boundaries allows smaller patches in particular areas of the image plane where there are more interesting distortion effects, such as at the edges of the image plane. The bi-cubic Bezier spline (i.e., third order) is evaluated in each patch for each spectral channel.
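
The shared-boundary counting can be checked directly; `control_point_count` below is an illustrative helper, not part of the disclosure.

```python
# Adjacent bi-cubic patches share the control points on their common
# boundary, so an m x n patch array needs (3m + 1) x (3n + 1) control
# points rather than (4m) x (4n).
def control_point_count(m, n):
    return (3 * m + 1) * (3 * n + 1)

assert control_point_count(1, 1) == 16    # a single 4 x 4 patch
assert control_point_count(3, 3) == 100   # the 10 x 10 grid described above
```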

Referring now to FIG. 9, a bi-cubic interpolation of an exemplary spline surface 910 is shown. The bicubic interpolation is defined by four control points (e.g., coefficients) 920-950 that define a hull 960. The control hull 960 bounds where the spline curve can go, such that the surface 910 is contained in the convex hull 960. The surface 910 starts and ends at the edge control points 920, 950 and will get close to, but never go through, the interior control points 930, 940. The Bezier spline surface 910 does not in general pass through the interior control points 930, 940; rather, the surface 910 is "stretched" toward them as though each were an attractive force. Because the hull bounds the curve, a limited number of bits can be dedicated to representing it. The curve can be evaluated by a series of linear interpolations a·p1+(1−a)·p0, which are very easy to implement in hardware and/or software. Bezier splines are visually intuitive and mathematically convenient. Bicubic Bezier surfaces generally provide enough degrees of freedom for most applications.
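
A minimal sketch of evaluating a cubic Bezier curve by repeated linear interpolation (de Casteljau's algorithm), consistent with the endpoint behavior described above; scalar control values are assumed for brevity.

```python
def lerp(p0, p1, a):
    return a * p1 + (1.0 - a) * p0

def cubic_bezier(p0, p1, p2, p3, a):
    """Evaluate a cubic Bezier curve at parameter a via repeated lerps."""
    q0, q1, q2 = lerp(p0, p1, a), lerp(p1, p2, a), lerp(p2, p3, a)
    r0, r1 = lerp(q0, q1, a), lerp(q1, q2, a)
    return lerp(r0, r1, a)

# The curve interpolates the edge control points but not the interior ones.
assert cubic_bezier(1.0, 2.0, 3.0, 4.0, 0.0) == 1.0   # starts at p0
assert cubic_bezier(1.0, 2.0, 3.0, 4.0, 1.0) == 4.0   # ends at p3
```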

A two-dimensional Bezier surface can be defined as a parametric surface where the position of a point S as a function of the parametric coordinates x,y is given by:

S(x, y) = \sum_{i=0}^{n} \sum_{j=0}^{m} B_i^n(x) \, B_j^m(y) \, k_{i,j}
evaluated over the unit square, where

B_i^n(x) = \binom{n}{i} x^i (1 - x)^{n - i}
is a Bernstein polynomial, and

\binom{n}{i} = \frac{n!}{i! \, (n - i)!}
is the binomial coefficient. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries. If the derivatives are unknown, they may be approximated from the function values at points neighboring the corners of the unit square (e.g., using finite differences).
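
A direct, illustrative evaluation of the surface formula above via Bernstein polynomials; the nested-list control grid of scalar values is an assumption made for the sketch.

```python
from math import comb

def bernstein(i, n, x):
    """B_i^n(x) = C(n, i) * x**i * (1 - x)**(n - i)."""
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def bezier_surface(k, x, y):
    """Evaluate S(x, y) for control grid k[i][j] over the unit square."""
    n = len(k) - 1        # degree in x
    m = len(k[0]) - 1     # degree in y
    return sum(bernstein(i, n, x) * bernstein(j, m, y) * k[i][j]
               for i in range(n + 1) for j in range(m + 1))

# Bernstein polynomials sum to one, so a constant control grid yields a
# constant surface -- a quick sanity check.
flat = [[2.0] * 4 for _ in range(4)]
assert abs(bezier_surface(flat, 0.3, 0.7) - 2.0) < 1e-9
```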

For each illuminant (e.g., red, green, and blue light), interpolation can be performed by sampling the entire image at many more points than coefficients (or control points) and then fitting the coefficients or control points with some fitting procedure such as linear least squares estimation. For each illuminant, the interpolation is affine-invariant. Because the interpolation is affine-invariant, shifting or scaling the surface is the same as shifting and/or scaling the control points. In particular, shifting the surface is the same as shifting the control points, and scaling the surface is the same as moving the control points up or down. Therefore, as the illuminant warms up, the coefficients do not need to be recomputed, because information about the shift and scaling can be utilized instead. Accordingly, a calibration process may be utilized to characterize the adjustment (e.g., shift and/or scale) necessary to correct spectral data of the image.
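
A minimal sketch of the least-squares fit described above, shown for a single cubic (four control points) with a made-up falloff model; the surface case is the same idea with a tensor-product Bernstein basis.

```python
import numpy as np
from math import comb

def bernstein(i, n, x):
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

# Sample the measured falloff at many more points than there are unknowns,
# build the Bernstein design matrix, and solve in the least-squares sense.
xs = np.linspace(0.0, 1.0, 50)           # 50 samples, 4 unknowns
samples = 1.0 / (1.0 - 0.4 * xs**2)      # hypothetical reciprocal falloff
A = np.array([[bernstein(i, 3, x) for i in range(4)] for x in xs])
control_points, *_ = np.linalg.lstsq(A, samples, rcond=None)
```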

Embodiments of the present technology are independent of the type of image sensor and can be utilized with a plurality of types of image sensors, such as Bayer arrays and arbitrary sensor configurations, including but not limited to sensor arrays arranged in a stacked fashion or separately, one for each color channel, using a beam splitter, and the like. In addition, embodiments of the present technology may also be utilized in digital video cameras, as video is a series of sequential images. In addition, the camera, camcorder or image capture portion may be integrated into, or attached as a peripheral device to, other electronic devices such as computers, cameras, security systems and the like.

The correction per spectral channel may be performed utilizing any of a large family of spline surfaces (spline patches), such as non-uniform rational B-splines (NURBS) and B-splines. A particular embodiment can use Bezier splines, which can be implemented using a variety of well-known techniques, including recursive linear interpolation (the so-called de Casteljau's algorithm) or direct application of Bernstein polynomials.

The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and their equivalents.

Claims

1. A method of performing per channel image intensity correction comprising:

receiving spectral data for a given image;
applying a general two-parameter spline to each channel of the spectral data to generate corrected spectral data for the given image; and
outputting the corrected spectral data for the given image.

2. The method according to claim 1, wherein the general two-parameter spline comprises bi-cubic Bezier interpolation.

3. The method according to claim 1, wherein the general two-parameter spline comprises b-spline interpolation.

4. The method according to claim 1, wherein the general two-parameter spline comprises non-uniform rational b-spline interpolation.

5. The method according to claim 1, wherein the spectral data is distorted due to imperfection in one or more lenses.

6. The method according to claim 1, wherein the spectral data is distorted due to imperfection in one or more image sensor arrays.

7. The method according to claim 1, wherein the spectral data is distorted by chromatic aberration from one or more lenses.

8. The method according to claim 1, wherein the spectral data is distorted by vignetting effects.

9. The method according to claim 1, wherein the general two-parameter spline can be aggregated into a plurality of connected two-parameter splines to form an array of two-parameter splines.

10. The method according to claim 1, wherein said array of splines can be of any plurality in both dimensions of the two-parameter splines.

11. The method according to claim 1, wherein the boundaries of the spline arrays are of arbitrary spacing, orientation and shape.

12. An imaging system comprising:

one or more lenses;
one or more image sensor arrays for measuring spectral data for a given image focused on the one or more image sensor arrays by the one or more lenses; and
a general two-parameter spline for generating corrected spectral data for each channel of the spectral data of the given image.

13. The imaging system of claim 12, wherein the one or more image sensor arrays are divided into a plurality of adjustable patches.

14. The imaging system of claim 12, wherein the one or more image sensor arrays comprise a complementary metal oxide semiconductor (CMOS) device array.

15. The imaging system of claim 12, wherein the one or more image sensor arrays comprise a charge coupled device (CCD) array.

16. The imaging system of claim 12, wherein the image sensor arrays comprise a Bayer sensor array.

17. The imaging system of claim 12, wherein the image sensor arrays comprise a Foveon sensor array.

18. The imaging system of claim 12, wherein the general two-parameter spline comprises a bi-cubic patch array.

19. The imaging system of claim 12, wherein the imaging system includes a digital camera.

20. The imaging system of claim 12, wherein the imaging system includes a digital camcorder.

Patent History
Patent number: 9379156
Type: Grant
Filed: Apr 10, 2008
Date of Patent: Jun 28, 2016
Patent Publication Number: 20090257677
Assignee: NVIDIA CORPORATION (Santa Clara, CA)
Inventors: Brian Cabral (San Jose, CA), Hu He (Santa Clara, CA), Elena Ing (Santa Clara, CA), Sohei Takemoto (Fremont, CA)
Primary Examiner: Anh Hong Do
Application Number: 12/101,142
Classifications
Current U.S. Class: Details Of Luminance Signal Formation In Color Camera (348/234)
International Classification: G06K 9/40 (20060101); H01L 27/146 (20060101);