OPTICAL DISTORTION CALIBRATION FOR ELECTRO-OPTICAL SENSORS
Optical distortion calibration for an Electro-Optical sensor in a chamber eliminates calibration of the mirror controller and allows for calibration while the target is in motion across the FOV, thus providing a more efficient and accurate calibration. A target pattern is projected through sensor optics with line of sight motion across the sensor FOV to generate a sequence of frames. Knowing that the true distances between the same targets remain constant with line of sight motion across the sensor's FOV, coefficients of a function F representative of the non-linear distortion in the sensor optics are fit from observed target positions in a subset of frames to true line of sight so that distances between targets are preserved as the pattern moves across the FOV. The coefficients are stored as calibration terms with the sensor.
This application claims benefit of priority under 35 U.S.C. 120 as a divisional application of co-pending U.S. patent application Ser. No. 12/014,266 entitled “OPTICAL DISTORTION CALIBRATION FOR ELECTRO-OPTICAL SENSORS” and filed Jan. 15, 2008, the entire contents of which are incorporated by reference.
GOVERNMENT LICENSE RIGHTS
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract HQ0006-01-C-0001/101616 awarded by the Ballistic Missile Defense Organization awarded by DARPA-DSO.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to optical distortion calibration for electro-optical sensors.
2. Description of the Related Art
All imaging systems have some amount of distortion attributable to their optics. Non-linear distortion causes a fixed angular displacement between points in image space to appear to change as the points move across the image. In other words, the observed line-of-sight (LOS) positions are warped. Common types of distortion include pincushion or barrel distortion. Some applications, notably precision stadiometry, require that camera distortions be precisely calibrated so that measurements may be post-compensated. Calibration is markedly more difficult in systems where the required precision or other conditions, such as operation in cryo vacuum conditions, make it impractical to project precision collimated patterns that fill the sensor's entire field of view (FOV), necessitating that a smaller pattern be scanned across the FOV.
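By way of illustration, the following minimal Python sketch uses a simple cubic radial distortion model (an assumed stand-in for the sensor optics, not a model taken from any particular sensor) to show how a fixed true separation between two points appears to change with position in the FOV:

```python
import numpy as np

def radial_distort(p, k=-0.05):
    """Apply a simple cubic radial distortion to a normalized FOV position.
    k < 0 gives barrel-like distortion, k > 0 pincushion-like (illustrative only)."""
    p = np.asarray(p, dtype=float)
    r2 = np.dot(p, p)
    return p * (1.0 + k * r2)

# Two point pairs with the same true separation (0.2), one near the center
# of the FOV and one near the edge.
pairs = {"center": (np.array([0.0, 0.0]), np.array([0.2, 0.0])),
         "edge":   (np.array([0.7, 0.0]), np.array([0.9, 0.0]))}

for name, (a, b) in pairs.items():
    observed = np.linalg.norm(radial_distort(b) - radial_distort(a))
    print(f"{name}: observed separation = {observed:.4f}")
# The observed separations differ (about 0.200 vs 0.181 here) even though the
# true separation is identical -- the "warping" of observed LOS positions.
```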
The current approach used to calibrate electro-optic (EO) sensors in a cryo vacuum chamber is time-consuming, expensive and limited in accuracy. A theodolite is placed looking through a window in the cryo chamber, in place of the sensor and sensor optics. A single point target is projected through a collimator and moved in discrete steps across the FOV using a folding mirror. The mirror must stop at each point to allow the theodolite to observe the actual position of the target in response to a command issued by a mirror controller. The mirror controller is then calibrated by computing a mirror transformation that converts the observed mirror positions to truth. The theodolite and window are removed and the EO sensor and optics are placed in the test chamber. The mirror is moved to sweep the target across the FOV but again must stop at each point so that the mirror readouts can be synchronized to each image. The target position in each image is also measured. The mirror transformation is applied to the mirror position to remove that source of error and provide a calibrated line-of-sight truth position for each mirror position. The distortion correction function is calculated, generally as a 2nd order polynomial fit, to map the measured FOV position of the target in each frame to the calibrated line-of-sight truth position for each corresponding mirror position. The fit provides the coefficients required to post-compensate sensed images for the non-linear distortion induced by the sensor optics. The steps of calibrating the mirror controller and having to stop at each mirror position to observe the target position are the primary limitations on cost, calibration time and accuracy.
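The prior-art fitting step can be sketched as an ordinary least-squares fit of a 2nd order polynomial that maps measured FOV positions to the calibrated line-of-sight truth positions. The sketch below assumes the truth positions are already available from the mirror calibration; the function names and synthetic data are illustrative assumptions:

```python
import numpy as np

def poly2_terms(xy):
    """Second-order polynomial terms [x^2, y^2, x*y, x, y, 1] for (N, 2) positions."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])

def fit_distortion_correction(measured_xy, truth_xy):
    """Least-squares fit of a 2nd order polynomial mapping measured FOV
    positions to calibrated line-of-sight truth positions (one column of
    coefficients per output axis)."""
    coeffs, *_ = np.linalg.lstsq(poly2_terms(measured_xy), truth_xy, rcond=None)
    return coeffs  # shape (6, 2)

# Synthetic illustration: distort known truth positions, then fit an
# approximate correction and check the residuals.
rng = np.random.default_rng(0)
truth = rng.uniform(-1.0, 1.0, size=(50, 2))
measured = truth * (1.0 + 0.03 * (truth**2).sum(axis=1, keepdims=True))
coeffs = fit_distortion_correction(measured, truth)
corrected = poly2_terms(measured) @ coeffs
print("max residual after correction:", np.abs(corrected - truth).max())
```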
SUMMARY OF THE INVENTION
The present invention provides for performing optical distortion calibration for an EO sensor in a chamber that eliminates calibration of the mirror controller by eliminating the need to use sensed mirror positions, and thus allows for calibration while the target is in motion across the FOV, providing a more efficient and accurate calibration.
This is accomplished by projecting a target pattern through sensor optics with line of sight motion across the sensor FOV to generate a sequence of frames. Knowing that the true distances (angular or spatial for a known focal length) between the same targets remain constant with line of sight motion across the sensor's FOV, coefficients of a function F (representative of the non-linear distortion in the sensor optics) are fit from observed target positions in a subset of frames to true line of sight so that distances between targets are preserved as the pattern moves across the FOV. The coefficients are stored as calibration terms with the sensor.
In an embodiment, a target pattern is projected through sensor optics with line of sight motion across the sensor FOV to generate a sequence of frames. The positions of a reference and a plurality of targets in the sensor FOV are measured for a plurality of frames. A function F representative of the distortion in the sensor optics is applied to the observed target and reference positions to provide corrected target and reference positions. A corrected difference position is constructed as the difference between the corrected target position and a corrected reference position. Coefficients for the function F are fit using the observed target and reference positions over a subset of the targets and frames to minimize the scale-normalized variability of the corrected difference position as the targets move across the FOV.
In a first approach, the fit algorithm minimizes the variance of the corrected difference position subject to a constraint that the mean across frames and targets of the magnitude of the corrected difference position in one or more axes match a known average for the target pattern. This approach has the advantage that only gross statistics of the target pattern are required.
In a second approach, the fit algorithm minimizes the expected value of the squared norm of the difference between the corrected difference position and the true difference position for each target. This approach generally provides more accurate coefficients, but requires specific knowledge of the target positions in the target pattern and requires matching observed target positions to true target positions to perform the minimization.
These and other features and advantages of the invention will be apparent to those skilled in the art from the following detailed description of preferred embodiments, taken together with the accompanying drawings, in which:
The present invention provides for performing optical distortion calibration for a flight sensor in a chamber that eliminates calibration of the mirror controller and allows for calibration while the target is in motion across the FOV, thus providing a more efficient and accurate calibration.
An exemplary test setup 10 and calibration procedure are illustrated in
The sequence of frames 34 is passed to a computer 36 that is programmed with instructions to first observe positions of the targets in the frames (step 38) and then, knowing that the true distances between the same targets remain constant with line of sight motion across the sensor's FOV, fit coefficients of a function F (representative of the non-linear distortion in the sensor optics) from the observed target positions to true line of sight so that distances between targets are preserved as the pattern moves across the FOV (step 40). In most cases, such as target detection, function F transforms observed data to truth to correct for distortion errors. In other applications, such as mapping predicted inertial-frame target positions into sensor coordinates, the function F transforms truth to observed. Function F may, for example, be piecewise linear, quadratic, cubic, etc. The fit is generally performed on a subset of the targets and a subset of the frames, where the subset could conceivably include all targets and frames or something less. Because the mirror position cancels out in this fit, calibration of the mirror controller, and even measurement of the mirror positions, is not required; hence the mirror does not have to stop at each frame. Elimination of these two steps both improves the precision of the calibration terms and reduces the resources and time required to perform the calibration. Our calibration process does not provide the X and Y offset terms. However, these terms are typically discarded and then provided by a subsequent, more accurate calibration step for overall alignment.
The coefficients are stored (step 42) as calibration terms 14 in a tangible medium 15, e.g. memory, for EO sensor 13 in flight sensor 12. Once the flight sensor 12 is put into the field on, for example, a space vehicle, the calibration terms are used to post-compensate images captured by the sensor to remove or at least reduce the non-linear distortion effects. As illustrated in
As shown in
Target pattern 18 includes a plurality of targets 20 on a background. The target pattern is typically formed by placing a cold shield having the desired hole pattern in front of a thermal or light source. The pattern suitably includes multiple targets 20 arranged along two or more axes (e.g. x and y axes) in order to extract coefficients along both axes. The targets may be unresolved (<1 pixel wide) or resolved (>1 pixel wide) when projected onto the sensor. Resolved targets (>1 pixel wide) minimize position observation errors due to sub-pixel phase, at the cost of increasing the necessary spacing between targets and filtering the measured distortion function (the measured function becomes an average of the point distortion over the extent of the target). In certain cases of the fit algorithm, it is necessary to determine which observed target corresponds to which true target. Accordingly, the targets are suitably arranged in an irregular pattern so that for each frame there is a unique match of the observed targets to the target pattern regardless of scale, rotation or offset, provided that at least a determined portion, e.g. at least 25%, of the target pattern remains in the FOV.
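Where the fit requires knowing which observed target corresponds to which true target, the matching can be sketched as a hypothesis test over candidate correspondences. The minimal sketch below handles only an unknown offset (scale and rotation, which the pattern also accommodates, are omitted for brevity); the function name and tolerance are illustrative assumptions:

```python
import numpy as np

def match_targets(observed, pattern, tol=0.05):
    """Hypothesize that observed target i corresponds to pattern target j,
    shift the observed set by the implied offset, and score how many observed
    targets then fall within `tol` of some pattern target.  Because the
    pattern is irregular, only the correct hypothesis scores highly.
    Returns a dict {observed index: pattern index} for the best hypothesis."""
    observed = np.asarray(observed, dtype=float)
    pattern = np.asarray(pattern, dtype=float)
    best_score, best_map = -1, {}
    for i in range(len(observed)):
        for j in range(len(pattern)):
            shifted = observed + (pattern[j] - observed[i])
            d = np.linalg.norm(shifted[:, None, :] - pattern[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            hits = d[np.arange(len(observed)), nearest] < tol
            if hits.sum() > best_score:
                best_score = hits.sum()
                best_map = {int(k): int(nearest[k]) for k in np.flatnonzero(hits)}
    return best_map
```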
To perform the fit, each frame (that is processed) must have at least one reference 64 against which the observed target positions are compared. The reference may be implicit or explicit. An example of an implicit reference would be a subset of the target positions themselves, in which case each observed target position would be compared (differenced) with each target position in the subset and the fit performed on these differences. An explicit reference could be some function of a subset of targets or a separate object and represents the gross shift of a frame across the FOV. For example, a single target, a centroid of multiple targets, or a boundary of multiple targets could be the reference. Alternately, a plurality of slits or other resolved objects could form the reference. Although the reference is typically the same in each frame, it could be computed differently if desired. In edge frames a portion of the reference may be missing. This can be compensated for by making sure that the measured position for the observed portion of the reference remains consistent.
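As one illustrative choice of explicit reference from the options above, the sketch below takes the centroid of the observed targets in a frame as the reference and forms the per-frame difference positions; the function name is an assumption for illustration:

```python
import numpy as np

def frame_differences(observed_targets):
    """Given the (N, 2) observed target positions of one frame, use their
    centroid as the explicit reference and return the (N, 2) difference
    positions (observed target minus reference) together with the reference."""
    observed_targets = np.asarray(observed_targets, dtype=float)
    reference = observed_targets.mean(axis=0)   # centroid of observed targets
    return observed_targets - reference, reference
```

In edge frames where part of the pattern has left the FOV, the same computation would be applied only to the consistently observed subset so that the reference remains comparable across frames.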
To illustrate the principles of the fitting algorithm let us assume without loss of generality that the reference 64 is a single target and consider the effects of non-linear distortion on a single target 20 as illustrated in
The same principle is illustrated in more detail in
An embodiment of the calibration procedure is illustrated in
Corrected observed target and reference positions are represented as follows (step 104):
$\mathrm{Pobs}_{n,m}$ is the observed position of target n in frame m, where the mirror position generally changes from frame to frame.
$\mathrm{Pcor}_{n,m}=F(\mathrm{Pobs}_{n,m})$ is the position of target n in frame m corrected into “truth” coordinates, where the coefficients of function F are determined by the fit algorithm. In most cases F is calculated as a polynomial, for example a 2nd order polynomial: $F(X_{n,m},Y_{n,m})=C_{20}X_{n,m}^{2}+C_{02}Y_{n,m}^{2}+C_{11}X_{n,m}Y_{n,m}+C_{10}X_{n,m}+C_{01}Y_{n,m}+C_{00}$.
$\mathrm{Pobs}_{0,m}$ is the observed position of the reference in frame m. For simplicity a single reference object, or average of objects, “O” is assumed.
$\mathrm{Pcor}_{0,m}=F(\mathrm{Pobs}_{0,m})$ is the position of the reference in frame m corrected into “truth” coordinates. Note that, in general, F( ) is applied to the observed reference position, which may constitute application to the position of a single reference, to the centroid of multiple references, around the integrated boundary of the reference(s), to the intensity profile along the length of the reference(s) in an orthogonal axis, or otherwise, depending upon the nature of the reference.
From this one may define a corrected difference position $\mathrm{Dcor}_{n,m}=\mathrm{Pcor}_{n,m}-\mathrm{Pcor}_{0,m}$ (step 106). The corrected difference positions do, in general, vary with frame number m. A fit algorithm minimizes the scale-normalized variability of the corrected difference position $\mathrm{Dcor}_{n,m}$ over a subset of targets n and frames m using, for example, a least squares fit to find the coefficients of function F (step 108). The ‘scale-normalized’ constraint is necessary to prevent the fit algorithm from collapsing the scale to zero by setting all the coefficients to zero, in which case everything maps to one point and there is no variance.
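A minimal sketch of these definitions is given below, assuming the 2nd order polynomial form of F given above and an illustrative (6, 2) coefficient layout with one column per output axis:

```python
import numpy as np

def apply_F(coeffs, positions):
    """Evaluate F(X, Y) = C20*X^2 + C02*Y^2 + C11*X*Y + C10*X + C01*Y + C00
    for each output axis.  `coeffs` is (6, 2) with rows ordered
    [C20, C02, C11, C10, C01, C00]; `positions` is (..., 2) observed (X, Y)."""
    positions = np.asarray(positions, dtype=float)
    x, y = positions[..., 0], positions[..., 1]
    terms = np.stack([x**2, y**2, x*y, x, y, np.ones_like(x)], axis=-1)
    return terms @ coeffs

def corrected_differences(coeffs, P_obs, P_ref):
    """Dcor[n, m] = Pcor[n, m] - Pcor[0, m]: correct the observed target
    positions (M frames x N targets x 2) and the observed reference positions
    (M frames x 2), then difference them frame by frame."""
    return apply_F(coeffs, P_obs) - apply_F(coeffs, P_ref)[:, None, :]
```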
The fit algorithm can minimize the scale-normalized variability in a number of ways. In a first approach, the fit algorithm minimizes the variance of $\mathrm{Dcor}_{n,m}$ subject to a constraint that the mean across frames and targets of the magnitude of the corrected difference position in one or more axes matches a known average difference position magnitude in that axis for the target pattern. This approach has the advantage that only gross statistics of the target pattern are required. This can be expressed as:
Minimize $\mathrm{VAR}_m(\mathrm{Dcor}_{n,m})$
Subject to $E_{n,m}(|\mathrm{Dcor}_{n,m}\cdot x_k|)=E_n(|\mathrm{Dtrue}_n\cdot x_k|)$
Where $E(\,)$ is the expected value operator and $(\cdot\,x_k)$ is the projection onto an axis k.
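A sketch of this first approach is given below. It enforces the scale constraint as a quadratic penalty and uses a general-purpose optimizer; the penalty weight, coefficient layout, starting point and synthetic data are illustrative assumptions rather than a prescribed implementation:

```python
import numpy as np
from scipy.optimize import minimize

def corrected_differences(coeffs_flat, P_obs, P_ref):
    """Dcor[n, m] for a flattened (12,) coefficient vector of the 2nd order
    polynomial F, laid out as a (6, 2) array with rows [C20, C02, C11, C10, C01, C00]."""
    C = coeffs_flat.reshape(6, 2)
    def F(P):
        x, y = P[..., 0], P[..., 1]
        return np.stack([x**2, y**2, x*y, x, y, np.ones_like(x)], axis=-1) @ C
    return F(P_obs) - F(P_ref)[:, None, :]

def objective(coeffs_flat, P_obs, P_ref, known_mean, penalty=1e3):
    Dcor = corrected_differences(coeffs_flat, P_obs, P_ref)
    variance = Dcor.var(axis=0).sum()        # variance over frames, summed over targets/axes
    # Scale constraint (as a penalty): mean |Dcor . x_k| over frames and targets
    # must match the known per-axis average for the target pattern, which
    # prevents the trivial all-zero solution.
    scale_error = np.abs(Dcor).mean(axis=(0, 1)) - known_mean
    return variance + penalty * np.dot(scale_error, scale_error)

# Illustrative synthetic inputs (a real calibration would use observed frames).
rng = np.random.default_rng(1)
M, N = 40, 8                                 # frames, targets
P_obs = rng.uniform(-1.0, 1.0, size=(M, N, 2))
P_ref = rng.uniform(-1.0, 1.0, size=(M, 2))
known_mean = np.array([0.5, 0.5])            # assumed pattern statistic
x0 = np.zeros(12); x0[6], x0[9] = 1.0, 1.0   # start F near the identity
result = minimize(objective, x0, args=(P_obs, P_ref, known_mean), method="Powell")
```

The fitted coefficients would then be read from result.x and stored as the calibration terms (step 42).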
In a second approach, the fit algorithm minimizes the expected value of the squared norm of the difference between the corrected difference position and the true difference position for each target. This approach should provide more accurate coefficients, but requires specific knowledge of the target positions in the target pattern and requires matching observed target positions to true target positions to perform the minimization. The matching is readily accomplished because the irregular target pattern provides for a 1-to-1 mapping of observed target points to true target points regardless of scale, rotation or offset, provided at least a predetermined portion, e.g. 25%, of the target pattern is imaged. This can be expressed as:
Minimize $E_m(\|\mathrm{Dcor}_{n,m}-\mathrm{Dtrue}_n\|^{2})$
Subject to $E_m(\mathrm{Dcor}_{n,m})=\mathrm{Dtrue}_n$
where $\mathrm{Dtrue}_n=\mathrm{Ptrue}_{n,m}-\mathrm{Ptrue}_{0,m}$. $\mathrm{Ptrue}_{n,m}$ is the true position in collimated space, based on the current mirror position, of target n in frame m, and $\mathrm{Ptrue}_{0,m}$ is the true position in collimated space, based on the current mirror position, of reference O in frame m. In this method $\mathrm{Ptrue}_{n,m}$ is never directly used, only $\mathrm{Dtrue}_n=\mathrm{Ptrue}_{n,m}-\mathrm{Ptrue}_{0,m}$. Note that $\mathrm{Dtrue}_n$ is not a function of m. Or, equivalently stated, the offset between targets in the collimated space is not a function of the overall displacement of the pattern (i.e., of the mirror position).
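Because $\mathrm{Dcor}_{n,m}$ is linear in the coefficients of the polynomial F, this second approach can be sketched as an ordinary least-squares problem once the observed-to-true matching has been established; the array shapes and coefficient layout below are illustrative assumptions:

```python
import numpy as np

def poly2_terms(P):
    """Second-order terms [x^2, y^2, x*y, x, y, 1] for positions of shape (..., 2)."""
    x, y = P[..., 0], P[..., 1]
    return np.stack([x**2, y**2, x*y, x, y, np.ones_like(x)], axis=-1)

def fit_to_true_differences(P_obs, P_ref, D_true):
    """Fit the coefficients of F by minimizing E_m(||Dcor[n,m] - Dtrue[n]||^2).
    P_obs: (M, N, 2) observed targets, P_ref: (M, 2) observed reference,
    D_true: (N, 2) true difference positions (matching already established).
    The constant term cancels in the difference, so the X/Y offset
    coefficients are not determined here, consistent with the description above."""
    M, N, _ = P_obs.shape
    A = (poly2_terms(P_obs) - poly2_terms(P_ref)[:, None, :]).reshape(M * N, 6)
    b = np.broadcast_to(D_true, (M, N, 2)).reshape(M * N, 2)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs   # (6, 2), rows ordered [C20, C02, C11, C10, C01, C00]
```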
As illustrated in
While several illustrative embodiments of the invention have been shown and described, numerous variations and alternate embodiments will occur to those skilled in the art. Such variations and alternate embodiments are contemplated, and can be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims
1. A method of performing distortion calibration for an electro-optical sensor, comprising:
- projecting and scanning a physical target pattern having a plurality of physical targets through sensor optics that introduce non-linear distortion to a projected target pattern with a line of sight motion across a sensor FOV;
- observing multiple target positions in the sensor FOV in a plurality of frames; and
- subject to a constraint that the true angular distances between the same targets in the target pattern remain constant with line of sight motion, fitting coefficients for a function F representative of the distortion in the sensor optics between observed target positions and true target positions so that distances between true target positions are approximately preserved as the pattern moves; and
- storing the coefficients as calibration terms in a tangible medium.
2. The method of claim 1, wherein the function F is representative of the distortion in the sensor optics from observed target positions to true line of sight.
3. The method of claim 1, wherein the function F is representative of the distortion in the sensor optics from the true line of sight for a plurality of targets to the observed positions of those targets in a plurality of input frames.
4. The method of claim 1, further comprising:
- observing a position of a reference in each of said plurality of frames; and
- using the position of the reference in each frame to establish a unique match of the observed target positions to the true target positions.
5. The method of claim 1, wherein the coefficients are fit by:
- observing a position of a reference in said frames;
- applying the function F to the observed target and reference positions to provide corrected target and reference positions;
- representing corrected difference positions for a plurality of the targets as the difference between the corrected target positions and the corrected reference position; and
- fitting coefficients for the function F using the observed target and reference positions over a subset of the targets and frames to minimize the scale-normalized variability of the corrected difference positions as the targets move across the FOV.
6. The method of claim 1, wherein a plurality of said targets are less than one pixel wide in all axes.
7. The method of claim 1, wherein at least a plurality of said targets are greater than one pixel wide in one or more axes.
8. The method of claim 1, wherein said reference constitutes a subset of said plurality of targets.
9. The method of claim 1, wherein said reference varies over the frames.
10. A method of performing distortion calibration for an electro-optical sensor, comprising:
- projecting and scanning a target pattern having a plurality of targets through sensor optics with a line of sight motion across a sensor FOV;
- observing multiple target positions in the sensor FOV in a plurality of frames; and
- subject to a constraint that the true angular distances between the same targets in the target pattern remain constant with line of sight motion, fitting coefficients for a function F representative of the distortion in the sensor optics from observed target positions to true line of sight so that distances between targets are approximately preserved as the pattern moves; and
- storing the coefficients as calibration terms in a tangible medium.
11. The method of claim 8, wherein the coefficients are fit by:
- observing a position of a reference in said frames;
- applying a function F representative of the distortion in the sensor optics to the observed target and reference positions to provide corrected target and reference positions;
- representing a corrected difference position for a plurality of the targets as the difference between the corrected target position and a corrected reference position; and
- fitting coefficients for the function F using the observed target and reference positions over a subset of the targets and frames to minimize the scale-normalized variability of the corrected difference position as the targets move across the FOV.
12. A test system for performing distortion calibration for an electro-optical sensor, comprising:
- a vacuum test chamber including: a physical target pattern having a plurality of physical targets; a flight sensor including an electro-optical sensor, a tangible medium and sensor optics; a collimating lens that projects the target pattern so that the pattern may be shifted in a field-of-view (FOV) measured by the sensor while preserving the relative positions of the targets; and a scanning mirror that directs the target pattern through the sensor optics with a line of sight motion across the sensor's FOV over a plurality of frames, said optics introducing non-linear distortion to the projected target pattern, producing apparent differences in the relative positions of the targets as the target pattern moves; and
- a computer configured to observe multiple target positions in the sensor FOV in each of a plurality of frames, and subject to a constraint that the true angular distances between the same targets in the target pattern remain constant with line of sight motion, fit coefficients for a function F representative of the distortion in the sensor optics between observed target positions and true target positions so that distances between true target positions are approximately preserved as the pattern moves, and store the coefficients in the tangible medium as calibration terms for the sensor.
13. The test system of claim 12, wherein the function F is representative of the distortion in the sensor optics from observed target positions to true line of sight.
14. The test system of claim 12, wherein the function F is representative of the distortion in the sensor optics from the true line of sight for a plurality of targets to the observed positions of those targets in a plurality of input frames.
15. The test system of claim 12, wherein the physical target pattern includes a reference, wherein said computer observes a position of a reference in each of said plurality of frames and uses the position of the reference in each frame to establish a unique match of the observed target positions to the true target positions.
Type: Application
Filed: Feb 5, 2011
Publication Date: May 26, 2011
Inventors: Darin S. Williams (Tucson, AZ), Jodean D. Wendt (Tucson, AZ)
Application Number: 13/021,729
International Classification: G06F 19/00 (20110101); G01B 21/00 (20060101);