METHOD FOR CALIBRATING A CAMERA
A method calibrates a camera having an image sensor with a center position. An image view projected onto the image sensor by a lens is captured. At least one image view boundary of the captured image view is detected. A projection center position corresponding to a center of the projected image view is determined in at least one dimension. An offset, defined in at least one dimension, between the projection center position and a sensor center position corresponding to the center of the image sensor capturing the projected image view is determined. The image sensor is moved in relation to the lens based on the offset in order to arrive at a substantially zero offset in at least one dimension between the center of the image sensor and the center of the projected image view.
This application claims the benefit of U.S. Provisional Application No. 61/621,181 filed Apr. 6, 2012, and European Patent Application No. 12162107.2 filed Mar. 29, 2012, which are incorporated by reference as if fully set forth.
FIELD OF INVENTION
The present invention relates to a method for calibrating a camera, and in particular to a method for calibrating a pan-tilt enabled camera system in order to account for misalignments in the camera system.
BACKGROUND
It is often difficult to move camera heads, e.g., pan and tilt enabled camera heads, with precision to a given position within the range of accessible pan-tilt positions. Additional precision problems arise when the camera is to provide a view of a selection made in a visual interface, which has to be interpreted or transformed.
The precision becomes even worse in a system in which the camera head acquires images through a lens, e.g., a wide angle lens, that is not mounted on the camera head but rather fixedly attached to a support structure of the camera head. Hence, in these cases, the precision of the mounting of the lens and of the support structure to which it is attached is critical. In addition to the problem of mounting the lens with precision, the camera may be shipped without the lens or the support structure mounted together with the camera head, in which case the precision mounting has to be performed in the field. Accordingly, these types of systems often present problems with precision in positioning.
SUMMARY
A method is described to improve the precision in positioning a camera for acquiring a desired image view.
In particular, according to one aspect, a method for calibrating a camera with an image sensor having a center position includes capturing an image view projected onto the image sensor by a lens, the projected image view having a center, and detecting at least one image view boundary of the image view captured by the image sensor. The method further includes determining, in at least one dimension, a projection center position corresponding to the center of the projected image view on the image sensor based on the detected boundary, determining an offset between the projection center position and a sensor center position, defined in at least one dimension, corresponding to the center of the image sensor capturing the projected image view, and moving the image sensor in relation to the lens based on the offset in order to arrive at a substantially zero offset in at least one dimension between the center of the image sensor and the center of the projected image view. By detecting the image view boundary and determining the center of the image view from this detection, the act of determining the offset is facilitated. Moreover, this enables capturing of maximum image information in that it enables maximum utilization of the image sensor. Further, by aligning the optical axis with the center of the image sensor, pan-tilt positioning will be more precise, especially when it comes to converting pixel positions in a displayed view to pan and tilt angles.
In another embodiment, the method further comprises performing an additional capture of the image view projected onto the image sensor at a point in time after the moving of the sensor, detecting at least one image view boundary relating to the latter captured image view projected onto the image sensor, determining, in at least one dimension, an updated projection center position corresponding to the center of the projected image view on the image sensor based on the latter detected boundary, determining an offset between the updated projection center position and the sensor center position, defined in at least one dimension, corresponding to the center of the image sensor capturing the projected image view, and, if the offset is zero or substantially zero, storing the position of the updated projection center, the position being defined in at least one dimension including a dimension different from the dimension used for determining the offset between the updated projection center position and the sensor center position. By checking the image view center once more after the initial moving of the image sensor due to the offset between the image sensor center and the image view center, it is possible to make a high precision centering of the image view onto the image sensor, and/or a less precise and less advanced method for calculating the moving of the image sensor may suffice, hence simplifying the process.
According to another embodiment, the image view is projected onto the image sensor through a wide angle lens which produces a circular image or at least a substantially circular image.
According to a further embodiment, the determining of the position of the center of the projected image view includes calculating parameters defining a circle that are likely to represent the at least one detected edge relating to the projected image view.
In yet another embodiment, the calculating of parameters defining a circle that are likely to represent the at least one detected edge relating to the projected image view is based on a Hough transform.
In one embodiment, the camera comprises a pan and tilt enabled camera head including the image sensor, wherein the image view is projected onto the image sensor through a lens fixedly arranged in relation to a base of the camera, and wherein the moving of the image sensor includes moving of the camera head. This enables a relatively simple design of the camera, and in particular of the camera head, as the sensor movements may be accomplished by moving the entire camera head using the pan-tilt functionality of the camera.
In a further embodiment, the moving of the camera head is performed as pan and/or tilt movements calculated from the offset between the center of the projected image view and the center of the image sensor capturing the projected image view.
According to another aspect, a method for calibrating a camera comprises a) the steps of the method described above, b) selecting a position in an overview image view captured by the camera and presented to the operator, c) calculating a pan angle and a tilt angle corresponding to the selected position, d) moving the camera head to a position in which it may capture a detailed image view, not through the wide angle lens, having a center representing the calculated pan angle and tilt angle, e) adjusting the camera head, after the camera head has moved to the position, until the position selected in the overview image is centered in the detailed view, f) saving data representing the amount of adjustment used for centering the position in the detailed view, repeating the steps a-f until a predetermined amount of data relating to the adjustment has been saved, and estimating an error function based on the saved data. By sampling the error in transforming a selection in an overview image view to pan and tilt angles, the precision of the transformation and of the resulting transition from overview to detailed view may be increased. The estimation of an error function from the error samples may then allow for a less cumbersome process of gathering enough error values in that the number of required error values will decrease without substantially reducing precision.
According to one embodiment, the act of estimating an error function includes determining coefficients of a polynomial error function using Linear Least Square Estimation.
According to yet another aspect, a method for transforming a selected position in an overview image captured by a camera to a pan and tilt angle for a camera that is to capture a more detailed view of the selected position comprises calculating the pan and tilt angle for the selected position based on the selected position in the overview image, and compensating the resulting pan-tilt angle based on an error function achieved by means of the process described above. In this way, the precision of the transformation may be increased.
A further scope of applicability of the present invention will become apparent from the detailed description given below. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the scope of the invention will become apparent to those skilled in the art from this detailed description. Hence, it is to be understood that this invention is not limited to the particular component parts of the device described or steps of the methods described as such device and method may vary. It is also to be understood that the terminology used herein is for purpose of describing particular embodiments only, and is not intended to be limiting. It must be noted that, as used in the specification and the appended claim, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements unless the context clearly dictates otherwise. Thus, for example, reference to “a sensor” or “the sensor” may include several sensors, and the like. Furthermore, the word “comprising” does not exclude other elements or steps.
Other features and advantages of the present invention will become apparent from the following detailed description of a presently preferred embodiment, with reference to the accompanying drawings.
Further, in the figures, like reference characters designate like or corresponding parts throughout the several figures.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to calibration of the positioning of a camera head in a monitoring camera. Referring to
The dome camera further comprises a wide angle lens 20 mounted on the transparent dome cover 14 and extending from the dome cover 14 and away from the camera head 12. The wide angle lens 20 is mounted in a direction making the optical axis 22 of the wide angle lens substantially coincide with a rotational axis 24 around which the camera head 12 is turned during panning, hereinafter referred to as panning axis 24. The viewing angle of the wide angle lens 20 is wider than the viewing angle of the lens 18 in the camera head 12. In one embodiment, the viewing angle of the wide angle lens 20 is substantially wider than the viewing angle of the lens 18 of the camera head 12. The viewing angle of the wide angle lens may be more than 180 degrees. However, depending on the application, the viewing angle may be less or more. The angle should at least be selected to provide a reasonable overview image.
Accordingly, the wide angle lens 20 is mounted so that the optical axis 26 of the camera head 12 is aligned with the optical axis 22 of the wide angle lens 20 when the camera head 12 is directed for capturing an image through the wide angle lens 20.
Due to the positioning of the wide angle lens 20 and the fact that the camera head 12 is moveable, it is possible to capture overview images through the wide angle lens 20 as depicted in
In one embodiment, the viewing angle or the focal length of the lens 18 of the camera head 12 may be selected so that the images captured by the camera head 12, when not captured through the wide angle lens 20, are adequate for providing relevant surveillance information. Examples of relevant surveillance information may for instance be the registration number of a car, an identifiable face of a person, detailed progress of an event, etc. The viewing angle of the wide angle lens 20 may be selected so that the camera head 12 may capture an image view of at least the floor of an entire room in which the monitoring camera is installed when directed to capture images through the wide angle lens 20.
Alternatively, the viewing angle of the wide angle lens 20 is selected so that the camera head 12 will capture an overview image of the monitored area when the camera head 12 is directed to capture images through the wide angle lens 20. Then an operator or an image analysis process may identify events or features of interest in the overview and redirect the camera head 12 for direct capture of the scene including the event or feature of interest. “Direct capture” in the above sentence should be understood as capturing an image by means of the camera head 12 when not directed to capture images through the wide angle lens 20.
In order to facilitate the understanding of the function of the camera, an example scenario will be described below. In this example scenario, a monitoring camera 10 according to one embodiment of the invention is installed in the ceiling of a room 30, see
According to one embodiment, see
The image sensor 50 may be any known image sensor able to capture light representing an image view and convert the light to electrical signals, which then may be processed into digital images and/or digital image streams by the image processing unit 52. Thus, the image sensor 50 may be arranged to capture visible light or infrared light, depending on the application of the camera. The image data from the image sensor 50 is sent to the image processing unit 52 via connection 70. The image processing unit 52 and the general processing unit 54 may be the same device, may be implemented as separate units on the same chip, or may be separate devices. Moreover, many functions described below as being performed in the image processing unit 52 may be performed in the general processing unit 54 and vice versa.
The processing units 52, 54 are connected to the volatile memory 56 for use as a work memory via, for instance, a bus 72. Moreover, the volatile memory 56 may be used as temporary data storage for image data during processing of the image data, and the volatile memory 56 may therefore be connected to the image sensor 50 as well. The non-volatile memory 58 may store program code required for the processing units 52, 54 to operate, and may store settings and parameters that are to be preserved for a longer time period and even withstand power outages. The processing units 52, 54 are connected to the non-volatile memory 58 via, for instance, the bus 72.
The network interface 60 includes an electrical interface to the network 74, which the monitoring camera is to be connected to. Further, the network interface 60 also includes all logic interface parts that are not implemented as being executed by the processing unit 54. The network 74 may be any known type of LAN (Local Area Network), WAN (Wide Area Network), or the Internet. The person skilled in the art is well aware of how to implement a network interface using any of a plurality of known implementations and protocols.
The panning motor 62 and the tilting motor 66 are controlled by the processing unit 54 via the respective motor controllers 64, 68. The motor controllers are arranged to convert instructions from the camera position controller 61 into electrical signals compatible with the motors. The camera position controller 61 may be implemented by means of code stored in memory 58 or by logical circuitry. The tilt motor 66 may be arranged within or very close to a panable/tiltable camera head 12, while the pan motor 62 is in many cases arranged further away from the camera head 12, in particular in the cases where the joint for panning is the second joint, counted from the camera head 12. Control messages for pan and tilt may be received via the network 74 and processed by the processing unit 54 before being forwarded to the motor controllers 64, 68.
Other implementations of the monitoring camera 10 are evident to the person skilled in the art.
The above described function of redirecting the camera head from capturing overview images to capturing detailed images of positions indicated in an image captured in overview mode may be implemented by transforming the coordinates of the indicated position within the overview image to pan and tilt angles for positioning the camera in detailed mode to capture an image of the indicated position.
Now referring to
Using the above equations, the pan angle ψ is simply calculated by applying trigonometry to the Cartesian coordinates, see Equation 3. The tilt angle φ, according to this specific embodiment, is an approximation in which the calculation treats the features of the captured image as being positioned on a spherical surface 702, thus arriving at Equation 2, in which the tilt angle φ is calculated by applying the ratio of the distance d to the distance r to the total view angle θ. This embodiment is not necessarily limited to this transformation scheme; any like transformation scheme may be used.
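The text does not reproduce Equations 1-3, but the described scheme lends itself to a short sketch. The following Python function is one plausible reading, not the patent's verbatim formulas: the pan angle comes from plain trigonometry on the Cartesian offset from the circle center, and the tilt angle from applying the ratio d/r to the total view angle; the variable names are illustrative assumptions.

```python
import math

def overview_to_pan_tilt(x, y, xc, yc, r, theta):
    """Transform a position (x, y) selected in the circular overview image
    to a pan/tilt angle pair.

    xc, yc : center of the circular image projection (pixels)
    r      : radius of the circular image projection (pixels)
    theta  : total viewing angle of the wide angle lens (radians)

    Assumed forms of Equations 1-3, which are not reproduced in the text.
    """
    dx, dy = x - xc, y - yc
    d = math.hypot(dx, dy)            # distance from circle center (assumed Equation 1)
    pan = math.atan2(dy, dx)          # trigonometry on Cartesian coordinates (Equation 3)
    tilt = (d / r) * (theta / 2.0)    # distance ratio applied to the view angle (Equation 2)
    return pan, tilt
```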
Depending on the application of the camera, the level of precision required of the transformation may vary. However, even in applications not requiring very high precision, a user will expect the point selected in the overview to appear quite close to the center of the image in the detailed view. The resulting transition in a system offering lower quality transformations is illustrated in
The offset, ex and ey, may result from the act of mounting the wide angle lens 20 on the dome 14, the mounting of the dome 14 on the dome base 16, the mounting of the camera head 12 in the housing, etc. The camera assembly 10 requires tight mechanical tolerances in order not to introduce offset problems. For example, offset problems may occur if a mounting screw for any one of the dome, the camera, the wide angle lens, etc. is tightened a bit too much. Hence, one common problem is that the optical axis 22 of the wide angle lens 20 and the optical axis 26 of the camera head 12 are offset, moff, when the camera head is arranged having its optical axis 26 coinciding with the rotational axis 24 for panning the camera head, see
One example of the image sensor not being effectively utilized is depicted in
According to one embodiment, the problem of the image sensor not being effectively utilized is addressed by calibrating the camera system. The calibration in this embodiment includes the act of tilting the camera head 12 and moving the center point 1002 of the projected image view to a position on an imaginary center-line 1006, dividing the image sensor into two halves, see
As shown in
This center CCy of the projected image is then related to the center CSy of the image sensor, also along the y-axis, and the process checks, step 1112, whether the center CCy of the projected image along the y-axis is substantially centered on the image sensor, i.e., whether CCy=CSy±tolerance. In step 1114, a determination is made as to whether the center CCy of the projected image corresponds to the center CSy of the image sensor in the y-direction, or whether the counter C, counting the number of times this check and repositioning of the camera head have been performed, has reached n. According to one embodiment, the value of n is three. The value n defines how many times the repositioning and checking of the camera head are allowed to be performed. One reason for this parameter is to avoid the system getting stuck in a loop, for instance if the system for some reason is not able to determine the center. Hence, the value of n may be any number as long as it is small enough not to result in a perception of deteriorated performance.
If the check in step 1114 is false, the camera head is tilted based on the offset between the center CCy of the projected image along the y-axis and the center of the image sensor along the y-axis, step 1116. According to one embodiment, an angle αtilterr that the camera head 12 is to be tilted is calculated based on Equation 4 below:
where
- αtilterr: the angle that the camera head should be tilted
- βtotch: the total viewing angle of the camera head alone
- ey: the distance from the center CSy of the image sensor to the center CCy of the captured circular image
- h: the height of the image sensor
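Equation 4 itself is not reproduced above. Given the listed variables, a first-order form that scales the total viewing angle by the fractional sensor offset seems plausible; the sketch below is that assumption, not the patent's verbatim equation.

```python
def tilt_correction(e_y, h, beta_totch):
    """Tilt correction angle for the camera head.

    e_y        : offset (pixels) from sensor center CSy to image center CCy
    h          : image sensor height (pixels)
    beta_totch : total viewing angle of the camera head alone (radians)

    ASSUMED form of Equation 4: the total view angle scaled by the
    fractional offset e_y / h (a small-angle approximation).
    """
    return beta_totch * (e_y / h)
```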
Then the counter C is incremented, step 1118, and the process returns to step 1106 for capturing a new image at the new position of the camera head and for checking whether the new position is more accurate.
If the check in step 1114 is true, then the position of the camera head is stored in memory in order to be used as the position to return the camera head to when entering overview mode, step 1120, i.e., the overview mode coordinates or angles are set for the camera head. Then, when the overview position for the camera head has been determined, the calibration process for positioning of the camera head in overview mode is concluded.
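Steps 1106-1120 can be summarized as the following loop. The callables are injected because the text defines no concrete capture or motor API; they, the tolerance value, and the tilt formula (the assumed Equation 4 above) are illustrative assumptions.

```python
def center_projection_on_sensor(capture, find_center_y, tilt, save_position,
                                cs_y, sensor_height, view_angle,
                                tolerance=2.0, n=3):
    """Iteratively tilt the camera head until the projected image is
    vertically centered on the sensor (steps 1106-1120).

    capture()          -> image                      (step 1106)
    find_center_y(img) -> CCy, e.g., via Hough       (steps 1108-1110)
    tilt(angle)        -> tilts the camera head      (step 1116)
    save_position()    -> stores overview position   (step 1120)
    """
    for _ in range(n):                               # counter C, capped at n (step 1114)
        cc_y = find_center_y(capture())
        e_y = cc_y - cs_y
        if abs(e_y) <= tolerance:                    # CCy = CSy +/- tolerance (step 1112)
            save_position()
            return True
        tilt(view_angle * (e_y / sensor_height))     # assumed Equation 4 (step 1116)
    return False                                     # center never found within n attempts
```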
According to one embodiment, the center of the projected image is found using a Hough Transform and the projected image is substantially circular. Further information on the Hough Transform may be found in "Computer Vision" by Linda Shapiro and George Stockman, Prentice-Hall, Inc., 2001.
In order to apply the Hough Transform for finding the parameters of the circle represented by the image projection, the boundary of the image projection has to be determined. This may be performed using any standard edge detection algorithm. When the edge of the projected image has been detected, the Hough Transform is applied to the detected edge. The processing using the Hough Transform may be accelerated by defining a limited range of possible circle radii for the analysis by means of the Hough transform. This is possible because the size of the circle-shaped image projection is known quite accurately in advance. In one particular embodiment, the radius used in the Hough transform is in the range of 0.25-0.35 times the width of the image sensor. For a system having a sensor that has a width of 1280 pixels, the radius is selected in the range of 320 pixels-448 pixels.
In order to speed up the process even more and still get reliable results, the radius may be selected from the range of 0.2921875-0.3125 times the width of the image sensor in pixels. For a system having a sensor that has a width of 1280 pixels, the radius may be selected in the range of 374 pixels-400 pixels.
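As an illustration, OpenCV's Hough circle detector can be constrained to the narrower radius range given above. The patent names the Hough transform but no library, so the sketch below is one possible realization; the Canny and accumulator thresholds are assumptions.

```python
import cv2

def find_circular_projection(gray):
    """Find the circular image projection (center and radius, in pixels)
    in a grayscale sensor image using a Hough transform."""
    width = gray.shape[1]
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1,
        minDist=width,                       # expect a single dominant circle
        param1=150, param2=30,               # internal Canny threshold / accumulator threshold
        minRadius=int(0.2921875 * width),    # 374 px for a 1280 px wide sensor
        maxRadius=int(0.3125 * width),       # 400 px for a 1280 px wide sensor
    )
    if circles is None:
        return None
    cx, cy, r = circles[0][0]                # strongest candidate
    return float(cx), float(cy), float(r)
```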
The above process of centering the image projection on the image sensor may be part of a calibration scheme for decreasing the offset between a selected position in the overview image and the center position in the detailed image view, see discussion relating to
A calibration scheme for decreasing the offset is described in the flowchart of
Thereafter, the camera is panned and tilted in accordance with the pan angle and the tilt angle, entering the camera into detailed mode, and a detailed image is captured when the camera is in detailed mode, after the panning and tilting in accordance with the pan angle and the tilt angle is completed, step 1212.
In a system not calibrated for transition errors, the transition to detailed mode seldom results in the selected feature appearing in the center of the captured image in the detailed image view. The operator may manually adjust the pan angle and tilt angle until the camera presents the point of the feature selected in the overview image substantially in the center of the image captured in detailed mode, step 1214. The camera captures images frequently during this panning and tilting in order to present the result of the panning and tilting to the operator. Information indicating the difference between the manually adjusted pan angle and tilt angle and the pan angle and tilt angle resulting from the transformation of the position of the feature is then saved, step 1216, for later use. The steps of selecting features and registering the error in the transition from overview mode to detailed mode may then be repeated for at least two further selection areas, i.e., n=3 in step 1218. The value of n may be any number equal to or greater than three. The value of n should however not be too large, because then the calibration will be quite time consuming. In order to balance accuracy and speed, the value of n may be in the range of 4-10.
When steps 1208-1216 have been performed n times, the process estimates a function ƒψerr(ψcalc), representing a pan error, in which ψcalc is the pan angle directly transformed from the overview coordinates, and a function ƒφerr(ψcalc), representing a tilt error, which also depends on the pan angle ψcalc. These functions ƒψerr(ψcalc) and ƒφerr(ψcalc) represent the errors based on the saved information, step 1220, and the calibration process is ended. The estimated functions may now be used for compensating transformations between positions in overview mode and pan and tilt angles in detailed mode, i.e., the functions may be applied to operations that transform coordinates from the overview image to pan and tilt angles.
In
According to another embodiment, see
On the other hand, if γmax is not smaller than the predetermined threshold value γthresh, step 1520, then a suggested selection area is positioned between, in the circumferential direction, the two positions presenting the largest angle γmax between them. The new suggested selection area is positioned substantially at an angle of γmax/2 from either one of the two adjacent positions. Then the process returns to step 1508 for selection of another calibration position.
Then the operator may select a new calibration position 1604b over a suitable feature. It should be noted that the suggested selection areas 1602a-d are not necessarily the only areas within which calibration positions may be selected, but rather indicate suitable areas for selecting calibration positions. After selection of the second calibration position 1604b, the selection process continues and eventually presents a new suggested selection area 1602c to the operator. The position of this new suggested selection area 1602c is once more calculated as half the angle, γmax/2, of the largest angle γmax between two circumferentially neighbouring selected calibration positions. In
The next suggested selection area 1602d is then positioned substantially at an angle marginally less than 90 degrees from the lines 1606 or 1612 forming the largest angle γmax. The operator may then select the next calibration position 1604d and the process continues, unless this is the last calibration position to be recorded.
From the positional errors discovered above when directing the camera towards a calibration position, the error functions may be estimated. The error functions ƒψerr(ψcalc) and ƒφerr(ψcalc) may be estimated by a trigonometric function or a polynomial. The advantage of using a polynomial estimation is that a Linear Least Square Estimation (LLSE) may be used to find the coefficients of the functions. How to estimate a polynomial function using LLSE is well known to the person skilled in the art.
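For the polynomial variant, NumPy's polyfit performs exactly this linear least squares estimation. A minimal sketch follows; the polynomial degree is an assumption, since the text does not fix one.

```python
import numpy as np

def fit_error_function(psi_calc_samples, error_samples, degree=2):
    """Fit a polynomial error function (f_psi_err or f_phi_err) to the
    calibration samples via linear least squares.

    psi_calc_samples : pan angles produced by the uncompensated transform
    error_samples    : manually measured pan (or tilt) adjustments
    degree           : polynomial degree (an assumption)
    """
    coeffs = np.polyfit(psi_calc_samples, error_samples, degree)
    return np.poly1d(coeffs)   # callable: error = f(psi_calc)

# Both error functions are fitted against the same pan-angle samples:
# f_psi_err = fit_error_function(psi_calc, pan_adjustments)
# f_phi_err = fit_error_function(psi_calc, tilt_adjustments)
```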
When the error functions have been estimated, an error estimation of the pan-tilt angle at any calculated pan-tilt position may be retrieved using the values produced by the transform from coordinates x, y to the pan-tilt position ψcalc, φcalc.
Accordingly, the error in each pan-tilt position may be calculated according to Equations 5 and 6:
eψ=ƒψerr(ψcalc) Equation 5
eφ=ƒφerr(ψcalc) Equation 6
Then, the error compensated pan-tilt position ψ, φ may be calculated according to Equations 7 and 8:
ψ=ψcalc+eψ Equation 7
φ=φcalc+eφ Equation 8
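Applied to an uncompensated position, Equations 5-8 amount to the following short sketch; f_psi_err and f_phi_err may be the fitted callables from the LLSE example above.

```python
def compensate_pan_tilt(psi_calc, phi_calc, f_psi_err, f_phi_err):
    """Apply the estimated error functions to an uncompensated pan-tilt position."""
    e_psi = f_psi_err(psi_calc)                   # Equation 5
    e_phi = f_phi_err(psi_calc)                   # Equation 6 (also a function of psi_calc)
    return psi_calc + e_psi, phi_calc + e_phi     # Equations 7 and 8
```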
The resulting pan-tilt position, i.e., the one determined by adjusting using the error function, is of substantially higher accuracy than a position calculated without using the error compensation.
Transformations from a pan-tilt position ψ, φ to an overview position x, y may also be necessary. One case in which such a transformation may be of interest is if an object in the detailed view has to be masked and the operator returns the camera to the overview mode. In this case, the masked object should be masked in the overview mode as well as in the detailed mode. In order to achieve such a result, the position in the detailed mode has to be transformed to a position in the overview.
According to one embodiment, the transformation from pan-tilt positions ψ,φ to overview positions x,y, is based on equations inverting the transformation described in relation to Equations 1-3. According to one embodiment, the transformation includes calculating the distance d in the overview, see
In order to turn the values of dx and dy into true overview coordinates, the process further adjusts for the coordinates of the center point xc, yc, used as reference for the pan angle ψ, for the distance d, and for dx and dy. Moreover, in this transform, as well as in the transform from overview to detailed view, misalignment errors are present, and therefore error functions for x and y, respectively, have to be determined. In one embodiment, these functions are estimated from substantially the same calibration process and data as described above. More specifically, the calibration process may be identical to any of the calibration processes described in connection with
Hence, the error in the transformation operation for a calibrated system may be expressed as:
ex=ƒx(ψ) Equation 12
ey=ƒy(ψ) Equation 13
Accordingly the transformed and compensated coordinates in overview mode may be expressed as:
x=xc+dx+ex Equation 14
y=yc+dy+ey Equation 15
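The inverse transform with compensation can be sketched by inverting the assumed forward relations and applying Equations 14 and 15. The distance and decomposition formulas below are assumptions consistent with the forward transform sketched earlier; the text does not reproduce its own intermediate equations for d, dx, and dy.

```python
import math

def pan_tilt_to_overview(psi, phi, xc, yc, r, theta, f_x_err, f_y_err):
    """Transform a pan-tilt position back to overview coordinates,
    compensating with the estimated error functions."""
    d = (phi / (theta / 2.0)) * r      # invert the tilt relation (assumed)
    dx = d * math.cos(psi)             # decompose along the pan angle (assumed)
    dy = d * math.sin(psi)
    x = xc + dx + f_x_err(psi)         # Equation 14, error from Equation 12
    y = yc + dy + f_y_err(psi)         # Equation 15, error from Equation 13
    return x, y
```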
As previously mentioned, this transformation from the detailed mode to the overview mode may be used for correctly positioning masks. For instance, masks for pan-tilt cameras are often defined in pan-tilt angles, and by introducing a more precise transformation method these masks may simply be recalculated for the overview instead of being re-entered in overview coordinates, thus facilitating the setting up of the monitoring camera. Another use of this transformation is that it enables improvements to the interface. For example, when the camera is in detailed mode, depicted in
Claims
1. A method for calibrating a camera including an image sensor having a center position, the method comprising:
- capturing an image view projected onto the image sensor by a lens, the projected image view having a center,
- detecting at least one image view boundary of the image view captured by the image sensor,
- determining, in at least one dimension, a projection center position corresponding to the center of the projected image view on the image sensor based on the detected boundary,
- determining an offset between the projection center position and a sensor center position, defined in at least one dimension, corresponding to the center of the image sensor capturing the projected image view, and
- moving the image sensor in relation to the lens based on the offset in order to arrive at a substantially zero offset in at least one dimension between the center of the image sensor and the center of the projected image view.
2. The method according to claim 1, further comprising:
- performing an additional capture of the image view projected onto the image sensor at a point in time after the moving of the sensor,
- detecting at least one image view boundary relating to the latter captured image view projected onto the image sensor,
- determining, in at least one dimension, an updated projection center position corresponding to the center of the projected image view on the image sensor based on the latter detected boundary,
- determining an offset between the updated projection center position and the sensor center position, defined in at least one dimension, corresponding to the center of the image sensor capturing the projected image view, and
- if the offset is not zero or not substantially zero, then moving the image sensor in relation to the lens based on the offset in order to arrive at a substantially zero offset in at least one dimension between the center of the image sensor and the center of the projected image view.
3. The method according to claim 1, wherein the image view is projected onto the image sensor through a wide angle lens which produces a circular image or at least a substantially circular image.
4. The method according to claim 3, wherein the determining of the position of the center of the projected image view includes calculating parameters defining a circle that are likely to represent the at least one detected edge relating to the projected image view.
5. The method according to claim 4, wherein calculating parameters defining a circle that are likely to represent the at least one detected edge relating to the projected image view is based on a Hough transform.
6. The method according to claim 1, wherein the camera comprises a pan and tilt enabled camera head including the image sensor, wherein the image view is projected onto the image sensor through a lens fixedly arranged in relation to a base of the camera, and wherein the moving of the image sensor includes moving of the camera head in relation to the fixedly arranged lens.
7. The method according to claim 6, wherein the moving of the camera head is performed as pan and/or tilt movements calculated from the offset between the center of projected image view and center of the image sensor capturing the projected image view.
8. The method according to claim 6, further comprising saving a tilt angle of the camera head in response to the offset between the projection center position and the sensor center position being determined to be zero or substantially zero.
9. A method for calibrating a camera comprising:
- a) selecting a position in an overview image view captured by the camera and presented to the operator,
- b) calculating a pan angle and a tilt angle corresponding to the selected position,
- c) moving the camera head to a position in which it may capture a detailed image view, not through the wide angle lens, having a center representing the calculated pan angle and tilt angle,
- d) adjusting the camera head, after the camera head has moved to the position, until the position selected in the overview image is centered in the detailed view,
- e) saving data representing the amount of adjustment used for centering the position in the detailed view,
- repeating the steps (a-e) until a predetermined amount of data relating to the adjustment has been saved, and
- estimating an error function based on the saved data.
10. The method according to claim 9, wherein the act of estimating an error function includes determining coefficients of a polynomial error function using Linear Least Square Estimation.
11. A method for transforming a selected position in an overview image captured by a camera to a pan and tilt angle for a camera that are to capture a more detailed view of the selected position, the method comprising:
- calculating the pan and tilt angle for the selected position based on the selected position in the overview image, and
- compensating the resulting pan tilt angle based on an error function achieved by means of the following process:
- a) selecting a position in an overview image view captured by the camera and presented to the operator,
- b) calculating a pan angle and a tilt angle corresponding to the selected position,
- c) moving the camera head to a position in which it may capture a detailed image view, not through the wide angle lens, having a center representing the calculated pan angle and tilt angle,
- d) adjusting the camera head, after the camera head has moved to the position, until the position selected in the overview image is centered in the detailed view,
- e) saving data representing the amount of adjustment used for centering the position in the detailed view,
- repeating the steps (a-e) until a predetermined amount of data relating to the adjustment has been saved, and
- estimating an error function based on the saved data.
Type: Application
Filed: Mar 14, 2013
Publication Date: Oct 3, 2013
Applicant: AXIS AB (Lund)
Inventors: Niklas Hansson (Horby), Andreas Palsson (Malmo)
Application Number: 13/828,058
International Classification: H04N 17/00 (20060101);