A method, an apparatus and a computer program product for focusing
A method for focusing may include receiving a first image stack of a first field of view, the first image stack including images captured with different focus from the first field of view; determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus; determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths; and estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
Various example embodiments relate to the field of digital imaging.
BACKGROUND
To get a sharp image of an object, the surface of the object needs to lie within the focus range of an imaging system. The distance to the focus range is determined by the optical configuration of the imaging system. Relative movement between the imaging system and the imaged object may cause the object surface to fall outside the focus range, in which case the optical configuration of the imaging system needs to be adjusted to retain focus.
When scanning samples with a digital microscope scanner, each field of view may need to be focused separately to keep the sample surface within the focus range. Focusing takes time and increases the overall scanning time. There is, therefore, a need for a solution that reduces the time needed for focusing.
SUMMARY
Various aspects include an apparatus, a method and a computer program product comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims.
According to a first aspect, there is provided a method for focusing, comprising receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view; determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus; determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths; and estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
According to an embodiment, the determining the first local sample thickness and the first sample tilt comprises eliminating effects caused by apparatus-specific measures from the spatial distribution of focus depths.
According to an embodiment, the apparatus-specific measures comprise one or more of a pre-defined field curvature and a pre-defined optical axis tilt.
According to an embodiment, the determining the first local sample thickness and the first sample tilt comprises subtracting terms accounting for pre-defined field curvature and/or pre-defined optical axis tilt from a focus depth model; determining coefficients defining the first local sample thickness and the first sample tilt by applying a linear estimation approach to the focus depth model.
According to an embodiment, the focus depth model comprises coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates, and the method further comprises capturing, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide; measuring an image pixel location according to image pixel coordinates from the image of the target; forming a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates; repeating the capturing, measuring and forming at least five times, wherein the known target is located at a different location each time; and determining coefficients of the transformation by applying a linear estimation approach to the pairs of equations.
According to an embodiment, the method further comprises receiving a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for the second field of view; determining, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus; determining a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths; estimating, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from a second field of view.
According to an embodiment, the method further comprises estimating a first focus setting based on the first local sample thickness and the first sample tilt; estimating a third focus setting based on the third local sample thickness and the third sample tilt; averaging the first focus setting and the third focus setting to obtain the focus setting for capturing the second image stack from the second field of view.
According to an embodiment, the method further comprises assigning a weight for the first focus setting based on the distance between the first field of view and the second field of view.
According to an embodiment, the method further comprises assigning a weight for the first focus setting based on a planarity of the first spatial distribution of focus depths.
According to a second aspect, there is provided an apparatus comprising at least one processor; at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view; determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus; determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths; and estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
According to an embodiment, the focus depth model comprises coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates, and the apparatus is further caused to perform capturing, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide; measuring an image pixel location according to image pixel coordinates from the image of the target; forming a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates; repeating the capturing, measuring and forming at least five times, wherein the known target is located at a different location each time; and determining coefficients of the transformation by applying a linear estimation approach to the pairs of equations.
According to an embodiment, the apparatus is further caused to perform receiving a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for the second field of view; determining, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus; determining a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths; estimating, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from a second field of view.
According to an embodiment, the apparatus is further caused to perform estimating a first focus setting based on the first local sample thickness and the first sample tilt; estimating a third focus setting based on the third local sample thickness and the third sample tilt; averaging the first focus setting and the third focus setting to obtain the focus setting for capturing the second image stack from the second field of view.
According to an embodiment, the apparatus is further caused to perform assigning a weight for the first focus setting based on the distance between the first field of view and the second field of view.
According to an embodiment, the apparatus is further caused to perform assigning a weight for the first focus setting based on a planarity of the first spatial distribution of focus depths.
According to an embodiment, the apparatus is a digital microscope scanner.
According to a third aspect, there is provided a computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to perform: receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view; determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus; determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths; and estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings. The drawings are schematic.
DESCRIPTION OF EXAMPLE EMBODIMENTS
In the following, several embodiments will be described in the context of digital microscope scanners. It is to be noted, however, that the invention is not limited to microscope scanners. In fact, the different embodiments have applications in any environment where focusing in digital imaging is required.
Microscopes are instruments that may be used to help humans see a magnified view of small samples, e.g. cells or wood fibres. When scanning objects, or samples, with a digital microscope scanner, each field of view needs to be focused carefully. When the object is in focus, each point on the camera sensor is a conjugate point to a point on the object surface. In other words, a focused and sharp image of the object is formed on the camera sensor. However, if the position of the sample is changed, the image formed on the camera sensor is no longer in focus.
The scanning apparatus 100 may comprise a radiator 112 for cooling purposes. Thermal energy may at least partly be transferred from the scanning apparatus 100 to the surrounding air via the body of the scanning apparatus 100. The scanning apparatus 100 comprises a diffuser 102 for forming a uniform light source. The scanning apparatus 100 comprises a collector lens 103 for gathering light from the diffuser 102. The scanning apparatus 100 comprises a diaphragm 104. The diaphragm may comprise an aperture. The size of the aperture may be constant or adjustable. The diaphragm 104 may be e.g. a rotating disk comprising different sized apertures. The diaphragm may comprise a blade structure with movable blades for adjusting the size of the aperture. The size of the aperture regulates the amount of light that passes through to the specimen under investigation. The scanning apparatus 100 comprises a condenser lens 105 for focusing light onto the specimen, i.e. sample 150.
The specimen 150 is attached on a slide 106. The scanning apparatus 100 comprises a stage 111 for the slide 106. The stage may comprise a hole for passing light through to illuminate the specimen 150. The specimen 150 may be set under a cover glass 107.
The scanning apparatus 100 comprises an objective 108 for collecting light from the specimen 150. The objective may be characterized by its magnification and numerical aperture. The objective comprises a plurality of lenses 120, 121, 122. The distance between the objective 108 and the specimen is the working distance WD.
The objective 108 may be an infinity corrected objective. In an infinity corrected objective system, the image distance is set to infinity and a tube lens may be used to focus the image. The scanning apparatus 100 may comprise a tube lens 109. The tube lens focuses the light passing through the objective onto a camera sensor 110. The tube lens 109 shortens the optical path of the light. By using a tube lens to shorten the optical path of the light, the size of the scanning apparatus 100 may be reduced. The tube lens 109 reduces magnification.
The tube lens may be a single lens or a system of more than one lens. The tube lens 109 may be a shape-changing lens, i.e. a lens whose focus is changed by changing its shape. An example of a shape-changing lens is a liquid lens. A liquid lens is a lens structure comprising two liquids that do not mix with each other, e.g. oil and water. The curvature of the liquid-liquid interface may be changed by applying electricity to the lens structure. Thus, the focal length of the liquid lens may be adjusted electronically. Another example of a shape-changing lens is based on a combination of optical fluids and a polymer membrane. A container is filled with an optical fluid and sealed off with a thin, elastic polymer membrane. The shape of the lens is changed by pushing a circular ring onto the center of the membrane, by exerting a pressure on the outer part of the membrane, or by pumping liquid into or out of the container. The ring may be moved manually or electrically.
The scanning apparatus 100 comprises a camera sensor 110 for capturing images of the specimen. The camera sensor may be e.g. a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor or an electron-multiplying CCD (EMCCD).
To get properly focused and sharp images of the specimen, the optical configuration, or focus, of the system may be adjusted. Focus may be adjusted in several ways, e.g. by changing the distance between the objective 108 and the sample, i.e. the working distance WD, and/or by changing the focal length of one of the lens elements within the objective-tube lens system, and/or by changing the distance between the camera sensor 110 and the tube lens 109 along the optical axis, i.e. the z-axis.
The focus of the system may be changed e.g. by changing the working distance WD. The working distance WD may be changed e.g. by moving the objective-camera system OCS along the z-axis and/or by moving the specimen stage 111 along the z-axis. The stage 111 is configured to change position. The scanning apparatus 100 comprises a motor 211 for moving the stage 111. The stage 111 is configured to move in different directions, e.g. x, y and z. The z-axis is defined as parallel to the optical axis. The x-axis and y-axis are orthogonal to the z-axis. The objective-camera system OCS is configured to change position. The scanning apparatus 100 comprises a motor 208 for moving the objective-camera system OCS. The objective-camera system OCS may be moved along the z-axis.
The focus of the system may be changed e.g. by changing the distance between the camera sensor 110 and the tube lens 109. The distance between the camera sensor 110 and the tube lens 109 may be changed e.g. by moving the camera sensor 110 along the z-axis and/or by changing the focal length of the liquid tube lens. The camera sensor 110 may be configured to change position along the z-axis. The scanning apparatus 100 may comprise a motor 210 for moving the camera sensor 110.
The scanning apparatus 100 comprises a control unit 250. The control unit may comprise or may be connected to a user interface UI1. The user interface may receive user input e.g. through a touch screen and/or a keypad. Alternatively, the user interface may receive user input from the internet, a personal computer or a smartphone via a communication connection. The communication connection may be e.g. a Bluetooth connection or a WiFi connection. The control unit may comprise e.g. a single board computer. The control unit is configured to control operation of the scanning apparatus. For example, the control unit may be configured to operate the motors 208, 210, 211.
In a digital scanning microscope, the best focus may vary within a single field of view and between different fields of view of the specimen.
If the optical axis 260 is not perpendicular to the stage movement direction 265, the focus depth may vary within a single field of view. Field curvature 262, caused by the curved nature of the optical elements and schematically depicted in the drawings, is another source of focus depth variation within a single field of view.
In addition to the reasons already mentioned, the focus depth may be affected by other factors when moving from one field of view to another. A sample plane 264 may be tilted with respect to the stage movement direction 265. The sample plane 264 is a surface of a sample slide on which the sample is placed. In addition, the sample plane 264 and/or the sample 150 itself may have a non-uniform thickness. These factors may cause focus depth variation within one field of view and when moving from one field of view to another.
Different colours of the sample may be focused at different distances (axial/longitudinal chromatic aberration). Focus depth estimation may be performed for each colour separately. Alternatively, focus depth estimation may be carried out for one colour and the focus depth then estimated for the other colours based on a known relationship between the focus depths of different colours. The relationship may be estimated beforehand by calibration measurements.
The focus of an individual image or image patch may be determined e.g. by filtering the image with a high-pass filter or a band-pass filter. The more high-frequency content is present, the better the focus.
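As a concrete illustration of such a high-pass focus metric, the sketch below scores an image by the variance of a discrete Laplacian. This is a minimal sketch, not the scanner's actual implementation; the function name `focus_measure` and the choice of the Laplacian kernel are assumptions.

```python
import numpy as np

def focus_measure(image):
    """Score an image by the variance of a discrete Laplacian.

    The Laplacian is a simple high-pass filter: the more high-frequency
    content the image contains, the larger the variance, so sharper
    (better focused) images score higher.
    """
    img = np.asarray(image, dtype=float)
    # 4-neighbour discrete Laplacian over the interior pixels.
    lap = (img[1:-1, :-2] + img[1:-1, 2:]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           - 4.0 * img[1:-1, 1:-1])
    return float(lap.var())

# A sharp checkerboard contains high frequencies; a uniform image has none.
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0
flat = np.full((32, 32), 127.5)
```

Comparing `focus_measure` across the frames of an image stack would then identify the best-focused frame.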
One way to get properly focused images of each field of view is to capture a stack of images of the field of view with different focus. Then, it is possible to determine the frame with the best focus from the stack of images and to form a final image of the whole sample by combining the best-focused frames of each field of view. In addition, a number of frames with focus near the best focus may be saved when the sample thickness exceeds the focus range of the imaging system, to form a multi-layer image of the sample. Information in the different layers of the image may be of interest e.g. for a pathologist.
Alternatively, a best focus depth may be determined from the stack of images. For example, a smaller number of frames is captured with different focus, resulting in a sparser frame stack. Sparser here means that the difference in focus depth between consecutive frames is larger than in the previous example, wherein a frame with the best focus is selected from the stack of images for the final image. The best focus depth may then be estimated to lie between two captured frames, and the actual image capture may be performed with the estimated best focus depth. Yet a further example is to use the information on best focus depths in the creation of a focus map. The focus map may be used to determine the correct focus depth for each field of view for the actual image scanning. However, in all of these examples, the more frames are captured for each stack of images for focusing purposes, the more time is consumed when scanning the sample.
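The idea of a best focus depth lying between two frames of a sparse stack can be sketched by fitting a parabola to the focus scores of the highest-scoring frame and its two neighbours. This is an illustrative sketch under assumed names (`best_focus_depth`) and an assumed parabolic interpolation; the document does not prescribe a specific interpolation method.

```python
import numpy as np

def best_focus_depth(depths, scores):
    """Estimate a best focus depth that may lie between captured frames.

    Fits a parabola through the highest-scoring frame and its two
    neighbours and returns the depth at the parabola vertex.
    """
    depths = np.asarray(depths, dtype=float)
    scores = np.asarray(scores, dtype=float)
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(depths[i])   # peak at the stack edge: no interpolation
    a, b, _ = np.polyfit(depths[i - 1:i + 2], scores[i - 1:i + 2], 2)
    return float(-b / (2.0 * a))  # vertex of the fitted parabola

# Focus scores of a sparse five-frame stack, peaking between depths 2 and 3.
stack_depths = [0.0, 1.0, 2.0, 3.0, 4.0]
stack_scores = [0.1, 0.4, 0.9, 0.8, 0.2]
```

With these scores the estimated best focus depth falls between the two best frames, so the actual capture can use a focus depth that was never in the sparse stack.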
The whole field of view is typically not in the best possible focus with the same focus depth. Spatial distribution of the best focus depth may be approximated within the field of view.
Different tones in the map depict different focus depths, i.e. the focus depths with which the best focus is achieved in different areas of the field of view. The best focus may be determined by filtering the image with a high-pass filter or a band-pass filter: the more high-frequency content is present, the better the focus. Thus, it may be determined from the stack of images captured of the field of view with different focus depth settings at which focus depth the object is in best focus. The variation in focus depth within the field of view FOV0 is visible in the map.
The method provides estimation of the best focus depth of a target field of view based on an acquired stack of images of an adjacent field of view. Adjacent may mean a neighboring field of view, i.e. a field of view bordering the target field of view. Alternatively, there may be some fields of view or pixels between the target field of view and the adjacent field of view. The method of focusing presented herein reduces the scanning time of the whole sample. The reduction in scanning time is achieved since the number of frames of an image stack that needs to be acquired from a field of view in order to determine the best focus may be reduced.
The first image stack may be captured by the scanning apparatus, and/or received from the memory of the scanning apparatus. Alternatively, the first image stack may be received from an external memory.
The method may further comprise capturing the second image stack from a second field of view using the estimated focus setting.
The field of view FOV1 is the first field of view. The field of view FOV2 is the second field of view. The second field of view is the target field of view, a focus setting for which is estimated by the method presented herein.
Focus depth within a single field of view may be modelled by a focus depth model. The focus depth model may be the following equation (1):
z(u, v, m, n) = k((u − u_c)^2 + (v − v_c)^2) + a_o(u − u_c) + b_o(v − v_c) + a_s s(u, v, m, n) + b_s t(u, v, m, n) + c,   (1)
wherein (u, v) are pixel coordinates of the captured image, (m, n) are stage motor control coordinates that determine which part of the sample stage is imaged, z(u, v, m, n) is the measured focus depth at the given pixel and motor position, and (u_c, v_c) is the image pixel location where the optical axis pierces the image plane. Coordinates (s, t) are coordinates on the stage surface that can be expressed as functions of given pixel coordinates (u, v) and stage motor control coordinates (m, n). Determination of these coordinate functions may be carried out by measuring the movement of imaged calibration targets with known positions on a sample stage when the stage is moved.
s(u, v, m, n) = s_u u + s_v v + s_m m + s_n n + s_0,
t(u, v, m, n) = t_u u + t_v v + t_m m + t_n n + t_0,
where the coefficients s_u, s_v, s_m, s_n, s_0 and t_u, t_v, t_m, t_n, t_0 are estimated from the measurements. Several fixed target locations, i.e. known targets, (s_1, t_1), (s_2, t_2), …, (s_i, t_i) with known positions are chosen from a calibration slide 500, for example crossings on a grid slide, where the spacing of the grid 510 is known. The known targets may be located at different positions. Alternatively to the grid slide, the calibration slide may be a slide having other fixed target locations, such as lines, dots or other patterns. Then, several stage motor control coordinates (m_j, n_j) are chosen at which at least one of the stage locations (s_i, t_i) is visible in the captured image 520. An image 530 shows a magnified view of the image 520, and a grid 540 represents the pixels of the image 530. If the location (s_i, t_i) is visible in an image 530 captured at motor coordinates (m_j, n_j), its pixel location (u_ij, v_ij) can be measured from the image. This gives a pair of equations of the form
s_i = s_u u_ij + s_v v_ij + s_m m_j + s_n n_j + s_0,
t_i = t_u u_ij + t_v v_ij + t_m m_j + t_n n_j + t_0.
At least five such measurements, from known targets each located at a different location, need to be made in order to estimate the ten coefficients s_u, s_v, s_m, s_n, s_0 and t_u, t_v, t_m, t_n, t_0 from the resulting linear system of ten equations. With more than five measurements, the effect of pixel location measurement error and motor location error can be reduced, for example by applying the least-squares method to the resulting overdetermined system of linear equations. Additional measurements can also be used to filter out erroneous detections of the calibration targets using outlier detection methods.
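The calibration procedure above reduces to two independent five-unknown least-squares problems, since the s-equations and the t-equations share no coefficients. A minimal sketch, assuming measurements are given as (u, v, m, n, s, t) tuples (the function name and data layout are assumptions):

```python
import numpy as np

def estimate_stage_transform(measurements):
    """Estimate (s_u, s_v, s_m, s_n, s_0) and (t_u, t_v, t_m, t_n, t_0).

    Each measurement is a tuple (u, v, m, n, s, t): the pixel location
    (u, v) of a known target at stage location (s, t), imaged at motor
    coordinates (m, n). The s- and t-equations share no coefficients,
    so two five-unknown systems are solved separately by least squares.
    """
    meas = np.asarray(measurements, dtype=float)
    if len(meas) < 5:
        raise ValueError("at least five measurements are needed")
    A = np.column_stack([meas[:, :4], np.ones(len(meas))])
    s_coef, *_ = np.linalg.lstsq(A, meas[:, 4], rcond=None)
    t_coef, *_ = np.linalg.lstsq(A, meas[:, 5], rcond=None)
    return s_coef, t_coef

# Synthetic check: recover known (hypothetical) coefficients from exact data.
true_s = np.array([0.01, 0.0, 1.0, 0.0, 5.0])
true_t = np.array([0.0, 0.01, 0.0, 1.0, -2.0])
rng = np.random.default_rng(0)
uvmn = rng.uniform(0.0, 100.0, size=(8, 4))
design = np.column_stack([uvmn, np.ones(8)])
meas = np.column_stack([uvmn, design @ true_s, design @ true_t])
s_coef, t_coef = estimate_stage_transform(meas)
```

With more than five measurements, as here, `lstsq` performs exactly the least-squares error reduction the text describes.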
Referring back to equation (1), the coefficient k is the strength of the field curvature, a_o and b_o are the planar trends caused to the focus depth by the optical axis tilt, and a_s and b_s are the planar trends caused to the focus depth by the sample tilt. Coefficient c depends on the local sample thickness. Thus, the first term of equation (1) models the field curvature, the second and third terms model the optical axis tilt, the fourth and fifth terms model the sample tilt, and the sixth term depends on the local sample thickness.
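For reference, equation (1) can be transcribed directly into code. The coordinate functions and all coefficient values in the example call below are hypothetical, chosen only to illustrate the term-by-term structure:

```python
def focus_depth_model(u, v, m, n, *, k, a_o, b_o, a_s, b_s, c, uc, vc, s, t):
    """Evaluate equation (1) at pixel (u, v) and motor position (m, n).

    s and t are the calibrated coordinate functions s(u, v, m, n) and
    t(u, v, m, n) mapping pixel and motor coordinates to stage coordinates.
    """
    du, dv = u - uc, v - vc
    return (k * (du ** 2 + dv ** 2)   # field curvature
            + a_o * du + b_o * dv     # optical axis tilt
            + a_s * s(u, v, m, n)     # sample tilt, s direction
            + b_s * t(u, v, m, n)     # sample tilt, t direction
            + c)                      # local sample thickness term

# Hypothetical coordinate functions and coefficient values, for illustration only.
s_fn = lambda u, v, m, n: 0.01 * u + m
t_fn = lambda u, v, m, n: 0.01 * v + n
z = focus_depth_model(10.0, 20.0, 1.0, 2.0,
                      k=1e-5, a_o=0.0, b_o=0.0, a_s=0.5, b_s=-0.2, c=9.0,
                      uc=0.0, vc=0.0, s=s_fn, t=t_fn)
```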
As described above, effects of the field curvature and the optical axis tilt are device specific and their effect on the focus depth may be taken into account via calibration measurements. A calibration sample having a known uniform thickness is scanned to estimate the device specific field curvature and the optical axis tilt. During calibration, focus depths z(u,v,m,n) of a sample of uniform thickness at multiple stage locations (s, t) with multiple motor configurations (m, n) are measured.
Given this data, i.e. the device specific field curvature and the optical axis tilt, and a priori knowledge of the optical axis center location (u_c, v_c) and the functions s(u, v, m, n) and t(u, v, m, n), each measurement z(u, v, m, n) constitutes an equation (1) with the unknown coefficients k, a_o, b_o, a_s, b_s, and c, and multiple such measurements constitute an overdetermined system of linear equations.
Coefficients of an overdetermined linear system may be estimated with various methods and with different error minimization criteria. For example, the estimation may be performed by minimizing the least squares error, optionally taking into account a priori information on the measurement error and/or the expected coefficient values.
The coefficients k, a_o and b_o corresponding to the field curvature and the optical axis tilt are device specific constants which are calculated via the calibration measurements. The coefficients a_s, b_s, and c are sample specific and can vary from one stage position to another, i.e. when moving from one field of view to another. They are an unused by-product of the calibration calculations, as they are specific to the calibration slide used.
The apparatus-specific measures may be eliminated from the spatial distribution of focus depths. During imaging of an unknown sample at a known motor position (m, n), the focus depths z(u, v, m, n) are measured at multiple pixel locations (u, v), the effects of field curvature and optical axis tilt are subtracted from the focus depths, and the coefficients a_s, b_s, and c are determined from the adjusted focus depth data
z′(u, v, m, n) = z(u, v, m, n) − k((u − u_c)^2 + (v − v_c)^2) − a_o(u − u_c) − b_o(v − v_c).   (2)
That is, the coefficients a_s, b_s, and c are estimated from the adjusted measurement model
z′(u, v, m, n) = a_s s(u, v, m, n) + b_s t(u, v, m, n) + c   (3)
by solving the resulting overdetermined linear system of equations.
Once the coefficients a_s, b_s, and c are estimated, the focus depth at a nearby motor position (m′, n′) can then be estimated as
z(u_c, v_c, m′, n′) = a_s s(u_c, v_c, m′, n′) + b_s t(u_c, v_c, m′, n′) + c   (4)
at the image optical axis center (u_c, v_c).
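Equations (2) to (4) amount to subtracting the calibrated device terms, fitting a plane in stage coordinates, and evaluating that plane at the neighbouring field of view. A sketch with synthetic data; the coordinate functions and every coefficient value below are assumptions chosen for illustration:

```python
import numpy as np

def fit_sample_plane(us, vs, zs, m, n, *, k, a_o, b_o, uc, vc, s, t):
    """Equations (2)-(3): remove the calibrated device terms, fit a plane.

    Returns the sample-specific coefficients (a_s, b_s, c).
    """
    us, vs, zs = (np.asarray(x, dtype=float) for x in (us, vs, zs))
    du, dv = us - uc, vs - vc
    # Equation (2): subtract field curvature and optical axis tilt.
    z_adj = zs - k * (du ** 2 + dv ** 2) - a_o * du - b_o * dv
    # Equation (3): least-squares plane fit in stage coordinates (s, t).
    A = np.column_stack([s(us, vs, m, n), t(us, vs, m, n), np.ones(len(us))])
    (a_s, b_s, c), *_ = np.linalg.lstsq(A, z_adj, rcond=None)
    return float(a_s), float(b_s), float(c)

def predict_focus_depth(a_s, b_s, c, m2, n2, *, uc, vc, s, t):
    """Equation (4): focus depth at the optical-axis centre of a nearby FOV."""
    return a_s * s(uc, vc, m2, n2) + b_s * t(uc, vc, m2, n2) + c

# Synthetic data generated from assumed coordinate functions and coefficients.
s = lambda u, v, m, n: 0.01 * np.asarray(u) + m
t = lambda u, v, m, n: 0.01 * np.asarray(v) + n
uc, vc, k, a_o, b_o = 50.0, 50.0, 1e-5, 2e-3, -1e-3
a_true, b_true, c_true = 0.5, -0.2, 10.0
uu, vv = np.meshgrid(np.arange(0.0, 100.0, 10.0), np.arange(0.0, 100.0, 10.0))
us, vs = uu.ravel(), vv.ravel()
m1, n1 = 3.0, 4.0
zs = (k * ((us - uc) ** 2 + (vs - vc) ** 2) + a_o * (us - uc)
      + b_o * (vs - vc) + a_true * s(us, vs, m1, n1)
      + b_true * t(us, vs, m1, n1) + c_true)
coeffs = fit_sample_plane(us, vs, zs, m1, n1, k=k, a_o=a_o, b_o=b_o,
                          uc=uc, vc=vc, s=s, t=t)
```

Here the plane fitted in one field of view (m1, n1) is used to predict the focus depth at a nearby motor position, exactly the role equation (4) plays in the method.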
According to an embodiment, the coefficients a_s, b_s, and c may be determined at more than one location around the target location (m′, n′). Multiple estimates obtained this way may be combined to obtain a better estimate. For example, the estimates may be averaged.
Alternatively, the estimate may be a weighted average. The weight may be a confidence metric estimating the reliability of the focus depth estimation. The weight may depend on the distance between the first field of view and the target field of view: a larger weight may be assigned when the estimate is based on a field of view next to the target field of view, and a smaller weight when the estimate is based on a field of view further from the target field of view. Alternatively, the weight may depend on the planarity of the measurements z′(u, v, m, n), i.e. the residual of the measurement model (3). The residual indicates how well the model fits reality. Low planarity (high residual) suggests that the sample thickness is locally very non-uniform, and the focus depth at neighboring fields of view cannot be reliably predicted from the current field of view.
(u_c, v_c)_FOV1 is the distortion center of a first field of view FOV1. (u_c, v_c)_FOV3 is the distortion center of a third field of view FOV3. (u_c, v_c)_FOV2 is the distortion center of a second field of view FOV2. The second field of view FOV2 is the target field of view. The first field of view and the third field of view are adjacent fields of view for the second field of view.
A first image stack of a first field of view FOV1 is received. A first spatial distribution of focus depths in which different areas in the first field of view are in focus is determined from the first image stack. A first local sample thickness and a first sample tilt are determined for the first field of view based on the first spatial distribution of focus depths. A third image stack of the third field of view FOV3 is received. It is noted that other image stacks may be captured from other fields of view between capturing the first image stack and the third image stack.
The third image stack comprises images captured with different focus at the third field of view. A third spatial distribution of focus depths in which different areas in the third field of view are in focus is determined from the third image stack. A third local sample thickness and a third sample tilt are determined based on the third spatial distribution of focus depths. A focus setting for capturing a second image stack of a second field of view is estimated using equation (4) based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt. The two estimates based on the first image stack and the third image stack may be e.g. averaged.
A first focus setting based on the first local sample thickness and the first sample tilt may be estimated. A third focus setting based on the third local sample thickness and the third sample tilt may be estimated. The first focus setting and the third focus setting may be averaged to obtain the focus setting for capturing the second image stack from the second field of view.
Alternatively, the estimate may be a weighted average. Consider, for example, the spatial distributions of focus depths in the first field of view FOV1 and the third field of view FOV3.
A residual r_1 for FOV1 and a residual r_3 for FOV3 may be calculated, for example, as the mean squared deviation of the adjusted focus depth measurements from the fitted planar model (3), averaged over the N measurements in the field of view in question. The higher the residual, the less reliable a planar model is for estimating the spatial distribution of focus depths in that FOV. Weights for the first focus setting (based on FOV1) and the third focus setting (based on FOV3) may then be chosen in inverse relation to the residuals r_1 and r_3.
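The exact residual and weight formulas are not reproduced in the text above; the sketch below shows one plausible realization (a mean squared plane-fit residual with normalized inverse-residual weights), which is an assumption, not necessarily the patented formula:

```python
import numpy as np

def plane_residual(z_adj, a_s, b_s, c, s_vals, t_vals):
    """Mean squared residual of the planar model (3) over N measurements.

    A high residual means low planarity: the plane fits the adjusted
    focus depths poorly. One assumed residual definition, for illustration.
    """
    pred = a_s * np.asarray(s_vals) + b_s * np.asarray(t_vals) + c
    err = np.asarray(z_adj) - pred
    return float(np.mean(err ** 2))

def planarity_weights(r1, r3, eps=1e-12):
    """Normalized inverse-residual weights: a flatter FOV weighs more."""
    w1, w3 = 1.0 / (r1 + eps), 1.0 / (r3 + eps)
    total = w1 + w3
    return w1 / total, w3 / total
```

With this scheme, a field of view whose focus depths are three times less planar contributes a quarter of the weight, so the weighted average leans on the more reliable neighbour.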
Image stacks of the fields of view may be acquired such that the camera sensor captures a first plurality of images from a first position at a first rate while the focus setting is changed in a continuous motion and a flash unit flashes at a second rate. The first rate and the second rate are synchronized. The captured first plurality of images comprises images captured with different focus settings from the first position.
The focus may be changed using the liquid lens. The focal length of the liquid lens may be controlled and adjusted by applying a driving current. The driving current, image capture and the flash unit may be synchronized such that a plurality of images from a desired position may be captured, wherein the captured plurality of images comprises images captured with different focus setting from the desired position.
The apparatus, e.g. a digital microscope scanner, may comprise means for receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view. The apparatus may comprise means for determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus. The apparatus may comprise means for determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths. The apparatus may comprise means for estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
The apparatus may comprise means for capturing the second image stack from the second field of view using the estimated focus setting.
The determining the first local sample thickness and the first sample tilt may comprise eliminating effects caused by apparatus-specific measures from the spatial distribution of focus depths. The apparatus-specific measures may comprise one or more of a pre-defined field curvature and a pre-defined optical axis tilt.
The determining the first local sample thickness and the first sample tilt may comprise subtracting terms accounting for pre-defined field curvature and/or pre-defined optical axis tilt from a focus depth model and determining coefficients defining the first local sample thickness and the first sample tilt by applying a linear estimation approach to the focus depth model.
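A minimal sketch of that linear estimation, assuming a focus depth model of the form z(x, y) = c0 + c1·x + c2·y + curvature(x, y) with a known, apparatus-specific curvature term; the parabolic curvature and all numbers are illustrative assumptions:

```python
import numpy as np

def fit_sample_plane(xy, z, curvature):
    """Least-squares plane fit after subtracting the pre-defined curvature.

    Returns [c0, c1, c2]: c0 acts as the local thickness/offset term,
    (c1, c2) as the sample tilt.
    """
    x, y = xy[:, 0], xy[:, 1]
    z_plane = z - curvature(x, y)                # remove apparatus-specific term
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z_plane, rcond=None)
    return coeffs

# Synthetic focus-depth measurements: a tilted plane plus parabolic curvature.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
curv = lambda x, y: 0.05 * (x ** 2 + y ** 2)
z = 5.0 + 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + curv(xy[:, 0], xy[:, 1])
c0, c1, c2 = fit_sample_plane(xy, z, curv)
```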
The focus depth model may comprise coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates. The apparatus may comprise means for capturing, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide. The apparatus may comprise means for measuring an image pixel location according to image pixel coordinates from the image of the target. The apparatus may comprise means for forming a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates. The apparatus may comprise means for repeating the capturing, measuring and forming at least five times. The apparatus may comprise means for determining coefficients of the transformation by applying a linear estimation approach to the pairs of equations.
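The calibration step above can be sketched as follows. The exact model form is an assumption, not taken from the patent: a linear transformation with ten unknown coefficients is assumed, which would explain why at least five capture/measure/form repetitions, each contributing a pair of equations, suffice for the linear estimation.

```python
import numpy as np

# Assumed linear model (10 coefficients, so >= 5 targets x 2 equations):
#   X = a1*u + a2*v + a3*p + a4*q + a5
#   Y = b1*u + b2*v + b3*p + b4*q + b5
# (u, v): image pixel coordinates measured from the image of the target
# (p, q): stage control coordinates used for the capture
# (X, Y): known stage coordinates of the target on the calibration slide
def fit_transformation(uv, pq, XY):
    rows = np.column_stack([uv, pq, np.ones(len(uv))])
    a, *_ = np.linalg.lstsq(rows, XY[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(rows, XY[:, 1], rcond=None)
    return a, b

# Synthetic calibration data consistent with known "true" coefficients.
rng = np.random.default_rng(1)
uv = rng.uniform(0.0, 2000.0, size=(6, 2))    # pixel coordinates
pq = rng.uniform(0.0, 50.0, size=(6, 2))      # stage control coordinates
a_true = np.array([1e-3, 0.0, 1.0, 0.0, 0.1])
b_true = np.array([0.0, 1e-3, 0.0, 1.0, -0.2])
M = np.column_stack([uv, pq, np.ones(6)])
XY = np.column_stack([M @ a_true, M @ b_true])

a_est, b_est = fit_transformation(uv, pq, XY)  # recovers the coefficients
```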
The apparatus may comprise means for receiving a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for the second field of view. The apparatus may comprise means for determining, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus. The apparatus may comprise means for determining a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths. The apparatus may comprise means for estimating, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from a second field of view.
The apparatus may comprise means for estimating a first focus setting based on the first local sample thickness and the first sample tilt. The apparatus may comprise means for estimating a third focus setting based on the third local sample thickness and the third sample tilt. The apparatus may comprise means for averaging the first focus setting and the third focus setting to obtain the focus setting for capturing the second image stack from the second field of view.
The apparatus may comprise means for assigning a weight for the first focus setting based on the distance between the first field of view and the second field of view.
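One possible distance-based weighting, sketched under the assumption that weights are inversely proportional to the distance between FOV centres (the concrete scheme is not specified in this passage):

```python
def distance_weights(d1, d3):
    """Weights inversely proportional to distance from the second FOV
    (assumed scheme): a closer neighbouring FOV gets the larger weight."""
    inv1, inv3 = 1.0 / d1, 1.0 / d3
    w1 = inv1 / (inv1 + inv3)
    return w1, 1.0 - w1

# FOV1 is half as far from FOV2 as FOV3 is, so it gets twice the weight.
w1, w3 = distance_weights(1.0, 2.0)
```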
It is obvious that the present invention is not limited solely to the above-presented embodiments, but it can be modified within the scope of the appended claims.
Claims
1. A method for focusing, comprising:
- receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths;
- receiving a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for a second field of view;
- determining, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus;
- determining a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths; and
- estimating, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from the second field of view.
2. The method according to claim 1, wherein the determining the first local sample thickness and the first sample tilt comprises
- eliminating effects caused by apparatus-specific measures from the spatial distribution of focus depths.
3. The method according to claim 2, wherein the apparatus-specific measures comprise one or more of a pre-defined field curvature and a pre-defined optical axis tilt.
4. The method according to claim 1, further comprising
- estimating a first focus setting based on the first local sample thickness and the first sample tilt;
- estimating a third focus setting based on the third local sample thickness and the third sample tilt;
- averaging the first focus setting and the third focus setting to obtain the focus setting for capturing the second image stack from the second field of view.
5. The method according to claim 4, further comprising
- assigning a weight for the first focus setting based on the distance between the first field of view and the second field of view.
6. The method according to claim 4, further comprising
- assigning a weight for the first focus setting based on a planarity of the first spatial distribution of focus depths.
7. A method for focusing, comprising:
- receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths, wherein determining the first local sample thickness and the first sample tilt comprises:
- subtracting terms accounting for pre-defined field curvature and/or pre-defined optical axis tilt from a focus depth model;
- determining coefficients defining the first local sample thickness and the first sample tilt by applying a linear estimation approach to the focus depth model; and
- wherein the focus depth model comprises coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates, and the method further comprises:
- capturing, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide;
- measuring an image pixel location according to image pixel coordinates from the image of the target;
- forming a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates;
- repeating the capturing, measuring and forming at least five times, wherein the known target is located at a different location each time; and
- determining coefficients of the transformation by applying a linear estimation approach to the pairs of equations; and
- the method further comprises estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
8. An apparatus comprising at least one processor; at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
- receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths;
- receiving a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for a second field of view;
- determining, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus;
- determining a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths; and
- estimating, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from the second field of view.
9. The apparatus according to claim 8, further caused to perform:
- estimating a first focus setting based on the first local sample thickness and the first sample tilt;
- estimating a third focus setting based on the third local sample thickness and the third sample tilt; and
- averaging the first focus setting and the third focus setting to obtain the focus setting for capturing the second image stack from the second field of view.
10. The apparatus according to claim 9, further caused to perform assigning a weight for the first focus setting based on the distance between the first field of view and the second field of view.
11. The apparatus according to claim 9, further caused to perform:
- assigning a weight for the first focus setting based on a planarity of the first spatial distribution of focus depths.
12. The apparatus according to claim 8, wherein the apparatus is a digital microscope scanner.
13. The apparatus according to claim 8, wherein the determining the first local sample thickness and the first sample tilt comprises eliminating effects caused by apparatus-specific measures from the spatial distribution of focus depths.
14. The apparatus according to claim 13, wherein the apparatus-specific measures comprise one or more of a pre-defined field curvature and a pre-defined optical axis tilt.
15. An apparatus comprising:
- at least one processor;
- at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform:
- receiving a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determining, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determining a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths, wherein determining the first local sample thickness and the first sample tilt comprises:
- subtracting terms accounting for pre-defined field curvature and/or pre-defined optical axis tilt from a focus depth model; and
- determining coefficients defining the first local sample thickness and the first sample tilt by applying a linear estimation approach to the focus depth model;
- wherein the focus depth model comprises coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates, and the apparatus is further caused to perform:
- capturing, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide;
- measuring an image pixel location according to image pixel coordinates from the image of the target;
- forming a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates;
- repeating the capturing, measuring and forming at least five times, wherein the known target is located at a different location each time; and
- determining coefficients of the transformation by applying a linear estimation approach to the pairs of equations; and the apparatus is further caused to perform
- estimating, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
16. The apparatus according to claim 15, wherein the apparatus is a digital microscope scanner.
17. A computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:
- receive a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determine, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determine a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths;
- receive a third image stack of a third field of view, the third image stack comprising images captured with different focus from the third field of view and wherein the first field of view and the third field of view are adjacent fields of view for a second field of view;
- determine, from the third image stack, a third spatial distribution of focus depths in which different areas in the third field of view are in focus;
- determine a third local sample thickness and a third sample tilt based on the third spatial distribution of focus depths; and
- estimate, based on the first local sample thickness, the first sample tilt, the third local sample thickness and the third sample tilt, a focus setting for capturing a second image stack from the second field of view.
18. The computer program product according to claim 17, wherein the determining the first local sample thickness and the first sample tilt comprises
- eliminating effects caused by apparatus-specific measures from the spatial distribution of focus depths.
19. The computer program product according to claim 18, wherein the apparatus-specific measures comprise one or more of a pre-defined field curvature and a pre-defined optical axis tilt.
20. A computer program product embodied on a non-transitory computer readable medium, comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to:
- receive a first image stack of a first field of view, the first image stack comprising images captured with different focus from the first field of view;
- determine, from the first image stack, a first spatial distribution of focus depths in which different areas in the first field of view are in focus;
- determine a first local sample thickness and a first sample tilt in the first field of view based on the first spatial distribution of focus depths, wherein determining the first local sample thickness and the first sample tilt comprises:
- subtracting terms accounting for pre-defined field curvature and/or pre-defined optical axis tilt from a focus depth model;
- determining coefficients defining the first local sample thickness and the first sample tilt by applying a linear estimation approach to the focus depth model, wherein the focus depth model comprises coordinate functions transforming an image pixel coordinate and a stage control coordinate pair to stage coordinates, and wherein the computer program code is further configured to cause the apparatus or the system to:
- capture, at a location according to the stage control coordinates, an image of a known target located at a location according to the stage coordinates on a calibration slide;
- measure an image pixel location according to image pixel coordinates from the image of the target;
- form a pair of equations modeling a transformation from the image pixel coordinate and the stage control coordinate pair to the stage coordinates;
- repeat the capturing, measuring and forming at least five times, wherein the known target is located at a different location each time; and
- determine coefficients of the transformation by applying a linear estimation approach to the pairs of equations; and the computer program code is further configured to cause the apparatus or the system to
- estimate, based on the first local sample thickness and the first sample tilt, a focus setting for capturing a second image stack from a second field of view.
20040256538 | December 23, 2004 | Olson |
20150264270 | September 17, 2015 | Watanabe |
Type: Grant
Filed: May 21, 2018
Date of Patent: Jul 2, 2019
Assignee: GRUNDIUM OY (Tampere)
Inventors: Matti Pellikka (Lempäälä), Markus Vartiainen (Tampere)
Primary Examiner: Xi Wang
Application Number: 15/984,832
International Classification: H04N 5/232 (20060101); G02B 21/24 (20060101); G06T 3/20 (20060101); G06T 3/00 (20060101); G02B 7/34 (20060101);