IMAGE GENERATING APPARATUS AND IMAGE GENERATING METHOD

An image generating apparatus which generates an observation image from an original image obtained by imaging a subject, has a viewpoint image generating unit configured to generate a plurality of viewpoint images having mutually different line-of-sight directions by using the original image; and an observation image generating unit configured to generate an image, for which a scattered light component included in the original image has been extracted or enhanced, as the observation image by using the plurality of viewpoint images. The observation image is generated, for example, by extracting or enhancing a difference among the plurality of viewpoint images.

Description
TECHNICAL FIELD

The present invention relates to an image generating apparatus and an image generating method for generating an image suitable for observation from an image obtained by imaging a subject.

BACKGROUND ART

In the field of pathology, there are virtual slide systems which capture and digitize an image of a test sample placed on a prepared slide to enable pathological diagnosis on a display as an alternative to optical microscopes that are pathological diagnostic tools. Digitization of a pathological diagnostic image by a virtual slide system enables a conventional optical microscopic image of a test sample to be handled as digital data. As a result, advantages such as expediting of remote diagnosis, explanation to patients using digital images, sharing of rare cases, and improved efficiency in teaching and learning can be achieved.

In addition, since a wide variety of image processing can be performed on digital data, various diagnosis supporting functions for supporting diagnosis performed by pathologists are being proposed with respect to images captured by virtual slide systems.

Conventionally, the following proposals have been made as examples of diagnosis supporting functions.

Non-Patent Literature 1 discloses a method of extracting cell membranes from a pathologic tissue sample image of a liver using digital image processing technology, with the objective of calculating an N/C ratio (a ratio occupied by a nucleus relative to cytoplasm), which is an important finding for diagnosing cancer. In Non-Patent Literature 1, color information of three types of observation images, namely, a bright field observation image, a dark field observation image, and a phase difference observation image, is combined to improve the correct extraction rate of cell membranes as compared to using only a bright field observation image.

In addition, besides the cell membrane, clarifying a cell boundary (on a cell boundary between cells, an intercellular substance (interstice) exists in addition to a cell membrane) and a boundary between a cell and a tube or a cavity is very important when performing diagnoses. Since a clear boundary enables a doctor to more easily estimate a complicated three-dimensional structure of a liver from a sample, a more accurate diagnosis can be achieved from limited information.

Furthermore, the boundary between a cell and a tube or a cavity is also information that is useful for accurately calculating an N/C ratio. For example, since a pathologic tissue sample of a liver may be roughly divided into a region of cells including nuclei and cytoplasm and a region of sinusoids that are blood vessels for supplying substances to hepatocytes, the sinusoid region in which a cell does not exist must be correctly eliminated in order to calculate a correct N/C ratio.

CITATION LIST

Patent Literature

  • [Patent Literature 1] Japanese Patent Application Laid-open No. 2007-128009

Non Patent Literature

  • [Non-Patent Literature 1] Namiko Torizawa, Masanobu Takahashi, and Masayuki Nakano, “Using Multi-imaging Technique for Cell Membrane Extraction in Hepatic Histologic Images”, IEICE General Conference, D-16-9, 2009/3
  • [Non-Patent Literature 2] Kazuya Kodama, Akira Kubota, “Virtual Bokeh Reconstruction from a Single System of Lenses”, The Journal of The Institute of Image Information and Television Engineers 65 (3), pp. 372-381, March 2011
  • [Non-Patent Literature 3] Kazuya Kodama, Akira Kubota, “Scene Refocusing Based on Linear Coupling on a Frequency Region”, Image Media Processing Symposium (IMPS 2012), 1-3.02, pp. 45-46, October 2012
  • [Non-Patent Literature 4] Kazuya Kodama, Akira Kubota, “Efficient Reconstruction of All-in-Focus Images Through Shifted Pinholes from Multi-Focus Images for Dense Light Field Synthesis and Rendering”, IEEE Trans. Image Processing, Vol. 22, Issue 11, 15 pages, November 2013

SUMMARY OF INVENTION

However, the conventional art described above has the following problems.

In Non-Patent Literature 1, in order to acquire a bright field observation image, a dark field observation image, and a phase difference observation image, a phase difference objective lens and a common condenser are mounted on a bright field microscope and photography is performed by switching between the phase difference objective lens and the common condenser. Therefore, there is a cost-related issue in that an optical microscope for bright field observation requires additional parts, and an issue of inconvenience in that photography requires the optical system and exposure conditions to be modified.

The present invention has been made in consideration of such problems, and an object thereof is to provide a novel technique for generating, by image processing from an original image obtained by imaging a subject, an observation image suitable for observing and diagnosing the subject.

The present invention in its first aspect provides an image generating apparatus which generates an observation image from an original image obtained by imaging a subject, the image generating apparatus comprising: a viewpoint image generating unit configured to generate a plurality of viewpoint images having mutually different line-of-sight directions by using the original image; and an observation image generating unit configured to generate an image, for which a scattered light component included in the original image has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

The present invention in its second aspect provides an image generating apparatus which generates an observation image from an original image obtained by imaging a subject, the image generating apparatus comprising: a viewpoint image generating unit configured to generate a plurality of viewpoint images having mutually different line-of-sight directions by using the original image; and an observation image generating unit configured to generate an image, for which a difference among the plurality of viewpoint images has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

The present invention in its third aspect provides an image generating method of generating an observation image by using a computer from an original image obtained by imaging a subject, the image generating method comprising: generating a plurality of viewpoint images with respectively different line-of-sight directions by using the original image; and generating an image, for which a scattered light component included in the original image has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

The present invention in its fourth aspect provides an image generating method of generating an observation image by using a computer from an original image obtained by imaging a subject, the image generating method comprising: generating a plurality of viewpoint images with respectively different line-of-sight directions by using the original image; and generating an image, for which a difference among the plurality of viewpoint images has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

The present invention in its fifth aspect provides a program that causes a computer to execute the respective steps of the image generating method according to the present invention.

According to the present invention, an observation image suitable for observation and diagnosis of a subject can be generated by image processing from an original image obtained by imaging the subject.

Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a configuration diagram of an image generation and display system according to an embodiment of the present invention;

FIG. 2 shows a display example for explaining functions of an image display application;

FIG. 3 is a diagram showing an internal configuration of an image generating apparatus;

FIG. 4 is a diagram showing a prepared slide that is an example of a subject;

FIG. 5 is a diagram schematically showing a configuration of an image pickup apparatus for imaging a subject;

FIGS. 6A and 6B are schematic views for explaining a reason for contrast enhancement in a viewpoint image;

FIGS. 7A and 7B are diagrams showing an example of a GUI of a scattering image extracting function according to Example 1;

FIG. 8 is a flow chart showing an overall flow of scattering image extraction processing according to Example 1;

FIG. 9 is a flow chart showing viewpoint-decomposed scattering image extraction/synthesis processing S802 according to Example 1;

FIGS. 10A to 10C are diagrams showing examples of GUIs for various settings of the scattering image extracting function according to Example 1;

FIG. 11 is a flow chart showing details of viewpoint scattering image extraction processing S903 according to Example 1;

FIG. 12 is a flow chart showing a processing flow of N/C ratio calculation according to Example 1;

FIGS. 13A and 13B are schematic views showing a relationship between a polar angle of a viewpoint and an angle (observation angle) formed between a line-of-sight direction and an optical axis;

FIG. 14 is a schematic view showing unevenness existing on a surface of a pathological sample in a prepared slide;

FIGS. 15A to 15C are schematic views showing an intensity of scattered light at an observation angle φ on various planes shown in FIG. 14;

FIG. 16 is a flow chart showing details of the viewpoint scattering image extraction processing S903 according to Example 2;

FIG. 17 is a flow chart showing generation of a focus position and scattering composite image according to Example 3;

FIGS. 18A to 18F are schematic views showing a difference in Z stack images when objects exist at different Z positions;

FIGS. 19A to 19H are schematic views showing viewpoint images and out-of-focus blurs when objects exist at different Z positions;

FIGS. 20A to 20F are schematic views describing cancelation of an out-of-focus blur by addition of a viewpoint scattering synthesized image;

FIGS. 21A to 21F are schematic views describing cancelation of an out-of-focus blur by subtraction of a viewpoint scattering synthesized image;

FIG. 22 is a flow chart showing details of an image composition processing step S1704 according to Example 3;

FIGS. 23A and 23B are diagrams showing an example of a GUI of a setting screen according to Example 3;

FIG. 24 is a flow chart showing focus position and scattering composite image generation processing according to Example 4;

FIGS. 25A and 25B are sectional views of two viewpoint weighting functions whose relative intensity differs depending on an observation angle φ; and

FIGS. 26A and 26B show examples of a viewpoint weighting function for extracting scattered light information.

DESCRIPTION OF EMBODIMENTS

(Overall Configuration)

FIG. 1 shows a configuration of an image generation and display system according to an embodiment of the present invention.

Connected to an image generating apparatus (a host computer) 100 are an input operation device 110 which accepts input from a user and a display 120 for presenting the user with images or the like outputted from the image generating apparatus 100. A keyboard 111, a mouse 112, a dedicated controller 113 (for example, a trackball or a touch pad) for improving operability of a user, and the like can be used as the input operation device 110. In addition, a storage device 130 such as a hard disk drive, an optical disk drive, or a flash memory and another computer system 140 that is accessible via a network I/F are connected to the image generating apparatus 100. Moreover, while FIG. 1 shows the storage device 130 existing outside of the image generating apparatus 100, the storage device 130 may alternatively be built into the image generating apparatus 100.

In accordance with a user's control signals inputted from the input operation device 110, the image generating apparatus 100 acquires image data from the storage device 130 and applies image processing to the image data to generate an observation image suitable for observation or to extract information necessary for diagnosis.

An image display application and an image generation program (both not shown) are computer programs that are executed by the image generating apparatus 100. These programs are stored in an internal storage device (not shown) inside the image generating apparatus 100 or in the storage device 130. Functions related to image generation (to be described later) are provided by the image generation program. The respective functions of the image generation program can be invoked (used) via the image display application. Processing results (for example, a generated observation image) of the image generation program are presented to the user via the image display application.

(Display Screen)

FIG. 2 shows an example of displaying image data of a specimen imaged in advance on the display 120 via the image display application.

FIG. 2 presents a basic configuration of a screen layout of the image display application. Arranged within a full window 201 of a display screen are an information area 202 that shows a display status and an operation status as well as information on various images, a thumbnail image 203 of a specimen that is an observation object, a display region 205 for detailed observation of specimen image data, and a display magnification 206 of the display region 205. A frame line 204 rendered on the thumbnail image 203 indicates a position and a size of a region displayed in enlargement in the display region 205 for detailed observation. Based on the thumbnail image 203 and the frame line 204, the user can readily comprehend which portion is being observed among the entire specimen image data.

An image displayed in the display region 205 for detailed observation can be set and updated by a movement operation or an enlargement/reduction operation performed using the input operation device 110. For example, movement can be realized by a drag operation of the mouse on the screen and enlargement/reduction can be realized by a rotation of a mouse wheel (for example, a forward rotation of the wheel may be assigned to enlargement and a backward rotation of the wheel may be assigned to reduction). In addition, switching to an image with a different focusing position can be realized by pressing a prescribed key (for example, the Ctrl key) and rotating the mouse wheel or the like at the same time (for example, a forward rotation of the wheel may be assigned to a transition to a deeper image and a backward rotation of the wheel may be assigned to a transition to a shallower image). The display region 205, the display magnification 206, and the frame line 204 inside the thumbnail image 203 are updated in accordance with a modification operation on the displayed image which is performed by the user as described above. In this manner, the user can observe an image with a desired intra-plane position, depth position, and magnification.
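
As a purely illustrative sketch (the class, field, and function names below are hypothetical and are not part of the image display application described above), the mapping between these input operations and the updates of the displayed image can be expressed as follows in Python.

    from dataclasses import dataclass

    @dataclass
    class ViewState:
        x: float = 0.0            # intra-plane position of the displayed region
        y: float = 0.0
        z_index: int = 0          # index of the layer image (focusing position)
        magnification: float = 20.0

    def on_mouse_drag(state: ViewState, dx_px: float, dy_px: float) -> None:
        # A drag operation moves the displayed region; screen pixels are converted
        # to specimen coordinates by dividing by the current magnification.
        state.x -= dx_px / state.magnification
        state.y -= dy_px / state.magnification

    def on_mouse_wheel(state: ViewState, steps: int, ctrl_pressed: bool) -> None:
        if ctrl_pressed:
            # A prescribed key plus the wheel switches to an image with a
            # different focusing position (a deeper or shallower layer image).
            state.z_index += steps
        else:
            # The wheel alone enlarges (forward rotation) or reduces (backward
            # rotation) the display.
            state.magnification *= 1.25 ** steps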

(Image Generating Apparatus)

FIG. 3 is a diagram showing an internal configuration of the image generating apparatus 100.

A CPU 301 controls the entire image generating apparatus using programs and data stored in a main memory 302. In addition, the CPU 301 performs various arithmetic processing and data processing such as viewpoint scattering image extraction processing and viewpoint scattering image synthesis processing which will be described in the examples below.

The main memory 302 includes an area for temporarily storing programs and data loaded from the storage device 130 and programs and data downloaded from the other computer system 140 via a network I/F (interface) 304. In addition, the main memory 302 includes a work area necessary for the CPU 301 to carry out various processing.

The input operation device 110 is constituted by a device capable of inputting various instructions to the CPU 301 such as the keyboard 111, the mouse 112, or the dedicated controller 113. The user uses the input operation device 110 to input information for controlling operations of the image generating apparatus 100. Reference numeral 305 denotes an I/O for notifying various instructions and the like inputted via the input operation device 110 to the CPU 301.

The storage device 130 is a large-capacity storage device such as a hard disk and stores an OS (operating system), programs and image data which enable the CPU 301 to execute processing described in the following examples, and the like. Writing of information into the storage device 130 and reading of information from the storage device 130 are performed via an I/O 306.

A display control apparatus 307 performs control processing to cause images, characters, and the like to be displayed on the display 120. The display 120 performs image display for prompting the user's input and displays images based on image data acquired from the storage device 130 or the other computer system 140 and processed by the CPU 301.

An arithmetic processing board 303 includes a processor in which specific arithmetic functions such as image processing have been enhanced and a buffer memory (not shown). While the following description assumes that the CPU 301 is used for various arithmetic processing and data processing and the main memory 302 is used as a memory region, a configuration using the processor and the buffer memory in the arithmetic processing board can also be adopted. Such a configuration also falls within the scope of the present invention.

(Subject)

FIG. 4 represents a prepared slide (also referred to as a slide) of a pathological sample that is an example of a subject. With the prepared slide of the pathological sample, a specimen 400 placed on a slide glass 410 is sealed by an encapsulating agent (not shown) and a cover glass 411 to be placed on top of the encapsulating agent. A size and thickness of the specimen 400 differ for each specimen. Furthermore, a label area 412 that records information regarding the specimen is also provided on the slide glass 410. Information may be recorded in the label area 412 manually using a pen or by printing a barcode or a two-dimensional code. In addition, a storage medium capable of storing information by an electric method, a magnetic method, or an optical method may be provided in the label area 412. The following embodiment will be described using an example in which the prepared slide of the pathological sample shown in FIG. 4 is used as a subject.

(Image Pickup Apparatus)

FIG. 5 schematically represents a part of a configuration of an image pickup apparatus which images the subject and acquires a digital image. As shown in FIG. 5, in the present embodiment, an x axis and a y axis are oriented parallel to a surface of the specimen 400 and a z axis is oriented in a depth direction of the specimen 400 (in an optical axis direction of an optical system).

A prepared slide (the specimen 400) is placed on a stage 502 and light is irradiated from an illuminating unit 501. Light transmitted through the specimen 400 is enlarged by an imaging optical system 503 and forms an image on a light-receiving surface of an image pickup sensor 504. The image pickup sensor 504 is a one-dimensional line sensor or a two-dimensional area sensor having a plurality of photoelectric conversion elements. An optical image of the specimen 400 is converted into an electric signal by the image pickup sensor 504 and outputted as digital data.

When an image of the entire specimen cannot be acquired in one shot, segmental image pickup is performed a plurality of times while moving the stage 502 in the x direction and/or the y direction and the plurality of obtained segmented images are composited (spliced) to generate an image of the entire specimen. In addition, by taking a plurality of shots while moving the stage 502 in the z direction, a plurality of images (referred to as layer images) with different focusing positions in the optical axis direction (a depth direction) are acquired. In the present description, a group of images made up of a plurality of layer images with different focusing positions in the optical axis direction (the depth direction) is referred to as a “Z stack image” or “Z stack image data”. In addition, a layer image or a Z stack image acquired by imaging the subject will be referred to as an “original image”.
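
The following is a minimal sketch of how such data might be organized, assuming that the segmented images have already been aligned and are all of the same size (overlap handling is omitted); the function and variable names are arbitrary and are not taken from the embodiment.

    import numpy as np

    def splice_tiles(tiles, n_rows, n_cols):
        # Composite segmented images, listed row by row, into one layer image.
        rows = [np.hstack(tiles[r * n_cols:(r + 1) * n_cols]) for r in range(n_rows)]
        return np.vstack(rows)

    def build_z_stack(layer_images):
        # A Z stack is simply the set of layer images indexed by focusing position;
        # z_stack[k] corresponds to the focusing position z0 + k * dz.
        return np.stack(layer_images, axis=0)   # shape: (layers, height, width)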

A value of magnification that is displayed as the display magnification 206 shown in FIG. 2 is a product of a magnification of the imaging optical system 503 multiplied by an enlargement/reduction ratio on the image display application. Moreover, the magnification of the imaging optical system 503 may be fixed or varied by replacing objective lenses.

(Description of Techniques for Generating Viewpoint Image)

Instead of requiring an observation/image capturing method that modifies an optical system such as dark field observation and phase difference observation, the image generating apparatus 100 generates an intermediate image (a viewpoint image) from a Z stack image by image processing and generates an observation image suitable for observation and diagnosis using the intermediate image. First, techniques that can be used in processing for generating a viewpoint image as an intermediate image from a Z stack image will be described.

It is known that a viewpoint image observed from an arbitrary direction (an arbitrary viewpoint image) can be generated from a plurality of images (a Z stack image) imaged while varying focusing positions in the optical axis direction. In this case, a viewpoint image refers to an image that observes the subject from a prescribed observation direction (in other words, a viewpoint).

For example, Patent Literature 1 discloses a method of generating an image with an arbitrary viewpoint or an arbitrary blur from a group of out-of-focus blur images imaged while varying focusing positions. This method involves performing coordinate transform processing on a group of out-of-focus blur images so that a three-dimensional out-of-focus blur remains unchanged at an XYZ position and applying three-dimensional filter processing in an obtained orthogonal coordinate system (XYZ) in order to obtain an image with a modified viewpoint or a modified blur.

In addition, Non-Patent Literature 2 discloses an improvement of the method disclosed in Patent Literature 1. According to Non-Patent Literature 2, an integrated image is generated by obtaining a line-of-sight direction from a viewpoint and integrating a Z stack image in the line-of-sight direction, and an integrated image of a three-dimensional blur in the line-of-sight direction is generated in a similar manner. Subsequently, by subjecting the three-dimensional blur integrated image to inverse filter processing with respect to the integrated image of the Z stack image, an effect of a Z direction constraint (the number of layer images) is suppressed and a high-quality viewpoint image can be generated.

Furthermore, Non-Patent Literature 3 discloses a method of speeding up the calculation performed in Non-Patent Literature 2. With the method according to Non-Patent Literature 3, an arbitrary viewpoint image or an arbitrary blur image on a frequency region can be efficiently calculated by a linear coupling of a filter determined in advance independent of a subject (scene) and a Fourier transform image of a group of out-of-focus blur images at each Z position.
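The fragment below is only a rough sketch of the idea common to Non-Patent Literatures 2 and 3 (integrate the Z stack and the three-dimensional blur along the line-of-sight direction, then apply an inverse filter in the frequency domain); it is not the actual algorithm of either reference. It assumes a bilaterally telecentric optical system (so no coordinate transform is needed), integer pixel shifts, a supplied stack of blur kernels (psf_stack), a layer spacing expressed in pixels (dz_px), and a simple regularized inverse filter.

    import numpy as np

    def integrate_along_line_of_sight(stack, s, t, dz_px, k_focus=0):
        # Shift each layer along the line-of-sight direction (-s, -t, 1) and sum.
        acc = np.zeros_like(stack[0], dtype=np.float64)
        for k, layer in enumerate(stack):
            dzk = (k - k_focus) * dz_px
            shift = (int(round(-t * dzk)), int(round(-s * dzk)))
            acc += np.roll(layer, shift, axis=(0, 1))
        return acc / len(stack)

    def viewpoint_image(z_stack, psf_stack, s, t, dz_px, eps=1e-3):
        # Integrated Z stack divided, in the frequency domain and with
        # regularization, by the similarly integrated three-dimensional blur.
        g = integrate_along_line_of_sight(z_stack, s, t, dz_px)
        h = integrate_along_line_of_sight(psf_stack, s, t, dz_px)
        G = np.fft.fft2(g)
        H = np.fft.fft2(np.fft.ifftshift(h / h.sum()))
        return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))
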

In the following description, methods of generating a viewpoint image observed from an arbitrary direction (an arbitrary viewpoint image) or generating an image having an arbitrary out-of-focus blur from a plurality of images (a Z stack image) taken while varying focusing positions in the optical axis direction will be collectively referred to as an MFI (multi-focus imaging) arbitrary viewpoint/out-of-focus blur image generating method.

Moreover, with a Z stack image taken while varying focusing positions using a microscope with a bilaterally telecentric optical system, a three-dimensional out-of-focus blur remains unchanged at an XYZ position. Therefore, when applying the MFI arbitrary viewpoint/out-of-focus blur image generating method to a Z stack image taken by a bilaterally telecentric optical system, coordinate transform processing and enlargement/reduction processing of an image that accompanies the coordinate transform processing need not be performed.

An image pickup apparatus is known which is capable of acquiring, by one imaging operation, an image on which is recorded four-dimensional information (information in which a degree of freedom of a viewpoint position is added to an XY two-dimensional image) that is referred to as a light field. Such an image pickup apparatus is referred to as a light field camera or a light field microscope. In such apparatuses, a lens array is disposed at an original position of an imaging plane and a light field is taken by an image sensor to the rear of the lens array. An image with an arbitrary focusing position or a viewpoint image observed from an arbitrary direction (an arbitrary viewpoint image) can also be generated using known techniques from an original image on which a light field is recorded.

In the present example, an image with an arbitrary observation direction that is generated by digital image processing from a captured image such as a Z stack image or a light field without physically changing a direction of the image pickup apparatus with respect to the subject will be referred to as a “viewpoint image”. The viewpoint image is an image which simulates an image formed on an imaging plane by a luminous flux centered on a main light beam that is an arbitrary light beam passing through an imaging optical system used to image the subject. A direction of the main light beam corresponds to the observation direction. The direction of the main light beam can be arbitrarily set. A magnitude (NA) of the luminous flux can also be arbitrarily set. When the objective is to perform image diagnosis or the like, a depth of field of the viewpoint image is desirably deep. Therefore, NA of the luminous flux with respect to the viewpoint image is desirably equal to or less than 0.1.

A viewpoint image generated (calculated) by digital image processing is not necessarily consistent with an image photographed by physically changing exposure conditions (an aperture position and/or an aperture size), the optical axis direction, lenses, or the like of the imaging optical system. However, even if the viewpoint image is not consistent with an actually photographed image, as long as the viewpoint image has features similar to those produced when observing the subject while varying viewpoints (in other words, as long as effects similar to varying observation directions can be imparted by digital image processing), the viewpoint image is useful for image observation, image diagnosis, and the like. Therefore, an image which is not exactly consistent with an image actually photographed by changing observation directions and the like but which is subjected to digital image processing so that features similar to an actually photographed image appear is also included in viewpoint images according to the present example.

According to Patent Literature 1, a viewpoint image observed through a pinhole at a position shifted by a viewpoint (x, y, z)=(s, t, 0) from an origin O (x, y, z)=(0, 0, 0) on a lens plane in a real space (corresponding to a pupil plane) can be generated from a group of out-of-focus blur images subjected to coordinate transform. With the MFI arbitrary viewpoint/out-of-focus blur image generating method, an observation direction from which the subject is observed or, in other words, a line-of-sight direction can be varied by changing a position of a viewpoint on the lens plane.

A line-of-sight direction can be defined as a gradient of a straight line that passes through a viewpoint position (x, y, z)=(s, t, 0) on the lens plane among a luminous flux emitted from a prescribed position of the subject corresponding to a formed image. A line-of-sight direction can be expressed in various ways. For example, an expression by a three-dimensional vector representing a traveling direction of the straight line may be adopted. Alternatively, an expression by an angle (observation angle) formed between the three-dimensional vector and an optical axis and an angle (polar angle) formed between the vector when projected on a plane perpendicular to the optical axis and the x axis may be adopted.

When the imaging optical system is not bilaterally telecentric, a three-dimensional out-of-focus blur on the imaging plane varies depending on a spatial position (a position in the xyz coordinate) of the subject in focus and a gradient of the straight line that passes through the viewpoint position (x, y, z)=(s, t, 0) on the lens plane is not constant. In this case, a line-of-sight direction is favorably defined on an orthogonal coordinate system (XYZ) after the coordinate transform described in Patent Literature 1, whereby a line-of-sight direction can be expressed by a vector (X, Y, Z)=(−s, −t, 1). Hereinafter, a method of obtaining a line-of-sight direction after coordinate transform will be described.

Patent Literature 1 describes that all light beams connecting arbitrary positions where the imaging optical system is in focus and a position (x, y, z)=(s, t, 0) of a same viewpoint on the lens plane of the image pickup apparatus (corresponding to the pupil plane) become light beams that are parallel to each other in the orthogonal coordinate system (XYZ) after coordinate transform. (refer to FIGS. 1 to 3 and descriptions thereof in Patent Literature 1)

Light exiting a point where the subject exists in a perspective coordinate system (a real space prior to coordinate transform) passes through (p+s, q+t, f) (where f denotes a focal distance) and is refracted at the viewpoint position (x, y, z)=(s, t, 0). This straight line may be represented by the following expression.

(x, y) = (z/f) × (p, q) + (s, t)  (where z > 0)  [Expression 1]

The straight line represented by Expression 1 may be represented by the following expression in the orthogonal coordinate system (XYZ) after coordinate transform.


(X,Y)=(p,q)+(1−Z)×(s,t) (where Z≧f)  [Expression 2]

Since substituting Z=0 (z=f) and Z=1 (z=∞) into Expression 2 respectively results in (X, Y, Z)=(p+s, q+t, 0) and (X, Y, Z)=(p, q, 1), the gradient of the straight line in the orthogonal coordinate system (X, Y, Z) may be represented by (−s, −t, 1).

Therefore, a vector representing a line-of-sight direction in the orthogonal coordinate system after coordinate transform is (X, Y, Z)=(−s, −t, 1).

Moreover, when the imaging optical system is bilaterally telecentric, a three-dimensional out-of-focus blur in a plurality of images (a Z stack image) taken while varying focusing points in a depth direction is unchanged regardless of the Z position.

Therefore, a coordinate transform for making a three-dimensional out-of-focus blur unchanged regardless of spatial positions is not required. A gradient (−s, −t, za) of a straight line connecting a prescribed position (x, y, z)=(0, 0, za) of the subject in focus in real space and a viewpoint position (x, y, z)=(s, t, 0) on the lens plane may be regarded, without modification, as a line-of-sight direction.

(Correspondence Between a Viewpoint and a Polar Angle θ and an Observation Angle φ when Actually Observing a Sample)

FIG. 13A is a schematic view representing a viewpoint position (x, y, z)=(s, t, 0) in real space, and FIG. 13B is a schematic view representing a light beam that passes through the viewpoint position (x, y, z)=(s, t, 0) in an orthogonal coordinate system (XYZ).

A dotted-line circle shown in FIG. 13A represents a range in which light beams can pass through on the lens plane (z=0). If the polar angle θ is defined as an angle formed between the viewpoint position (x, y, z)=(s, t, 0) on the lens plane and an x axis on the lens plane (z=0) or an angle formed between a straight line when a viewpoint (−s, −t, 1) is projected on an xy plane and the x axis, then the polar angle θ may be obtained by the following expression.

θ = tan⁻¹(t/s)  [Expression 3]

However, θ is adjusted to stay within a range of −180 to +180 degrees in accordance with signs of t and s.

Next, a relationship between a viewpoint and an observation angle φT on a transformed coordinate will be described with reference to FIG. 13B.

In FIG. 13B, a straight line represented by Expression 2 and a straight line obtained by substituting a point p=0, q=0 on the optical axis into Expression 2 are depicted by solid line arrows.

According to Patent Literature 1, Z=0 in the orthogonal coordinate system (XYZ) corresponds to z=f (or z=−∞) in the perspective coordinate system (xyz), and Z=1 corresponds to z=∞ (or z=−f). Therefore, FIG. 13B shows that a luminous flux from infinity (Z=1) in the orthogonal coordinate system (XYZ) has a spread on a focal plane (Z=0) in front of the lens plane. (Refer to FIG. 3 and a description thereof in Patent Literature 1)

At this point, if the observation angle φT on the transformed coordinate is defined as an angle formed between the viewpoint (−s, −t, 1) and the optical axis (Z axis), since the viewpoint is not dependent on a position of the subject as is apparent from FIG. 13B, the observation angle φT may be obtained by the following expression.


φT = tan⁻¹(√(s² + t²))  [Expression 4]

Moreover, the two dotted lines in FIG. 13B represent light beams that pass through outermost edges on the lens plane. If an aperture radius of the lens in the perspective coordinate system (xyz) prior to coordinate transform is denoted by ra, then a viewpoint image can only be calculated when the viewpoint position (x, y, z)=(s, t, 0) is within the radius ra.

Next, a polar angle θ and an observation angle φ corresponding to a viewpoint when actually observing a sample will be described.

According to Snell's law, when a light beam is incident to a boundary between different refractive indexes, a product of an incidence angle of the light beam and a refractive index of an incident-side medium is equal to a product of a refraction angle of the light beam and a refractive index of a refraction side medium. Since a refractive index of the sample is greater than a refractive index of air, an observation angle in the sample is smaller than an observation angle in air. Therefore, a three-dimensional out-of-focus blur in the sample which is constituted by refracted light beams is smaller than the three-dimensional out-of-focus blur in air. However, since a viewpoint position is calculated in the present example based on a three-dimensional imaging relationship between a sample and a three-dimensional out-of-focus blur in the sample, an effect of the refractive index of the sample need not be considered and the polar angle θ and the observation angle φ represent an observation direction in the sample without modification.

When the imaging optical system is bilaterally telecentric, since coordinate transform is not required, assuming that a sensor pixel pitch in the x direction and the y direction is the same, the observation angle φ may be represented by the following expression using the sensor pixel pitch Δx in the x direction and a movement interval Δz (μm) in the z direction.

φ = tan⁻¹(Δx × √(s² + t²) / Δz)  [Expression 5]

Moreover, when the imaging optical system is not bilaterally telecentric, the observation angle φ may be obtained using a sensor pixel pitch ΔX in the X direction and a movement interval ΔZ in the Z direction in the orthogonal coordinate system (XYZ) in place of Δx and Δz in Expression 5.
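
A short numerical sketch of Expressions 3 to 5 is given below; the parameter names are arbitrary, and angles are returned in degrees.

    import math

    def polar_angle_deg(s, t):
        # Expression 3; atan2 handles the signs of t and s so that the result
        # stays within -180 to +180 degrees.
        return math.degrees(math.atan2(t, s))

    def observation_angle_transformed_deg(s, t):
        # Expression 4: angle between the viewpoint (-s, -t, 1) and the Z axis
        # in the orthogonal coordinate system after coordinate transform.
        return math.degrees(math.atan(math.hypot(s, t)))

    def observation_angle_telecentric_deg(s, t, dx_um, dz_um):
        # Expression 5: observation angle for a bilaterally telecentric system,
        # using the sensor pixel pitch dx and the movement interval dz.
        return math.degrees(math.atan(dx_um * math.hypot(s, t) / dz_um))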

This concludes the description of a polar angle θ and an observation angle φ corresponding to a viewpoint when actually observing a sample.

In the following description, a viewpoint position (x, y, z)=(s, t, 0) on the lens plane will be abbreviated as a viewpoint (s, t). In addition, since the following description will be given on the premise of image processing on the orthogonal coordinate system (XYZ), unless otherwise noted, only the viewpoint position (s, t) is to represent a position on a perspective coordinate system (a real space prior to coordinate transform) and other positions are to represent positions in the orthogonal coordinate system (XYZ).

By applying the method according to Patent Literature 1 on a Z stack image acquired by the image pickup apparatus shown in FIG. 5, a viewpoint image with a varied viewpoint position or, in other words, a varied observation direction can be generated.

A viewpoint image calculated by the method according to Patent Literature 1 has two main features. One feature is that the viewpoint image has a significantly deep (infinite) depth of field and boundaries between substances in the sample with different transmittance can be clearly seen. The other feature is that the viewpoint image resembles an observed image under oblique lighting which is obtained by illuminating the sample from a partial region of a lighting fixture, an unevenness which varies along the line-of-sight direction on the XY plane is enhanced, and a sample appears three-dimensional. With a viewpoint image, in a similar manner to an image created by oblique lighting, the greater the incline of a line-of-sight direction with respect to the optical axis or, in other words, the greater the observation angle φ of a line of sight, the higher the contrast of the unevenness on a sample surface and the more three-dimensional the appearance of the sample surface.

(However, there is a physical difference between an image created by oblique lighting and a viewpoint image. While an optical blur is created as a focus position is modified in an image created by oblique lighting, a viewpoint image differs in that a depth of field remains extremely deep regardless of a modification in a focus position. Moreover, while a viewpoint image varies in accordance with a Z position Zf of a Z stack image that is brought into focus, the variation is expressed by a translation in the XY direction).

Next, the reasons for the clear appearance of boundaries between substances with different transmittance which is the first feature will be described.

FIG. 6A is a diagram showing a three-dimensional out-of-focus blur of an optical system in the orthogonal coordinate system (XYZ). Reference numeral 600 represents a shape of the three-dimensional out-of-focus blur and shows how there is only a slight out-of-focus blur at a focusing position (apexes of the two cones) but the out-of-focus blur spreads as the Z position separates from the focusing position. Using the method according to Patent Literature 1, a viewpoint image constituted by light beams in an arbitrary line-of-sight direction (for example, a straight line 610) that passes inside the cone 600 from the Z stack image can be generated.

FIG. 6B shows a situation where a pathological sample (specimen) in the orthogonal coordinate system (XYZ) is seen from a different direction. A diagonal cavity 630 exists inside a sample 620 in FIG. 6B.

Since segments other than the cavity 630 are seen through when observed from a direction 631, a contrast of a wall surface of the cavity 630 is unclear. The same applies when observed from a direction 632 and the contrast of the cavity 630 remains unclear. However, when observed from a direction 633 along the wall surface of the cavity 630, since there are no effects from other segments, the contrast of the wall surface of the cavity 630 becomes clear. Moreover, a state of a relatively high contrast can be maintained even if the line-of-sight direction somewhat differs from the direction of the wall surface of the cavity.

On the other hand, in a Z stack image of the sample 620, since layer images at any Z position (focusing position) are affected by a multi-directional luminous flux including light beams in the directions 631 to 633, the contrast of the wall surface of the cavity does not become clearer than in the observation image from the direction 633. In addition to a cavity, this phenomenon also applies to a nucleus, a cell membrane, a fiber, and the like.

This concludes the description of a phenomenon in which boundaries between substances with different transmittance in the sample are clearly seen in a viewpoint image.

Since a pathological sample is a translucent object, there is scattered light in addition to transmitted light. The presence of the scattered light provides the second feature of a viewpoint image.

Next, a description will be given on the reasons for scattered light in a sample resulting in a higher contrast of the unevenness on the sample surface when an observation angle φ of a line of sight is greater.

Reference numeral 1400 in FIG. 14 represents a schematic view showing an unevenness existing on a surface of a pathological sample on a prepared slide. It is assumed that the unevenness on an xz plane shown in FIG. 14 also continues in a y direction that is a depth direction.

A pathological sample for tissue diagnosis is first fixed by paraffin, then sliced in a uniform thickness by a microtome, and finally stained. However, a pathological sample is not completely uniform. Unevenness attributable to tissue structure or components of substances exists at a boundary between a cell and a tube or a cavity, a boundary between a nucleus and cytoplasm, and the like, and an uneven structure such as that shown in FIG. 14 exists on a surface of the pathological sample.

(It should be noted that FIG. 14 presents a simplified model and an unevenness of an actual sample seldom includes a cusp such as that shown in FIG. 14. In addition, besides convex structures such as that shown in FIG. 14, there are also structures that are recessed toward the inside of a sample. Furthermore, since an optical distance varies when a substance with a different refractive index exists inside a sample even when the surface is smooth, a discontinuity in the refractive index inside a sample can be considered as a surface unevenness).

Moreover, with a real prepared slide, a transparent encapsulating agent is present between a cover glass and a sample. However, since a difference between a refractive index of an encapsulating agent and a refractive index of a sample is very small and does not have a significant impact, both refractive indexes will be assumed to be the same in the following description.

In FIG. 14, reference numeral 1411 denotes a plane with no unevenness, reference numeral 1412 denotes an inclined plane rising to the right, and reference numeral 1413 denotes an inclined plane dropping to the right. Inclination angles formed between the inclined planes 1412 and 1413 and the x axis are respectively α (α>0).

FIG. 15 is a schematic view showing intensity of scattered light at an observation angle φ on the planes 1411 to 1413 in FIG. 14. FIGS. 15A, 15B, and 15C respectively represent scattering of light by the plane 1411 and the inclined planes 1412 and 1413. A circle circumscribing each plane represents intensity of scattered light in a scattering direction when the sample surface is assumed to be a perfect diffusion/transmission plane in terms of light-diffusing characteristics. A solid arrow line in the circle represents intensity of scattered light when observed from an angle that is inclined by φ from the optical axis (Z axis). (Although an actual sample surface is not a perfect diffusion/transmission plane and has an intensity dependency in accordance with an incidence direction and/or an observation direction of light, the sample surface will be assumed to be a perfect diffusion/transmission plane for the sake of simplicity).

With a perfect diffusion/transmission plane, if intensity of light in a normal direction that is perpendicular to the plane is denoted by I0 and an angle formed between an observation direction and a normal of the plane is denoted by δ, then intensity I(δ) of scattered light in a δ direction is expressed as I(δ)=I0 cos δ.

In FIGS. 15A, 15B, and 15C, since angles δ formed between an observation direction and a normal of the planes may be respectively expressed as φ, φ+α, and φ−α, intensities of the respective scattered light may be expressed as


I0 cos φ, I0 cos(φ+α), and I0 cos(φ−α).

Moreover, if the inclination angle α is assumed to be positive at an inclined plane whose value of Z increases when seen from the observation direction (a rising inclined plane) and the inclination angle α is assumed to be negative at an inclined plane whose value of Z decreases (a dropping inclined plane), the intensity of scattered light can be expressed as I0 cos(φ−α) for both planes.

If a contrast C(φ, α) is defined as a value obtained by dividing the difference between the intensities of scattered light in the direction of the observation angle φ on the inclined planes 1412 and 1413 by the intensity of scattered light in the direction of the observation angle φ on the plane 1411, then the contrast may be represented by the following expression.

C(φ, α) = {I0 cos(φ − α) − I0 cos(φ + α)} / (I0 cos φ) = 2 tan φ sin α  [Expression 6]

Values of the contrast C(φ, α) when φ and α are varied are shown in Table 1.

TABLE 1

  Observation      Inclined plane          Contrast
  angle φ [deg]    inclination angle α     C(φ, α)
        0                  1                0.0000
        0                  5                0.0000
        0                 10                0.0000
        0                 20                0.0000
       10                  1                0.0062
       10                  5                0.0307
       10                 10                0.0612
       10                 20                0.1206
       20                  1                0.0127
       20                  5                0.0634
       20                 10                0.1264
       20                 20                0.2490
       30                  1                0.0202
       30                  5                0.1006
       30                 10                0.2005
       30                 20                0.3949

Table 1 shows that when the observation angle φ is small, a contrast between the inclined planes 1412 and 1413 is low and is difficult to observe even when the inclination angle α is large, and as the observation angle φ increases, the contrast increases and is more easily observed even when the inclination angle α is small.
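
For reference, the values in Table 1 follow directly from Expression 6 and can be recomputed with a few lines of Python.

    import math

    def contrast(phi_deg, alpha_deg):
        # Expression 6: C(phi, alpha) = 2 * tan(phi) * sin(alpha)
        return 2.0 * math.tan(math.radians(phi_deg)) * math.sin(math.radians(alpha_deg))

    for phi in (0, 10, 20, 30):
        for alpha in (1, 5, 10, 20):
            print(f"phi={phi:2d} deg, alpha={alpha:2d} deg, C={contrast(phi, alpha):.4f}")
    # For example, phi = 20 deg and alpha = 10 deg give C = 0.1264, matching Table 1.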

(Variation in Scattered Light Intensity when Viewpoint is Varied)

Next, a variation in scattered light intensity on a sample surface when a viewpoint is varied will be described.

FIG. 14 is a diagram showing a case where a direction perpendicular to an edge of a surface unevenness (a direction of brightness variation) and a polar angle θ (θ=0) of a viewpoint are consistent with each other. If an angle formed between a direction perpendicular to the edge of the surface unevenness and the x axis is assumed to be an unevenness direction angle β, when the unevenness direction angle β and the polar angle θ are not consistent with each other, the surface unevenness 1400 is observed from an oblique direction. In this case, an apparent inclination angle α′ of the inclined plane 1413 (and of the inclined plane 1412, which is inclined in the direction opposite to the inclined plane 1413), as viewed from an observation direction whose polar angle is θ−β, can be calculated by

tan α′ = tan α / √(1 + tan²(θ − β))  [Expression 7]

Expression 7 shows that the apparent inclination angle α′ is smaller than α and that the contrast C declines in accordance with a difference |θ−β| between the unevenness direction angle β and the polar angle θ. Note that the signs of the inclination angles α and α′ are switched from positive to negative or from negative to positive when |θ−β| crosses 90 degrees. This corresponds to an interchange between an upward inclined plane and a downward inclined plane depending on observation directions when a polar angle of a viewpoint is varied. The apparent inclination angle α′ has a range of −α ≦ α′ ≦ +α. For example, on the inclined plane 1413, when the viewpoint polar angle satisfies |θ−β|=0, α′=α (an upward inclined plane as viewed in the observation direction) holds true, and when |θ−β|=180 degrees, α′=−α (a downward inclined plane as viewed in the observation direction) holds true.

Next, scattered light normalized intensity V (φ, α) calculated by normalizing intensity of scattered light on the inclined plane 1413 as viewed in a direction of a viewpoint polar angle of θ−β by intensity of light observed on the plane 1411 will be considered. Let the intensity of light observed on the plane 1411 be denoted by Its(φ) that is a function of the observation angle φ constituted by a sum of intensity of transmitted light and intensity of scattered light. From the above, the scattered light normalized intensity V (φ, α) may be represented by the expression below.

V(φ, α) = I0 cos(φ − α) / Its(φ) = A(φ) × cos α + B(φ) × sin α  [Expression 8]

where A(φ) = I0 cos φ / Its(φ), B(φ) = I0 sin φ / Its(φ)

Since an inclination angle α of surface unevenness can be estimated to be sufficiently small in the case of a pathological sample, Expression 8 can be approximated as follows using the approximations cos α ≅ 1 and sin α ≅ α.


V(φ,α)≅A(φ)+α×B(φ)  [Expression 9]

Subsequently, values taken by A(φ) and B(φ) will be described.

Generally, since the intensity Its(φ) of light tends to decrease in accordance with the observation angle φ due to properties of an illuminating optical system, if an approximation of Its(φ)=I1 cos φ is to be adopted, then A(φ)=I0/I1 holds true. When the scattered light is smaller than the transmitted light, A(φ) can be regarded as a relatively small constant. On the other hand, with respect to B(φ), when φ=0, B(φ)=0 holds true since sin φ=0. Since it can be regarded that, as the observation angle φ increases, sin φ increases while the intensity Its (φ) of light decreases, B(φ) is an increasing function. Assuming that Its (φ)=I1 cos φ, then B(φ)=I0/I1×tan φ holds true.
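
Under the simplifying assumption Its(φ) = I1 cos φ adopted above, A(φ) and B(φ) reduce to I0/I1 and (I0/I1) tan φ; the fragment below merely evaluates these quantities and the approximation of Expression 9 (the values of I0 and I1 are arbitrary placeholders).

    import math

    def scattered_light_terms(phi_deg, i0=1.0, i1=10.0):
        # A(phi) and B(phi) from Expression 8 with Its(phi) approximated by I1*cos(phi).
        a = i0 / i1                                        # roughly constant
        b = (i0 / i1) * math.tan(math.radians(phi_deg))    # increases with phi
        return a, b

    def normalized_intensity(phi_deg, alpha_deg, i0=1.0, i1=10.0):
        # Expression 9: V(phi, alpha) ~ A(phi) + alpha * B(phi) (alpha in radians).
        a, b = scattered_light_terms(phi_deg, i0, i1)
        return a + math.radians(alpha_deg) * b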

With a viewpoint image calculated by the method according to Patent Literature 1, due to frequency filter processing intended to maintain an average, an average brightness of an image does not vary regardless of the viewpoint. Therefore, a value obtained by subtracting transmitted light intensity affected by transmittance of a stained segment from brightness of the inclined plane 1413 in a viewpoint image can be considered approximately equal to the scattered light normalized intensity V(φ, α).

(Variation in Transmitted Light Intensity when Viewpoint is Varied)

Next, a variation in transmitted light intensity in viewpoint images with different viewpoints will be considered.

In an observed sample, in a region where a brightness difference from an adjacent substance is small (cytoplasm or a cell boundary), the brightness difference remains small even when line-of-sight directions differ and intensity of transmitted light is hardly affected. In addition, with a relatively thin sample having a thickness of around 4 μm such as a tissue diagnostic pathological sample, positions of objects inside the sample which are observed in a viewpoint image calculated by bringing the sample surface into focus do not vary significantly. (With a viewpoint image calculated by the method according to Patent Literature 1, objects present in a vicinity of a Z position Zf in a Z stack image brought into focus appear at approximately a same XY position on a viewpoint image regardless of the viewpoint. This can also be appreciated from the fact that the absence of a difference in XY positions in an image synthesizing viewpoint images with a plurality of viewpoints results in less blur.)

Therefore, a difference in transmitted light intensity between viewpoint images obtained by varying viewpoint positions can be regarded very small, and a difference between scattered light normalized intensities and a difference between viewpoint images can be considered approximately equal to one another.

(Extraction of Information on Scattered Light on a Sample Surface from Viewpoint Images with Varied Viewpoint Polar Angles θ)

Next, extraction of information on scattered light on a sample surface by computing viewpoint images with varied polar angles θ will be considered.

With a prescribed surface unevenness having an inclination angle α denoted by reference numeral 1413 in FIG. 14, from Expression 7, if a difference |θ−β| between an unevenness direction angle β and a polar angle θ is 0 degrees, then the inclination angle is α and the scattered light normalized intensity V(φ,α) is maximized. On the other hand, when |θ−β| is 180 degrees, then an apparent inclination angle α′ equals −α and the scattered light normalized intensity V(φ,α′) is minimized.

Therefore, it is shown that, by performing subtraction between a viewpoint image for which |θ−β| is 0 and a viewpoint image for which |θ−β| is 180 degrees, (part of) information on the scattered light on a sample surface can be efficiently extracted. An expression thereof is given below.


V(φ,α)−V(φ,α′)≅(α−α′)×B(φ)=2α×B(φ)  [Expression 10]

(where α′=−α when |θ−β|=π)

Since the inclination angle α and the unevenness direction angle β of a surface unevenness of a sample may take various values, it is shown that, by obtaining differences between viewpoint images with viewpoints that differ from one another by a polar angle θ of 180 degrees for various viewpoints of varying polar angles θ and collecting results thereof, information on scattered light on a sample surface can be extracted.
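
A minimal sketch of this collection step is shown below. It assumes a hypothetical function render_viewpoint(theta_deg, phi_deg) that returns, as a floating-point array, a viewpoint image for the given polar angle and observation angle (generated, for example, by the method described earlier); differences between viewpoint pairs whose polar angles differ by 180 degrees are accumulated over several polar angles.

    import numpy as np

    def scattering_image_from_polar_pairs(render_viewpoint, polar_angles_deg, phi_deg):
        # Expression 10: collect |difference| between viewpoint images whose
        # polar angles differ from one another by 180 degrees.
        acc = None
        for theta in polar_angles_deg:
            v1 = np.asarray(render_viewpoint(theta, phi_deg), dtype=np.float64)
            v2 = np.asarray(render_viewpoint(theta + 180.0, phi_deg), dtype=np.float64)
            d = np.abs(v1 - v2)
            acc = d if acc is None else acc + d
        return acc / len(polar_angles_deg)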

Moreover, Expression 10 shows that information on scattered light can also be extracted from a computation between viewpoint images other than those with viewpoints that differ from one another by a polar angle θ of 180 degrees. For example, since α′=0 when |θ−β|=π/2, α×B(φ) can be extracted from a difference between viewpoint images.

In addition, besides subtraction, information on scattered light on a sample surface can also be extracted by division. In other words, since a value of V(φ,α)/V(φ,α′) also varies in accordance with the inclination angle α, V(φ,α)/V(φ,α′) may also be considered an index that represents information on scattered light.

(Extraction of Information on Scattered Light on a Sample Surface from Viewpoint Images with Varied Viewpoint Observation Angles φ)

Next, extraction of information on scattered light on a sample surface by computing viewpoint images with varied observation angles φ in a similar manner to polar angles θ described above will be considered.

A subtraction between two viewpoint images respectively having observation angles of φ and φ′ can be represented by the expression below (where an approximation of A(φ)=A(φ′) is adopted).


V(φ,α)−V(φ′,α)≅α×(B(φ)−B(φ′))  [Expression 11]

As described earlier, since B(φ) is a function that increases as φ increases, it is shown that, by performing a computation between two viewpoint images with different observation angles φ, (part of) information on the scattered light on a sample surface can be extracted. Since B(φ) takes a value of 0 when φ=0, a subtraction between a viewpoint image with an observation angle φ and a viewpoint image with a viewpoint at which an observation angle φ′ takes a value of 0 is most effective. In other words, when extracting scattered light from two viewpoint images, preferably, one viewpoint image has an observation angle with a magnitude of 0 degrees and the other viewpoint image has an observation angle with a magnitude other than 0 degrees.

Since the unevenness direction angle β in a sample may take various values in a similar manner to the case of the polar angle θ described earlier, it is shown that, by obtaining differences between viewpoint images with viewpoints having varying polar angles θ in addition to varying observation angles φ and collecting results thereof, information on scattered light on a sample surface can be extracted.
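
An analogous sketch for the observation-angle case, again using the hypothetical render_viewpoint function above, takes the viewpoint image with an observation angle of 0 degrees as the reference and subtracts it from oblique viewpoint images over several polar angles.

    import numpy as np

    def scattering_image_from_observation_angles(render_viewpoint, polar_angles_deg, phi_deg):
        # Expression 11 with phi' = 0: subtract the on-axis viewpoint image from
        # viewpoint images with observation angle phi and collect the results.
        ref = np.asarray(render_viewpoint(0.0, 0.0), dtype=np.float64)
        acc = np.zeros_like(ref)
        for theta in polar_angles_deg:
            acc += np.abs(np.asarray(render_viewpoint(theta, phi_deg), dtype=np.float64) - ref)
        return acc / len(polar_angles_deg)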

In addition, besides subtraction, information on scattered light on a sample surface can also be extracted by division. From the description so far, since a brightness of a viewpoint image at an observation angle φ can be approximated by α×B (φ)+D (where D is a constant), a division of brightness between viewpoint images with varied observation angles φ can be represented by

DIV(φ,φ′)≅(α×B(φ)+D)/(α×B(φ′)+D)  [Expression 12]

DIV(φ,φ′) takes a value of 1 when α is 0, and when α is not 0, its value varies in accordance with the magnitude of α. This means that information on the inclination angle α can also be extracted by the division DIV(φ,φ′).

In a similar manner to the case of subtraction, by performing divisions between viewpoint images with viewpoints having varying polar angles θ in addition to varying observation angles φ and collecting results thereof, information on scattered light on a sample surface can be extracted.
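
(As an illustrative sketch only: the following Python fragment computes the pixelwise ratio of Expression 12 between a viewpoint image at an observation angle φ and an on-axis viewpoint image at φ′=0; the array inputs and the small constant eps are assumptions made for this sketch.)

import numpy as np

# Pixelwise division between a viewpoint image at observation angle phi and an on-axis
# viewpoint image (phi' = 0), following Expression 12. Where the sample is flat the ratio
# is approximately 1; it deviates from 1 in proportion to the inclination alpha.
def scattering_ratio(img_phi, img_on_axis, eps=1e-6):
    return img_phi.astype(np.float64) / (img_on_axis.astype(np.float64) + eps)

# Synthetic example: a small patch receiving extra scattered light in an otherwise flat field.
on_axis = np.full((64, 64), 100.0)
oblique = on_axis.copy()
oblique[20:30, 20:30] += 15.0
ratio_map = scattering_ratio(oblique, on_axis)
print(ratio_map.min(), ratio_map.max())   # flat region is close to 1, the patch exceeds 1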

An image (a microscopic image) photographed by a transmission microscope as is the case of the present example includes a transmitted light component created by light transmitted through the sample and a scattered light component created by light scattered by a surface or the like of the sample. Since intensity of the transmitted light component is dependent on transmittance of light through the sample, the intensity of the transmitted light component represents a difference in colors or a difference in refractive indexes in the sample. On the other hand, as described earlier, intensity of the scattered light component is mainly dependent on unevenness (a surface profile) of the sample surface. Although the scattered light component is hardly recognizable in a microscopic image as an original image due to the transmitted light component being dominant, by extracting or enhancing a difference among a plurality of viewpoint images with different observation directions as described above, the scattered light component can be extracted or enhanced. In other words, a “difference” and a “ratio” between two viewpoint images with different observation directions can be regarded as a feature amount that represents the scattered light component (information on scattered light) included in the original image.

Moreover, while “enhancement” refers to an operation for making a given portion more prominent (than its original state) and “extraction” refers to an operation for extracting only a given portion, since the operations are similar in terms of focusing on a given portion, the two terms may sometimes be used without any distinction in the present specification. In addition, an operation of enhancing a scattered light component not only includes an operation of increasing intensity of the scattered light component in an image but also includes an operation of relatively increasing intensity of the scattered light component by reducing intensity of a transmitted light component in the image. Subsequently, an image obtained by extracting or enhancing a scattered light component (information on scattered light) included in an image will be referred to as a scattering image and an operation of generating a scattering image from an original image will be referred to as an extraction of a scattering image. Moreover, while subtraction and division have been exemplified as computations for extracting or enhancing a difference between viewpoint images, any kind of computation may be used as long as a difference between images can be enhanced.

Scattering images are expected to be useful as observation images suitable for observation of unevenness (a surface profile) of a sample surface and for diagnosis based on the observation. For example, even if there is unevenness on a surface in a given region inside a sample, when the transmittance in the region is approximately uniform, brightness or color of the region is uniform in the original image and the unevenness cannot be visually recognized. A scattering image is particularly effective for visualizing such surface unevenness (a surface unevenness that is not manifested as a variation in brightness or color). In addition, since unevenness also exists at a cell boundary or at a boundary between a cell and a sinusoid, a scattering image is also effective for clarifying such boundaries. Moreover, although processes for extracting or enhancing an image feature include edge extraction (enhancement), surface unevenness that is not manifested as a variation in brightness or color in an image cannot be visualized by edge extraction (enhancement). Therefore, favorably, scattering image extraction and edge extraction are selectively used depending on what kind of structure or feature of a sample is to be observed.

Hereinafter, specific examples of the image generating apparatus 100 will be described.

Example 1 Scattering Image Extraction Setting Screen

FIGS. 7A and 7B show examples of setting screens of a scattering image extracting function according to Example 1.

After selecting a region 207 in a displayed image in the image display application shown in FIG. 2 using a mouse, an item named “viewpoint-decomposed scattering image extraction” (not shown) is selected from an extensions menu 208 that is displayed by a right click of the mouse. In response thereto, a new window 700 (FIG. 7A) showing images before and after the scattering image extraction processing and a scattering image extraction processing setting screen 703 (FIG. 7B) are displayed. The image in the region 207 is displayed in a left-side region 701 of the window 700 and an image resulting from the scattering image extraction processing is displayed in a right-side region 702 of the window 700.

The setting screen 703 is operated when modifying settings of the scattering image extracting function. When the user presses a viewpoint decomposition setting button 704 with the mouse, a setting screen for determining a direction (a three-dimensional observation direction) of the viewpoint image used for scattering image extraction is displayed. Moreover, there may be one or a plurality of viewpoints. Details will be given later. When the user presses a viewpoint scattering image extraction setting button 705, a viewpoint scattering image extraction setting screen for setting a method or parameters for extracting a scattering image from a viewpoint image is displayed. Various methods can be selected as the method of extracting a scattering image. Details of these methods will be given later. When the user presses a viewpoint scattering image synthesis setting button 706, a setting screen is displayed for generating an image (hereinafter, referred to as a viewpoint scattering synthesized image) by synthesizing images (hereinafter, referred to as viewpoint scattering images or viewpoint scattering extracted images) each representing a scattering image extracted from a viewpoint image. At this point, the weighting applied to each viewpoint scattering extracted image is configured. In addition, if necessary, a noise elimination parameter and the like applied after synthesizing the viewpoint scattering extracted images can be optionally configured. Details will be given later. An overlaid display 707 is a check box. By enabling this setting, the image in the selection region 207 and a scattering extracted image are displayed overlaid on each other in the right-side region 702. When the user configures the settings described above as necessary and then presses an execute button 708, a viewpoint image is generated, a scattering image is extracted, and a processing result is displayed. Details will be given later.

Reference numeral 710 denotes an extensions menu that can be called by right-clicking inside the window 700. Items for image analysis such as N/C ratio calculation (not shown) are lined up in the extensions menu 710. By selecting an item, an image analysis processing setting screen (not shown) is displayed, analysis processing is executed on a selection region in the window or on the entire window and a processing result is displayed. Details will be given later.

(Scattering Image Extraction Processing)

FIG. 8 shows a flow of scattering image extraction processing that is executed when the execute button 708 described above is pressed. This processing is realized by the image display application and the image generation program that is invoked from the image display application.

In a Z stack image acquiring step S801, based on coordinates of the image selection region 207 displayed by the image display application, data of a necessary range is acquired from a Z stack image stored in the main memory 302 or the storage device 130. Alternatively, when the Z stack image exists in the other computer system 140, data is acquired through the network I/F 304 and stored in the main memory 302.

Subsequently, in a viewpoint-decomposed scattering image extraction/synthesis processing step S802, based on information on a viewpoint which determines a line-of-sight direction with respect to the subject (an observation direction), viewpoint images corresponding to a plurality of viewpoints are generated from the Z stack image (this operation is also referred to as a decomposition to viewpoint images). In addition, a scattering image is extracted from each viewpoint image to generate a viewpoint scattering extracted image, and the viewpoint scattering extracted images are synthesized to generate a viewpoint scattering synthesized image. Details will be given later.

Next, in a contour extraction processing step S803, a contour extracted image which represents a contour extracted from the viewpoint scattering synthesized image is generated. It should be noted that the processing of step S803 is not essential and whether or not to apply the processing of step S803 can be modified according to settings (not shown). Details will be given later.

Finally, in an image display processing step S804, the contour extracted image, the viewpoint scattering extracted image, or the viewpoint scattering synthesized image is enlarged/reduced in accordance with a display magnification of the image display application and displayed in the right-side region 702. When the overlaid display 707 is enabled, the contour extracted image, the viewpoint scattering extracted image, or the viewpoint scattering synthesized image is displayed overlaid on the image in the selection region 207. In doing so, an image obtained by combining (adding or subtracting) the viewpoint scattering extracted image or the viewpoint scattering synthesized image of a corresponding position with the image in the selection region 207 may be displayed in the right-side region 702. Furthermore, a combined image obtained by performing tone correction on the added image so that its brightness approximates that of the image in the selection region 207 may be displayed in the right-side region 702. An animation display which switches among the plurality of viewpoint scattering extracted images at a constant time interval may be performed. In this case, the contour extracted image, the viewpoint scattering extracted image, or the viewpoint scattering synthesized image may be displayed in a different color for each channel (RGB) or may be changed to another color that differs from the color of the sample. The images used for display in this case (the contour extracted image, the viewpoint scattering extracted image, the viewpoint scattering synthesized image, and an image obtained by compositing these images with the original image) are all observation images suitable for image observation and image diagnosis.

(Viewpoint-Decomposed Scattering Image Extraction/Synthesis Processing)

FIG. 9 is a flow chart showing internal processing of the viewpoint-decomposed scattering image extraction/synthesis processing S802.

First, in a viewpoint acquisition processing step S901, positional information of a viewpoint necessary for generating a viewpoint image in a subsequent step S902 is acquired. In step S901, positional information of a viewpoint determined in advance may be acquired from the main memory 302, the storage device 130, or the other computer system 140. Alternatively, in step S901, positional information of a viewpoint may be obtained by calculation based on information set on the image display application. Details will be given later.

Subsequently, in a viewpoint image generation step S902, a viewpoint image corresponding to the viewpoint obtained in step S901 is generated based on the Z stack image of the selection region 207 acquired in step S801. Moreover, as a method of generating an arbitrary viewpoint image from a Z stack image (an MFI arbitrary viewpoint/out-of-focus blur image generating method), any method including the methods according to Patent Literature 1 and Non-Patent Literature 2, 3 and 4 may be used.

Next, in a viewpoint scattering image extraction processing step S903, scattering image extraction processing is performed on the generated viewpoint image based on the viewpoint scattering image extraction setting (705). When there are a plurality of viewpoints, the viewpoint scattering image extraction processing is executed for each of the viewpoints. Details will be given later. Subsequently, in a viewpoint scattering synthesized image generating step S904, the plurality of viewpoint scattering extracted images generated in step S903 are composited based on the viewpoint scattering image synthesis setting (706) and a viewpoint scattering synthesized image is generated. A function represented by steps S903 to S904 that is executed by the image generating apparatus 100 (the CPU 301) will be referred to as an observation image generating unit. Details will be given later.

Hereinafter, details of the viewpoint acquisition processing step S901, the viewpoint scattering image extraction processing step S903, and the viewpoint scattering synthesized image generating step S904 will be described.

(Viewpoint Acquisition Processing Step S901)

Hereinafter, a case in which positional information of a viewpoint is calculated in the viewpoint acquisition processing step S901 based on the viewpoint decomposition setting (704) will be described.

A viewpoint decomposition setting screen 1001 shown in FIG. 10A is an example of a setting screen that is displayed when the viewpoint decomposition setting button 704 is pressed. In this case, a viewpoint position of a viewpoint image used for scattering image extraction is set.

The setting screen 1001 offers two options as viewpoint setting methods: direct setting and mesh setting. In direct setting, the number of viewpoints and a viewpoint position (s, t) are directly specified by the user. On the other hand, in mesh setting, an outer diameter, an inner diameter (center shielded), and a discretizing step are specified by the user, and a position of each viewpoint is calculated from the specified values.

A maximum deviation of a calculated viewpoint is specified for “outer diameter” and a minimum deviation of a calculated viewpoint (in other words, a maximum deviation of a viewpoint not calculated) is specified for “inner diameter (center shielded)”. In this case, values of the outer diameter and the inner diameter (center shielded) are set according to distances (radii) centered on an origin on the lens plane. Moreover, a value exceeding a radius ra of the optical system on the lens plane cannot be set as the outer diameter. “Discretizing step” is an increment interval for discretely setting positions of viewpoints for which viewpoint images are generated within a donut-shaped region created by subtracting a circle defined by the “inner diameter” from a circle defined by the “outer diameter”. The finer the discretizing step, the larger the number of viewpoints to be calculated.

Moreover, various shapes can be set in addition to the circles described above. For example, a plurality of concentric circles with different radii or straight lines radially extending from a center can be set. When concentric circles are set, a discretizing step (for example, an angular interval setting) that determines a radius of each circle or a density of viewpoints on each circle can be set. In addition, in the case of straight lines radially extending from a center, a discretizing step that determines an interval of lines (for example, an angular interval setting) or a density of viewpoints on the radial lines can be set.
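
(A minimal sketch, assuming a regular square grid: the following Python fragment generates the viewpoint positions of the mesh setting described above, keeping only grid points that fall within the donut-shaped region between the inner and outer radii on the lens plane; the function and parameter names are illustrative rather than those of the actual implementation.)

import numpy as np

# Generate viewpoints (s, t) on a grid with the given discretizing step and keep those
# inside the donut-shaped region between inner_radius and outer_radius (center shielded).
def mesh_viewpoints(outer_radius, inner_radius, step):
    coords = np.arange(-outer_radius, outer_radius + step / 2, step)
    viewpoints = []
    for s in coords:
        for t in coords:
            if inner_radius <= np.hypot(s, t) <= outer_radius:
                viewpoints.append((s, t))
    return viewpoints

# Example: outer radius 0.5, inner radius 0.1, discretizing step 0.25.
# A finer step yields a larger number of viewpoints.
vps = mesh_viewpoints(outer_radius=0.5, inner_radius=0.1, step=0.25)
print(len(vps), vps[:4])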

(Viewpoint Scattering Image Extraction Processing Step S903)

A viewpoint scattering image extraction setting screen 1002 shown in FIG. 10B is an example of a setting screen that is displayed when the viewpoint scattering image extraction setting button 705 is pressed. In this case, a scattering image extracting method or parameters used when performing scattering image extraction from a viewpoint image are set.

In a method field on the setting screen 1002, an extraction method of a scattering image to be used in the viewpoint scattering image extraction processing step S903 can be selected. In addition, noise elimination setting can be configured on the viewpoint scattering image extraction setting screen 1002. Binarization by a threshold, a median filter, a bilateral filter which enables noise elimination while retaining a scattering image, or the like can be applied as the noise elimination setting. Due to this process, a scattering image with clearer contrast can be extracted and an N/C ratio can be more readily detected.

Next, an extraction method of a scattering image for each viewpoint in the viewpoint scattering image extraction processing step S903 will be described.

As already described with reference to Expression 10, by performing a subtraction between a viewpoint image with a prescribed viewpoint (s, t) and a viewpoint image with a viewpoint having a same observation angle φ but a polar angle θ that differs by 180 degrees, information related to grayscale due to a difference in transmittance of the sample is canceled and a scattering image can be extracted efficiently.

FIG. 11 is a flow chart showing internal processing of the viewpoint scattering image extraction processing S903 according to the present example. Hereinafter, an extraction method of a scattering image of each viewpoint image will be described with reference to FIG. 11.

First, in a polar angle-rotated viewpoint calculating step S1101, with respect to a viewpoint that is a processing object, a polar angle-rotated viewpoint at a position where a polar angle θ is rotated by 180 degrees is calculated. Moreover, since 180 degrees is most favorable as a rotation angle, a case of 180 degrees will be described below.

In a coordinate system shown in FIG. 13A, if a coordinate of a viewpoint P0 that is a processing object is expressed by (x, y)=(sp, tp), then a polar angle-rotated viewpoint P1 rotated by 180 degrees is expressed by (x, y)=(−sp, −tp).

Next, in a polar angle-rotated viewpoint image generating step S1102, a viewpoint image observed from the polar angle-rotated viewpoint P1 calculated in step S1101 is generated. Since a method of generating a viewpoint image has already been described in the viewpoint image generating step S902, a description thereof will be omitted. Moreover, when the viewpoint image of the polar angle-rotated viewpoint P1 has already been generated in the viewpoint image generating step S902, data existing in the storage device 130 or the main memory 302 of the image generating apparatus 100 is read out and used instead of regenerating the viewpoint image of the polar angle-rotated viewpoint P1.

Next, in a viewpoint scattering image generating step S1103, image computation is performed between a viewpoint image at the viewpoint P0 that is a processing object and a viewpoint image at the polar angle-rotated viewpoint P1, and information on scattered light at the sample surface that is observed in an observation direction from the viewpoint P0 is extracted and outputted as a viewpoint scattering image.

Hereinafter, a computation method used in step S1103 will be described.

If a viewpoint image observed in a line-of-sight direction from the viewpoint P0 so that a position expressed as Z=Zf becomes a focusing position (a first viewpoint image) is denoted by IP0 (X, Y, Zf), a viewpoint image observed in a line-of-sight direction from the polar angle-rotated viewpoint P1 so that a position expressed as Z=Zf becomes a focusing position (a second viewpoint image) is denoted by IP1 (X, Y, Zf), and the viewpoint scattering image (a first observation image) is denoted by SP0 (X, Y, Zf), then the computation is represented by the expression below.


SP0(X,Y,Zf)=|D1(X,Y,Zf)|


D1(X,Y,Zf)=IP1(X,Y,Zf)−IP0(X,Y,Zf)  [Expression 13]

The reason why information on scattered light at the sample surface can be extracted by the operational expression represented by Expression 13 will now be described. As described earlier, when a viewpoint is varied, while an intensity variation of transmitted light of a viewpoint image is small, a significant brightness variation due to scattered light at the sample surface occurs in accordance with the polar angle θ. Therefore, a subtraction between viewpoint images with varying viewpoints cancels a transmitted light component in an image and enables a scattered light component (information on scattered light) to be extracted. The transmitted light component need not necessarily be canceled completely. If an image can be obtained in which a scattered light component in the image is relatively enhanced by reducing intensity of a transmitted light component in the image, such an operation can also be considered an extraction of a scattering image.

When a polar angle rotation angle is 180 degrees, scattered light of surface unevenness having an unevenness direction angle that produces maximum scattered light at the polar angle of the viewpoint P0 is minimized at the polar angle of the viewpoint P1. In a similar manner, scattered light of surface unevenness having an unevenness direction angle that produces maximum scattered light at the polar angle of the viewpoint P1 is minimized at the polar angle of the viewpoint P0.

Therefore, by obtaining a difference D1 (X, Y, Zf) between a viewpoint image of the viewpoint P0 and a viewpoint image of the viewpoint P1, information on scattered light of surface unevenness having an unevenness direction angle that maximizes scattered light at polar angles of viewpoints P0 and P1 can be extracted as an image.

In the viewpoint scattering image generating step S1103, SP0 (X, Y, Zf) is outputted as a viewpoint scattering image (a viewpoint scattering extracted image) at the viewpoint P0 that is the processing object.
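
(The following Python sketch summarizes steps S1101 to S1103 under assumptions: view_image(s, t) is a stand-in for the viewpoint image generator of step S902 and returns a two-dimensional array focused at Z=Zf for the viewpoint (s, t); the toy generator at the end is purely illustrative.)

import numpy as np

def viewpoint_scattering_image(view_image, sp, tp):
    # S1101: polar angle-rotated viewpoint obtained by rotating the polar angle by 180 degrees.
    s1, t1 = -sp, -tp
    # S1102: generate (or reuse) the viewpoint images of the object viewpoint and the rotated viewpoint.
    i_p0 = view_image(sp, tp).astype(np.float64)
    i_p1 = view_image(s1, t1).astype(np.float64)
    # S1103: difference between the two viewpoint images; the absolute value is the
    # nonlinearity that prevents cancellation when summing over viewpoints (Expression 13).
    return np.abs(i_p1 - i_p0)

# Toy usage with a synthetic stand-in generator (illustrative only).
rng = np.random.default_rng(0)
base = rng.random((32, 32))
toy_generator = lambda s, t: base + 0.01 * (s - t)
print(viewpoint_scattering_image(toy_generator, 0.3, 0.1).max())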

In Expression 13, an absolute value of the difference D1 (X, Y, Zf) is used in the calculation of the viewpoint scattering image SP0 (X, Y, Zf) in order to impart nonlinearity to a function for computing a viewpoint scattering image from a viewpoint image. When an absolute value is not calculated, an operation involving obtaining differences between viewpoint images at various viewpoints and subsequently calculating a sum of the differences in step S904 is equivalent to an operation of calculating a sum of viewpoint images at various viewpoints and subsequently calculating a difference. In such a case, the viewpoint images cancel out one another and an obtained synthesized image is substantially 0. Therefore, with the viewpoint scattering image SP0 (X, Y, Zf), a nonlinear function is used to prevent the canceling-out described above.

Moreover, it is assumed that a nonlinear function refers to a function for which applying the function to prescribed images (viewpoint images) and then adding up the plurality of function-applied images does not give the same result as first adding up the plurality of prescribed images (viewpoint images) and then applying the function, as expressed below.


Σ_{i=1}^{N}S(D(i))≠S(Σ_{i=1}^{N}D(i)),  [Expression 14]

where N denotes the number of viewpoints, D(i) denotes a difference image between a viewpoint image and a polar angle-rotated viewpoint image at a viewpoint i, and S ( ) denotes the function.

Moreover, Expression 13 is simply an example of a computation between viewpoint images for generating a scattering image and various other computations can be used.

For example, when obtaining polar angle-rotated viewpoints by rotating a viewpoint polar angle θ by ±90 degrees in the polar angle-rotated viewpoint calculating step S1101, two polar angle-rotated viewpoints are obtained which are respectively in a positive direction and a negative direction and which can respectively be set as P1=(−tp, sp) and P2=(tp, −sp). In this case, in the viewpoint scattering image generating step S1103, a viewpoint scattering image may be generated using the following in place of Expression 13.


SP0(X,Y,Zf)=(|D1(X,Y,Zf)|+|D2(X,Y,Zf)|)/2


D1(X,Y,Zf)=IP1(X,Y,Zf)−IP0(X,Y,Zf)


D2(X,Y,Zf)=IP2(X,Y,Zf)−IP0(X,Y,Zf)  [Expression 15]

In addition to the above, various nonlinear functions can be used as SP0 (X, Y, Zf). For example, a maximum value of corresponding pixels may be selected from respective absolute values of D1 (X, Y, Zf) and D2 (X, Y, Zf). Alternatively, as described earlier, a division (ratio) between IP0 (X, Y, Zf) and IP2 (X, Y, Zf) may be used instead of a difference between IP0 (X, Y, Zf) and IP2 (X, Y, Zf).
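
(A hedged sketch of the computation of Expression 15 and its maximum-value variant, assuming the same kind of stand-in viewpoint image generator as above; all names are illustrative.)

import numpy as np

def viewpoint_scattering_image_90(view_image, sp, tp, use_max=False):
    i_p0 = view_image(sp, tp).astype(np.float64)
    i_p1 = view_image(-tp, sp).astype(np.float64)   # polar angle rotated by +90 degrees
    i_p2 = view_image(tp, -sp).astype(np.float64)   # polar angle rotated by -90 degrees
    d1 = np.abs(i_p1 - i_p0)
    d2 = np.abs(i_p2 - i_p0)
    if use_max:
        return np.maximum(d1, d2)   # select the maximum value of corresponding pixels
    return (d1 + d2) / 2.0          # average of the absolute differences (Expression 15)

# Toy check with an inline stand-in generator (illustrative only).
base = np.linspace(0.0, 1.0, 16 * 16).reshape(16, 16)
gen = lambda s, t: base + 0.02 * s - 0.01 * t
print(viewpoint_scattering_image_90(gen, 0.2, 0.4).mean())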

In addition, Expression 15 can also be applied to rotation angles other than 90 degrees, in which case rotation angles that differ between the positive direction and the negative direction may be selected. Furthermore, a viewpoint scattering image can also be generated using two or more polar angle-rotated viewpoints. The user can select a rotation angle of a polar angle-rotated viewpoint or a computation method of a viewpoint scattering image on the setting screen 1002.

Moreover, as described earlier with reference to Expression 10, various rotation angles can be adopted as the polar angle-rotated viewpoint, and while the magnitude of the observation angle φ also has an effect, a scattering image can be extracted even at a rotation angle of 45 degrees. Therefore, the scope of the present invention includes all cases regardless of what kind of values are adopted as the rotation angle of a polar angle-rotated viewpoint.

(Viewpoint Scattering Synthesized Image Generating Step S904)

A scattering image synthesis setting screen 1003 shown in FIG. 10C is an example of a setting screen that is displayed when the viewpoint scattering image synthesis setting button 706 is pressed. At this point, a compositing method used when synthesizing viewpoint scattering extracted images is set.

The setting screen 1003 has a list box for selecting a compositing method to be used when synthesizing the respective scattering extracted images. Various compositing methods such as “equal”, “Gaussian blur”, and “select/composite maximum value” can be selected. In this case, “equal” represents a method of compositing the respective scattering extracted images using equal weighting and “Gaussian blur” represents a method using weighting obtained by a Gaussian function in accordance with a distance from an origin (on the optical axis) of each viewpoint. In addition, “select/composite maximum value” represents a method of creating a composite image with a same size as the respective scattering extracted images by comparing pixel values at a same position on the respective scattering extracted images and selecting a maximum pixel value.

In the viewpoint scattering synthesized image generating step S904, a plurality of viewpoint scattering extracted images are synthesized to generate a viewpoint scattering synthesized image (second image for observation).

In addition, in the viewpoint scattering synthesized image generating step S904, noise elimination may be performed in order to eliminate noise included in scattering extracted images in a similar manner to the viewpoint scattering image extraction processing step S903. In this case, a noise elimination setting is configured on the setting screen 1003.
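
(A minimal sketch of the synthesis of step S904 under assumed inputs: images is a list of viewpoint scattering extracted images as two-dimensional arrays and viewpoints is the matching list of (s, t) coordinates; the method names mirror the options of the setting screen 1003, but the weighting formulas themselves are illustrative.)

import numpy as np

def synthesize(images, viewpoints, method="equal", sigma=0.5):
    stack = np.stack([img.astype(np.float64) for img in images])
    if method == "max":
        # compare pixel values at the same position and select the maximum ("select/composite maximum value")
        return stack.max(axis=0)
    if method == "equal":
        weights = np.ones(len(images))
    elif method == "gaussian":
        # weight decreases with the distance of each viewpoint from the origin (optical axis)
        dist2 = np.array([s * s + t * t for s, t in viewpoints])
        weights = np.exp(-dist2 / (2.0 * sigma * sigma))
    else:
        raise ValueError(method)
    weights = weights / weights.sum()             # normalize the weights
    return np.tensordot(weights, stack, axes=1)   # weighted sum over the image stack

imgs = [np.ones((8, 8)) * v for v in (1.0, 2.0, 3.0)]
vps = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)]
print(synthesize(imgs, vps, method="gaussian").mean())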

It should be noted that the setting screens shown in FIGS. 10A to 10C merely represent examples. A function for configuring default settings or a function that automatically sets optimum values is desirably provided so that a pathologist who is the user can promptly make observations and diagnoses without being hassled by settings.

This concludes the description of the viewpoint-decomposed scattering image extraction/synthesis processing (S802 in FIG. 8) according to the present example.

(Contour Extraction Processing)

Next, an example of contour extraction processing (S803 in FIG. 8) will be described.

While a scattering image is enhanced in a viewpoint scattering synthesized image, the viewpoint scattering synthesized image also contains noise and signal components of varying levels. In consideration thereof, contour extraction processing is performed to make a contour more visible. For example, a contour can be extracted by binarizing a viewpoint scattering synthesized image (a value determined in advance may be used as a binarization threshold or a binarization threshold may be dynamically determined) and subsequently repeating expansion/contraction processing. In addition, various known techniques exist as other contour extracting methods and any of these methods can be used. Furthermore, by adding a line thinning process, accuracy of the positions where a contour exists can be improved. As a result of the processing, a contour extracted image is obtained from the viewpoint scattering synthesized image.
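
(A hedged sketch of such contour extraction, using generic binarization and morphological operations from SciPy; the threshold rule, iteration counts, and the simple boundary-based thinning are illustrative choices, not necessarily those used by the actual application.)

import numpy as np
from scipy import ndimage

def extract_contour(synth_image, threshold=None, iterations=2):
    img = synth_image.astype(np.float64)
    if threshold is None:
        threshold = img.mean() + img.std()   # simple dynamically determined threshold
    mask = img > threshold
    # expansion/contraction processing to suppress isolated noise
    mask = ndimage.binary_closing(mask, iterations=iterations)
    mask = ndimage.binary_opening(mask, iterations=iterations)
    # one-pixel-wide contour: the region minus its erosion (a crude thinning step)
    contour = mask & ~ndimage.binary_erosion(mask)
    return contour.astype(np.uint8)

# Synthetic example: a bright rectangular region yields a rectangular contour.
img = np.zeros((32, 32))
img[10:20, 12:22] = 5.0
print(extract_contour(img).sum())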

(Display/Analysis of Image)

Subsequently, after the image display processing S804, by displaying the viewpoint scattering extracted image, the viewpoint scattering synthesized image, or the contour extracted image on the image display application, a cell boundary between cells, a boundary between a cell and a sinusoid, and the like can be made more distinguishable. Accordingly, the pathologist can more easily visualize a three-dimensional structure of an affected tissue.

Furthermore, by invoking the extensions menu 710 by right-clicking the mouse in the window 700 and selecting an item such as N/C ratio (nucleus/cytoplasm ratio) calculation or the like, image analysis can be performed.

FIG. 12 shows an example of a processing flow of N/C ratio calculation.

N/C ratio calculation is premised on the use of two images, namely, an image in the selection region 207 in the left-side region 701 and a contour extracted image. Hereinafter, a portion of a nucleus in an image is referred to as a nucleus region, a portion of cytoplasm surrounding the nucleus is referred to as a cytoplasm region, and a combined whole of the nucleus region and the cytoplasm region is referred to as a cell region.

First, in a nucleus region determination processing step S1201, a nucleus region is determined. Examples of methods thereof include the following method. With HE staining, since the inside of a nucleus is stained deep blue, whether or not a region is a nucleus region can be determined based on whether or not pixels in the selection region 207 positioned inside a corresponding closed region in the contour extracted image belong in a prescribed color gamut range at a certain ratio or more. The ratio and the color gamut used for the determination may be learned in advance using a plurality of samples.

Next, in a cytoplasm region determination processing step S1202, a cytoplasm region is determined. With HE staining, a cytoplasm is stained in pink. Therefore, in a similar manner to the nucleus region determination processing, whether or not a region is a cell region can be determined based on whether or not pixels in the selection region 207 positioned inside a corresponding closed region in the contour extracted image belong in a prescribed color gamut range at a certain ratio or more. Subsequently, a cytoplasm region is identified by subtracting a closed region that is assumed to be a nucleus region in step S1201 from the cell region. The ratio and the color gamut used for this determination may also be learned in advance using a plurality of samples.

When automatic processing is unable to achieve sufficient accuracy, a region may be determined with an intervention (assistance) by the user. In this case, after step S1202, a setting screen that enables the user to correct a contour, a nucleus region, or a cell region is displayed on the GUI.

Finally, in an N/C ratio calculation processing step S1203, an area of the nucleus region is divided by an area of the cytoplasm region obtained above to calculate an N/C ratio.
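
(A minimal sketch of steps S1201 to S1203 under assumptions: the color-gamut test receives the RGB pixels of one closed region of the contour extracted image together with a gamut learned in advance, and the final ratio is computed from boolean masks of the nucleus region and the cytoplasm region; all names and the default ratio are illustrative.)

import numpy as np

def is_in_gamut_region(rgb_pixels, gamut, min_ratio=0.5):
    # rgb_pixels: (N, 3) array of pixels inside one closed region;
    # gamut: ((r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi)) learned in advance (assumed input).
    in_gamut = np.ones(len(rgb_pixels), dtype=bool)
    for c, (lo, hi) in enumerate(gamut):
        in_gamut &= (rgb_pixels[:, c] >= lo) & (rgb_pixels[:, c] <= hi)
    return in_gamut.mean() >= min_ratio   # region accepted if enough pixels fall in the gamut

def nc_ratio(nucleus_mask, cytoplasm_mask):
    nucleus_area = np.count_nonzero(nucleus_mask)
    cytoplasm_area = np.count_nonzero(cytoplasm_mask)
    if cytoplasm_area == 0:
        raise ValueError("no cytoplasm region found")
    return nucleus_area / cytoplasm_area   # N/C ratio (step S1203)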

The N/C ratio calculation flow described above is merely an example and various modification and improvements can be made thereto.

(Advantages of Present Example)

As described above, in the present example, taking advantage of a characteristic that intensity of scattered light at a sample surface varies significantly between viewpoint images with different polar angles θ of viewpoints, a scattering image of a sample can be extracted from a Z stack image. Therefore, a cell membrane, a cell boundary, and a boundary between a cell and a tube or a cavity which are useful when observing a sample can be clarified without having to modify optical systems or exposure conditions. In addition, an effect of increasing contrast and visualizing even surface unevenness that is hardly manifested as a variation in brightness or color can be produced. Accordingly, a diagnosis supporting function which includes presenting images useful for diagnosis and calculating an N/C ratio can be realized.

Moreover, while the present example is configured so that the viewpoint-decomposed scattering image extraction/synthesis processing is executed when the execute button 708 is pressed, the viewpoint-decomposed scattering image extraction/synthesis processing may be executed every time the setting parameters shown in FIG. 7B and FIGS. 10A to 10C are modified. As a result, processing results are to be displayed in real-time in synchronization with modifications made to the setting parameters. In the case of this configuration, the setting items shown in FIG. 7B and FIGS. 10A to 10C may be deployed and arranged in a single setting screen. Such an implementation is also included in the scope of the present invention.

Example 2

In the present example, a method of extracting a viewpoint scattering image by a computation between viewpoint images with a plurality of viewpoints with a same polar angle θ but different observation angles φ in the viewpoint scattering image extraction processing step S903 will be described.

As already described with reference to Expression 11, by performing subtraction between a viewpoint image with a prescribed viewpoint (s, t) and a viewpoint image with a viewpoint having a same polar angle θ but a different observation angle, information related to grayscale due to a difference in transmittance of the sample is canceled and a scattering image can be extracted efficiently.

FIG. 16 is a flow chart showing internal processing of the viewpoint scattering image extraction processing S903 according to the present example. Hereinafter, an extraction method of a scattering image of each viewpoint image will be described with reference to FIG. 16.

First, in an observation angle-modified viewpoint calculating step S1601, an observation angle-modified viewpoint, which is located at a position whose observation angle differs from that of the viewpoint that is a processing object, is calculated.

In the coordinate system shown in FIG. 13A, if a coordinate of the viewpoint P0 that is a processing object is denoted by (x, y)=(sP, tP), then an observation angle-modified viewpoint thereof can be set on a straight line connecting an optical axis (0, 0) and the viewpoint P0. While positions of various viewpoints can be selected, in this case, P1=(0, 0) is selected as an observation angle-modified viewpoint.

Next, in an observation angle-modified viewpoint image generating step S1602, a viewpoint image observed in a line-of-sight direction of the observation angle-modified viewpoint P1 calculated in step S1601 so that a position expressed by Z=Zf becomes a focusing position is generated. Since a method of generating a viewpoint image has already been described in the viewpoint image generating step S902, a description thereof will be omitted. Moreover, in a similar manner to Example 1, when a viewpoint image of the observation angle-modified viewpoint P1 has already been calculated, data existing in the storage device 130 or the main memory 302 of the image generating apparatus 100 is to be read out and used instead of recalculating the viewpoint image of the observation angle-modified viewpoint P1.

Next, in a viewpoint scattering image generating step S1603, image computation is performed between the viewpoint image (a first viewpoint image) at the viewpoint P0 that is a processing object and the viewpoint image (a second viewpoint image) at the observation angle-modified viewpoint P1. Due to the computation, information on scattered light at the sample surface as observed in an observation direction from the viewpoint P0 is extracted and outputted as a viewpoint scattering image (a first observation image).

If viewpoint images observed in line-of-sight directions of the viewpoint P0 and the observation angle-modified viewpoint P1 so that a position expressed as Z=Zf becomes a focusing position are respectively denoted by IP0 (X, Y, Zf) and IP1 (X, Y, Zf) and the viewpoint scattering image is denoted by SP0 (X, Y, Zf), then the computation is represented by Expression 13 in a similar manner to Example 1. The reason why information on scattered light at the sample surface can be obtained from the viewpoint scattering image SP0 (X, Y, Zf) is as described earlier with reference to Expression 11.

In addition, as already described with reference to Expression 12, information on a scattering image can also be extracted by a division between viewpoint images. In such a case, for example, the following computation may be used.


SP0(X,Y,Zf)=|D1(X,Y,Zf)|


D1(X,Y,Zf)=IP0(X,Y,Zf)/IP1(X,Y,Zf)  [Expression 16]

Moreover, Expressions 13 and 16 are examples of a computation between viewpoint images for generating a viewpoint scattering image, and various other computations can be used in a similar manner to Example 1.

In addition, the observation angle-modified viewpoint P1 may be at a position other than (x, y)=(0, 0) or may be acquired in plurality. For example, as an observation angle-modified viewpoint P2, a viewpoint at a prescribed distance from an optical axis (0, 0) or a viewpoint at a prescribed ratio (for example, half or ¼) on a straight line connecting the optical axis and the viewpoint P0 may be selected. In this case, Expression 15 can be used as an operational expression for obtaining a viewpoint scattering image.

Furthermore, even when there are a plurality of observation angle-modified viewpoints, a division between respective viewpoint images can be executed as represented by Expression 16 and an average of absolute values of the respective results may be taken as SP0 (X, Y, Zf). In a similar manner to Example 1, the user can select the number of observation angle-modified viewpoints or various operational expressions on the setting screen 1002.
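
(A hedged sketch of the division-based variant of the present example, assuming the on-axis viewpoint P1=(0, 0) as the observation angle-modified viewpoint and the same kind of stand-in viewpoint image generator as in Example 1; eps is an assumed guard against division by zero.)

import numpy as np

def viewpoint_scattering_image_div(view_image, sp, tp, eps=1e-6):
    i_p0 = view_image(sp, tp).astype(np.float64)
    i_p1 = view_image(0.0, 0.0).astype(np.float64)   # on-axis observation angle-modified viewpoint
    d1 = i_p0 / (i_p1 + eps)                         # division between the viewpoint images (Expression 16)
    return np.abs(d1)                                # viewpoint scattering image SP0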

(Advantages of Present Example)

As described above, in the present example, taking advantage of a characteristic that intensity of scattered light at a sample surface varies significantly between viewpoint images with different observation angles φ of viewpoints, a scattering image of a sample can be extracted from a Z stack image. Therefore, even with the method according to the present example, an observation image suitable for observation and diagnosis can be obtained in a similar manner to Example 1.

Example 3

In Examples 1 and 2, methods of calculating a viewpoint scattering synthesized image have been described. In the present example, a description will be given on a method of generating an observation image in which information on scattered light is enhanced and a depth of field is increased by compositing a viewpoint scattering synthesized image onto an original image (a layer image at Z=Zf that is a focus position in a Z stack image).

In Example 1, it has been described that a difference in transmitted light intensity between viewpoint images when varying viewpoints is very small and a scattering image representing surface unevenness can be generated by calculating a difference between viewpoint images. However, when a thickness of a sample is large, the number of objects existing at depths that differ from the position in focus (the focus position Zf) increases and information on blur of such objects may be retained in a scattering image. In addition, when an observation angle φ of a viewpoint increases, a blur of an object increases even when a difference between a position in a depth direction at which the object exists and the focus position Zf is small and a similar phenomenon occurs.

Hereinafter, the effect on a scattering image when an object exists at a position separated from the position in focus will be described.

First, a three-dimensional imaging relationship in an MFI arbitrary viewpoint/out-of-focus blur image generating method will be described.

With the MFI arbitrary viewpoint/out-of-focus blur image generating method, a relationship is established in which a group of out-of-focus blur images (a Z stack image) subjected to a coordinate transform is expressed by a convolution of a three-dimensional subject and a three-dimensional out-of-focus blur. If the three-dimensional subject is denoted by f(X, Y, Z), the three-dimensional out-of-focus blur is denoted by h(X, Y, Z), and the group of out-of-focus blur images (the Z stack image) subjected to a coordinate transform is denoted by g(X, Y, Z), then the following expression holds true.


g(X,Y,Z)=f(X,Y,Z)***h(X,Y,Z)  [Expression 17]

(where *** denotes three-dimensional convolution)

FIGS. 18A to 18F are schematic views showing differences in Z stack images when objects exist at different Z positions of a three-dimensional subject. With a three-dimensional subject 1801, an object exists at a position expressed as Z=Zf, and with a three-dimensional subject 1811, an object exists at a position expressed as Z=Zo.

By imaging the three-dimensional subjects 1801 and 1811 using an optical system having a three-dimensional out-of-focus blur 1802 while varying focusing positions in the Z direction, Z stack images 1803 and 1813 are respectively obtained. (Moreover, the three-dimensional out-of-focus blur h(X, Y, Z) is shift-invariantly transformed by a coordinate transform and remains the same). Obviously, an image with minimum out-of-focus blur is obtained at the position expressed as Z=Zf in the case of the Z stack image 1803 and at the position expressed as Z=Zo in the case of the Z stack image 1813.

Next, a relationship between an arbitrary viewpoint image and an arbitrary out-of-focus blur image in the MFI arbitrary viewpoint/out-of-focus blur image generating method will be described.

FIGS. 19A to 19H are schematic views showing a relationship between an arbitrary viewpoint image and an arbitrary out-of-focus blur image obtained by the MFI arbitrary viewpoint/out-of-focus blur image generating method when objects exist at different Z positions of a three-dimensional subject.

Reference numerals 1900 in FIG. 19A and 1910 in FIG. 19E denote three-dimensional subjects and are respectively the same as reference numerals 1801 in FIG. 18A and 1811 in FIG. 18D. Light beams depicted by solid lines in the three-dimensional subjects 1900 and 1910 represent light beams which pass through objects in the subjects and which pass through prescribed viewpoints on a lens plane of an optical system respectively corresponding to the objects. Moreover, a dashed line in FIG. 19E corresponds to a solid line in FIG. 19A.

Light beams 1900a and 1910a are light beams which pass through a given viewpoint a on the lens plane and viewpoint images observed in line-of-sight directions corresponding to the viewpoint so that a position expressed as Z=Zf becomes a focusing position are respectively denoted by reference numerals 1901 in FIG. 19B and 1911 in FIG. 19F. In a similar manner, light beams 1900b and 1910b are light beams which pass through a given viewpoint b on the lens plane and viewpoint images observed so that respective positions expressed as Z=Zf become focusing positions are denoted by reference numerals 1902 in FIG. 19C and 1912 in FIG. 19G.

In the viewpoint images 1901 and 1902, images of the object appear at a same position. Conversely, in the viewpoint images 1911 and 1912, images of the object do not appear at a same position and respectively appear shifted by an amount determined by a distance dZ (=Zo−Zf) between Zf and Zo and the line-of-sight direction.

In the MFI arbitrary viewpoint/out-of-focus blur image generating method, a layer image g at a position expressed by Z=Zf of a Z stack image can be represented by the expression below.


g(X,Y,Zf)=∫∫k(s,tas,t(X,Y,Zf)dsdt  [Expression 18]

In Expression 18, k (s, t) is a function that indirectly represents a three-dimensional out-of-focus blur of an imaging optical system and represents a relative intensity distribution of a light beam passing through respective viewpoints (s, t) on the lens plane. The following relationship holds true between k (s, t) and h(X, Y, Z).


h(X,Y,Z)=∫∫k(s,t)×δ(X+s×Z,Y+t×Z)dsdt  [Expression 19]

As shown in the expression below, it is assumed that k (s, t) is normalized so that a sum of all viewpoints (s, t) existing on the lens plane equals 1.


∫∫k(s,t)dsdt=1  [Expression 20]

In addition, in Expression 18, as,t (X, Y, Zf) represents a viewpoint image at Z=Zf.

Expression 18 shows that, by using the MFI arbitrary viewpoint/out-of-focus blur image generating method, an arbitrary out-of-focus blur image of a prescribed Z position (Z=Zf) can be reconstructed from a large number of viewpoint images at the prescribed Z position.

In the following description, a function that defines a weight for each viewpoint (s, t) (each line-of-sight direction) when synthesizing a plurality of images such as k (s, t) will be referred to as a “viewpoint weighting function”. A viewpoint weighting function can also be described as representing a relative intensity distribution of a light beam originating at a given point on the subject and passing through a viewpoint (s, t). k (s, t) described above is a viewpoint weighting function having properties that correspond to a three-dimensional out-of-focus blur of an imaging optical system used to image the subject. Moreover, any kind of function may be used as the viewpoint weighting function. For example, ka (s, t) corresponding to an arbitrary three-dimensional out-of-focus blur for generating an arbitrary out-of-focus blur image in the MFI arbitrary viewpoint/out-of-focus blur image generating method can also be used as a viewpoint weighting function. In addition, a function with an arbitrary shape such as a columnar shape exemplified in an example described later can also be used as a viewpoint weighting function. A viewpoint weighting function corresponds to a weighting setting that is selected on the scattering image synthesis setting screen 1003 described in Example 1.

Moreover, while Expression 18 is described using an integral ∫ signifying an integration of continuous values, since actual image processing is a computation of a plurality of (finite number of) discrete viewpoints (s, t), Expression 18 is correctly described using sigma Σ. However, since the use of integral ∫ makes the mathematical expression more easily viewable and enables generalization, integral ∫ will also be used in the following description. Moreover, when the viewpoints are discrete (finite), Expression 20 represents a normalization performed so that a sum of all viewpoint weighting functions k (s, t) or ka (s, t) used in the computation equals 1.
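
(A minimal sketch of the discrete form of Expression 18 under assumed inputs: viewpoint_images is a list of viewpoint images at Z=Zf as two-dimensional arrays and weights is the matching list of viewpoint weighting function values k(s, t); the normalization corresponds to Expression 20.)

import numpy as np

def reconstruct_layer(viewpoint_images, weights):
    stack = np.stack([img.astype(np.float64) for img in viewpoint_images])
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                         # normalization of Expression 20
    return np.tensordot(w, stack, axes=1)   # weighted sum: discrete version of Expression 18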

(Blurred Image at Z=Zf)

Next, a layer image at a position expressed by Z=Zf of a Z stack image is obtained from a viewpoint image at Z=Zf using the MFI arbitrary viewpoint/out-of-focus blur image generating method.

In this case, it is assumed that, at the three-dimensional subjects 1801 and 1811, a point object exists on an optical axis of the optical system and a viewpoint image (a full focused image) seen from a viewpoint passing through the optical axis is represented by the expression below.


I(X,Y,Zf)=(a−b)×δ(X,Y)+b,  [Expression 21]

where δ denotes the Dirac delta function, a denotes intensity (a pixel value) of the point object, and b denotes intensity of the background.

Hereinafter, a layer image at a position expressed by Z=Zf of a Z stack image when imaging the three-dimensional subjects 1801 and 1811 is obtained using Expression 18.

(Layer Image at Z=Zf of the Three-Dimensional Subject 1801)

If the thickness of the object can be ignored, the viewpoint image as,t (X, Y, Zf) according to Expression 18 can be expressed as I (X, Y, Zf) regardless of the viewpoint position (s, t). Therefore, by substituting Expression 21 into Expression 18 and modifying Expression 18, a layer image represented by the following expression is obtained.


g(X,Y,Zf)=(a−b)×δ(X,Y)+b  [Expression 22]

(Layer Image at Z=Zf of the Three-Dimensional Subject 1811)

In a similar manner, if the thickness of the object can be ignored, the viewpoint image as,t (X, Y, Zf) according to Expression 18 can be expressed as a translation of I (X, Y, Zf), and by substituting Expression 21 into Expression 18 and modifying Expression 18 using Expressions 19 and 20, the following expression is obtained. It is shown that a blur due to a deviation dZ from the focus position is included.


g(X,Y,Zf)=(a−bh(X,Y,dZ)+b  [Expression 23]

(where dZ=Zo−Zf)

Out-of-focus blurred images (layer images) 1903 and 1913 in FIGS. 19D and 19H are schematic views of layer images at Z=Zf obtained from a plurality of viewpoint images corresponding to a large number of viewpoints set on the lens plane. The layer image 1903 represents an image corresponding to Expression 22 and the layer image 1913 represents an image corresponding to Expression 23.

In the layer image 1903, since a position of an image of the object does not shift in the respective viewpoint images, an image without blur is obtained. On the other hand, in the layer image 1913, since a position of the object in the viewpoint images shifts, an image including blur of the optical system is obtained.

Hereinafter, a description on image properties of the viewpoint scattering synthesized image described in Example 1 will be given in light of the relationship between an arbitrary viewpoint image and an arbitrary out-of-focus blur image in the MFI arbitrary viewpoint/out-of-focus blur image generating method described above.

(Viewpoint Scattering Synthesized Image)

A viewpoint scattering synthesized image obtained by calculating a plurality of viewpoints P0 and corresponding polar angle-rotated viewpoints P1, performing the computation represented by the expression below between the respective viewpoint images, and adding viewpoint scattering images SP0 of the plurality of viewpoints P0 in a similar manner to Example 1 will now be considered.

SP0(X,Y,Zf)=(1/2)×k(s,t)×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|,  [Expression 24]

where k (s, t) denotes a viewpoint weighting function corresponding to a blur in an imaging optical system. (In Example 1, since positions of viewpoints P0 and P1 are at equal distances from an origin on the optical axis, viewpoint weighting functions at the positions of viewpoints P0 and P1 have equal values in an optical system that is rotationally symmetrical with respect to the optical axis). Multiplication by ½ is performed in order to extract average intensity of scattered light components of viewpoint images IP1 (X, Y, Zf) and IP0 (X, Y, Zf), and prevents intensity from being doubled when performing the same calculation at the viewpoints P0 and P1.

From Expression 24, a viewpoint scattering synthesized image DS (X, Y, Zf) at Z=Zf of the three-dimensional subjects 1801 and 1811 can be calculated by the expression below.

DS(X,Y,Zf)=∫∫(1/2)×k(s,t)×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|dsdt  [Expression 25]

When a thickness of the object can be ignored, since the viewpoint images IP1 (X, Y, Zf) and IP0 (X, Y, Zf) can be regarded as the same in the three-dimensional subject 1801, the viewpoint scattering synthesized image is 0.
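
(A hedged sketch of the discrete form of Expression 25: the viewpoint scattering synthesized image is accumulated over viewpoints P0=(s, t), each compared with its polar angle-rotated counterpart P1=(−s, −t) and weighted by the viewpoint weighting function; view_image and k are assumed stand-ins for the viewpoint image generator of step S902 and the weighting function of the imaging optical system.)

import numpy as np

def scattering_synthesized_image(view_image, viewpoints, k):
    ds = None
    for s, t in viewpoints:
        i_p0 = view_image(s, t).astype(np.float64)
        i_p1 = view_image(-s, -t).astype(np.float64)
        term = 0.5 * k(s, t) * np.abs(i_p1 - i_p0)   # per-viewpoint term of Expression 24
        ds = term if ds is None else ds + term       # accumulation over viewpoints (Expression 25)
    return ds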

Next, in the three-dimensional subject 1811, assuming that IP1 (X, Y, Zf) and IP0 (X, Y, Zf) can be expressed as translations of I (X, Y, Zf), Expression 25 can be modified to obtain the following expression.

DS(X,Y,Zf)=|a−b|×∫∫(1/2)×k(s,t)×|δ(X−s×dZ,Y−t×dZ)−δ(X+s×dZ,Y+t×dZ)|dsdt  [Expression 26]

Since an expression in the integral of Expression 26 is 0 when s=t=0 and otherwise satisfies


∫∫k(s,t)×δ(X+s×dZ,Y+t×dZ)dsdt=h(X,Y,dZ),  [Expression 27]

Expression 26 can be modified as follows.


DS(X,Y,Zf)=|a−b|×{h(X,Y,dZ)−h(0,0,dZ)}  [Expression 28]

(When Point Object is Darker than its Surroundings)

Next, |a−b| in the viewpoint scattering synthesized image represented by Expression 28 will be considered.

In a case where a point object in the three-dimensional subject 1811 is an object that is darker than the background, for example, when observing an object with low transmittance such as in bright-field observation, a relationship expressed as 0<a<b can be considered to be satisfied in Expression 21. Therefore, Expression 28 can be modified to the expression below.


DS(X,Y,Zf)=(b−a)×{h(X,Y,dZ)−h(0,0,dZ)}  [Expression 29]

(where 0<a<b)

(Addition of Layer Image at Z=Zf of Z Stack Image and Viewpoint Scattering Synthesized Image)

Next, properties of a composite image representing an addition of a layer image g (Expression 23) at Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 and a viewpoint scattering synthesized image DS (Expression 29) will be considered. A composite image ADD may be represented by the following expression.


ADD(X,Y,Zf)=g(X,Y,Zf)+DS(X,Y,Zf)=(a−bh(0,0,dZ)+b  [Expression 30]

Since h (0, 0, dZ) retains only the intensity of the central part of the out-of-focus blur h (X, Y, dZ) at a position separated by dZ from the focusing position of the three-dimensional out-of-focus blur and is close to a point image, it is shown that the depth of field of an image obtained by Expression 30 is extremely deep.

Therefore, even when an out-of-focus blur image of an object with low transmittance (a case of 0<a<b) as in bright-field observation is included in a layer image at Z=Zf, a depth of field can be increased by adding a viewpoint scattering synthesized image.
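
(A purely algebraic check, with illustrative values, of how Expression 23 and Expression 29 combine into Expression 30 for a dark point object (0<a<b): a one-dimensional Gaussian profile stands in for the out-of-focus blur, and the sum no longer depends on the blur profile.)

import numpy as np

a, b = 0.2, 1.0                       # dark point object (a) on a brighter background (b)
x = np.linspace(-3.0, 3.0, 101)
h = np.exp(-x ** 2)                   # stand-in for the out-of-focus blur profile h(X, Y, dZ)
h0 = np.exp(-0.0 ** 2)                # central intensity h(0, 0, dZ) (= 1.0 here)

g = (a - b) * h + b                   # layer image at Z = Zf (Expression 23)
ds = (b - a) * (h - h0)               # viewpoint scattering synthesized image (Expression 29)
add = g + ds                          # composite image (Expression 30)
print(np.allclose(add, (a - b) * h0 + b))   # True: the blur contribution cancels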

Next, the description of the expressions above will be presented in an easier-to-understand manner using FIGS. 20A to 20F.

Image 2011 in FIG. 20D represents a layer image at Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 and corresponds to Expression 23. Image 2012 in FIG. 20E represents a viewpoint scattering synthesized image at Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 and corresponds to Expression 29. Image 2013 in FIG. 20F represents an image obtained by adding the viewpoint scattering synthesized image to the layer image at Z=Zf and corresponds to Expression 30.

Reference numerals 2001 in FIG. 20A, 2002 in FIG. 20B, and 2003 in FIG. 20C respectively denote brightness cross-sections in an X-direction of the images 2011, 2012, and 2013.

While brightness gradually increases from the center of the image due to the out-of-focus blur h (X, Y, dZ) of the imaging optical system in the layer image 2011, conversely, in the viewpoint scattering synthesized image 2012, brightness gradually decreases from the center of the image with the exception of the brightness being 0 at the center of the image. Therefore, as shown by the cross section 2003, by adding the viewpoint scattering synthesized image 2012 to the layer image 2011, a brightness variation due to the three-dimensional out-of-focus blur is canceled and the composite image 2013 is obtained in which objects at a center portion of the image can be clearly observed.

Moreover, in the case of the three-dimensional subject 1801 in which a point object exists at Z=Zf, since the layer image at Z=Zf is represented by Expression 22 and the viewpoint scattering synthesized image is 0, a result of an addition of both images remains the same layer image at Z=Zf.

Therefore, even when information on scattered light created by surface unevenness of a sample is included in a viewpoint scattering synthesized image and a blurred image of an object exists in a layer image at Z=Zf, enhancement of the scattered light and an increase in depth of field can be realized by an addition of the layer image and the viewpoint scattering synthesized image. Accordingly, an effect of clarifying an object that is darker than its surroundings such as a cell nucleus can be produced and an image that is useful in diagnosis and image analysis can be obtained.

(When Point Object is Brighter than its Surroundings)

When a point object in the three-dimensional subject 1811 is an object that is brighter than the background such as when observing a self-luminous fluorescent body as in the case of fluorescence observation, a relationship expressed as 0<b<a can be considered to be satisfied in Expression 21. Therefore, Expression 28 can be modified to the expression below.


DS(X,Y,Zf)=(a−b)×{h(X,Y,dZ)−h(0,0,dZ)}  [Expression 31]

(where 0<b<a)

(Subtraction Between Layer Image at Z=Zf of Z Stack Image and Viewpoint Scattering Synthesized Image)

Next, properties of a composite image representing a subtraction of a viewpoint scattering synthesized image DS (Expression 31) from the layer image g (Expression 23) at Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 will be considered. A composite image SUB may be represented by the following expression.


SUB(X,Y,Zf)=g(X,Y,Zf)−DS(X,Y,Zf)=(a−b)×h(0,0,dZ)+b  [Expression 32]

In a similar manner to Expression 30, the expression above shows that a depth of field can be increased by subtracting a viewpoint scattering synthesized image even if an out-of-focus blur image of a self-luminous fluorescent body (when 0<b<a) as in the case of fluorescence observation is included in a layer image at Z=Zf.

Similarly, a description will be given using FIGS. 21A to 21F.

Image 2111 in FIG. 21D represents a layer image at Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 and corresponds to Expression 23. Image 2112 in FIG. 21E represents a viewpoint scattering synthesized image at a position expressed by Z=Zf of a Z stack image obtained by performing imaging of the three-dimensional subject 1811 and corresponds to Expression 31. Image 2113 in FIG. 21F represents an image obtained by subtracting the viewpoint scattering synthesized image from the layer image at Z=Zf and corresponds to Expression 32.

Reference numerals 2101 in FIG. 21A, 2102 in FIG. 21B, and 2103 in FIG. 21C respectively denote brightness cross-sections in an X-direction of the images 2111, 2112, and 2113.

In the layer image 2111, brightness gradually decreases from the center of the image due to the out-of-focus blur h (X, Y, dZ) of the imaging optical system. In the viewpoint scattering synthesized image 2112, brightness gradually decreases from the center of the image with the exception of the brightness being 0 at the center of the image. Therefore, as shown by the cross section 2103, by subtracting the viewpoint scattering synthesized image 2112 from the layer image 2111, a brightness variation due to the three-dimensional out-of-focus blur is canceled and the composite image 2113 is obtained in which objects at a center portion of the image can be clearly observed.

Moreover, in the case of the three-dimensional subject 1801 in which a point object exists at Z=Zf, since the layer image at Z=Zf is represented by Expression 22 and the viewpoint scattering synthesized image is 0, a result of a subtraction between both images is the same layer image at Z=Zf.

Moreover, while it may seem that subtracting a viewpoint scattering synthesized image results in weakening information on the scattered light by the surface unevenness of the sample, by modifying a viewpoint weighting function or adjusting intensity when generating the viewpoint scattering synthesized image, a portion with intense scattered light can conversely be made darker than its surroundings. Therefore, even in the case of subtraction, a portion with intense scattered light can be made more prominent.

Even when information on scattered light created by surface unevenness of a sample is included in a viewpoint scattering synthesized image and a blurred image of an object exists in a layer image at Z=Zf, enhancement of the scattered light and an increase in depth of field can be realized by a subtraction between the layer image and the viewpoint scattering synthesized image. Accordingly, an effect of clarifying an object that is brighter than its surroundings such as a fluorescent body can be produced and an image that is useful in diagnosis and image analysis can be obtained.

(Modification of Operational Expression 24)

While k (s, t) corresponding to a three-dimensional out-of-focus blur h (X, Y, Z) of the imaging optical system has been used as a viewpoint weighting function in the operational expression between viewpoint images represented by Expression 24, ka (s, t) corresponding to any other three-dimensional out-of-focus blur ha (X, Y, Z) can also be used. The following relationship holds true between ka (s, t) and ha (X, Y, Z) in a similar manner to Expressions 19 and 20.


ha(X,Y,Z)=∫∫ka(s,t)×δ(X+s×Z,Y+t×Z)dsdt  [Expression 33]


∫∫ka(s,t)dsdt=1  [Expression 34]

In this case, an out-of-focus blur corresponding to ha (X, Y, Z) that is calculated from ka (s, t) appears on a brightness cross-section 2002 (or 2102) of the viewpoint scattering synthesized image 2012 (or 2112). Therefore, an effect of suppressing an increase in depth of field or an effect such as edge enhancement can be produced on a brightness cross-section 2003 (or 2103) of the composite image 2013 (or 2113).

In addition, viewpoints P0 and P1 in Expression 24 need not necessarily have a relationship in which respective polar angles θ differ from one another by 180 degrees. Furthermore, the viewpoint scattering image may be calculated using three or more viewpoint images such as the viewpoint positions P0, P1, and P2 in Expression 15. In any case, Expression 26 can be modified to Expression 28 and an effect of suppressing out-of-focus blur of an object in a layer image can be produced.

This concludes the description of the properties of composite images which are obtained by compositing images at a focus position in a Z stack image (layer images at Z=Zf) and viewpoint scattering synthesized images and which are generated in Examples 3 and 4. Hereinafter, these composite images will be referred to as focus position and scattering composite images.

Next, a processing flow of generating a focus position and scattering composite image (a third observation image) according to the present example will be described.

(Overview of Image Composition Processing)

FIG. 17 is a flow chart showing an overall flow of generating a focus position and scattering composite image according to the present example.

First, in a Z stack image acquiring step S1701, processing similar to that of step S801 in FIG. 8 is performed to acquire data of a Z stack image in a range necessary for subsequent calculations.

Next, in a viewpoint-decomposed scattering image extraction/synthesis processing step S1702, processing similar to that of step S802 is performed. First, viewpoint images corresponding to a plurality of viewpoints are generated from a Z stack image based on the viewpoint decomposition setting (704), and a computation between viewpoint images is performed based on the viewpoint scattering image extraction setting (705) to generate a viewpoint scattering image. Finally, based on the viewpoint scattering image synthesis setting (706), a viewpoint scattering synthesized image that synthesizes a plurality of viewpoint scattering images is generated.

Next, in a contour extraction processing step S1703, processing similar to that of step S803 is performed to generate a contour extracted image which represents a contour extracted from the viewpoint scattering synthesized image. This processing is not essential in a similar manner to step S803.

Next, in an image composition processing step S1704 that is a characteristic configuration of the present example, a focus position and scattering composite image that is an image obtained by compositing a viewpoint scattering synthesized image on an image at a focus position in a Z stack image (a layer image at Z=Zf) is generated. Details will be given later.

Finally, in an image display processing step S1705, the focus position and scattering composite image obtained in step S1704 is displayed in the right-side region 702 of the window 700 in a similar manner to step S804. In this example, the focus position and scattering composite image is an observation image suitable for image observation and image diagnosis. In a similar manner to step S804, since an image at the focus position is displayed in the left-side region 701 and the focus position and scattering composite image is displayed in the right-side region 702 in the window 700, observation can be performed by pair comparison. Furthermore, image analysis such as N/C ratio calculation can be executed by selecting a prescribed region in the window.

Hereinafter, internal processing of the image composition processing step S1704 will be described in detail.

FIG. 22 is a flow chart showing internal processing of the image composition processing step S1704.

First, in a focus position image reading step S2201, a layer image at an observation object position (focus position) Z=Zf in a Z direction in the Z stack image is read out from the Z stack image. Moreover, in the following description, a layer image at an observation object position Z=Zf in a Z direction in the Z stack image will be referred to as a focus position image in the sense of an image at a position that is brought into focus. Details will be given later.

Next, in an image composition setting reading step S2202, an image composition setting for determining calculation conditions of a subsequent focus position and scattering composite image generation processing step S2204 (to be described later) is read out. Details will be given later.

Next, in an image composition computation method determining step S2203, an image composition computation method that is one of calculation conditions of a subsequent step S2204 is determined. Moreover, the processing of S2203 is optional and not essential. Details will be given later.

Next, in the focus position and scattering composite image generation processing step S2204, an image composition computation is performed between the focus position image read in step S2201 and the viewpoint scattering synthesized image calculated in step S1702 to generate a focus position and scattering composite image. In this composite image generation processing, an image subjected to the contour extraction processing of step S1703 may be composited on the focus position image in place of the viewpoint scattering synthesized image. Details will be given later.

Finally, in a tone correcting step S2205, tone correction is executed in order to make the focus position image (displayed in the left-side region 701 of the window 700) and the focus position and scattering composite image (displayed in the right-side region 702 of the window 700) more easily comparable. Moreover, the processing of step S2205 is optional and not essential.

(Focus Position Image Reading Step S2201)

In step S2201, the layer image at the observation object position (Z=Zf) displayed in the left-side region 701 of the window 700 is read out from the Z stack image acquired in step S1701. Moreover, a focus position image can also be selected by other methods. For example, the user may be enabled to specify a focus position image (which may differ from the layer image at the observation object position which is displayed in the left-side region of the window 700) to be used to generate the composite image. Alternatively, a brightness variation in the Z direction of the Z stack image may be studied, a Z position (Z=Zf) with a maximum brightness variation (contrast) among a large number of pixels may be automatically selected, and a layer image at the Z position may be adopted as the focus position image. In addition to the above, various methods of selecting a focus position image can be used. Moreover, a Z position in the Z stack image of the focus position image determined in step S2201 and a Z position used to calculate viewpoint images of the viewpoint scattering synthesized image in step S1702 must be consistent with one another.
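
As a hedged illustration of the automatic selection described above, the following Python sketch picks the layer whose brightness variation is largest as the focus position image. The contrast measure (per-layer variance), the array layout, and the function name are assumptions made for the sketch, not the apparatus's actual implementation.

import numpy as np

def select_focus_layer(z_stack):
    # z_stack is assumed to be an array of shape (num_layers, height, width)
    contrasts = [layer.var() for layer in z_stack]   # brightness variation (contrast) per layer
    zf = int(np.argmax(contrasts))                   # Z index with maximum contrast
    return zf, z_stack[zf]

# Usage with random data standing in for an actual Z stack image
z_stack = np.random.rand(11, 256, 256)
zf, focus_image = select_focus_layer(z_stack)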

(Image Composition Setting Reading Step S2202)

In the image composition setting reading step S2202, an image composition setting to be used in the subsequent step S2204 is read out. The image composition setting that is read out in the present step S2202 is information inputted in advance via a setting screen of the scattering image extracting function. Hereinafter, items that are set on the setting screen of the scattering image extracting function according to the present example will be described.

(Image Composition Setting)

FIG. 23A shows a setting screen for scattering image extraction/synthesis according to the present example, and FIG. 23B shows an image composition setting screen. By enabling a check box of an overlaid display 2307 of a setting dialog 2300, the image composition setting button 2308 is enabled and an image composition setting can be configured using an image composition setting dialog box denoted by 2320. Moreover, since setting screens of a viewpoint decomposition setting 2304, a viewpoint scattering image extraction setting 2305, and a viewpoint scattering image synthesis setting 2306 are respectively the same as the settings having the same names in FIG. 7B, a description thereof will be omitted.

In the image composition setting 2320, settings for determining an image composition computation method in subsequent step S2204 and settings to be used in the tone correction in step S2205 are inputted.

For example, as a computation method (2321) in the image composition setting 2320, computation methods for image composition such as “addition”, “subtraction”, and “automatic determination” can be selected from a drop-down list.

When “automatic determination” is selected as the computation method (2321), conditions to be automatically determined are selected from a drop-down list of automatic determination (2322). “Sample observation condition”, “image information”, and the like can be selected as an automatic determination condition.

For composition intensity (2323), composition intensity of the viewpoint scattering synthesized image in the focus position and scattering composite image generation processing S2204 is numerically inputted in an edit box.

For tone correction (2324), a tone correction method to be used after compositing the viewpoint scattering synthesized image on the focus position image is selected. “No correction” for a case where no processing is performed, “maintain average value” for maintaining an average brightness value between before and after composition, and “maintain mode value” for maintaining a mode value of a histogram before and after composition can be selected from a drop-down list.

In the image composition setting reading step S2202, setting information configured in advance by the image composition setting 2320 described earlier is read out.

(Image Composition Computation Method Determination S2203)

In the image composition computation method determining step S2203, when the computation method (2321) of the image composition setting read out in step S2202 is “automatic determination”, processing for determining a computation method of image composition is performed based on the conditions set in automatic determination (2322).

When “sample observation condition” is selected in automatic determination (2322), a computation method of image composition is determined based on conditions of the image pickup apparatus at the time the Z stack image was acquired from the sample, such as “bright-field observation” and “fluorescence observation”.

Moreover, conditions under which a Z stack image is taken may be acquired by any method. For example, conditions may be described in a prescribed location in a file format for storing data of the Z stack image or may be simultaneously transmitted as separate data when the Z stack image is transmitted to the image generating apparatus 100 from a virtual slide system. In such a case, information on the “sample observation condition” is acquired from the file format of the Z stack image or the image generating apparatus 100.

Alternatively, a method using a two-dimensional barcode described in the label area 412 of a prepared slide of a pathological sample shown in FIG. 4 or an IC chip (not shown) attached to a prepared slide is also favorable. For example, information describing a sample observation condition may be stored in a barcode or an IC chip. Alternatively, a configuration is also favorable in which ID information of a prepared slide is recorded in a barcode or an IC chip and an association between the ID information and a sample observation condition is stored in a database of another computer system 140. In such a case, another computer system 140 may be accessed based on the ID information of the prepared slide to acquire the sample observation condition.

In step S2203, when the acquired “sample observation condition” is “bright-field observation”, an object with low transmittance is determined to be an observation object and “addition” is automatically selected as an image composition computation method. In addition, when the acquired “sample observation condition” is “fluorescence observation”, a self-luminous object is determined to be an observation object and, desirably, “subtraction” is automatically selected as an image composition computation method.

When “image information” is selected in automatic determination (2322), in step S2203, analysis of brightness information of the Z stack image is performed and an image composition computation method is automatically determined according to results of the analysis. While there are various methods of analyzing an image, for example, an average brightness of an entire Z stack image is calculated, and when the average brightness is higher than a prescribed threshold, “addition” is automatically selected as the image composition computation method. Conversely, when the average brightness is lower than the threshold, “subtraction” is automatically selected as the image composition computation method. As another analysis method, for example, a valley of a brightness histogram of a Z stack image is calculated using a dynamic threshold method or the like from the brightness histogram, whereby “addition” is selected when the valley is positioned in a low-brightness region and “subtraction” is selected when the valley is positioned in a high-brightness region. Moreover, an image used for analysis may be all of or a part of layer images included in a Z stack image. In addition, an entire region or only a partial region of a layer image may be used for analysis.
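
The brightness-based branch of this automatic determination might be sketched in Python as follows; the threshold value, the array layout, and the function name are illustrative assumptions, and the histogram-valley variant is omitted here.

import numpy as np

def determine_composition_method(z_stack, threshold=128):
    # Assumes 8-bit layer images; the threshold is an example value only.
    mean_brightness = float(np.mean(z_stack))
    # Bright background (bright-field-like) -> "addition";
    # dark background (fluorescence-like) -> "subtraction".
    return "addition" if mean_brightness > threshold else "subtraction"

z_stack = np.random.randint(0, 256, size=(5, 128, 128), dtype=np.uint8)
method = determine_composition_method(z_stack)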

(Focus Position Scattering Image Composited Image Generation Processing Step S2204)

In the focus position and scattering composite image generation processing step S2204, a composition computation of the focus position image and the viewpoint scattering synthesized image is executed based on settings of step S2202 (as well as the image composition computation method determined in step S2203).

As described earlier, if a focus position image is denoted by g (X, Y, Zf) and the viewpoint scattering synthesized image (a second observation image) is denoted by DS (X, Y, Zf), then the focus position and scattering composite image (a third observation image) calculated in step S2204 may be represented by the expression below.


Comp(X,Y,Zf)=g(X,Y,Zf)+α×DS(X,Y,Zf)  [Expression 35]

An absolute value of the composition coefficient α is the value set in composition intensity (2323) shown in FIG. 23B. The composition coefficient α has a “+ (positive)” sign when the image composition computation method is “addition” and has a “− (negative)” sign when the image composition computation method is “subtraction”.
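
A minimal Python sketch of the computation of Expression 35 is given below; the function name, the stand-in image data, and the way the sign of α is derived from the computation method are assumptions made for the sketch.

import numpy as np

def composite_focus_and_scattering(g, DS, intensity, method):
    # intensity corresponds to the composition intensity (2323);
    # "addition" gives alpha > 0, "subtraction" gives alpha < 0.
    alpha = intensity if method == "addition" else -intensity
    return g + alpha * DS                      # Comp(X, Y, Zf), Expression 35

g = np.random.rand(256, 256)                   # focus position image (stand-in data)
DS = np.random.rand(256, 256)                  # viewpoint scattering synthesized image (stand-in)
comp = composite_focus_and_scattering(g, DS, intensity=1.0, method="addition")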

(Tone Correcting Step S2205)

Next, in the tone correcting step S2205, a tone of the focus position and scattering composite image is corrected according to settings of tone correction (2324) in the image composition setting read out in step S2202. For example, when “maintain average value” has been set in tone correction (2324), a corrected focus position and scattering composite image Comp′ (X, Y, Zf) is obtained by the following computation.


Comp′(X,Y,Zf)=Comp(X,Y,Zf)−α×mDS,  [Expression 36]

where mDS denotes an average value of pixels in a viewpoint scattering synthesized image DS (X, Y, Zf). The composition coefficient α is the same as α in Expression 35. In other words, when the image composition computation method is “addition”, the average value mDS is uniformly subtracted from all pixels of the composite image obtained in step S2204, and when the image composition computation method is “subtraction”, the average value mDS is uniformly added to all pixels of the composite image. According to this computation, an increase in brightness (or a decrease in brightness) of the composite image Comp (X, Y, Zf) due to the addition (or subtraction) of the viewpoint scattering synthesized image DS (X, Y, Zf) is canceled. Therefore, brightness of the focus position image g (X, Y, Zf) and brightness of the corrected composite image Comp′ (X, Y, Zf) can be balanced and comparative observation can be performed more easily. Moreover, mDS may be obtained and used for correction for each RGB channel.

Moreover, in the computations of Expressions 35 and 36, pixel values smaller than 0 may be set to 0 and pixel values larger than a maximum tone (for example, 255 in the case of 8 bits) may be set to 255. Alternatively, in the vicinities of 0 and the maximum tone, pixel values may be gradually converged to 0 or the maximum tone using a non-linear tone curve.
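
The “maintain average value” correction of Expression 36 together with the simple clipping described above can be sketched as follows; 8-bit output and the function name are assumptions of the sketch.

import numpy as np

def tone_correct_maintain_average(comp, DS, alpha, max_tone=255):
    m_ds = float(np.mean(DS))                  # average value mDS of the scattering image
    corrected = comp - alpha * m_ds            # Expression 36
    return np.clip(corrected, 0, max_tone)     # pixel values kept within [0, max_tone]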

Moreover, since Expressions 35 and 36 can be integrated into one expression, the tone correction processing of step S2205 may be simultaneously executed in step S2204.

In addition, when “maintain mode value” is selected in tone correction (2324), a tone of the focus position and scattering composite image Comp (X, Y, Zf) is corrected so that a mode value (peak) of the histogram is consistent with that of the focus position image g (X, Y, Zf). At this point, a mode value of a histogram may be calculated for each RGB channel and the tone may be corrected per channel, or the tone may be corrected so that mode values of brightness histograms are conformed to one another. Even with such tone correction, the brightness or the color of the focus position image g (X, Y, Zf) and the brightness or the color of the corrected composite image Comp′ (X, Y, Zf) can be balanced and comparative observation can be performed more easily.

Moreover, when “no correction” is selected in tone correction (2324), processing is not performed in step S2205.

(Advantages of Present Example)

With the configuration according to the present example, even if an out-of-focus blur image of an object (for example, a cell nucleus) is included in a focus position image (a layer image at Z=Zf) that is an original image, the out-of-focus blur can be improved by compositing a viewpoint scattering synthesized image. Accordingly, an observation image with a deeper depth of field (with improved sharpness of objects not at the focus position) than the original image is obtained, and image diagnosis and image analysis can be performed with greater ease. While a full focused image obtained by a general focus compositing method (focus stacking) is capable of eliminating an out-of-focus blur of an object in the depth direction, there is a problem in that the stereoscopic effect declines. In contrast, with the focus position and scattering composite image according to the present example, since information on scattered light is enhanced, the stereoscopic effect does not decline as it does with a full focused image.

Example 4

With the MFI arbitrary viewpoint/out-of-focus blur image generating method (the method described in Non-Patent Literature 2, 3 and 4), an image having an arbitrary out-of-focus blur can be generated by varying a viewpoint weighting function that is used when synthesizing a plurality of viewpoint images. In doing so, by using a viewpoint weighting function (hereinafter, referred to as an “imaging optical system weighting function”) representing a relative intensity distribution corresponding to a three-dimensional out-of-focus blur of an imaging optical system, an original focus position image (layer image) can be reconstructed from a plurality of viewpoint images.

The present inventors have found that, by using a viewpoint weighting function that gives a greater weight to a viewpoint with a relatively large observation angle φ than an imaging optical system weighting function does, an image with a greater contrast of scattered light created by surface unevenness of a sample than an original layer image can be reconstructed. This is because contrast of scattered light increases in images with large observation angles φ as described with reference to Expressions 6 and 9. Taking advantage of such properties, by using an arbitrary out-of-focus blur image generated by modifying the viewpoint weighting function in place of a layer image, information on scattered light can be enhanced more than with the method according to Example 3.

In addition, with the method according to Example 3, favorably, a viewpoint scattering synthesized image obtained from as many viewpoint images as possible (of all polar angles and observation angles) is composited on a focus position image in order to cancel an out-of-focus blur of the imaging optical system that is included in the focus position image. However, the larger the number of viewpoint images (the number of viewpoints), the longer the time required to generate a focus position and scattering composite image.

In consideration thereof, in Example 4, a description will be given on a method of generating a focus position and scattering composite image having a greater contrast of scattered light using a smaller number of viewpoints by compositing a viewpoint image and a viewpoint scattering image for each viewpoint and synthesizing the composited images using a unique viewpoint weighting function.

First, a viewpoint weighting function that produces greater relative intensity at a viewpoint with a large observation angle φ than an imaging optical system weighting function k (s, t) will be described in specific terms. Moreover, as shown by Expressions 4 and 5, an observation angle φ increases in accordance with a distance (=(s2+t2)1/2) from an origin of a viewpoint. In this case, since a same weight is given to viewpoints (line-of-sight directions) with the same observation angle φ, the viewpoint weighting function has a shape that is rotationally symmetrical around the optical axis. Therefore, the viewpoint weighting function may be defined by a function k (φ) of the observation angle φ.

(Viewpoint Weighting Function that Produces Greater Relative Intensity at Viewpoint Position with Large Observation Angle φ)

FIGS. 25A and 25B show sectional views of two viewpoint weighting functions with different relative intensities with respect to the observation angle φ.

FIG. 25A shows a viewpoint weighting function represented by a Gaussian function and FIG. 25B shows a viewpoint weighting function represented by a columnar shape. rm denotes the distance from the origin (0, 0) to the outermost viewpoint on the lens plane among the viewpoints used to generate an arbitrary out-of-focus blur image.

The viewpoint weighting function shown in FIG. 25A can be represented by the following expression.

ka(s,t)=1/(2πσ2)×exp{−(s2+t2)/(2σ2)},  [Expression 37]

where σ denotes a standard deviation of a Gaussian function. A spread of blur can be controlled by σ.

In addition, the viewpoint weighting function shown in FIG. 25B can be represented by the following expression.

ka(s,t)=1/(πrm2) where (s2+t2)1/2≤rm, and ka(s,t)=0 where (s2+t2)1/2>rm  [Expression 38]

Moreover, the relationship represented by Expressions 33 and 34 holds true for ka (s, t).

As depicted in a schematic sectional view shown in FIG. 25A, with a viewpoint weighting function represented by Expression 37, relative intensity decreases as a distance (=(s2+t2)1/2) from an origin of the viewpoint (s, t) increases. On the other hand, as shown in FIG. 25B, a viewpoint weighting function represented by Expression 38 has a constant relative intensity regardless of the distance from the origin of the viewpoint (s, t). Relative intensity that is plotted on an ordinate corresponds to magnitude of weight.
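
The two viewpoint weighting functions of Expressions 37 and 38 can be evaluated on a discrete grid of viewpoints (s, t) as in the following Python sketch; σ, rm, and the grid itself are illustrative example values only.

import numpy as np

def gaussian_weight(s, t, sigma):                        # Expression 37
    return np.exp(-(s**2 + t**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def columnar_weight(s, t, r_m):                          # Expression 38
    inside = np.sqrt(s**2 + t**2) <= r_m
    return np.where(inside, 1.0 / (np.pi * r_m**2), 0.0)

s = t = np.linspace(-1.0, 1.0, 65)
S, T = np.meshgrid(s, t)
k_imaging = gaussian_weight(S, T, sigma=0.3)             # imaging-optical-system-like weighting
k_uniform = columnar_weight(S, T, r_m=1.0)               # uniform weight up to r_m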

Let us assume that an imaging optical system weighting function is represented by Expression 37 and a viewpoint weighting function of an arbitrary out-of-focus blur image calculated by the MFI arbitrary viewpoint/out-of-focus blur image generating method is represented by Expression 38. In this case, in the arbitrary out-of-focus blur image, there is a greater effect of a viewpoint image with a large observation angle φ as compared to an original focus position image and a scattered light component appears strongly.

Moreover, although the three-dimensional out-of-focus blur of the imaging optical system cannot be exactly expressed by the viewpoint weighting function of Expression 37 due to the influence of a wave-optical blur, various kinds of aberration, or the like, the viewpoint weighting function of Expression 37 provides a relatively favorable approximation.

(Overview of Calculation in Present Example)

In the present example, an arbitrary out-of-focus blur image a (X, Y, Zf) is used in place of the focus position image g (X, Y, Zf) represented by Expression 35 according to Example 3. A focus position and scattering composite image (a fourth observation image) according to the present example can be represented by the expression below.


Comp(X,Y,Zf)=a(X,Y,Zf)+α×DS(X,Y,Zf)  [Expression 39]

a (X, Y, Zf) and DS (X, Y, Zf) can be respectively represented by the following expressions.

a(X,Y,Zf)=∫∫ka(s,t)×IP0(X,Y,Zf)dsdt  [Expression 40]


DS(X,Y,Zf)=∫∫½×ka(s,t)×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|dsdt  [Expression 41]

In this case, IP0 (X, Y, Zf) and IP1 (X, Y, Zf) are respectively viewpoint images observed in line-of-sight directions of viewpoints P0 and P1 so that a position expressed as Z=Zf becomes a focusing position.

Therefore, by substituting Expressions 40 and 41 into Expression 39, the focus position and scattering composite image can be represented by the expression below.


Comp(X,Y,Zf)=∫∫ka(s,tCP0(X,Y,Zf)dsdt  [Expression 42]

However, CP0 (X, Y, Zf) is represented by the following expression.

CP0(X,Y,Zf)=IP0(X,Y,Zf)+α×½×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 43]

In other words, the processing according to the present example is equivalent to processing for generating a focus position and scattering composite image (a fourth observation image) Comp (X, Y, Zf) by compositing a viewpoint image (a first viewpoint image) and a viewpoint scattering image (a first observation image) for each viewpoint to generate a focus position and scattering composite image (composite image) CP0 (X, Y, Zf) of each viewpoint, multiplying the focus position and scattering composite images (composite images) CP0 (X, Y, Zf) by a viewpoint weighting function ka (s, t), and synthesizing the results thereof. In other words, the observation image generation processing according to the present example is equivalent to processing in which composition of a viewpoint scattering image on a viewpoint image is performed a plurality of times by changing a combination of the two viewpoint images (used to generate the viewpoint scattering image) and the plurality of obtained composite images are synthesized.

An overall processing flow is approximately the same as in Example 1 (FIG. 8). However, in the present example, focus position and scattering composite image generation processing shown in FIG. 24 is executed in place of step S802.

(Focus Position Scattering Image Composited Image Generation Processing)

FIG. 24 is a flow chart showing focus position and scattering composite image generation processing according to the present example. Processing of the respective steps in the flow chart shown in FIG. 24 is approximately the same as the processing shown in FIG. 9 and described in Example 1 with the exception of per-viewpoint focus position and scattering composite image generation processing S2403.

First, in a viewpoint acquisition processing step S2401, positional information of a viewpoint necessary for generating a viewpoint image is acquired in a similar manner to step S901.

Next, in a viewpoint image generating step S2402, a plurality of viewpoint images respectively corresponding to the plurality of viewpoints acquired in step S2401 are generated in a similar manner to step S902.

Next, in a per-viewpoint focus position and scattering composite image generation processing step S2403, based on settings in the viewpoint scattering image extraction setting (705), a corresponding viewpoint scattering image is composited for each viewpoint image generated in step S2402. As a result, a focus position and scattering composite image of each viewpoint is obtained. A focus position and scattering composite image CP0 (X, Y, Zf) of a viewpoint P0 can be calculated using Expression 43 described earlier.

IP0 (X, Y, Zf) and IP1 (X, Y, Zf) in Expression 43 are respectively viewpoint images at the viewpoints P0 and P1. ½×|IP1 (X, Y, Zf)−IP0 (X, Y, Zf)| represents a viewpoint scattering image at the viewpoint P0. A sign and an absolute value of the composition coefficient α are determined based on the computation method (2321) and the composition intensity (2323) in the image composition setting 2320 in FIG. 23B in a similar manner to Example 3. (Although omitted in FIG. 24, the same processing as in steps S2202, S2203, and S2204 in FIG. 22 may be performed in step S2403).

After completing the calculation of step S2403 on all viewpoints set in step S2401, processing proceeds to a focus position and scattering composite image generating step S2404. In the focus position and scattering composite image generating step S2404, respective focus position and scattering composite images of a plurality of viewpoints are synthesized to generate a focus position and scattering composite image. Step S2404 differs from step S904 according to Example 1 in that respective focus position and scattering composite images of a plurality of viewpoints are synthesized in step S2404 while viewpoint scattering extracted images are synthesized in step S904. The processing of step S2404 may be represented by Expression 42 described earlier.

Moreover, the processing of steps S2403 and S2404 may be realized in combination as described below. First, before entering a viewpoint loop, an image buffer for storing focus position and scattering composite images is created and initialized to 0. Next, in step S2403, a product of a focus position and scattering composite image CP0 (X, Y, Zf) and a corresponding viewpoint weighting function ka (s, t) is calculated for each viewpoint, and the obtained pixel values are accumulated in the image buffer. Upon completion of processing on all viewpoints, the focus position and scattering composite image is stored in the buffer. This method is suitable for reducing the memory used in software processing.
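
The buffer-based combination of steps S2403 and S2404 described above might be sketched in Python as follows; the dictionary keyed by viewpoint coordinates, the weight table, and the function name are hypothetical stand-ins, not part of the described apparatus.

import numpy as np

def accumulate_composite(viewpoint_images, weights, alpha, shape):
    buffer = np.zeros(shape)                               # image buffer initialized to 0
    for (s, t), I_p0 in viewpoint_images.items():
        I_p1 = viewpoint_images[(-s, -t)]                  # paired viewpoint (polar angle + 180 degrees)
        c_p0 = I_p0 + alpha * 0.5 * np.abs(I_p1 - I_p0)    # Expression 43
        buffer += weights[(s, t)] * c_p0                   # accumulate ka(s, t) x CP0
    return buffer                                          # discrete form of Expression 42

# Usage with two stand-in viewpoints
views = {(0.4, 0.1): np.random.rand(64, 64), (-0.4, -0.1): np.random.rand(64, 64)}
w = {(0.4, 0.1): 0.5, (-0.4, -0.1): 0.5}
comp = accumulate_composite(views, w, alpha=1.0, shape=(64, 64))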

ka (s, t) corresponds to a setting value of weighting on the scattering image synthesis setting screen 1003 shown in FIG. 10C. Although not shown, weighting corresponding to various types of out-of-focus blur can be selected on the setting screen 1003. For example, an imaging optical system weighting function, a viewpoint weighting function (Expression 37 with σ modified) corresponding to a three-dimensional out-of-focus blur having a greater out-of-focus blur than an imaging optical system, or a viewpoint weighting function (Expression 38) with uniform weight regardless of viewpoint can be selected. In this case, by selecting a viewpoint weighting function that gives a greater weight to a portion with a large observation angle φ than an imaging optical system weighting function, information on scattered light created by surface unevenness of a sample of a focus position and scattering composite image can be further enhanced.

Due to the processing described above, a focus position and scattering composite image according to the present example can be generated.

Moreover, in the present example, a focus position and scattering composite image of each viewpoint is generated in step S2403 and the focus position and scattering composite images are synthesized in step S2404. However, a viewpoint scattering image of each viewpoint may be generated in step S2403 and a viewpoint image and the viewpoint scattering image of each viewpoint may be composited in S2404 to generate a focus position and scattering composite image.

(Advantages of Present Example)

In the present example, since a focus position and scattering composite image can be generated using a smaller number of viewpoints than in the method according to Example 3, high-speed processing can be realized. In addition, by appropriately designing (selecting) a viewpoint weighting function, a degree of intensity of scattered light in the focus position and scattering composite image can be controlled. In particular, by using a viewpoint weighting function that gives a greater weight to a viewpoint position with a large observation angle φ than an imaging optical system weighting function, contrast of scattered light can be increased in comparison to the method according to Example 3 and surface unevenness of a sample can be made more easily observable. Furthermore, in a similar manner to Example 3, an observation image with improved out-of-focus blur (with improved sharpness of objects outside of the focus position) of the imaging optical system included in an original image can be obtained and image diagnosis and image analysis can be performed with greater ease. In addition, the focus position and scattering composite image according to the present example has both a deep depth of field and a stereoscopic effect created by the enhancement of information on scattered light in a similar manner to Example 3.

Example 5

As described at the beginning of Example 3, when an object exists at a position separated from a position in focus in a Z stack image, an image of an out-of-focus blur of the object is included in a viewpoint scattering synthesized image.

In the present example, by taking advantage of features of a focus position and scattering composite image described in Example 4, a method of suppressing the effect of an out-of-focus blur of an object at a position separated from a position in focus included in a viewpoint scattering synthesized image will be described.

In Example 4, it has been described that using a viewpoint weighting function ka (s, t) with properties that differ from a three-dimensional out-of-focus blur of an imaging optical system increases a depth of field of a focus position and scattering composite image and further intensifies information on scattered light.

An analysis of features of a focus position and scattering composite image obtained in Example 4 reveals that an interesting phenomenon takes place. When comparing two focus position and scattering composite images generated by setting α=1 and varying the viewpoint weighting function ka (s, t) in Expression 43, an out-of-focus blur is canceled and an extremely deep depth of field is produced in both focus position and scattering composite images despite their having different intensities of scattered light.

In other words, by calculating a difference between two focus position and scattering composite images generated by varying the viewpoint weighting function ka (s, t), the effect of an out-of-focus blur included in a viewpoint scattering synthesized image can be reduced and information on scattered light created by surface unevenness of a sample can be accurately extracted. In addition, as will be described later, information on scattered light can also be extracted by calculating a ratio between two focus position and scattering composite images.

(Overview of Calculation in Present Example)

Hereinafter, a description will be given with reference to mathematical expressions.

Focus position scattering image composited images obtained using different viewpoint weighting functions ka1 (s, t) and ka2 (s, t) according to Example 4 may be respectively represented by the following expressions.


Compa1(X,Y,Zf)=∫∫ka1(s,tCP0(X,Y,Zf)dsdt


Compa2(X,Y,Zf)=∫∫ka2(s,tCP0(X,Y,Zf)dsdt  [Expression 44]

Moreover, since a case of α=1 in Expression 43 is considered in the present example, CP0 (X, Y, Zf) may be expressed as follows. Expression 45 represents processing of compositing a viewpoint scattering image (a first observation image) on a viewpoint image (a first viewpoint image) IP0 (X, Y, Zf) at a viewpoint P0 to generate a focus position and scattering composite image (composite image) CP0 (X, Y, Zf) of each viewpoint.

CP0(X,Y,Zf)=IP0(X,Y,Zf)+½×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 45]

It is assumed that a viewpoint P1 has the same observation angle as, but a different polar angle from, the viewpoint P0. While the difference in polar angles may be set to any value, as already described, a difference of 180 degrees produces the maximum effect. Let us assume that, in Expression 44, the sum of the first weighting function ka1 (s, t) over viewpoints whose observation angle is equal to or greater than a prescribed value is greater than the corresponding sum of the second weighting function ka2 (s, t). This relationship may be represented by a mathematical expression as follows.


∫∫ka1(s,t)×outr(s,t)dsdt>∫∫ka2(s,t)×outr(s,t)dsdt,  [Expression 46]

where outr (s, t) is represented by the following expression and rth takes a prescribed value that is equal to or smaller than rm.

outr(s,t)=0 where (s2+t2)1/2<rth, and outr(s,t)=1 where (s2+t2)1/2≥rth  [Expression 47]

Expression 46 holds true when ka1 (s, t) and ka2 (s, t) are respectively represented by Expressions 38 and 37 and have the relative intensities shown in FIGS. 25B and 25A, and rth is set to rm/2.

As shown in the following expression, a viewpoint scattering synthesized image (a fifth observation image) DS (X, Y, Zf) according to the present example is assumed to be a difference between a first image Compa1 (X, Y, Zf) generated using the first weight ka1 (s, t) and a second image Compa2 (X, Y, Zf) generated using the second weight ka2 (s, t).


DS(X,Y,Zf)=Compa1(X,Y,Zf)−Compa2(X,Y,Zf)  [Expression 48]

By substituting Expressions 44 and 45 into Expression 48 and modifying Expression 48, the following expression is obtained.


DS(X,Y,Zf)=∫∫kex(s,tCP0(X,Y,Zf)dsdt  [Expression 49]

kex (s, t) is a viewpoint weighting function for extracting scattered light information and is represented by the expression below.


kex(s,t)=ka1(s,t)−ka2(s,t)  [Expression 50]

Given the condition that respective integrations of ka1 (s, t) and ka2 (s, t) are 1, an integration (in other words, a total weight with respect to all integrated line-of-sight directions) of kex (s, t) is 0.


∫∫kex(s,t)dsdt=0  [Expression 51]

In addition, by modifying Expression 46, kex (s, t) satisfies the following condition.


∫∫kex(s,t)×outr(s,t)dsdt>0  [Expression 52]

Now, in Expression 49, let us assume that CP0 (X, Y, Zf) calculated in Expression 45 is the viewpoint scattering image SP0 (X, Y, Zf) according to Example 1 and that kex (s, t) is the viewpoint weighting function selected in the viewpoint scattering synthesized image generating step S904. Accordingly, it is shown that a highly accurate viewpoint scattering synthesized image in which the effect of an out-of-focus blur has been suppressed can be generated by the same configuration as Example 1.

In addition, Expression 49 shows that kex (s, t) can be freely designed as long as conditions represented by Expressions 51 and 52 are satisfied.

FIGS. 26A and 26B show examples of the viewpoint weighting function kex (s, t) for extracting scattered light information. FIG. 26A shows a viewpoint weighting function for extracting scattered light information obtained by subtracting the viewpoint weighting function shown in FIG. 25A from the viewpoint weighting function shown in FIG. 25B. In addition, FIG. 26B shows a viewpoint weighting function for extracting scattered light information designed so as to have 0, negative values, and positive values depending on a distance from an origin of a viewpoint. Both satisfy the condition of Expression 52. The greater the relative intensity of a viewpoint at a position separated from the origin, the more intensely information on scattered light created by surface unevenness of the sample can be extracted.
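
The following Python sketch builds kex (s, t) of Expression 50 from a columnar weight (Expression 38) and a Gaussian weight (Expression 37) on a discrete grid and checks the conditions of Expressions 51 and 52; the grid spacing and parameter values are illustrative assumptions only.

import numpy as np

s = t = np.linspace(-1.0, 1.0, 129)
S, T = np.meshgrid(s, t)
ds_dt = (s[1] - s[0]) ** 2

r_m, sigma = 1.0, 0.3
k_a1 = np.where(np.sqrt(S**2 + T**2) <= r_m, 1.0 / (np.pi * r_m**2), 0.0)   # Expression 38
k_a2 = np.exp(-(S**2 + T**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)     # Expression 37
k_a1 /= k_a1.sum() * ds_dt            # enforce Expression 34 on the discrete grid
k_a2 /= k_a2.sum() * ds_dt
k_ex = k_a1 - k_a2                    # Expression 50

outer = np.sqrt(S**2 + T**2) >= r_m / 2            # outr(s, t) with rth = rm/2 (Expression 47)
print(np.isclose(k_ex.sum() * ds_dt, 0.0))         # Expression 51: total weight is 0
print(k_ex[outer].sum() * ds_dt > 0.0)             # Expression 52: positive weight outside rth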

Moreover, adopting kex (0, 0)=0 further increases the effect created by kex (s, t). This is because, as described with reference to Expression 30, while an out-of-focus blur of a focus position and scattering composite image in which α=1 is canceled, intensity h (0, 0, dZ) at a center portion of a three-dimensional out-of-focus blur is affected by a value of a viewpoint weighting function at an origin.

As described above, a highly accurate viewpoint scattering synthesized image in which the effect of an out-of-focus blur has been suppressed according to the present example can be applied to the configuration of Example 1.

Hereinafter, a case where the computations described by Expressions 45 and 49 are applied to the configuration of Example 1 will be described. Processing will be described using the flow chart of the viewpoint-decomposed scattering image extraction/synthesis processing step S802 shown in FIG. 9.

First, in the viewpoint acquisition processing step S901, positional information of a viewpoint necessary for generating a viewpoint image is acquired via the viewpoint decomposition setting (1001). Since a viewpoint position of a viewpoint P1 is required in addition to that of a viewpoint P0 in the subsequent step S902, the position of the viewpoint P1 is also calculated based on the position of the viewpoint P0.

Next, in the viewpoint image generating step S902, viewpoint images of all viewpoints P0 acquired in step S901 are calculated. In addition, viewpoint images are also calculated for the viewpoints P1 corresponding to the respective viewpoints P0. Details will be omitted.

Next, in the viewpoint scattering image extraction processing step S903, assuming that CP0 (X, Y, Zf) in Expression 45 is a viewpoint scattering image SP0 (X, Y, Zf) of each viewpoint P0, the following calculation is performed.

SP0(X,Y,Zf)=IP0(X,Y,Zf)+½×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 53]

After completing the processing of step S903 on all viewpoints P0 obtained in step S901, processing proceeds to the viewpoint scattering synthesized image generating step S904.

In the viewpoint scattering synthesized image generating step S904, a prescribed kex (s, t) stored in the main memory 302 or the storage device 130 in advance is acquired, viewpoint scattering images SP0 (X, Y, Zf) of all viewpoints P0 are synthesized, and a viewpoint scattering synthesized image DS (X, Y, Zf) in which the effect of an out-of-focus blur has been suppressed is generated. In other words, the following computation is executed.


DS(X,Y,Zf)=∫∫kex(s,tSP0(X,Y,Zf)dsdt  [Expression 54]

Moreover, kex (s, t) may be acquired or generated based on settings (not shown) in the scattering image synthesis setting (1003). In addition, brightness variation or tone correction can also be performed in step S904 in order to make the obtained viewpoint scattering synthesized image DS (X, Y, Zf) more visible. Although not shown, these settings may be configured in the scattering image synthesis setting (1003).
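
Applying Expressions 53 and 54 over a discrete set of viewpoints might look like the following sketch; the dictionary keyed by viewpoint coordinates, the weighting table k_ex, and the function name are illustrative assumptions.

import numpy as np

def scattering_synthesized_image(viewpoint_images, k_ex, ds_dt):
    DS = None
    for (s, t), I_p0 in viewpoint_images.items():
        I_p1 = viewpoint_images[(-s, -t)]               # paired viewpoint P1 (polar angle + 180 degrees)
        S_p0 = I_p0 + 0.5 * np.abs(I_p1 - I_p0)         # viewpoint scattering image (Expression 53)
        term = k_ex[(s, t)] * S_p0 * ds_dt              # discrete integrand of Expression 54
        DS = term if DS is None else DS + term
    return DS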

Furthermore, for an application to Example 1, a configuration based on division such as that described below can also be adopted. In this case, in the viewpoint scattering image extraction processing step S903, a viewpoint scattering image SP0 (X, Y, Zf) is calculated for each viewpoint P0 using Expression 53.

Subsequently, in the viewpoint scattering synthesized image generating step S904, a viewpoint scattering synthesized image DS (X, Y, Zf) in which the effect of an out-of-focus blur has been suppressed is generated by the following computation.

DS(X,Y,Zf)=Compa1(X,Y,Zf)/Compa2(X,Y,Zf)=∫∫ka1(s,t)×SP0(X,Y,Zf)dsdt/∫∫ka2(s,t)×SP0(X,Y,Zf)dsdt  [Expression 55]

In other words, the viewpoint scattering image SP0 (X, Y, Zf) is integrated for each of two viewpoint weighting functions ka1 (s, t) and ka2 (s, t) and a division of the integrations is performed.

Moreover, as the two viewpoint weighting functions ka1 (s, t) and ka2 (s, t), prescribed settings stored in advance in the main memory 302 or the like may be used or the viewpoint weighting functions may be acquired or generated based on settings (not shown) according to the scattering image synthesis setting (1003).

Furthermore, the configuration of the present example can also be applied to Examples 3 and 4. In other words, by compositing the viewpoint scattering synthesized image DS (X, Y, Zf) according to the present example on a layer image (focus position image) at Z=Zf of a Z stack image or an arbitrary out-of-focus blur image, an image with an enhanced scattered light component can be generated.

In addition, by compositing the viewpoint scattering synthesized image generated in the present example with various images and increasing the scattered light created by surface unevenness, a greater stereoscopic effect can be produced. For example, by adding the viewpoint scattering synthesized image calculated in the present example to an image in which scattered light created by surface unevenness is weak and which gives a flat impression such as a full focused image, an image satisfying both a deep depth of field and a stereoscopic effect due to scattered light can be generated.

(Advantages of Present Example)

With the method according to the present example, by further performing an image composition computation based on a computation (subtraction, division, or the like) or a modification thereof between two focus position and scattering composite images, a component of an out-of-focus blur included in the viewpoint scattering synthesized image can be eliminated or reduced. Accordingly, an observation image can be obtained in which a scattered light component is further enhanced or extracted as compared to the viewpoint scattering synthesized images obtained in Examples 1 and 2. By using such an image, for example, the demands of users desiring to perform image diagnosis or image analysis with a focus on surface unevenness of a sample can be satisfied.

Example 6

In the present example, a method of suppressing the effect of an out-of-focus blur of an object at a position separated from a position in focus included in a viewpoint scattering synthesized image will be described in a similar manner to Example 5.

Hereinafter, a description will be given on a reason for the suppression of the effect of an out-of-focus blur of an object at a position separated from a position in focus included in a viewpoint scattering synthesized image by executing computation between two viewpoint scattering synthesized images.

As already described with reference to Expressions 10 and 11, in Examples 1 and 2, taking advantage of a characteristic that intensity of scattered light varies depending on a difference in viewpoint positions, information on surface unevenness of a sample is extracted by performing computation between viewpoint images with different viewpoint positions.

On the other hand, as described at the beginning of Example 3, when an object exists at a position separated from a position in focus in a Z stack image, an image of an out-of-focus blur of the object is included in a viewpoint scattering synthesized image.

Three viewpoints P0, P1, and P2 with different viewpoint positions will now be considered.

Let us assume that the viewpoints P0, P1, and P2 have equal observation angles φ but respectively different polar angles θ0, θ1, and θ2 and that differences in polar angles among the viewpoints P0, P1, and P2 are determined in advance. In other words, when varying a position of the viewpoint P0, positions of the viewpoints P1 and P2 vary while relative relationships expressed as θ1=θ0+t1 and θ2=θ0+t2 are maintained by the respective polar angles. (Moreover, when angles of t1 and t2 are expressed between −180 and 180 degrees, |t1|>|t2| holds true. In other words, θ1−θ0 represents a greater difference between polar angles than θ2−θ0).

Although values of t1 and t2 can be arbitrarily set, in the present example, it is assumed that t1=180 degrees and t2=90 degrees. In this case, if the viewpoint position of the viewpoint P0 is denoted by (s, t), then the viewpoint position of the viewpoint P1 is denoted by (−s, −t) and the viewpoint position of the viewpoint P2 is denoted by (−t, s). A difference in polar angles between the viewpoint P0 and the viewpoint P1 is 180 degrees and a difference in polar angles between the viewpoint P0 and the viewpoint P2 is 90 degrees.
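
The viewpoint positions used above follow from a standard rotation of the polar angle, as the following Python sketch illustrates; the function name and the example coordinates are assumptions, and the rotation itself is ordinary 2D geometry rather than anything specific to the described apparatus.

import math

def rotate_viewpoint(s, t, degrees):
    rad = math.radians(degrees)
    return (s * math.cos(rad) - t * math.sin(rad),
            s * math.sin(rad) + t * math.cos(rad))

p0 = (0.5, 0.2)
p1 = rotate_viewpoint(*p0, 180)   # (-s, -t), polar angle difference of 180 degrees
p2 = rotate_viewpoint(*p0, 90)    # (-t, s) up to floating-point rounding, difference of 90 degrees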

Calculating a viewpoint scattering image at the viewpoint P0 using Expression 13 described in Example 1 results in two viewpoint scattering images represented by the expressions below.


Sa(X,Y,Zf)=|IP1(X,Y,Zf)−IP0(X,Y,Zf)|


Sb(X,Y,Zf)=|IP2(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 56]

As described with reference to Expression 10, the greater (the closer to 180 degrees) the difference in polar angles between two viewpoints, the greater the intensity of a scattered light component that can be extracted by subtraction or division of a viewpoint image. In the case of Expression 56, since θ1−θ0 is 180 degrees and θ2−θ0 is 90 degrees, intensity of a scattered light component included in the image is greater in a viewpoint scattering image Sa (X, Y, Zf) than in a viewpoint scattering image Sb (X, Y, Zf).

The viewpoint scattering images Sa (X, Y, Zf) and Sb (X, Y, Zf) are calculated for various viewpoints while varying positions of the viewpoints P0, P1, and P2, and the calculated viewpoint scattering images Sa (X, Y, Zf) and Sb (X, Y, Zf) are synthesized to generate viewpoint scattering synthesized images DSa (X, Y, Zf) and DSb (X, Y, Zf). Naturally, the intensity of a scattered light component included in the image is greater in the viewpoint scattering synthesized image DSa (X, Y, Zf) than in the viewpoint scattering synthesized image DSb (X, Y, Zf).

Next, the effect of an out-of-focus blur of an object at a position separated from a position in focus in the viewpoint scattering synthesized images DSa (X, Y, Zf) and DSb (X, Y, Zf) will be considered. As described in Example 3, Expression 26 for calculating a viewpoint scattering synthesized image can be modified to Expression 28. Expression 28 shows that the effect of an out-of-focus blur of an imaging optical system in a viewpoint scattering synthesized image is not dependent on a difference in polar angles between two viewpoints that are used for calculating a viewpoint scattering image. In other words, a component of an out-of-focus blur of an object separated from a position in focus hardly varies between the viewpoint scattering synthesized images DSa (X, Y, Zf) and DSb (X, Y, Zf).

As shown, the viewpoint scattering synthesized images DSa (X, Y, Zf) and DSb (X, Y, Zf) have properties such that, while the intensity of a scattered light component created by unevenness on a sample surface is different, the intensity of an out-of-focus blur component due to an imaging optical system is approximately the same. Therefore, by performing the computation represented by Expression 57 or 58 below or, in other words, by calculating a difference or a ratio between a first synthesized image DSa (X, Y, Zf) and a second synthesized image DSb (X, Y, Zf), an image DS (X, Y, Zf) in which an out-of-focus blur has been reduced and information on scattered light created by surface unevenness has been extracted or enhanced can be generated.


DS(X,Y,Zf)=DSa(X,Y,Zf)−DSb(X,Y,Zf)  [Expression 57]


DS(X,Y,Zf)=DSa(X,Y,Zf)/DSb(X,Y,Zf)  [Expression 58]

Moreover, since θ1−θ0 represents a greater difference in polar angles between viewpoints than θ2−θ0, intensity of scattered light satisfies DSa (X, Y, Zf)>DSb (X, Y, Zf). Therefore, a pixel value that has fallen below 0 in the computation of Expression 57 may be set to 0. Accordingly, a noise component unrelated to information on scattered light can be suppressed.
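
A minimal Python sketch of Expressions 57 and 58 together with the clipping of negative values described above is given below; DSa and DSb stand in for the two viewpoint scattering synthesized images, and eps is a hypothetical small constant added only to avoid division by zero in the ratio form.

import numpy as np

def suppress_defocus_blur(DSa, DSb, mode="difference", eps=1e-6):
    if mode == "difference":
        return np.clip(DSa - DSb, 0, None)     # Expression 57, negative pixel values set to 0
    return DSa / (DSb + eps)                   # Expression 58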

Hereinafter, a case where the computations described by Expressions 56 to 58 are applied to the configuration of Example 1 will be described.

Processing will be described using the flow chart of the viewpoint-decomposed scattering image extraction/synthesis processing step S802 shown in FIG. 9.

First, in the viewpoint acquisition processing step S901, positional information of a viewpoint necessary for generating a viewpoint image is acquired via the viewpoint decomposition setting (1001). Since viewpoint positions of viewpoints P1 and P2 are required in addition to that of a viewpoint P0 in the subsequent step S902, the positions of the viewpoints P1 and P2 are also calculated based on the position of the viewpoint P0. t1 denoting a difference in polar angles between the viewpoints P0 and P1 and t2 denoting a difference in polar angles between the viewpoints P0 and P2 may be fixed values (values determined in advance) or can be specified by the user.

Next, in the viewpoint image generating step S902, viewpoint images of all viewpoints P0 acquired in step S901 are calculated. In addition, viewpoint images are also calculated for the viewpoints P1 and P2 corresponding to the respective viewpoints P0. Details will be omitted.

Next, in the viewpoint scattering image extraction processing step S903, two viewpoint scattering images Sa (X, Y, Zf) and Sb (X, Y, Zf) are calculated for each viewpoint P0 using Expression 56. Operational expressions other than Expression 56 can also be configured via the scattering image extraction setting (1002). After completing the processing of step S903 on all viewpoints P0 obtained in step S901, processing proceeds to the viewpoint scattering synthesized image generating step S904.

In the viewpoint scattering synthesized image generating step S904, based on the settings in the scattering image synthesis setting (1003), the viewpoint scattering images Sa (X, Y, Zf) of all viewpoints P0 are synthesized to generate a viewpoint scattering synthesized image DSa (X, Y, Zf). In addition, the viewpoint scattering images Sb (X, Y, Zf) of all viewpoints P0 are synthesized to generate a viewpoint scattering synthesized image DSb (X, Y, Zf). Subsequently, the computation represented by Expression 57 or 58 is executed to generate a new viewpoint scattering synthesized image DS (X, Y, Zf) in which the effect of an out-of-focus blur of an object separated from a position in focus has been suppressed.

Moreover, brightness adjustment or tone correction can also be performed in step S904 in order to make the obtained viewpoint scattering synthesized image DS (X, Y, Zf) more visible. Although not shown, these settings may be configured via the scattering image synthesis setting (1003).
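One possible form of such a correction is sketched below as a simple min-max normalization followed by a gamma curve; these concrete operations are assumptions made for illustration, and other brightness or tone adjustments may be used instead.

```python
import numpy as np

def tone_correct(DS, gamma=0.8):
    """Normalize DS (X, Y, Zf) to [0, 1] and apply a gamma curve to make faint scattered light visible."""
    lo, hi = float(DS.min()), float(DS.max())
    if hi <= lo:
        return np.zeros_like(DS)
    norm = (DS - lo) / (hi - lo)
    return norm ** gamma
```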

In addition, for an application to Example 1, processing equivalent to the computation represented by Expression 57 can be realized by using the following expression in place of Expression 13 in the viewpoint scattering image extraction processing step S903. Expression 59 represents processing for calculating a difference between a viewpoint scattering image (a first intermediate image) obtained from a first viewpoint image IP0 (X, Y, Zf) and a second viewpoint image IP1 (X, Y, Zf) and a viewpoint scattering image (a second intermediate image) obtained from the first viewpoint image IP0 (X, Y, Zf) and a third viewpoint image IP2 (X, Y, Zf).


S(X,Y,Zf)=|IP1(X,Y,Zf)−IP0(X,Y,Zf)|−|IP2(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 59]

In order to realize processing equivalent to the computation represented by Expression 58, a difference (subtraction) between the two viewpoint scattering images on the right side of Expression 59 may be replaced with a ratio (division) between the two viewpoint scattering images. In addition, while a viewpoint scattering image is calculated based on a difference (subtraction) between two viewpoint images in Expression 59, a viewpoint scattering image may also be calculated based on a ratio (division) between two viewpoint images. Furthermore, an image S (X, Y, Zf) obtained by the computation represented by Expression 59 may be used as an observation image (a sixth observation image).
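For illustration, Expression 59 and its division-based variants can be sketched as a single per-viewpoint computation as follows; the use_ratio switch and the eps guard are assumptions introduced only for this sketch.

```python
import numpy as np

def scattering_image_per_viewpoint(I_p0, I_p1, I_p2, use_ratio=False, eps=1e-6):
    """Expression 59 (use_ratio=False) or a division-based variant (use_ratio=True)."""
    first = np.abs(I_p1 - I_p0)    # first intermediate image
    second = np.abs(I_p2 - I_p0)   # second intermediate image
    if use_ratio:
        return first / (second + eps)
    return first - second
```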

Moreover, the configuration of the present example can also be applied to Examples 3 and 4. In other words, by compositing the viewpoint scattering synthesized image DS (X, Y, Zf) according to the present example on a layer image (focus position image) at Z=Zf of a Z stack image or an arbitrary out-of-focus blur image, an image with an enhanced scattered light component can be generated. In addition, in a similar manner to Example 5, by compositing the viewpoint scattering synthesized image generated in the present example with various images such as a full focused image and increasing the scattered light created by surface unevenness, a stereoscopic effect can be produced.

(Advantages of Present Example)

With the method according to the present example, by further performing a computation (subtraction, division, or the like) between two viewpoint scattering synthesized images, a component of an out-of-focus blur included in the viewpoint scattering synthesized image can be eliminated or reduced. Accordingly, a similar effect to Example 5 can be produced.

Example 7

In Examples 3 and 4, methods of generating a focus position and scattering composite image have been described. The present example describes a method of increasing the flexibility of out-of-focus blur reduction and of enhancement of information on scattered light in a focus position and scattering composite image, by further compositing the viewpoint scattering synthesized image described in Example 5 or 6, in which an out-of-focus blur component has been suppressed, onto the images according to Examples 3 and 4.

As described earlier, the focus position and scattering composite images generated in Examples 3 and 4 are respectively represented by Expressions 35 and 39.

In the present example, the viewpoint scattering synthesized image calculated in Example 5 or 6, in which out-of-focus blur has been suppressed, is multiplied by a composition coefficient β and added to the right sides of Expressions 36 and 39 to calculate focus position and scattering composite images. This operation is represented by the expressions below.


Comp(X,Y,Zf)=g(X,Y,Zf)+α×DS(X,Y,Zf)+β×DSX(X,Y,Zf)  [Expression 60]


Comp(X,Y,Zf)=a(X,Y,Zf)+α×DS(X,Y,Zf)+β×DSX(X,Y,Zf)  [Expression 61]

In the expressions above, it is assumed that DS (X, Y, Zf) is a viewpoint scattering synthesized image calculated using the method according to Example 1 and DSX (X, Y, Zf) is a viewpoint scattering synthesized image calculated using the method according to Example 5 or 6. Moreover, g (X, Y, Zf) denotes a focus position image and a (X, Y, Zf) denotes an arbitrary out-of-focus blur image at Z=Zf.
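Under the same naming, Expressions 60 and 61 amount to a weighted sum of a base image and the two scattering images, as in the sketch below, in which base stands for g (X, Y, Zf) or a (X, Y, Zf); the function name is an assumption.

```python
def compose_focus_and_scattering(base, DS, DSX, alpha, beta):
    """Expressions 60/61: base is g(X,Y,Zf) (Expression 60) or a(X,Y,Zf) (Expression 61)."""
    return base + alpha * DS + beta * DSX
```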

Hereinafter, in accordance with the configurations of Examples 3 and 4, a method of generating a focus position and scattering composite image according to the present example will be described.

(Application to Configuration of Example 3)

In the case of an application to the configuration of Example 3, the computation represented by Expression 60 is executed in the focus position and scattering composite image generation processing step S2204 within the image composition processing step S1704.

Moreover, a composition coefficient β necessary for the computation represented by Expression 60 and values necessary for calculating DSX (X, Y, Zf) (for example, a viewpoint weighting function kex (s, t) for extracting scattered light information) can be configured via setting items (not shown) in the image composition setting 2320.

(Application to Configuration of Example 4)

In the case of an application to the configuration of Example 4, for example, when a focus position and scattering composite image CP0 (X, Y, Zf) of each viewpoint is represented by Expression 43 and a viewpoint scattering image SP0 (X, Y, Zf) in which out-of-focus blur has been reduced is represented by Expression 53, a focus position and scattering composite image Comp (X, Y, Zf) can be modified as follows.


Comp(X,Y,Zf)=∫∫{α×ka(s,t)+β×kex(s,t)}×CP0(X,Y,Zf)dsdt  [Expression 62]

Here, CP0 (X, Y, Zf) is represented by the following expression.

CP0(X,Y,Zf)=IP0(X,Y,Zf)+(1/2)×|IP1(X,Y,Zf)−IP0(X,Y,Zf)|  [Expression 63]

Therefore, in the present example, the computation represented by Expression 63 is executed in the per-viewpoint step S2403 within the focus position and scattering composite image generation processing step S2204 in order to calculate the focus position and scattering composite image CP0 (X, Y, Zf) of each viewpoint.

After completing the calculation of step S2403 on all viewpoints set in step S2401, processing proceeds to the focus position and scattering composite image generating step S2404 to execute the computation represented by Expression 62.
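A sketch of the per-viewpoint computation of Expression 63 followed by the weighted synthesis of Expression 62 is given below, with the double integral approximated by a discrete sum over sampled viewpoints; the signatures of the weighting functions ka and kex and the sampling intervals ds and dt are assumptions made only for this sketch.

```python
import numpy as np

def composite_per_viewpoint(I_p0, I_p1):
    """Expression 63: focus position and scattering composite image C_P0 of one viewpoint."""
    return I_p0 + 0.5 * np.abs(I_p1 - I_p0)

def compose_over_viewpoints(viewpoints, images_p0, images_p1, ka, kex, alpha, beta, ds, dt):
    """Expression 62: weighted synthesis over all sampled viewpoints (s, t)."""
    comp = None
    for (s, t), I_p0, I_p1 in zip(viewpoints, images_p0, images_p1):
        w = alpha * ka(s, t) + beta * kex(s, t)
        term = w * composite_per_viewpoint(I_p0, I_p1) * ds * dt
        comp = term if comp is None else comp + term
    return comp
```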

Moreover, the processing flow described above is merely an example, and the computations described in Examples 5 and 6 can also be applied.

In the configuration described above, the composition coefficient α can be used as a parameter for canceling an out-of-focus blur and the composition coefficient β can be used as a parameter for enhancing information on scattered light created by surface unevenness.

For example, in a case where “bright-field observation” is set as the “sample observation condition” in Example 3 or 4, excessive edge enhancement occurs when increasing the composition coefficient α in order to enhance information on scattered light created by surface unevenness. However, with the method according to the present example, by setting α in a vicinity of 1 and increasing β, both cancelation of an out-of-focus blur and enhancement of information on scattered light created by surface unevenness can be achieved.

In addition, when “fluorescence observation” is set as the “sample observation condition” in Example 3, cancelation of an out-of-focus blur and addition of scattered light created by surface unevenness cannot be achieved at the same time solely by modifying the composition coefficient α. However, with the method according to the present example, by setting α in a vicinity of −1 and setting β to an appropriate value, both cancelation of an out-of-focus blur and addition of scattered light created by surface unevenness can be achieved at the same time.
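To illustrate the parameter choices discussed above, two example settings are shown below; the concrete values are assumptions and would be tuned per sample and observation condition.

```python
# Bright-field observation: keep alpha near 1 and raise beta to enhance
# scattered light created by surface unevenness without excessive edge enhancement.
bright_field_settings = {"alpha": 1.0, "beta": 2.0}

# Fluorescence observation: alpha near -1 cancels the out-of-focus blur,
# while beta adds scattered light created by surface unevenness.
fluorescence_settings = {"alpha": -1.0, "beta": 1.5}
```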

(Advantages of Present Example)

With the method according to the present example, by compositing a viewpoint scattering synthesized image in which an out-of-focus blur component has been suppressed on the focus position and scattering composite image described in Example 3 or 4, flexibility of out-of-focus blur reduction and enhancement of information on scattered light created by surface unevenness can be increased. Accordingly, an observation image can be obtained in which information on scattered light is further enhanced or extracted as compared to the focus position and scattering composite image obtained in Example 3 or 4. By using such an image, for example, the demands of users desiring to perform image diagnosis or image analysis by enhancing information on surface unevenness of a sample while suppressing an out-of-focus blur can be satisfied.

While preferable embodiments of the present invention have been described above, configurations of the present invention are not limited to these examples.

For example, while a case in which a Z stack image taken with a bright-field microscope is used as an original image has been described in the examples above, the present invention is also applicable to images taken with an epi-illumination microscope, a light field camera, a light field microscope, and the like.

In addition, while a pathological sample has been described in the examples above as an example of a subject, subjects are not limited to pathological samples. The subject may be a reflective object such as metal that is an observation object of an epi-illumination microscope. The subject may also be a transparent biological specimen that is an observation object of a transmissive observation microscope. In any case, by using the technique disclosed in Patent Literature 1 and the like, an arbitrary viewpoint image can be generated from a group of a plurality of layer images taken while varying focusing positions in a depth direction of the subject and the present invention can be applied. When using an original image obtained by imaging a reflective object, while the original image includes an image of a reflected light (specular reflection) component and an image of a scattered light component, scattered light becomes dominant in a non-glossy subject such as paper. In this case, by performing processing similar to that in the examples described above, a scattered light component can be extracted or enhanced.

Furthermore, configurations described in the respective examples may be combined with each other. For example, when extracting a viewpoint scattering image in the viewpoint scattering image extraction processing step S903, a viewpoint that is a processing object, a polar angle-rotated viewpoint, and an observation angle-modified viewpoint may be used to calculate respective viewpoint scattering images thereof and an addition or the like may be performed among the viewpoint scattering images to increase intensity or reliability of the viewpoint scattering images.

In addition, while the examples described above assume that computations for generating a viewpoint image, a viewpoint scattering image, and the like are to take place in real space, similar processing may be computed in a frequency space. In other words, in the present specification, the term “image” is a concept that includes both images in real space and images in a frequency space.

Furthermore, while computations of images are represented by mathematical expressions in the examples described above, in actual processing, calculations need not necessarily be performed exactly according to the mathematical expressions. Specific processing and algorithms may be designed in any way as long as images corresponding to the results of computations represented by the mathematical expressions are obtained.

Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

This application claims the benefit of Japanese Patent Application No. 2012-268173, filed on Dec. 7, 2012, Japanese Patent Application No. 2013-188641, filed on Sep. 11, 2013, Japanese Patent Application No. 2013-167437, filed on Aug. 12, 2013, and Japanese Patent Application No. 2013-204636, filed on Sep. 30, 2013, which are hereby incorporated by reference herein in their entirety.

Claims

1. (canceled)

2. An image generating apparatus which generates an observation image from an original image obtained by imaging a subject,

the image generating apparatus comprising:
a viewpoint image generating unit configured to generate a plurality of viewpoint images having mutually different line-of-sight directions by using the original image; and
an observation image generating unit configured to generate an image, for which a difference among the plurality of viewpoint images has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

3. (canceled)

4. The image generating apparatus according to claim 2, wherein the observation image generating unit generates an image corresponding to a difference or a ratio between a first viewpoint image and a second viewpoint image, whose line-of-sight direction differs from that of the first viewpoint image, as a first observation image.

5. The image generating apparatus according to claim 4, wherein the observation image generating unit generates an image corresponding to a result of synthesizing a plurality of first observation images, in which combinations of the first viewpoint image and the second viewpoint image differ from one another, as a second observation image.

6. (canceled)

7. The image generating apparatus according to claim 5, wherein the observation image generating unit generates as a third observation image an image corresponding to a result of acquiring a focus position image, for which a given position in a depth direction of the subject has been brought into focus from the original image, and compositing the second observation image on the focus position image.

8. The image generating apparatus according to claim 7, wherein the third observation image is an image corresponding to a result of adding the second observation image multiplied by a composition coefficient to the focus position image.

9. (canceled)

10. The image generating apparatus according to claim 4, wherein the observation image generating unit generates as a fourth observation image an image corresponding to a result of performing processing for compositing the first observation image on the first viewpoint image a plurality of times by changing combinations of the first viewpoint image and the second viewpoint image and synthesizing the obtained plurality of composite images.

11. The image generating apparatus according to claim 10, wherein

the fourth observation image is an image corresponding to a result of weighting the plurality of composite images in accordance with the line-of-sight directions of the first viewpoint images used to generate the respective composite images and synthesizing the plurality of weighted composite images.

12.-16. (canceled)

17. The image generating apparatus according to claim 4, wherein

the observation image generating unit
generates a plurality of composite images by performing processing for compositing the first observation image on the first viewpoint image a plurality of times by changing combinations of the first viewpoint image and the second viewpoint image, and
generates a fifth observation image by weighting the plurality of composite images in accordance with the line-of-sight directions of the first viewpoint images used to generate the respective composite images and synthesizing the plurality of weighted composite images, wherein
when an angle formed between an axis parallel to a depth direction of the subject and a line-of-sight direction is referred to as an observation angle,
the weight in accordance with the line-of-sight direction is designed such that a sum of weights with respect to all line-of-sight directions to be synthesized is 0 and a sum of weights with respect to line-of-sight directions with an observation angle equal to or greater than a prescribed value is greater than 0.

18. (canceled)

19. The image generating apparatus according to claim 2, wherein

the observation image generating unit generates, as a sixth observation image, an image corresponding to a difference or a ratio between a first intermediate image and a second intermediate image, wherein the first intermediate image is an image corresponding to a difference or a ratio between the first viewpoint image and a second viewpoint image whose line-of-sight direction differs from that of the first viewpoint image, and the second intermediate image is an image corresponding to a difference or a ratio between the first viewpoint image and a third viewpoint image whose line-of-sight direction differs from those of both first and second viewpoint images.

20.-22. (canceled)

23. The image generating apparatus according to claim 4, wherein

when an angle around an axis that is parallel to a depth direction of the subject is referred to as a polar angle,
the first viewpoint image and the second viewpoint image include viewpoint images having different polar angles of the line-of-sight direction.

24. (canceled)

25. The image generating apparatus according to claim 4, wherein

when an angle formed between an axis parallel to a depth direction of the subject and a line-of-sight direction is referred to as an observation angle,
the first viewpoint image and the second viewpoint image include viewpoint images having different observation angles of the line-of-sight direction.

26.-27. (canceled)

28. The image generating apparatus according to claim 2, wherein

the original image is
(1) constituted by a plurality of layer images obtained by imaging the subject while varying focusing positions in a depth direction, or
(2) an image on which a light field is recorded.

29. (canceled)

30. The image generating apparatus according to claim 2, wherein

the subject is a prepared slide, and
the original image is an image obtained by imaging the prepared slide by use of a microscope.

31. (canceled)

32. An image generating method of generating an observation image by using a computer from an original image obtained by imaging a subject,

the image generating method comprising:
generating a plurality of viewpoint images with respectively different line-of-sight directions by using the original image; and
generating an image, for which a difference among the plurality of viewpoint images has been extracted or enhanced, as the observation image by using the plurality of viewpoint images.

33. A non-transitory computer readable storage medium storing a program that causes a computer to execute the respective steps of the image generating method according to claim 32.

Patent History
Publication number: 20150310613
Type: Application
Filed: Nov 28, 2013
Publication Date: Oct 29, 2015
Inventor: Tomochika Murakami (Ichikawa-shi)
Application Number: 14/649,330
Classifications
International Classification: G06T 7/00 (20060101);