Image quality

A magnified image is improved by integrating a wavelength-specific component into that image. A magnified image is obtained, and at least one wavelength-specific component image is also obtained. The different images are converted to another color space, and different channels, indicative of different parts of the images, are obtained. For example, the images may be converted to an L*a*b* color space, and the luminance channel of the wavelength-specific component image may be used to enhance or replace the luminance channel of the magnified image.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 60/716,887, filed on Sep. 13, 2005. The disclosure of the prior application is considered part of (and is incorporated by reference in) the disclosure of this application.

BACKGROUND

Pathology often requires viewing microscope images. The resolution of the microscope images depends on the imaging system, and is often limited by different parameters of obtaining the image. For example, the resolution may be limited by the time it takes to scan a tissue section and by the resulting image file size.

DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a flowchart;

FIG. 2 illustrates the progression of the different images;

FIG. 3 illustrates an exemplary hardware setup which can be used;

FIGS. 4a and 4b show examples of the different images for colon cancer; and

FIGS. 5a through 5d show examples of the different images for a breast cancer cell.

DETAILED DESCRIPTION

The general structure and techniques, and more specific embodiments which can be used to effect different ways of carrying out the more general goals, are described herein.

The number of image elements within an image obtained from a tissue section increases exponentially between different microscope objectives. For example, the image at 10× may require exponentially more storage than the image at 4×. The time required to scan a tissue section at 60× is extremely large, and the amount of digital storage space required for such a scan is also large; therefore, many believe that capturing a large image at 60× is not practical. This may limit the number of scans that can be obtained and reviewed.
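The growth described above can be sketched numerically. The tissue size and sampling density below are illustrative assumptions, not values from this application; only the square-law relationship between magnification and pixel count is being demonstrated:

```python
# Illustrative sketch (assumed numbers): the linear pixel dimension of a
# scan grows with magnification, so pixel count -- and hence storage --
# grows with the square of the magnification ratio.
def scan_pixels(tissue_mm, pixels_per_mm_at_1x, magnification):
    side = tissue_mm * pixels_per_mm_at_1x * magnification
    return side * side

low = scan_pixels(15, 100, 4)    # 4x objective
high = scan_pixels(15, 100, 60)  # 60x objective
ratio = high / low               # (60/4)**2 = 225: a 60x scan needs 225x the storage
```

Under these assumptions a 60× scan requires 225 times the storage of a 4× scan of the same field, which illustrates why full-slide 60× capture is often considered impractical.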

An embodiment describes use of a multispectral imaging system, such as the Nuance Multispectral Imaging System (“Nuance”) available from Cambridge Research &amp; Instrumentation, in combination with an automated microscope such as the Automated Cellular Imaging System (“ACIS”) provided by Clarient Inc. The processing provides an effective augmentation of images at lower magnifications, to attempt to obtain additional information from those images. In an embodiment, image augmentation is carried out by extracting images at specific color wavelengths, converting color spaces, and carrying out channel mixing in a converted color space.

FIG. 3 illustrates an exemplary hardware setup which can be used. The sample 300 is on a sample table 305 as conventional. The ACIS or other automated microscope 310 obtains image information from the sample 300. A single spectrum camera 315 also obtains information. All of the information is coupled to a computer 320 which operates as described herein and specifically according to the flowcharts of FIGS. 1 and 2.

A color space is a model for representing color in terms of intensity values; a color space specifies how color information is represented. It defines a one-, two-, three-, or four-dimensional space whose dimensions, or components, represent intensity values. A color component is also referred to as a color channel. For example, RGB space is a three-dimensional color space whose components are the red, green, and blue intensities that make up a given color. Visually, these spaces are often represented by various solid shapes, such as cubes, cones, or polyhedra.

Different kinds of color spaces are known.

Gray spaces typically have a single component, ranging from black to white. In an RGB space, by contrast, scanners read the amounts of red, green, and blue light that are reflected from an image and then convert those amounts into digital values; displays receive the digital values and convert them into the red, green, and blue light seen on a screen.

RGB-based color spaces are the most commonly used color spaces in computer graphics, primarily because they are directly supported by most color displays and scanners. RGB color spaces are device dependent and additive. The groups of color spaces within the RGB base family include HSV (hue, saturation, value) and HLS (hue, lightness, saturation) spaces. The saturation component in both color spaces describes color intensity. A saturation value of 0 (in the middle of a hexagon) means that the color is “colorless” (gray); a saturation value at the maximum (at the outer edge of a hexagon) means that the color is at maximum “colorfulness” for that hue angle and brightness. The value component (in HSV space) and the lightness component (in HLS space) describe brightness or luminance. In both color spaces, a value of 0 represents black. In HSV space, a maximum value means that the color is at its brightest. In HLS space, a maximum value for lightness means that the color is white, regardless of the current values of the hue and saturation components. The brightest, most intense color in HLS space occurs at a lightness value of exactly half the maximum.
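The HSV relationships described above can be illustrated with Python's standard `colorsys` module, in which all component values lie in the [0, 1] range; the sample pixel values are illustrative:

```python
import colorsys

# Pure red: hue angle 0, maximum saturation, maximum value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# A mid-gray has zero saturation: it is "colorless" regardless of hue,
# matching the center of the hexagon described in the text.
h2, s2, v2 = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)
```

For the pure red pixel this yields hue 0.0 with saturation and value both at 1.0, while the gray pixel yields saturation 0.0, consistent with the hexagon description above.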

CMY color spaces are like the above, but define the colors subtractively.

Any color expressed in RGB space is some mixture of three primary colors: red, green, and blue. Most RGB-based color spaces can be visualized as a cube, with corners of black, the three primaries (red, green, and blue), the three secondaries (cyan, magenta, and yellow), and white.

Some color spaces can express color in a device-independent way. Whereas RGB colors vary with display and scanner characteristics, and CMYK colors vary with printer, ink, and paper characteristics, device-independent colors are meant to be true representations of colors as perceived by the human eye. These color representations, called device-independent color spaces, result from work carried out in 1931 by the Commission Internationale d'Eclairage (CIE), and for that reason are also called CIE-based color spaces.

The L*a*b* color space consists of a luminosity or brightness layer ‘L*’, a chromaticity layer ‘a*’ indicating where the color falls along the red-green axis, and a chromaticity layer ‘b*’ indicating where the color falls along the blue-yellow axis.

An embodiment is described herein. The embodiment can be carried out automatically using a robotic or computer-controlled system. Alternatively, some parts of the embodiment, such as the staining or the input of data into machines, can be carried out manually.

At 100, the system obtains a number of different images, including a first microscopic image, and at least one single spectrum image. Preferably, a plurality of different single spectrum images are obtained. FIG. 2 also illustrates the different images, including the color image 200 from the microscope or from the spectral camera, and a single spectrum image 205 from the spectral camera.

Colon cancer tissue sections may be examined in this embodiment. The sections are fixed in paraffin and stained with HER2 stain. A multispectral camera, which in the embodiment can be the Nuance camera, is used to examine the tissue sections. The Nuance camera is mounted on an ACIS microscope.

For purposes of the embodiment, color RGB images are obtained at any magnification, e.g., 4×, 10×, 20× and/or 60×. Grayscale images of the exact same fields are also captured at near ultraviolet (420 nm) and near infrared (720 nm). Physics dictates that resolution is inversely proportional to wavelength. One would therefore predict that the 420 nm image would have better resolution than any of the RGB channels of the original color images.
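The wavelength-resolution relationship invoked above can be quantified with the Rayleigh criterion, a standard optics result not stated in the application itself; the numerical aperture used below is an assumed value for illustration:

```python
# Rayleigh criterion: minimum resolvable distance d = 0.61 * wavelength / NA.
# Shorter wavelengths resolve finer detail for the same objective.
def rayleigh_limit_nm(wavelength_nm, numerical_aperture):
    return 0.61 * wavelength_nm / numerical_aperture

NA = 0.75  # assumed objective numerical aperture (illustrative)
d_420 = rayleigh_limit_nm(420, NA)  # near-ultraviolet image
d_720 = rayleigh_limit_nm(720, NA)  # near-infrared image
# d_420 < d_720: the 420 nm image resolves finer detail than the 720 nm image
```

This is consistent with the prediction in the text that the 420 nm image should have better resolution than any of the RGB channels of the original color images.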

The images obtained from the cameras are in an RGB based color space. At 110, the images are converted into a device independent color space which includes a luminance component. More specifically, in the embodiment, the images are converted into the L*a*b* color space. FIG. 2 shows the image 200 being converted into the new color space image 210, and the image 205 being converted into the new color space image 215. This color space conversion may use commercially available software or modules.

At 120, the channels of the new images are separated. In FIG. 2, image 210 is divided into separated channels, the L* channel 220, the a* channel 221 and the b* channel 222. Similarly, the image 215 is converted into its separate channels representing separate image parts, 225, 226 and 227.

In the embodiment, only the luminance information from the single spectrum image 205 is used. Accordingly, at 130, the luminance channel from the image 200 is replaced by the luminance channel from the corresponding 420 nm image 205. The channels are then remixed at 140 to create the image 240, and are transformed back at 150 to another color space, such as RGB, HSI, or any other color space of a type that may facilitate the display.
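The split/replace/remix/transform sequence of steps 120-150 can be sketched per pixel. The sketch below uses Python's standard `colorsys` module and the YIQ luminance/chrominance space as a stand-in for L*a*b*, which `colorsys` does not provide; the pixel values are illustrative:

```python
import colorsys

def enhance_pixel(color_rgb, spectral_rgb):
    """Replace the luminance of a color pixel with the luminance of the
    corresponding single-spectrum pixel, keeping the color pixel's chroma,
    then convert back to RGB (analogous to steps 120-150)."""
    _, i, q = colorsys.rgb_to_yiq(*color_rgb)          # chroma from the color image
    y_spec, _, _ = colorsys.rgb_to_yiq(*spectral_rgb)  # luminance from the spectral image
    return colorsys.yiq_to_rgb(y_spec, i, q)           # remix, transform back

# gray spectral pixel (carries the detail) + reddish color pixel (carries the chroma)
out = enhance_pixel((0.8, 0.2, 0.2), (0.5, 0.5, 0.5))
# the output keeps the red hue but takes its luminance from the spectral pixel
```

In a full implementation the same substitution would be applied over whole channel images rather than single pixels, and in a perceptually uniform space such as L*a*b* rather than YIQ.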

According to the embodiment, it was found that the new image provided more detail than the original. In order to test the importance of the 420 nm image, the same process was done using a 720 nm spectral image in place of the 420 nm image. The resulting images were of poor quality.

Another embodiment tested immunohistochemically stained tissue. The test tissue was breast cancer tissue stained with Her2/neu, using a diaminobenzidine (“DAB”) secondary stain and a hematoxylin counterstain. Surprisingly, this process increased the detail of the hematoxylin-counterstained tissue but greatly reduced the information carried by the stained cancer tissue, which became less interpretable.

The inventor believes that the brown DAB-based secondary stain contains a great deal of red color. Therefore, the 720 nm process was applied, with very good results. The DAB-stained tissue showed increased detail, at the cost of slightly decreased background detail.

Therefore, matching the spectral wavelength to the different colors present in the stains is important. FIGS. 4a and 4b show examples of the different images for colon cancer. FIGS. 5a through 5d show examples of the different images for a breast cancer cell.

The embodiment describes only two different single spectrum images, but other embodiments may use a different luminance conversion step 130 which uses a plurality of different single spectrum images, or some kind of combined single spectrum image which is combined by transforming and weighting the number of different images together.
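One reading of the combined-image variant is a weighted average of the luminance channels of several single-spectrum images. The sketch below is illustrative; the weights and pixel values are assumptions, not specified by the application:

```python
# Hedged sketch: mix several single-spectrum luminance channels into one
# combined channel by weighted averaging.
def combine_luminance(channels, weights):
    total = float(sum(weights))
    length = len(channels[0])
    return [sum(w * ch[i] for ch, w in zip(channels, weights)) / total
            for i in range(length)]

# two tiny two-pixel "channels" (e.g. 420 nm and 720 nm), weighted 3:1
mixed = combine_luminance([[0.2, 0.4], [0.6, 0.8]], [3, 1])
# each output pixel is the weighted average of the corresponding inputs
```

The combined channel could then be substituted for the luminance channel of the magnified image exactly as the single 420 nm channel is in the main embodiment.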


Although only a few embodiments have been disclosed in detail above, other embodiments are possible and the inventor(s) intend these to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in another way. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, other stains and colors may be used. Other single spectrum images, or images that are multispectrum or narrow spectrum, can also be used. Moreover, when specific values, such as 420 nm, are given herein, those specific values are intended to be center values within a range of 10-20%, for example.

Also, the inventor intends that only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims. The computers described herein may be any kind of computer, either general purpose, or some specific purpose computer such as a workstation. The computer may be a Pentium class computer, running Windows XP or Linux, or may be a Macintosh computer. The computer may also be a handheld computer, such as a PDA, cellphone, or laptop.

The programs may be written in C, Java, Brew, or any other programming language. The programs may be resident on a storage medium, e.g., magnetic or optical, such as the computer hard drive, a removable disk or media such as a memory stick or SD media, or other removable medium. The programs may also be run over a network, for example, with a server or other machine sending signals to the local machine, which allows the local machine to carry out the operations described herein.

Claims

1. A method, comprising:

obtaining a first image comprising a microscope image indicative of a sample;
obtaining a second image indicative of the same sample but which covers substantially only a single spectrum;
dividing said second image into component parts indicative of the second image; and
using at least one of said component parts to enhance said first image.

2. A method as in claim 1, wherein said information from said second image is a luminance component from the second image which is used to modify a luminance component in the first image.

3. A method as in claim 1, further comprising converting said microscope image and said second image to a second color space, separating channels in said second color space, and using at least one of said channels from said second image to replace at least one of said channels in said microscope image.

4. A method as in claim 3, wherein said second color space is a device independent color space.

5. A method as in claim 3, wherein said second color space is an L*a*b* color space.

6. A method as in claim 3, further comprising, after said using, combining said channels to form a new image, and transforming said new image to another color space.

7. A method as in claim 1, wherein said second image is at substantially 420 nm.

8. A method as in claim 1, wherein said second image is at 720 nm.

9. A device comprising:

a computer, obtaining an image from a microscope, and obtaining another image having a relationship to said microscope image, but over substantially only a single spectrum, said computer operating to divide the another image into component parts indicative of the another image and to use information from at least one of said component parts, but not all of said component parts, to enhance said image from said microscope.

10. A device as in claim 9, further comprising a microscope, producing said image.

11. A device as in claim 9, further comprising a single spectrum camera, producing at least one output indicative of a single spectrum image.

12. A device as in claim 10, further comprising a multi-spectrum camera, which produces a number of output images, each representative of a single spectrum image.

13. A device as in claim 9, wherein said at least one component part is a component part indicative of luminance within the image.

14. A device as in claim 9, wherein said computer further operates to convert said microscope image and said another image, into a device independent color space.

15. A device as in claim 14, wherein said device independent color space is an L*a*b* color space.

16. A device as in claim 9, wherein said computer operates to combine said component parts to form a new image, and to transform said new image to another color space.

17. A device as in claim 11, wherein said single spectrum image is at substantially 420 nm.

18. A device as in claim 11, wherein said single spectrum image is at 720 nm.

19. A method, comprising:

obtaining a first image from a first camera indicative of full color image of a magnified sample;
obtaining a second image indicative of a single spectrum image of the magnified sample;
converting said first and second images into a device independent color space;
separating channels of the first and second images in said device independent color space;
using at least one of said channels of the second image to enhance a quality of said first image; and
creating a new image based on said first image, and said at least one of said channels.

20. A method as in claim 19, wherein said using comprises using said one channel from said second image to replace a corresponding channel in said first image.

Patent History
Publication number: 20070091109
Type: Application
Filed: Sep 12, 2006
Publication Date: Apr 26, 2007
Inventor: Roscoe Atkinson (San Juan Capistrano, CA)
Application Number: 11/520,444
Classifications
Current U.S. Class: 345/589.000
International Classification: G09G 5/02 (20060101);