METHODS AND SYSTEMS FOR MEASURING AND CORRECTING ELECTRONIC VISUAL DISPLAYS
The present disclosure relates to methods and systems for measuring and correcting electronic visual displays. A method in accordance with one embodiment of the present technology includes generating a series of patterns for illuminating proper subsets of the light emitting elements of the display, such as regular grids of nonadjacent activated light emitting elements with the elements in between deactivated. For each generated pattern, an imaging device captures information about the activated light emitting elements. A computing device analyzes the captured information, comparing the output of the activated light emitting elements to target output values, and determines correction factors to calibrate the display to better achieve the target output values. In some embodiments, the correction factors may be uploaded to firmware controlling the display or used to process images to be shown on the display.
The present disclosure relates generally to electronic visual displays, and more particularly, to methods and systems for measuring and calibrating the output from such displays.
BACKGROUND
Electronic visual displays (“displays”) have become commonplace. Displays of increasingly high resolution are used in a wide variety of contexts, from personal electronics with screens a few inches or smaller in size to computer screens and televisions several feet across to scoreboards and billboards covering hundreds of square feet. Some displays are assembled from a series of smaller panels, each of which may further consist of a series of internally connected modules. Virtually all displays are made up of arrays of individual light-emitting elements called “pixels.” In turn, each pixel is made up of a plurality of light-emitting points (e.g., one red, one green, and one blue). The light-emitting points are termed “subpixels.”
It is often desirable for a display to be calibrated. For example, calibration may improve the uniformity of the display and improve consistency between displays. During calibration of a display (or, e.g., of each module of a display), the color and brightness of each pixel or subpixel is measured. Adjustments are determined so the pixels can display particular colors at desired brightness levels. The adjustments are then stored (e.g., in software or firmware that controls the display or module), so that those adjustments or correction factors can be applied.
The following disclosure describes electronic visual display calibration systems and associated methods for measuring and calibrating electronic visual displays. As described in greater detail below, a display measurement method and/or system in accordance with one aspect of the disclosure is configured to measure the luminance and the color of the individual pixels or subpixels of an electronic visual display, such as a high-resolution liquid crystal display (“LCD”) or an organic light-emitting diode (“OLED”) display.
The inventors have recognized that when pixels are very closely spaced, as is typical in many LCDs, OLED displays, and high-resolution light-emitting diode (“LED”) displays, measuring individual pixel or subpixel attributes becomes more difficult. Accordingly, embodiments of the present technology use a pattern generator (e.g., standalone hardware test equipment, a logic analyzer add-on module, a computer peripheral, software in a computing device or controller connected to the display, output from a serial digital interface (“SDI”), digital video interface (“DVI”), or high-definition multimedia interface (“HDMI”) port, etc.) to illuminate only a desired subset of the pixels or subpixels to be measured. In some embodiments, for example, the pattern generator illuminates only every third or every fourth pixel of the display, such that the pixels between them remain off. The technology uses an imaging device (which typically has a considerably higher resolution than the display itself) to measure only the illuminated pixels (and/or subpixels). Because only a subset of the pixels is illuminated and measured at once, the display under test effectively has a much lower pixel resolution. After the illuminated pixels are measured, the pattern can be shifted (e.g., by one pixel) and the measurements repeated until all of the pixels of the display have been measured.
In one particular embodiment, for example, if every fifth pixel of a 1,920×1,080 pixel high definition television (“HDTV”) display is illuminated at a time, then the effective resolution is 384×216 pixels. To measure the illuminated pixels with an imaging device having a resolution about six times greater than the display's pixel resolution, a camera with a resolution of approximately 2,300×1,300—i.e., a camera readily available for a reasonable price—could potentially be used. In contrast with the present technology, however, many conventional approaches for analyzing the 1,920×1,080 pixel HDTV display would require a camera having a resolution of approximately 12,000×6,000, or 72,000,000 pixels. Such a camera (with resolution high enough for the display to be measured) is expected to be prohibitively expensive and/or unavailable. As a result, measuring and calibrating such displays using conventional techniques is often impractical and/or too expensive.
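The resolution arithmetic in this example can be sketched as follows; this is an illustrative calculation, and the function names are not part of the disclosure:

```python
# Effective resolution, and the camera resolution it implies, when only
# every nth pixel of a display is illuminated at a time.

def effective_resolution(display_w, display_h, n):
    """Resolution 'seen' by the camera when every nth pixel is lit."""
    return display_w // n, display_h // n

# 1,920 x 1,080 HDTV display, every 5th pixel illuminated:
w, h = effective_resolution(1920, 1080, 5)   # -> (384, 216)

# Camera resolution ~6x the effective resolution in each dimension:
camera_w, camera_h = 6 * w, 6 * h            # -> (2304, 1296), i.e. ~2,300 x 1,300

# A conventional full-display measurement would instead need ~6x the full
# display resolution in each dimension: 11,520 x 6,480, i.e. the roughly
# 12,000 x 6,000 (72,000,000-pixel) camera discussed in the text.
```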
Another conventional approach for measuring such large or high-resolution displays is to divide the display (or its constituent panels or modules) into sections small enough that the imaging system has sufficient resolving power to enable an accurate measurement of the pixels or subpixels of each section. Using this approach, the imaging device (or the display being measured) is generally mounted on an x-y stage for horizontal and vertical positioning, or rotated to align to each section being measured. Moving or rotating either the camera or the display, however, requires additional, potentially expensive equipment, as well as time to perform the movement or rotation and to align the imaging device to the display. Furthermore, this technique can lead to slight mismatches or discontinuities of measurement between the individual sections. If the measurements are used for uniformity correction, such mismatches must be addressed, typically with further measurements and/or post-processing of the display measurement data.
In contrast with conventional techniques, embodiments of the present technology are expected to enable precise measurement of individual pixel or subpixel output for any display (e.g., an OLED display) without requiring expensive, high resolution imaging devices, and without additional equipment for moving the relationship between the imaging device and the display, time for moving and aligning them, or mismatches between sections of the display.
Certain details are set forth in the following description and in the accompanying Figures to provide a thorough understanding of various embodiments of the disclosure.
Many of the details, dimensions, angles, and other features shown in the Figures are merely illustrative of particular embodiments of the disclosure. Accordingly, other embodiments can have other details, dimensions, angles, and features without departing from the spirit or scope of the present disclosure. In addition, those of ordinary skill in the art will appreciate that further embodiments of the disclosure can be practiced without several of the details described below.
B. Embodiments of Electronic Visual Display Calibration Systems and Associated Methods for Calibrating Electronic Visual Displays
In the embodiment illustrated in
In some embodiments, the correction factors for the display 150 are applied to the firmware and/or software controlling the display 150 to calibrate the display 150. In alternate embodiments, the corrections are applied in real time to a video stream to be shown on the display 150. In such embodiments, the technology includes comparing the actual display value with a desired display value for one or more portions of the display 150, and determining a correction factor for the pixels or subpixels of the display 150 based on the measurements of the patterns 160 described above. The technology processes or adjusts the image with the correction factors for the corresponding pixels of the display 150. After processing the image to account for variations in the display 150, the technology can further include transmitting the image to the display 150 and showing the image on the display 150. Accordingly, in some embodiments, the image on the display 150 can be presented according to the desired display values without modifying or calibrating the actual display 150.
One of ordinary skill in the art will understand that although the system 100 illustrated in
The test pattern generator 210 is configured to generate a series of test patterns 260, each of which illuminates a proper subset of the pixels or subpixels of the display 250. The test station 240 is configured to capture a series of images from an imaging area covering all of the display 250. The captured image data is transferred from the test station 240 to the interface 230. The interface 230 compiles and manages the image data, performs a series of calculations to determine the appropriate correction factors to be applied to the image data, and then stores the data. This process is repeated until images of each of the pixels or subpixels of the display 250 have been obtained. After all the necessary data has been collected, the processed correction data is uploaded from the interface 230 to the firmware and/or software controlling the display 250 and used to recalibrate the display 250.
In the embodiment illustrated in
In the illustrated embodiment, the test station 240 also incorporates a ground glass diffuser 246 positioned just above the display 250. The diffuser 246 scatters the light emitted from each subpixel in the display 250, which effectively partially integrates the emitted light angularly. Accordingly, the camera 220 is actually measuring the average light emitted into a cone rather than only the light traveling directly from each subpixel on the display 250 toward the camera 220. One advantage of this arrangement is that the display 250 will be corrected to optimize viewing over a wider angular range. The diffuser 246 is an optional component that may not be included in some embodiments.
The interface 230 that is operably coupled to the test station 240 is configured to manage the data that is collected, stored, and used for calculation of new correction factors that will be used to recalibrate the display 250. The interface 230 automates the operation of the pattern generator 210 and the test station 240 and writes all the data into a database. In one embodiment, the interface 230 can be a personal computer with software for pattern selection, camera control, image data acquisition, and image data analysis. Optionally, in other embodiments various devices capable of operating the software can be used, such as handheld computers.
According to another aspect of the illustrated embodiment, the imaging device 120 can also include a lens 322. In one embodiment, for example, the lens 322 can be a reflecting telescope that is operably coupled to the camera 320 to provide sufficiently high resolution for long distance imaging of the display 150. In other embodiments, however, the lens 322 can include other suitable configurations for viewing and/or capturing display information from the display 150. Suitable imaging devices 320 and lenses 322 are disclosed in U.S. Pat. Nos. 7,907,154 and 7,911,485, both of which are incorporated herein by reference in their entireties.
The imaging device 120 can accordingly be positioned at a distance L from the display 150. The distance L can vary depending on the size of the display 150, and can include relatively large distances. In one embodiment, for example, the imaging device 120 can be positioned at a distance L that is generally similar to a typical viewing distance of the display 150. In a sports stadium, for example, the imaging device 120 can be positioned in a seating area facing toward the display 150. In other embodiments, however, the distance L and direction can differ from a typical viewing distance and direction, and the imaging device 120 can be configured to account for any such differences. In some embodiments, the imaging device 120 has a wide field of view and the distance L can be less than the width of the display 150 (e.g., approximately one meter for a typical HDTV display). In other embodiments, the imaging device 120 has a long-focus lens 322 (e.g., a telephoto lens) and the distance L can be significantly greater than the width of the display 150 (e.g., between approximately 100 and 300 meters for an outdoor billboard or video screen). In yet other embodiments, the distance L can have other values.
The computing device 130 is configured to cause the pattern generator 110 to send images 160 (e.g., pixel or subpixel patterns) to the display 150. In various embodiments, the pattern generator 110 is standalone hardware test equipment, a logic analyzer add-on module, a computer peripheral operably coupled to the computing device 130, or software in the computing device 130 or in a controller connected to the display 150. In other embodiments, the pattern generator 110 operates independently of the computing device 130. In alternative embodiments, the patterns 160 are provided to the display 150 via standard video signal input, e.g., using a DVI, HDMI, or SDI input to the display. The patterns 160 generated by the pattern generator 110 for displaying on the electronic visual display 150 are discussed in greater detail in connection with
Continuing with respect to
In some embodiments, the memory 332 includes software to control the imaging device 120 as well as measurement software to identify portions of the display 150 (e.g., subpixels of the display 150) and to image or otherwise extract the display data (e.g., subpixel brightness data, pixel color data, etc.). One example of suitable software for controlling the imaging device 120 and/or acquiring the display data is VisionCAL™ screen correction software, which is commercially available from the assignee of the present disclosure, Radiant Zemax, LLC, of Redmond, Wash. In other embodiments, other suitable software can be implemented with the system 100. Moreover, the memory 332 can also store one or more databases used to store the display data from the patterns 160 shown on display 150, as well as calculated correction factors for the display data. In one embodiment, for example, the database is a Microsoft Access® database designed by the assignee of the present disclosure. In other embodiments, however, the display data is stored in other types of databases or data files.
In addition to the color level of each subpixel 432, the luminance level of each subpixel 432 can vary. Accordingly, the additive primary colors represented by a red subpixel, a green subpixel, and a blue subpixel can be selectively combined to produce the colors within the color gamut defined by a color gamut triangle, as shown in
In addition, the measurement process described herein may be performed at various brightness levels. For example, in some embodiments, each pixel 430 or subpixel 432 is measured at input levels (using values from 0 to 255) of 255 (full brightness), 128 (one half brightness), 64 (one quarter brightness), and 32 (one eighth brightness). Data from such measurements can be used in calibration to achieve the same chromaticity for a particular color at various input brightness levels, or, e.g., to improve the uniformity of color and luminance response curves for each pixel or subpixel.
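The multi-level measurement loop described above might be sketched as follows; `measure_pattern` is a hypothetical stand-in for the capture step and is not part of the disclosure:

```python
# Input levels from the text: full, one-half, one-quarter, and
# one-eighth brightness on an 8-bit (0-255) scale.
INPUT_LEVELS = [255, 128, 64, 32]

def measure_at_levels(measure_pattern, pattern):
    """Capture the same pattern at each brightness input level.

    `measure_pattern(pattern, level)` is a hypothetical capture routine
    returning per-subpixel luminance/chromaticity data for one level.
    """
    return {level: measure_pattern(pattern, level) for level in INPUT_LEVELS}
```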
Returning to
The technology displays a series of patterns to illuminate and measure each pixel or subpixel of the display at least once (and potentially multiple times, e.g., at different brightness input levels).
In alternative embodiments, the patterns 460 illuminate individual subpixels 432 (e.g., one or more at a time of subpixels 432a-432c) rather than whole pixels 430. In various embodiments, the patterns 460 are displayed and measured at more than one brightness level. Separately illuminating each subpixel 432 and measuring individual pixels or subpixels at different brightness levels correspondingly multiplies the number of required measurements. In some embodiments, patterns are tailored to a particular display or a particular measurement. The patterns are not necessarily as regular or evenly distributed as the examples illustrated in
In addition to color and/or luminance, the subpixels 432 may have other visual properties that can be measured and analyzed in accordance with embodiments of the present disclosure. Moreover, although the displayed patterns 460 are described above with reference to pixels 430 and subpixels 432, other embodiments of the disclosure can be used with displays having different types of light emitting elements or components.
By way of example, in one embodiment the imaging device has a pixel resolution of 3,072×2,048=6,291,456 pixels. According to the heuristic that fifty pixels of resolution from the imaging device correspond to one subpixel on the display, the imaging device can capture data from 125,829 subpixels on the display (6,291,456 camera pixels/50 camera pixels per display subpixel) in a single captured image. In other embodiments, the correlation between the resolution of the imaging device and the display can vary between, e.g., 6 to 200 pixels on the imaging device corresponding to one subpixel on the display. Assuming, for example, that no other characteristic of the imaging device or its relationship to the display restricts its ability to measure the display, then the technology can determine the appropriate fraction 1/n in this case by dividing 125,829 (the number of subpixels to be illuminated in each captured image) by the total number of subpixels in the display. For example, to measure a display having a pixel resolution of 1,280×720=921,600 pixels, the fraction 1/n would be 125,829/921,600=1/7.324 or (rounding the denominator up) ⅛. In other words, if ⅛ of the display's subpixels are illuminated, the total number of illuminated subpixels will be below the threshold of 125,829 subpixels that can be captured in a single image by the selected imaging device in accordance with the applicable heuristic.
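The fraction-selection arithmetic above can be expressed as a short sketch; the heuristic of fifty camera pixels per display subpixel is taken from the text, while the function name is illustrative:

```python
import math

def illumination_fraction(cam_w, cam_h, display_w, display_h,
                          cam_pixels_per_subpixel=50):
    """Return n such that illuminating 1/n of the display's subpixels keeps
    the lit count within what one captured image can resolve under the
    given camera-pixels-per-subpixel heuristic."""
    measurable = (cam_w * cam_h) // cam_pixels_per_subpixel
    total = display_w * display_h
    return math.ceil(total / measurable)   # round the denominator up

# Example from the text: a 3,072 x 2,048 camera can measure
# 6,291,456 / 50 = 125,829 subpixels per image, so a 1,280 x 720
# display (921,600 pixels) yields 1/7.324, rounded up to 1/8.
n = illumination_fraction(3072, 2048, 1280, 720)   # -> 8
```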
At block 620, the technology displays a pattern selectively illuminating 1/n of the pixels or subpixels of the display (e.g., in the example above, 1 of every 8 subpixels of the display). For example, every nth pixel (or subpixel of a particular color) may be illuminated. An example of such a pattern is described above in connection with
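The shifted grid patterns described above might be generated as in the following sketch; the function name and boolean-mask representation are illustrative assumptions, not part of the disclosure:

```python
def grid_patterns(display_w, display_h, nx, ny):
    """Yield nx*ny boolean masks, each a regular grid of nonadjacent lit
    pixels (every nx-th column, every ny-th row). Shifting the offsets
    across successive masks covers every pixel exactly once."""
    for oy in range(ny):
        for ox in range(nx):
            yield [[x % nx == ox and y % ny == oy
                    for x in range(display_w)]
                   for y in range(display_h)]

# e.g. nx=4, ny=2 illuminates 1/8 of the pixels per pattern, matching
# the 1-of-every-8 example above.
```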
In some embodiments, the computing device compares the color and brightness of each captured pixel with target color and brightness values, e.g., points within the color gamut defined by a color gamut triangle, such as shown in
Determining the correction values can include creating a correction data set or map. In some embodiments, the computing device calculates a three-by-three matrix of values for each pixel that indicate some fractional amount of power to turn on each subpixel to obtain each of the three primary colors (red, green, and blue) at target color and brightness levels. A sample matrix is displayed below:
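The sample matrix itself did not survive reproduction in this text. Its general form, with the first row taken from the red-target example in the following paragraph and the remaining entries shown as placeholders (the actual per-pixel values being determined by measurement), may be sketched as:

```latex
\begin{pmatrix}
0.60 & 0.10 & 0.05 \\
c_{GR} & c_{GG} & c_{GB} \\
c_{BR} & c_{BG} & c_{BB}
\end{pmatrix}
```

where each row corresponds to a target primary color (red, green, blue) and the columns give the fractional drive powers for the red, green, and blue subpixels, respectively.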
For example, according to the above matrix for a particular brightness level, when a pixel of the display should be red, the technology has calculated that the display should turn on its red subpixel at 60% power, its green subpixel at 10% power, and its blue subpixel at 5% power.
The determination of the correction values is based, at least in part, on the comparison between the captured and analyzed values and the target values for the display. More specifically, each correction factor can compensate for the difference between the captured and analyzed values and the corresponding target display value. For example, if the captured and analyzed value is less bright than the corresponding target display value, the correction factor can include the amount of brightness that would be required for the captured and analyzed value of the pixel or subpixel to be generally equal to the target display value. Moreover, the correction factor can correlate to the corresponding type of display value. For example, the correction value can be expressed in terms of color or brightness correction values, or in terms of other visual display property correction values. Suitable methods and systems for determining correction values or correction factors are disclosed in U.S. Pat. Nos. 7,907,154 and 7,911,485 referenced above.
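The brightness comparison described above can be illustrated with a minimal sketch; the multiplicative-gain formulation and function name are assumptions for illustration:

```python
def brightness_correction(measured, target):
    """Multiplicative gain that brings a measured luminance up or down
    to the target luminance, per the comparison described above."""
    if measured <= 0:
        return 1.0          # dead or unmeasured subpixel: leave unchanged
    return target / measured

# A subpixel measured at 80 cd/m^2 against a 100 cd/m^2 target:
gain = brightness_correction(80.0, 100.0)   # -> 1.25
```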
At block 660, the process branches depending on whether or not all the pixels or subpixels of the display have been illuminated, captured, analyzed, and corrected as described above in blocks 620-650. If the technology illuminates a fraction 1/n of the pixels or subpixels of the display in each pattern, then at least n iterations are required to measure and calibrate the entire display. For example, after displaying a first pattern such as the pattern described above in connection with
After n iterations have been completed, at block 670, the method 600 can further include sending the calibration correction values to the display. In some embodiments, the correction factors are stored in firmware within the display or a controller of the display. In some embodiments, the correction factor data set or map can be saved and, e.g., provided to a third party such as the owner of the display, or used to process video images outside the display such that the display can show the processed image according to desired or target display properties without calibrating or adjusting the display itself. Suitable methods and systems for correcting images to calibrate their appearance on a particular display are disclosed in U.S. patent application Ser. No. 12/772,916, filed May 3, 2010, entitled “Methods and systems for correcting the appearance of images displayed on an electronic visual display,” which is incorporated herein in its entirety by reference. In some embodiments, the technology verifies or improves the calibration by measuring the calibrated output of each pixel or subpixel as described in blocks 610-660 above, and optionally modifying the correction factors applied to the display.
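The image-processing alternative described above, in which video images are corrected outside the display, might be sketched as follows; the per-pixel matrix layout and function name are hypothetical, for illustration only:

```python
def apply_correction(image, correction):
    """Apply per-pixel 3x3 correction matrices to an RGB image so the
    calibrated appearance is achieved without adjusting the display.

    `image` is rows of (r, g, b) tuples in [0, 1]; `correction` holds one
    3x3 matrix per pixel (an assumed layout, for illustration).
    """
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            m = correction[y][x]
            out_row.append(tuple(
                min(1.0, max(0.0, m[i][0] * r + m[i][1] * g + m[i][2] * b))
                for i in range(3)))
        out.append(out_row)
    return out
```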
From the foregoing, it will be appreciated that specific embodiments of the disclosure have been described herein for purposes of illustration, but that various modifications may be made without deviating from the spirit and scope of the various embodiments of the disclosure. Further, while various advantages associated with certain embodiments of the disclosure have been described above in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the disclosure. Accordingly, the disclosure is not limited, except as by the appended claims.
Claims
1. A method in a computing system having a pattern generator and an image capture device for calibrating a visual display comprising an array of a number of pixels and corresponding subpixels, the method comprising:
- identifying a fraction of the number of pixels and corresponding subpixels of the display;
- generating, by the pattern generator, patterns for illuminating proper subsets of the pixels and corresponding subpixels of the display, such that— each pattern illuminates the identified fraction of the number of pixels and corresponding subpixels of the display, and each of the pixels and corresponding subpixels of the display is illuminated in at least one pattern; and
- for each generated pattern— illuminating subpixels of the display according to the generated pattern; capturing, by the image capture device, information about the illuminated subpixels; analyzing, by the computing system, the captured information about the illuminated subpixels; and calculating correction factors for the illuminated pixels and corresponding subpixels.
2. The method of claim 1, further comprising using the correction factors to calibrate the visual display.
3. The method of claim 2 wherein using the correction factors to calibrate the visual display comprises uploading the correction factors to firmware or software controlling the display.
4. The method of claim 2 wherein using the correction factors to calibrate the visual display comprises applying the correction factors to process an image to be shown on the display.
5. The method of claim 4, further comprising applying the correction factors to process substantially every image to be shown on the display.
6. The method of claim 1 wherein the subpixels are light-emitting diodes.
7. The method of claim 1 wherein the subpixels are organic light-emitting diodes.
8. The method of claim 1 wherein identifying a fraction of the number of pixels and corresponding subpixels of the display comprises receiving input specifying the fraction.
9. The method of claim 1 wherein identifying a fraction of the number of pixels and corresponding subpixels of the display comprises:
- determining characteristics of the display and of measurement equipment for capturing information about the illuminated subpixels; and
- calculating the fraction based on the determined characteristics.
10. The method of claim 1 wherein the fraction is ¼ or smaller.
11. The method of claim 1 wherein a pattern comprises a regular grid of nonadjacent illuminated pixels.
12. The method of claim 1 wherein each pattern comprises a distinct set of nonadjacent illuminated pixels.
13. The method of claim 1 wherein a pattern comprises illuminated subpixels that are substantially evenly distributed across the display.
14. The method of claim 1, further comprising, for each generated pattern, illuminating subpixels of the display according to the generated pattern at more than one brightness level.
15. The method of claim 14 wherein the brightness levels comprise full brightness, one-half brightness, one-quarter brightness, and one-eighth brightness.
16. The method of claim 1 wherein capturing information about the illuminated subpixels comprises measuring the illuminated subpixels using an imaging colorimeter.
17. The method of claim 1 wherein analyzing the captured information about the illuminated subpixels comprises:
- locating and registering illuminated subpixels of the display; and
- determining a chromaticity value and a luminance value for each registered subpixel.
18. The method of claim 17 wherein calculating correction factors for illuminated pixels and corresponding subpixels comprises:
- converting the chromaticity value and luminance value for each registered subpixel to measured tristimulus values;
- converting a target chromaticity value and a target luminance value for a given color to target tristimulus values; and
- calculating correction factors for each registered subpixel based on a difference between the measured tristimulus values and the target tristimulus values.
19. The method of claim 18 wherein correction factors for each registered subpixel comprise a three-by-three matrix of values that indicate fractional amounts of power to turn on each registered subpixel for a given color and brightness level.
20. The method of claim 1 wherein the illuminating and capturing are performed in a testing station configured to block out or inhibit ambient light.
21. An apparatus for measuring and calibrating a visual display having pixels and corresponding subpixels, the apparatus comprising:
- a pattern generator operably coupled to the display, wherein the pattern generator is configured to illuminate a proper subset of the pixels and corresponding subpixels of the display;
- an imaging device configured to capture information about pixels and corresponding subpixels of the display illuminated by the pattern generator; and
- a computing device operably coupled to the pattern generator and to the imaging device, wherein the computing device comprises a processor and a computer-readable medium having instructions stored thereon that, when executed by the processor— cause the pattern generator to illuminate a proper subset of the pixels and corresponding subpixels of the display; cause the imaging device to capture information about the illuminated proper subset of the pixels and corresponding subpixels of the display; analyze the captured information about the illuminated subpixels; and calculate correction factors for the illuminated subpixels.
22. The apparatus of claim 21 wherein the pattern generator comprises standalone test equipment.
23. The apparatus of claim 21 wherein the pattern generator comprises software in the computing device, such that the computing device is operably coupled to the display and configured to transmit patterns to the display.
24. The apparatus of claim 21, further comprising a testing station configured to receive at least a portion of the display being measured and calibrated and block ambient light to the display during processing.
25-31. (canceled)
Type: Application
Filed: Mar 14, 2013
Publication Date: Sep 18, 2014
Inventor: Ronald F. Rykowski (Bellevue, WA)
Application Number: 13/830,678
International Classification: H04N 17/00 (20060101);