Method and apparatus for color manipulation

Recoloring an object may be performed by calculating offsets in hue, saturation and luminance between the source and destination colors. The source image is then adjusted using these offsets to produce the desired color.

Description
FIELD OF THE INVENTION

[0001] The present invention is related to methods and apparatus of recoloring an object in an image.

BACKGROUND

[0002] A common manipulation of images, including both still images and motion video, is to recolor an object. The process of recoloring an object in an image generally involves selecting a portion of the image to be recolored, and then applying adjustments to the selected portion, either manually or by some function. A significant difficulty in recoloring an object is ensuring that the resultant coloration looks natural.

[0003] As used herein, a “color space” is a multidimensional mathematical coordinate system within which colors are defined. A “color” is a single point within a color space. Usually, a color space has three dimensions and a color is therefore defined by a triplet of values. However, color spaces, and hence color representations, sometimes have higher order dimensionality. Moreover, the dimensions of a color space can be non-linear and can be expressed in various units, including angles, magnitudes, etc.

[0004] Some popular color spaces are known as “HSL”, “RGB”, “CMYK” and “YCRCB”. The names of these color spaces make direct reference to the dimensions in which colors are represented. For example, HSL color space has dimensions of hue (H), luminance (L) and saturation (S); while YCRCB color space has dimensions of luminance (Y) and two dimensions of chrominance (CR and CB). The most common color space in which computers represent source images, RGB, has dimensions of red (R), green (G) and blue (B).

[0005] Producers of still pictures and motion video programs use a variety of special effects to produce a final product. A graphics editor performs the task of adding special effects to still pictures and to motion video segments using a graphics workstation.

[0006] Recoloring is a special effect that involves changing the color of certain pixels within one or more video image frames. One application of recoloring involves modifying the color of an object to make it more or less noticeable in the video image frame. Another application of recoloring is to repair a damaged portion of the video image frame. A third application of recoloring is to add color to a video image frame to generate the appearance of one or more new objects in the video image frame.

[0007] Using one technique to achieve a recolor effect, the graphics editor, using a mouse, graphics tablet or similar input device, manually, or with the aid of object tracing software, circumscribes an area of a video image frame that is to receive the recolor effect using the graphics workstation. Then, the graphics editor changes the color of each pixel within the circumscribed area to a specified destination color. The tracing work, whether manual or aided by software, is painstaking and tedious to do accurately.

[0008] Since every pixel within the circumscribed area is changed to the new destination color, the graphics editor must circumscribe the intended recolor area with precision. If the graphics editor does a poor job of circumscribing the area, some pixels that were not intended to be changed may be changed, and some pixels intended to be changed may be left unchanged.

[0009] Producing recolor effects using this first conventional technique has some drawbacks. In particular, setting each pixel of an object to be a destination color generally makes the object look flat and two-dimensional rather than three-dimensional which is often preferred. Additionally, it is not practical for a graphics editor to perform this process on motion video footage that may include a sequence of hundreds or even thousands of individual frames. Furthermore, conventional color correction algorithms performed on computers are complex and require substantial computer power.

[0010] In a related, conventional technique, each pixel in the region circumscribed by the graphics editor is first converted to HSL color space. The technique then replaces the hue value of each selected pixel with a new value equal to the hue of the destination color. Each pixel is then converted back to the color space in which the pixel was originally represented.

[0011] This second technique also gives an unnatural appearing result. The recolored portion of the image does not retain the variations in hue or luminance originally present.

[0012] In yet another conventional technique, each pixel in the region circumscribed is first converted to YCRCB color space. The values of CR and CB of each selected pixel are replaced with new values corresponding to those of the destination color. Again, each pixel is then converted back to the color space in which the pixel was originally represented.

[0013] This third conventional technique also produces an unnatural result. In this technique, replacing a light color with a dark color, or vice versa, exacerbates the unnatural result. For example, if the color being replaced is light, say yellow, and the color being inserted is dark, say red, the result is likely to be an unexpected and undesired pink.

[0014] None of the conventional techniques discussed above produces a natural recoloring result which retains all of the subtlety and character of an original object in a recolored object.

SUMMARY

[0015] Recoloring an object may be performed by calculating offsets in hue, saturation and luminance between the source and destination colors. The source image is then adjusted using these offsets to produce the desired color.

[0016] Accordingly, in one aspect, a method for adjusting the color of a portion of an image involves receiving an indication of a source color and a target color. A difference between the source color and the target color is computed to identify offsets in the values of the color components. Pixels identified in the image as matching the source color are adjusted using these offsets to produce a color adjusted image.

[0017] In another aspect, an apparatus for manipulating color of a portion of an image includes a difference calculator for determining offsets in the components of a source color and a target color. A color adjustment module receives indications of the pixels to be modified and modifies the pixels of the selected portion of the image according to the determined offsets to provide a color adjusted image.

[0018] In accordance with yet other aspects of the invention, there are recolored images and entertainment products incorporating such recolored images. The recolored images and entertainment products of these aspects involve a method including receiving an indication of a source color and a target color. A difference between the source color and the target color is computed to identify offsets in the values of the color components. Pixels identified in the image as matching the source color are adjusted using these offsets to produce a color adjusted image.

BRIEF DESCRIPTION OF THE DRAWING

[0019] In the drawing,

[0020] FIG. 1 is a block diagram showing the general structure of a color manipulation system in one embodiment;

[0021] FIG. 2 is a more detailed block diagram of the system of FIG. 1; and

[0022] FIG. 3 is a flow chart describing the operation of the system of FIG. 2.

DETAILED DESCRIPTION

[0023] The present invention will be more completely understood through the following detailed description which should be read in conjunction with the attached drawing in which similar reference numbers indicate similar structures. All references cited herein are hereby expressly incorporated by reference.

[0024] A computer system for implementing the system of FIGS. 1 and 2 as a computer program typically includes a main unit connected to both an output device which displays information to a user and an input device which receives input from a user. The main unit generally includes a processor connected to a memory system via an interconnection mechanism. The input device and output device also are connected to the processor and memory system via the interconnection mechanism.

[0025] It should be understood that one or more output devices may be connected to the computer system. Example output devices include a cathode ray tube (CRT) display, a liquid crystal display (LCD), printers, communication devices such as a modem, and audio output devices. It should also be understood that one or more input devices may be connected to the computer system. Example input devices include a keyboard, keypad, track ball, mouse, pen and tablet, communication device, and data input devices such as sensors. It should be understood that the invention is not limited to the particular input or output devices used in combination with the computer system or to those described herein.

[0026] The computer system may be a general purpose computer system which is programmable using a computer programming language, such as “C++,” JAVA or another language, such as a scripting language or even assembly language. The computer system may also be specially programmed, special purpose hardware. In a general purpose computer system, the processor is typically a commercially available processor, of which the x86 series and Pentium processors, available from Intel, and similar devices from AMD and Cyrix, the 680X0 series microprocessors available from Motorola, the PowerPC microprocessor from IBM and the Alpha-series processors from Digital Equipment Corporation, are examples. Many other processors are available. Such a microprocessor executes a program called an operating system, of which Windows NT, UNIX, DOS, VMS and OS8 are examples, which controls the execution of other computer programs and provides scheduling, debugging, input/output control, accounting, compilation, storage assignment, data management and memory management, and communication control and related services. The processor and operating system define a computer platform for which application programs in high-level programming languages are written.

[0027] A memory system typically includes a computer readable and writeable nonvolatile recording medium, of which a magnetic disk, a flash memory and tape are examples. The disk may be removable, known as a floppy disk, or permanent, known as a hard drive. A disk has a number of tracks in which signals are stored, typically in binary form, i.e., a form interpreted as a sequence of ones and zeros. Such signals may define an application program to be executed by the microprocessor, or information stored on the disk to be processed by the application program. Typically, in operation, the processor causes data to be read from the nonvolatile recording medium into an integrated circuit memory element, which is typically a volatile, random access memory such as a dynamic random access memory (DRAM) or static random access memory (SRAM). The integrated circuit memory element allows for faster access to the information by the processor than does the disk. The processor generally manipulates the data within the integrated circuit memory and then copies the data to the disk when processing is completed. A variety of mechanisms are known for managing data movement between the disk and the integrated circuit memory element, and the invention is not limited thereto. It should also be understood that the invention is not limited to a particular memory system.

[0028] It should be understood that the invention is not limited to a particular computer platform, particular processor, or particular high-level programming language. Additionally, the computer system may be a multiprocessor computer system or may include multiple computers connected over a computer network. It should be understood that modules 22, 24 and 26 may be separate modules of a computer program, or may be separate computer programs. Such modules may be operable on separate computers.

[0029] Referring now to FIG. 1, a color changing module 10 may be embodied in a computer program or special purpose hardware to manipulate an input image 12 by changing one or more colors in the image 12. Application of the process performed by the color changing module 10 may be limited to a selected area 14 of the source image 12. Thus, the pixels operated upon by the color changing module 10 are bounded both in physical space, i.e., by their position in the input image, and in color space, i.e., by the position of their color values in color space. The selected area 14 of the source image 12 may be defined using several known techniques, including those shown in U.S. patent application Ser. No. 08/832,862 and U.S. patent application Ser. No. 08/821,336 and in Mortensen et al., “Intelligent Scissors for Image Composition”, Computer Graphics Proceedings, Annual Conference Series, 1995, pp. 191-198.

[0030] The color changing module 10 also receives as inputs an indication of a source color 16 and an indication of a target color 18. The indication of the source color and the indication of the target color should be made in, or converted to, the same color space, e.g., HSL space, using values defined with the same precision. As discussed above, the dimensions of HSL color space are hue, saturation, and luminance. Other suitable color spaces and their dimensions include RGB: red, green and blue; CMYK: cyan, magenta, yellow and black; and YCRCB: luminance and chrominance. If the inputs to the color changing module 10 are not all in the same color space and precision, they should first be converted to a convenient color space and least common precision, using known techniques. The color changing module 10 determines the offset between the source color 16 and the target color 18. Pixels in the selected area of the input image 12 whose position in the color space is sufficiently close to that of the source color 16 are considered to match the source color 16 and are adjusted according to the difference between the source and target colors. A “source color cloud” is defined as all those points in the color space which are sufficiently close to the source color to be considered a match for the source color. Techniques for obtaining a source color cloud from a source color are shown in U.S. patent application Ser. No. 08/832,862. In the illustrated embodiment, the match is determined by separately measuring the distance in each dimension of the color space from the pixel color to the source color 16. The color changing module 10 then produces a color manipulated image, as indicated at 20.
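
For illustration only, a per-dimension match against the source color might be tested as in the following Python sketch; the function name matches_source_color and the threshold values are assumptions, and hue wrap-around and other refinements described in the cited application are omitted for brevity.

    def matches_source_color(pixel_color, source_color, thresholds=(0.05, 0.1, 0.1)):
        # A pixel lies within the source color cloud when its distance to
        # the source color is within a threshold in every dimension of the
        # color space.  Thresholds are illustrative and would be tuned by
        # the operator to obtain the desired result.
        return all(abs(p - s) <= t
                   for p, s, t in zip(pixel_color, source_color, thresholds))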

[0031] The system of FIG. 1, as discussed above, may be implemented as a computer program to be used as part of or in conjunction with other image painting programs, video special effects equipment, and non-linear video editors.

[0032] Referring now to FIG. 2, the system of FIG. 1 will be described in more detail.

[0033] The color changing module 10 includes a difference calculator 24 which determines the difference in values of the components defining the source color 16 and the target color 18. Where the source and target colors are represented in HSL space, using hue, saturation and luminance values, as in this exemplary embodiment, the difference calculator 24 outputs hue, saturation and luminance offset values 28. In embodiments receiving inputs represented in other color spaces, the difference calculator 24 should be made to match the input color representation. It should be recognized by the skilled artisan that the references herein to HSL space and its dimensions are given by way of example only, as the described techniques can be altered to use any suitable color space in which a particular embodiment is made to operate. A pixel identifying module 22 determines the pixels in the input image 12, and within the selected area 14 of the input image, which match the source color 16, i.e., whose color is within the source color cloud. The closeness of the match producing the source color cloud is determined by thresholds set by the skilled artisan or operator to obtain a desired result. The output of the pixel identifying module provides the coordinates within the image, and possibly the color, of the identified pixels, as indicated at 30. The operation of the color difference calculator 24 will be described in more detail below.

[0034] A color adjustment module 26 receives the hue, saturation and luminance offsets 28, the specified coordinates 30 and the input image 12. The color adjustment module 26 adjusts the specified pixels in the image 12 to produce the color adjusted image 20. In particular, the offsets in hue and luminance are used to compute adjustment terms which are then used to translate the hue and luminance color space values of the specified pixels to new hue and luminance color space values. Saturation of the specified pixels is changed according to a method which depends on the direction in which the saturation is to be adjusted. For example, if a highly saturated color is to be made less saturated, the saturation value is scaled down using a multiplicative scale factor. If a less saturated color is to become more saturated, the saturation values are translated up using an additive term. This apparently discontinuous handling of saturation avoids generating invalid saturation values when scaling down, while producing natural results when scaling up. Moreover, for small changes in saturation, the above-described adjustments in either direction become similar, producing an effectively continuous function. The adjustment of hue, saturation and luminance color space values is described in more detail below.

[0035] The operation of the system of FIG. 2 will now be described in connection with the flow chart shown in FIG. 3. The color changing module 10 receives, in step 40, the source color and target color. In step 42, the hue, saturation and luminance offsets are computed. The input image to be modified and the selected portion of the image are received in step 44. The order of steps 40 through 44 is immaterial. After the source color and input image are received, the source color is matched, in step 46, to the colors of the pixels in the received input image to identify the pixels to be modified. This matching step may be limited to a specified shape or selected portion of the input image, and is performed as described in U.S. patent application Ser. No. 08/832,862, mentioned above. Given the hue, saturation and luminance offsets computed in step 42, the pixels identified in step 46 are adjusted in step 48. The step 48 of adjusting the hue, saturation and luminance of the identified pixels will be described in more detail below.
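
As a rough, non-limiting sketch of this flow in Python, the steps might be combined as follows; the helper names compute_adjustments, matches_source_color and adjust_pixel are hypothetical, correspond to steps 42, 46 and 48 and are sketched elsewhere in this description, and the in-memory layout of the image and selection is also an assumption.

    def recolor(image, selection, source_color, target_color):
        # Step 42: compute offsets between the source and target colors.
        lumAdjust, uAdjust, vAdjust, cm = compute_adjustments(source_color,
                                                              target_color)
        output = [list(row) for row in image]   # copy of the input image
        for x, y in selection:                  # step 44: selected portion
            pixel = image[y][x]
            # Step 46: keep only pixels matching the source color.
            if matches_source_color(pixel, source_color):
                # Step 48: adjust the pixel using the computed offsets.
                output[y][x] = adjust_pixel(pixel, lumAdjust, uAdjust,
                                            vAdjust, cm)
        return output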

[0036] Many systems in common use, particularly computer-based systems, represent colors in RGB space. However, some computations used in this exemplary embodiment of the invention are more simply expressed in HSL space, while YUV space serves as a useful intermediary. In the following discussion, it is assumed that input colors are represented in RGB space.

[0037] First, the color difference calculator (FIG. 2, 24) computes the offsets (FIG. 2, 28) as follows (see also FIG. 3, step 42). The components of the source color 16 are expressed as sRed, sGreen and sBlue in RGB space. Also in RGB space, the components of the target color 18 are expressed as tRed, tGreen and tBlue. The adjustment terms to be calculated are lumAdjust, uAdjust, vAdjust, cm0, cm1, cm2 and cm3.

[0038] First, the source color 16 is converted to YUV space by a conventional RGB to YUV transform.

sY, sU, sV=rgb_to_yuv(sRed, sGreen, sBlue).  (1)

[0039] Then the target color 18 is also converted to YUV space by the same conventional transform:

tY, tU, tV=rgb_to_yuv(tRed, tGreen, tBlue).  (2)
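
By way of illustration only, the conventional transform referred to above might be implemented as in the following Python sketch. The BT.601-style coefficients, the assumption that RGB components are normalized to the range 0.0 through 1.0, and the function names rgb_to_yuv and yuv_to_rgb (matching the notation of Equations (1) and (2)) are assumptions; any conventional RGB-to-YUV transform may be substituted.

    def rgb_to_yuv(r, g, b):
        # Assumed BT.601-style weighting; RGB components in 0.0-1.0.
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = 0.492 * (b - y)
        v = 0.877 * (r - y)
        return y, u, v

    def yuv_to_rgb(y, u, v):
        # Inverse of the transform above, used later in Equation (16).
        r = y + v / 0.877
        b = y + u / 0.492
        g = (y - 0.299 * r - 0.114 * b) / 0.587
        return r, g, b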

[0040] The hue and saturation of the source color 16 are then computed from sU and sV. If either sU or sV is non-zero, then:

sHue=atan2(sU, sV)

sSat=sqrt(sU*sU+sV*sV),  (3)

[0041] otherwise, if both sU and sV are zero, then:

sHue=0.0

sSat=epsilon,  (4)

[0042] where epsilon is a small number used simply to prevent “divide-by-zero” errors. Next, the hue and saturation of the target color 18 are computed from tU and tV. If either tU or tV is non-zero, then:

tHue=atan2(tU, tV)

tSat=sqrt(tU*tU+tV*tV),  (5)

[0043] otherwise, if both tU and tV are zero, then:

tHue=0.0

tSat=epsilon.  (6)
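
For example, Equations (3) through (6) may be gathered into a single helper as in the following Python sketch; the helper name uv_to_hue_sat and the particular value of epsilon are illustrative assumptions.

    import math

    EPSILON = 1e-6  # assumed small value guarding against divide-by-zero

    def uv_to_hue_sat(u, v):
        # Hue is an angle in the U-V plane; saturation is the distance
        # from the origin of that plane (Equations (3)-(6)).
        if u != 0.0 or v != 0.0:
            hue = math.atan2(u, v)  # argument order as in Equation (3)
            sat = math.sqrt(u * u + v * v)
        else:
            hue = 0.0
            sat = EPSILON
        return hue, sat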

[0044] The hue and luminance offsets are simple differences.

hueAdjust=tHue−sHue

lumAdjust=tY−sY  (7)

[0045] The saturation offset is computed and applied differently depending on whether saturation is to be increased or decreased. If sSat>tSat, then a multiplicative scale factor is computed to scale the saturation down.

satAdjust=tSat/sSat

uAdjust=0  (8)

vAdjust=0

[0046] Otherwise, an additive translation term is computed to move the saturation up.

satAdjust=1.0

uAdjust=sin(tHue)*(tSat−sSat)  (9)

vAdjust=cos(tHue)*(tSat−sSat)

[0047] The hue scalars are simple trigonometric functions of the hue offset.

cosTheta=cos(hueAdjust)

sinTheta=sin(hueAdjust)  (10)

[0048] They are then adjusted for saturation.

cosTheta=cosTheta*satAdjust

sinTheta=sinTheta*satAdjust  (11)

[0049] Finally, the color matrix values are assigned, as follows:

cm0=cosTheta

cm1=sinTheta

cm2=−sinTheta

cm3=cosTheta  (12)
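
Putting Equations (1) through (12) together, the adjustment terms might be computed as in the following Python sketch, which relies on the rgb_to_yuv and uv_to_hue_sat helpers sketched above; the function name compute_adjustments and the tuple packaging of the results are assumptions.

    import math

    def compute_adjustments(source_rgb, target_rgb):
        # Equations (1) and (2): convert both colors to YUV.
        sY, sU, sV = rgb_to_yuv(*source_rgb)
        tY, tU, tV = rgb_to_yuv(*target_rgb)

        # Equations (3)-(6): hue and saturation of each color.
        sHue, sSat = uv_to_hue_sat(sU, sV)
        tHue, tSat = uv_to_hue_sat(tU, tV)

        # Equation (7): hue and luminance offsets are simple differences.
        hueAdjust = tHue - sHue
        lumAdjust = tY - sY

        if sSat > tSat:
            # Equation (8): scale saturation down multiplicatively.
            satAdjust = tSat / sSat
            uAdjust = 0.0
            vAdjust = 0.0
        else:
            # Equation (9): translate saturation up additively.
            satAdjust = 1.0
            uAdjust = math.sin(tHue) * (tSat - sSat)
            vAdjust = math.cos(tHue) * (tSat - sSat)

        # Equations (10)-(12): hue rotation scaled by the saturation factor.
        cosTheta = math.cos(hueAdjust) * satAdjust
        sinTheta = math.sin(hueAdjust) * satAdjust
        cm = (cosTheta, sinTheta, -sinTheta, cosTheta)

        return lumAdjust, uAdjust, vAdjust, cm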

[0050] The input image 12 is then processed, adjusting the pixels whose colors lie within a predetermined source color cloud (see FIG. 3, step 48). For every selected pixel in the input image within the source cloud, convert the pixel from RGB space to YUV space, modify the color coordinate values, convert the color representation back to RGB and assign it to a corresponding pixel in the output image (FIG. 2, 20).

[0051] The components of the input pixel color are inRed, inGreen and inBlue. The adjustment terms, previously computed as described above, are lumAdjust, uAdjust, vAdjust, cm0, cm1, cm2 and cm3. The components of the output pixel are outRed, outGreen and outBlue.

[0052] The RGB representation of the color of the input pixel is first converted to YUV space, conventionally, as described above.

inY, inU, inV=rgb_to_yuv(inRed, inGreen, inBlue)  (13)

[0053] The YUV values of the input pixel are then modified.

outY=inY+lumAdjust

outU=cm0*inU+cm1*inV+uAdjust  (14)

outV=cm2*inU+cm3*inV+vAdjust

[0054] The modification of Equations (14) can produce values outside a valid range for Y, U or V. Therefore, output values are appropriately limited.

outY=ccir_limit(outY)

outU=ccir_limit(outU)  (15)

outV=ccir_limit(outV)

[0055] Finally, the color representation of the output pixel is converted back to RGB, conventionally, in preparation for storing it in the color adjusted image 20.

outRed, outGreen, outBlue=yuv_to_rgb(outY, outU, outV)  (16)
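
By way of example, the per-pixel processing of Equations (13) through (16) might be expressed as in the following Python sketch. The function names adjust_pixel and ccir_limit, and the clamping ranges (which assume RGB components normalized to 0.0 through 1.0, with signed U and V), are assumptions; in a system using CCIR-601 8-bit video levels the limits would differ.

    def ccir_limit(value, low, high):
        # Assumed simple clamp to a valid range; the exact limits depend
        # on the numeric representation in use.
        return max(low, min(high, value))

    def adjust_pixel(in_rgb, lumAdjust, uAdjust, vAdjust, cm):
        cm0, cm1, cm2, cm3 = cm

        # Equation (13): convert the input pixel to YUV.
        inY, inU, inV = rgb_to_yuv(*in_rgb)

        # Equation (14): translate luminance; rotate, scale and translate chroma.
        outY = inY + lumAdjust
        outU = cm0 * inU + cm1 * inV + uAdjust
        outV = cm2 * inU + cm3 * inV + vAdjust

        # Equation (15): limit to assumed valid ranges.
        outY = ccir_limit(outY, 0.0, 1.0)
        outU = ccir_limit(outU, -0.5, 0.5)
        outV = ccir_limit(outV, -0.5, 0.5)

        # Equation (16): convert back to RGB for the color adjusted image.
        return yuv_to_rgb(outY, outU, outV)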

[0056] By performing the color manipulation on a group of pixels whose colors lie within a source color cloud, based on the offsets between the source and destination colors, a more natural appearance may be obtained in the resulting recolored image than is achievable using conventional techniques. As has been discovered by this inventor, conventional techniques, such as those mentioned in the BACKGROUND, effectively collapse a source color cloud having three dimensions, i.e., the group of input pixels, into an output color defined in only zero, one or two dimensions. This results in the characteristically flat or unnatural appearance of the output of conventional recoloring systems. In contrast, techniques embodying the present invention scale and/or translate the entire source cloud into a corresponding three-dimensional destination cloud whose constituent color value points are related in a natural way to corresponding points in the source cloud.

[0057] This technique is applicable in other color spaces by converting color representations into YUV and HSL, as described above, or by redefining the above-described transformations in another color space.

[0058] Once the above described technique has been applied to an image or sequence of images, the image or images are readily incorporated into entertainment products in which images, sequences of images and full motion video are common. For example, images recolored as described above may be incorporated using conventional technology into commercial movies, commercial videos, computerized multimedia products, computer games, and the like.

[0059] Having now described a few embodiments of the invention, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention as defined by the appended claims and equivalents thereto.

Claims

1. A method for adjusting color of a portion of an input image, including:

receiving an indication of a source color and a destination color;
computing the difference between the source color and destination color to identify offsets between sets of values defining the source color and the destination color;
identifying pixels in the portion of the input image as similar to the source color; and
adjusting the identified pixels using the offsets computed, to produce a color adjusted image.

2. The method of claim 1, the step of adjusting further comprising:

computing adjustment values from the offsets, the adjustment values and a set of values defining color for the pixels in the input image being combined to produce color adjusted output pixels.

3. The method of claim 2, the step of adjusting further comprising:

adding an adjustment value to a value from the set defining color for a pixel in the input image.

4. The method of claim 3, the step of adjusting further comprising:

multiplying an adjustment value by a value from the set defining color for a pixel in the input image.

5. An apparatus for manipulating color of a portion of an image, including:

a difference calculator for determining offsets in the components of a source color and a destination color; and
a color adjustment module that receives indications of the pixels to be modified and modifies the pixels of the selected portion of the image according to the determined offsets to provide a color adjusted image.

6. A recolored image produced by a method including:

receiving an indication of a source color and a destination color;
computing the difference between the source color and destination color to identify offsets between sets of values defining the source color and the destination color;
identifying pixels in the portion of the input image as similar to the source color; and
adjusting the identified pixels using the offsets computed, to produce a color adjusted image.

7. The image of claim 6, wherein the step of adjusting further comprises:

computing adjustment values from the offsets, the adjustment values and a set of values defining color for the pixels in the input image being combined to produce color adjusted output pixels.

8. The image of claim 7, the step of adjusting further comprising:

adding an adjustment value to a value from the set defining color for a pixel in the input image.

9. The image of claim 8, the step of adjusting further comprising:

multiplying an adjustment value by a value from the set defining color for a pixel in the input image.

10. An entertainment product including a recolored image produced by a method including:

receiving an indication of a source color and a destination color;
computing the difference between the source color and destination color to identify offsets between sets of values defining the source color and the destination color;
identifying pixels in the portion of the input image as similar to the source color; and
adjusting the identified pixels using the offsets computed, to produce a color adjusted image.

11. The entertainment product of claim 10, wherein the step of adjusting further comprises:

computing adjustment values from the offsets, the adjustment values and a set of values defining color for the pixels in the input image being combined to produce color adjusted output pixels.

12. The entertainment product of claim 11, the step of adjusting further comprising:

adding an adjustment value to a value from the set defining color for a pixel in the input image.

13. The entertainment product of claim 12, the step of adjusting further comprising:

multiplying an adjustment value by a value from the set defining color for a pixel in the input image.
Patent History
Publication number: 20020041709
Type: Application
Filed: Dec 11, 2001
Publication Date: Apr 11, 2002
Inventor: Robert Gonsalves (Wellesly, MA)
Application Number: 10015027
Classifications
Current U.S. Class: Color Correction (382/167)
International Classification: G06K009/00;