System, method, and computer program product for real time transparency-based compositing

A system, method, and computer program product for compositing rendered image data in near real time. The input pixel streams that constitute the rendered image data can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, plus an alpha value. Compositing is performed by an image combiner implemented in either hardware or software. The image combiner accepts two or more input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values. The compositing process also uses depth information that defines whether a given pixel is in the foreground or in the background relative to another corresponding pixel. The result of the compositing process is a resultant pixel stream based on corresponding pixels of each input pixel stream.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is a continuation-in-part of U.S. patent application Ser. No. 09/888,438, filed Jun. 26, 2001, which claims priority to U.S. Provisional Application No. 60/219,006, filed Jul. 18, 2000. U.S. patent application Ser. Nos. 09/888,438 and 60/219,006 are both incorporated herein by reference in their entireties.

STATEMENT REGARDING FEDERALLY-SPONSORED RESEARCH AND DEVELOPMENT

[0002] Not applicable.

REFERENCE TO MICROFICHE APPENDIX/SEQUENCE LISTING/TABLE/COMPUTER PROGRAM LISTING APPENDIX (submitted on a compact disc and an incorporation-by-reference of the material on the compact disc)

[0003] Not applicable.

BACKGROUND OF THE INVENTION

[0004] 1. Field of the Invention

[0005] The invention described herein relates to computer graphics, and more particularly to compositing of images.

[0006] 2. Background Art

[0007] A common problem in computer graphics is the efficient compositing of two or more rendered images to produce a single image. The compositing process typically involves combining the images pixel by pixel, taking into account the respective color coordinates of each pixel. The process can also take into account opacity, intensity, and the relative distances of the images from a viewer.

[0008] One way in which compositing has been accomplished in the past is through software-based readback from frame buffers. In such arrangements, two or more graphics processors each send frames of rendered graphics data to their respective frame buffers. The contents of each frame buffer are then read back into a compositor module. The compositor module can be a compositing frame buffer or a graphics host. At the compositor module, software-based compositing is performed on corresponding pixels, one pixel from each frame buffer. A pixel from one frame buffer is composited with the corresponding pixel from the second frame buffer. This continues until all appropriate pixels in each frame buffer have been composited. A final output, comprising the resultant composited pixels, is then available from the compositor module.

[0009] In some applications such a system can be adequate. In applications requiring faster compositing, however, such a system may not be fast enough. In video applications, for example, where the final output must be produced in real time, such compositing entails unacceptable delay.

[0010] Hence there is a need for a system and method for fast compositing, where the compositing can take place at near real time rates.

BRIEF SUMMARY OF THE INVENTION

[0011] The invention described herein is a system, method, and computer program product for compositing rendered image data in real time or near real time. The input pixel streams that constitute the rendered image data can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, and can also include an alpha value. Compositing is performed by an image combiner that can be implemented in either hardware or software. The image combiner accepts two or more input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values or intensity values. The compositing process also uses depth information that defines whether a given pixel is in the foreground or in the background relative to another corresponding pixel. At the pixel level, the result of the compositing process is a resultant pixel based on the corresponding pixels of each input pixel stream.

[0012] The foregoing and other features and advantages of the invention will be apparent from the following, more particular description of a preferred embodiment of the invention, as illustrated in the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES

[0013] FIG. 1 is a block diagram illustrating a system for real time or near real time combination of graphics images, according to an embodiment of the invention.

[0014] FIG. 2 is a block diagram illustrating an image combiner, according to an embodiment of the invention.

[0015] FIG. 3 is a flowchart illustrating a method for real time or near real time compositing of graphics images, according to an embodiment of the invention.

[0016] FIG. 4 is a flowchart illustrating the compositing process, according to an embodiment of the invention.

[0017] FIG. 5 is a flowchart illustrating a method for real time or near real time compositing of graphics images wherein inputs represent adjacent volumes of a three-dimensional scene, according to an embodiment of the invention.

[0017A] FIG. 6 is a block diagram of an exemplary computer system in which portions of the invention can be implemented, according to an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

I. Overview

[0018] The invention described herein is a system, method, and computer program product for the real time compositing of two or more input pixel streams. The input pixel streams can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, plus an alpha value that defines the opacity of the pixel. Compositing is performed by an image combiner that can be implemented in either hardware or software. The image combiner accepts the input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values or the intensity values. The compositing process also uses depth information that defines whether a given pixel is in the foreground or in the background relative to another corresponding pixel. At the pixel level, the result of the compositing process is a resultant pixel based on the corresponding pixels of each input pixel stream.

II. System

[0019] An exemplary context for the invention is illustrated in FIG. 1. Two graphics processors are shown, processors 105 and 110. Graphics processor 105 produces a frame of image data; likewise, graphics processor 110 produces a frame of image data. The image data of graphics processor 105 is then sent, as input 121, to image combiner 130. Similarly, the rendered image data produced by graphics processor 110 is sent as input 122 to combiner 130. Combiner 130 also receives depth information 132. Depth information 132 indicates the depth order of the inputs 121 and 122. The inputs can be understood as images to be combined; depth information 132 indicates which is in the foreground and which is in the background, relative to a viewer. Depth information 132 generally does not change in the context of a given frame.

[0020] Combiner 130 performs compositing on corresponding pixels of inputs 121 and 122 based on the color coordinates of the corresponding pixels, the alpha values of the corresponding pixels, and the depth information 132. The compositing process is described in greater detail in section III below. A resultant pixel stream 135 is then produced by combiner 130. In an embodiment of the invention, the pixels of pixel stream 135 that constitute a frame are stored in a frame buffer 140. In this embodiment, the output of system 100 is a frame of image data, output 145.

[0021] Note that while the embodiment of FIG. 1 shows two graphics processors and two associated inputs, other embodiments of the invention could feature more than two inputs. In such a case, the compositor performs compositing on all the inputs, taking into account the relative depth information of all inputs. Also, inputs 121 and 122 of FIG. 1 are received by combiner 130 from respective graphics processors 105 and 110. In an alternative context, however, inputs to combiner 130 can include the outputs of other image combiners.

[0022] Image combiner 130 is illustrated in greater detail in FIG. 2. Combiner 130 includes a depth determination module 205. Depth determination module 205 receives depth information 132. As described above, depth information 132 indicates the depth order of the images represented by the input pixel streams. Depth determination module 205 converts this information to a format usable for blending purposes. Output 210 of depth determination module 205 therefore conveys which input pixel stream is “over” another.

[0023] Output 210 is sent to one or more blending modules, shown in FIG. 2 as blending modules 215 through 225. In an embodiment of the invention, each blending module is associated with a specific color coordinate. In the embodiment of FIG. 2, blending modules 215, 220, and 225 are associated with the red, green, and blue coordinates, respectively. Each blending module blends one color coordinate of corresponding pixels from the respective input pixel streams. Hence, blending module 215 blends red coordinates R1 and R2 from corresponding pixels. The alpha values of the corresponding pixels, α1 and α2, are also input to blending module 215, along with output 210 of depth determination module 205. Inputs to blending modules 220 and 225 are analogous.

[0024] The invention can implement any of several well known blending operations. In a preferred embodiment, blending is performed in depth order, taking into account the opacity of the input pixels. Here, colors are blended linearly according to alpha values. In one embodiment, a blended red coordinate, for example, has the value α1R1 + (1 − α1)α2R2, assuming that input 1 is over input 2. The other color coordinates are blended analogously. Alternatively, if compositing is based on maximum intensity, the value of the resultant red coordinate is the maximum of the red coordinates of the corresponding pixels. The other color coordinates would be blended analogously. Compositing is described more fully in Computer Graphics: Principles and Practice, Foley et al., Addison-Wesley, 1990, pp. 835-843 (incorporated herein by reference in its entirety).
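By way of illustration, the two blending operations described in paragraph [0024] can be sketched as follows. This is a minimal, non-authoritative sketch in Python; the function names and the (R, G, B, alpha) tuple representation are illustrative rather than taken from the patent, and the output alpha of the “over” blend uses the standard coverage formula, which the paragraph above does not specify.

```python
# A sketch of the two blending operations of paragraph [0024].
# Pixels are (R, G, B, alpha) tuples with components in [0.0, 1.0];
# pixel p1 is assumed to be "over" pixel p2, per the depth order.

def blend_over(p1, p2):
    """Depth-ordered alpha blend: each color channel is a1*C1 + (1 - a1)*a2*C2."""
    r1, g1, b1, a1 = p1
    r2, g2, b2, a2 = p2

    def ch(c1, c2):
        return a1 * c1 + (1.0 - a1) * a2 * c2

    # Output alpha: standard "over" coverage (an assumption; the patent
    # text specifies only the color channels).
    a_out = a1 + (1.0 - a1) * a2
    return (ch(r1, r2), ch(g1, g2), ch(b1, b2), a_out)

def blend_max_intensity(p1, p2):
    """Maximum-intensity compositing: take the larger value per channel.

    Alpha is also taken as the maximum here for simplicity; the text
    above specifies only the color coordinates.
    """
    return tuple(max(c1, c2) for c1, c2 in zip(p1, p2))
```

For example, blend_over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)) yields (0.5, 0.0, 0.5, 1.0): a half-opaque red pixel over an opaque blue one.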

[0025] The composited color coordinates from blending modules 215 through 225 are then sent to an output module 230 for formatting as resultant pixel 235.

[0026] While the embodiment of FIG. 2 shows the blending of color coordinates in parallel, alternative embodiments can perform blending of color coordinates in serial using a single blending module.

III. Method

[0027] The method of the invention is illustrated in FIG. 3. The method begins at step 305. In steps 310 and 315, two inputs, shown here as inputs 1 and 2, are received at a compositor. In step 320, depth information is received by the compositor. In step 325, the compositing of the two inputs is performed in depth order, so as to take into account the depth information received in step 320. The process concludes at step 335.

[0028] Compositing step 325 is illustrated in greater detail in FIG. 4, according to an embodiment of the invention. The compositing process begins at step 405. In step 410, the depth order of the input pixel streams is determined for a frame. This determination is made based on the received depth information. In step 415, the first color coordinates from corresponding pixels are blended, based on the depth order and the pixels' alpha values. Similarly, in steps 420 and 425, the second and third color coordinates are blended. In step 430, a resultant pixel is output. In step 435, a determination is made as to whether additional pixels are to be blended for the current frame. If so, the process repeats from step 415 with a new set of corresponding pixels. Otherwise, the process ends at step 440.
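The per-frame loop of FIG. 4 might then be sketched as below, reusing blend_over() from the previous sketch. The frame representation and names are again illustrative, not taken from the patent.

```python
# A sketch of the per-frame loop of FIG. 4. Frames are equal-length
# lists of (R, G, B, alpha) pixels; blend_over() is the helper from the
# previous sketch. Step numbers in the comments refer to FIG. 4.

def composite_frame(frame1, frame2, frame1_is_front):
    """Composite corresponding pixels of two frames in depth order."""
    # Step 410: establish the depth order once for the frame.
    front, back = (frame1, frame2) if frame1_is_front else (frame2, frame1)
    result = []
    for p_front, p_back in zip(front, back):        # step 435: loop over pixels
        result.append(blend_over(p_front, p_back))  # steps 415-430
    return result
```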

[0029] As mentioned above, the process of compositing pixels based on transparency is known to persons of ordinary skill in the art, and is documented in Computer Graphics: Principles and Practice, Foley et al., supra. Moreover, while the embodiment of FIG. 4 shows the blending of color coordinates in parallel, alternative embodiments can perform blending of color coordinates in serial.

[0030] A particular embodiment of the method of the invention is shown in FIG. 5. The method begins with step 505. In steps 510 and 515, inputs 1 and 2 are received, respectively. In this embodiment, however, each input represents a sub-volume of three-dimensional space from a scene that has been rendered. The sub-volumes represented by inputs 1 and 2 can therefore be thought of as a back “slab” and a front “slab,” respectively, where the terms front and back refer to the relative positions of each volume from the current frame perspective of the viewer. In step 520, the depth information is received, as before. In step 525, compositing is performed on the slabs in depth order, and the resultant pixel stream is output. The process concludes at step 535.

[0031] Note that while the embodiments of FIGS. 3 through 5 show two inputs, other embodiments of the invention could feature more than two inputs. In such a case, the compositor performs compositing on all the inputs, taking into account depth information relating to the inputs.
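Where more than two inputs represent adjacent sub-volumes, the method of FIG. 5 extends naturally by folding the slabs together from back to front. The sketch below illustrates one such generalization; it accumulates premultiplied color so the blend can be applied repeatedly without double-weighting by alpha, an implementation choice the patent does not prescribe. For exactly two slabs it reduces to the formula of paragraph [0024].

```python
# A sketch of back-to-front slab compositing per FIG. 5, generalized to
# N inputs. Each element of `slabs` is a frame (a list of (R, G, B, alpha)
# pixels) rendered from one sub-volume; `slabs` is assumed ordered from
# farthest to nearest.

def composite_slabs(slabs):
    """Fold rendered sub-volumes together, nearest slab over the result."""
    # Seed with the farthest slab, premultiplying color by alpha.
    acc = [(a * r, a * g, a * b, a) for (r, g, b, a) in slabs[0]]
    for front in slabs[1:]:  # work toward the viewer
        acc = [(af * rf + (1.0 - af) * rb,
                af * gf + (1.0 - af) * gb,
                af * bf + (1.0 - af) * bb,
                af + (1.0 - af) * ab)
               for (rf, gf, bf, af), (rb, gb, bb, ab) in zip(front, acc)]
    return acc
```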

IV. Computing Environment

[0032] Referring to FIG. 2, combiner 130 may be implemented using hardware, software, or a combination thereof. In particular, combiner 130 may be implemented as a computer program executing on a computer system or other processing system. An example of such a computer system 600 is shown in FIG. 6. The computer system 600 includes one or more processors, such as processor 604. The processor 604 is connected to a communication infrastructure 606 (e.g., a bus or network). Various software embodiments can be described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures.

[0033] Computer system 600 also includes a main memory 608, preferably random access memory (RAM), and may also include a secondary memory 610. The secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage drive 614, representing a magnetic medium drive, an optical disk drive, etc. The removable storage drive 614 reads from and/or writes to a removable storage unit 618 in a well known manner. Removable storage unit 618 represents a magnetic medium, optical disk, etc. As will be appreciated, the removable storage unit 618 includes a computer usable storage medium having stored therein computer software and/or data.

[0034] Secondary memory 610 can also include other similar means for allowing computer programs or input data to be loaded into computer system 600. Such means may include, for example, a removable storage unit 622 and an interface 620. Examples of such means also include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 622 and interfaces 620 which allow software and data to be transferred from the removable storage unit 622 to computer system 600.

[0035] Computer system 600 may also include a communications interface 624. Communications interface 624 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 624 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 624 are in the form of signals 628 which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 624. These signals 628 are provided to communications interface 624 via a communications path (i.e., channel) 626. This channel 626 carries signals 628 into and out of computer system 600, and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels. In an embodiment of the invention, signals 628 can comprise image data (such as inputs 121 and 122) and depth information (such as depth information 132).

[0036] In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as removable storage drive 614, a hard disk installed in hard disk drive 612, and signals 628. These computer program products are means for providing software to computer system 600. The invention is directed in part to such computer program products.

[0037] Computer programs (also called computer control logic) are stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communications interface 624. Such computer programs, when executed, enable the computer system 600 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 604 to perform the features of the present invention. Accordingly, such computer programs represent controllers of the computer system 600.

V. Conclusion

[0038] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in detail can be made therein without departing from the spirit and scope of the invention. Thus the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

1. A method of combining a plurality of input pixel streams to form a resultant pixel stream, comprising the steps of:

a) receiving the plurality of input pixel streams;
b) receiving depth information relating to the relative depth of the input pixel streams; and
c) compositing, in depth order, corresponding pixels from each input pixel stream to form the resultant pixel stream in approximately real time.

2. The method of claim 1, wherein each pixel comprises color coordinates and an alpha value.

3. The method of claim 1, wherein step c) comprises the steps of:

i) blending the corresponding pixels according to the alpha values of the corresponding pixels and the depth information;
ii) outputting the resultant pixel; and
iii) if the input pixel streams contain additional corresponding pixels, repeating steps (i) and (ii) for the additional corresponding pixels.

4. The method of claim 1, wherein the input pixel streams and resultant pixel stream are video streams.

5. The method of claim 1, wherein the input pixel streams each represent renderings of adjacent sub-volumes, such that the resultant pixel stream represents a rendering of the adjacent sub-volumes viewed collectively.

6. The method of claim 1, wherein the depth information can vary for each frame.

7. An image combiner for combining a plurality of input pixel streams to form a single resultant pixel stream, comprising:

a depth determination module for converting depth information to an indication as to depth order of the input pixel streams; and
one or more blending modules that perform a blending operation on color coordinates of corresponding input pixels, on the basis of said depth order and alpha values associated with said corresponding input pixels.

8. The image combiner of claim 7, wherein the input pixel streams comprise image data output from a graphics processor.

9. The image combiner of claim 7, wherein the input pixel streams comprise image data output from another image combiner.

10. A computer program product comprising a computer usable medium having computer readable program code means embodied in said medium for causing a program to execute on a computer that combines a plurality of input pixel streams to form a single resultant pixel stream, said computer readable program code means comprising:

a first computer program code means for causing the computer to receive the plurality of input pixel streams;
a second computer program code means for causing the computer to receive depth information relating to the relative depth of the input pixel streams; and
a third computer program code means for causing the computer to composite, in depth order, corresponding pixels from each input pixel stream to form the resultant pixel stream in approximately real time.

11. The computer program product of claim 10, wherein each pixel comprises color coordinates and an alpha value.

12. The computer program product of claim 10, wherein said third computer program code means comprises:

i) computer program code means for combining the corresponding pixels according to the alpha values of the corresponding pixels and the depth information;
ii) computer program code means for outputting the resultant pixel; and
iii) computer program code means for repeating execution of code means (i) and (ii) for additional corresponding pixels, if the input pixel streams contain additional corresponding pixels.

13. The computer program product of claim 10, wherein the input pixel streams and resultant pixel stream are video streams.

14. The computer program product of claim 10, wherein the input pixel streams each represent renderings of adjacent sub-volumes, such that the resultant pixel stream represents a rendering of the adjacent sub-volumes viewed collectively.

15. The computer program product of claim 10, wherein the depth information can vary for each frame.

Patent History
Publication number: 20020130889
Type: Application
Filed: May 15, 2002
Publication Date: Sep 19, 2002
Inventors: David Blythe (San Carlos, CA), James L. Foran (San Jose, CA)
Application Number: 10145110